Technology Apr 09, 2026 • 18 min read

What the Hell Is an MCP Server? (And Why Won't Anyone Shut Up About Them?)

MCP servers explained for humans. What they are, who can use them, their limitations, how to build one, and five deployment patterns from local to cloud. With real code and hard-won lessons.

Lee Foropoulos


If you've been anywhere near AI Twitter, Hacker News, or a Discord server in the last six months, you've seen the acronym MCP thrown around like confetti at a tech bro's wedding. Everyone's building one. Everyone's configuring one. Nobody's explaining what the hell one actually is.

Until now. Grab a coffee. We're fixing this.

Before MCP: every AI needed custom integrations for every tool. After MCP: one protocol to rule them all.

The Elevator Pitch (No Buzzwords, I Promise)

An MCP server is a translator between an AI and your stuff.

That's it. That's the whole thing.

Your AI (Claude, GPT, whatever) can read text and write text. It cannot, on its own, check your calendar, query your database, search Unsplash for stock photos, or calculate the gematria value of your cat's name. It's a brain in a jar.

An MCP server gives the brain hands.


MCP stands for Model Context Protocol. It's an open standard (created by Anthropic, the Claude people) that defines how an AI model talks to external tools. Think of it like USB for AI: a universal plug that lets any AI connect to any tool, as long as both speak MCP.

OK But Why Should I Care?

Because without MCP, every AI integration looks like this:

User: "What's on my calendar tomorrow?"
AI:   "I don't have access to your calendar. Please check
       Google Calendar and paste the results here."
User: *alt-tabs, copies, pastes*
AI:   "Great, you have 3 meetings."

With MCP:

User: "What's on my calendar tomorrow?"
AI:   *calls google-calendar MCP tool*
AI:   "You have 3 meetings: standup at 9, design review at 11,
       and a 1-on-1 with your manager at 2."

The AI calls the tool directly. No copy-paste. No alt-tab. No "please run this curl command and show me the output." The AI has hands now.

0 copy-paste steps needed when your AI has MCP tools connected.

The Architecture (With an Analogy That Actually Works)

Think of a restaurant.

  • The Customer = You (the human asking questions)
  • The Waiter = The AI model (Claude, GPT, etc.)
  • The Kitchen = Your actual systems (databases, APIs, file systems)
  • The MCP Server = The ordering system that connects the waiter to the kitchen

Without MCP, the waiter walks up to your table and says "I can describe food really well, but I can't actually get you any." With MCP, the waiter takes your order, sends it to the kitchen, and brings back a steak.

The MCP server is the ordering system. The kitchen (your tools) does the real work. The waiter (the AI) just needs to speak the right language.

Here's how it looks technically:

You ←→ AI Model ←→ MCP Protocol ←→ MCP Server ←→ Your Tools
        (brain)      (the standard)    (translator)    (the stuff)

The MCP protocol defines the conversation format. The MCP server implements it. Your tools don't need to know anything about AI; they just do their thing, and the MCP server handles the translation.
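Under the hood, that conversation format is JSON-RPC 2.0. Here's a sketch of the two core exchanges, based on the MCP spec; treat the literal values as illustrative:

```javascript
// What actually travels between the AI client and an MCP server:
// JSON-RPC 2.0 messages.

// 1. The client asks what tools exist.
const listRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list',
};

// 2. The client calls a tool by name with arguments.
const callRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'tools/call',
  params: {
    name: 'roll_dice',
    arguments: { sides: 20 },
  },
};

// 3. The server answers with text content the model can read.
const callResponse = {
  jsonrpc: '2.0',
  id: 2,
  result: {
    content: [{ type: 'text', text: 'Rolled: 17' }],
  },
};

// Over stdio, each message is serialized as one line of JSON.
const wireLine = JSON.stringify(callRequest) + '\n';
console.log(wireLine.trim());
```

Every SDK you'll touch hides these messages behind nicer APIs, but this is the whole wire format: requests with `id`s, responses with matching `id`s, and tool output as text content.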

The Two Flavors: stdio and HTTP

MCP servers come in two flavors, like ice cream if ice cream were a communication protocol:

1. stdio (Standard Input/Output)

The AI spawns the MCP server as a child process on your machine. They talk through stdin/stdout, the same way Unix pipes work.

AI Process ←→ stdin/stdout ←→ MCP Server Process

When to use it: When the AI and the MCP server are on the same machine. This is what Claude Desktop and Claude CLI use. It's simple, fast, and secure (nothing goes over the network).

2. HTTP/SSE (Server-Sent Events)

The MCP server runs as a web service somewhere on your network (or the internet). The AI connects to it over HTTP.

AI Process ←→ HTTPS ←→ MCP Server (running somewhere else)

When to use it: When the tools are on a different machine, or you want multiple AI sessions to share the same MCP server.

Which One Should You Pick?

Start with stdio. It works on your machine with zero networking. Graduate to HTTP/SSE when you need remote access or multi-session sharing. Most people never need to leave stdio.

What's Inside an MCP Server?

An MCP server does three things:

  1. Advertises tools: "Hey AI, here's what I can do: search the web, read files, query a database, calculate gematria values..."
  2. Accepts tool calls: The AI says "run the search tool with query best pizza near me"
  3. Returns results: The MCP server runs the tool and sends back the result as text

That's the entire protocol. Tools in, results out. The AI decides when to call a tool based on your conversation. The MCP server decides how to execute it.

Here's the simplest possible MCP server in JavaScript:

javascript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'my-first-mcp',
  version: '1.0.0',
});

// Register a tool
server.tool(
  'roll_dice',                          // tool name
  'Roll a dice with N sides',           // description
  { sides: z.number().default(6) },     // parameters
  async ({ sides }) => ({               // handler
    content: [{
      type: 'text',
      text: `Rolled: ${Math.floor(Math.random() * sides) + 1}`,
    }],
  })
);

// Connect via stdio
const transport = new StdioServerTransport();
await server.connect(transport);
15 lines of JavaScript to build a complete, working MCP server from scratch.

That's a complete, working MCP server. It rolls dice. The AI can now roll dice. We live in the future.

15 lines of JavaScript. One protocol. Your AI just got hands.

How to Set One Up (Step by Step)

Let's build a real MCP server and connect it to Claude CLI. We'll use the dice roller because I don't want you accidentally dropping your production database on step 3 of a tutorial.

Step 1: Create the Project

bash
mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod

Step 2: Write the Server

Create server.mjs:

javascript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'dice-roller',
  version: '1.0.0',
});

server.tool(
  'roll_dice',
  'Roll one or more dice with any number of sides.',
  {
    sides: z.number().describe('Sides per die').default(6),
    count: z.number().describe('Number of dice').default(1),
  },
  async ({ sides, count }) => {
    const rolls = Array.from(
      { length: count },
      () => Math.floor(Math.random() * sides) + 1
    );
    return {
      content: [{
        type: 'text',
        text: JSON.stringify({ rolls, total: rolls.reduce((a, b) => a + b, 0) }),
      }],
    };
  }
);

await server.connect(new StdioServerTransport());

Step 3: Configure Claude CLI

Add your MCP server to Claude's settings. Edit ~/.claude/settings.json:

json
{
  "mcpServers": {
    "dice-roller": {
      "command": "node",
      "args": ["/absolute/path/to/my-mcp-server/server.mjs"]
    }
  }
}

The Path Must Be Absolute

Relative paths won't work. Use the full path to your server script. Also: command is what runs the server (usually node or python), and args are the arguments. You can add env for API keys.
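For example, the same config with an env block might look like this (MY_API_KEY is a placeholder name, not something the dice roller actually reads):

```json
{
  "mcpServers": {
    "dice-roller": {
      "command": "node",
      "args": ["/absolute/path/to/my-mcp-server/server.mjs"],
      "env": {
        "MY_API_KEY": "your-key-here"
      }
    }
  }
}
```

Anything in env becomes an environment variable in the spawned server process, so your server script reads it via process.env.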

Step 4: Restart Claude CLI

MCP servers load on startup. If you changed the config, you must restart.

bash
claude

Type /mcp in the Claude prompt to check your server loaded. You should see dice-roller listed.

Step 5: Use It

Just talk naturally:

You:    Roll 4d20 for my D&D attack
Claude: *calls roll_dice with sides=20, count=4*
Claude: Rolls: 17, 3, 14, 20 (critical!). Total: 54.

The AI reads your tool descriptions and parameter schemas to figure out when and how to call them. Good descriptions equal good tool usage. Bad descriptions equal the AI rolling dice when you asked about the weather.
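To make that concrete, here's a sketch contrasting bad and good tool metadata. The names and wording are made up for illustration; the point is that this metadata is all the AI ever sees:

```javascript
// Two ways to describe the same tool. The AI only sees this metadata,
// so it's the difference between correct and random tool selection.

// Bad: the model has to guess what this does and what 'q' means.
const badTool = {
  name: 'search',
  description: 'does stuff',
  params: { q: 'a string' },
};

// Good: says what it does, when to use it, and what each parameter means.
const goodTool = {
  name: 'image_search',
  description:
    'Search a stock photo library for images. Use when the user asks for ' +
    'photos, pictures, or visual assets. Returns image URLs with attribution.',
  params: {
    search_query: 'Keywords describing the desired image, e.g. "server room"',
    per_page: 'Number of results to return (1-30, default 10)',
  },
};

console.log(goodTool.description);
```

Same tool, same backend. One version gets called correctly; the other gets ignored or misused.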

From "roll 4d20" to structured tool call to JSON result. The AI handles the translation. You just talk.

The Limitations (Nobody Talks About These)

MCP sounds magical until you hit the walls. And you will hit the walls. Here's what the hype train skips over.

Not Every AI Can Talk to MCP Servers

MCP is not a universal AI standard. It's an Anthropic-led protocol that some others have adopted. Here's who can actually connect:

AI Platform              | MCP Support | Notes
Claude CLI / Claude Code | Full        | Native settings.json config
Claude Desktop           | Full        | Native stdio MCP servers
Claude.ai (Web)          | Limited     | Only Anthropic's own OAuth servers
Claude Code Web Sessions | Limited     | Sandbox blocks custom domains
ChatGPT / GPT            | No          | Uses OpenAI's own function calling
Gemini                   | No          | Google's own extensions system
Cursor / Windsurf        | Growing     | Check their docs

The Web Client Trap

If you're using Claude through the web browser, your custom MCP server is useless. The web sandbox can't reach your local network, can't spawn processes, and only allows connections to Anthropic's pre-approved domains. We learned this the hard way: our Cloudflare tunnel worked perfectly from a browser, but Claude Code web returned 403 host_not_allowed.

stdio Means "Same Machine Only"

The most common MCP transport (stdio) requires the AI and the MCP server to run on the same physical machine. No network involved. This means you can't share one MCP server across multiple sessions, and heavy tools run on your laptop.

No Streaming, No Push

MCP is request-response. The AI asks, the tool answers. Your MCP server can't push notifications ("hey, new email!"), stream real-time data, or maintain long-lived connections. The AI polls when it wants to know.

Security Is Your Problem

MCP servers run with whatever permissions the process has. Give an MCP server database access, and it has access to the entire database. There's no built-in sandboxing or permission scoping. You build those guardrails, or you don't.
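So you wrap risky handlers yourself. Here's a minimal sketch of one such guardrail, a read-only database tool with a table allowlist; the function names and checks are illustrative, not part of any MCP API:

```javascript
// MCP has no built-in permission scoping. If a tool can touch your
// database, you decide what it's allowed to do, inside the handler.

const ALLOWED_TABLES = new Set(['articles', 'tags']);

function guardedQuery(args, runQuery) {
  // Reject anything that isn't a plain SELECT.
  if (!/^\s*select\b/i.test(args.sql)) {
    return {
      content: [{ type: 'text', text: 'Error: this tool is read-only' }],
      isError: true,
    };
  }
  // Reject tables outside the allowlist.
  if (!ALLOWED_TABLES.has(args.table)) {
    return {
      content: [{ type: 'text', text: `Error: table "${args.table}" not allowed` }],
      isError: true,
    };
  }
  // Only now run the actual query.
  return { content: [{ type: 'text', text: runQuery(args.sql) }] };
}

const ok = guardedQuery(
  { sql: 'SELECT * FROM articles', table: 'articles' },
  (sql) => '3 rows'
);
const blocked = guardedQuery(
  { sql: 'DROP TABLE articles', table: 'articles' },
  () => { throw new Error('never reached'); }
);
```

Returning isError: true (part of the MCP tool result format) tells the AI the call failed without crashing the server, so it can explain the refusal instead of retrying blindly.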

0 built-in security guardrails in the MCP protocol. Every permission boundary is yours to implement.
MCP gives the AI hands. What it grabs is entirely up to you.

The Proxy Pattern: When Your Tools Are Somewhere Else

Here's a real problem we solved this week, and it's one you'll hit too.

The setup: We have a tool server called AgentFoundry running on a MacBook at 192.168.50.167:10000. It has 45 tools: gematria calculations, web search, image search, database queries. It's been running for months with a REST API.

The problem: We want Claude CLI on a Windows desktop to use those tools. But Claude CLI uses stdio MCP (local process only). AgentFoundry's MCP server spawns a full local instance (needs its own API keys). And Claude's web client can't reach our LAN at all.

The solution: a thin MCP proxy, 80 lines of JavaScript that pretends to be a local MCP server but secretly forwards everything to the remote REST API.

Claude CLI (Windows desktop, WSL2)
    ↕ stdio (MCP protocol, local)
proxy.mjs (80 lines of JavaScript)
    ↕ HTTP REST (over the LAN)
AgentFoundry (MacBook, port 10000)
The proxy pattern: a thin local bridge that gives your AI hands that reach across the network.

Here's the core of our actual proxy:

javascript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';

const BASE_URL = process.env.AGENT_FOUNDRY_URL;
const TOKEN = process.env.API_BEARER_TOKEN;

// Discover tools from the remote server
const tools = await fetch(`${BASE_URL}/tools`, {
  headers: { Authorization: `Bearer ${TOKEN}` },
}).then(r => r.json());

const server = new McpServer({
  name: 'agent-foundry-proxy', version: '0.1.0'
});

// Register each remote tool as a local MCP tool
for (const tool of tools) {
  server.tool(tool.name, tool.description, async (args) => {
    const result = await fetch(
      `${BASE_URL}/tools/${encodeURIComponent(tool.name)}/run`,
      {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${TOKEN}`,
        },
        body: JSON.stringify({ input: args }),
      }
    ).then(r => r.json());

    return {
      content: [{ type: 'text', text: JSON.stringify(result.data) }],
      isError: !result.ok,
    };
  });
}

await server.connect(new StdioServerTransport());
45 remote tools made available to Claude CLI through an 80-line proxy, no local API keys needed.

When you need this pattern:

  • Your tools run on a server, NAS, or different machine
  • You want one tool server shared across multiple sessions
  • Your existing API predates MCP and you don't want to rewrite it
  • Your AI runs in a sandboxed environment that can't install heavy dependencies
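For the proxy above, a matching Claude CLI config might look like this. The env var names come from the proxy code; the paths and token are placeholders:

```json
{
  "mcpServers": {
    "agent-foundry-proxy": {
      "command": "node",
      "args": ["/absolute/path/to/proxy.mjs"],
      "env": {
        "AGENT_FOUNDRY_URL": "http://192.168.50.167:10000",
        "API_BEARER_TOKEN": "your-token-here"
      }
    }
  }
}
```

The proxy reads both values from its environment, so the secrets live in one config file instead of being baked into the script.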

Five Ways to Deploy an MCP Server

1. Local stdio (The Default)

AI ←→ stdio ←→ MCP Server
        (same machine)

The AI spawns the MCP server as a child process. Both run on your machine.

Pros: Simple, fast, secure, no network. Cons: Same machine only. Best for: Personal tools, local development, file system access.

2. REST Proxy (What We Built)

AI ←→ stdio ←→ Proxy ←→ HTTP ←→ Remote API
      (local)          (network)

A thin proxy runs locally and forwards MCP tool calls to a remote REST API.

Pros: Works with any existing REST API. Remote tools, local interface. Cons: Extra hop, requires the proxy script. Best for: Cross-machine setups, teams sharing one tool server.

3. Docker Container

AI ←→ stdio ←→ docker run ←→ MCP Server (in container)

The MCP server runs inside a Docker container.

Pros: Isolated, reproducible, can include heavy dependencies. Cons: Docker required, slower startup. Best for: Tools needing specific runtimes, sandboxed execution.
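A containerized server plugs into the same settings.json; the -i flag keeps stdin open so the stdio transport works through the container, and --rm cleans up after each session. The image name here is hypothetical:

```json
{
  "mcpServers": {
    "my-containerized-tool": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "my-mcp-image"]
    }
  }
}
```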

4. HTTP/SSE Remote Server

AI ←→ HTTPS/SSE ←→ MCP Server (remote)

The MCP server runs as a web service and speaks MCP over HTTP with Server-Sent Events.

Pros: True remote access, shareable, scalable. Cons: More complex, requires HTTPS, not all AI clients support it. Best for: Team-wide shared tools, cloud deployments.

5. Cloudflare Tunnel (The "Have Your Cake" Pattern)

AI (anywhere) ←→ HTTPS ←→ Cloudflare ←→ Tunnel ←→ Local Server

Your MCP server runs locally, but a Cloudflare Tunnel gives it a public HTTPS URL.

Pros: No port forwarding, free, encrypted, tools stay on your hardware. Cons: Latency, subject to AI client egress restrictions. Best for: Making local tools accessible from multiple locations.

The Tunnel Gotcha

Even with a perfect tunnel, some AI clients block outbound connections to custom domains. Our Cloudflare tunnel to AgentFoundry worked flawlessly from browsers but was rejected by Claude Code's web sandbox. This is why we fell back to the proxy pattern with Claude CLI running locally on the same LAN.

The best deployment pattern is the one that actually works with your AI client's limitations.

Who Should Build an MCP Server?

Build one if:

  • You have an API or tools you want AI to use directly
  • You're tired of copy-pasting between your AI and your terminal
  • You have a team and want to give AI access to shared internal tools
  • You're building an AI product that needs tool integration

Don't bother if:

  • You only use ChatGPT (no MCP support; use OpenAI function calling)
  • Your tools already exist as MCP servers (check community registries first)
  • You just need the AI to read files and run terminal commands (Claude CLI already does this)

Where It's Going

MCP is less than a year old and already has thousands of servers in the wild. Anthropic open-sourced the spec, and the community ran with it. There are MCP servers for Notion, Linear, Figma, Kubernetes, Terraform, and probably your toaster by the time you read this.

1000+ community-built MCP servers available, and the ecosystem is growing weekly.

The bet is that MCP becomes the USB-C of AI: one standard that everything plugs into. Instead of every AI company building custom integrations with every tool company, everyone just speaks MCP.

Is that bet going to pay off? Ask me in a year. But right now, sitting in my terminal with Claude calling 45 custom tools over my local network while I drink coffee, it feels like the future showed up early and forgot to knock.

The end state: your AI, your tools, your network. No copy-paste. No middlemen. Just ask and receive.

Common Pitfalls (Learned the Hard Way)

"Experience is what you get when you didn't get what you wanted."

1. "MCP server not showing up"

Did you restart Claude CLI? MCP servers load on startup, not hot-reload. Is the path absolute? Run the server manually (node server.mjs) to see if it crashes.

2. "Tools are there but AI never uses them"

Your tool descriptions suck. The AI reads them to decide when to use a tool. "does stuff" is not a description. Your parameter names matter too: q tells the AI nothing, search_query tells it everything.

3. "Getting permission denied"

Check file permissions. Check that node is in the PATH that Claude CLI uses (WSL and system Node can differ). We spent an hour chasing "command not found" because npm installed globally to a Windows path that WSL couldn't see.

4. "Works locally, not remotely"

stdio only works locally. For remote access, you need the proxy pattern or HTTP/SSE transport. There is no magic flag to make stdio cross a network boundary.


Lee Foropoulos

Business Development Lead at Lookatmedia, fractional executive, and founder of gotHABITS.
