
MCP (Model Context Protocol)

Published on November 10, 2025


Large language models (LLMs) like Claude are powerful, but by default they don’t have access to your calendar, your database, your local files, or your tools’ APIs. For an AI assistant to be truly useful in your workflow, it needs to read and act on your data and services. MCP (Model Context Protocol) is the open standard Anthropic launched to connect AI applications to external data sources, tools, and workflows in a uniform way. Think of it as a “USB-C port for AI applications”: a standard way for Claude (and other apps that adopt MCP) to connect to your systems.

In this article I explain what MCP is, what it’s for, and how it lets the AI use your local data and tools without each integration being ad hoc.

The problem: isolated AIs

An LLM in a web or desktop app usually only has:

  • What you type in the chat.
  • What it “knows” from training (with a cutoff date).
  • Sometimes, files you upload manually.

It doesn’t have ongoing access to:

  • Your filesystem, database, or CMS.
  • Tools like GitHub, Notion, Google Drive, Postgres, or an automated browser.
  • Workflows that combine several of these.

Each vendor (OpenAI, Anthropic, etc.) has added its own integrations (plugins, extensions), but there hasn’t been a common protocol that lets any AI application connect to any data source or tool in a standard way. MCP aims to be that standard.

What MCP is

Model Context Protocol is an open protocol that defines how:

  • An AI application (host), e.g. Claude Desktop or a product that uses MCP, connects to…
  • MCP servers, which expose resources (data) and tools (actions) that the AI can read and invoke.

In practice:

  • Resources: Data the AI can read, such as files, database rows, or CMS pages. The MCP server exposes each one with an identifier and a format (e.g. text or blob).
  • Tools: Actions the AI can run, such as searching the web, creating a GitHub issue, querying Postgres, or running a script. The server describes each tool (name, parameters), and the host invokes it when the model decides to use it.
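To make the two concepts concrete, here is a sketch (plain Python dicts, no SDK) of the kind of descriptors a server advertises. The field names follow the MCP spec’s listing results; the specific resource and the `query_postgres` tool are hypothetical examples.

```python
import json

# A resource descriptor, as a server might return it from a resource listing:
# data the AI can read, identified by a URI plus a format.
resource = {
    "uri": "file:///docs/handbook.md",   # hypothetical example path
    "name": "Team handbook",
    "mimeType": "text/markdown",
}

# A tool descriptor, as returned from a tool listing: an action the AI can
# invoke. Parameters are described with JSON Schema so the model knows how
# to construct a valid call.
tool = {
    "name": "query_postgres",            # hypothetical example tool
    "description": "Run a read-only SQL query against the app database",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

print(json.dumps({"resources": [resource], "tools": [tool]}, indent=2))
```

The JSON Schema in `inputSchema` is what lets the host validate the model’s arguments before forwarding them to the server.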

Communication is typically JSON-RPC 2.0: the host (Claude Desktop, another app) talks to the MCP server (which you or a third party implement) to list resources (`resources/list`), read them (`resources/read`), list tools (`tools/list`), and call them (`tools/call`). So the AI “sees” your data and “uses” your tools without each integration being vendor-specific.
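A single tool call on the wire looks roughly like this. The envelope fields (`jsonrpc`, `id`, `method`, `params`) are standard JSON-RPC 2.0 and `tools/call` is the MCP method name; the `query_postgres` tool and its SQL are hypothetical.

```python
import json

# Host -> server: a JSON-RPC 2.0 request asking the server to run a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_postgres",  # hypothetical tool name
        "arguments": {"sql": "SELECT count(*) FROM posts"},
    },
}

# Server -> host: the tool's result, matched to the request by id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "42"}]},
}

# Both sides serialize messages as JSON before sending them over the transport.
wire = json.dumps(request)
print(wire)
```

The `id` is what lets the host pair each response with the request it sent, so several calls can be in flight at once.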

What MCP is for

  • Connect the AI to your data: An MCP server can expose your wiki, your database, your filesystem, or your CMS. The AI can read that context to answer with up-to-date, verifiable information.
  • Give the AI tools: Another MCP server can expose “search the web”, “create a GitHub issue”, “query Postgres”, “open a URL in Puppeteer”. The AI decides when to call each tool based on the conversation.
  • One protocol for many sources: Instead of each AI app having its own plugin system, MCP lets any compatible client connect to any MCP server. Servers already exist for Google Drive, Slack, GitHub, Postgres, Notion, Puppeteer, etc., and the community keeps adding more.

So MCP moves the AI toward “assistant that uses your data and tools” instead of “isolated chat”.

How it’s used in practice

  • Claude Desktop / Claude.ai: Anthropic supports connecting to MCP servers. You configure the server’s URL or command (e.g. a server that exposes your filesystem or GitHub), and Claude can list and read resources and call tools.
  • Other clients: Editors (Zed), IDEs, and dev tools (Replit, Codeium, Sourcegraph) can implement the “host” side of MCP and connect to the same servers. So the ecosystem of tools the AI can use grows without depending on a single vendor.

If you want the AI to use your local data (e.g. your docs or your DB), you implement or use an MCP server that exposes resources and/or tools; then you configure your client (Claude Desktop or another) to connect to that server.
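For example, Claude Desktop reads its MCP servers from a `claude_desktop_config.json` file; a minimal entry for the community filesystem server might look like this (the directory path is a placeholder you would replace with your own):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/docs"]
    }
  }
}
```

On restart, Claude Desktop launches each configured command as a child process and talks to it over stdio.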

Architecture in short

  • Host (client): The AI application the user runs (Claude Desktop, etc.). It connects to one or more MCP servers.
  • MCP server: Process that exposes resources (data) and tools (actions). It can be official, community, or your own.
  • Protocol: JSON-RPC 2.0 over stdio or HTTP (with SSE for server-to-client streaming), depending on the implementation. The spec defines methods such as `resources/list`, `resources/read`, `tools/list`, and `tools/call`.
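As a sketch of the server side, here is a toy dispatcher for the core methods in plain Python. A real server would use the official MCP SDK and a stdio or HTTP transport; the in-memory resource and `echo` tool are hypothetical stand-ins.

```python
import json

# Hypothetical in-memory data and tools, for illustration only.
RESOURCES = {"file:///notes.txt": "Remember to ship the release."}
TOOLS = {"echo": lambda args: args["text"]}

def handle(message: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request to the matching MCP-style method."""
    method, params = message["method"], message.get("params", {})
    if method == "resources/list":
        result = {"resources": [{"uri": uri} for uri in RESOURCES]}
    elif method == "resources/read":
        text = RESOURCES[params["uri"]]
        result = {"contents": [{"uri": params["uri"], "text": text}]}
    elif method == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif method == "tools/call":
        out = TOOLS[params["name"]](params.get("arguments", {}))
        result = {"content": [{"type": "text", "text": out}]}
    else:
        return {"jsonrpc": "2.0", "id": message["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": message["id"], "result": result}

# A host would send these over the transport; here we call handle() directly.
reply = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": "echo", "arguments": {"text": "hi"}}})
print(json.dumps(reply))
```

Unknown methods get the standard JSON-RPC “method not found” error (code −32601), so a host can probe a server’s capabilities safely.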

My personal perspective

MCP is an important step toward AIs that don’t live only in a chat: they read your documents, query your databases, and use your tools in a standard way. Being open means any client and any server can interoperate without depending on one vendor.

For developers, it means you can expose your data and your tools via an MCP server and have Claude (and other compatible clients) use them without building ad hoc integrations for each product. For users, it means the AI assistant can work with your calendar, your code, your wiki, or your DB if someone (you or the community) exposes an MCP server for it.

To go deeper, the specification and SDKs are at modelcontextprotocol.io; Anthropic documents the MCP connector on their site. It’s a young standard but already useful for connecting AIs to your local data and workflows.