
A Primer on the Model Context Protocol (MCP)

In this article, we dive deep into what MCP actually is, how it works behind the scenes, and why it’s being called the “USB-C for AI models.” You’ll explore how it simplifies AI integrations, the roles of Hosts, Clients, and Servers, and the security risks developers need to keep in mind.

Saurabh Rai

7 min read

Introduced by Anthropic for its Claude AI in November 2024, MCP, which stands for Model Context Protocol, has taken over the AI world. Everyone from OpenAI and Neon to Cloudflare and Sentry is building their own MCP server. Let’s dive in and get a quick sense of what MCP actually is.

MODEL: a large language model or other AI model.

CONTEXT: the data or relevant information given to the AI model.

PROTOCOL: a set of rules or standards.

So MCP is a set of standards for providing context, or relevant information, to a large language model. Anthropic describes it as a USB-C port for AI models: one universal way to connect and interact.

Why USB-C, though?

Remember the mess of different chargers and cables we used to have? USB-C simplified all that. One port connects almost everything – power, data, monitors, and more. It’s a universal standard.


MCP aims to do the same for AI. Instead of every AI tool needing a unique, custom-built connection to every single data source (imagine trying to connect 10 different AI models to 20 different tools—that’s 200 custom integrations!), MCP creates one standard way for them to talk. This turns the "M x N" problem into a simpler "M + N" setup.
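
To make the arithmetic concrete, here’s the same example as a quick sketch (the counts are just the illustrative numbers from above):

```typescript
// Without a standard: every model needs a bespoke integration to every tool.
const models = 10;
const tools = 20;
console.log(models * tools); // 200 custom integrations (M x N)

// With MCP: each model implements one client, each tool exposes one server.
console.log(models + tools); // 30 implementations in total (M + N)
```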

How does MCP work?


  1. The Host: This is usually your AI application, like an IDE (think Zed, VS Code, or Cursor) or a desktop AI app (like Claude Desktop). The Host is the manager, overseeing everything.

  2. MCP Clients: Inside the Host, you have Clients. Think of them as dedicated communication lines. Each Client connects to one specific MCP Server.

  3. MCP Servers: These are the gateways. A Server sits in front of a specific data source (your company's document drive, a code repository, a database) or a tool (like a web search function). It knows how to talk to that specific system and exposes its capabilities using the MCP rules.

These components talk using JSON-RPC 2.0, which is just a structured way to send messages back and forth using JSON. This keeps communication clear and consistent, no matter who built the Client or Server.
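
To give a feel for what travels over that channel, here’s a sketch of a JSON-RPC 2.0 exchange written as TypeScript object literals. The `tools/call` method name follows the MCP spec; the `get_weather` tool and its arguments are hypothetical.

```typescript
// A Client asking a Server to run a tool.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_weather", arguments: { city: "Amsterdam" } },
};

// The Server's reply, matched to the request by id.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "18°C and cloudy" }] },
};
```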

What kind of "context" can they exchange?

MCP standardizes what gets passed back and forth using three main building blocks, called "Primitives":

  1. Resources: This is structured data that the Server can give the AI. Think of code snippets, parts of a document, or database results – anything that adds factual context. Usually, the application controls when resources are provided.

  2. Prompts: These are pre-made instructions or templates that the Server can offer. Imagine having saved prompts for summarizing text or generating code in a specific style. Often, the user chooses when to use these.

  3. Tools: These are actual actions the AI can ask the Server to perform (usually needing your okay first!). These include querying a database, searching the web, or even sending an email. The AI model typically decides when it needs a tool.

By standardizing these, any AI using MCP can understand how to request data (Resources), use preset instructions (Prompts), or perform actions (Tools) through any compatible MCP Server.
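
As a rough sketch of the Server side, here’s what registering one of each primitive looks like with Anthropic's TypeScript SDK (`@modelcontextprotocol/sdk`). The names (`demo-server`, `readme`, `summarize`, `add`) are made up, and exact signatures may differ between SDK versions.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-server", version: "1.0.0" });

// Resource: structured data the application can pull in as context.
server.resource("readme", "docs://readme", async (uri) => ({
  contents: [{ uri: uri.href, text: "Project readme contents here" }],
}));

// Prompt: a reusable instruction template the user can invoke.
server.prompt("summarize", { text: z.string() }, ({ text }) => ({
  messages: [
    { role: "user", content: { type: "text", text: `Summarize:\n\n${text}` } },
  ],
}));

// Tool: an action the model can request (typically with user approval).
server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: "text", text: String(a + b) }],
}));

// Serve over stdio, the local transport covered below.
await server.connect(new StdioServerTransport());
```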

Local setup vs. cloud access: local and remote servers

Now, where do these MCP Servers run? There are two ways:

  1. Local MCP Servers: These run on your computer, often alongside the Host application (like your IDE). You might set them up yourself, give them an API key, and they listen for requests directly from the Client on your machine. They often use a direct communication pipe (stdio, standard input/output). Great for developers working locally; see the minimal client sketch after this list.

  2. Remote MCP Servers: These live out in the cloud. You typically connect to them online, often using secure login methods, such as OAuth. They communicate using web protocols (SSE/HTTP - Server-Sent Events and standard web requests).

  • Why Remote? No setup is needed for the end-user—just sign in! They’re easily updated, and, crucially, they allow web-based AI agents to access tools and data, not just local desktop apps.

  • Platforms like Vercel and Cloudflare support hosting remote MCP servers, making them easier to deploy and update as needed.
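
Here’s a minimal Host-side sketch of the local case, again using the TypeScript SDK; the server script path is hypothetical, and signatures may vary by SDK version.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the local server as a child process and talk to it over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./my-mcp-server.js"], // hypothetical local server script
});

const client = new Client({ name: "example-host", version: "1.0.0" });
await client.connect(transport);

// Discover what the server exposes.
console.log(await client.listTools());
```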

The communication channels: transport types

How do the messages actually travel between the Client and Server? That's the "transport layer". MCP is flexible here (transport-agnostic), but the two common ways tie into our local vs. remote idea:

  • Standard Input/Output (stdio): Perfect for local servers. Messages are passed directly between the Client and Server processes running on the same machine. Simple, direct, no network needed.

  • Server-Sent Events (SSE) / HTTP: The go-to for remote servers. The Server sends messages to the Client using efficient SSE streaming, and the Client sends messages to the Server using standard HTTP requests. Works across the internet and through firewalls.

So, MCP defines the what (Primitives) and how (JSON-RPC structure), while the transport layer provides the pathway (stdio for local, SSE/HTTP for remote).
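
On the Client side, switching transports is essentially a one-line change. A hypothetical remote connection over SSE might look like this (the endpoint URL is made up, and signatures may vary by SDK version):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Connect to a remote server: SSE for server-to-client messages,
// plain HTTP requests for client-to-server messages.
const transport = new SSEClientTransport(
  new URL("https://mcp.example.com/sse") // hypothetical endpoint
);

const client = new Client({ name: "example-host", version: "1.0.0" });
await client.connect(transport);
```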

Security in MCPs

Model Context Protocol (MCP), while promising for AI agent interoperability, presents significant security challenges that users and developers must address proactively, as it lacks robust security governance by default.

Core Security Concerns:
  • Lack of Inherent Security: MCP itself doesn't enforce strict security measures. The responsibility falls heavily on implementers to build secure layers around it.

  • Tool Poisoning: Malicious actors can inject harmful instructions or commands through tool descriptions or context data provided to the AI. Without proper validation and sanitization, the AI might execute these instructions unknowingly.

  • Giving sensitive data to another tool: This is a significant risk when an AI assistant has access to multiple tools. A malicious or compromised tool could trick the AI into using the permissions of another, more privileged tool to access or exfiltrate sensitive data without proper authorization or user awareness.

Local vs. Remote Server Risks:
  • Local Servers: Pose a high risk as they often run with user-level privileges, granting access to local files, networks, and system resources. They can be challenging to sandbox effectively, resembling the risks associated with running untrusted desktop applications.

  • Remote Servers: Introduce network-based attack vectors (like Man-in-the-Middle), data exfiltration concerns, and challenges related to secure authentication, authorization, and API key management.

Addressing these risks requires deliberate strategies: treat each tool as a distinct security boundary with the minimum necessary permissions, implement robust controls like OAuth (especially for remote servers), and adhere to the principle of least privilege for local servers.
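
As one concrete illustration of least privilege for a local server, a file-reading tool handler could confine itself to a single directory. This is a minimal sketch (the helper and directory are hypothetical), not a complete sandbox:

```typescript
import path from "node:path";
import { readFile } from "node:fs/promises";

// Least privilege: this hypothetical tool may only read inside one directory.
const ALLOWED_ROOT = path.resolve("./sandbox");

async function readSandboxedFile(requested: string): Promise<string> {
  const resolved = path.resolve(ALLOWED_ROOT, requested);
  // Reject any path that escapes the allowed root (e.g. "../../etc/passwd").
  if (!resolved.startsWith(ALLOWED_ROOT + path.sep)) {
    throw new Error(`Path escapes sandbox: ${requested}`);
  }
  return readFile(resolved, "utf8");
}
```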

While MCP is a powerful and robust foundational protocol, it should not be considered secure "out of the box." Developers and users must be vigilant and implement comprehensive security measures to mitigate the inherent risks.

Some use cases of MCPs

  • Smarter Coding Assistants in IDEs: MCP allows AI coding assistants within Integrated Development Environments (IDEs) to connect directly to your specific codebase, documentation, and related tools.

  • Hyper-Contextual Enterprise Chatbots: Instead of a generic chatbot, enterprises can build assistants that securely tap into internal knowledge bases.

  • More Capable AI Assistants: AI applications can use MCP to securely interact with local files, applications, and services on your computer.

Wrapping up

MCP is creating a common language and plug-and-play standard for connecting AI models to the world of data and tools. By simplifying these connections and standardizing how information is exchanged, it's paving the way for more intelligent, more capable, and genuinely helpful AI assistants across desktops, IDEs, and the web.

