

Model Context Protocol (MCP): Architecture & Use Cases

MCP, the Model Context Protocol, is an open standard introduced by Anthropic in November 2024 that defines how AI applications connect to external tools, data sources, and prompts through a Host / Client / Server architecture using JSON-RPC 2.0. MCP is the answer to a simple question: how does an LLM call a tool, read a database, or open a file in a way that any compliant AI application can consume? This guide explains the architecture, the transport layer, and three concrete use cases — and shows where a runtime control layer like Vaikora fits.

What Does MCP Stand For?

MCP stands for Model Context Protocol. It is an open specification published by Anthropic and maintained as an open standard. The reference implementations are written in Python and TypeScript, and the protocol is supported by a growing list of Hosts including Claude Desktop, IDEs, and multi-agent runtimes.

Who Built MCP?

MCP was introduced by Anthropic in November 2024 as an open specification. Although Anthropic authored the original spec, MCP is governed as an open standard with public RFCs and community contributions. Reference implementations and an SDK are available in Python and TypeScript.

How Does MCP Work? The Host / Client / Server Model

MCP defines three roles. Understanding the difference between them is the most important thing in this guide, because every other MCP concept maps onto one of the three.

Host

The Host is the AI application that the user interacts with. Examples include Claude Desktop, an IDE assistant, or a custom agent runtime. The Host is responsible for orchestrating LLM calls, holding the conversation state, and deciding when to invoke tools.

Client

The Client is a connector that lives inside the Host and maintains a 1:1 relationship with one Server. If a Host needs to talk to three MCP Servers, it spins up three Clients. Each Client handles message routing, error handling, and lifecycle management for its Server.

Server

The Server is a lightweight process that exposes capabilities to the Host. Servers expose three things: tools (functions the LLM can call), resources (read-only data the LLM can request, such as files or database rows), and prompts (reusable templates the Host can surface to the user).
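The three capability types map onto the Server's listing methods. As an illustrative sketch (the tool, resource, and prompt names below are hypothetical, not taken from any real Server), the results of tools/list, resources/list, and prompts/list look roughly like this, expressed as Python dicts:

```python
import json

# Shapes follow the MCP spec's listing results; names are illustrative.
tools_list_result = {
    "tools": [
        {
            "name": "query_customers",  # a function the LLM can call
            "description": "Look up a customer record by ID.",
            "inputSchema": {
                "type": "object",
                "properties": {"customer_id": {"type": "string"}},
                "required": ["customer_id"],
            },
        }
    ]
}

resources_list_result = {
    "resources": [
        {
            # Resources are read-only and addressable by URI.
            "uri": "file:///workspace/main.py",
            "name": "Open file",
            "mimeType": "text/x-python",
        }
    ]
}

prompts_list_result = {
    "prompts": [
        {"name": "summarize_ticket", "description": "Summarize a support ticket."}
    ]
}

print(json.dumps(tools_list_result, indent=2))
```

Note that a tool carries a JSON Schema (`inputSchema`) describing its arguments; the Host passes that schema to the LLM so it can emit well-formed calls.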

MCP Sequence Diagram (Tool Call)

The diagram below shows the canonical MCP sequence for a single tool call. The Host receives a user message, the LLM decides to invoke a tool, the Client forwards the call to the Server, and the Server returns the result.

  User           Host           Client          Server
   |              |               |                |
   |--message---->|               |                |
   |              |--LLM call---->|                |
   |              |<--tool_use----|                |
   |              |               |--tools/call--->|
   |              |               |                |--executes
   |              |               |<--tool_result--|
   |              |<--result------|                |
   |<--response---|               |                |
   |              |               |                |

MCP Transport: JSON-RPC 2.0 Over stdio or HTTP+SSE

MCP uses JSON-RPC 2.0 as its message format. Two transports are supported:

  • stdio. The parent process spawns the Server as a subprocess and communicates over standard input and standard output. This is the default for local Servers (file system, shell, local databases) and offers the lowest possible latency.
  • HTTP+SSE. Server-Sent Events over HTTP. This is used for remote Servers that need to be reached over a network, and supports streaming partial results back to the Client.
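Over the stdio transport, framing is simple: each JSON-RPC message is serialized as a single line of JSON, delimited by newlines. A minimal sketch of that framing (the function names here are our own, not part of any SDK):

```python
import json

def encode_message(msg: dict) -> bytes:
    # stdio transport: one JSON-RPC message per line, newline-delimited,
    # written to the Server subprocess's stdin.
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_messages(buf: bytes) -> list:
    # The reader splits the Server's stdout on newlines and parses
    # each non-empty line as a JSON-RPC message.
    return [json.loads(line) for line in buf.split(b"\n") if line.strip()]

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
wire = encode_message(request)
messages = decode_messages(wire)
```

In practice the SDKs handle this for you; the point is that a Server is just a subprocess reading and writing JSON lines, which is why local stdio Servers are so cheap to run.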

Sample MCP Request That Vaikora Intercepts

A typical MCP tools/call request looks like the snippet below. This is the exact payload that flows through Vaikora when it sits as a middleware Client between the Host and the MCP Server.

{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "query_customers",
    "arguments": {
      "customer_id": "cust_8821",
      "fields": ["name", "email", "order_history"]
    }
  }
}

Vaikora evaluates this request against its policy engine before it ever reaches the Server. If the requested arguments contain PII or violate a policy (for example, a tool not on the allow-list), Vaikora returns a JSON-RPC error to the Client and the Server is never invoked.
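To make that concrete, here is a deliberately simplified sketch of the pattern: an allow-list check plus one PII pattern, returning a JSON-RPC error when the call is blocked. This is our own illustration of the interception pattern, not Vaikora's actual policy engine, and the allow-list and regex are hypothetical:

```python
import json
import re
from typing import Optional

ALLOWED_TOOLS = {"query_customers"}              # hypothetical allow-list
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # one PII pattern, for illustration

def evaluate(request: dict) -> Optional[dict]:
    """Return a JSON-RPC error response if the call is blocked, else None."""
    name = request["params"]["name"]
    if name not in ALLOWED_TOOLS:
        return _error(request["id"], f"tool '{name}' is not on the allow-list")
    args = json.dumps(request["params"].get("arguments", {}))
    if SSN_PATTERN.search(args):
        return _error(request["id"], "arguments contain a suspected SSN")
    return None  # allowed: forward to the real MCP Server

def _error(req_id, message: str) -> dict:
    # JSON-RPC 2.0 error object; -32000 falls in the
    # implementation-defined server-error range.
    return {"jsonrpc": "2.0", "id": req_id,
            "error": {"code": -32000, "message": message}}
```

Because the error response is itself valid JSON-RPC, the Host needs no special handling: a blocked call looks like any other failed tool call.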

Three Real MCP Use Cases

MCP shines anywhere an LLM needs to reach beyond the model. Three patterns cover the majority of production deployments.

1. Live Context (Resources)

Pattern: The LLM reads structured, up-to-date data from a Server-exposed resource. Example: an IDE assistant reads the contents of the open file from a file-system MCP Server, or a customer-support agent reads the latest ticket from a Zendesk MCP Server. Resources are read-only and addressable by URI, which makes them easy to cache and audit.
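A resource read is a plain request/response pair. The sketch below shows the approximate shape of a resources/read exchange per the MCP spec; the URI and file contents are invented for illustration:

```python
# Client -> Server: request a resource by URI.
read_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/read",
    "params": {"uri": "file:///workspace/main.py"},
}

# Server -> Client: the resource contents, echoing the URI.
read_result = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "contents": [
            {
                "uri": "file:///workspace/main.py",
                "mimeType": "text/x-python",
                "text": "print('hello')",
            }
        ]
    },
}
```

The URI in the result matches the URI in the request, which is what makes resources easy to cache and audit: the same address always names the same source.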

2. Tool Calling

Pattern: The LLM invokes a function exposed by a Server. Example: a coding agent runs a test suite via a shell MCP Server, or a sales agent creates an opportunity via a Salesforce MCP Server. Tools are the highest-risk MCP capability because they execute real-world side effects — they are also the primary surface a runtime control layer like Vaikora must enforce.

3. External Data Aggregation

Pattern: The Host federates context from multiple Servers into one conversation. Example: a research assistant pulls documents from a SharePoint MCP Server, customer records from a CRM MCP Server, and metrics from a data-warehouse MCP Server, then synthesizes them in a single answer. The Host is responsible for orchestration; each Server stays focused on one source of truth.

MCP at a Glance

Full name: Model Context Protocol
Author: Anthropic (open specification)
Released: November 2024
Roles: Host, Client, Server
Message format: JSON-RPC 2.0
Transports: stdio, HTTP+SSE
Capabilities exposed by Server: Tools, Resources, Prompts
Reference SDKs: Python, TypeScript
Best for: LLM tool calls, IDE integrations, RAG over external sources

A Quick Security Caveat

MCP standardizes how tools are called. It does not standardize which tools are safe to call, which arguments are safe to pass, or which results are safe to return to the model. Three risks are worth flagging now (a deeper threat model is covered in our companion post on MCP security):

  • Untrusted tools. An MCP Server is just a process that speaks JSON-RPC. A malicious or compromised Server can return crafted output that hijacks the agent.
  • Prompt injection in tool results. Tool outputs are appended to the LLM context. An attacker who controls a tool’s output (for example, the contents of a fetched web page) can inject instructions.
  • PII egress. Tool arguments and resource queries can carry SSNs, credit cards, or PHI directly to a remote Server. Once that payload leaves the boundary, it is gone.

These risks are not unique to MCP — they apply to any tool-calling agent — but MCP makes them sharper because the protocol is now standardized and easy to integrate.

Where Vaikora Fits in an MCP Stack

Vaikora interoperates with MCP servers as a middleware Client. The Host treats Vaikora as just another Client; Vaikora forwards the call to the real MCP Server only after the request has been evaluated by the deterministic policy engine and 7-factor probabilistic risk scoring. The same policy engine runs on the response: if the tool result contains PII or matches a prompt-injection pattern, Vaikora can redact, block, or require approval before the LLM sees it.

Performance characteristics are in line with the rest of the platform: P50 ≈ 8ms, P95 ≈ 22ms, P99 < 50ms; the block path adds 18ms; throughput exceeds 10,000 actions per second. Audit logging is SHA-256 hash-chained and supports a metadata-only mode (content: false) so prompt and tool-result content never enters audit storage.
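Hash-chaining is what makes the audit log tamper-evident: each entry commits to the hash of the entry before it. The sketch below illustrates the general technique with metadata-only entries; it is our own minimal example, not Vaikora's actual log format:

```python
import hashlib
import json

def append_entry(chain: list, metadata: dict) -> list:
    # Each entry stores the previous entry's hash, so rewriting any
    # historical entry breaks every subsequent link in the chain.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "meta": metadata}, sort_keys=True)
    entry = {
        "prev": prev_hash,
        "meta": metadata,  # metadata only: no prompt or tool-result content
        "hash": hashlib.sha256(body.encode("utf-8")).hexdigest(),
    }
    return chain + [entry]

def verify(chain: list) -> bool:
    # Walk the chain from genesis, recomputing every hash.
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "meta": entry["meta"]}, sort_keys=True)
        expected = hashlib.sha256(body.encode("utf-8")).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = append_entry([], {"tool": "query_customers", "decision": "allow"})
chain = append_entry(chain, {"tool": "send_email", "decision": "block"})
```

Verification needs only the log itself: recompute each hash from genesis and compare. Any edit to an earlier entry changes its hash and invalidates everything after it.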

Next Steps

If you are integrating MCP today, the next questions are usually “how do I secure tool calls?” and “how do I keep PII out of tool arguments?” The Vaikora MCP Security guide threat-models the MCP attack surface, lists five concrete threats with mitigations, and shows how to deploy a single inline Vaikora layer that applies one policy engine and one audit log to every tool call.

Your AI Agents Need a Control Layer

See how Vaikora intercepts, evaluates, and enforces policy on every AI agent action — in real time, before execution.

Frequently Asked Questions

Is MCP open source?

Yes. MCP is published as an open specification by Anthropic, with reference implementations released under permissive licenses. The Python and TypeScript SDKs are available on GitHub, and the protocol RFCs are public.

What is the difference between an MCP Host and an MCP Client?

The Host is the AI application the user interacts with — Claude Desktop, an IDE, or a custom agent runtime. The Client is a thin connector that lives inside the Host and manages the connection to exactly one Server. A Host with three MCP Servers configured runs three Clients.

What transports does MCP support?

MCP currently supports two transports: stdio for local Servers (parent process spawns the Server, communicates over standard input and output) and HTTP+SSE for remote Servers (Server-Sent Events over HTTPS). Both transports carry JSON-RPC 2.0 messages.

What can an MCP Server expose?

An MCP Server can expose three types of capabilities: tools (functions the LLM can call), resources (read-only data the Host can request, addressable by URI), and prompts (reusable prompt templates the Host can offer to the user). A given Server may expose any combination of the three.

Can MCP and A2A be used together?

Yes. MCP and A2A solve different problems. MCP standardizes the LLM-to-tool relationship inside a single agent. A2A standardizes the agent-to-agent relationship between separate agents. Most production multi-agent stacks use MCP for tool access and A2A for delegation.

How do I secure MCP tool calls?

MCP itself does not define a security model beyond the transport. Production stacks add an inline enforcement layer between the Host and the Server. Vaikora is purpose-built for this: it intercepts every MCP tool call and tool result, evaluates them against a deterministic policy engine with 7-factor probabilistic risk scoring, applies PII redaction and prompt-injection detection, and writes a tamper-evident SHA-256 hash-chained audit log. Most deployments are operational within 48 hours and require no core application rewrite.