MCP vs A2A — When Your AI Agent Needs to Talk to Another Agent

Apr 12, 2026

When AI systems talk to each other, “MCP vs A2A” isn’t just terminology—it’s an architecture decision that shapes how your agents scale, fail, and evolve. This article walks through how ShopAgent uses both protocols side by side, and how to decide which one belongs where in your own stack.

Written for infrastructure and agent developers building multi-agent systems who need to decide how their agents communicate.

Two protocols, two very different problems. Understanding when to use each one — and why getting it wrong makes your system harder to maintain.

If you have been following the AI agent space, you have seen both MCP (Model Context Protocol) and A2A (Agent-to-Agent) described as ways to connect AI systems. They are often mentioned in the same breath. They are not the same thing.

MCP connects an agent to a tool or data source. A2A connects an agent to another agent. The distinction matters because the two problems require different contracts, different failure modes, and different design decisions.

In ShopAgent, we use both — MCP for the product catalogue and A2A for the recommender and inventory agents. They live in the same codebase and are called from the same orchestrator node, but they do fundamentally different jobs. This post explains why we chose each protocol for each role.

WHAT PROBLEM EACH PROTOCOL SOLVES

MCP answers the question: how does an AI model call a tool in a way that any model can understand and any tool can implement?

The answer is a standardised description layer. An MCP server exposes tools with a schema. The model reads the schema, decides which tool to call, and sends a structured request. The server runs the function and returns the result. The model does not know if the tool is a database query, an API call, or a file read. It just sees a schema and a response.
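To make that description layer concrete, here is a minimal sketch of a tool description of the shape a model sees, plus a toy validator. The field names approximate the MCP tool-description format (`inputSchema` as JSON Schema), but the validator is a simplified illustration, not a real MCP server:

```python
# Hypothetical MCP-style tool description: the model only ever sees
# this schema, never the implementation behind it.
SEARCH_PRODUCTS_TOOL = {
    "name": "search_products",
    "description": "Search the product catalogue by query, category, and price cap.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "category": {"type": "string"},
            "max_price": {"type": "number"},
        },
        "required": ["query"],
    },
}

def validate_call(tool: dict, arguments: dict) -> bool:
    """Minimal check that a structured request matches the tool's schema."""
    schema = tool["inputSchema"]
    required_ok = all(k in arguments for k in schema.get("required", []))
    known_ok = all(k in schema["properties"] for k in arguments)
    return required_ok and known_ok
```

The point of the schema is that it is the whole contract: any model that can read it can call the tool, and any server that can satisfy it can implement the tool.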

A2A answers a different question: how does one AI agent delegate a task to another agent when both agents have their own internal logic, their own state, and potentially their own models?

The answer is a task-based communication protocol over JSON-RPC. An A2A call sends a task to a remote agent, waits for it to complete, and reads the artifact it produces. The caller does not know how the remote agent works internally. It just knows what task it asked for and what format the result comes back in.
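As a sketch, the JSON-RPC envelope for such a task delegation might look like the following. The method name and the shape of `params` here are illustrative, not copied from the A2A specification:

```python
import json
import uuid

def build_a2a_task(skill_id: str, payload: dict) -> dict:
    """Sketch of a JSON-RPC 2.0 envelope for delegating a task to a
    remote agent. The param names are illustrative, not the A2A spec."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # correlates request and response
        "method": "tasks/send",           # hypothetical task-submission method
        "params": {
            "skill": skill_id,            # which advertised skill to invoke
            "input": payload,             # the task payload the agent works on
        },
    }

request = build_a2a_task("stock_check", {"product_ids": ["p-1", "p-2"]})
print(json.dumps(request, indent=2))
```

The caller later reads back an artifact keyed by the same `id`; everything between submission and artifact is the remote agent's business.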

The key difference: MCP is about calling a function. A2A is about delegating work to an independent agent that may do many things internally before returning a result.

MCP vs A2A at a glance:

                MCP                           A2A
Relationship    Caller → Tool                 Agent → Agent (peer-to-peer)
Determinism     Deterministic function calls  Non-deterministic; internal reasoning
Discovery       Tool schemas only             Agent Cards with capability declarations
Failure modes   Call fails or succeeds        Task may partially complete, time out, or degrade gracefully
State           Stateless                     May maintain internal state

MCP: THE AGENT-TO-TOOL CONTRACT

In ShopAgent, the product catalogue is exposed as an MCP server. The orchestrator calls it like this:

raw_products = await mcp_search_products(
    query=prefs.query,
    category=prefs.category,
    max_price=prefs.max_price,
)

Under the hood, mcp_search_products is a tool call to an MCP server running on its own port. The server handles the SQLite query and returns a list of product dicts. The orchestrator never touches the database directly.

This is the right protocol for this job because:

  • The data source is deterministic. Given a query and filters, it returns products. No internal reasoning, no state, no model calls.
  • The schema is well-defined and stable. query, category, max_price are fixed fields.
  • The relationship is tool-caller to tool — the orchestrator is clearly in charge, the MCP server does exactly what it is told.

If the catalogue were swapped for an Elasticsearch cluster or a third-party product API, the MCP server contract would not change. The orchestrator would not know or care.
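To make the tool-side of the contract concrete, here is a hypothetical handler of the kind that could sit behind the MCP server. The table name, columns, and sample data are invented for the sketch; the point is that the logic is a plain, deterministic SQL query:

```python
import sqlite3

def make_catalogue(rows):
    """Build a throwaway in-memory catalogue for the sketch."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id TEXT, name TEXT, category TEXT, price REAL)")
    conn.executemany("INSERT INTO products VALUES (?, ?, ?, ?)", rows)
    return conn

def search_products(conn, query, category=None, max_price=None):
    """What an MCP server might do behind the tool schema: input in, rows out.
    No model calls, no state, no internal reasoning."""
    sql = "SELECT id, name, price FROM products WHERE name LIKE ?"
    args = [f"%{query}%"]
    if category is not None:
        sql += " AND category = ?"
        args.append(category)
    if max_price is not None:
        sql += " AND price <= ?"
        args.append(max_price)
    return [
        {"id": r[0], "name": r[1], "price": r[2]}
        for r in conn.execute(sql, args)
    ]

conn = make_catalogue([
    ("p-1", "trail shoe", "footwear", 89.0),
    ("p-2", "road shoe", "footwear", 129.0),
])
print(search_products(conn, "shoe", max_price=100))
```

Swap the SQLite query for an Elasticsearch client and nothing above this function changes — which is exactly the property the orchestrator relies on.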

A2A: THE AGENT-TO-AGENT CONTRACT

After MCP returns the raw product list, the orchestrator delegates to two A2A agents.

The recommender agent receives the full product list and the user's preferences:

recommendations = await a2a_get_recommendations(
    products=raw_products,
    preferences={"query": prefs.query, "max_price": prefs.max_price},
)

The inventory agent receives a list of product IDs and returns stock levels and delivery estimates:

inventory = await a2a_get_inventory(product_ids=product_ids)

Both of these are JSON-RPC calls to independent containers. The protocol used internally by each agent — whether it runs a model, queries a database, or calls another service — is invisible to the orchestrator.

This is the right protocol for these jobs because:

  • The recommender has its own scoring logic. It is not just a function call — it is an agent that makes decisions about how to rank products based on multiple signals. That internal complexity belongs to the recommender, not the orchestrator.
  • The inventory agent connects to a separate database and a shipping provider. It has its own failure modes (db_degraded, network timeouts). The orchestrator should not be responsible for handling those details.
  • Both agents can be upgraded, swapped, or scaled independently. The orchestrator just needs the contract to stay stable.

The A2A contract is task-based, not function-based. The orchestrator sends a task description and reads an artifact. It does not care how the work was done.

AGENT CARDS AND SERVICE DISCOVERY

One capability A2A has that MCP does not is built-in service discovery through Agent Cards.

Every A2A agent exposes a card at /.well-known/agent-card.json:

{
  "name": "inventory_agent",
  "description": "Checks real-time stock availability and delivery estimates...",
  "url": "http://shop-a2a-inventory:8002/",
  "capabilities": {"streaming": false},
  "skills": [
    {
      "id": "stock_check",
      "name": "Stock Check",
      "description": "Return in_stock status, stock count, estimated delivery...",
      "tags": ["inventory", "stock", "availability", "delivery"]
    }
  ]
}

An orchestrator that has never been configured to talk to this agent can read the card, understand what the agent does, and decide whether to use it. In a multi-agent system where agents are discovered dynamically rather than hardcoded, this is a meaningful capability.
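A minimal sketch of that discovery step, assuming the cards have already been fetched: match agents to a need by the tags they advertise. The recommender card here (name, URL, port) is invented for the example:

```python
def find_agents_for_tag(cards: list[dict], tag: str) -> list[str]:
    """Sketch of dynamic discovery: given a set of Agent Cards, return the
    URLs of agents advertising a skill with the requested tag."""
    matches = []
    for card in cards:
        for skill in card.get("skills", []):
            if tag in skill.get("tags", []):
                matches.append(card["url"])
                break  # one matching skill is enough for this agent
    return matches

cards = [
    {"name": "inventory_agent",
     "url": "http://shop-a2a-inventory:8002/",
     "skills": [{"id": "stock_check", "tags": ["inventory", "stock"]}]},
    {"name": "recommender_agent",                      # hypothetical card
     "url": "http://shop-a2a-recommender:8001/",
     "skills": [{"id": "rank", "tags": ["ranking"]}]},
]
print(find_agents_for_tag(cards, "stock"))
```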

MCP does not have this. An MCP server exposes tool schemas but not a machine-readable description of who it is, what problems it solves, and what its capabilities are.

HOW THEY WORK TOGETHER IN THE SEARCH NODE

The search_products node in the orchestrator uses all three calls in sequence:

async def search_products(state):
    # Step 1: MCP — get raw product data
    raw_products = await mcp_search_products(...)

    # Step 2: A2A — enrich with recommendations
    recommendations = await a2a_get_recommendations(
        products=raw_products, preferences={...}
    )

    # Step 3: A2A — add stock + delivery information
    inventory = await a2a_get_inventory(product_ids=[...])

    # Step 4: Assemble enriched candidates
    candidates = [
        ProductCandidate(
            recommendation_score=recommendations.get(p["id"], {}).get("score", 0),
            estimated_delivery=inventory.get(p["id"], {}).get("estimated_delivery"),
            ...
        )
        for p in raw_products
    ]

MCP gives you the raw data. A2A agents give you the intelligence layered on top of it. Neither knows the other exists. The orchestrator assembles the result.
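Step 4 can be sketched as a plain merge, using dicts in place of ProductCandidate. The key names mirror the snippet above, but the defaults and the final sort are assumptions for the sketch:

```python
def assemble_candidates(raw_products, recommendations, inventory):
    """Fold the two A2A results into the MCP product list. Each product
    picks up a score (default 0) and a delivery estimate (default None)."""
    candidates = []
    for p in raw_products:
        pid = p["id"]
        candidates.append({
            **p,
            "recommendation_score": recommendations.get(pid, {}).get("score", 0),
            "estimated_delivery": inventory.get(pid, {}).get("estimated_delivery"),
        })
    # Preference-ranked order; Python's stable sort means that when every
    # score is 0 (recommender down), the original catalogue order survives.
    candidates.sort(key=lambda c: c["recommendation_score"], reverse=True)
    return candidates
```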

Each A2A call also degrades gracefully:

try:
    recommendations = await a2a_get_recommendations(...)
except Exception:
    # Log warning, proceed with MCP order, set recommender_available=False

If the recommender is down, the search still works — products are just shown in catalogue order instead of preference-ranked order. The orchestrator informs the user that recommendations are temporarily unavailable. The UI keeps working.
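The pattern generalises to a small wrapper. The state key, logger name, and fallback value below are illustrative, not ShopAgent's actual code:

```python
import asyncio
import logging

logger = logging.getLogger("shopagent")

async def call_with_fallback(coro, fallback, flag_name, state):
    """Sketch of the degrade-gracefully pattern: run an A2A call, and on
    any failure log a warning, record the outage in state, and return a
    safe fallback so the rest of the node keeps working."""
    try:
        return await coro
    except Exception:
        logger.warning("%s call failed; degrading gracefully", flag_name)
        state[flag_name] = False
        return fallback

async def demo():
    async def broken_recommender():
        raise ConnectionError("recommender container is down")

    state = {}
    recs = await call_with_fallback(
        broken_recommender(), fallback={},
        flag_name="recommender_available", state=state,
    )
    return recs, state

print(asyncio.run(demo()))
```

The caller downstream only has to check `recommender_available` to decide whether to tell the user that rankings are temporarily unavailable.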

WHEN TO USE WHICH

Use MCP when:

  • You are connecting to a deterministic data source or tool
  • The logic is a function: input in, output out
  • There is no internal agent reasoning, model calls, or stateful processing
  • You want schema-driven tool discovery for model consumption

Use A2A when:

  • The work requires an independent agent to make decisions
  • The remote service has its own failure modes and internal complexity that should not leak to the caller
  • You want service discovery with capability declarations (not just function schemas)
  • The relationship is peer-to-peer rather than caller-to-function

A useful heuristic: if you would be happy calling it a function, use MCP. If you would describe it as delegating work to another agent, use A2A.

WHAT WE'D DO DIFFERENTLY

Async A2A calls — The recommender and inventory A2A calls are currently sequential. We call recommendations first, wait for the result, then call inventory. These are independent operations. Each A2A call takes roughly 200–400 ms. Running them concurrently with asyncio.gather would cut the combined wait from ~600 ms (sequential) to ~400 ms (parallel), shaving roughly 200 ms off the search node — a noticeable improvement on every query.
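A sketch of the change, with `asyncio.sleep` standing in for the real A2A latencies (the 0.3 s delays are made up to show the shape of the speedup):

```python
import asyncio
import time

async def fake_a2a_call(name, delay):
    """Stand-in for a recommender or inventory A2A call."""
    await asyncio.sleep(delay)
    return name

async def sequential():
    # Current shape: wait for recommendations before asking for inventory.
    r = await fake_a2a_call("recommendations", 0.3)
    i = await fake_a2a_call("inventory", 0.3)
    return r, i

async def concurrent():
    # The two calls are independent, so they can run in parallel.
    return await asyncio.gather(
        fake_a2a_call("recommendations", 0.3),
        fake_a2a_call("inventory", 0.3),
    )

start = time.perf_counter()
asyncio.run(sequential())
t_seq = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(concurrent())
t_par = time.perf_counter() - start

print(f"sequential {t_seq:.2f}s, concurrent {t_par:.2f}s")
```

The concurrent version completes in roughly the time of the slowest call rather than the sum of both.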

Cache the Agent Cards — The inventory and recommender Agent Cards are read from disk at startup. In a larger system with dynamic agent discovery, you would want a registry that watches for card changes and refreshes the routing table.

THE TAKEAWAY

MCP and A2A are complementary, not competing. MCP is for tools — deterministic, schema-driven, function-like. A2A is for agents — independent, reasoning, task-based. Using MCP for the product catalogue and A2A for the recommender and inventory agents is not an arbitrary choice. It is the right tool for each job.

The more important point: getting this distinction right at the architecture stage makes the system easier to scale. When a new data source needs to be added, you add an MCP tool. When a new specialised capability needs to be added — fraud scoring, personalised sizing, warranty checks — you add an A2A agent. The orchestrator stays stable because the protocol contracts stay stable.

The ShopAgent demo is live at https://shop-agent.agilecreativeminds.nl. See the demo showcase or follow the demo walkthrough. Built by Agile Creative Minds.