Six containers. A shared SQLite database. A Python orchestrator that cannot start until four protocol services are healthy. Here is how we wired it all together in Docker Compose — and the one mistake that broke inventory checks.
ShopAgent runs as six Docker containers:
- shop-mcp-catalog — the MCP product catalogue server
- shop-a2a-recommender — the A2A recommender agent
- shop-a2a-inventory — the A2A inventory agent
- shop-ucp-merchant — the UCP merchant + AP2 payment service
- shop-orchestrator — the LangGraph orchestrator
- shop-frontend — the Next.js frontend
These are not independent services. The orchestrator depends on all four protocol services being healthy before it can handle a request. The inventory agent needs access to the product database. The MCP catalogue server seeds that database on startup.
Getting the startup order right, the health checks right, and the volume mounts right is the infrastructure work that makes the AI layer possible. This post walks through how we did it.
THE DEPENDENCY CHAIN
The startup order is:
1. shop-mcp-catalog starts, seeds the SQLite database, becomes healthy
2. shop-a2a-inventory, shop-a2a-recommender, shop-ucp-merchant start in parallel, become healthy
3. shop-orchestrator starts after all four protocol services are healthy
4. shop-frontend starts after the orchestrator is healthy
In Docker Compose, this is expressed with depends_on and condition: service_healthy:
shop-orchestrator:
  build: ./backend
  depends_on:
    shop-mcp-catalog:
      condition: service_healthy
    shop-a2a-recommender:
      condition: service_healthy
    shop-a2a-inventory:
      condition: service_healthy
    shop-ucp-merchant:
      condition: service_healthy
  environment:
    MCP_CATALOG_URL: http://shop-mcp-catalog:8010
    A2A_RECOMMENDER_URL: http://shop-a2a-recommender:8001
    A2A_INVENTORY_URL: http://shop-a2a-inventory:8002
    UCP_MERCHANT_URL: http://shop-ucp-merchant:8003
Without condition: service_healthy, Docker Compose would start the orchestrator as soon as the protocol containers have started — not when they are ready to accept requests. The orchestrator's first requests would fail because its dependencies are not yet listening.
HEALTH CHECKS THAT ACTUALLY WORK
Each service exposes a /health endpoint. The Docker health check polls it:
shop-mcp-catalog:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8010/health"]
    interval: 5s
    timeout: 3s
    retries: 5
    start_period: 10s
start_period: 10s is important. During the start period, health checks still run, but failures do not count towards the retry limit. Without it, the first few checks fail while the Python process is still initialising, and Docker can mark the service as unhealthy before it has had a chance to start.
retries: 5 with interval: 5s gives a service 25 seconds after the start period to become healthy. If the MCP server takes 30 seconds to seed the database and start accepting requests, Docker would mark it as unhealthy without a generous enough retry window — and the orchestrator would never start.
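The healthcheck loop is easy to reason about as a tiny retry function. This is a sketch of the semantics only (Docker implements this itself); the probe here is any callable, standing in for the `curl` against /health:

```python
import time


def wait_until_healthy(probe, retries=5, interval=0.01):
    """Miniature model of Docker's healthcheck loop: up to `retries`
    attempts, `interval` seconds apart. Returns True as soon as the
    probe succeeds, False once the retry budget is exhausted."""
    for _ in range(retries):
        if probe():
            return True
        time.sleep(interval)
    return False
```

With retries=5 and interval=5s this gives the 25-second window described above; a service that needs longer must either start faster or get a larger retry budget.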
The health endpoint itself is a fast, cheap check — not a deep test:
@app.get("/health")
async def health():
    return {"status": "ok", "service": "mcp-catalog"}
Deep health checks (database connectivity, external API availability) belong in monitoring, not in Docker health checks. The Docker health check just confirms the process is alive and the HTTP server is responding.
THE SHARED DATABASE VOLUME
The product catalogue is a SQLite database seeded by the MCP server on startup. Both the MCP server (which reads and searches products) and the inventory agent (which reads stock counts) need access to the same file.
The volume is defined at the top level of the Compose file:
volumes:
  catalog-data:
And mounted into both services:
shop-mcp-catalog:
  volumes:
    - catalog-data:/data/catalog
  environment:
    DB_PATH: /data/catalog/products.db

shop-a2a-inventory:
  volumes:
    - catalog-data:/data/catalog
  environment:
    DB_PATH: /data/catalog/products.db
Both containers see the same SQLite file at /data/catalog/products.db. The MCP server writes to it during seed. The inventory agent reads stock counts from it during requests.
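The shared-file pattern can be demonstrated outside Docker with two independent connections to the same SQLite file. This is an illustrative sketch (the schema and SKU are made up); it uses the synchronous sqlite3 module rather than aiosqlite for brevity:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "products.db")

# Connection 1 stands in for the MCP server, which seeds the catalogue
seed = sqlite3.connect(db_path)
seed.execute("CREATE TABLE products (id TEXT PRIMARY KEY, stock INTEGER)")
seed.execute("INSERT INTO products VALUES ('sku-1', 7)")
seed.commit()
seed.close()

# Connection 2 stands in for the inventory agent, which reads stock counts
inv = sqlite3.connect(db_path)
stock = inv.execute(
    "SELECT stock FROM products WHERE id = 'sku-1'"
).fetchone()[0]
inv.close()

print(stock)  # 7 — the second connection sees what the first wrote
```

The Docker volume does the same job as the shared temp directory here: both processes must resolve DB_PATH to the same inode.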
THE VOLUME MOUNT BUG — AND WHY IT MATTERED
For a significant period of development, the inventory agent's volume mount was missing. The Compose configuration had the catalog-data volume on the MCP server but not on the inventory container:
# Missing from shop-a2a-inventory:
volumes:
  - catalog-data:/data/catalog  # <-- this line was absent
The DB_PATH environment variable pointed to /data/catalog/products.db, but the file did not exist at that path in the inventory container. aiosqlite.connect() (like the underlying SQLite library) does not raise an error when the database file is missing — it silently creates an empty database instead.
The inventory agent queried the empty database and found no products. Stock count for every product was 0. The shipping provider's estimate() function, called with stock=0, returned "Currently unavailable — out of stock". The orchestrator received this and set the delivery estimate to "Availability unknown" for every product.
# inventory.py — what happened without the volume mount
# DB_PATH = /data/catalog/products.db
# The file did not exist, so SQLite created an empty database
async with aiosqlite.connect(DB_PATH) as db:
    cursor = await db.execute(STOCK_QUERY)  # stock lookup (simplified)
    rows = await cursor.fetchall()          # returns []
    stock_map = {}                          # empty — no products found

# For each product: stock = 0
# estimate(stock=0) → "Currently unavailable — out of stock"
The fix was one line in docker-compose.yml. The lesson: always verify that shared data is actually shared, not merely assumed to be. A missing volume mount and a missing database file are both silent failures in SQLite.
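The silent-creation behaviour is easy to reproduce outside Docker. A sketch using the synchronous sqlite3 module (aiosqlite wraps the same library and behaves the same way):

```python
import os
import sqlite3
import tempfile

# A path where no database file exists — the analogue of the
# unmounted /data/catalog/products.db inside the inventory container
missing = os.path.join(tempfile.mkdtemp(), "products.db")

db = sqlite3.connect(missing)  # no error, even though the file is absent
tables = db.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
db.close()

print(tables)                   # [] — no tables, so every lookup is empty
print(os.path.exists(missing))  # True — SQLite created an empty file
```

Nothing in this flow raises; the only symptom is that every query comes back empty, which is exactly why the bug survived for so long.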
SERVICE DISCOVERY INSIDE DOCKER
Docker Compose creates a shared network for all services in the same Compose file. Services can reach each other by service name. The orchestrator connects to the MCP server at http://shop-mcp-catalog:8010, not http://localhost:8010.
All inter-service URLs are set via environment variables:
shop-orchestrator:
  environment:
    MCP_CATALOG_URL: http://shop-mcp-catalog:8010
    A2A_RECOMMENDER_URL: http://shop-a2a-recommender:8001
    A2A_INVENTORY_URL: http://shop-a2a-inventory:8002
    UCP_MERCHANT_URL: http://shop-ucp-merchant:8003
    ORCHESTRATOR_URL: http://shop-orchestrator:8000

shop-frontend:
  environment:
    ORCHESTRATOR_URL: http://shop-orchestrator:8000
No hardcoded hostnames in the application code. The code reads from environment variables. This means the same orchestrator image works in local development (where services are shop-orchestrator:8000) and in production (where a Traefik proxy handles external routing, but internal service communication still uses Docker's DNS).
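On the application side this pattern is just environment lookups with in-network defaults. A minimal sketch (the helper name is ours; the variable names and hostnames mirror the Compose file above):

```python
import os


def service_url(var: str, default: str) -> str:
    """Read an inter-service URL from the environment, falling back to
    the Compose-network hostname used in local development."""
    return os.environ.get(var, default)


MCP_CATALOG_URL = service_url("MCP_CATALOG_URL", "http://shop-mcp-catalog:8010")
A2A_INVENTORY_URL = service_url("A2A_INVENTORY_URL", "http://shop-a2a-inventory:8002")
```

Production sets the variables explicitly; development gets the defaults for free, so the same image runs unchanged in both.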
THE PRODUCTION OVERRIDE FILE
Production adds HTTPS via Traefik and ensures all containers restart on failure. These settings do not belong in the base docker-compose.yml — they are environment-specific.
Docker Compose supports override files:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
The production override adds:
services:
  shop-frontend:
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.shop-agent.rule=Host(`shop-agent.agilecreativeminds.nl`)"
      - "traefik.http.routers.shop-agent.entrypoints=websecure"
      - "traefik.http.routers.shop-agent.tls.certresolver=letsencrypt"
  shop-orchestrator:
    restart: always
  shop-mcp-catalog:
    restart: always
  # ... other services
The base docker-compose.yml is the shared config: build context, environment variables, health checks, volumes, port bindings. The override is additive: Traefik labels and restart policies. The base file never has restart: always — that is a production concern, not a development one. Leaving it out locally means you can stop and restart individual containers during development without them auto-restarting.
WHAT WE'D DO DIFFERENTLY
Resource limits — None of the containers have memory or CPU limits. In production, an unresponsive container with no limit can consume all available memory and bring down the entire host. Adding deploy.resources.limits per service with conservative values (512m memory for protocol services, 1g for the orchestrator) would contain failures.
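In Compose terms that would be a small addition per service; the values below are illustrative starting points, not tuned figures:

```yaml
# Hypothetical fragment — limits are illustrative
services:
  shop-orchestrator:
    deploy:
      resources:
        limits:
          memory: 1g
          cpus: "1.0"
  shop-mcp-catalog:
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: "0.5"
```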
Separate networks — All six services share one Docker network. In production, you would want to separate the frontend-facing network (frontend + orchestrator) from the backend service network (orchestrator + protocol services), so the frontend cannot directly reach the inventory agent or the UCP merchant service.
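Sketched in Compose terms (the network names are hypothetical), each service is attached only to the networks it needs:

```yaml
# Hypothetical fragment — network split, not the shipped config
networks:
  edge:      # frontend <-> orchestrator
  internal:  # orchestrator <-> protocol services

services:
  shop-frontend:
    networks: [edge]
  shop-orchestrator:
    networks: [edge, internal]
  shop-a2a-inventory:
    networks: [internal]
  shop-ucp-merchant:
    networks: [internal]
```

With this layout the frontend can resolve only the orchestrator; Docker's DNS will not answer for services on the internal network.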
Healthcheck on the orchestrator — The orchestrator currently depends on its protocol services being healthy, but it does not expose its own health endpoint for the frontend to depend on. Adding one would make the dependency chain complete.
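The addition could mirror the existing checks; this sketch assumes the orchestrator serves /health on port 8000, as its URL above suggests:

```yaml
# Hypothetical fragment — completing the dependency chain
shop-orchestrator:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
    interval: 5s
    timeout: 3s
    retries: 5
    start_period: 10s

shop-frontend:
  depends_on:
    shop-orchestrator:
      condition: service_healthy
```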
THE TAKEAWAY
Multi-service Docker Compose applications require explicit dependency ordering, health checks that have enough patience for slow starts, and carefully specified volume mounts for shared data. Any of these, when missing or wrong, produces silent failures that are difficult to diagnose.
The inventory volume mount bug is a good example: no error was thrown, the application appeared to be running, but every product showed as out of stock. The root cause was invisible until we checked the container's filesystem directly and discovered the database was empty.
The ShopAgent demo is live at https://shop-agent.agilecreativeminds.nl. See the demo showcase or follow the demo walkthrough. Built by Agile Creative Minds.