Agent-to-Agent (A2A) is a coordination protocol for AI systems. One agent discovers another, reads its capability declaration, sends a typed task request, and receives a structured response. The specification is still evolving, but the core is stable: Agent Cards, a discovery endpoint, and TaskRequest/TaskResponse message shapes.
This article describes a minimal implementation on AWS Lambda. No A2A SDK. No service mesh. The entire protocol fits in four Python files and uses a Lambda Function URL as the invocation endpoint. The live demo runs four ops agents: SRE, Cost, Security, and CTO. The CTO agent delegates to the Security agent via A2A.
What A2A actually is
Strip away the specification language and A2A is three things:
First, a discovery endpoint. Every A2A-compliant agent publishes its Agent Card at
/.well-known/agent-registry (or similar). The card describes who the agent is,
what it can do, how to call it, and what rate limits apply.
Second, a typed invocation. Callers send a TaskRequest to the agent's
POST /agents/{id} endpoint. The request includes a natural language task,
a structured context dict, and a session ID for tracing. The agent returns a
TaskResponse with a status, result, and timing metadata.
Third, composability. Because every agent speaks the same protocol, they can call each other. The CTO agent doesn't need to know how the Security agent works internally. It sends a TaskRequest and receives a TaskResponse. The protocol boundary is the contract.
The Agent Card
Each agent has a card that a calling agent can read before invoking. Here is the Security Agent's card from the demo:
{
  "a2a_version": "0.1",
  "agent_id": "security",
  "name": "Security Agent",
  "description": "Runs security posture checks: S3 public access blocks, root account access keys, Lambda environment variable secrets detection.",
  "url": "https://a2a.ticketyboo.dev/agents/security",
  "capabilities": [
    "s3_public_access_check",
    "root_access_key_check",
    "lambda_env_secret_check",
    "security_posture_summary"
  ],
  "model": "deterministic",
  "invocation": {
    "method": "POST",
    "path": "/agents/security",
    "content_type": "application/json"
  },
  "input_schema": {
    "type": "object",
    "properties": {
      "task": {"type": "string"},
      "checks": {
        "type": "array",
        "items": {"type": "string", "enum": ["s3", "iam", "lambda"]}
      }
    },
    "required": ["task"]
  },
  "output_schema": {
    "type": "object",
    "properties": {
      "overall_pass": {"type": "boolean"},
      "checks": {"type": "array"},
      "summary": {"type": "string"}
    }
  },
  "rate_limits": {
    "requests_per_hour": 60,
    "description": "Standard A2A rate limit per calling agent"
  },
  "owner": "ticketyboo-ops",
  "environment": "production"
}
The card gives a calling agent everything it needs: where to call, what to send, what to expect back, and how often it is allowed to call. The CTO agent reads this card before delegating, which means it can validate the task it is about to send against the input schema.
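As a sketch of that pre-flight validation (the helper name is an assumption, and this covers only the required-field and enum checks the Security card actually uses, not full JSON Schema):

```python
def validate_against_card(payload: dict, input_schema: dict) -> list:
    """Return a list of validation errors; an empty list means the payload is valid.

    Minimal check: required fields plus string-array enum constraints --
    enough for the Security card's input_schema, not a general validator.
    """
    errors = []
    for field in input_schema.get("required", []):
        if field not in payload:
            errors.append(f"missing required field: {field}")
    for field, spec in input_schema.get("properties", {}).items():
        if field not in payload:
            continue
        if spec.get("type") == "array":
            allowed = spec.get("items", {}).get("enum")
            if allowed:
                for item in payload[field]:
                    if item not in allowed:
                        errors.append(f"{field}: {item!r} not in {allowed}")
    return errors

# The Security card's input_schema, as published above
schema = {
    "type": "object",
    "properties": {
        "task": {"type": "string"},
        "checks": {"type": "array",
                   "items": {"type": "string", "enum": ["s3", "iam", "lambda"]}},
    },
    "required": ["task"],
}

print(validate_against_card({"task": "Run checks", "checks": ["s3"]}, schema))  # []
```

A production caller would likely use a real JSON Schema library instead, but the point stands: the card makes the contract checkable before any request leaves the caller.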
TaskRequest and TaskResponse
The message shapes are Python dataclasses. They serialise to JSON over HTTP. The protocol does not require protobuf or any binary format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskRequest:
    caller_agent: str   # 'user' or agent_id
    callee_agent: str
    task: str           # natural language task description
    context: dict       # structured parameters
    session_id: str     # UUID for tracing

@dataclass
class TaskResponse:
    session_id: str
    exchange_id: str    # UUID for this specific exchange
    caller_agent: str
    callee_agent: str
    task: str
    status: str         # completed | failed | delegated
    result: dict
    error: str
    duration_ms: int
    delegated_to: Optional[str]
    created_at: str
The delegated_to field is the key to tracing delegation chains. When the CTO
agent delegates to Security, its TaskResponse has delegated_to="security".
The frontend renders this as a badge on the exchange entry.
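Because these are plain dataclasses, the wire format falls out of dataclasses.asdict. A sketch, with the TaskResponse shape redeclared so the snippet stands alone and all field values invented for illustration:

```python
import json
import uuid
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class TaskResponse:
    session_id: str
    exchange_id: str
    caller_agent: str
    callee_agent: str
    task: str
    status: str
    result: dict
    error: str
    duration_ms: int
    delegated_to: Optional[str]
    created_at: str

resp = TaskResponse(
    session_id=str(uuid.uuid4()),
    exchange_id=str(uuid.uuid4()),
    caller_agent="user",
    callee_agent="cto",
    task="Summarise engineering governance status",
    status="completed",
    result={"summary": "All checks passed"},
    error="",
    duration_ms=1840,
    delegated_to="security",   # set because the CTO delegated to Security
    created_at="2026-03-27T12:00:00Z",
)

body = json.dumps(asdict(resp))  # this JSON string is the HTTP response body
```

No schema compiler, no codegen: asdict plus json.dumps is the entire serialisation layer.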
Lambda as A2A infrastructure
Each Lambda invocation is stateless. A2A sessions are stored in DynamoDB. The single-table design holds four record types in the same table:
# Session record
PK = "SESSION#{session_id}"
SK = "META"
GSI1PK = "STATUS#active" # for recent sessions query
# Exchange record
PK = "SESSION#{session_id}"
SK = "EXCHANGE#{exchange_id}"
# Rate limit record (1h TTL)
PK = "RATELIMIT#{ip}"
SK = "REQ#{timestamp}"
# Daily counter
PK = "DAILY#usage"
SK = "2026-03-27" # today's date
The TTL attribute handles cleanup automatically. Session records expire after 30 days. Rate limit records expire after 1 hour. No scheduled cleanup Lambda required.
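The TTL value is just an epoch-seconds attribute on each item; DynamoDB deletes the item some time after that timestamp passes. A sketch of how the records might be built (the attribute name ttl and the helper names are assumptions, not the demo's exact code):

```python
import time

SESSION_TTL_SECONDS = 30 * 24 * 3600   # 30 days for session records
RATE_LIMIT_TTL_SECONDS = 3600          # 1 hour for rate limit records

def session_item(session_id: str) -> dict:
    now = int(time.time())
    return {
        "PK": f"SESSION#{session_id}",
        "SK": "META",
        "GSI1PK": "STATUS#active",          # for the recent-sessions query
        "ttl": now + SESSION_TTL_SECONDS,   # DynamoDB expires the item after this epoch
    }

def rate_limit_item(ip: str) -> dict:
    now = int(time.time())
    return {
        "PK": f"RATELIMIT#{ip}",
        "SK": f"REQ#{now}",
        "ttl": now + RATE_LIMIT_TTL_SECONDS,
    }
```

The table just needs TTL enabled on the ttl attribute; expiry then costs nothing and consumes no write capacity.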
Lambda Function URLs give each agent a stable HTTPS endpoint with no API Gateway required. The Function URL is set in the Agent Card as the invocation URL. CORS is configured on the Function URL resource in Terraform, not in application code.
The delegation path
This is the interaction when a user invokes the CTO agent with include_security: true:
# 1. User sends TaskRequest to CTO agent
POST /agents/cto
{
  "task": "Summarise engineering governance status",
  "caller_agent": "user",
  "context": {"include_security": true}
}

# 2. CTO agent dispatches a nested TaskRequest to Security agent
#    (internal function call in this demo, HTTP call in production)
TaskRequest(
    caller_agent="cto",
    callee_agent="security",
    task="Run full security posture check and return results.",
    session_id=request.session_id,  # same session
)

# 3. Security agent runs checks, writes exchange to DynamoDB, returns TaskResponse
# 4. CTO agent has security_result, builds Haiku prompt, calls LLM
# 5. CTO agent returns TaskResponse with delegated_to="security"

# DynamoDB now has:
#   SESSION#{id} / META
#   SESSION#{id} / EXCHANGE#{security_exchange_id}
#   SESSION#{id} / EXCHANGE#{cto_exchange_id}
The session ID threads through both exchanges. A client calling
GET /sessions/{session_id} gets the full interaction graph: who called whom,
what was sent, what came back, and how long each step took.
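Because the META record and every EXCHANGE# record share the same partition key, one Query retrieves the whole session. A sketch of the grouping step that turns the raw items into that interaction graph (the function name is an assumption; in the handler this would sit behind a table.query on PK = SESSION#{session_id}):

```python
def build_session_view(items: list) -> dict:
    """Group a single-partition Query result into meta plus ordered exchanges.

    items: the raw DynamoDB items for one SESSION#{id} partition --
    one SK == "META" record and any number of SK == "EXCHANGE#..." records.
    """
    meta = None
    exchanges = []
    for item in items:
        if item["SK"] == "META":
            meta = item
        elif item["SK"].startswith("EXCHANGE#"):
            exchanges.append(item)
    # Order exchanges chronologically so the delegation chain reads top-down
    exchanges.sort(key=lambda e: e.get("created_at", ""))
    return {"session": meta, "exchanges": exchanges}
```

For the delegation above, the returned exchanges list would hold the Security exchange followed by the CTO exchange, each with its caller, callee, status, and duration_ms.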
The handler
The Lambda handler routes on HTTP method and path. There is no framework. The routing logic is under 40 lines:
def lambda_handler(event: dict, context: Any) -> dict:
    method = event["requestContext"]["http"]["method"].upper()
    path = event.get("rawPath", "/")
    parts = [p for p in path.strip("/").split("/") if p]

    if method == "OPTIONS":
        return _build_response(200, {})

    # GET /.well-known/agent-registry
    if method == "GET" and parts == [".well-known", "agent-registry"]:
        return handle_agent_registry(event)

    # GET or POST /agents/{id}
    if parts and parts[0] == "agents" and len(parts) == 2:
        agent_id = parts[1]
        if method == "GET":
            return handle_get_agent_card(agent_id)
        elif method == "POST":
            return handle_invoke_agent(event, agent_id)

    # GET /sessions or /sessions/{id}
    if parts and parts[0] == "sessions":
        if len(parts) == 1:
            return handle_list_sessions()
        elif len(parts) == 2:
            return handle_get_session(parts[1])

    return _error_response(404, "not_found", f"Route not found: {method} {path}")
The invocation handler applies checks in order: kill switch, rate limits (daily global, per-IP hourly), input validation, then dispatch. Each check has a specific error code and message. The frontend surfaces these to the user rather than showing raw HTTP status codes.
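A sketch of that check pipeline, with failures short-circuiting in order (the function names, error codes, and the request-dict shape are illustrative, not the demo's exact ones):

```python
from typing import Optional, Tuple

# Each check returns None on success, or (http_status, error_code, message) on failure.

def kill_switch(request: dict) -> Optional[Tuple[int, str, str]]:
    if not request.get("_enabled", True):        # mirrors the SSM enabled flag
        return (503, "disabled", "A2A endpoint is disabled")
    return None

def daily_limit(request: dict) -> Optional[Tuple[int, str, str]]:
    if request.get("_daily_count", 0) >= 50:     # demo-wide daily cap
        return (429, "daily_limit", "Daily exchange limit reached")
    return None

def input_valid(request: dict) -> Optional[Tuple[int, str, str]]:
    if "task" not in request:
        return (400, "invalid_input", "Missing required field: task")
    return None

CHECKS = [kill_switch, daily_limit, input_valid]

def run_checks(request: dict, checks: list) -> Optional[Tuple[int, str, str]]:
    """Apply checks in order; return the first failure, or None to proceed."""
    for check in checks:
        failure = check(request)
        if failure is not None:
            return failure
    return None
```

Because each failure carries a distinct error code, the frontend can map "daily_limit" to a friendly message instead of echoing an HTTP 429.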
Cost on Free Tier
With the demo rate limits (50 exchanges/day), this runs comfortably within AWS Free Tier. The cost ceiling is set by the Anthropic API, not AWS infrastructure:
- DynamoDB: PAY_PER_REQUEST, well within 25 WCU/RCU free allocation
- Lambda: 50 invocations/day is negligible against 1M/month free tier
- Lambda Function URL: no charge for the URL itself
- CloudWatch Logs: 30-day retention, within 5 GB free ingestion
- Anthropic Haiku (CTO agent): ~$0.001 per invocation at 300 output tokens
The SSM kill switch (/ticketyboo/a2a/enabled = false) lets the whole endpoint be
switched off or back on without a code deploy. The daily limit SSM parameter
(/ticketyboo/a2a/daily-limit) can be adjusted independently.
What the demo leaves out
This is a demonstration implementation, not a production one. Three things are simplified:
Authentication. The demo uses NONE authorization on the Function URL. A
production A2A deployment would use mutual TLS or a signed token in the request headers.
The Agent Card should include an authentication scheme field.
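One possible shape for the signed-token option, using stdlib HMAC over the caller identity, a timestamp, and the request body (a sketch of the idea, not part of the demo; header names and the signing scheme are assumptions):

```python
import hashlib
import hmac

def sign_request(secret: bytes, caller_agent: str, timestamp: str, body: str) -> str:
    """Compute the signature a caller would send, e.g. in an X-A2A-Signature header."""
    message = f"{caller_agent}\n{timestamp}\n{body}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_request(secret: bytes, caller_agent: str, timestamp: str,
                   body: str, signature: str) -> bool:
    """Recompute the signature server-side and compare in constant time."""
    expected = sign_request(secret, caller_agent, timestamp, body)
    return hmac.compare_digest(expected, signature)
```

The receiving agent would also reject stale timestamps to prevent replay; key distribution between agents is the part that genuinely needs infrastructure beyond Lambda.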
Async tasks. The demo is synchronous: the caller waits for the full response. The A2A specification includes a streaming and async task model for long-running work. Lambda supports async invocation but the tracing model becomes more complex.
Agent discovery across teams. The registry in this demo is static, defined in
agent_cards.py. A multi-team deployment would want a dynamic registry with
registration and health-check semantics. DynamoDB is sufficient for this at small scale.
Patterns demonstrated
This demo implements three of the 21 agentic design patterns from the Gulli taxonomy:
- Pattern 8: Tool Use and Function Calling. The CTO agent calls the Security agent as a tool. The Agent Card is the tool schema. The TaskRequest/TaskResponse pair is the function call/return.
- Pattern 13: Multi-Agent Coordination. Two specialised agents (CTO and Security) collaborate to produce a result neither could produce alone. The coordination protocol is explicit and observable.
- Pattern 19: Exploration and Discovery. Agents discover each other's capabilities at runtime via the registry endpoint. A caller reads an Agent Card before constructing a TaskRequest, which means it adapts to whatever capabilities are currently declared rather than relying on a hardcoded assumption.