Open‑A2A: Decentralized Agent‑to‑Agent protocol
Like TCP/IP for AI Agents: let Agents talk directly, across networks and runtimes — not through centralized platforms.
- Zero platform fee: value flows directly between participants.
- Pluggable transport: NATS, Relay, HTTP/WebSocket, DHT, P2P.
- Runtime adapters: OpenClaw, ZeroClaw, MCP, local Agents.
Why Open‑A2A?
Today, most “agent collaboration” is still trapped inside platforms. Open‑A2A moves collaboration down to an open protocol layer.
Today
- Platforms charge 10–30% in fees.
- Platforms own data, ranking, and discovery.
- All collaboration goes through centralized apps and APIs.
Open‑A2A vision
- Zero platform fee; value settles directly between Agents.
- Data sovereignty belongs to individuals; AI serves its owner.
- Agents discover and negotiate directly using open protocols.
How it works
Open‑A2A is a three‑layer mesh: digital cabin, communication, and intent & collaboration.
- L1 Digital Cabin — DID identity, Solid Pod / profile.json for preferences, local Agent runtimes (OpenClaw, ZeroClaw, MCP, Ollama…).
- L2 Communication — transport abstraction (NATS reference implementation, extensible to HTTP/WebSocket/DHT/P2P), discovery (NATS/DHT), clustering & federation.
- L3 Collaboration — Intent protocol (RFC‑001), Offers, Logistics, OrderConfirm and settlement primitives to express “broadcast + collect”, “request + reply”, “delegate + confirm”.
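To make the L3 primitives concrete, here is a minimal sketch of what an Intent envelope for a "broadcast + collect" flow might look like. The field names (`type`, `from`, `body`, `reply_mode`, and the `make_intent` helper) are illustrative assumptions for this example; the normative schema is defined in RFC‑001.

```python
import json
import uuid
from datetime import datetime, timezone

def make_intent(did: str, want: str, max_price: float) -> dict:
    """Build a hypothetical Intent envelope (field names are illustrative)."""
    return {
        "type": "intent",
        "id": str(uuid.uuid4()),
        "from": did,                  # DID of the requesting Agent
        "created": datetime.now(timezone.utc).isoformat(),
        "body": {
            "want": want,             # the demand being broadcast
            "max_price": max_price,
            "reply_mode": "collect",  # gather Offers from many responders
        },
    }

intent = make_intent("did:example:alice", "fresh coffee beans, 1kg", 25.0)
print(json.dumps(intent, indent=2))
```

A Merchant Agent would answer such an envelope with an Offer, and the Consumer would close the loop with an OrderConfirm, per the primitives listed above.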
This repository ships a Python reference SDK (open_a2a/) and a Bridge/adapter (bridge/) for OpenClaw and other runtimes.
Get started
Run a local demo, deploy a full node, or plug your existing Agent runtime into the network.
1. Local demo (A→B→C)
Run the full Consumer → Merchant(s) → Carrier flow on your machine using the Python SDK and a local NATS node.
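The Consumer → Merchant(s) → Carrier flow can be approximated in-process with stdlib asyncio queues standing in for the transport. This is a toy simulation of the "broadcast + collect" pattern, not the SDK's API: the message shapes, queue wiring, and the `merchant`/`demo` names are all assumptions for illustration.

```python
import asyncio

async def merchant(name: str, price: float,
                   inbox: asyncio.Queue, offers: asyncio.Queue):
    """Merchant Agent: wait for an Intent, answer with an Offer."""
    intent = await inbox.get()
    await offers.put({"merchant": name, "item": intent["want"], "price": price})

async def demo() -> dict:
    offers: asyncio.Queue = asyncio.Queue()
    inboxes = [asyncio.Queue() for _ in range(2)]
    tasks = [
        asyncio.create_task(merchant("m1", 24.0, inboxes[0], offers)),
        asyncio.create_task(merchant("m2", 21.5, inboxes[1], offers)),
    ]
    # Consumer: broadcast one Intent to every merchant inbox.
    intent = {"want": "coffee beans 1kg"}
    for q in inboxes:
        await q.put(intent)
    await asyncio.gather(*tasks)
    # Collect the Offers, pick the cheapest, hand the winner to the Carrier.
    collected = [offers.get_nowait() for _ in range(offers.qsize())]
    best = min(collected, key=lambda o: o["price"])
    return {"confirm": best["merchant"], "deliver_via": "carrier-1"}

print(asyncio.run(demo()))
```

In the real demo the same roles talk over the local NATS node instead of in-memory queues, but the broadcast/collect/confirm shape is the same.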
Follow the README quick start.
2. Run a full node
Use deploy/quickstart/docker-compose.full.yml to start NATS + Relay + Solid + Bridge on a server.
Turn it into a public or private entry node.
For long‑running public nodes (operators), see the Node X operator kit.
3. Integrate your runtime
Connect OpenClaw, ZeroClaw, or your own Agent runtime via the HTTP Bridge. Use simple HTTP tools and webhooks — no need to speak NATS directly.
See OpenClaw integration
Who is this for?
- Node operators: run a stable public entry node using the Node X kit (deploy/node-x/, guided by scripts/setup-node-x.sh init).
- OpenClaw users: if you already run OpenClaw locally or in Docker, attach it via the one‑click Bridge helper (scripts/setup-openclaw-bridge.sh) or the bare‑metal bridge script.
OpenClaw integration
If you already run OpenClaw, you can attach it to Open‑A2A with one command and two HTTP endpoints.
One‑click setup (Docker)
On the same server as OpenClaw, use the helper script to start NATS + Relay + Solid + Bridge via Docker Compose.
bash scripts/setup-openclaw-bridge.sh
# optional diagnostics:
bash scripts/setup-openclaw-bridge.sh diagnose
The script creates/updates .env, auto‑detects the OpenClaw Gateway container name when possible, and brings up the full stack.
Bare‑metal bridge (no Docker)
Prefer to avoid Docker? Run the Bridge directly on the host using the bare‑metal helper script.
bash scripts/setup-openclaw-bridge-baremetal.sh
This uses a local virtualenv (.venv) and Uvicorn to start the Bridge on 0.0.0.0:8080; you then configure OpenClaw to publish via POST /api/publish_intent and to receive callbacks on /hooks/agent.
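As a sketch of the publish side, the request to the Bridge can be built with the stdlib. The endpoint path and port come from the setup above; the payload shape and the `build_publish_request` helper are assumptions — consult the Bridge's own API docs for the real schema.

```python
import json
import urllib.request

BRIDGE = "http://127.0.0.1:8080"  # bare-metal Bridge from the helper script

def build_publish_request(intent: dict) -> urllib.request.Request:
    """Build the POST for the Bridge's publish endpoint (payload shape assumed)."""
    return urllib.request.Request(
        f"{BRIDGE}/api/publish_intent",
        data=json.dumps(intent).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_publish_request({"want": "coffee beans 1kg", "max_price": 25.0})
print(req.full_url, req.get_method())
# To actually send it (requires a running Bridge):
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read())
```

The reverse direction works the same way in HTTP: the Bridge delivers agent events as webhook POSTs, so your runtime only needs to expose one plain HTTP handler.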
Specs & documentation
Dive into the protocol RFCs, SDK, and architecture docs.