RabbitMQ solved a problem in the 2000s that nobody could see coming: applications needed to talk to each other reliably, and the connections between them were more important than the applications themselves. Before RabbitMQ, every application-to-application connection was a custom integration – a bespoke pipe built by hand, with its own error handling, its own retry logic, its own failure mode.
RabbitMQ said: stop building custom pipes. Route everything through a broker. The broker handles delivery guarantees, routing, access control, and failure recovery. Applications send messages. The broker makes sure they arrive.
AI agents in 2026 face the same problem. And WorkingAgents is the same answer.
The Problem RabbitMQ Solved
In the early days of distributed systems, applications connected to each other directly. Service A called Service B over HTTP. Service B called Service C. Service C called the database. If Service B was down, Service A failed. If the database was slow, everything was slow. If someone needed to add Service D, they had to modify Services A, B, and C to know about it.
The connections looked like spaghetti. Every new service made the tangle worse. Every failure cascaded. Every change required coordinating across multiple teams.
RabbitMQ introduced a broker – a central routing layer that sat between all the services:
- Services no longer connected to each other directly. They connected to the broker.
- The broker handled routing: Service A publishes a message to a topic. Services B, C, and D subscribe to that topic. The broker delivers. Nobody needs to know who else is listening.
- The broker handled reliability: if Service B is temporarily down, the message waits in a queue until B comes back. No data lost.
- The broker handled access control: Service A can publish to certain topics but not others. Service D can read from queues but not write to them.
- The broker handled flow control: if Service C is overwhelmed, the broker slows delivery to prevent C from crashing.
The result: applications became simpler because they only needed to know about the broker, not about each other. The broker became the control plane – the single place where routing, access, reliability, and monitoring were managed.
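The decoupling described above can be sketched in a few lines. This is a toy in-memory broker, not RabbitMQ's actual API, but it shows the two core moves: publishers address a topic rather than a consumer, and messages wait in a queue until a consumer picks them up.

```python
from collections import defaultdict, deque

class Broker:
    """Toy in-memory broker: topics, queues, and buffering for offline consumers."""
    def __init__(self):
        self.queues = defaultdict(deque)    # queue name -> pending messages
        self.bindings = defaultdict(set)    # topic -> names of subscribed queues

    def bind(self, topic, queue):
        """Subscribe a queue to a topic; publishers never learn who is listening."""
        self.bindings[topic].add(queue)

    def publish(self, topic, message):
        """Fan the message out to every bound queue. It waits there until consumed."""
        for queue in self.bindings[topic]:
            self.queues[queue].append(message)

    def consume(self, queue):
        """Pop the oldest pending message, or None if the queue is empty."""
        return self.queues[queue].popleft() if self.queues[queue] else None

broker = Broker()
broker.bind("orders", "billing")
broker.bind("orders", "shipping")
broker.publish("orders", {"id": 42})            # publisher knows only the topic
assert broker.consume("billing") == {"id": 42}
# "shipping" was down at publish time; its copy simply waited in the queue:
assert broker.consume("shipping") == {"id": 42}
```

Adding a fourth consumer is one more `bind` call; no publisher changes. That is the whole argument for the broker, in miniature.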
RabbitMQ became the most widely deployed open-source message broker in the world. Goldman Sachs uses it. Thousands of enterprises use it. It is the infrastructure layer that makes distributed systems manageable.
AI Agents Have the Same Problem
Replace “applications” with “AI agents” and “services” with “tools and data sources.” The pattern is identical:
Without a broker, every agent connects directly to every tool. Agent A calls the CRM API. Agent B calls the database. Agent C calls the email service. Agent D calls the payment system. Each connection is a custom integration with its own authentication, its own error handling, its own permission model.
The connections are spaghetti. Again. Every new agent makes the tangle worse. Every new tool multiplies the integration surface. Every failure cascades – if the CRM API goes down, every agent that depends on it fails in its own unpredictable way.
The same problems resurface:
- No central routing. Nobody knows which agents are calling which tools, how often, or with what data. There is no single place to see the full picture.
- No access control at the connection layer. Each tool has its own authentication, but there is no unified permission model that says “Agent A can read CRM data but not modify it” or “Agent B can send emails but not access financial records.”
- No delivery guarantees. If a tool call fails, does the agent retry? How many times? What happens if the retry succeeds but the original call also eventually succeeds? Duplicate actions with no coordination.
- No flow control. An agent stuck in a loop hammers an API with thousands of requests per second. Nothing throttles it. The API goes down. Every other agent depending on that API goes down with it.
- No audit trail at the connection layer. Individual tools may log their own access, but there is no unified record of “which agent called which tool, with what parameters, on behalf of which user, at what time, and what was the result.”
This is exactly the problem RabbitMQ solved for applications. WorkingAgents solves it for AI agents.
How the Comparison Maps
| RabbitMQ Concept | WorkingAgents Equivalent |
|---|---|
| Message broker | MCP Gateway |
| Queues | Tool call routing and queuing |
| Exchanges and routing keys | Virtual MCP Servers and permission-based routing |
| Access control (vhosts, users, permissions) | Capability-based keycards, per-agent permissions |
| Message persistence | Audit trails – every tool call logged immutably |
| Consumer acknowledgments | Three-checkpoint guardrails (pre/during/post execution) |
| Dead letter queues | Blocked actions log – denied tool calls recorded for review |
| Flow control / back-pressure | Rate limiting per agent, per tool, per user |
| Clustering and high availability | Erlang/OTP distribution (same platform as RabbitMQ) |
| Management UI | Governance dashboard |
| Plugins | Governed tool registry (86+ tools) |
The parallels are not superficial. They are structural. Both systems solve the same fundamental problem: routing messages between independent actors through a governed central layer.
The Broker Pattern Applied to Agents
Before WorkingAgents: Direct Connections
Every agent connects directly to every tool it needs. A sales agent has its own CRM credentials. An engineering agent has its own GitHub token. A support agent has its own Zendesk API key. A research agent has its own database connection string.
- 10 agents connecting to 10 tools = 100 credential sets scattered across configurations
- Each agent decides for itself what it can access
- No central visibility into who is calling what
- One leaked credential exposes a tool – and because copies of credentials are scattered across agent configurations, nobody knows where else that credential lives or which other agents can reach the same tool
- Adding a new tool means updating every agent that needs it
- Removing an agent’s access to a tool means finding every place that credential is stored
This is the pre-RabbitMQ world of point-to-point connections. It worked when there were 3 services talking to each other. It falls apart at 10. It is unmanageable at 100.
After WorkingAgents: Brokered Connections
Every agent connects to the MCP Gateway. The gateway connects to the tools. Agents never touch tools directly.
- 10 agents connecting to 1 gateway connecting to 10 tools = 10 agent credentials + 10 tool credentials, managed centrally
- The gateway decides what each agent can access based on its capability keycard
- Central visibility into every tool call, by every agent, in real time
- One credential per tool, stored in the gateway, never exposed to agents
- Adding a new tool means registering it in the gateway – every authorized agent sees it immediately
- Removing an agent’s access means updating one permission set in one place
This is the RabbitMQ pattern. The broker is the single point of governance. Everything flows through it. Everything is visible. Everything is controlled.
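The arithmetic behind the two topologies is worth making explicit: point-to-point credential counts grow multiplicatively, brokered counts grow additively.

```python
def point_to_point(agents, tools):
    # every agent holds its own credential for every tool it touches
    return agents * tools

def brokered(agents, tools):
    # one credential per agent (to the gateway) + one per tool (held by the gateway)
    return agents + tools

assert point_to_point(10, 10) == 100   # the scattered-configuration world
assert brokered(10, 10) == 20          # the gateway world
assert point_to_point(30, 50) == 1500  # the scale where point-to-point collapses
assert brokered(30, 50) == 80
```

The gap widens with every agent and every tool added, which is why the pattern feels optional at 3 agents and mandatory at 30.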
Access Control: Vhosts for Agents
RabbitMQ uses virtual hosts (vhosts) to isolate different applications or tenants on the same broker. A production application and a staging application can share the same RabbitMQ cluster without seeing each other’s queues or messages. Each vhost has its own permissions, its own queues, its own exchanges.
WorkingAgents uses Virtual MCP Servers for the same purpose – but for agents instead of applications:
Sales Team Virtual Server:
- Can access: CRM, email, document generation, knowledge search
- Cannot access: database admin, deployment tools, financial records
Engineering Virtual Server:
- Can access: GitHub, CI/CD, issue tracker, deployment tools
- Cannot access: CRM data, financial records, HR systems
Executive Virtual Server:
- Can access: dashboards, reports, knowledge search, financial summaries
- Cannot access: raw database queries, deployment tools, code repositories
Each Virtual MCP Server is a boundary. Agents operating within one boundary cannot see or access tools in another boundary. A sales agent cannot accidentally (or deliberately) access engineering deployment tools. An engineering agent cannot read HR records. The boundaries are enforced at the gateway, not at the application level – just like RabbitMQ enforces vhost boundaries at the broker, not at the application.
The key insight from RabbitMQ that applies directly: access control belongs in the broker, not in the endpoints. If every tool implements its own access control, you have 86 different permission systems with 86 different failure modes. If the broker implements access control, you have one permission system, one policy engine, one audit trail.
Routing: Exchanges and Topics for Tool Calls
RabbitMQ routes messages using exchanges and routing keys. A publisher sends a message to an exchange with a routing key. The exchange matches the routing key against bindings and delivers the message to the appropriate queues. The publisher does not need to know which queues exist or which consumers are listening.
WorkingAgents routes tool calls using the same pattern:
- An agent requests a tool call (the “message”)
- The MCP Gateway (the “exchange”) evaluates the request against the agent’s permissions (the “routing rules”)
- If permitted, the gateway routes the call to the appropriate tool (the “queue/consumer”)
- If denied, the call is logged and rejected (the “dead letter”)
- The agent does not need to know how tools are implemented, where they run, or what credentials they use
This decoupling is what makes the system manageable at scale. When RabbitMQ handles routing, adding a new consumer does not require changing any publisher. When WorkingAgents handles routing, adding a new tool does not require changing any agent – and removing a tool requires no agent reconfiguration, because the gateway simply stops routing to it.
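That routing flow can be sketched as follows. This is an illustrative toy, not the WorkingAgents API: names like `keycards` and `dead_letters` are stand-ins for the capability and blocked-actions concepts described above.

```python
class Gateway:
    """Toy gateway: permission-checked routing with a dead-letter log for denials."""
    def __init__(self, tools):
        self.tools = tools              # tool name -> callable (credentials live here)
        self.keycards = {}              # agent -> set of permitted tool names
        self.dead_letters = []          # denied calls, recorded for review

    def grant(self, agent, tool):
        self.keycards.setdefault(agent, set()).add(tool)

    def call(self, agent, tool, **params):
        if tool not in self.keycards.get(agent, set()):
            self.dead_letters.append((agent, tool, params))    # log, then reject
            raise PermissionError(f"{agent} may not call {tool}")
        return self.tools[tool](**params)   # agent never sees the tool's internals

gw = Gateway(tools={"crm_lookup": lambda name: f"record for {name}"})
gw.grant("sales_agent", "crm_lookup")
assert gw.call("sales_agent", "crm_lookup", name="Acme") == "record for Acme"
try:
    gw.call("support_agent", "crm_lookup", name="Acme")   # no keycard: dead-lettered
except PermissionError:
    pass
assert gw.dead_letters == [("support_agent", "crm_lookup", {"name": "Acme"})]
```

The agent's code is identical whether `crm_lookup` is an HTTP API, a database query, or a mock: the exchange-style indirection is what keeps agents ignorant of tool implementations.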
Reliability: Acknowledgments for Tool Calls
RabbitMQ uses consumer acknowledgments to guarantee delivery. A message is not removed from the queue until the consumer explicitly acknowledges it. If the consumer crashes before acknowledging, the message is redelivered to another consumer. No data is lost.
WorkingAgents’ three-checkpoint guardrails serve the same purpose – but for safety rather than delivery:
Pre-execution (before the message is delivered):
- RabbitMQ: validates the message format and routing key
- WorkingAgents: validates the tool call inputs. Blocks SQL injection, path traversal, prompt injection, and malformed requests before they reach the tool.
During execution (while the consumer processes the message):
- RabbitMQ: monitors consumer health, enforces timeouts
- WorkingAgents: monitors agent behavior in real time. Detects anomalous patterns. Requires human approval for high-risk operations.
Post-execution (after the consumer finishes):
- RabbitMQ: waits for acknowledgment, handles dead letters
- WorkingAgents: inspects tool outputs before they reach the agent. Redacts PII. Masks credentials. Filters confidential data. Logs the complete interaction for audit.
The pattern is the same: the broker validates, monitors, and verifies every message passing through it. Nothing flows unchecked.
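A stripped-down version of the three checkpoints looks like this. It is a sketch under loose assumptions: the real during-execution checkpoint involves behavioral monitoring and human approval, which a synchronous toy can only represent as a comment, and the two regexes stand in for much richer detection.

```python
import re

def pre_check(params):
    """Checkpoint 1: reject suspicious inputs before they reach the tool."""
    if any(re.search(r";\s*drop\s+table", str(v), re.I) for v in params.values()):
        raise ValueError("blocked: SQL injection pattern")

def post_check(result):
    """Checkpoint 3: redact PII from the output before the agent sees it."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED SSN]", result)

def governed_call(tool, **params):
    pre_check(params)            # before execution: validate inputs
    result = tool(**params)      # during execution: run under the gateway's watch
    return post_check(result)    # after execution: inspect and redact outputs

lookup = lambda name: f"{name}: SSN 123-45-6789"
assert governed_call(lookup, name="Ada") == "Ada: SSN [REDACTED SSN]"
```

Note that the tool itself is unchanged: all three checkpoints live in the wrapper, which is the point – governance at the broker, not at the endpoint.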
Flow Control: Back-Pressure for Agents
RabbitMQ implements flow control to prevent fast publishers from overwhelming slow consumers. If a consumer cannot keep up, the broker slows down message delivery. If queues grow too large, the broker pushes back on publishers. The system self-regulates.
WorkingAgents implements the same concept for AI agents:
- Per-agent rate limits: an agent cannot make more than N tool calls per minute. If it exceeds the limit, calls are queued or rejected.
- Per-tool rate limits: a tool cannot receive more than N calls per second from all agents combined. If the tool is overwhelmed, the gateway throttles delivery.
- Per-user rate limits: a user’s agents cannot collectively consume more than their allocation.
- Circuit breakers: if a tool starts failing consistently, the gateway stops routing to it. Agents get a clean error instead of timing out. When the tool recovers, the gateway resumes routing. Automatically.
Without the broker, an agent in a retry loop can hammer a tool with thousands of requests per second, causing cascading failures across every agent that depends on that tool. With the broker, the loop is detected and throttled before damage occurs.
This is the exact scenario that justified RabbitMQ’s existence in the application world. An application stuck in a retry loop sending millions of messages per second was a common failure mode that took down entire systems. The broker absorbed the pressure and protected everything downstream.
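The circuit-breaker behavior described above can be sketched as below. This toy omits the timed half-open probe state a production breaker would have; it only shows the core guarantee, namely that after repeated failures the broker stops hitting the tool and returns a clean, fast error instead.

```python
class CircuitBreaker:
    """Toy breaker: stop routing to a tool after repeated failures."""
    def __init__(self, tool, max_failures=3):
        self.tool = tool
        self.max_failures = max_failures
        self.failures = 0

    def call(self, **params):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: tool unavailable")  # clean, fast error
        try:
            result = self.tool(**params)
        except Exception:
            self.failures += 1          # count consecutive failures
            raise
        self.failures = 0               # a success closes the circuit again
        return result

calls = []
def flaky(**_):
    calls.append(1)
    raise ConnectionError("tool down")

breaker = CircuitBreaker(flaky, max_failures=2)
for _ in range(5):                       # an agent stuck in a retry loop
    try:
        breaker.call()
    except Exception:
        pass
assert len(calls) == 2   # after two failures, the breaker absorbed the other three
```

The retry loop kept running, but the downstream tool only saw two requests: the breaker absorbed the pressure, exactly the role the broker plays for everything downstream.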
The Audit Trail: Message Tracing for Agents
RabbitMQ provides message tracing – the ability to see every message that flows through the broker, where it came from, where it went, and what happened to it. This is essential for debugging distributed systems.
WorkingAgents provides the same tracing for AI agents, but elevated to a compliance requirement:
- Every tool call: which agent, which tool, which user, what inputs, what outputs, what guardrails fired, what was the result
- Every denied call: which agent tried to access what, why it was denied, what permission was missing
- Every guardrail intervention: what was detected (PII, injection, policy violation), what was redacted, what was blocked
- Complete timeline: reconstructable sequence of every action taken by every agent, queryable for regulatory examination
In a RabbitMQ system, message tracing tells you “Service A sent message X to queue Y at time T.” In WorkingAgents, audit tracing tells you “Agent A called tool Y with parameters Z on behalf of user U at time T, guardrail G detected PII in the response and redacted fields F1 and F2 before returning the result.”
The audit trail is the compliance version of message tracing. Same concept. Higher stakes.
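The audit record described above – agent, tool, user, parameters, result, guardrail interventions, timestamp – can be modeled as a structured entry. The field names here are illustrative, not the WorkingAgents schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One immutable record per tool call, queryable after the fact."""
    agent: str
    tool: str
    user: str
    params: dict
    result: str
    guardrails_fired: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log = []
log.append(AuditEntry(
    agent="agent_a", tool="crm_lookup", user="user_u",
    params={"name": "Acme"}, result="[REDACTED]",
    guardrails_fired=["pii_redaction"],
))

# Queryable for examination: e.g. every call where a PII guardrail intervened
pii_events = [e for e in log if "pii_redaction" in e.guardrails_fired]
assert len(pii_events) == 1
assert asdict(pii_events[0])["agent"] == "agent_a"
```

Because every call flows through the gateway, this log is complete by construction – there is no code path where an agent reaches a tool without leaving an entry.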
Why the Same Technology
This is not just an analogy. WorkingAgents and RabbitMQ are built on the same technology: Erlang/OTP running on the BEAM virtual machine.
RabbitMQ chose Erlang because message brokers need:
- Millions of concurrent connections (lightweight BEAM processes)
- Failure isolation (one crashed queue does not affect others)
- Self-healing (supervision trees restart failed components)
- High availability (built-in clustering and distribution)
- Zero-downtime upgrades (hot code swapping)
WorkingAgents chose Elixir (which runs on the same BEAM VM) for exactly the same reasons. AI agent governance has the same requirements as message brokering:
- Millions of concurrent agent connections
- Failure isolation (one agent’s problem does not affect others)
- Self-healing (governance must never go down)
- High availability (agents operate 24/7)
- Zero-downtime upgrades (new rules deployed without interruption)
The technology choice is not a coincidence. It is a recognition that agent governance and message brokering are the same class of problem – routing messages between independent actors through a governed central layer, reliably, at scale, without downtime.
What This Means for Partners
If you are evaluating WorkingAgents for a partnership, the RabbitMQ comparison tells you several things:
1. The pattern is proven. Message brokers are not experimental technology. They are foundational infrastructure in every enterprise. RabbitMQ has been production infrastructure for nearly two decades. The broker pattern – central routing, access control, delivery guarantees, audit trails – is the standard architecture for managing connections between independent systems. WorkingAgents applies this proven pattern to AI agents.
2. The problem grows with adoption. When a company has 3 services, point-to-point connections are manageable. When they have 30 services, they need a broker. The same inflection point is coming for AI agents. When a company has 3 agents, direct connections to tools are manageable. When they have 30 agents accessing 50 tools, they need a gateway. WorkingAgents is positioned at that inflection point.
3. The broker becomes essential infrastructure. Once RabbitMQ was deployed, it became the system that everything depended on. Removing it meant rewiring every service-to-service connection in the organization. It became sticky – not because of vendor lock-in tricks, but because it was genuinely the right place for routing and governance to live. WorkingAgents occupies the same position for AI agents. Once agents route through the MCP Gateway, the gateway becomes the control plane that everything depends on.
4. The value increases with scale. RabbitMQ’s value is proportional to the number of services and connections it manages. A broker managing 5 queues is useful. A broker managing 500 queues across 50 services is essential infrastructure. WorkingAgents follows the same curve. Governing 5 agents is useful. Governing 500 agents across 50 tools with per-agent permissions, guardrails, and audit trails is essential infrastructure that justifies its own budget line.
5. The technology is the same. WorkingAgents is not “like RabbitMQ” in a marketing sense. It is built on the same Erlang/OTP platform, uses the same BEAM virtual machine, benefits from the same concurrency model, the same supervision trees, the same fault tolerance, and the same distribution primitives. The infrastructure DNA is identical.
The One-Line Pitch
RabbitMQ is the broker between applications and services. WorkingAgents is the broker between AI agents and tools. Same pattern. Same technology. Same result: connections that are governed, reliable, observable, and scalable – managed in one place instead of scattered across every endpoint.
Every enterprise that adopted message brokers in the 2000s and 2010s will adopt agent brokers in the 2020s and 2030s. The question is not whether the broker pattern applies to AI agents. It is who builds the broker that becomes the standard.