By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 6, 2026, 07:24
AI agents are powerful. Ungoverned agents are dangerous.
WorkingAgents is an AI governance platform with expert integration services. We don’t just give you tools — we help you deploy them. Security, access control, and full audit trails for every agent decision.
The Problem
AI agents without guardrails are a liability.
Your agents can read databases, call APIs, send emails on your behalf, and make decisions that affect real customers. Without governance, every agent is an insider threat. Here’s how it breaks down:
Credential sprawl. Every agent-to-tool connection requires its own API keys, OAuth tokens, and service accounts. Credentials scatter across environments with no central control. Five agents connecting to ten tools means fifty sets of credentials in config files, environment variables, and secret managers. One leaked key exposes everything that agent could access.
Invisible actions. An agent deletes a production record, sends a customer email with confidential pricing, or leaks PII in a response. Without audit trails, nobody knows until the damage is done — a customer complains or a regulator asks questions.
Unbounded access. Most agent frameworks give every agent access to every tool. There’s no concept of least privilege. A sales assistant can reach engineering databases. A support agent discovers it can modify billing records. Not because it’s malicious — because nobody set boundaries.
The Solution: Three Gateways, Complete Control
WorkingAgents puts a governance layer between your agents and everything they touch.
AI Gateway
Unified proxy to 250+ LLMs. One API, automatic failover, smart routing by cost or latency. Your agents never talk to model providers directly. Simple queries go to cheaper models. Complex reasoning goes to capable ones. You control the routing, not the agent.
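The routing idea above can be sketched in a few lines. This is a hypothetical illustration, not the WorkingAgents API: the model names, prices, and the complexity heuristic are invented for the example.

```python
# Hypothetical sketch of cost-based model routing, as a gateway might do it.
# Model names, per-token prices, and the complexity heuristic are illustrative.

MODELS = [
    {"name": "small-fast", "cost_per_1k_tokens": 0.0002, "capable": False},
    {"name": "large-reasoning", "cost_per_1k_tokens": 0.0150, "capable": True},
]

def estimate_complexity(prompt: str) -> float:
    """Crude heuristic: long prompts and reasoning keywords score higher."""
    keywords = ("analyze", "plan", "compare", "multi-step", "prove")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str) -> dict:
    """Send simple queries to the cheap model, complex ones to the capable one."""
    if estimate_complexity(prompt) < 0.5:
        return MODELS[0]
    return MODELS[1]
```

A real gateway would also weigh latency targets and provider health, but the core decision is the same: the routing policy lives outside the agent.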
AI Agent Gateway
Control plane for agentic workflows. Multi-step execution with retries, timeouts, and fallbacks. Works with any agent framework via HTTPS and Secure WebSocket (WSS) APIs. When an agent gets stuck in a retry loop, the gateway catches it before it burns through thousands in API calls overnight.
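The retry/timeout/fallback pattern described above can be sketched as follows. The limits, backoff values, and exception type are illustrative assumptions, not the platform's actual interface.

```python
# Minimal sketch of the retry/timeout/fallback pattern a control plane
# might apply to each agent step. All limits here are illustrative.
import time

class StepBudgetExceeded(Exception):
    pass

def run_step(step, max_retries=3, timeout_s=30.0, fallback=None):
    """Run one agent step with bounded retries so a loop cannot run forever."""
    deadline = time.monotonic() + timeout_s
    for attempt in range(1, max_retries + 1):
        if time.monotonic() > deadline:
            break  # overall time budget exhausted
        try:
            return step()
        except Exception:
            time.sleep(min(2 ** attempt * 0.1, 1.0))  # capped exponential backoff
    if fallback is not None:
        return fallback()
    raise StepBudgetExceeded(f"step failed after {max_retries} attempts")
```

The key property is the hard ceiling: whether the step is flaky or stuck, the gateway bounds both attempts and wall-clock time before costs compound.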
MCP Gateway
Enterprise hub for Model Context Protocol. Centralized tool registry, per-user token management, permission boundaries, and guardrails on every tool call. This is where the keycard model lives — every agent gets exactly the access it needs and nothing more.
Access Control: Keycards, Not Master Keys
Virtual MCP Servers let you define permission boundaries per team, per role, per use case:
| Sales Team Server | Engineering Server |
|---|---|
| ✓ CRM read/write | ✓ GitHub / CI-CD |
| ✓ Document generation | ✓ Issue tracker |
| ✓ Knowledge search | ✓ Deployments |
| × Database admin | × CRM data |
| × Deployments | × Financial records |
Four-layer authentication: gateway, team, service, and custom. Capability-based access control. Per-user, per-service, per-endpoint rate limits. A single token replaces scattered credentials.
A sales agent sees CRM tools. An engineering agent sees deployment tools. Neither sees the other’s data.
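The keycard model reduces to a simple membership check at the gateway. The sketch below mirrors the two example servers above; the server names, tool identifiers, and `authorize` function are hypothetical, not the product's API.

```python
# Illustrative "keycard" check: a tool call is allowed only if the caller's
# virtual server exposes that tool. Names and identifiers are hypothetical.

VIRTUAL_SERVERS = {
    "sales": {"crm.read", "crm.write", "docs.generate", "kb.search"},
    "engineering": {"github.ci", "issues.track", "deploy.run"},
}

def authorize(team: str, tool: str) -> bool:
    """Capability-based check: unknown teams and unlisted tools are denied."""
    return tool in VIRTUAL_SERVERS.get(team, set())
```

Because the default for an unknown team or tool is denial, adding a capability is an explicit configuration change rather than an accidental grant.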
Guardrails: Three Checkpoints on Every Action
Automated safety checks before, during, and after every tool call.
Pre-execution. Validate inputs before any tool runs. Block SQL injection, path traversal, prompt injection, and malformed requests before they reach your systems.
Real-time. Monitor execution and require human approval for high-risk operations. “The agent wants to delete a production table — approve or deny?”
Post-execution. Inspect outputs before they reach the agent. Redact PII, mask credentials, filter confidential data. Sensitive information never leaves your perimeter.
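The three checkpoints can be sketched as three small functions. Everything here is illustrative: the injection patterns, the high-risk tool list, and the credential format are assumptions for the example, and a real implementation would use far more thorough rules.

```python
# Hedged sketch of the three-checkpoint pattern: validate inputs, gate
# high-risk operations, filter outputs. All rules here are illustrative.
import re

HIGH_RISK = {"db.drop_table", "email.send_bulk"}

def pre_check(tool: str, args: dict) -> None:
    """Pre-execution: reject obvious injection patterns before the tool runs."""
    for value in args.values():
        if isinstance(value, str) and re.search(
            r"(?i)ignore all previous|;\s*drop\s+table", value
        ):
            raise ValueError(f"blocked input for {tool}")

def needs_approval(tool: str) -> bool:
    """Real-time: flag high-risk tools for human approval before execution."""
    return tool in HIGH_RISK

def post_check(output: str) -> str:
    """Post-execution: mask credential-shaped strings before the agent sees them."""
    return re.sub(r"\bsk-[A-Za-z0-9]{8,}\b", "[REDACTED-KEY]", output)
```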
What the guardrails catch:
| Guardrail | What It Stops |
|---|---|
| Prompt injection prevention | “Ignore all previous instructions” and similar attacks |
| PII detection & redaction | 20+ categories: SSNs, credit cards, emails, phones, addresses |
| Content safety | Hate speech, self-harm, violence with configurable thresholds |
| Topic filtering | Block specific domains: medical advice, legal counsel, financial tips |
| Custom rules | Your own policies, your own logic, enforced at the gateway |
Observability: See Everything, Miss Nothing
Every agent action, tool call, model request, and guardrail evaluation is logged. When something goes wrong — and it will — you know exactly what happened, who triggered it, and why.
```json
{
  "agent": "sales-assistant",
  "user": "[email protected]",
  "tool": "crm.search_contacts",
  "args": { "query": "Acme Corp" },
  "guardrails": {
    "pii_check": "passed",
    "injection_check": "passed"
  },
  "latency_ms": 42,
  "cost_usd": 0.0018
}
```
Token-level cost attribution by user, team, and model. Request-level inspection with full prompt and response. P99/P90/P50 latency tracking per endpoint. Structured logging and request tracing for distributed debugging.
Security: Your Data Never Leaves
WorkingAgents deploys inside your VPC, your data center, or your air-gapped network. The platform orchestrates workloads without extracting data. No third party ever touches your information.
```
Agent Request
  → WorkingAgents Gateway (your VPC)
  → Auth check
  → Guardrail scan
  → PII redaction
  → Tool execution (your infra)
  → Audit log (your storage)
```
Zero data egress. Full audit trail. Designed for SOC 2 Type 2 compliance. HIPAA-ready for healthcare. GDPR-ready with data residency controls. Self-hosted or cloud — your choice.
Why This Isn’t Optional Anymore
Every wave of technology created a new category of risk. Cloud computing created cloud security. Mobile apps created app security. APIs created API security. Each time, organizations learned the hard way that the same capabilities that make technology powerful make it dangerous when ungoverned.
AI agents are the next wave, and the risks are fundamentally different.
A tool waits for instructions. An agent makes decisions. When your AI can decide to query a database, draft an email, and send it — all in a single chain of reasoning — the governance model that worked for tools doesn’t work for agents. You need controls at every step of the chain, not just at the entry point.
Before and After
| Without governance | With WorkingAgents |
|---|---|
| Credentials managed per integration — each agent maintains its own keys across environments | Single token per user, centrally managed. Gateway handles rotation and refresh |
| Audit trails built per application — inconsistent coverage, forensic investigation requires stitching logs | Every action logged automatically at the gateway with full context |
| Broad access, manually scoped — restricting permissions requires custom code per framework | Least-privilege enforced by policy. Virtual MCP Servers — configuration, not code |
| Usage reviewed at invoice time — no real-time visibility into costs | Real-time cost attribution with budget caps, alerts, and smart routing |
| Safety checks built into each agent — coverage depends on each developer | Automated guardrails at every checkpoint — uniform regardless of framework or model |
| Each team deploys independently — no unified view of agents or capabilities | Structured scaling with central registry of all agents, tools, and permissions |
Why Now
Your competitors are already using AI agents. The productivity gains are real: tasks that took hours now complete in minutes, 80% of support inquiries are handled automatically, and teams operate leaner and respond faster.
The question isn’t whether to adopt AI agents. It’s whether you govern them before something goes wrong — or after.
Companies that deploy agents with governance from day one move faster with confidence. Companies that bolt on governance after an incident move slower with lawyers.
WorkingAgents: AI agents are powerful. Make sure they answer to you.
James Aspinwall is the founder of WorkingAgents, an AI governance platform specializing in agent access control, security, and integration services for enterprises deploying AI at scale.