By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 7, 2026, 12:45
Fortanix makes AI workloads tamper-proof at the hardware level — encrypting data, models, and prompts even while they’re being processed. WorkingAgents makes AI agents accountable at the application level — enforcing permissions, logging actions, and controlling what agents can do. One protects the computation itself. The other governs who gets to compute and why. For enterprises deploying autonomous AI agents with sensitive data, you need both.
What Fortanix Does
Fortanix is the global leader in data-first cybersecurity and the pioneer of Confidential Computing. Their unified platform protects sensitive data, AI models, and applications across on-premises and multi-cloud environments — at rest, in transit, and critically, in use.
The platform runs AI workloads inside hardware-isolated trusted execution environments (enclaves), protecting against data leakage, model extraction, and unauthorized access — even from privileged insiders with root access to the host machine.
Core components:
- Fortanix Data Security Manager (DSM) — FIPS 140-2 Level 3 hardware security module for encryption key management and access control enforcement
- Confidential Computing Manager (CCM) — composite attestation that verifies the trustworthiness of AI workloads and infrastructure before releasing encryption keys
- Confidential AI (Armet AI) — end-to-end protection for model weights, prompts, and inference workloads on NVIDIA Confidential Computing GPUs
- Quantum-ready encryption — NIST-standardized post-quantum algorithms, designed to resist attacks from both classical and quantum computers
Recognized in six 2025 Gartner Hype Cycle reports (Data Security, Digital Sovereignty, Privacy, Compute, Telco Cloud, Emerging Technologies). Partners include NVIDIA, HPE, NTT DATA, and Rafay Systems. At GTC 2026, Fortanix will demonstrate Confidential AI at Booth #3117.
The problem they solve: traditional security protects data at rest and in transit, but the moment you decrypt data to process it — to train a model, run inference, or execute an agent workflow — it’s exposed in memory. Fortanix eliminates that window. Data stays encrypted even during computation.
What WorkingAgents Does
WorkingAgents is the governance and control layer between AI agents and enterprise systems. Three gateways, one control plane:
- Unified LLM Routing — control which models agents use and how they access them
- Agentic Workflow Control — define, supervise, and enforce how agents take actions
- Enterprise MCP and A2A Tools Access — connect agents to internal tools with least-privilege permissions
Per-user access control with AES-256-CTR encrypted permission keys, audit trails on every action, 86+ MCP tools, per-user SQLite databases. Agents inherit the user’s permissions — one identity, one set of rules, full accountability.
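To make the per-user model concrete, here is a minimal sketch of a permission check backed by a per-user SQLite database. The schema and function names (`grant_tool`, `check_tool`) are hypothetical illustrations, not WorkingAgents' actual API:

```python
import sqlite3

def open_user_db(user_id: str) -> sqlite3.Connection:
    # One database per user; ":memory:" stands in for a per-user file
    # such as f"{user_id}.db" in a real deployment.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE IF NOT EXISTS permissions (tool TEXT PRIMARY KEY)")
    return conn

def grant_tool(conn: sqlite3.Connection, tool: str) -> None:
    conn.execute("INSERT OR IGNORE INTO permissions (tool) VALUES (?)", (tool,))

def check_tool(conn: sqlite3.Connection, tool: str) -> bool:
    # The agent inherits the user's permissions: every tool call is
    # checked against this table before it executes.
    row = conn.execute("SELECT 1 FROM permissions WHERE tool = ?", (tool,)).fetchone()
    return row is not None

db = open_user_db("alice")
grant_tool(db, "crm.read")
print(check_tool(db, "crm.read"))    # True
print(check_tool(db, "crm.delete"))  # False
```

Because each user gets a separate database, a bug or breach in one tenant's agent cannot reach another tenant's data through a shared table.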
The Security Gap They Close Together
Fortanix answers: “Is the computation itself protected from tampering and observation?”
WorkingAgents answers: “Is the agent authorized to perform this computation in the first place?”
Both questions must be answered “yes” for a secure agentic AI deployment. Without Fortanix, a properly authorized agent could have its prompts and model weights exposed in memory by a compromised host. Without WorkingAgents, a hardware-secured enclave could be running an agent with excessive permissions, accessing data it shouldn’t touch, with no audit trail.
Security requires both: hardware-enforced isolation of the computation AND software-enforced governance of the actor.
Synergy Areas
1. Confidential Agent Operations
WorkingAgents manages per-user encrypted permission keys (AES-256-CTR). Fortanix manages enterprise encryption keys in FIPS 140-2 Level 3 HSMs. The integration:
- WorkingAgents delegates key management to Fortanix DSM — instead of managing encryption keys locally, WorkingAgents stores its access control keys in Fortanix’s HSM. Keys never leave hardware-protected storage. Even if the WorkingAgents server is compromised, permission keys remain protected.
- Agent workflows run inside Fortanix enclaves — when a WorkingAgents agent processes sensitive data (medical records in the CRM, financial data in task notes, proprietary research), the computation happens inside a trusted execution environment. The data is encrypted in the per-user SQLite database at rest, encrypted in transit, and now encrypted in use.
- Composite attestation for agent identity — Fortanix CCM can verify that the WorkingAgents process requesting key release is genuine and unmodified. Before an agent gets access to sensitive tools, Fortanix confirms the entire execution environment is trustworthy — not just the user’s permission level, but the integrity of the software itself.
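Conceptually, attestation-gated key release means the key store compares a measurement of the requesting software against an approved value before handing anything over. The sketch below mocks that flow in plain Python; `release_key`, the key store, and the measurement format are invented for illustration and do not reflect Fortanix's actual SDK, where the measurement comes from hardware attestation rather than an application-level hash:

```python
import hashlib

# Measurement of the approved WorkingAgents build. In a real deployment
# this value comes from hardware attestation, not a Python hash.
APPROVED_BUILD = b"workingagents-server-v1.4.2"
EXPECTED_MEASUREMENT = hashlib.sha256(APPROVED_BUILD).hexdigest()

KEY_STORE = {"permission-master-key": b"\x00" * 32}  # stands in for the HSM

def release_key(key_id: str, measurement: str) -> bytes:
    # Key release succeeds only when the attested measurement matches the
    # expected value -- a tampered binary produces a different measurement.
    if measurement != EXPECTED_MEASUREMENT:
        raise PermissionError("attestation failed: untrusted environment")
    return KEY_STORE[key_id]

good = hashlib.sha256(APPROVED_BUILD).hexdigest()
bad = hashlib.sha256(b"workingagents-server-TAMPERED").hexdigest()

print(len(release_key("permission-master-key", good)))  # 32
try:
    release_key("permission-master-key", bad)
except PermissionError as exc:
    print(exc)  # attestation failed: untrusted environment
```

The point of the pattern: even a valid user permission is not enough; the software asking on the user's behalf must also prove it has not been modified.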
2. Secure LLM Routing
WorkingAgents routes agent requests to multiple LLM providers. Each routing decision involves sending prompts — potentially containing sensitive enterprise data — to external or internal models. Fortanix protects this pipeline:
- Prompt encryption in use — when WorkingAgents constructs a prompt that includes sensitive data (customer names from NIS, task details, business context), Fortanix ensures the prompt is encrypted even in memory during assembly
- Model weight protection — for enterprises running private models, Fortanix protects model weights from extraction. WorkingAgents routes to the model. Fortanix ensures the model itself can’t be stolen.
- HSM-gated key release — before WorkingAgents can access an LLM’s API key, Fortanix DSM validates the request through composite attestation. Stolen API credentials are useless without passing Fortanix’s hardware-backed verification.
The LLM routing pipeline becomes:
1. WorkingAgents checks agent permissions
2. Fortanix verifies execution environment integrity
3. Fortanix releases the LLM API key from the HSM
4. The prompt is assembled inside the enclave
5. The prompt travels encrypted in transit to the LLM provider
6. The response is processed inside the enclave
7. The result is delivered to the agent within WorkingAgents' permission boundary
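As a hedged sketch of this gated pipeline, the routing logic can be structured so that each stage must succeed before the next one runs. Every function name here is hypothetical, and the Fortanix stages are stubbed out:

```python
def route_prompt(user_perms: set, model: str, prompt: str) -> str:
    # Stage 1 (WorkingAgents): is this agent allowed to use this model?
    if f"llm.{model}" not in user_perms:
        raise PermissionError(f"agent not authorized for {model}")

    # Stages 2-3 (Fortanix, stubbed): attest the environment, then
    # release the provider API key from the HSM.
    if not attest_environment():
        raise PermissionError("attestation failed")
    api_key = release_api_key(model)

    # Stages 4-7: assemble the prompt inside the enclave, call the
    # provider over TLS, and process the response before it leaves
    # the permission boundary.
    return call_provider(api_key, prompt)

def attest_environment() -> bool:
    return True  # placeholder: assume the enclave measurement checks out

def release_api_key(model: str) -> str:
    return f"key-for-{model}"  # placeholder for an HSM-gated release

def call_provider(api_key: str, prompt: str) -> str:
    return f"response to: {prompt}"  # placeholder for the LLM call

print(route_prompt({"llm.gpt-4"}, "gpt-4", "summarize Q3"))
```

The ordering matters: the API key is never even fetched, let alone used, unless both the permission check and the attestation check have passed.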
3. Zero-Trust Agentic AI Architecture
Fortanix explicitly targets “secure and trusted agentic AI” on NVIDIA Confidential Computing GPUs. WorkingAgents provides the agent governance layer. Together, they deliver zero-trust for AI agents:
| Zero-Trust Principle | Fortanix Implementation | WorkingAgents Implementation |
|---|---|---|
| Never trust, always verify | Composite attestation of execution environment | Permission check on every tool call |
| Least privilege | HSM-gated key release | Per-user, per-tool permission keys |
| Assume breach | Data encrypted in use (enclave isolation) | Audit trail on every action |
| Verify explicitly | Hardware attestation before key release | Access control checked at every API boundary |
| Micro-segmentation | Process-level enclave isolation | Per-user database isolation |
Neither product alone delivers zero-trust for AI agents. Fortanix without WorkingAgents has hardware isolation but no agent governance. WorkingAgents without Fortanix has agent governance but no protection against privileged insiders reading memory. Together: the agent is governed AND the computation is tamper-proof.
4. Regulated Industry Enablement
Healthcare, finance, and government — the industries with the strictest compliance requirements — are also the industries most cautious about deploying AI agents. Both products directly address their concerns:
Healthcare:
- Patient data in WorkingAgents’ NIS CRM → encrypted in Fortanix enclave during processing
- AI agent scheduling patient follow-ups → governed by WorkingAgents permissions, computed in hardware-isolated enclave
- HIPAA compliance: data encrypted at rest (WorkingAgents), in transit (TLS), and in use (Fortanix)
Finance:
- Trading algorithms and risk models → model weights protected by Fortanix, agent access controlled by WorkingAgents
- Customer financial data in agent workflows → never exposed in memory, fully audited
- SOX/PCI compliance: tamper-proof computation + complete audit trail
Government/Defense:
- Classified data processing by AI agents → hardware-enforced isolation + software-enforced access control
- Sovereign deployment: both products can run on-premises, including in air-gapped environments
- Quantum-ready: Fortanix’s post-quantum encryption protects data today against tomorrow’s quantum threats
The pitch to regulated industries: “Your AI agents can’t see data they’re not authorized for (WorkingAgents), and even authorized computations can’t be observed by infrastructure operators (Fortanix).”
5. Insider Threat Elimination
Fortanix’s defining capability is protecting against privileged insiders. A system administrator with root access cannot read data inside a Fortanix enclave. WorkingAgents complements this:
- Admin can’t read agent data — per-user SQLite databases inside Fortanix enclaves. Even the platform operator can’t access tenant data.
- Admin can’t modify agent permissions — permission keys stored in Fortanix HSM. Changing permissions requires HSM-backed authorization, not just database access.
- Admin can’t suppress audit logs — audit trail written inside enclave, signed by Fortanix. Tampered logs are detectable.
For enterprises deploying AI agents on third-party infrastructure (cloud, managed services, GPU clouds), this is the difference between trust-based security and trustless security. You don’t have to trust your infrastructure provider. The hardware enforces it.
6. The Rafay-Fortanix-WorkingAgents Stack
Fortanix already partners with Rafay Systems for simplified deployment of confidential AI. WorkingAgents has complementary synergy with Rafay for infrastructure orchestration. The three-product stack:
- Rafay — provisions and manages GPU infrastructure, multi-tenant network isolation
- Fortanix — hardware-enforced encryption in use, key management, attestation
- WorkingAgents — agent governance, permissions, workflows, audit trails
A sovereign AI cloud built on all three: isolated at the network layer (Rafay), encrypted at the compute layer (Fortanix), governed at the agent layer (WorkingAgents). Each layer independently auditable. Each layer independently enforced in hardware or software. Complete stack sovereignty.
The Partnership Opportunity
For Fortanix: WorkingAgents provides the agent governance layer their confidential AI platform needs. Fortanix secures the computation, but doesn’t govern what agents do — which tools they call, what data they access, who gets notified. WorkingAgents closes that gap. Every Fortanix customer deploying agentic AI needs agent governance.
For WorkingAgents: Fortanix elevates our security story from “software-enforced access control” to “hardware-enforced confidential computing.” Our AES-256-CTR encrypted permission keys are strong. Fortanix’s FIPS 140-2 Level 3 HSMs are stronger. For regulated industries, that difference determines whether the CISO signs off.
For the joint customer: AI agents that are governed (WorkingAgents), operating on computations that are tamper-proof (Fortanix), running on infrastructure that is isolated (Rafay). No trust assumptions. No exposed data. No ungoverned agents. No unaudited actions.
Concrete Next Steps
- Key management integration — Store WorkingAgents’ access control encryption keys in Fortanix DSM instead of local environment variables. Estimate: 3-5 days, primarily Fortanix SDK integration for key storage/retrieval.
- Enclave deployment PoC — Run WorkingAgents’ core processes (permission checks, database operations) inside a Fortanix enclave on NVIDIA Confidential Computing GPUs. Validate that performance overhead is acceptable.
- Joint regulated-industry demo — Healthcare scenario: AI agent manages patient follow-ups through WorkingAgents’ NIS, all computation inside Fortanix enclave, complete HIPAA-compliant audit trail from agent action to hardware attestation.
- GTC 2026 meeting — Fortanix is at Booth #3117. Schedule a conversation about the integration path and joint go-to-market for regulated industries.
Fortanix makes computation trustless — you don’t have to trust your infrastructure because hardware enforces the isolation. WorkingAgents makes agents trustworthy — you don’t have to trust your AI because governance enforces the rules. Together, they answer the enterprise CISO’s two fundamental questions about agentic AI: “Can anyone tamper with the computation?” (No — Fortanix.) “Can the agent exceed its authority?” (No — WorkingAgents.)