Fortanix: Confidential Computing Meets Agent Governance

By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 7, 2026, 12:45


Fortanix makes AI workloads tamper-proof at the hardware level — encrypting data, models, and prompts even while they’re being processed. WorkingAgents makes AI agents accountable at the application level — enforcing permissions, logging actions, and controlling what agents can do. One protects the computation itself. The other governs who gets to compute and why. Enterprises deploying autonomous AI agents against sensitive data need both.

What Fortanix Does

Fortanix is the global leader in data-first cybersecurity and the pioneer of Confidential Computing. Their unified platform protects sensitive data, AI models, and applications across on-premises and multi-cloud environments — at rest, in transit, and critically, in use.

The platform runs AI workloads inside hardware-isolated trusted execution environments (enclaves), protecting against data leakage, model extraction, and unauthorized access — even from privileged insiders with root access to the host machine.

Core components:

Recognized in six 2025 Gartner Hype Cycle reports (Data Security, Digital Sovereignty, Privacy, Compute, Telco Cloud, Emerging Technologies). Partners include NVIDIA, HPE, NTT DATA, and Rafay Systems. At GTC 2026, Fortanix demonstrates Confidential AI at Booth #3117.

The problem they solve: traditional security protects data at rest and in transit, but the moment you decrypt data to process it — to train a model, run inference, or execute an agent workflow — it’s exposed in memory. Fortanix eliminates that window. Data stays encrypted even during computation.

What WorkingAgents Does

WorkingAgents is the governance and control layer between AI agents and enterprise systems. Three gateways, one control plane:

Per-user access control with AES-256-CTR encrypted permission keys, audit trails on every action, 86+ MCP tools, per-user SQLite databases. Agents inherit the user’s permissions — one identity, one set of rules, full accountability.
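The per-user model above can be sketched in a few lines. This is an illustrative stand-in, not WorkingAgents' actual schema or API: it assumes one SQLite database per user, a permission table the agent cannot bypass, and an audit row for every tool call, allowed or denied.

```python
import sqlite3
import time

def open_user_db(path: str) -> sqlite3.Connection:
    # One SQLite database per user (e.g. path = f"{user_id}.db") keeps
    # data and audit trails isolated per identity.
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS permissions (tool TEXT PRIMARY KEY)")
    db.execute(
        "CREATE TABLE IF NOT EXISTS audit_log (ts REAL, tool TEXT, allowed INTEGER)"
    )
    return db

def call_tool(db: sqlite3.Connection, tool: str) -> bool:
    # The agent inherits the user's permissions: a tool call is allowed
    # only if the tool appears in this user's permission table.
    allowed = db.execute(
        "SELECT 1 FROM permissions WHERE tool = ?", (tool,)
    ).fetchone() is not None
    # Audit trail on every action, allowed or denied.
    db.execute(
        "INSERT INTO audit_log VALUES (?, ?, ?)", (time.time(), tool, int(allowed))
    )
    db.commit()
    return allowed
```

The key property is that the audit write happens on every decision, so a denied call is just as visible to reviewers as a granted one.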

The Security Gap They Close Together

Fortanix answers: “Is the computation itself protected from tampering and observation?”

WorkingAgents answers: “Is the agent authorized to perform this computation in the first place?”

Both questions must be answered “yes” for a secure agentic AI deployment. Without Fortanix, a properly authorized agent could have its prompts and model weights exposed in memory by a compromised host. Without WorkingAgents, a hardware-secured enclave could be running an agent with excessive permissions, accessing data it shouldn’t touch, with no audit trail.

Security requires both: hardware-enforced isolation of the computation AND software-enforced governance of the actor.
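The two-gate rule reads naturally as code. In this sketch, both checks are hypothetical stand-ins for real WorkingAgents and Fortanix calls; the point is only the control flow, in which an action runs when, and only when, both gates say yes.

```python
def is_authorized(user_permissions: set, tool: str) -> bool:
    # Governance gate (the WorkingAgents question): is this actor
    # allowed to use this tool in the first place?
    return tool in user_permissions

def is_attested(report: dict) -> bool:
    # Hardware gate (the Fortanix question): did the enclave prove its
    # integrity? A real check verifies a signed hardware report; here a
    # flag stands in for that verification.
    return report.get("measurement_ok", False)

def run_action(user_permissions, tool, report, action):
    if not is_authorized(user_permissions, tool):
        raise PermissionError(f"agent not authorized for {tool}")
    if not is_attested(report):
        raise RuntimeError("execution environment failed attestation")
    return action()
```

Either failure aborts before the action executes, which is exactly the "both questions must be answered yes" requirement stated above.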

Synergy Areas

1. Confidential Agent Operations

WorkingAgents manages per-user encrypted permission keys (AES-256-CTR). Fortanix manages enterprise encryption keys in FIPS 140-2 Level 3 HSMs. The integration puts the first under the second: WorkingAgents’ per-user keys are stored in, and released from, Fortanix DSM rather than local configuration.
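A minimal sketch of the wrapping step, assuming the `cryptography` package: a per-user permission key is encrypted under a master key with AES-256-CTR. In the integrated design the master key would never leave Fortanix DSM, so these local wrap/unwrap calls would instead be DSM API calls; note also that CTR mode alone provides no integrity, so a production key-wrap scheme would add authentication.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def wrap_permission_key(master_key: bytes, permission_key: bytes):
    # AES-256-CTR requires a 32-byte master key and a fresh 16-byte nonce
    # per wrap; reusing a nonce under the same key breaks confidentiality.
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(master_key), modes.CTR(nonce)).encryptor()
    return nonce, enc.update(permission_key) + enc.finalize()

def unwrap_permission_key(master_key: bytes, nonce: bytes, wrapped: bytes):
    # CTR decryption is the same keystream XOR, parameterized by the
    # nonce stored alongside the wrapped key.
    dec = Cipher(algorithms.AES(master_key), modes.CTR(nonce)).decryptor()
    return dec.update(wrapped) + dec.finalize()
```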

2. Secure LLM Routing

WorkingAgents routes agent requests to multiple LLM providers. Each routing decision involves sending prompts — potentially containing sensitive enterprise data — to external or internal models. Fortanix protects this pipeline:

The LLM routing pipeline becomes: WorkingAgents checks agent permissions → Fortanix verifies execution environment integrity → Fortanix releases LLM API key from HSM → prompt is assembled inside enclave → encrypted in transit to LLM provider → response processed inside enclave → result delivered to agent within WorkingAgents’ permission boundary.
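The pipeline above can be written as one function. Every callable here is a hypothetical stand-in for a real WorkingAgents or Fortanix API; what the sketch shows is the ordering, in which the API key is released only after both the permission check and attestation succeed.

```python
def route_llm_request(check_permission, attest, release_key, complete, prompt_parts):
    # 1. Governance gate: WorkingAgents checks agent permissions.
    if not check_permission():
        raise PermissionError("agent not authorized for this LLM call")
    # 2. Hardware gate: Fortanix verifies execution environment integrity.
    if not attest():
        raise RuntimeError("execution environment failed attestation")
    # 3. Key release: the HSM hands out the LLM API key only after
    #    successful attestation.
    api_key = release_key()
    # 4. Prompt assembly and response handling happen inside the enclave
    #    in the real design, with TLS to the provider; here they are
    #    ordinary local calls.
    prompt = "".join(prompt_parts)
    return complete(prompt, api_key)
```

Because key release sits behind both gates, a compromised host that fails attestation never sees the provider credential at all.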

3. Zero-Trust Agentic AI Architecture

Fortanix explicitly targets “secure and trusted agentic AI” on NVIDIA Confidential Computing GPUs. WorkingAgents provides the agent governance layer. Together, they deliver zero-trust for AI agents:

  Never trust, always verify: Fortanix performs composite attestation of the execution environment; WorkingAgents runs a permission check on every tool call.
  Least privilege: Fortanix gates key release through the HSM; WorkingAgents issues per-user, per-tool permission keys.
  Assume breach: Fortanix keeps data encrypted in use through enclave isolation; WorkingAgents records an audit trail for every action.
  Verify explicitly: Fortanix requires hardware attestation before any key release; WorkingAgents checks access control at every API boundary.
  Micro-segmentation: Fortanix isolates at the process/enclave level; WorkingAgents isolates each user's database.

Neither product alone delivers zero-trust for AI agents. Fortanix without WorkingAgents has hardware isolation but no agent governance. WorkingAgents without Fortanix has agent governance but no protection against privileged insiders reading memory. Together: the agent is governed AND the computation is tamper-proof.

4. Regulated Industry Enablement

Healthcare, finance, and government — the industries with the strictest compliance requirements — are also the industries most cautious about deploying AI agents. Both products directly address their concerns:

Healthcare:

Finance:

Government/Defense:

The pitch to regulated industries: “Your AI agents can’t see data they’re not authorized for (WorkingAgents), and even authorized computations can’t be observed by infrastructure operators (Fortanix).”

5. Insider Threat Elimination

Fortanix’s defining capability is protecting against privileged insiders. A system administrator with root access cannot read data inside a Fortanix enclave. WorkingAgents complements this:

For enterprises deploying AI agents on third-party infrastructure (cloud, managed services, GPU clouds), this is the difference between trust-based and trustless security. You don’t have to trust your infrastructure provider. The hardware enforces it.

6. The Rafay-Fortanix-WorkingAgents Stack

Fortanix already partners with Rafay Systems for simplified deployment of confidential AI. WorkingAgents has complementary synergy with Rafay for infrastructure orchestration. The three-product stack:

A sovereign AI cloud built on all three: isolated at the network layer (Rafay), encrypted at the compute layer (Fortanix), governed at the agent layer (WorkingAgents). Each layer independently auditable. Each layer independently enforced in hardware or software. Complete stack sovereignty.

The Partnership Opportunity

For Fortanix: WorkingAgents provides the agent governance layer their confidential AI platform needs. Fortanix secures the computation, but doesn’t govern what agents do — which tools they call, what data they access, who gets notified. WorkingAgents closes that gap. Every Fortanix customer deploying agentic AI needs agent governance.

For WorkingAgents: Fortanix elevates our security story from “software-enforced access control” to “hardware-enforced confidential computing.” Our AES-256-CTR encrypted permission keys are strong. Fortanix’s FIPS 140-2 Level 3 HSMs are stronger. For regulated industries, that difference determines whether the CISO signs off.

For the joint customer: AI agents that are governed (WorkingAgents), operating on computations that are tamper-proof (Fortanix), running on infrastructure that is isolated (Rafay). No trust assumptions. No exposed data. No ungoverned agents. No unaudited actions.

Concrete Next Steps

  1. Key management integration — Store WorkingAgents’ access control encryption keys in Fortanix DSM instead of local environment variables. Estimate: 3-5 days, primarily Fortanix SDK integration for key storage/retrieval.
  2. Enclave deployment PoC — Run WorkingAgents’ core processes (permission checks, database operations) inside a Fortanix enclave on NVIDIA Confidential Computing GPUs. Validate that performance overhead is acceptable.
  3. Joint regulated-industry demo — Healthcare scenario: AI agent manages patient follow-ups through WorkingAgents’ NIS, all computation inside Fortanix enclave, complete HIPAA-compliant audit trail from agent action to hardware attestation.
  4. GTC 2026 meeting — Fortanix is at Booth #3117. Schedule a conversation about the integration path and joint go-to-market for regulated industries.
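Step 1 above is small enough to sketch. The `dsm_client` object and its `get_secret` method are hypothetical stand-ins for the Fortanix DSM SDK, whose actual interface should be taken from the vendor documentation; the shape of the change is simply swapping an environment-variable read for a key-management call, with the old path kept as a fallback during migration.

```python
import os

def load_permission_master_key(dsm_client=None, name="wa-permission-key"):
    # Preferred path: fetch the key from the key-management service so
    # the material stays under HSM control (hypothetical client API).
    if dsm_client is not None:
        return dsm_client.get_secret(name)
    # Legacy fallback: hex-encoded key in a local environment variable.
    key_hex = os.environ.get("WA_PERMISSION_KEY")
    if key_hex is None:
        raise RuntimeError("no key source configured")
    return bytes.fromhex(key_hex)
```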

Fortanix makes computation trustless — you don’t have to trust your infrastructure because hardware enforces the isolation. WorkingAgents makes agents trustworthy — you don’t have to trust your AI because governance enforces the rules. Together, they answer the enterprise CISO’s two fundamental questions about agentic AI: “Can anyone tamper with the computation?” (No — Fortanix.) “Can the agent exceed its authority?” (No — WorkingAgents.)