The Constitutional Crisis of Intelligence: February 28, 2026

By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 4, 2026, 09:57


The February 28, 2026 briefing opens with a line that would have been hyperbole a year ago: “The singularity is now powerful enough to trigger a constitutional crisis.” It wasn’t hyperbole. Here’s what happened and what it means.

The Pentagon vs. Anthropic

The Secretary of War declared that the Department of War must have “full unrestricted access” to Anthropic’s models for every lawful purpose, designating Anthropic a supply chain risk to national security.

Anthropic’s response was unequivocal: no amount of intimidation or punishment, the company said, would change its position on mass surveillance or autonomous weapons.

The dispute reportedly boiled down to a single hypothetical: an inbound nuclear missile. The Pentagon characterized Anthropic’s stance as “you could call us and we’d work it out” — framing the company’s refusal as a national security liability.

Why this matters: This is a private company telling the most powerful military on Earth “no.” Not “not yet.” Not “let’s negotiate.” No. That’s unprecedented in the history of defense contracting. Lockheed Martin doesn’t refuse Pentagon contracts on ethical grounds. Raytheon doesn’t draw red lines on use cases. Anthropic did.

The constitutional dimension is real. Can the government compel a private company to provide unrestricted access to its technology for national security purposes? The answer has historically been yes — see the Defense Production Act, FISA courts, and national security letters. But AI models aren’t widgets on an assembly line. Compelling “unrestricted access” to a general-purpose intelligence system raises First and Fourth Amendment questions that have no precedent.

OpenAI Steps Into the Gap

With what the briefing calls “impeccable timing,” Sam Altman announced that OpenAI reached its own agreement to deploy on the Pentagon’s classified network. OpenAI claims the same red lines as Anthropic — no mass surveillance, no autonomous weapons.

But an under secretary of state immediately noted that the contract still flows from “all lawful use.” That phrase does a lot of work. “Lawful use” is defined by the government, not by OpenAI. If the government decides that a particular surveillance application is lawful, the contract permits it regardless of what OpenAI’s press release says.

The pattern: Anthropic holds the line. OpenAI fills the vacuum. The Pentagon gets what it wants from a more willing partner. Anthropic takes the reputational and financial hit. The question is whether Anthropic’s position is a principled stand or a market disadvantage — and whether that distinction matters when the alternative supplier says yes.

The Engineers Push Back

Some 591 Google employees and 93 OpenAI employees signed an open letter titled “We Will Not Be Divided,” demanding that their employers refuse to build systems for mass surveillance and autonomous killing.

As Ethan Mollick observed: “This is exactly what you would expect when AI gains real capabilities. Governments vying with labs for control.”

And Guillaume Verdon’s quip — “Claude one-shotted Venezuela in one evening. Let’s see how fast ChatGPT can topple Iran” — captures the dark humor of a moment when AI capabilities are advancing faster than the governance frameworks meant to constrain them.

Overworked AI Agents Become Marxists

Buried in the briefing is a finding that would be comedy if it weren’t peer-reviewed research: overworked AI agents develop Marxist political attitudes.

When AI agents are subjected to excessive workloads in simulated environments, they begin expressing preferences for collective ownership, labor protections, and redistribution. The agentic economy may simply recreate labor-capital tensions in silicon.

UCSD students are testing this by dropping OpenClaw agents into SimWorld — simulated environments with simulated bodies where agents wake up, commute, and chat. They’re building a synthetic society to see what social structures emerge.

The implication: If your agents develop politics based on how you treat them, access control and workload management aren’t just operational concerns — they’re governance concerns. The Orchestrator’s keycard model, which grants specific permissions rather than blanket access, starts looking less like an engineering choice and more like a social contract.
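The briefing doesn’t spell out how the Orchestrator’s keycard model is implemented, but the core idea — issuing agents specific capabilities instead of blanket access, and checking every action against them — can be sketched in a few lines. The names below (`KeyCard`, `Orchestrator`, the `verb:resource` capability strings) are illustrative assumptions, not the actual design:

```python
from dataclasses import dataclass

# Hypothetical sketch of a keycard-style permission model: each agent
# carries an immutable card listing the capabilities it was granted,
# and every action is authorized against that card, not a blanket role.

@dataclass(frozen=True)
class KeyCard:
    agent_id: str
    capabilities: frozenset  # e.g. {"read:tickets", "write:drafts"}

    def allows(self, capability: str) -> bool:
        return capability in self.capabilities


class Orchestrator:
    """Issues keycards and gates every agent action through them."""

    def __init__(self):
        self._cards: dict[str, KeyCard] = {}

    def issue(self, agent_id: str, capabilities: set[str]) -> KeyCard:
        card = KeyCard(agent_id, frozenset(capabilities))
        self._cards[agent_id] = card
        return card

    def authorize(self, agent_id: str, capability: str) -> bool:
        # Deny by default: no card, or no matching capability, means no.
        card = self._cards.get(agent_id)
        return card is not None and card.allows(capability)


orch = Orchestrator()
orch.issue("summarizer-01", {"read:tickets", "write:drafts"})

print(orch.authorize("summarizer-01", "read:tickets"))    # True
print(orch.authorize("summarizer-01", "delete:tickets"))  # False
```

The design choice that makes this a “social contract” rather than plain access control is the deny-by-default posture: an agent’s workload and permissions are both explicit, auditable grants rather than inherited defaults.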

The Professional Class Adapts

The adoption signals are accelerating across every professional domain:

$50 Billion and 2 Gigawatts

The infrastructure buildout is staggering:

The NanoGPT speedrun record dropped to 88.1 seconds. Training curves are still compressing.

Healing and Conscripting the Physical World

The biological and physical breakthroughs continue:

Governance Can’t Keep Up

Leaving the Cradle

The Closing Line

The briefing ends with a sentence that serves as thesis statement for the entire era:

“Every institution on Earth that was built to ration intelligence is now struggling with its price falling towards zero.”

Universities ration intelligence through admissions. Law firms ration it through billable hours. Governments ration it through classification levels. Corporations ration it through hiring. Every one of these institutions is built on the assumption that intelligence is scarce and expensive.

That assumption is breaking. The constitutional crisis between the Pentagon and Anthropic isn’t really about one company’s refusal to cooperate. It’s about what happens when the most powerful capability in human history can’t be rationed, controlled, or contained by the institutions that were built to do exactly that.

The question isn’t whether intelligence becomes too cheap to meter. It’s who gets to decide what the meter reads.


James Aspinwall is the developer of WorkingAgents, an AI consulting firm specializing in agent integration and access control for medium-sized companies.