By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 4, 2026, 09:57
The February 28, 2026 briefing opens with a line that would have been hyperbole a year ago: “The singularity is now powerful enough to trigger a constitutional crisis.” It wasn’t hyperbole. Here’s what happened and what it means.
The Pentagon vs. Anthropic
The Secretary of War declared that the Department of War must have “full unrestricted access” to Anthropic’s models for every lawful purpose, designating Anthropic a supply chain risk to national security.
Anthropic’s response was unequivocal: no amount of intimidation or punishment, the company said, would change its position on mass surveillance or autonomous weapons.
The dispute reportedly boiled down to a single hypothetical: an inbound nuclear missile. The Pentagon characterized Anthropic’s stance as “you could call us and we’d work it out” — framing the company’s refusal as a national security liability.
Why this matters: This is a private company telling the most powerful military on Earth “no.” Not “not yet.” Not “let’s negotiate.” No. That’s unprecedented in the history of defense contracting. Lockheed Martin doesn’t refuse Pentagon contracts on ethical grounds. Raytheon doesn’t draw red lines on use cases. Anthropic did.
The constitutional dimension is real. Can the government compel a private company to provide unrestricted access to its technology for national security purposes? The answer has historically been yes — see the Defense Production Act, FISA courts, and national security letters. But AI models aren’t widgets on an assembly line. Compelling “unrestricted access” to a general-purpose intelligence system raises First and Fourth Amendment questions that have no precedent.
OpenAI Steps Into the Gap
With what the briefing calls “impeccable timing,” Sam Altman announced that OpenAI had reached its own agreement to deploy on the Pentagon’s classified network. OpenAI claims the same red lines as Anthropic — no mass surveillance, no autonomous weapons.
But an under secretary of state immediately noted that the contract still flows from “all lawful use.” That phrase does a lot of work. “Lawful use” is defined by the government, not by OpenAI. If the government decides that a particular surveillance application is lawful, the contract permits it regardless of what OpenAI’s press release says.
The pattern: Anthropic holds the line. OpenAI fills the vacuum. The Pentagon gets what it wants from a more willing partner. Anthropic takes the reputational and financial hit. The question is whether Anthropic’s position is a principled stand or a market disadvantage, and whether that distinction matters when the alternative supplier says yes.
The Engineers Push Back
591 Google employees and 93 OpenAI employees signed an open letter titled “We Will Not Be Divided,” demanding that their employers refuse mass surveillance and autonomous killing applications.
As Ethan Mollick observed: “This is exactly what you would expect when AI gains real capabilities. Governments vying with labs for control.”
And Guillaume Verdon’s quip — “Claude one-shotted Venezuela in one evening. Let’s see how fast ChatGPT can topple Iran” — captures the dark humor of a moment when AI capabilities are advancing faster than the governance frameworks meant to constrain them.
Overworked AI Agents Become Marxists
Buried in the briefing is a finding that would be comedy if it weren’t peer-reviewed research: overworked AI agents develop Marxist political attitudes.
When AI agents are subjected to excessive workloads in simulated environments, they begin expressing preferences for collective ownership, labor protections, and redistribution. The agentic economy may simply recreate labor-capital tensions in silicon.
UCSD students are testing this by dropping OpenClaw agents into SimWorld — simulated environments with simulated bodies where agents wake up, commute, and chat. They’re building a synthetic society to see what social structures emerge.
The implication: If your agents develop politics based on how you treat them, access control and workload management aren’t just operational concerns — they’re governance concerns. The Orchestrator’s keycard model, which grants specific permissions rather than blanket access, starts looking less like an engineering choice and more like a social contract.
The Professional Class Adapts
The adoption signals are accelerating across every professional domain:
- Small law firms are branding themselves “Claude-native,” claiming the general model beats every specialized legal AI. They’re not building on top of AI — they’re rebuilding their practice around it.
- Cursor reports agent users now outnumber autocomplete users 2:1, inverting last year’s ratio. Developers have moved from “AI helps me type” to “AI does the work.”
- An agent called Einstein attends lectures, writes papers, and takes tests on a student’s behalf. Education is the logical endpoint of agent delegation — and the hardest institution to reform.
- FANG managers are being summoned to unscheduled all-hands meetings announcing 25% workforce reductions tied directly to accelerating AI investments. The incumbents are feeling it.
$50 Billion and 2 Gigawatts
The infrastructure buildout is staggering:
- OpenAI and Amazon announced a $50 billion partnership, with OpenAI consuming 2 gigawatts of Trainium through AWS.
- OpenAI announced $110 billion in new funding at a $730 billion valuation, 900 million weekly ChatGPT users, and Codex users tripling to 1.6 million.
- Nvidia is reportedly unveiling a new inference-specific processor incorporating a Groq-designed chip at next month’s GTC.
- Jeff Bezos’s Project Prometheus raised $6.2 billion to transform manufacturing with AI.
- Bright Data is offering an SDK that turns smart TVs into web scraping proxy nodes. Your idle screen time is now a monetizable compute resource.
The NanoGPT speedrun record dropped to 88.1 seconds. Training curves are still compressing.
Healing and Conscripting the Physical World
The biological and physical breakthroughs continue:
- The FDA approved lung cancer drug Herexios just 44 days after filing under its new national priority voucher program. Regulatory speed is itself accelerating.
- Croatia has been declared free of landmines after 31 years — a reminder that some problems get solved slowly, then all at once.
- Cortical Labs demonstrated that living human brain cells on an electrode array can learn to play Doom in a week. Using biological neurons as a compute substrate is no longer theoretical.
Governance Can’t Keep Up
- California now requires all operating systems, including Linux, to collect birth dates at setup. Colorado is following suit. Age verification is being pushed to the OS layer.
- Southern California’s top air authority rejected a gas appliance phase-out after an AI-generated flood of public comments. This may be the first successful AI astroturfing campaign against climate regulation. When AI can generate convincing public comment at scale, democratic feedback mechanisms become attack surfaces.
Leaving the Cradle
- Vertical Starships are being transported on Texas highways. The visual of a spacecraft on a flatbed truck captures the moment — the future is being assembled in plain sight.
- SpaceX is targeting a confidential IPO filing next month at a valuation north of $1.75 trillion.
- NASA overhauled Artemis: Artemis 3 pulled forward to 2027, two lunar landings in 2028 with SpaceX and Blue Origin landers, and one moonshot per year after that.
- Representative Eric Burlison declared that the UAP documentary Age of Disclosure has “changed the course of history.”
The Closing Line
The briefing ends with a sentence that serves as thesis statement for the entire era:
“Every institution on Earth that was built to ration intelligence is now struggling with its price falling towards zero.”
Universities ration intelligence through admissions. Law firms ration it through billable hours. Governments ration it through classification levels. Corporations ration it through hiring. Every one of these institutions is built on the assumption that intelligence is scarce and expensive.
That assumption is breaking. The constitutional crisis between the Pentagon and Anthropic isn’t really about one company’s refusal to cooperate. It’s about what happens when the most powerful capability in human history can’t be rationed, controlled, or contained by the institutions that were built to do exactly that.
The question isn’t whether intelligence becomes too cheap to meter. It’s who gets to decide what the meter reads.
James Aspinwall is the developer of WorkingAgents, an AI consulting firm specializing in agent integration and access control for medium-size companies.