By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 4, 2026, 11:34
The February 27 briefing opens with a gut punch dressed as an earnings report: Block just fired half its workforce, the market cheered, and the new efficiency target is $2 million in gross profit per employee. The rest of the briefing follows the same pattern — intelligence compressing, infrastructure doubling, and every institution built for the old world scrambling to stay relevant. Here’s the breakdown.
Block Cuts 4,000 — Market Says Thank You
Block cut over 4,000 employees — roughly half its workforce — to “move faster with smaller teams using AI.” The market rewarded the decision with a 24% after-hours spike. The company is now targeting $2 million+ gross profit per person, four times its pre-COVID efficiency.
This isn’t a struggling company shedding dead weight. This is a profitable company deciding that half its humans are less productive than the AI systems replacing them. And the market didn’t just tolerate it — it celebrated.
The broader picture: Components of the State Street software ETF have lost a combined $1.6 trillion in market cap this year as investors reprice legacy SaaS against AI-native replacements. The creative destruction isn’t limited to one company or sector. The entire software industry built on selling seats and subscriptions is being repriced against systems that don’t need seats.
But where old software withers, new intelligence gets hired. Norway’s $2 trillion sovereign wealth fund now uses Claude to screen investments for reputational and ethical risk. They’ve outsourced moral judgment to the machine at sovereign scale. When the world’s largest sovereign wealth fund trusts AI to evaluate ethics, the question of who programs the values becomes a matter of global finance.
Cognition Compresses on Every Axis
The technical advances are stacking faster than anyone can absorb them:
- Self-distilled multi-token predictors decode 3x faster with under 5% accuracy loss. Foundation models are learning to compress themselves.
- Sakana demonstrated compiling documents directly into model weights via hypernetworks — giving language models durable memory without bloating context windows. Instead of feeding a model a document every time, you bake the document into the model itself.
- LM Provers released QED Nano, a compact 4B-parameter model that writes Olympiad-level math proofs approaching frontier performance. Four billion parameters. Olympiad math. On a model you can run locally.
- Google’s Nano Banana 2 image model fuses pro-level reasoning with flash speed, collapsing the quality-latency trade-off into a single release.
What this means in practice: The gap between frontier capability and what runs on your hardware is closing rapidly. Last year, you needed a datacenter for Olympiad math. This year, 4 billion parameters. Next year, your phone.
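The speedup arithmetic behind multi-token decoding is simple: if a model emits k tokens per forward pass instead of one, it needs roughly k times fewer passes. A toy sketch (hypothetical toy rule, not the actual self-distillation method — the real systems learn the multi-token head from the model's own outputs):

```python
# Toy illustration of multi-token decoding. A standard decoder emits one
# token per forward pass; a k-token head emits k, cutting passes ~k-fold.
# The "models" here are trivial stand-ins, not trained networks.

def single_token_model(context):
    """Toy next-token rule: next token = last token + 1."""
    return context[-1] + 1

def multi_token_model(context, k=3):
    """Toy k-token head: predicts the next k tokens in one pass."""
    last = context[-1]
    return [last + i for i in range(1, k + 1)]

def decode(model, prompt, n_tokens, k=1):
    """Generate n_tokens, counting forward passes as a cost proxy."""
    context, passes = list(prompt), 0
    while len(context) - len(prompt) < n_tokens:
        passes += 1
        out = model(context) if k == 1 else model(context, k)
        context.extend(out if isinstance(out, list) else [out])
    return context[len(prompt):][:n_tokens], passes

baseline, p1 = decode(single_token_model, [0], 12)
fast, p3 = decode(multi_token_model, [0], 12, k=3)
print(p1, p3)  # 12 forward passes vs 4: the ~3x decode speedup
assert baseline == fast  # same output, a third of the passes
```

In the real systems, the drafted tokens are verified and occasionally rejected, which is where the "under 5% accuracy loss" trade-off lives.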
The Physical Plant Doubles Again
The infrastructure buildout continues at a pace that makes prior tech booms look modest:
- Eli Lilly and Nvidia launched Lily Pod, the world’s first DGX SuperPod with B300 systems — 1,600 Blackwell Ultra GPUs and over 9,000 petaflops dedicated to drug discovery. A pharmaceutical company now has its own supercomputer.
- CoreWeave’s Q4 revenue grew 110% year-over-year.
- Dell expects AI server revenue to double in fiscal 2027.
- Meta reportedly signed a multi-billion dollar deal to rent Google’s TPUs, diversifying away from Nvidia. When Meta hedges its silicon supply chain, it signals that Nvidia dependency is now a strategic risk.
- Japan’s Rapidus secured $1.7 billion to reach 2-nanometer mass production by 2028.
Meanwhile, the device that defined the prior era is fading. Smartphone shipments are expected to drop 12.9% to a decade low as AI demand drives memory prices up and squeezes consumer hardware. The generational handoff from the pocket rectangle to the data center is underway. Your phone mattered when intelligence lived on it. When intelligence lives in the cloud, the phone becomes a dumb terminal.
The Agents Clock In
AI agents aren’t experimental anymore. They’re getting work calendars:
- Anthropic introduced scheduled tasks in Claude Co-Work — recurring jobs from morning briefs to Friday presentations, running automatically. The AI has a work calendar before most interns earn one.
- Amplifying is pointing Claude Code at thousands of GitHub repos to extract what the model considers current best practices. AI is auditing the craft it’s absorbing — learning not just how to code, but what good code looks like across the entire open-source ecosystem.
- Burger King is deploying Patty, a headset-mounted voice AI that assists with meal prep and scores employees on friendliness. AI is now the shift supervisor.
- At a Gap store in San Francisco, World ID orbs scan shoppers’ faces to verify humanness. The retail iris scan from Minority Report has arrived 28 years ahead of schedule.
The access control angle: Every one of these deployments raises the same question — what can the agent access? Patty scores employees, but who sees the scores? Claude runs scheduled tasks, but with whose permissions? The Gap orb verifies identity, but where does the biometric data go? The agents are clocking in, and most organizations have no framework for controlling what they can do once they’re on the clock.
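The minimum viable framework is deny-by-default: every agent gets an explicit allowlist of tools and data scopes, and every decision gets logged. A sketch of that shape (hypothetical policy structure, not any vendor's API — names like `AgentPolicy` and `authorize` are illustrative):

```python
# Deny-by-default access control for an AI agent: explicit allowlists,
# audited decisions. Hypothetical sketch, not a production system.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """What one agent may call, and on which data scopes."""
    agent: str
    tools: frozenset
    scopes: frozenset

audit_log = []

def authorize(policy, tool, scope):
    """Allow only if both tool and scope are explicitly granted; log everything."""
    ok = tool in policy.tools and scope in policy.scopes
    audit_log.append((policy.agent, tool, scope, "allow" if ok else "deny"))
    return ok

# A scheduled morning-brief agent: calendar access for one team, nothing else.
briefing_bot = AgentPolicy(
    agent="morning-brief",
    tools=frozenset({"read_calendar", "send_summary"}),
    scopes=frozenset({"team:sales"}),
)

print(authorize(briefing_bot, "read_calendar", "team:sales"))  # True: granted
print(authorize(briefing_bot, "read_payroll", "team:sales"))   # False: tool not granted
print(authorize(briefing_bot, "read_calendar", "team:hr"))     # False: scope not granted
```

The point isn't the twenty lines of code; it's that most organizations deploying agents today have no equivalent of this table, which means the effective policy is "whatever the agent's credentials happen to reach."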
Anthropic vs. The Department of War
Anthropic publicly refused to let its models power mass surveillance or autonomous weapons for the Department of War. This is the prelude to the full constitutional crisis that erupted the next day (covered in our February 28 article).
Under Secretary of War Emil Michael attacked Claude’s constitution for “requiring sensitivity to non-Western traditions” — previewing how system prompts may become the next regulatory battleground. The values baked into a frontier AI model are no longer an engineering decision. They’re a geopolitical one.
When a government official publicly objects to a model’s system prompt, we’ve crossed a line. The fight over AI alignment isn’t theoretical anymore. It’s about which country’s values get encoded into the systems that increasingly make decisions at sovereign scale.
Robots at the Bedside, Lasers in the Sky
At Changzhou First People’s Hospital, two Agibot A2 humanoids named Xianzhen and Ruru greet patients with handshakes and handle registration and navigation. The bedside manner is now robotic — literally.
The kinetic layer is less polite. The FAA barred flights over Fort Hancock, Texas after a military laser anti-drone system accidentally downed a US government drone. It's the second time in recent months that laser weapons have lit up Texas skies. Iron Beam intercepting rockets at $4 a shot is one thing. Friendly fire in domestic airspace is another.
Above the atmosphere, Starship V3 is headed for ground tests with Elon Musk "highly confident" in full reusability. Rocket Lab is introducing silicon solar arrays for gigawatt-scale orbital data centers — one more step toward the Dyson swarm discussed in our GPU Diplomacy article.
Mapping Aging, Resurrecting the Past
Rockefeller researchers published the first chromatin accessibility aging atlas across 21 mouse tissues, finding that immune cells diverge most dramatically with age. We’re reading the aging process at single-cell resolution — a prerequisite to eventually editing it.
In China, AI is turning famous historical landscape paintings into immersive ancestor simulations — a digital down payment on Nikolai Fedorov’s “Common Task,” the 19th-century Russian philosophy that humanity’s moral obligation is to resurrect all who have ever lived. It was fringe philosophy. AI is making it a product roadmap.
The Pentagon’s Other Secret
Parts of the Pentagon are reportedly resisting full UAP declassification, with officials fearing that “demonic implications” could trigger public panic or religious upheaval. Whether you take this at face value or read it as bureaucratic resistance to transparency, the pattern is the same: institutions built to control information are losing their grip.
The Closing Line
The briefing ends with a sentence that cuts both ways:
“We’re snapping half the workforce for the intelligence we have built, while bureaucrats hide any intelligence we may have found.”
Block fires 4,000 people because AI is more efficient. The Pentagon classifies evidence of non-human intelligence because the public isn’t ready. In both cases, the institution decides what people are allowed to know and do. The difference is that the market is forcing transparency on one side — Block’s layoffs are public, priced in, rewarded — while the other side remains opaque by design.
Every institution built to ration intelligence is struggling with its price falling toward zero. Some are adapting. Some are hiding. None are in control.
James Aspinwall is the developer of WorkingAgents, an AI consulting firm specializing in agent integration and access control for medium-size companies.