Anthropic Blocks OpenClaw, OpenAI Steps In

By James Aspinwall

Watch the full episode on YouTube


Matthew Berman breaks down one of the more consequential missteps in the AI platform wars: how Anthropic shut down OpenClaw’s access to Claude subscription tokens, alienated a loyal builder community, and handed OpenAI a gift-wrapped opportunity to win them over.

What OpenClaw Is

OpenClaw is a local AI assistant that runs on your machine and connects to frontier models — Claude, GPT-5.x, Gemini, and others. It’s deeply personal, highly configurable, and designed for power users who want an AI that knows their workflow, their files, and their preferences.

The project started life as “Claudebot” (CLAWD), was briefly renamed “Maltbot,” and finally became “OpenClaw” after Anthropic objected that the original name was too close to “Claude.” That rename was the first sign of friction. It wouldn’t be the last.

Anthropic Changes the Rules

Anthropic updated its documentation to explicitly state that OAuth tokens from Claude Free, Pro, and Max subscriptions can only be used with Claude Code and Claude AI. They cannot be used in any other product, tool, or service — including Anthropic’s own Agent SDK. Using them elsewhere is now a consumer Terms of Service violation.

This effectively banned OpenClaw from running on Claude subscriptions. The irony: many users had upgraded to higher-tier Claude Max plans specifically because OpenClaw made Claude dramatically more useful. Anthropic was earning more revenue because of OpenClaw, and then cut off the mechanism that drove that revenue.

The Cost Problem

Berman demonstrates why this matters with hard numbers. Running Claude Opus 4.6 through the API with OpenClaw consumes roughly 50,000 tokens for a simple interaction. A minimal “hello” request costs about 25 cents. At large context windows — the kind power users actually need — costs climb fast. API-only usage is unsustainably expensive for the kind of continuous, always-on assistant that OpenClaw is designed to be.

Subscription tokens are approximately 90% cheaper than API pricing. Anthropic’s move protects that margin, but it does so by punishing the exact users who were most engaged with the platform.
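The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model built only from the figures in this article (roughly 50,000 tokens per simple interaction, about $0.25 for a minimal request, subscription tokens ~90% cheaper); the interaction count and all constants are illustrative assumptions, not published price sheets.

```python
# Rough cost model for an always-on assistant, using the article's figures.
# All constants are assumptions for illustration, not official pricing.

TOKENS_PER_INTERACTION = 50_000       # approximate context load per OpenClaw request
COST_PER_MINIMAL_REQUEST = 0.25       # USD via the API, per the article
SUBSCRIPTION_DISCOUNT = 0.90          # subscription tokens ~90% cheaper than API

def daily_cost(interactions_per_day: int, via_subscription: bool = False) -> float:
    """Estimated daily spend, optionally at the subscription rate."""
    per_request = COST_PER_MINIMAL_REQUEST
    if via_subscription:
        per_request *= (1 - SUBSCRIPTION_DISCOUNT)
    return interactions_per_day * per_request

# A hypothetical always-on assistant firing 200 interactions a day:
api_cost = daily_cost(200)
sub_cost = daily_cost(200, via_subscription=True)
print(f"API: ${api_cost:.2f}/day  Subscription: ${sub_cost:.2f}/day")
```

At 200 interactions a day the gap is an order of magnitude, which is why flat-rate subscriptions were the only economically workable path for continuous OpenClaw use.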

The Timeline That Tells the Story

The sequence of events is what makes this more than a policy dispute:

  1. Claudebot goes viral, driving Claude subscriptions
  2. Anthropic forces a rename to OpenClaw
  3. OpenClaw’s creator travels to San Francisco and meets with major AI labs
  4. OpenAI acquires the creator
  5. Anthropic blocks OpenClaw’s use of subscription OAuth tokens
  6. The same day, OpenAI explicitly allows using its subscriptions with OpenClaw via CodeX

Berman calls it one of the biggest “bag fumbles” for Anthropic and “bag pickups” for OpenAI. It’s hard to argue with that framing. Anthropic created a vacuum and OpenAI filled it within hours.

The New Stack

Berman has largely migrated his OpenClaw setup to OpenAI models.

Claude went from being the center of his workflow to a fallback option. Not because the models got worse, but because the platform made itself hostile to the way he wanted to use them.

What This Actually Means

Berman is careful to say he respects Anthropic’s need to manage costs. Subscription arbitrage is a real business problem — if everyone routes heavy API workloads through flat-rate subscriptions, the economics don’t work.

But there’s a difference between managing costs and alienating your most passionate users. The builder and tinkerer community that adopted OpenClaw wasn’t exploiting a loophole. They were paying customers who found a way to get dramatically more value from Claude, and they were evangelizing the platform in the process. Many upgraded specifically because of OpenClaw.

The way the change was handled — without warning, without a transition path, without acknowledging the community that had been driving subscription growth — left a sour taste. Platform trust, once broken, is expensive to rebuild. And in a market where GPT-5.x, Gemini, and open-source models are all viable alternatives, users who feel burned don’t have to come back.

OpenAI read the room. Anthropic didn’t. The models are close enough in capability that the platform experience is becoming the differentiator. And right now, the platform experience for builders is better on the side that said “yes.”