Emad Mostaque: The Man Who Gave AI to the World and Then Walked Away From His Company to Save It

Emad Mostaque is a British-Bangladeshi mathematician and former hedge fund manager who co-founded Stability AI, released Stable Diffusion to the world for free, built one of the most influential AI companies of the 2020s, and then resigned as CEO in March 2024 – not because the company failed, but because he concluded that a centralized AI company, even a good one, was the wrong answer to the problem he was trying to solve.

That decision, and what he did next, says more about his thinking than anything that came before it.

Stable Diffusion and What It Changed

Before Stability AI, high-quality AI image generation was locked behind corporate APIs. DALL-E required an OpenAI account and came with rate limits. Midjourney was a Discord server with a waitlist. The capability existed, but the public couldn’t touch it.

Mostaque’s bet was that releasing the model weights openly – letting anyone download and run Stable Diffusion on their own hardware – would produce more value, more safety research, and more innovation than keeping it proprietary. He was right. Within weeks of the release in 2022, an entire ecosystem erupted: fine-tunes, ControlNet, Automatic1111, custom UIs, artist communities, research papers. The model was in the hands of people OpenAI and Google had never thought to reach.

Stable Diffusion remains one of the most significant open-source releases in AI history. It permanently altered the economics of image generation and established that foundation model weights could be open without destroying the company that released them.

The Exit

By early 2024, Stability AI was under pressure. Revenue had reportedly reached roughly $5.4 million monthly, but the company’s burn rate was far higher and investors were restless. Key researchers had departed. There were reports of internal conflict and questions about Mostaque’s management style.

On March 23, 2024, he resigned as CEO and board member. His explanation was characteristically blunt: “You’re not going to beat centralized AI with more centralized AI.”

He wasn’t saying Stability AI was bad. He was saying the structure was wrong. A venture-backed company with investors demanding returns will eventually make the same decisions every other venture-backed company makes – optimize for revenue, constrain access, gate the good stuff. Even with the best intentions, the incentives point the same direction. He had concluded that working within that structure was working against the goal.

Intelligent Internet

What Mostaque built after Stability AI is harder to describe, which is probably intentional.

He started with Schelling AI, a research initiative focused on decentralized AI infrastructure. By late 2024 it had evolved into Intelligent Internet (II), a project organized around a specific provocation: what if every person, company, and country had their own AI models, and those models could interact and coordinate without any central authority controlling the infrastructure?

The Intelligent Internet whitepaper describes a “Third Path” – not the closed corporate model of OpenAI, Google, and Anthropic, and not the naive open-source model of just releasing weights and hoping for the best. The third path is sovereign AI: models that are open, verifiable, auditable, and owned by the people using them, coordinated by a protocol rather than a platform.

The four-part Master Plan published in 2025 outlines how this works in practice: a “Proof of Benefit” economic layer, a framework for sovereign AI agents, a coordination layer called Common Ground, and an open data foundation. The project is less a product than an infrastructure proposal – closer to what the internet itself was in 1993 than to what any current AI company is building.

He also published a book, The Last Economy, arguing that machine intelligence will fundamentally reshape the value of human cognitive labor and that the economic frameworks built around scarcity of intelligence are about to break. Whether you agree with the conclusion or not, it is not a hedge fund pitch deck dressed up as a manifesto. It engages seriously with what happens when intelligence becomes abundant.

What He Actually Believes

A few things are consistent across everything Mostaque has said publicly:

AI concentrated in a few companies is a civilizational risk. Not because the companies are evil, but because concentration of that kind of capability is inherently unstable. It invites capture by governments, by bad actors, by shareholders. The only durable safety property is distribution.

Open source is not enough. Releasing weights is necessary but not sufficient. The infrastructure around training data, compute, coordination, and governance also needs to be open, or the open models just run on top of a closed stack that has all the same properties as a closed model.

The right frame is sovereignty, not access. Access means you can use someone’s AI. Sovereignty means you own yours. Most of the AI safety and AI ethics discussion focuses on access – who can use GPT-5, who is blocked, what is filtered. Mostaque’s argument is that access is a distraction from the more fundamental question of who controls the underlying capability.

Education is the first application that matters. He has been consistent on this for years. Not image generation, not enterprise software, not coding tools – teaching. The gap between what children in wealthy countries learn and what children elsewhere learn is no longer fundamentally a resources problem. It is an AI deployment problem. He has argued that building sovereign, locally appropriate AI for education systems in every country is the highest-leverage thing the technology can do.

How to Think About Him

Mostaque is difficult to categorize. He is not a traditional startup founder optimizing for an exit. He is not a researcher interested primarily in capability benchmarks. He is not a safety researcher who thinks the primary risk is misalignment in the technical sense.

He is closer to an infrastructure builder who believes the question of who owns the stack matters more than the question of what runs on it – and who has been willing to walk away from a company he built, at a moment of financial pressure, because he concluded the structural answer was wrong.

Whether Intelligent Internet succeeds is a separate question from whether the diagnosis is correct. The diagnosis – that a world where two or three American companies control the world’s AI infrastructure is neither stable nor good – has more consensus than it did two years ago, even among people who disagree about the solution.

What Mostaque is attempting is a harder version of what he did with Stable Diffusion: not just release a model, but build the infrastructure so that releasing models is the default, and keeping them closed is the exception. That project is earlier and less certain than anything he has shipped before. It is also, if it works, considerably more consequential.