Musk v. Altman: The Trial Hiding the Real Story


The Musk v. Altman trial reads like an OpenAI exception. Wrong frame. The nonprofit-to-for-profit pattern is now the AI industry default.


$44 million in donations in 2015. Around $500 billion in valuation in 2025. And in between, a trial that asks the wrong question.

In Oakland, on Tuesday, April 28, opening statements began in Musk v. Altman: four weeks of hearings, nine jurors, a federal judge, and sworn depositions from Sam Altman, Satya Nadella, Mira Murati, and Ilya Sutskever. Most of the tech press is treating it as a personal duel: Musk wants payback, Altman wants to keep his company. That framing isn't wrong, but it misses the forest for the trees.

The real story isn't the drama. It's the pattern.

What the press covers and what it misses

Most major outlets, in France and elsewhere, have anchored their coverage on two angles: the trial timeline and how the jury reads Musk. Defensible journalism, but a low-altitude read. Meanwhile, nobody is asking the underlying question: why is OpenAI the only AI lab that can be sued for mission drift?

Short answer: because Musk donated enough money to have the legal firepower to sue. The entry ticket for a trial like this runs into tens of millions of dollars. Below that threshold, no 2015 donor gets their day in court. This trial is unique because of the size of the original check, much more than because of the nature of the case.

The pattern that took hold over a decade

Let's look at the family photo of major AI labs in 2026.

OpenAI started in 2015 as a 501(c)(3) nonprofit. In 2019, it spun up OpenAI LP, a capped-profit subsidiary. In October 2025, the recap closed: OpenAI Group, a Public Benefit Corporation, is now owned 47% by employees and investors, 27% by Microsoft, and just 26% by the OpenAI Foundation. The foundation is a minority shareholder in the entity it is meant to govern.

Anthropic, founded in 2021, started directly as a Delaware PBC, with a Long-Term Benefit Trust of five financially disinterested independent trustees who will eventually elect a board majority. It is the sector's most thoroughly engineered safeguard against mission drift. It is also the only one.

Mistral, Cohere, xAI: for-profit from day one, no detour through nonprofit status. Mistral has raised $3 billion, Cohere never claimed to be anything other than a company, xAI belongs to Musk himself. The OpenAI lesson has been absorbed: starting for-profit avoids the courtroom ten years later.

Four of these five labs are now in pure for-profit or diluted hybrid form. Anthropic stands alone.

The Mozilla precedent, or how it could have been done

The most useful analogy isn't in AI, it's older. Mozilla Foundation was created in 2003 as a California nonprofit. In 2005, it spun up Mozilla Corporation, a wholly-owned for-profit subsidiary. The reason was identical to OpenAI's in 2019: nonprofit status couldn't handle commercial revenue at scale.

The difference is in the structure. Mozilla Corporation has no external shareholders, no stock options, no dividends. All profits flow back into Mozilla projects.

The subsidiary is locked against takeover, with no exit door for founders or investors.

Twenty years later, Mozilla is still operating in this form. The structure held.

OpenAI did the opposite. By opening 47% of the capital to employees and external investors during the 2025 recap, it converted a philanthropic foundation into a commercial entity with residual moral cover. That is a different legal animal. The same formula, "nonprofit overseeing a for-profit," hides two completely different models.

Nobody is watching mission drift

The European AI Act, passed in 2024 and now in force, regulates product risk: risk categories, transparency, foundation-model documentation. No chapter covers corporate structure. No mechanism monitors mission drift.

The U.S. side is even lighter. The SEC has no jurisdiction (OpenAI isn't public), and the FTC handles competition, not mission. The only real backstop is state attorneys general, in this case California and Delaware.

They reviewed the OpenAI 2025 conversion. They let it through, with a few conditions on teen safety and AGI.

So: no dedicated authority, no AI Act chapter, no blocking control in the U.S. The only legal lever available today is the one Musk grabbed: breach of charitable trust. It is a framework centuries old, ill-suited to organizations valued at $500 billion, and one that requires a donor with the means to pay a legal team in Oakland for four weeks.

What the verdict won't settle

Judge Yvonne Gonzalez Rogers will rule after an advisory jury verdict, likely around late May. Whatever the outcome, two things will be true.

First, OpenAI will stay structured the way it is today. The judge has already labeled unwinding the for-profit conversion an "extraordinary and rarely-granted" remedy. The odds that the OpenAI Group PBC gets dissolved by court order are low.

Second, the pattern will keep spreading. No AI lab has any incentive to start as a nonprofit today. The Musk-Altman trial will have mobilized dozens of lawyers over four weeks of hearings, just to publicly demonstrate that the formula should be avoided. Next-generation founders have taken note.

A $500 billion organization overseen by a 26% foundation is not a nonprofit that drifted a little. It's a for-profit with residual moral cover. The question for the coming decade is no longer whether Musk is right against Altman, but who watches AI labs when the mission statement cracks. Right now, the answer is almost nobody.

Topics covered:

Ethics, Analysis

Frequently asked questions

Why is the Musk-Altman trial held in Oakland?
The case sits before the U.S. District Court for the Northern District of California, in Oakland, with Judge Yvonne Gonzalez Rogers presiding. Musk's claim hinges on breach of charitable trust, a California-specific framework that governs mission drift in nonprofit organizations chartered in the state.
What does OpenAI's structure look like after the 2025 recap?
Since October 2025, OpenAI Group is a Public Benefit Corporation owned 47% by employees and investors, 27% by Microsoft, and just 26% by the OpenAI Foundation. The foundation is a minority shareholder in the entity it is supposed to govern.
Why is Anthropic considered the exception?
Anthropic was founded in 2021 directly as a Delaware Public Benefit Corporation, with a Long-Term Benefit Trust of five financially disinterested independent trustees who will eventually elect a board majority. It is the sector's most fully realized governance safeguard against mission drift.
Does the EU AI Act cover mission drift in AI labs?
No. The AI Act, passed in 2024, regulates product risk (risk categories, transparency, foundation-model documentation). There is no chapter covering corporate structure or mission drift in AI laboratories.
Is there a precedent for structuring a nonprofit/for-profit hybrid properly?
Yes. Mozilla, 2003-2005. Mozilla Corporation is a wholly-owned for-profit subsidiary with no external shareholders, no stock options, and no dividends. All profits flow back into Mozilla projects. The structure has held for twenty years.