What the New Yorker Investigation Reveals About Sam Altman
The New Yorker's investigation into Sam Altman documents 20 years of misrepresentation. But the real question isn't about his character — it's about why no guardrails were ever built.

We spend a lot of time debating Sam Altman's character. That's the wrong question.
On April 7, 2026, The New Yorker published an investigation by Ronan Farrow and Andrew Marantz titled "Moment of Truth: Sam Altman May Control Our Future - Can He Be Trusted?" Eighteen months of reporting. A hundred interviews. Hundreds of pages of internal documents. The kind of piece that doesn't happen by accident and doesn't get written fast.
What it reveals is troubling. But not necessarily for the reasons most people think.
Twenty Years, Three Organizations, One Pattern
The investigation doesn't start at OpenAI. It goes back to Loopt, Altman's first startup in the mid-2000s. Senior employees had asked the board to fire him, complaining about his tendency to blur the line between "I think we can do this" and "this is already done." Paul Graham, co-founder of Y Combinator, goes further: "Sam was lying to us from the start."
This isn't secondhand gossip. These are on-record statements from people who worked directly with him, documented by two investigative journalists, one of whom broke the Weinstein story. Farrow doesn't publish on flimsy sourcing.
At Y Combinator, the same pattern. Altman reportedly took personal stakes in the program's best startups while serving in his official role, and selectively blocked certain outside investors. He left YC in 2019, officially to run OpenAI. According to the investigation, he was pushed out.
The Episode That Captures Everything
At OpenAI, the allegations get heavier, because the stakes do.
In December 2022, Altman told the board that controversial GPT-4 features had been cleared by an internal safety panel. A board member asked to see the documentation of that review. It didn't exist. The features had never been submitted to any panel.
This isn't interpretation or anonymous sourcing: it's documented in the 70 pages of notes Ilya Sutskever compiled in the fall of 2023 describing what he called a "consistent pattern of lying." Dario Amodei, who co-founded Anthropic after leaving OpenAI, is even more direct in his personal notes: "The problem at OpenAI is Sam himself."
In November 2023, the board tried to fire Altman for "lack of candor." He was reinstated five days later, under pressure from Microsoft and investors. The lesson was clear: shareholders outweigh independent directors.
The Real Problem Isn't Altman's Character
Leaders who oversell their vision a little are not rare. Outside of AI, that makes business headlines but rarely history.
The problem with AI is different. When decisions involve technologies that could shift the global balance of power, you can't afford to build safety on one person's good faith. No matter who that person is.
Gary Marcus, AI researcher, frames it precisely: "If a future OpenAI model could enable the creation of a massive bioweapon or a large-scale cyberattack, do you really want Altman deciding alone whether to release it?"
That's not a personal attack on Altman. It's a structural question.
You can imagine an alternate version of this story where Altman is completely honest, careful, trustworthy — the problem would remain. Concentrating that level of decision-making power in a single person, with no independent external oversight, is dangerous by design. Altman isn't an anomaly: he's the symptom of a system that was never built to resist this kind of pressure.
Promise a Billion, Allocate 2%
One figure from the investigation deserves a second read.
OpenAI had pledged $1 billion in compute resources to its "superalignment" team, the group responsible for the long-term safety of AGI systems. According to four researchers cited in the investigation, the resources actually allocated came to between 1 and 2% of that figure, roughly $10 to $20 million. One researcher was specific: "Most of the superalignment compute was running on old clusters with the worst chips."
This is the kind of gap between promise and reality that doesn't go unnoticed in a normal boardroom. In a sector as concentrated and as poorly monitored as AI was in 2022-2023, no one stepped in.
What This Actually Means
The EU AI Act, often framed as bureaucratic overhead, is starting to look like a sensible response.
Its core principle: high-risk systems must be transparent, auditable, and subject to independent evaluation. Not because AI leaders are necessarily bad actors. Because you can't build safety on trust in an individual when the stakes reach this scale.
The New Yorker investigation is an X-ray. It shows what happens when powerful technology is steered by a single person without adequate external oversight. What happens next depends on the will of regulators, shareholders, and governments.
Altman will probably keep running OpenAI. He'll keep appearing before parliaments, talking about responsibility, pledging cooperation. He's very good at that.
The real question, the one the investigation raises without explicitly answering, is this: how much longer can AI governance be built on trust in a narrative?
Sources: The New Yorker, "Moment of Truth" (Ronan Farrow and Andrew Marantz, April 7, 2026); Semafor; Gary Marcus (Substack).