
The New Yorker’s 8 Key Allegations Against Sam Altman at OpenAI

Esther Speak - Senior Reporter at Villpress
8 Min Read

The New Yorker dropped an 18-month investigation into OpenAI CEO Sam Altman this week, and it lands like a depth charge beneath the surface of one of tech’s most closely watched companies. Written by Ronan Farrow and Andrew Marantz, the piece draws on more than 100 interviews and never-before-disclosed internal documents, including roughly 70 pages of Slack messages, HR records, and analysis compiled by former chief scientist Ilya Sutskever in the fall of 2023.

Titled “Sam Altman May Control Our Future, Can He Be Trusted?”, the story doesn’t unearth a single smoking-gun scandal. Instead, it assembles a pattern of alleged behavior that, according to multiple former board members, executives, and colleagues, raises serious questions about Altman’s candor at a company whose technology could reshape, or endanger, humanity.

Here are the eight key allegations that emerge from the reporting, presented in the order they surface in the investigation and related coverage.

1. A documented “pattern of lying”

Sutskever’s memos, sent as disappearing messages to fellow board members, open with a stark heading: “Sam exhibits a consistent pattern of . . .” The first item listed is simply “Lying.” The documents accuse Altman of misrepresenting facts to executives and the board, including on internal safety protocols. One board member who reviewed the material recalled Sutskever being “terrified” about entrusting frontier AI development to someone who “just tells people what they want to hear.”

2. Misleading the board on safety commitments

The memos allege Altman deceived colleagues about the company’s adherence to safety requirements. This occurred against the backdrop of OpenAI’s original nonprofit charter, which placed humanity’s long-term interests above commercial success. Multiple sources told The New Yorker that Altman’s approach undermined the environment needed for safe AGI development. After the memos circulated, the board briefly fired him in November 2023, citing that he was “not consistently candid.”

3. Offering the same job to multiple people

The accumulation of smaller deceptions includes instances where Altman allegedly extended the same role to two different candidates, creating confusion and resentment. In isolation, such moves might seem like aggressive recruiting; stacked together, they fed a broader narrative of manipulation.

4. Contradictory stories to executives

Altman is accused of telling different versions of events to different people: shifting narratives, for example, about who should appear on a live stream or how certain decisions were made. Former colleagues described this as a habit of tailoring the truth to whichever audience was in front of him.

5. Concealing or downplaying financial entanglements

Departing board members reportedly conditioned their exit on an investigation into Altman’s handling of financial interests, including ties to foreign governments. During the Biden administration, Altman sought a security clearance but withdrew after concerns surfaced about his efforts to raise “hundreds of billions” from foreign entities, including a reported gift of a luxury car from the UAE. One RAND staffer compared the red flags to those surrounding Jared Kushner.

6. Pushing transactional relationships with Gulf states

The reporting details Altman’s outreach to autocratic governments, including exploratory talks that raised alarms inside the U.S. administration. Sources described these as “transactional relationships” that prioritized funding over governance concerns. One internal plan even floated selling AI capabilities to governments that could include Russia or China, though details remain limited.

7. Undermining or reversing safety pledges

OpenAI has repeatedly scaled back or reframed its early safety commitments. The piece notes the dissolution of key safety teams, including the superalignment group, and a February 2026 decision to weaken a major safety pledge amid a $30 billion funding round. Former policy director Jack Clark, now at Anthropic, observed that capital markets reward speed over caution.

8. A history of similar concerns predating OpenAI

The New Yorker surfaces allegations from Altman’s earlier roles. Paul Graham, who recruited him to lead Y Combinator, reportedly told colleagues that “Sam had been lying to us all the time” before Altman’s departure. Dario Amodei, who left OpenAI to found Anthropic, compiled his own extensive notes documenting a shift from idealism to alarm over Altman’s leadership style.

OpenAI pushed back sharply. In a statement, the company described much of the piece as revisiting old events through “anonymous claims and selective anecdotes sourced from people with clear agendas.” Altman himself sat for more than a dozen interviews with the reporters and disputed several specifics.

The article also touches on a separate, unrelated civil lawsuit filed by Altman’s sister Annie, alleging childhood sexual abuse; Altman has vehemently denied the claims and countersued for defamation. That case is proceeding under Missouri law but is not central to the OpenAI-focused reporting.

What makes the New Yorker piece notable is its restraint. Farrow and Marantz avoid grand conclusions, letting the mosaic of incidents speak for itself. Board members who reviewed Sutskever’s memos came away believing Altman’s position, with his “finger on the button” of potentially civilization-altering technology, required uncommon integrity. Several concluded he lacked it.

In Silicon Valley, where hype often outpaces delivery and founder myths die hard, the story lands at a delicate moment. OpenAI’s valuation continues to soar, its models power millions of users daily, and the competitive race with Anthropic, Google, and others shows no sign of slowing. Yet the questions the 2023 board grappled with, and that this investigation revives, refuse to disappear: When the stakes involve existential risk, how much tolerance should there be for a leader described by one former board member as “unconstrained by truth”?

The piece won’t end Altman’s tenure. Investor pressure and employee loyalty proved decisive in 2023, and the company’s commercial momentum has only grown since. But it adds weight to a persistent undercurrent of doubt, one that former colleagues, safety researchers, and even some current executives continue to whisper about in private.

For an industry that likes to move fast and break things, the New Yorker is asking whether, this time, the things being broken might be trust itself. Readers will draw their own conclusions. The memos, the funding rounds, the safety team exits, and the pattern of alleged deceptions are now part of the public record. In the age of AGI, that record matters more than most.

Esther Speak is a senior reporter and newsroom strategist at Villpress, where she shapes Africa-focused business, technology, and policy coverage. She works at the intersection of journalism and editorial systems, producing clear, high-impact news that travels globally while staying rooted in African realities.
