A Deep Dive into The New Yorker's Report: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?

Original author: Xiao Bing, Deep Tide TechFlow

In the fall of 2023, OpenAI Chief Scientist Ilya Sutskever sat in front of his computer and completed a 70-page document.

The document was compiled from Slack message logs, HR communication records, and internal meeting minutes, all to answer a single question: can Sam Altman, the person in charge of what may be the most dangerous technology in human history, really be trusted?

Sutskever’s answer appeared on the first line of the first page, under a list heading that read: “Sam shows a consistent pattern of behavior…”

First: Lying.

Two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz have published a long-form investigation in The New Yorker. They interviewed more than 100 people involved, obtained previously unpublished internal memos, and gained access to more than 200 pages of private notes that Dario Amodei, the founder of Anthropic, left behind from his time at OpenAI. The story these documents piece together is uglier than the 2023 “palace intrigue” episode: how OpenAI transformed, step by step, from a nonprofit created to keep humanity safe into a commercial machine, with nearly every safety guardrail dismantled by the same person.

Amodei’s conclusion in the notes is even more blunt: “OpenAI’s problem is Sam himself.”

OpenAI’s “original sin”

To understand the weight of this report, it’s necessary to first clarify just how special this company is.

In 2015, Altman and a group of Silicon Valley elites did something almost without precedent in business history: they set up a nonprofit organization to develop what could be the most powerful technology humanity has ever built. The board’s responsibilities were spelled out clearly: safety comes first, before the company’s success and even before its survival. In plain terms, if OpenAI’s AI ever became dangerous, the board had a duty to shut the company down with its own hands.

The entire structure rests on one assumption: the person who controls AGI must be exceptionally honest.

What if they bet wrong?

The report’s central bombshell is that 70-page document. Sutskever is not an office politician; he is one of the world’s top AI scientists. But by 2023 he had become increasingly convinced of one thing: Altman was continually lying to executives and the board.

A specific example: at a board meeting in December 2022, Altman assured the board that multiple features of the forthcoming GPT-4 had already passed safety review. Board member Helen Toner asked to see the approval documents, only to find that two of the most controversial features (user-customized fine-tuning and personal-assistant deployment) had not been approved by any safety panel at all.

Something even stranger happened in India. An employee reported a violation to another board member: Microsoft had released an early version of ChatGPT in India ahead of schedule, without completing the required safety review.

Sutskever also recorded another incident in a memo: Altman told then-CTO Mira Murati that the safety approval process wasn’t that important, because the company’s general counsel had already signed off on it. When Murati went to confirm with the general counsel, the reply was: “I don’t know where Sam got that impression.”

Amodei’s 200-page private notes

Sutskever’s document reads like a prosecutor’s indictment. The more than 200 pages of notes Amodei left behind read more like the diary of an eyewitness at a crime scene.

During his years at OpenAI as head of safety, Amodei watched firsthand as the company gave ground, step by step, under commercial pressure. In his notes he recorded a key detail from the 2019 Microsoft investment deal: he had worked a “merge and assist” clause into OpenAI’s charter. In substance, it meant that if another company found a safer path to AGI, OpenAI should stop competing and instead help that company. It was the safety guarantee he valued most in the entire transaction.

As the deal was about to be signed, Amodei discovered something: Microsoft had obtained veto power over that clause. What did that mean? Even if a competitor one day truly found a better path, Microsoft could kill OpenAI’s obligation to assist with a single word. The clause still existed on paper, but from the day the signatures dried, it was effectively worthless.

Amodei later left OpenAI and founded Anthropic. At its root, the competition between the two companies is a fundamental disagreement over how AI should be developed.

The missing 20% compute commitment

One detail in the report chills you to the bone: it concerns OpenAI’s “superalignment team.”

In mid-2023, Altman emailed a PhD student at Berkeley researching “deceptive alignment” (AI pretending to be obedient during tests, then doing its own thing after deployment). Altman said he was extremely concerned about this issue and was considering setting up a $1 billion global research prize. The student was greatly encouraged, took a leave of absence, and joined OpenAI.

Then Altman changed his mind: there would be no external prize; instead, the company would build an internal “superalignment team.” OpenAI loudly announced that it would dedicate 20% of the compute it had secured to date to the team, a commitment potentially worth more than $1 billion. The wording of the announcement was deadly serious: if the alignment problem went unsolved, AGI could lead to “the disempowerment of humanity or even human extinction.”

Jan Leike, who was appointed to lead the team, later told reporters that the commitment itself was a very effective “talent retention tool.”

So what happened in reality? Four people who worked on or closely with the team said the compute actually allocated was only 1% to 2% of the company’s total, and on the oldest, most outdated hardware at that. The team was later disbanded, its mission unfinished.

When reporters asked to interview people responsible for “existential safety” research at OpenAI, the company’s PR response was laughable in a sad way: “That’s not… a thing that actually exists.”

Altman himself seemed unbothered. He told the reporters that his “intuition doesn’t line up very well with many of the things in traditional AI safety,” and that OpenAI would still do “safety projects—or at least projects with something to do with safety.”

A sidelined CFO and the upcoming IPO

The New Yorker’s report was only half of the day’s bad news. On the same day, The Information broke another major story: OpenAI CFO Sarah Friar was at serious odds with Altman.

Friar privately told colleagues she didn’t think OpenAI was ready to go public this year, for two reasons: the procedural and organizational work still to be done was too great, and the financial risk from the $600 billion in compute spending Altman had committed to over five years was too high. She wasn’t even sure OpenAI’s revenue growth could keep pace with those commitments.

But Altman wanted to push for the IPO in the fourth quarter of this year.

More remarkable still, Friar no longer reported directly to Altman. Starting in August 2025, she reported to Fidji Simo, the CEO of OpenAI’s applications business. And Simo had just gone on leave last week for health reasons. Consider the situation: a company is pushing for an IPO; its CEO and CFO fundamentally disagree; the CFO doesn’t report to the CEO; and the CFO’s boss is on leave.

Even executives inside Microsoft couldn’t stand it, saying Altman was “distorting facts, reneging, and constantly overturning agreements that had already been reached.” One Microsoft executive even said: “I think there’s a certain probability that he will ultimately be remembered as a scammer on the level of Bernie Madoff or SBF.”

Altman’s “two-faced” profile

A former OpenAI board member described to reporters two traits in Altman. This passage may be the harshest character sketch in the entire report.

The board member said Altman has a very rare combination of traits: in every in-person interaction, he desperately wants to please the person in front of him and to be liked. At the same time, he shows a nearly sociopathic indifference to the consequences of deceiving people.

It’s extremely rare for both traits to appear in one person. But for a salesman, it’s the perfect talent.

The report offers an apt analogy: Steve Jobs was famous for his “reality distortion field”; he could make the whole world believe in his vision. But even Jobs never told customers, “If you don’t buy my MP3 player, the people you love will die.”

Altman has said something similar—about AI.

A CEO’s character problem—why it’s everyone’s risk

If Altman were merely the CEO of an ordinary technology company, these accusations would amount to nothing more than entertaining business gossip. But OpenAI is not ordinary.

By its own account, it is developing what could be the most powerful technology in human history: technology that could reshape the global economy and labor market (OpenAI itself just released a policy white paper on AI-driven unemployment), and that could also be used to manufacture biological weapons at scale or launch cyberattacks.

All the safety guardrails are essentially just for show. The founding nonprofit mission has been set aside in favor of an IPO sprint. Both the former chief scientist and the former head of safety concluded that the CEO is “not trustworthy.” Partners have compared the CEO to SBF. Under these circumstances, on what basis does this one CEO get to unilaterally decide when to release an AI model that could change the fate of humanity?

Gary Marcus (a professor emeritus at New York University and a long-time advocate for AI safety) wrote one line after reading the report: if a future OpenAI model could create large-scale biological weapons or launch catastrophic cyberattacks, would you really feel comfortable letting Altman alone decide whether to release it?

OpenAI’s response to The New Yorker was terse: “Most of the content in this article is a rehash of events that have already been reported, built on anonymous language and selective anecdotes; the sources clearly have personal motives.”

It is very much Altman’s style of response: it doesn’t address the specific allegations or deny the authenticity of the memos; it only questions motives.

On the corpse of a nonprofit, a money tree grows

OpenAI’s decade, written as a story outline, looks like this:

A group of idealists worried about AI risk create a mission-driven nonprofit. The organization makes extraordinary technical breakthroughs. The breakthroughs attract enormous amounts of capital. The capital demands returns. The mission begins to give way. The safety team is disbanded. Those who raise questions are pushed out. The nonprofit structure is converted into a for-profit one. The board that once had the power to shut the company down is now filled with the CEO’s allies. And a company that once promised to devote 20% of its compute to protecting humanity now has its PR people say, “That’s not a thing that actually exists.”

As for the story’s protagonist, more than a hundred firsthand witnesses gave him the same label: “not constrained by the truth.”

He is getting ready to take this company public at a valuation of more than $850 billion.

The information in this article is compiled from public reporting by multiple outlets, including The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and others.
