We finally know what Ilya saw: Sam is (allegedly) a big fat liar.

Well folks, the Musk v. Altman case is the gift that keeps on giving (to OpenAI drama lovers). The latest: the deposition of former OpenAI Chief Scientist Ilya Sutskever, 62 pages of which have been made public, and it's even spicier than we imagined. The transcript details a calculated plan to oust Sam Altman, a panicked board, and a wild, failed plot to merge with their biggest rival. Below is the TL;DR:
The whole drama started with a 52-page memo Ilya wrote at the board's request. Its opening line set the tone: "Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another." Ilya admitted under oath that his goal was simple: termination.
Here’s where it gets really interesting:
- The Disappearing Memo: Fearing Sam would "make them disappear," Ilya sent his memo exclusively to the independent directors using a disappearing email.
- The Murati Connection: Ilya revealed that "most or all" of his evidence, including screenshots, came directly from CTO Mira Murati. Key accusations—like Sam being pushed out of YC and Greg Brockman being fired from Stripe—were based on secondhand stories from Mira that Ilya never bothered to verify himself.
- A Rushed Process: Ilya now admits the firing was "rushed" because the board was "inexperienced in board matters." He also confessed he’d been waiting "at least a year" for the board dynamics to shift so he could make his move.
But the biggest bombshell was the weekend chaos after the firing. With employees threatening to quit en masse, board member Helen Toner—whose earlier article praising competitor Anthropic Ilya called "not far from obviously inappropriate"—came up with a plan.
On a board call that Saturday, Toner proposed a merger with Anthropic, with their leadership taking over OpenAI. Ilya was "very unhappy" about it, but he says the other board members, especially Helen, were "a lot more supportive." The deal only fell apart because Anthropic raised "practical obstacles."
So now we basically know what went down during OpenAI's wild 48-hour weekend coup. The deposition paints a picture of a coup driven by secondhand gossip and a board that lost control through sheer inexperience. Ilya thought employees wouldn't "feel strongly either way" about Sam's firing, a massive miscalculation that nearly destroyed the company and almost handed its biggest competitor the keys to its presumably untold future riches.
The whole deposition is a stunning look at the chaos behind one of the most pivotal moments in AI history. Below, we dive into the key moments that caught our attention.
The Secret Memo, Secondhand Intel, and the Failed Plot to Merge OpenAI with Anthropic
The saga of Sam Altman’s brief ousting from OpenAI in November 2023 has been shrouded in mystery and speculation. What did Ilya Sutskever see? Why did the board act so abruptly? And what really happened during those chaotic 72 hours? Thanks to a newly surfaced deposition transcript from Elon Musk’s lawsuit against OpenAI, we finally have answers directly from the man at the center of the storm: former Chief Scientist and board member Ilya Sutskever.
The nearly 10-hour deposition, held on October 1, 2025, peels back the curtain on a plan to remove Altman that had been brewing for at least a year, a process fueled by secondhand information, and a stunning, last-ditch effort by some board members to merge the company with its chief rival, Anthropic.
The 52-Page Memo That Started It All
At the heart of the coup was a 52-page memo, a document that has achieved near-mythical status. In his testimony, Sutskever confirmed its existence and explosive contents. Prepared at the request of independent board member Adam D'Angelo, the memo was a detailed indictment of Sam Altman's leadership style.
The memo's creation and distribution were cloaked in secrecy. Sutskever testified he sent it exclusively to the board’s three independent directors—Adam D'Angelo, Helen Toner, and Tasha McCauley—using a "disappearing email" because he was worried it would leak. It was deliberately withheld from Altman. Why? "Because I felt that, had he become aware of these discussions, he would just find a way to make them disappear," Sutskever stated under oath.
A Coup Built on Secondhand Information?
Perhaps the most startling revelation was the source of the memo's evidence. Sutskever admitted that "most or all" of the supporting material, including critical screenshots, was provided to him by OpenAI's CTO, Mira Murati.
Throughout the deposition, a pattern emerged: many of the most damaging claims against Altman and President Greg Brockman (about whom Sutskever also wrote a critical memo) were based on information funneled through Murati, which Sutskever failed to independently verify.
- The Y Combinator Allegation: The memo claimed Altman "was pushed out from YC for similar behaviors." Sutskever testified this information came from Murati, who had allegedly heard it from OpenAI COO Brad Lightcap. Sutskever never spoke with Lightcap to confirm the story.
- The Stripe Allegation: A claim that Greg Brockman was "essentially fired from Stripe" also came directly from Murati. Again, Sutskever admitted he never attempted to verify it with Brockman. "It didn't occur to me," he said. "I fully believed the information that Mira was giving me."
This reliance on unverified, secondhand accounts became a central theme of his reflection. When pressed on his sourcing, Sutskever conceded his mistake. "I've learned the critical importance of firsthand knowledge for matters like this," he said, adding that "secondhand knowledge is an invitation for further investigation." It was an investigation the board, by his own admission, never conducted. In the memo, he suggested the board speak with individuals like Bob McGrew and Nick Ryder, but he doesn't know whether anyone ever did. The process, he now says, was "rushed" because the board was "inexperienced in board matters."
A Year in the Making: "Waiting for the Board Dynamics to Change"
The decision to oust Altman was not a snap judgment. Sutskever revealed he had been considering proposing Altman’s removal for "at least a year." What was he waiting for? "A moment when the board dynamics would allow for Altman to be replaced," he testified. Specifically, a time when the "majority of the board is not obviously friendly with Sam."
This testimony reframes the narrative from a sudden crisis of conscience to a long-simmering plan awaiting the right political conditions. When those conditions finally arrived, Sutskever and the board acted. However, they disastrously misjudged the consequences. Sutskever admitted he was "astounded" by the employee backlash, which saw over 95% of the staff threaten to resign. He had expected them "not to feel strongly either way."
Pre-existing Board Tensions
The conflict with Helen Toner didn't just appear out of nowhere during the crisis weekend. Tensions had been simmering for a while. Sutskever testified that just a month before the firing, in October 2023, Toner had published an article that criticized OpenAI while praising its competitor, Anthropic.
Sutskever found this action to be "a strange thing for her to do" and "not far from obviously inappropriate" for an OpenAI board member. The situation was serious enough that he discussed with Sam Altman the prospect of removing Toner from the board. This pre-existing friction helps explain the deep divisions that exploded into view that November.
The Anthropic Bombshell: A Weekend Merger Plot
As OpenAI teetered on the brink of collapse that Saturday, an even more shocking plan was hatched. Sutskever testified that Helen Toner brought a proposal to the board: merge OpenAI with Anthropic and install Anthropic's leadership to run the new combined entity.
A call was held with Anthropic's leaders, including CEO Dario Amodei. Sutskever was horrified. "I was very unhappy about it," he said. But he was in the minority. "They were a lot more supportive," he said of the other board members, identifying Helen Toner as the "most supportive" of the merger.
The only thing that stopped the deal, according to Sutskever, was a set of "practical obstacles that Anthropic has raised." The company that was nearly destroyed by an internal coup was almost handed over to its biggest rival in a fire sale. This episode was followed by another startling comment from Toner, who, in a meeting with executives, allegedly stated that allowing OpenAI to be destroyed "would be consistent with the mission."
Ilya's View on AGI and Power
Beyond the boardroom drama, the deposition also offered a rare look into Sutskever's thinking about the future. When asked about who should be in charge of an eventual AGI, he gave a rather cynical, political answer that could help explain his actions.
His full response reveals a deep-seated belief that the path to controlling AGI is paved with power dynamics, not pure altruism. He stated:
"My view is that, with very few exceptions, most likely a person who is going to be in charge is going to be very good with the way of power. And it will be a lot like choosing between different politicians. Who is going to be the head of the state?"
He then elaborated on why a purely virtuous leader might not succeed, adding:
"That's how the world seems to work. I think it's not impossible, but I think it's very hard for someone who would be described as a saint to make it. I think it's worth trying. I just think it's... like choosing between different politicians."
This philosophical view is critical because it reframes the entire conflict. For Sutskever, the fight wasn't just about management tactics or whether Sam Altman lied about a specific project. It was a battle over the fundamental character of the person who might one day steward the world's most powerful technology. By describing the ideal leader as a type of "politician," he was framing Sam Altman's alleged behavior—manipulation, undermining executives, and consolidating power—not just as poor leadership, but as a dangerous qualification for a role that demanded a different kind of integrity.
We can infer that this belief likely shaped his conclusion: Sam Altman, a master of power and influence, was precisely the kind of power-adept "politician" Ilya considered ill-suited for the ultimate responsibility of safely developing AGI.
The Aftermath: Financial Interests and Lingering Questions
Today, Sutskever has moved on to his new venture, Safe Superintelligence Inc. However, he still has a significant financial stake in the company he almost brought down. He confirmed he retains equity in OpenAI and that its value has "increased" since his departure. When asked to quantify that stake, his lawyers repeatedly instructed him not to answer. He also testified that he believes OpenAI is paying his legal fees for the deposition.
In sum, Ilya Sutskever’s testimony paints a damning picture of a governance failure driven by palace intrigue, unverified information, and a profound misreading of the organization's culture. The man who set out to save OpenAI from what he perceived as dangerous leadership nearly erased it from existence, revealing just how fragile even the most powerful company in AI can be.
Now, in case you want a quick guide before skimming the full transcript yourself, what follows is a heavily abridged, chronological version of Ilya Sutskever's testimony (prepared by Gemini 2.5 Pro). We've removed repetitive questions, lengthy lawyer objections, and procedural arguments to present a clear, readable narrative of what was said. Every line of dialogue is a direct quote from the deposition.
The Abridged Deposition of Ilya Sutskever
This testimony, given on October 1, 2025, covers the creation of the infamous memo against Sam Altman, the chaotic weekend of his firing, the failed plot to merge with Anthropic, and Sutskever's reflections on power and AGI.
Part 1: The Secret Memo
Q: Why didn't you send [the memo] to Sam Altman?
A: Because I felt that, had he become aware of these discussions, he would just find a way to make them disappear.
Q: Which independent directors asked you to prepare your memo?
A: It was most likely Adam D'Angelo.
Q: The document that you prepared, the very first page says: [As Read] "Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another." That was clearly your view at the time?
A: Correct.
Q: What action did you think was appropriate?
A: Termination.
Q: You sent it using a form of a disappearing email; is that right?
A: Yes.
Q: Why?
A: Because I was worried that those memos will somehow leak.
Q: You drafted a similar memo that was critical of Greg Brockman; correct?
A: Yes.
Q: Does a version of your memo about Greg Brockman exist anywhere in any form?
A: I believe various lawyers have a copy.
[Sutskever is then instructed by his counsel not to specify which other lawyers have a copy, citing attorney-client privilege.]
Part 2: The Source of the Allegations
Q: You wrote: [As Read] "Most of the screenshots that I have... I get them from Mira Murati. It made sense to include them in order to paint a picture from a large number of small pieces of evidence." Is that accurate?
A: [Sutskever confirms this is an accurate description of how he compiled the memo.]
Q: You say here there's reason to believe that Sam was removed from YC in the past for a reason similar to the one that you identify in this document... [As Read] "Sam was pushed out from YC for similar behaviors." Am I right the basis for this is a conversation that Mira had with Brad Lightcap?
A: The basis of this is a conversation that I had with Mira.
Q: Did you speak to Brad Lightcap?
A: No.
Q: You also write here at the bottom: [As Read] "It is my understanding that Greg has was essentially fired from Stripe as well." What was the basis for that allegation?
A: Mira told me.
Q: Did you seek to verify it with Greg?
A: No.
Q: Why not?
A: It didn't occur to me... I fully believed the information that Mira was giving me.
Q: The screenshots in this section [titled "Lying to Mira"] all came from Mira?
A: Correct.
Q: Do you know whether GPT-4 Turbo actually went through the DSB [Deployment Safety Board]?
A: I don't know.
Q: And you've since learned facts that have changed your view?
A: No. Instead I've learned the critical importance of firsthand knowledge for matters like this.
Q: Do you think it was a mistake to rely on secondhand knowledge?
A: I think secondhand knowledge can be very useful, but I think that secondhand knowledge is an invitation for further investigation.
Q: At a number of points in your document, you suggest that the reader or the board may want to talk to certain people... Bob McGrew... Nick Ryder... Were those suggestions not followed through on?
A: I don't know.
Part 3: A Rushed Process and a Tense Board
Q: Looking back at the process that preceded the removal of Sam... what's your assessment of that process?
A: One thing I can say is that the process was rushed.
Q: Why was it rushed?
A: I think it was rushed because the board was inexperienced... in board matters.
Q: Do you recall in October 2023 Helen Toner publishing an article criticizing OpenAI?
A: I do recall.
Q: What do you recall about that?
A: I don't recall the nature of the criticism, but I recall it was praising Anthropic.
Q: Did you think it was appropriate for her to do as a board member of OpenAI?
A: I thought it was not far from obviously inappropriate.
Q: Did you discuss with anyone the prospect of Helen being asked to leave the board at that time?
A: Yes. I discussed it, at least, with Sam.
Q: After Sam was removed, do you recall Helen Toner telling employees that allowing the company to be destroyed would be consistent with the mission?
A: I do recall. The executives told the board that, if Sam does not return, then OpenAI will be destroyed... And Helen Toner said something to the effect of that it is consistent, but I think she said it even more directly than that.
Part 4: The Anthropic Merger Plot
Q: Do you know whether a proposal was made around that time for OpenAI to merge with Anthropic?
A: I do know that.
Q: Tell me about that.
A: I don't know whether it was Helen who reached out to Anthropic or whether Anthropic reached out to Helen. But they reached out with a proposal to be merged with OpenAI and take over its leadership.
Q: When was that?
A: On Saturday... shortly after the removal of Sam and Greg.
Q: How did you hear about that?
A: Because there was a board call with Helen and the other board members where she told us about it. There has been a subsequent call with the leadership of Anthropic.
Q: Were you present for that call?
A: Yes.
Q: Who from Anthropic was on that call?
A: I recall Dario Amodei on the call and Daniela Amodei.
Q: And what was your response to that?
A: I was very unhappy about it.
Q: And what about the other board members? Were they supportive?
A: They were a lot more supportive, yes.
Q: Among the board members, who struck you as most supportive?
A: I would say my recollection is that Helen was the most supportive.
Q: And what happened with the proposal?
A: My recollection is that there were some practical obstacles that Anthropic has raised, and so the proposal did not continue.
Part 5: Motivations and Reflections
Q: This article... says: [As Read] "Sutskever had been waiting for a moment when the board dynamics would allow for Altman to be replaced as the CEO." Is that correct?
A: Yes.
Q: And what were the dynamics you were waiting for?
A: That the majority of the board is not obviously friendly with Sam.
Q: How long had you been considering it?
A: At least a year.
Q: [The article says] "Sutskever was astounded. He had expected the employees of OpenAI to cheer." Is that true?
A: I had not expected them to cheer, but I have not expected them to feel strongly either way.
Q: [What kind of person should be in charge of AGI?]
A: Right now, my view is that... a person who is going to be in charge is going to be very good with the way of power. And it will be a lot like choosing between different politicians... I think it's very hard for someone who would be described as a saint to make it.
Part 6: Financial Interests
Q: What did you think the value of your equity in OpenAI was at the time of Sam Altman's firing?
ATTORNEY AGNOLUCCI: I'm instructing him not to put a number on it.
Q: Are you going to not answer?
A: I mean, I have to obey my attorney.
Q: When you left OpenAI... did you have an equity stake in the company?
A: Yes.
Q: Do you still have a financial interest in OpenAI?
A: Yes.
Q: Has the value of your interest in OpenAI increased or decreased since you left?
A: Increased.
[Sutskever is again instructed by his counsel not to state the monetary value of his interest.]
Q: Who is paying your legal fees for this deposition?
A: I'm not sure. I have a guess, but I'm not 100 percent sure.
Q: Have you received any bills for legal fees?
A: No.
Q: Is OpenAI paying your legal fees?
A: I think that's probably the case.
Q: What makes you think that?
A: Because I don't know who else it would be.