Former OpenAI Director Discusses Near Merger with Anthropic
OAKLAND, Calif. — Helen Toner, a former director at OpenAI, recently provided insights into how close the company came to merging with its competitor, Anthropic, following the chaotic firing of CEO Sam Altman in November 2023. She referred to the potential merger as a “very high-risk move.”
During her video testimony in a federal court in Oakland, Toner recounted a call made late one Sunday with board members from Anthropic, including their CEO, Dario Amodei. At that time, there was consideration of Amodei taking the lead at OpenAI as well.
“In a series of tough choices, we thought this was something worth exploring,” Toner explained, noting that OpenAI’s entire leadership team had threatened to quit in response to Altman’s removal. Given those circumstances, she said, pursuing a merger that might fall through struck her as potentially dangerous for the company.
Initially, Mira Murati, OpenAI’s chief technology officer, was expected to step in and stabilize the organization. However, she quickly became “reluctant” to assume the interim CEO role, which pushed the board to explore more drastic measures to “rescue the company,” Toner added.
In previous statements, Toner mentioned Murati’s ambivalence regarding Altman’s firing. “She was waiting to see which way the wind would blow,” Toner remarked, suggesting Murati was hesitant to take risks for fear of jeopardizing her career.
Tasha McCauley, another former board member who testified via video, observed that Murati appeared detached about managing the company after employees revolted against the board’s decision to dismiss Altman. McCauley noted that Altman and co-founder Ilya Sutskever had effectively mobilized public support by claiming an “evil coup” was underway, instilling fear within the organization.
Murati expressed significant concern during her prior video testimony, stating, “OpenAI was in catastrophic danger of collapse. I was worried that the company would collapse completely.” She emphasized her intent to keep the company running in the wake of Altman’s ousting.
After Altman’s firing, around 700 out of 800 employees signed a letter denouncing the action, according to McCauley. OpenAI’s President, Greg Brockman, resigned in protest, although both he and Altman later returned.
The revelations came during a high-profile trial in which Elon Musk accused Altman and Brockman of disregarding the company’s founding principles by prioritizing commercial interests over developing AI for humanity’s benefit. Altman claimed Musk was responsible for shifting the company to a for-profit model, labeling the accusations as baseless.
Musk is seeking damages up to $180 billion, along with a court decision to strip OpenAI of its for-profit status and have Altman removed from the board.
In her testimony, McCauley noted that Murati had been in communication with Microsoft CEO Satya Nadella, who expressed a desire for things to revert to their previous state.
McCauley was among the witnesses who criticized Altman’s integrity, stating he often lied and engaged in “dishonest or questionable behavior.” She described incidents occurring every few months that arose “primarily due to Sam’s actions.”
She disclosed a message where Altman told another employee that OpenAI’s legal team indicated the new ChatGPT version didn’t need a safety review, a claim McCauley argued was inaccurate. This raised further alarm about the potential loss of oversight in a for-profit setting, risking safety issues.
In cross-examination, OpenAI’s attorney, William Savitt, contended that the ChatGPT version in question was merely an update to an earlier version that had already undergone a safety review.
McCauley’s criticisms matched those of earlier testimonies from former OpenAI staff, including Murati, who described Altman as an unreliable leader who fostered conflict among top executives.
Additionally, Rosie Campbell, a former policy researcher at OpenAI, testified about safety concerns, asserting that the company had disbanded its leading AI safety team as it shifted away from its nonprofit goals.