Sam Altman faced significant criticism from former colleagues during the second week of the trial involving Elon Musk and OpenAI. This trial has been quite intense, with several witnesses offering testimony regarding Altman’s leadership and the organization’s commitment to AI safety. Musk claims that Altman, along with OpenAI President Greg Brockman, strayed from their nonprofit goals by collaborating with Microsoft, effectively undermining the charitable mission they established back in 2015.
This week, three notable witnesses shared their concerns about various aspects of Altman’s management approach. Their testimonies highlighted issues related to AI safety practices, ethical leadership, and adherence to the nonprofit mission initially laid out by OpenAI.
Rosie Campbell, who was an AI safety researcher at OpenAI from 2021 to 2024, expressed worries over the perceived decline in the organization’s focus on safety research. When she started at OpenAI, there were two teams dedicated to long-term AI safety. One team worked to ensure that AI aligned with human values, while the other, which Campbell was part of, aimed to prepare for superhuman AI.
According to Campbell, over time, OpenAI transitioned to a more product-centric approach. Ultimately, both safety teams were disbanded, and about half of her team left rather than accept alternate roles within the company.
She also mentioned her involvement with a letter advocating for Altman’s return after the board initially removed him from his CEO position. Interestingly, Campbell clarified that her support for Altman was not personal, but rather stemmed from her concern that if he weren’t reinstated, some OpenAI employees might move to Microsoft, which she believed had a weaker commitment to AI safety. “I thought that the best path forward for OpenAI was for Sam to come back,” she said.
In a surprising twist, Campbell also suggested that Musk’s AI venture, xAI, probably has a less rigorous approach to safety than OpenAI does.
Additionally, Tasha McCauley, a former board member who voted for Altman’s removal, testified via deposition. Her remarks corroborated another former board member’s earlier account of her lack of trust in Altman and of an unhealthy culture within the organization.
McCauley described Altman’s leadership style as generating chaos and deceit throughout the leadership structure at OpenAI. She recounted a specific incident concerning the launch of the AI model GPT-4 Turbo, stating that Altman inaccurately claimed that the legal department had cleared the model for deployment in India without needing approval from the safety committee.
Former directors indicated that Altman’s pattern of dishonesty produced recurring crises within the organization. McCauley cited an email from Ilya Sutskever, a former board member, that allegedly documented a range of chaotic situations stemming from Altman’s actions and statements.
David Schizer, a former law school dean, served as an expert witness on nonprofit governance for Musk’s legal team. Though nonprofit law can seem like a dry subject, his analysis was central to the question of how a nonprofit’s legal obligations intersect with its stated mission.
Working through the actions that earlier witnesses had attributed to Altman, Schizer and Musk’s lawyer questioned whether those actions aligned with OpenAI’s stated safety-first mission. He consistently argued that there were significant discrepancies.
One notable example was the allegation that OpenAI launched products without the board’s knowledge, including a claim that Microsoft tested a version of GPT-4 before the required safety reviews were complete. Schizer stressed that board-CEO collaboration is essential to upholding a nonprofit’s mission. “Boards and CEOs must work together to fulfill the mission,” he said. “If the CEO is withholding crucial information, that’s a serious issue.”





