Third Day of Musk’s Testimony
During the third day of testimony from Elon Musk in the ongoing legal battle involving OpenAI and its CEO, Sam Altman, Judge Yvonne Gonzalez Rogers frequently intervened to keep discussions focused on pertinent legal matters rather than broader concerns about AI’s risks to society.
The lawsuit, the culmination of a long-standing dispute, centers on Musk’s accusation that Altman misused OpenAI—a nonprofit the two co-founded in 2015—for personal gain. Musk claims that in doing so, Altman betrayed the public’s trust.
Things heated up early in the session as Musk’s attorney, Stephen Moro, attempted to outline the potential hazards of artificial intelligence, asserting, “This is a real risk. We could all die because of artificial intelligence.” Judge Rogers was quick to steer the conversation away from such a broad discussion, remarking on the irony of Musk’s endeavors in the very field he claimed was dangerous. “Some people don’t want to put the future of humanity in the hands of Mr. Musk,” she said, indicating that the court wouldn’t delve into those concerns.
With Altman present in the courtroom during the testimony, the trial could have significant consequences for OpenAI and its flagship product, ChatGPT. Musk is seeking around $134 billion in damages from OpenAI and Microsoft, one of the organization’s major backers. His lawsuit contends that his financial contributions and early support were crucial to OpenAI’s founding.
OpenAI’s defense counters Musk’s assertions by pointing out that he never followed through on a promised $1 billion investment—a fact he admitted during his testimony. It also argues that Musk left the organization only after he was denied control over it and his attempts to merge it with Tesla were rebuffed.
Musk said on the stand that he “intentionally chose to establish OpenAI as a nonprofit organization in the public interest.” That claim sits uneasily alongside his launch of his own commercial AI venture, xAI, in 2023. In a notable acquisition, Musk’s SpaceX took over xAI, which is now valued at over $1.2 trillion.
Meanwhile, OpenAI has been reshaping its own structure, converting into a for-profit entity while retaining oversight from a nonprofit foundation. Just last month, it secured a $122 billion funding round for commercial ventures.
Reiterating his principal grievance, Musk stated during cross-examination, “The charter clearly says it started out as a nonprofit, not for the financial benefit of any particular individual. You can’t steal charity—that’s what it comes down to.” Throughout the session, he engaged in a contentious back-and-forth with OpenAI attorney Bill Savitt, complaining that frequent interruptions made it difficult to give complete answers.
Under questioning, Musk denied having any influence over the algorithms of X, the social media platform he acquired in 2022, which is integrated with Grok, the chatbot created by xAI. He did, however, acknowledge using OpenAI technology to aid in developing xAI, explaining that it is standard practice to use one AI system to validate another.
Musk also admitted that he donated only $38 million to OpenAI instead of fulfilling his $1 billion pledge, citing a loss of confidence in the organization. He concluded his testimony by addressing concerns about creating an “army of robots,” stating that xAI “does not manufacture any weapons” and aims to prevent any human-machine conflict reminiscent of scenarios in movies like *Terminator*. “As you can see in the movie, that’s not a good situation,” he remarked, emphasizing he doesn’t aim to create a future where “AI kills us all.”
This ongoing clash between major figures in technology underscores the pressing need for society to understand and manage AI’s development, as the stakes for governance, personal use, and broader societal impact continue to rise.