Investigation into Sam Altman Reveals Troubling Traits
Journalists Ronan Farrow and Andrew Marantz conducted a detailed investigation into Sam Altman, the influential figure behind OpenAI, uncovering a concerning pattern of deceit and antisocial behavior. A former board member described Altman as combining two uncommon traits: an unusually strong desire to be liked, paired with a striking indifference to the consequences of his deception.
The article, published in The New Yorker, paints a thorough picture of Altman’s life, including the period when he was briefly ousted from the company. It highlights significant distrust of him among key figures in the AI sector, many of whom described him as “antisocial.” His list of adversaries extends beyond Ilya Sutskever, a co-founder of OpenAI, and Dario Amodei, CEO of Anthropic, both of whom have fraught relationships with him. As Farrow and Marantz note, even former board members regard him as someone who strays from the truth.
Interestingly, the term “antisocial” recurred among those who spoke about Altman. Aaron Swartz, a fellow Y Combinator alumnus and a gifted but troubled programmer who died in 2013, reportedly warned friends against trusting Altman, allegedly saying, “You need to understand that you can never trust Sam. He’ll do anything.” Moreover, some senior Microsoft executives said the collaboration with Altman had soured despite CEO Satya Nadella’s longstanding loyalty, accusing him of distorting agreements and misrepresenting facts. For example, even as OpenAI reaffirmed Microsoft’s status as its primary cloud provider, Amazon simultaneously announced a significant deal to resell OpenAI’s services. Microsoft executives worry that this infringes on their exclusive rights, although OpenAI asserts that its deal with Amazon respects prior agreements.
The investigation points out that Altman’s antisocial tendencies not only strained his relationships with colleagues but also led to serious practical problems. The launch of ChatGPT, for instance, lacked essential safety measures.
Internal communications suggest that board members grew increasingly anxious that Altman’s actions could jeopardize the safety of OpenAI’s products. At a December 2022 meeting, Altman assured the board that the safety features for the upcoming GPT-4 model had been approved by the safety committee. A board member later discovered, however, that significant features, such as allowing users to “tweak” the model for specific tasks, had in fact not been approved. In a related incident, an employee discreetly informed a board member that Microsoft had prematurely deployed an early version of ChatGPT in India without the necessary safety checks. “It was completely ignored,” a researcher said at the time.
In a book titled Code Red: Left, Right, China, and the Race to Control AI, Wynton Hall discusses the biases inherent in AI technology due to influences from Silicon Valley. He asserts that with someone like Sam Altman at the helm of an AI company, it’s crucial to develop strategies that harness the benefits of AI while mitigating biases and potential downsides.