
AI Safety Experts Are Fleeing OpenAI

OpenAI, the developer of the popular AI assistant ChatGPT, has seen a mass exodus of its artificial general intelligence (AGI) safety researchers, according to former employees.

Fortune reports that former OpenAI governance researcher Daniel Kokotajlo recently revealed that nearly half of the company's staff focused on the long-term risks of highly capable AI have left in the past few months. The departures include prominent researchers such as Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and co-founder John Schulman. These exits follow the high-profile departures in May of chief scientist Ilya Sutskever and researcher Jan Leike, who co-led the company's "superalignment" team.

Founded with a mission to develop AGI in a way that "benefits all of humanity," OpenAI has long employed a number of researchers dedicated to "AGI safety"—techniques to ensure that future AGI systems do not pose catastrophic or existential dangers. But Kokotajlo suggests that the company's focus has shifted toward product development and commercialization, and away from research to ensure the safe development of AGI.

Kokotajlo, who joined OpenAI in 2022 and left in April 2024, said the departures have been gradual, with AGI safety staff numbers dropping from around 30 to just 16. He attributed the departures to "giving up" as OpenAI continues to prioritize product development over safety research.

The change in culture at OpenAI was becoming apparent to Kokotajlo even before the boardroom drama that saw CEO Sam Altman briefly fired and then rehired in November 2023 and the removal of three directors focused on AGI safety. Kokotajlo believes the matter is settled, that there is no turning back, and that Altman and President Greg Brockman have consolidated their power since then.

While some AI research leaders believe the AI safety community's concerns about AGI's potential threat to humanity are overblown, Kokotajlo expressed disappointment in OpenAI's opposition to California's SB 1047, a bill aimed at putting guardrails on the development and use of the most powerful AI models.

Despite the departures, Kokotajlo acknowledged that some remaining employees have been moved to other teams where they can continue similar projects, and that the company has also established a new safety and security committee and appointed Carnegie Mellon University professor Zico Kolter to the board of directors.

As the race to develop AGI intensifies among major AI companies, Kokotajlo warned against groupthink and the possibility that companies, driven by the opinions and incentives of the majority within their organizations, could convince themselves that competitive success is inherently good for humanity.

Read the full report at Fortune.

Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.
