OpenAI Whistleblower Warns of 70% Chance AI Could Destroy Humanity

A former OpenAI governance researcher has made a frightening prediction that there is a 70 percent chance that AI will destroy or cause catastrophic damage to humanity.

In a recent interview with The New York Times, Daniel Kokotajlo, a former OpenAI governance researcher who signed an open letter alleging that employees are silenced from raising safety concerns, accused the company of ignoring the enormous risks posed by artificial general intelligence (AGI) because decision-makers are so enamored with its possibilities. “OpenAI is very excited about building AGI,” Kokotajlo said. “And they’re recklessly racing to be the first to do it.”

In this illustration taken in Brussels, Belgium on Dec. 12, 2022, the ChatGPT website screen is displayed on a mobile device with the OpenAI logo. (Photo by Jonathan Raa/NurPhoto via Getty Images)

OpenAI chief Sam Altman (Kevin Dietsch/Getty)

Kokotajlo’s most alarming claim is that there is roughly a 70 percent chance that AI will doom humanity, odds that would be unacceptable for any major life decision, yet OpenAI and its peers are forging ahead anyway. The term “p(doom),” which refers to the probability that AI will doom humanity, is a contentious topic in the world of machine learning.

After joining OpenAI in 2022 and being asked to forecast the technology’s progress, the 31-year-old became convinced not only that the industry would achieve AGI by 2027, but that it was highly likely to cause catastrophic harm or even destroy humanity. Kokotajlo and his colleagues, current and former employees of Google DeepMind and Anthropic, along with “AI godfather” Geoffrey Hinton, who left Google last year over similar concerns, are arguing for a “right to warn” the public about the risks posed by AI.

Kokotajlo was so convinced of the huge risks AI poses to humanity that he personally urged OpenAI CEO Sam Altman to spend more time implementing guardrails to control the technology and “prioritize safety” rather than continuing to make AI smarter. Altman seemed to agree with him at the time, but Kokotajlo felt it was little more than lip service.

Fed up, Kokotajlo quit the company in April, telling his team in an email that he had “lost confidence that OpenAI will act responsibly” as it continued to work on building near-human-level AI. “The world is not ready, and neither are we,” he wrote. “And I am concerned that we are forging ahead despite that and justifying our actions.”

Read the full report at The New York Times here.

Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.