‘Global Priority:’ AI Industry Leaders Warn of ‘Risk of Extinction’

More than 350 executives, researchers, and engineers from leading artificial intelligence companies have signed an open letter warning that the AI technologies they are building could one day threaten human existence.

The New York Times reports: More than 350 executives, researchers, and engineers from top AI companies have signed an open letter warning the world that the AI technologies they are building could pose a threat to human existence.

The OpenAI logo on screen with the ChatGPT website displayed on mobile on December 12, 2022 in Brussels, Belgium. (Photo by Jonathan Raa/NurPhoto via Getty Images)

OpenAI founder Sam Altman, ChatGPT creator (TechCrunch/Flickr)

A statement issued by the nonprofit Center for AI Safety reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signatories include top executives from OpenAI, Google DeepMind, Anthropic, and other leading labs.

Notable signatories include OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei. Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won the Turing Award for their groundbreaking work on neural networks, also signed the letter.

The open letter comes at a time of growing concern about the possible harms of artificial intelligence. Recent advances in "large language models," the type of AI system that powers ChatGPT and other chatbots, have fueled fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Altman, Hassabis, and Amodei recently met with Vice President Kamala Harris and President Joe Biden to discuss AI regulation. After the meeting, Altman testified before the Senate, calling for government oversight of AI and warning that the risks posed by advanced AI systems are serious enough to justify government intervention.

According to Dan Hendrycks, executive director of the Center for AI Safety, the open letter served as a "coming out" for some industry leaders who had previously voiced their concerns about the dangers of the technology they were building only in private. "Even in the AI community, there is a common misconception that only a handful of people are worried about catastrophe," Hendrycks said. "But in reality, many people would quietly express concerns about these things."

The letter also echoes ideas put forward by OpenAI officials for carefully managing powerful AI systems. They have called for cooperation among top AI developers, expanded technical research into large language models, and the creation of an international AI safety agency, similar to the International Atomic Energy Agency, which regulates the use of nuclear material.

"If this technology goes wrong, it can go quite wrong," Altman told a Senate subcommittee. "We want to work with the government to prevent that from happening."

Read more from the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
