AI compared to nuclear weapons, could potentially lead to human extinction: report

The U.S. government has a “clear and urgent need” to act because the rapid development of artificial intelligence could lead to the extinction of humanity through weaponization and loss of control, according to a government-commissioned report.

The report, titled “Action Plan to Improve the Safety and Security of Advanced AI,” states that “the rise of advanced AI and AGI has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” TIME magazine reported.

“Given the increasing national security risks posed by the weaponization and loss of control of rapidly expanding AI capabilities, and especially given that the continued proliferation of these capabilities amplifies both risks, there is a clear and urgent need to intervene,” says the report, published by Gladstone AI Inc.

The report proposes a blueprint for intervention developed over 13 months, during which researchers spoke with more than 200 people, including officials from the U.S. and Canadian governments, major cloud providers, AI safety organizations, and security and computing experts.

The report, titled “Action Plan to Improve the Safety and Security of Advanced AI,” states that the rise of advanced AI and AGI “has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” ©Paramount/Courtesy of Everett Collection

The plan begins by establishing interim safeguards for advanced AI, which would later be formalized into law and then internationalized.

Some of the measures could include creating a new AI agency to regulate levels of AI computing power, requiring AI companies to seek government permission to deploy new models above a certain threshold, and even considering outlawing the publication of the inner workings of AI models, such as under open-source licenses, TIME reported.

Measures could include a new AI agency setting limits on levels of AI computing power and AI companies being required to seek government permission to deploy new models above a certain threshold. Phong Lamai Photo – Stock.adobe.com

The report also recommended that the government strengthen controls over the production and export of AI chips.
