A consulting firm called Gladstone AI released a report commissioned by the State Department this week calling for greater government involvement in the development of artificial intelligence (AI) to head off "urgent and growing risks to national security" that could rise to the level of an "extinction-level" threat to humanity.
The proposed remedy is to create a new government agency charged with policing AI, while restricting its development with heavy-handed regulations.
The report, titled "An Action Plan to Improve the Safety and Security of Advanced AI," runs about 250 pages. It was commissioned by the State Department just before the release of ChatGPT, the chatbot that was many people's first exposure to artificial intelligence technology.
The results of those encounters have been decidedly mixed, ranging from blatantly false information and political censorship to episodes in which the AI appeared to suffer something resembling a nervous breakdown. For their part, human users quickly found ways to abuse ChatGPT's powerful features.
More than 350 executives, researchers and engineers from leading artificial intelligence companies have signed an open letter warning that AI technologies in development could pose an existential threat to humanity. https://t.co/HlYVcwr8NB
— Breitbart News (@BreitbartNews) May 31, 2023
Gladstone AI did not see this as a promising beginning for the relationship between humanity and machine intelligence. The report’s authors were particularly concerned about the next step in its evolution, artificial general intelligence (AGI), a “transformative technology with profound implications for democratic governance and global security.”
AGI refers to an advanced AI system that can "outperform humans in all economically and strategically relevant areas, including creating actionable long-term plans that are likely to work under real-world conditions."
The nightmare scenario is loss of control, which the report defines as a potential "failure mode" in which future AI systems become so advanced that they elude all human efforts to limit their impact.
A loss of control could escalate into catastrophes comparable to weapons of mass destruction in the information age, "including events that lead to the extinction of humanity."
The report's authors borrowed this concept from Sam Altman, CEO of OpenAI, the creator of ChatGPT. Altman was one of more than 300 signatories of the Statement on AI Risk, published in May 2023, which declared that "mitigating the risk of extinction from AI should be a global priority alongside societal-scale risks such as pandemics and nuclear war."
Altman felt it was impossible to stop AI research, or even pause it in any meaningful way, because even if American researchers stopped, Chinese researchers would not. He instead urged the development of precautionary standards that could be adopted by researchers around the world, which is essentially what the Gladstone AI report recommends.
Hollywood blockbuster director Christopher Nolan has warned that artificial intelligence is on the verge of an “Oppenheimer moment.” https://t.co/2qXYsRHb4b
— Breitbart News (@BreitbartNews) July 18, 2023
The report, submitted to the State Department, recommended the creation of an entirely new U.S. federal agency to manage AI research and the introduction of strict regulations, including caps on the computing power that can be used by certain AI systems. The proposed cap is fairly close to the capacity of today's most powerful computer systems and would effectively freeze technological development.
The new federal agency would also tightly lock down AI code, criminalizing its distribution beyond the companies that created it, with violations enforced, of course, by the new government's computer police.
The authors called for such intrusive government interventions because they were concerned that the race to develop AGI would make cutting-edge "frontier" companies reckless.
"Frontier AI labs face strong and immediate incentives to scale their AI systems as quickly as possible," the report said. "Though some may invest out of genuine concern, they face no immediate incentive to invest in safety and security measures that provide no direct economic benefit."
Tight controls over research were proposed because the authors feared that AI software will continue to improve in quality until it outruns the limits of modern processors. One of the proposed roles of the federal AI agency would be to put the brakes on software development so that the AGI genie does not suddenly emerge from chipsets that will be considered obsolete a few years from now.
These ideas seem to run counter to Altman's warning about what China would do even if the United States stopped. The Gladstone AI team is well aware of this, acknowledging in pre-publication interviews that nothing would prevent artificial intelligence researchers from simply leaving the United States and continuing their work in less restrictive jurisdictions.