AI companies should calculate the probability that a superintelligent system will escape human control, an expert advises.

Before releasing powerful AI systems, experts are calling for safety calculations similar to those performed ahead of Robert Oppenheimer's first nuclear test.

Max Tegmark, a prominent advocate for AI safety, carried out calculations akin to those of physicist Arthur Compton before the Trinity test, and found a 90% probability that a highly advanced AI would pose an existential threat.

The U.S. government moved forward with the Trinity test in 1945, following assurances that the atomic bomb would not ignite the atmosphere and endanger humanity.

In a paper by Tegmark and three students at MIT, the researchers recommend calculating the “Compton constant,” which represents the chance that superintelligent AI could elude human oversight. Compton noted in a 1959 interview that he approved the test after assessing the odds of a runaway reaction as “slightly less” than one in three million.
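Compton's reasoning can be read as a simple decision rule: proceed only if the estimated probability of catastrophe falls below an agreed tolerance. The sketch below is purely illustrative — the function names, the independence assumption, and the example numbers are not from Tegmark's paper, only the one-in-three-million figure comes from Compton's 1959 recollection.

```python
def compton_constant(p_estimates):
    """Combine several independent loss-of-control probability estimates
    (e.g. one per failure mode) into one overall probability.
    Assumes independence, which is an illustrative simplification."""
    p_all_ok = 1.0
    for p in p_estimates:
        p_all_ok *= (1.0 - p)  # probability that no single mode fails
    return 1.0 - p_all_ok      # probability that at least one fails

def go_no_go(p_catastrophe, tolerance):
    """Compton-style decision rule: approve only if the estimated
    probability of catastrophe is below the stated tolerance."""
    return p_catastrophe < tolerance

# Compton's 1945 threshold, as he recalled it in 1959: odds of a
# runaway reaction were "slightly less" than one in three million.
tolerance = 1 / 3_000_000

# Hypothetical per-failure-mode estimates (illustrative numbers only).
overall = compton_constant([1e-7, 2e-7])
print(go_no_go(overall, tolerance))
```

The point of the sketch is Tegmark's: the decision hinges on an actual number compared against an explicit threshold, not on a general feeling of optimism.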

Tegmark emphasized that AI companies must rigorously evaluate whether artificial superintelligence (ASI)—a theoretical form of AI that surpasses human intelligence—can remain under human control.

“Firms developing superintelligence should calculate the Compton constant, which reflects the risk of losing control,” he stated. “It’s insufficient to simply feel optimistic; they need to present actual probabilities.”

He argued that a collective assessment of the Compton constant by various companies could foster the “political will” necessary for a global safety framework surrounding AI.

Tegmark, an MIT physics professor and AI researcher, co-founded the Future of Life Institute, a nonprofit dedicated to the safe development of AI. In 2023, the organization released an open letter urging a pause in the development of powerful AIs, garnering over 33,000 signatures from notable figures including Elon Musk and Steve Wozniak.

The letter emerged months after the debut of ChatGPT, which marked a new phase in AI development, and warned that AI labs were locked in an "out-of-control race" to deploy ever more capable digital minds.

Tegmark shared these thoughts with the Guardian amidst discussions with AI specialists, including tech industry experts, representatives from safety organizations, and academics.

The Singapore Consensus on Global AI Safety Research Priorities report, produced by Tegmark together with the esteemed computer scientist Yoshua Bengio, outlines three priority areas for AI safety research: developing methods to measure the impact of current and future AI systems, specifying how AI systems should behave, and managing and controlling system behavior.

Reflecting on the report, Tegmark noted that discussion of safe AI development has regained momentum since U.S. Vice President JD Vance said the future of AI was "not going to be won by hand-wringing about safety."
