Concerns Over Superintelligent AI Growth
A varied group of notable individuals, from tech innovators to political figures, has come together to call for a pause on developing artificial intelligence that surpasses human cognitive abilities. The statement, endorsed by prominent names such as Apple co-founder Steve Wozniak and Richard Branson, warns that superintelligent AI could bring serious consequences, including economic displacement and threats to human autonomy, dignity, and even safety.
As reported by CNBC, the statement was released on Wednesday with support from more than 850 prominent figures, calling for a halt to the development of what they term “superintelligence” — a category of AI that would theoretically exceed human intelligence in virtually all cognitive tasks. The supporters include not only Wozniak and Branson but also notable AI pioneers Yoshua Bengio and Geoffrey Hinton.
The idea of superintelligence is attracting considerable attention in the tech community, especially as major companies like Meta and OpenAI race to release more advanced large language models. Meta has even branded its LLM division “Meta Superintelligence Labs.” The statement’s signers, however, highlighted the dangers tied to superintelligence, pointing to risks ranging from personal economic displacement to broader concerns about civil liberties, dignity, national security, and even existential threats.
The statement advocates for a moratorium on superintelligence development until there is significant public backing and a scientific agreement on its safe creation and management. The signatories encompass a broad spectrum of professionals from various sectors, including tech executives, academics, media figures, religious leaders, and a bipartisan collection of former U.S. politicians, such as Mike Mullen, a former Chairman of the Joint Chiefs of Staff. Interestingly, conservative media personalities like Steve Bannon and Glenn Beck also added their signatures to this statement.
There’s a growing divide in the tech industry between those who view AI as a beneficial force and those who see it as risky and in need of regulation. Even leaders of top AI companies, including Elon Musk and Sam Altman, have voiced concerns about the perils of superintelligence. Altman, currently the CEO of OpenAI, has previously said that the development of superhuman machine intelligence poses perhaps the greatest threat to human survival.
Computer scientist Yoshua Bengio addressed this risk in a statement, warning that AI systems could surpass most people’s capabilities in a wide range of cognitive tasks within a few years. While he acknowledged the potential of such progress to help address global problems, he strongly underscored the substantial risks involved. Bengio emphasized that it is crucial to scientifically determine how to design AI systems that cannot harm people, while also advocating for greater public input on decisions that will shape our collective future.