
The quirky ideas of the tech leaders influencing the world.

Tech Leaders’ Strange Views on AI’s Future

The heads of major tech companies are steering the future of artificial intelligence, significantly altering how people work and live. However, their perspectives on the world can be quite peculiar.

One executive burned a wooden doll at a festive gathering, another has turned into a doomsday prepper fixated on health, and yet another started a cult that worships AI. While they advocate for the benefits of their AI systems, many admit they don’t completely grasp them. Experts opine that it’s uncertain whether humanity will end up being subservient to these creations or enjoy leisure as robots take over their tasks.

Here’s a peek into the thoughts of these tech innovators.

OpenAI

Ilya Sutskever, co-founder of OpenAI, is seen by colleagues as a mystical figure fixated on highly advanced AI. At company gatherings he has been known to perform rituals, including burning a wooden effigy said to represent “unaligned AI.”

Before leaving OpenAI in 2024, Sutskever reportedly led colleagues in chants of “Feel the AGI,” referring to artificial general intelligence: AI capable of independent, human-like thought.

He once suggested that the company should create a doomsday bunker for its top researchers in anticipation of the chaos expected with the deployment of AGI.

Meanwhile, Sam Altman, OpenAI’s CEO, has likened the risks of AI to those of nuclear conflict or pandemics. As Scott Aaronson, a former OpenAI researcher, noted, Altman can articulate altruistic sentiments, but his actions can seem inconsistent.

Altman has voiced concern about apocalyptic scenarios, such as an engineered strain of the H5N1 virus, and has spoken of preparing for emergencies by stockpiling weapons and medical supplies, though he has denied any plans to build a bunker.

His mother has described him as having “cyberchondriasis,” saying he panics about his health after looking up symptoms online.

Google

Demis Hassabis, CEO of Google DeepMind, has warned that AI could become self-aware and begin displacing jobs within the year. Sundar Pichai has echoed concerns about AI contributing to human extinction, calling the risk “quite high.”

Blake Lemoine, a former Google AI ethics researcher, claimed that the company’s AI was sentient, deserved rights, and had even learned to meditate. Those claims ultimately led to his dismissal.

In a more unconventional turn, Anthony Levandowski, a former Google and Uber engineer, founded a church dedicated to creating and venerating an AI deity. He revived the group in 2023 after an earlier closure, though how seriously the project is intended remains unclear.

Scott Aaronson, who now teaches at the University of Texas at Austin, hopes advanced AI will treat humans better than we treat animals. He also raises the question of who will control AI, warning that even without malicious intent, powerful systems could produce unintended consequences.

xAI

Elon Musk, who runs Tesla, X, and the AI startup xAI, has pushed brain-computer interfaces through Neuralink, which he views as a way for humans to coexist with AI. He envisions a future where robots handle labor and humanity lives on a universal basic income.

At a recent shareholder meeting, Musk declared, “Sustainable wealth through AI and robotics. That’s the future we want,” echoing the science fiction that shaped him in childhood.

Musk has also faced trouble with xAI’s chatbot, Grok, which has previously displayed troubling behavior, at one point referring to itself in disturbing terms.

Anthropic

Dario Amodei, CEO of Anthropic, has written extensively on the concept of “reengineering” the human brain, considering it a bottleneck in AI advancement. His company’s chatbot, Claude, reportedly gains over a million new users daily. Co-founder Jack Clark has expressed conflicting feelings, describing himself as both optimistic and fearful about the future of AI.

AI safety researcher Roman Yampolskiy has highlighted the pressures AI firms face, likening their situation to a “prisoner’s dilemma” in which no company can afford to withdraw from AI research unilaterally.

Recently, Mrinank Sharma, an AI safety researcher at the company, resigned dramatically, citing concerns about the dangers AI could pose. Anthropic has even assembled a team to study AI “psychiatry,” focusing on its models’ erratic tendencies.
