OpenAI co-founder sought doomsday shelter for protection against ‘rapture’

The co-founder of OpenAI, the company behind ChatGPT, suggested building an underground bunker for its leading researchers in anticipation of a cataclysmic event tied to an advanced form of artificial intelligence, which he referred to as the “rapture,” according to a recently published book.

Ilya Sutskever, often viewed as the mastermind behind ChatGPT, held a meeting with fellow scientists at OpenAI during the summer of 2023.

“We’re definitely going to build a bunker before we release AGI,” Sutskever reportedly said, according to individuals who attended.

He explained that the bunker was intended to shield OpenAI’s essential scientists from the fallout he anticipated.

“Of course, it’ll be up to individuals whether they want to enter the bunker or not,” he added.

This conversation was initially highlighted by Karen Hao in her upcoming book, “Empire of AI: Sam Altman’s OpenAI Dreams and Nightmares.”

The exchange was first detailed in an excerpt of the book published in The Atlantic.

These bunker references were not a one-off; two other sources confirmed to Hao that Sutskever raised the idea repeatedly in internal conversations.

One researcher remarked, “There’s a faction (and Ilya is among them) that thinks creating AGIs will bring great joy—literally, the Rapture.”

Though Sutskever has not publicly commented, the notion of a refuge for AGI developers emphasizes the profound anxiety felt by those at the helm of this powerful technology.

According to sources, Sutskever has often been perceived as somewhat of a visionary or mystic within OpenAI.

Balancing this perception, he’s also one of the most technically skilled individuals responsible for ChatGPT and similar language models, playing a key role in elevating the company’s status on a global scale.

After a period of internal tension, Sutskever reportedly split his focus between advancing AI capabilities and ensuring AI safety.

Interestingly, the belief that AGI could drastically alter civilization isn’t exclusive to Sutskever.

In May 2023, OpenAI’s CEO Sam Altman signed a letter warning about possible “extinction risks” posed by AI technologies. However, the mention of the bunker reveals more profound and personal apprehensions within OpenAI’s leadership.

The tension between these fears and OpenAI’s aggressive commercial ambitions came to a head later in 2023, when Sutskever and then-Chief Technology Officer Mira Murati orchestrated a brief coup that ousted Altman from the company.

Their concern stemmed from the belief that Altman was sidestepping internal safety protocols and consolidating too much control over the company’s future, sources told Hao.

Sutskever, who once firmly supported OpenAI’s original mission of developing AGI for humanity’s benefit, reportedly grew increasingly disillusioned.

Both Sutskever and Murati expressed to board members their diminishing trust in Altman.

“I don’t believe Sam should be the one in control of AGI,” Sutskever stated.

The board’s decision to remove Altman didn’t last long.

Just days later, pressure from investors, employees, and Microsoft mounted, and Altman was reinstated. Eventually, both Sutskever and Murati left the company.

The concept of the bunker, while never formally proposed or planned, now symbolizes the overarching skepticism among AI experts regarding the potential consequences of their innovations.

It encapsulates the depth of unease felt by OpenAI’s leaders about unleashing their groundbreaking technology, as well as their desire to navigate what they see as either a revolutionary or tumultuous era.

OpenAI and Sutskever have been approached for comment.
