
OpenAI nixes team focused on risk of AI causing ‘human extinction’

Less than a year after forming it, OpenAI has disbanded a team focused on the risks posed by advanced artificial intelligence. And an outgoing executive warned Friday that safety “has taken a backseat to shiny products” at the company.

The Microsoft-backed ChatGPT maker has disbanded its so-called “Superalignment” team, which was formed last July and, according to a blog post announcing it, tasked with creating safeguards for artificial general intelligence (AGI) systems that “could lead to the incapacitation or even extinction of humanity.”

The dissolution of the team, first reported by Wired, came days after OpenAI executives Ilya Sutskever and Jan Leike announced their resignations from the Sam Altman-led company.

OpenAI CEO Sam Altman responded to Leike’s tweet with a post of his own. AFP (via Getty Images)

“OpenAI has a great responsibility on behalf of all humanity,” Leike wrote in a series of X posts on Friday. “But in recent years, safety culture and processes have taken a backseat to shiny products. We are long overdue to take the impact of AGI incredibly seriously.”

Sutskever and Leike, who led OpenAI’s safety team, resigned shortly after the company announced an updated version of ChatGPT that allows for real-time user conversations and language translation.

The surprise reveal drew immediate comparisons to the 2013 science fiction film “Her,” which featured a superintelligent AI voiced by actress Scarlett Johansson.

When asked for comment, OpenAI referred to Altman’s tweet in response to Leike’s thread.

“We are extremely grateful for @janleike’s contributions to OpenAI’s alignment research and safety culture, and are very sad to see him leave,” Altman said. “He’s right, we still have a lot of work to do. We’re committed to doing it. We’ll have a longer post in the coming days.”

Some members of the safety team have been redeployed to other parts of the company, CNBC reported, citing a person familiar with the situation.

Jan Leike warned that safety “has taken a backseat to shiny products.” Jan Leike/X

AGI broadly refers to AI systems with cognitive abilities equal to or greater than those of humans.

In its announcement last July about the formation of the safety team, OpenAI said it was dedicating 20% of its available computing power to long-term safety measures and hoped to resolve the issue within four years.

Sutskever did not say what led to his resignation in his X post on Tuesday, but he expressed confidence that OpenAI would build AGI that is both safe and beneficial under Altman and the company’s other leaders.

It’s worth noting that Sutskever was one of the four OpenAI board members who took part in the shocking move to oust Altman from the company last fall. The coup triggered a governance crisis that nearly led to the collapse of OpenAI.

Ilya Sutskever also left OpenAI. Reuters

OpenAI ultimately brought back Altman as CEO and announced a revamped board of directors.

A subsequent internal investigation pointed to a “breakdown of trust between the former board of directors and Mr. Altman” prior to his dismissal.

According to the March release, investigators also concluded that the dispute between the leaders was unrelated to the safety or security of OpenAI’s advanced AI research, “the pace of development, OpenAI’s finances, or statements made to investors, customers, or business partners.”
