OpenAI’s Superalignment team was tasked with devising ways to steer and manage “superintelligent” AI systems. The company had promised the team 20% of its computing resources, but that commitment never materialized, and as a result the team was unable to fully carry out its work, according to TechCrunch.
As a result, several team members, including co-lead Jan Leike, decided to resign. Leike is a former DeepMind researcher who contributed to the development of ChatGPT, GPT-4, and InstructGPT, the predecessor to ChatGPT.
In a long thread posted to X, Leike wrote: “I joined OpenAI because I thought it was the best place in the world to do this research. However, I have been disagreeing with OpenAI’s leadership about the company’s core priorities for quite some time, until I finally reached my breaking point.”
“I believe much of our bandwidth should be spent preparing for next-generation models, including security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” he continued. “These problems are very difficult to solve, and I’m concerned that we’re not on track to get there.”
Now, OpenAI’s Superalignment team has reportedly disbanded, just a year after the company announced the group, according to CNBC. The report states that OpenAI co-founder Ilya Sutskever has also decided to leave the company.
Leike said that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
When OpenAI’s Superalignment team was announced in 2023, the company said it was focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than we are.”
The announcement went on to say, “Superintelligence could be the most impactful technology ever invented by humans, helping us solve many of the world’s most important problems. But such power could also be extremely dangerous, potentially leading to the disempowerment of humanity or even human extinction.”
Co-founder and CEO Sam Altman commented on Leike’s thread: “We are very grateful for @janleike’s contributions to OpenAI’s alignment research and safety culture, and are very sad to see him leave. As he said, we still have a lot of work to do, and we are committed to doing it. I will say more in a longer post in the coming days.”
Leike wrote: “For the past few months, my team has been sailing against the wind. At times we have struggled for [computing resources], and it has become increasingly difficult to carry out this important research.”
Leike argued that OpenAI needs to put more emphasis on the safety of its technology: “Building machines that are smarter than humans is an inherently risky endeavor.”
