OpenAI co-founder Ilya Sutskever announced this week a new artificial intelligence venture focused on safely developing “superintelligence.”
The new company, Safe Superintelligence Inc. (SSI for short), has a sole purpose: to create safe AI models that are more intelligent than humans.
“Building safe superintelligence (SSI) is the most important technical problem of our time,” the company announced in a social media post. “We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”
Sutskever left OpenAI last month, following a failed attempt to oust CEO Sam Altman, a move he initially supported. The attempted ouster, which Sutskever later said he regretted, sparked turmoil within the company centered on whether OpenAI’s leaders were prioritizing business opportunities over AI safety.
SSI was co-founded by Sutskever, along with former Apple AI leader Daniel Gross and OpenAI engineer Daniel Levy.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the three said in a statement. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead, so we can scale in peace.”
They said that this singular focus meant they were “not distracted by management overhead or product cycles” and that their goals were “insulated from short-term commercial pressures.”
Sutskever told Bloomberg that SSI will not develop any products or pursue any work beyond creating superintelligence. He did not disclose who is funding the company or how much money the effort has raised.