Safe Superintelligence (SSI), a new firm co-founded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in funding to help develop safe artificial intelligence systems that can far exceed human capabilities, a company executive told Reuters.
SSI, which currently has 10 employees, plans to use the funding to acquire computing power and hire top talent.
Split between Palo Alto, California, and Tel Aviv, Israel, the company will focus on building a small, reliable team of researchers and engineers.
The company declined to disclose its valuation, but a source familiar with the matter put the valuation at $5 billion.
The funding highlights how some investors remain willing to spend big on top talent focused on fundamental AI research.
That is despite broader waning interest in funding such companies, which can take a long time to turn a profit, and a string of startup founders departing for larger tech companies.
Investors included top venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel.
NFDG, an investment partnership run by Nat Friedman and SSI CEO Daniel Gross, also participated.
“It's important to us to be surrounded by investors who understand, respect and support our mission to move straight into safe superintelligence, especially as we research and develop products over several years before we bring them to market,” Gross said in an interview.
AI safety, which refers to preventing AI from causing harm, is a hot topic amid concerns that rogue AI could act against the interests of humanity or even cause the extinction of the human race.
A California bill that would impose safety regulations on AI companies has divided the industry: companies like OpenAI and Google oppose it, while Anthropic and Elon Musk's xAI support it.
Sutskever, 37, is one of the most influential technologists in the field of AI and co-founded SSI in June with Gross, who previously ran Apple's AI initiative, and Daniel Levy, a former OpenAI researcher.
Sutskever is SSI's chief scientist, Levy is principal scientist, and Gross is in charge of computing power and fundraising.
Sutskever said the new venture made sense because "I identified a mountain that's a bit different from what I was working on."
Last year, he served on the board of directors of OpenAI's nonprofit parent company, which voted to fire OpenAI CEO Sam Altman due to a “breakdown in communication.”
Within days, he reversed that decision and joined nearly all of OpenAI's employees in signing a letter demanding Altman's reinstatement and the board's resignation.
In the aftermath, however, his role at OpenAI was diminished: he was removed from the board and left the company in May.
After Sutskever's departure, the company disbanded his "Superalignment" team, which worked to keep AI aligned with human values in preparation for the day it surpasses human intelligence.
Unlike OpenAI's unorthodox corporate structure, which was put in place for AI safety reasons and allowed Altman to be fired, SSI has a conventional for-profit corporate structure.
SSI is now focused on hiring people who fit its culture.
Gross said he spends hours vetting candidates to ensure they have “good character” and that he looks for people with exceptional ability, rather than placing undue emphasis on qualifications or experience in the field.
“What gets us excited is when we find people who aren't into the scene or the hype, but are interested in the work,” he added.
SSI plans to partner with cloud providers and chip companies to fund its computing power needs, but said it has not yet decided which companies to partner with. AI startups often partner with companies such as Microsoft and Nvidia for their infrastructure needs.
Sutskever was an early proponent of scaling, the hypothesis that massive amounts of computing power can improve the performance of AI models.
This idea and its execution sparked a wave of AI investment in chips, data centers, and energy, laying the foundation for advances in generative AI like ChatGPT.
Sutskever said he will approach scaling differently than his former employer did, without providing details.
“Everybody just says the scaling hypothesis. Everybody ignores the question, what are we scaling?” he said.
“Some people work long hours and just go down the same path faster. That's not our style. But if you do something different, it becomes possible for you to do something special.”