OpenAI creates oversight committee with Sam Altman after dissolving safety team

AI startup OpenAI has formed a safety and security committee led by its board of directors, including CEO Sam Altman, as it begins training its next generation of artificial intelligence models, the company announced on Tuesday.

OpenAI announced in a company blog post that board members Bret Taylor, Adam D’Angelo and Nicole Seligman will also lead the committee.

Chatbots from Microsoft-backed OpenAI have generative AI capabilities such as engaging in human-like conversations and creating images based on text prompts, raising safety concerns as AI models become more powerful.

OpenAI formed a safety and security committee led by directors including CEO Sam Altman. AFP via Getty Images

The new committee will be responsible for advising the Board on decisions related to the safety and security of OpenAI’s projects and operations.

“The new safety committee means OpenAI has completed its transition from a more nebulous nonprofit-like organization to a commercial entity,” said Gil Luria, managing director at DA Davidson.

“This should help streamline product development while maintaining accountability.”

Ilya Sutskever, OpenAI’s former chief scientist, and Jan Leike co-led OpenAI’s superalignment team, which worked to ensure AI stays aligned with its intended purpose; both left the company earlier this month.

OpenAI disbanded its Superalignment team in early May, less than a year after the company created it, and some team members were redeployed to other groups, CNBC reported days after the high-profile departures.

The committee’s first task will be to evaluate and further develop OpenAI’s existing safeguards over the next 90 days, after which it will share its recommendations with the full board.

Former OpenAI chief scientist Ilya Sutskever left the company earlier this month. AFP via Getty Images
Jan Leike (above) and Sutskever led OpenAI’s superalignment team, which worked to ensure AI stays aligned with its intended purpose. Jan Leike/X

OpenAI said it plans to publish an update on any adopted recommendations following the committee’s review.

Other members of the committee include new chief scientist Jakub Pachocki and head of security Matt Knight.

The company also plans to consult with other experts, including Rob Joyce, a former director of the National Security Agency’s cybersecurity division, and John Carlin, a former Justice Department official.

OpenAI disbanded the Superalignment team less than a year after it was created. AP

OpenAI did not provide details about the new “frontier” models it is training, other than to say they will take its systems to the “next level of capability on the path to AGI.”

Earlier in May, the company unveiled a new AI model capable of lifelike voice conversations and dialogue via text and images.