OpenAI forms safety council as it trains latest artificial intelligence model | Artificial intelligence (AI)

OpenAI says it has established a committee on safety and security and has begun training a new AI model to replace the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “important safety and security decisions” regarding the company’s projects and operations.

The safety committee’s launch comes amid a swirling controversy over the company’s approach to AI safety, which drew attention after researcher Jan Leike resigned and criticized OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “Superalignment” team the pair co-led to focus on long-term AI risks.

OpenAI said it “recently began training its next-generation models” and that its AI models lead the industry in capability and safety, but did not address the controversy. “We welcome the robust discussion at this critical time,” the company said.

AI models are predictive systems trained on massive datasets to generate text, images, video and human-like conversation on demand. Frontier models are the most powerful, cutting-edge AI systems.

The safety committee is dominated by OpenAI insiders, including CEO Sam Altman, chairman Bret Taylor, and four of OpenAI’s technical and policy experts, along with outside board members Quora CEO Adam D’Angelo and former Sony general counsel Nicole Seligman.

The committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards and to make recommendations to the board within 90 days, after which the company will publicly announce which recommendations it is adopting “in a manner that is consistent with safety and security.”
