OpenAI Establishes Safety and Security Committee to Oversee Development of Next Generation AI Models

Addressing growing concerns around AI safety and security, OpenAI has announced the formation of a new governance body tasked with overseeing the development of future AI models, including a successor to GPT-4.

OpenAI, one of the leading artificial intelligence research companies, has set up a Safety and Security Committee within its board of directors to address concerns about responsible AI development. The committee's main purpose is to evaluate the processes and safeguards the company has in place as it begins development of its next-generation models, which are expected to bring it closer to artificial general intelligence (AGI): AI systems that can match or exceed human capabilities across a wide range of tasks.

Photo: OpenAI CEO Sam Altman speaks to reporters at the Allen & Company Media Technology Conference in Sun Valley, Idaho, on Wednesday, July 12, 2023. (David Paul Morris/Bloomberg via Getty Images)

The creation of the safety committee comes on the heels of several high-profile departures and controversy surrounding OpenAI's approach to AI safety, most notably the recent resignation of Ilya Sutskever, the company's former chief scientist, who led the "Superalignment" team focused on long-term risks, followed by Jan Leike, the team's co-leader.

The Safety and Security Committee is led by OpenAI CEO Sam Altman, board chairman Bret Taylor, and board members Adam D'Angelo and Nicole Seligman. Other key members include Head of Preparedness Aleksander Madry, Head of Safety Systems Lilian Weng, Head of Alignment Science John Schulman, Head of Security Matt Knight, and new Chief Scientist Jakub Pachocki.

Within 90 days, the committee will share its recommendations with the full board on how the company is addressing AI risks in its model development. OpenAI has indicated it may make any adopted recommendations public at a later date "in a manner consistent with safety and security."

The creation of the committee signals OpenAI's awareness of the concerns being expressed by the industry and the public about AI safety. "We are proud to build and release models that lead the industry in both capability and safety, but we welcome robust debate at this important moment," the company's blog post read.

OpenAI's progress on the next version of GPT comes amid growing competition in the AI field. Elon Musk's xAI recently announced a $6 billion funding round at a $24 billion valuation, as the Tesla CEO seeks to challenge the startup he once backed. Meanwhile, OpenAI has courted controversy after releasing a virtual assistant with a voice similar to that of actress Scarlett Johansson, without her consent.

Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.
