OpenAI, Google ignoring risks in race for advanced AI, should allow ‘right to warn’ public: employees

A group of AI whistleblowers argue that tech giants such as Google and ChatGPT developer OpenAI are locked in a reckless race to develop technology that could put humanity at risk, and called for the public’s “right to warn” in an open letter on Tuesday.

The open letter was signed by current and former employees of OpenAI, Google DeepMind, and Anthropic. “AI companies have strong financial incentives to avoid effective oversight,” they warned, pointing to the lack of federal regulation of the development of advanced AI.

The researchers point to potential risks, including the spread of misinformation, the exacerbation of inequality, and even “possible human extinction due to loss of control over autonomous AI systems,” especially as OpenAI and other companies pursue so-called artificial general intelligence — capabilities that could match or exceed human intelligence.

Sam Altman is the CEO of OpenAI, the developer of ChatGPT. Getty Images

“Companies are racing to develop and deploy more powerful artificial intelligence while ignoring the risks and impacts of AI,” Daniel Kokotajlo, a former OpenAI employee and one of the letter’s organizers, said in a statement. “I decided to leave OpenAI because I lost hope that it would act responsibly, especially in its pursuit of artificial general intelligence.”

“They and others have embraced a ‘move fast and break things’ approach, which is the opposite of what’s needed for such a powerful yet poorly understood technology,” Kokotajlo added.

The letter has garnered the backing of two prominent experts known as the “godfathers of AI”: Geoffrey Hinton, who last year warned that the threat of rogue AI is “more urgent” for humanity than climate change, and Canadian computer scientist Yoshua Bengio. Prominent British AI researcher Stuart Russell also endorsed the letter.

The letter calls on major AI companies to adhere to four principles to increase transparency and protect whistleblowers who speak out publicly.

These include agreements not to retaliate against employees who raise safety concerns and to support an anonymous system for whistleblowers to alert the public and regulators about risks.

The letter was also supported by renowned researcher Geoffrey Hinton, known as the “Godfather of AI.” Reuters

The letter also asks AI companies to commit to a “culture of open criticism” and not to enter into or enforce non-disparagement or confidentiality agreements, except where needed to protect trade secrets.

As of Tuesday morning, the letter’s signatories included 13 AI researchers, 11 of them current or former OpenAI employees, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.

“There should be a way to share information about risks with independent experts, governments and the public,” Saunders said. “Currently, the people who know the most about how cutting-edge AI systems work and the risks associated with their deployment are unable to speak freely because of potential retaliation or overly broad non-disclosure agreements.”

DeepMind is a Google-owned AI research lab. Reuters

Other signatories include former Google DeepMind employee Ramana Kumar and current Anthropic employee Neel Nanda.

Asked for comment, an OpenAI spokesperson said the company has a track record of not releasing AI products until necessary safeguards are in place.

“We are proud of our track record of delivering the most capable and safe AI systems, and believe in a scientific approach to addressing risks,” OpenAI said in a statement.

The letter was signed by 11 current and former OpenAI employees. NurPhoto via Getty Images

“Given the importance of this technology, we agree that a rigorous discussion is important, and we will continue to engage with governments, civil society and other communities around the world,” the company added.

Google and Anthropic did not immediately respond to requests for comment.

The letter was released just days after it was revealed that OpenAI had disbanded its “Superalignment” safety team, whose responsibilities included creating safety measures for artificial general intelligence (AGI) systems that could potentially disempower or even exterminate humanity.

The two OpenAI executives who led the team, co-founder Ilya Sutskever and Jan Leike, have since resigned from the company, with Leike slamming the firm on his way out, claiming that safety had “taken a backseat to shiny products.”

Meanwhile, former OpenAI board member Helen Toner, who was part of the group that briefly ousted Sam Altman from his position as CEO last year, has alleged that Altman repeatedly lied during his tenure.

Toner claimed that she and other board members were not told by Altman that ChatGPT would launch in November 2022, and instead learned of its debut on Twitter.

Former OpenAI board member Helen Toner has criticized Sam Altman’s leadership. Getty Images for Vox Media

OpenAI has since established a new safety oversight committee, which includes Altman, as it begins training a new version of the AI model that powers ChatGPT.

The company denied Toner’s allegations and noted that an outside investigation determined that safety concerns were not a factor in Altman’s firing.
