AI whistleblowers warn of dangers, call for transparency

A group identifying itself as current and former employees of frontier artificial intelligence (AI) companies called for greater transparency in the industry and protection for whistleblowers in an open letter published Tuesday.

“Absent effective government oversight of these companies, current and former employees are the few people who can hold the companies accountable to the public,” the letter reads, “but extensive non-disclosure agreements prevent us from raising our concerns with anyone other than the companies, who may not be addressing these issues.”

The letter was signed by people who identified themselves as current and former employees of companies including OpenAI, Google’s DeepMind, and Anthropic, six of whom asked to remain anonymous.

“Normal whistleblower protections are inadequate because they focus on illegal activity, while many of the risks we are concerned about remain unregulated,” the letter continues. “Given the history of similar cases across our industry, it is understandable that some of us fear various forms of retaliation. We are not the first to face these issues or to speak out about them.”

The letter specifically calls on “leading AI companies” to adhere to “principles,” including “not entering into any agreements that prohibit them from ‘disparaging’ or criticizing companies over risk-related concerns” and supporting a “culture of open criticism.”

“We expect that with sufficient guidance from the scientific community, policymakers, and the public, these risks can be sufficiently mitigated,” the letter continues. “However, AI companies have strong economic incentives to avoid effective oversight, and we do not believe that bespoke corporate governance structures will be sufficient to change this.”

OpenAI announced last week that it was forming a safety committee to advise its board of directors on “significant safety and security decisions,” led by CEO Sam Altman and directors Adam D’Angelo, Nicole Seligman and Bret Taylor.

OpenAI is “proud of our track record of delivering the most capable and safe AI systems, and we believe in a scientific approach to addressing risk,” a company spokesperson said in an email to The Hill.

“We agree that rigorous discussion is essential given the importance of this technology, and we will continue to engage with governments, civil society and other communities around the world,” the spokesperson continued.

Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
