A group of current and former employees of prominent artificial intelligence companies, including OpenAI and Google DeepMind, has published an open letter calling for greater transparency and stronger whistleblower protections in the AI industry.
The open letter, signed by 11 current and former OpenAI employees and two current and former Google DeepMind employees, highlights growing concerns about the potential harms of artificial intelligence and the lack of proper safety oversight within the industry. The employees argue that AI companies possess a wealth of non-public information about the capabilities, limitations, and risks of their systems, but currently have little obligation to share this information with governments and none to share it with civil society.
The letter calls for a “right to raise awareness about artificial intelligence” and urges adherence to four principles of transparency and accountability. These include a clause stating that companies should not force employees to sign non-disparagement agreements that would prohibit them from publicly reporting risk-related AI issues, as well as a mechanism for employees to anonymously raise their concerns with executives.
The employees stress the importance of their role in holding AI companies accountable to the public, given the lack of effective government oversight, and they argue that extensive non-disclosure agreements prevent them from raising concerns beyond the companies themselves, which may be failing to address such issues.
OpenAI defended its practices in a statement, saying the company had hotlines and other mechanisms for reporting problems and did not release new technology until proper safety measures were in place. But the letter came after two senior OpenAI employees, co-founder Ilya Sutskever and lead safety researcher Jan Leike, resigned from the company last month. Leike alleged that OpenAI had abandoned a safety culture in favor of “shiny products.”
In recent years, concerns about the potential harms of artificial intelligence have grown as the AI boom has left regulators scrambling to keep up with technological advances. While AI companies have publicly stated their efforts to develop the technology safely, researchers and employees have warned that there is insufficient oversight to ensure AI tools do not exacerbate existing societal harms or create entirely new ones.
Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.