A recent report commissioned by the U.S. Department of State revealed serious safety concerns expressed by employees at major artificial intelligence laboratories, including OpenAI, Google, and Mark Zuckerberg’s Meta, highlighting a lack of adequate safeguards and the potential national security risks posed to the United States.
TIME reports that some of the world’s top AI researchers have serious concerns about the safeguards and incentives that drive their organizations, according to a government-commissioned report authored by employees at Gladstone AI. The authors conducted interviews with more than 200 experts, including employees at pioneering AI labs such as OpenAI, Google DeepMind, Meta, and Anthropic. All of these labs are actively pursuing the development of artificial general intelligence (AGI), a hypothetical technology that could perform most tasks at or above human level.
Google and Alphabet CEO Sundar Pichai attends a press event at Google Berlin announcing Google as the new official partner of the women’s national team. (Christoph Soeder/picture alliance via Getty Images)
The report notes that employees at these labs privately shared their concerns with the authors, saying their organizations are prioritizing rapid progress over robust safety protocols. One individual raised concerns about their lab’s “lax approach to safety,” driven by a desire to avoid delays in developing more powerful AI systems. Another employee expressed concern that containment measures would be insufficient to prevent an AGI from escaping control, even though their lab believes AGI could arrive in the near future.
Cybersecurity risks are also highlighted, with the report stating that, in the personal judgment of many technical staff, the security measures in place at many cutting-edge AI laboratories are insufficient to resist a sustained intellectual property theft campaign by sophisticated attackers. Given the current state of security at these labs, the authors warn, attempts to steal AI models are likely to succeed absent direct government support.
Jérémie Harris, CEO of Gladstone and one of the report’s authors, emphasized the seriousness of the concerns raised by employees. “I cannot overstate the level of concern some people at these labs have about the decision-making process and how executive incentives factor into important decisions,” he told TIME.
The report also warns against over-reliance on AI evaluations, which are commonly used to test AI systems for unsafe capabilities and behaviors. According to the authors, these evaluations can be undermined and manipulated, because an AI model can be superficially tweaked, or “fine-tuned,” to pass an evaluation if its questions are known in advance. The report cites an expert with “direct knowledge” of one AI lab’s practices who judged that the unnamed lab was gaming its evaluations in this way.
Read more at TIME here.
Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship issues.
