A group of former and current OpenAI employees has published a letter online expressing concerns about the impact and serious risks that artificial intelligence (AI) technology poses to humanity.
The letter, posted on righttowarn.ai, was signed by five former OpenAI employees, one current and one former Google DeepMind employee, four anonymous current OpenAI employees, and two anonymous former OpenAI employees.
The group said the risks posed by AI “range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the potential extinction of humanity due to loss of control over autonomous AI systems.”
“AI companies themselves are aware of these risks, as are governments and other AI experts around the world,” they wrote. “We are hopeful that with sufficient guidance from the scientific community, policymakers, and the public, these risks can be sufficiently mitigated.”
Several former and current OpenAI employees wrote a letter expressing concern that, if left unregulated, the race for artificial intelligence could lead to the extinction of the human race. (Dilara Irem Sancar/Anadolu/File/Getty Images)
The group says AI companies have strong financial incentives to avoid effective oversight and to withhold substantive information about the capabilities and limitations of their systems.
For example, companies have non-public information about the adequacy of their safeguards and the risk levels of different types of harm that could result from advances in AI.
Daniel Kokotajlo, a member of the group and a former researcher in OpenAI’s governance division, told The New York Times that OpenAI is eager to build artificial general intelligence (AGI) and is in a “reckless race” to be first.
A former researcher in OpenAI’s governance division said the company is in a “reckless race” to be first to deploy artificial general intelligence. (iStock)
Kokotajlo previously predicted AGI would arrive around 2050 but told The New York Times that, given how quickly the technology is advancing, he now sees a 50% chance it will arrive by 2027.
He also said he believes there is a 70% chance that advanced AI will destroy humanity or cause catastrophic damage.
The group said it doesn’t expect companies to share information voluntarily.
“Without effective government oversight of these companies, current and former employees are among the few people who can hold the companies accountable to the public,” they wrote.
They said broad non-disclosure agreements prevent the group’s members from raising their concerns, and that ordinary whistleblower protections are inadequate because they focus on illegal activity, while many of the risks they worry about are not yet regulated and therefore not illegal.
OpenAI CEO Sam Altman (right) is working with lawmakers around the world to shape artificial intelligence regulations. (Win McNamee/File/Getty Images)
OpenAI told FOX Business that it agrees with the letter’s call for government regulation of the AI industry and said it was among the first in the industry to call for it.
The company also said it is in regular discussions with policymakers around the world and is encouraged by progress.
An OpenAI spokesperson also said the company has a track record of not releasing its technology until necessary safeguards are in place.
The company’s products are used by 92% of Fortune 500 companies, the spokesperson said, and those businesses would not sign contracts with OpenAI if its systems were not safe and reliable.
“We are proud of our track record of delivering the most capable and safest AI systems, and believe in a scientific approach to addressing risks,” an OpenAI spokesperson said. “Given the importance of this technology, we agree that rigorous discussion is essential, and we will continue to engage with governments, civil society, and other communities around the world.”
“That’s why we have channels for employees to voice their concerns, including an anonymous integrity hotline and a Safety and Security Committee led by board members and company safety leaders,” the spokesperson added.