The Biden administration will begin implementing new rules set out in an executive order aimed at regulating artificial intelligence, but some experts are skeptical about how useful the new rules will be.
“The executive order’s focus on model size and computational power rather than actual use cases is misguided. This approach risks creating compliance burdens on companies without meaningfully improving accountability and transparency,” Jake Denton, research fellow at the Heritage Foundation’s Center for Technology Policy, told Fox News Digital.
Denton’s comments come as the Biden administration begins implementing new rules under the executive order, including one that requires developers of AI systems to disclose safety test results to the government, The Associated Press reported Monday.
White House: Developers of “powerful AI systems” will now have to report safety test results to the government
President Biden speaks about government regulation of artificial intelligence at the White House on October 30, 2023. (AP Photo/Evan Vucci)
The White House AI Council met on Monday to discuss progress on the three-month-old executive order, coinciding with a 90-day deadline set by the order under the Defense Production Act for AI companies to begin sharing information with the Department of Commerce, according to the report.
Ben Buchanan, the White House special assistant on AI, told The Associated Press that the government is “interested in knowing whether AI systems are safe before they are released to the public,” adding that the president has made clear that companies must meet that bar.
But Denton is skeptical that the order will have the results it advertises.
“The order’s blurred boundaries and vaguely defined reporting requirements likely result in selective and inconsistent enforcement,” Denton said. “On the other hand, substantial information asymmetries between regulators and companies are likely to render oversight ineffective.”
Christopher Alexander, chief analytics officer at Pioneer Development Group, expressed skepticism about the new rules, noting that the government has struggled to regulate other technology industries, such as cryptocurrency, and raised concerns about censorship.
White House asks Congress to act after ‘alarming’ AI-generated images of Taylor Swift
“The Biden administration’s problematic crypto regulations are a perfect example of government dictating to industry rather than working with industry to seek appropriate regulation,” Alexander told Fox News Digital. “I have also found the U.S. government’s aggressive censorship efforts on social media over the past few years deeply disconcerting, and any government surveillance effort must be closely monitored by Congress for accountability. It is important for Congress to clearly define who will oversee the monitors.”
Nevertheless, Alexander argued that establishing industry standards is important, noting that “the private sector motivations of AI companies are not necessarily in the interests of the general public.”

In this photo illustration, the Google Bard AI logo is displayed on a smartphone and the Google logo is displayed on a PC screen. (Pavlo Gonchar/SOPA Images/LightRocket, Getty Images)
Biden’s executive order aims to close that gap by creating a common set of standards for future AI safety.
“I think the government is deciding on the future direction. The fact is that we don’t yet have standards to test the safety of these models, so this order doesn’t have a lot of teeth yet,” Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), told Fox News Digital.
“But some consensus process is emerging. Eventually, there will probably be prompts, either curated or randomly generated, used to test new models in conversation, and advanced AI models will likely be used to run that testing.”
ChatGPT chief warns of ‘superhuman’ skills AI could develop
Siegel likened the process to current drug approval regulations, which he argued are now well understood and followed by drug developers.
“Eventually we’re going to deploy this to test AI models, but honestly, we should have deployed it to test social media applications,” Siegel said.
Ziven Havens, policy director for the Bull Moose Project, argued that the administration has reached a critical juncture in AI regulation and will need to balance safety standards against the risk of stifling innovation.

ChatGPT on laptop (Cyber Guy)
“If the Biden administration aims to be successful in regulating AI, it will use the information provided to create reasonable standards that protect both consumers and companies’ ability to innovate,” Havens told Fox News Digital.
“If the administration fails to meet this moment and instead imposes stifling regulations, America will see its global lead in AI technology wane. Waving the white flag on American innovation would be a disaster for both the economy and national security.”
The White House did not immediately respond to Fox News Digital’s request for comment.
