
California governor vetoes contentious AI safety bill

SACRAMENTO, Calif. — California Governor Gavin Newsom on Sunday vetoed a landmark bill that aimed to establish first-in-the-nation safety measures for large-scale artificial intelligence models.

The decision is a major blow to efforts to rein in a homegrown industry that is rapidly evolving with little oversight. Supporters said the bill would have created some of the first regulations for large-scale AI models in the country and paved the way for national AI safety rules.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must take the lead on AI regulation in the face of federal inaction, but suggested the proposal “could have a chilling effect on the industry.”


California Governor Gavin Newsom speaks during a press conference in Los Angeles on September 25, 2024. AP/Eric Thayer

The proposal drew fierce opposition from startups, big tech companies and several Democratic members of the U.S. House, and Newsom said its strict requirements could harm the homegrown industry.

“SB 1047, while well-intentioned, fails to consider whether an AI system is deployed in high-risk environments, involves critical decision-making, or involves the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions of large systems, so long as they are deployed. I don’t believe this is the best approach to protecting the public from the real threats posed by the technology.”

Newsom announced Sunday that the state would instead partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at mitigating the potential risks created by AI, would have required companies to test their models and publish safety protocols to prevent the models from being manipulated to, for example, destroy a state's power grid or help manufacture chemical weapons. Experts say such scenarios could become possible in the future as the industry continues to advance rapidly. It would also have provided whistleblower protections for workers.

Democratic state Sen. Scott Wiener, the bill's author, said the veto is “a setback for everyone who believes in oversight of large corporations that are making critical decisions affecting the safety and well-being of the public and the future of our planet.”

“Companies developing advanced AI systems acknowledge that the risks these models pose to the public are real and rapidly increasing. While there are admirable efforts to monitor and mitigate those risks, the reality is that the industry's voluntary commitments are unenforceable and rarely produce positive outcomes for the public,” Wiener said in a statement Sunday afternoon.


Newsom speaks about the AI Safety Act with Salesforce CEO Marc Benioff at the Dreamforce conference in San Francisco on September 17. John G. Mabanglo/EPA-EFE/Shutterstock

Wiener said the debate over the bill has dramatically advanced the issue of AI safety, and that he will continue to press the point.

The bill was among a host of measures passed by state lawmakers this year to regulate AI, fight deepfakes and protect workers. Lawmakers said they had to act this year, citing hard lessons learned from California's failure to rein in social media companies when it might have had the chance.

Supporters of the bill, including Elon Musk and Anthropic, noted that developers and experts are still trying to understand how AI models work and why.

The bill targeted systems that cost more than $100 million to build. No current AI models have reached that threshold, but some experts say that could change within the next year.

“This is due to a massive scale-up of investments within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April after accusing the company of ignoring AI risks. “This is an incredible amount of power for a private company to control without accountability, and it is also very dangerous.”

The United States is already behind Europe in regulating AI to limit risks. California's proposal was not as comprehensive as Europe's regulations, but it would have been a good first step toward putting guardrails around a rapidly growing technology that is raising concerns about job losses, misinformation, privacy violations and automation bias, supporters said.

Last year, a number of leading AI companies voluntarily agreed to follow safeguards established by the White House, including testing and sharing information about their models. California's bill would have required AI developers to follow requirements similar to those commitments, according to its supporters.

But critics, including former U.S. House Speaker Nancy Pelosi, said the bill would “kill California tech” and stifle innovation by deterring AI developers from investing in large-scale models and sharing open-source software.
