
California lawmakers demand AI firms install ‘kill switch’

Artificial intelligence (AI) companies are pushing back against demands from California lawmakers to install “kill switches” designed to mitigate potential dangers from the new technology, with some threatening to pull out of Silicon Valley altogether.

Democratic state Sen. Scott Wiener has introduced a bill that would subject tech companies to regulations enforced by a new government agency, designed to prevent AI developers from giving their products “dangerous capabilities,” like starting a nuclear war.

Wiener and other lawmakers want to put guardrails around “very large” AI systems that could spit out instructions to create chemical weapons, assist in cyberattacks, or cause other disasters resulting in at least $500 million in damages.

The bill, which is backed by some of the most prominent AI researchers, would also create a new state agency to oversee developers and provide best practices, including for more powerful models that don’t yet exist.

California Senator Scott Wiener, a Democrat, has proposed a bill to regulate AI. Getty Images

State attorneys general can also take legal action in the event of violations.

But technology companies have threatened to relocate from California if the new bill becomes law.

The bill passed the state Senate last month.

A vote in the state Legislature is expected in August, and if passed, the bill would be sent to Gov. Gavin Newsom.

A spokesman for the governor told The Washington Post, “We generally do not comment on pending bills.”

A senior venture capitalist in Silicon Valley told the Financial Times on Friday that he has fielded complaints from tech startup founders who are considering leaving California altogether in response to the bill.

“My advice to everyone asking is to stay and fight,” a venture capitalist told the Financial Times, “but this will pour cold water on the open source and startup ecosystem. Some founders will choose to leave.”

The biggest objection to the proposal from tech companies is that it would stifle innovation by discouraging software engineers from taking bold risks with their products for fear of hypothetical scenarios that may never come true.

“If you wanted to create regulations to stifle innovation, you could hardly do better than this,” Andrew Ng, an AI expert who has led projects at Google and the Chinese company Baidu, told the Financial Times.

Lawmakers have been grappling with how to regulate AI, which has rapidly advanced in recent years. AP

“It creates huge liabilities for science fiction risks and instills fear in people who dare to innovate.”

Arun Rao, lead product manager for generative AI at Meta, wrote on X last week that the bill is “unworkable” and “will be the end of open source” in California.

“The net tax impact of disrupting the AI industry and driving out companies could reach billions of dollars as both companies and high-paid workers leave,” he wrote.

Prominent Silicon Valley technology researchers have expressed concern in recent years about the rapid advancement of artificial intelligence, saying it could have dire consequences for humanity.

“I don’t think we’re ready, I don’t think we know what we’re doing, and I think we’re all going to die,” Eliezer Yudkowsky, an AI theorist considered particularly radical by his tech industry colleagues, said in an interview last summer.

Yudkowsky echoed concerns expressed by Elon Musk and other tech figures, who have called for a six-month pause on AI research.

Musk said last year that there was a “non-zero” chance that AI would launch a “Terminator-style” attack on humanity.

California AI companies are threatening to withdraw from the state over a bill that would require the installation of “kill switches.” Rafael Enrique/SOPA Images/Shutterstock

The emergence of a new generation of sophisticated AI chatbots like ChatGPT has raised concerns that artificial intelligence systems could outwit humans and run wild.

Earlier this year, European Union lawmakers gave final approval to a law that seeks to regulate AI.

Early drafts of the bill focused on AI systems performing very specific tasks, such as scanning resumes and job applications.

The phenomenal growth of general-purpose AI models, such as OpenAI’s ChatGPT, has left EU policymakers scrambling to respond.

They added provisions for so-called generative AI models, the technology underlying AI chatbot systems that can generate unique and seemingly lifelike responses, images and more.

Developers of general-purpose AI models, from European startups to OpenAI and Google, will have to provide a detailed overview of the text, images, videos and other data from the internet that was used to train their systems, and comply with EU copyright law.

Some uses of AI, such as social scoring systems that manage people’s behavior, some forms of predictive policing, and emotion recognition systems in schools and workplaces, are banned because they are deemed to pose unacceptable risks.

With Post wires
