California lawmakers are intensifying their efforts to regulate artificial intelligence (AI), positioning the state in contrast to Republicans who are advocating for a national ban on such regulations.
As discussions around AI gain momentum, President Trump and GOP members have sought to roll back regulations they believe hinder innovation. California, meanwhile, is moving ahead with initiatives aimed at creating frameworks for AI oversight.
The state occupies a unique position: it is home to Silicon Valley, the heart of the AI industry, and could significantly shape future AI regulation.
“We dominate artificial intelligence. We don’t have any friends,” remarked California Governor Gavin Newsom, reflecting on the state’s pivotal role.
He added, “With so much leadership concentrated here in California, we have a responsibility to guide responsible innovation.”
Recently, the California Legislature passed several AI-related bills before its session concluded in mid-September.
Among them is Senate Bill 53, which mandates that developers of large AI models create a framework to assess and mitigate serious risks. This bill is currently awaiting the governor’s signature.
Andrew Lokay, a senior research analyst at Beacon Policy Advisors, noted, “Given California’s size, the regulations it enacts might effectively serve as a national standard.”
Many companies, he added, simplify compliance by applying California’s rules across their operations beyond state lines.
Washington, D.C. is taking notice.
Sriram Krishnan, a senior AI policy advisor in the White House, argued that California should not be allowed to dictate AI rules for the entire country.
Meanwhile, Rep. Kevin Kiley (R-Calif.) acknowledged California’s role as a hub of global innovation but questioned the need for AI regulations.
“The idea that this is the most powerful entity in human history seems a bit far-fetched,” he commented during a recent hearing.
Kiley warned that California could end up setting de facto national AI policy, arguing that a federal framework would be the more suitable way to head that off.
With an emphasis on fostering innovation, the Trump administration and some Republicans are advocating against state laws that they believe may impede technological progress.
Earlier this year, some Republicans attempted to include language in Trump’s signature legislative package to prohibit state AI regulations for a decade.
This exposed divisions within the GOP, as members such as Sen. Marsha Blackburn (R-Tenn.) and Rep. Marjorie Taylor Greene (R-Ga.) voiced concerns over states’ rights and preemptive protections regarding AI. Ultimately, the Senate voted overwhelmingly to strip the provision.
Despite this, Sen. Ted Cruz (R-Texas), chair of the Senate Commerce Committee, said recently that efforts to impose a moratorium are “not dead at all.”
The focus on state laws aligns with Trump’s AI Action Plan, which seeks to restrict federal AI-related funding to states with burdensome regulations and directs the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to evaluate whether state AI rules interfere with their mandates.
According to Lokay, California’s push for AI regulation could add momentum to federal efforts to preempt state legislation, though he noted significant hurdles remain for passing any such moratorium.
Beyond the GOP’s internal conflicts, Congress has historically struggled to enact tech legislation. Issues such as online safety and digital privacy for minors have been repeatedly sidelined, and while there is interest in AI, a coherent federal framework still seems far off.
Matt Calkins, CEO of Appian, shared his changing perspective on the issue, stating, “A year ago, I thought the federal government should handle this. Now, it seems they are not moving forward.”
He suggested that the aggressive stance at the federal level regarding AI regulations could lead states like California to take the lead, which he finds unfortunate but possibly necessary.
Sentiment has shifted in favor of California’s SB 53, with several AI companies publicly backing the bill earlier this month.
The bill, introduced by California Sen. Scott Wiener (D), is seen as a refined version of last year’s SB 1047, which was turned down by Newsom.
SB 1047 proposed stricter requirements, mandating safety tests for AI models before release and holding developers accountable for serious harms, which drew criticism from some California Democrats in Congress.
Anthropic, an AI firm, had previously expressed qualified support for SB 1047, saying that after several amendments its benefits likely outweighed its costs.
Though SB 53 still faces some criticism, the pushback appears lighter this year. Among supporters is Rep. Ro Khanna (D-Calif.), who described the new measure as a “strong start,” pointing to improvements made to limit excessive downstream liability.
Newsom hinted at support on Wednesday, saying the state had pursued a bill that struck a balance, collaborating with industry without capitulating to it.