As artificial intelligence reshapes American life, Congress is taking steps to impose safeguards. The House recently passed a budget reconciliation package whose Section 43201 would bar state and local governments from regulating AI models, AI systems, and automated decision-making processes for the next decade.
In response, the Senate introduced its own version of the moratorium, conditioning federal broadband funding on states refraining from regulating AI.
Supporters argue that these moratoriums are necessary to prevent inconsistent regulations that could jeopardize the nation’s AI competitiveness.
However, this approach could undermine meaningful national efforts to regulate the more harmful practices of large tech companies. With the societal and economic challenges brought on by AI, there’s a risk of sidelining state legislatures, which play a crucial role in safeguarding the interests of children and working families.
If Congress doesn’t act, states remain the frontline defense against Big Tech. Texas, Florida, and Utah are already tackling issues like online child protection, data privacy, and platform censorship.
Section 43201 puts many existing state laws at risk. It defines “automated decision-making systems” so broadly that the term could encompass core features of social media platforms, such as TikTok’s content feed and Instagram’s recommendations.
At least twelve states have laws requiring parental consent for minors using these platforms. Though those laws target social media specifically, the platforms’ algorithmic feeds could be construed as “automated decision systems,” bringing the laws within the moratorium’s reach.
Moreover, this Section could block state privacy laws that limit the use of algorithms, including those involving AI that analyze consumer behavior.
The moratorium raises concerns beyond its sheer scope. It would undercut American federalism, and with it the state laws best positioned to deliver on the promises officials like Vice President JD Vance have made for AI. At the Paris AI Summit, Vance cautioned against seeing AI merely as a destructive force that automates jobs away.
Vance envisions policies where AI boosts worker productivity, leading to better wages and improved living conditions. Achieving that vision largely depends on state-level initiatives. Lawmakers in various states are already exploring innovative solutions.
For example, Tennessee’s ELVIS Act protects artists’ voices and likenesses from unauthorized AI manipulation. Utah’s Artificial Intelligence Policy Act mandates transparency when consumers interact with AI-generated content.
Similar initiatives in states like Arkansas and Montana are establishing legal frameworks around digital property rights related to AI.
However, all these efforts are jeopardized. State governments are vital in navigating the complexities that new technologies introduce. Federalism allows states to experiment and foster competition, revealing both successful and flawed regulatory approaches.
The wider social and economic implications of technology are especially pressing as we evaluate AI’s effects on youth and employment. A coalition of sixty advocacy groups has warned that AI chatbots could pose significant dangers to children, pointing to distressing cases of teens facing mental health crises exacerbated by AI interactions.
Leaders in the tech industry are also raising alarms. Anthropic CEO Dario Amodei predicts AI could push unemployment rates as high as 20 percent within five years.
Innovation, while often disruptive, needs boundaries to prevent harm. That is why forty state attorneys general, from both parties, have voiced opposition to Section 43201, warning that it would override carefully crafted state laws aimed at managing AI’s risks.
To be clear, not all state legislation is created equal. Some states, like California and Colorado, impose regulations reminiscent of European standards that could burden smaller tech companies and open-source developers. But instead of dismantling federalism wholesale, Congress should pursue narrow, carefully crafted preemption targeting high-risk AI bills like those in California and Colorado.
Until a solid federal AI framework is in place, states should retain their ability to act, ensuring that AI advancements support a thriving middle class and foster innovation and competitiveness throughout America.
There is significant potential for the future of American AI, and maintaining democratic oversight is essential to achieving that potential.

