For many years, AI has been a source of anxiety for Americans. Movies and TV shows often showcase worst-case scenarios, shaping a negative perception that dates back to HAL’s infamous refusal in “2001: A Space Odyssey.” This fear, fueled by Hollywood, is now impacting policy decisions in various states. Unfortunately, many of these policies seem rooted more in fear than in a balanced understanding of the technology.
AI does pose risks—it can fabricate information (such as false citations in legal filings), compromise privacy, and be misused. There is a clear need for human oversight of AI-assisted decision-making, and robust protections against misuse are essential. Recently, the White House emphasized the importance of developing a comprehensive national policy on AI.
However, the Trump administration also acknowledged that regulatory hurdles could stand in the way of progress. In December, President Trump issued an executive order meant to establish a national framework for AI regulation. He indicated that having a patchwork of different laws across states could stifle innovation. “We need a unified rule that avoids unnecessary burdens,” he stated, adding that this cohesive framework is key for the U.S. to excel in the AI sector.
The executive order tasked the Secretary of Commerce with producing a report on the state of AI regulations across the country. This report aims to identify state laws that may be seen as too restrictive, helping to prioritize cases for the Justice Department’s AI Litigation Task Force.
Under this directive, states already facing scrutiny, such as Colorado, along with California, New York, and Illinois, could risk losing substantial federal funding. Although the order focuses on states, it is less clear how it will affect cities. The Justice Department has recently set up a new division aimed at addressing local laws that might contradict federal regulations, signaling a readiness to challenge rules that conflict with national policies.
Centralizing the oversight of AI does seem like a sensible approach. City and state leaders, who might not fully grasp the complexities of AI and machine learning, could unintentionally hinder advancements in this field by, say, restricting the use of historical anonymized data for training algorithms.
Regardless of the federal funding at stake, local laws governing AI should align with federal policies designed to foster growth across the industries that depend on it.
As America's economy evolves, it is crucial that AI innovators operate under a consistent national set of regulations rather than navigating a confusing patchwork of state laws.
There’s no need to fear the future. AI is a transformative force across many sectors—business, science, medicine, and beyond. It enhances efficiency, accuracy, and creativity. Cities and states should harness this tool for the benefit of their residents, moving towards well-considered regulations that support innovation, rather than creating barriers.