Congress must preserve state authority in AI governance
Artificial intelligence is rapidly transforming how governments serve their constituents — from optimizing emergency response to streamlining licensing and zoning. But as AI systems become more powerful and pervasive, the question of how to govern these tools responsibly demands an answer now.
That urgency has intensified over the past month. A leaked executive order from the Trump administration would block states from enforcing their own AI regulations, and House leadership is now seriously considering inserting similar preemption language into the National Defense Authorization Act. Meanwhile, California’s new AI law has prompted federal pushback, and Sen. Ted Cruz has announced he will introduce legislation challenging it directly.
Washington is weighing whether to strip states of the authority to govern the AI systems they deploy every day. As members of the Association for Computing Machinery’s U.S. Technology Policy Committee — a nonpartisan, nonprofit body of computer scientists, technologists and policy experts — we believe this would be a grave mistake. AI’s impact on state-level decisions is too broad, too varied and too context-specific to wait while Washington delays and deliberates.
AI’s impact is happening now, not in some distant future.
Officials across federal, state and local governments hold unique civic responsibilities: to maximize social benefits while mitigating potential harms for their respective constituencies by providing the right balance of investments, incentives, enforceable guardrails and jurisprudence. States and local governments, for example, have a distinct obligation to serve their constituents by providing planning, zoning and licensing services.
AI is impacting many local services, but a one-size-fits-all approach doesn’t work for geospatial and object-detection tasks like bike lane planning or pothole detection. States manage diverse services — emergency response, housing, education, health, utilities and public safety — where AI must align with local laws, needs and conditions to be effective.
A federal governance framework would best serve both the American people and the highly innovative businesses that are rapidly developing and deploying AI systems. Such a framework would also give businesses the predictability and consistency they need to keep moving at warp speed to deploy the next big thing. But while a national framework is preferable, Congress should recognize that it needs to include more than a moratorium on state laws. AI doesn’t fit neatly into existing regulations, and innovative approaches to mitigating harms are required to balance the promise of AI with its misuse. States are putting in the work to find this balance.
States must retain their constitutional authority to govern areas that directly impact their constituents. If Congress were to impose a freeze on state AI regulation, it would limit the ability of state agencies to mitigate issues in health, zoning, licensing, education, transportation and policing. Removing this authority could lead to several unintended downstream effects.
One effect would be more unverified systems flooding procurement pipelines. In a regulatory vacuum, vendors can market unvalidated AI “solutions” to officials who may not have the technical capacity to distinguish quality from hype, further eroding public trust in AI. Government technology procurement can also take up to two years on average, further complicating states’ ability to deploy safe, effective and reliable AI systems.
Another effect would be higher costs and increased complexity for state agencies. Without the authority to set their own guardrails, state and local governments would be forced to rely on ad hoc contractual solutions, precisely the kind of patchwork Congress claims it is trying to avoid.
It would also stifle beneficial AI adoption. If states cannot manage or mitigate AI risks, they may avoid deploying transformative tools altogether — stalling innovation where it is needed most.
Instead of sidelining states, Congress should empower them. As U.S. Supreme Court Justice Louis Brandeis observed in 1932, states are laboratories of democracy. Many are already testing mixes of incentives, including transparency, accountability and contestability requirements, alongside sandboxes and public-private partnerships. Because they are closer to local businesses and communities, states can craft responsive regulation and act quickly on emerging AI harms before they become national crises. Supporting state-led innovation tailored to local conditions can amplify U.S. leadership in AI around the globe.
AI companies understandably want to avoid a patchwork of conflicting regulations, but the bigger threat today is the absence of clear, enforceable safeguards. Without clear standards, businesses may exaggerate product capabilities, label software as “AI” just to skirt oversight, or avoid deploying AI in critical areas due to safety concerns. Sector-specific laws add confusion and leave contracts to fill the gap, fueling uncertainty and a race to the bottom on oversight.
The future of AI governance depends on collaboration. Congress and Donald Trump’s administration should reject proposals that block state AI laws — whether through the NDAA or executive action — and instead work with states to develop a shared governance model that protects the public while encouraging responsible innovation.
Empowering states is not a barrier to progress. It’s how to ensure AI strengthens America’s communities, rather than weakening the institutions that serve them.
Larry Medsker is a former chair of the Association for Computing Machinery’s U.S. Technology Policy Committee. He is a physics research professor at the University of Vermont.
Jeremy Epstein is a former chair of ACM’s U.S. Technology Policy Committee. He is an adjunct professor of cybersecurity and privacy at the Georgia Institute of Technology.