Congress — again — considers preempting state AI laws, this time bundling with child online safety bills
After numerous stalled attempts by the White House and Congress to block states from enforcing their artificial intelligence laws, yet another effort is brewing in Congress, this one potentially bundling AI preemption with child online safety legislation.
The executive director of the National Association of State Chief Information Officers, a group that represents states’ top technology officials, sent a letter to congressional leaders Wednesday urging them to reject any continued attempts at state AI law preemption, even those coupled with laws designed to protect kids. The letter, signed by NASCIO’s Doug Robinson, notes that such efforts would “strip states of the ability to address real AI risks in their communities and provide needed protection for children.”
“While we share the objective of protecting children online, preemptive blanket restrictions on state regulation of AI are the wrong approach,” Robinson wrote. “In the absence of robust, comprehensive federal AI legislation, states have stepped forward to develop their own solutions that protect children, safeguard consumer data and strengthen cybersecurity.”
Robinson wrote that the latest plan would still fold the unpopular preemption of state AI laws into the National Defense Authorization Act, a scheme that circulated last week, but would now rely on the current interest in child online safety measures to win inclusion.
According to the child-advocacy group Enough Abuse, 45 states had, as of August, enacted laws designed to address the harms of AI-generated or computer-edited child sexual abuse materials. The National Center for Missing and Exploited Children told The New York Times last July that it had received 485,000 reports of AI-related CSAM in the first half of the year alone, after just 67,000 reports in all of 2024. Yet even many advocacy groups aren’t sold on preempting state protections.
In a recent opinion piece for Time, Michael Kleinman of the Future of Life Institute, a nonprofit research and advocacy group that promotes responsible management of technologies like AI and nuclear weapons, said that House Republicans exhibited “ghoulish” behavior in pushing state AI law preemption so soon after a Nov. 19 subcommittee hearing that highlighted numerous harms exacted by AI-powered chatbots.
He pointed to seven ongoing lawsuits against OpenAI, filed by families who believe ChatGPT bears responsibility for the deaths of their children, and the recent recall of the talking plush bear Kumma, which used a large language model to impart wisdom about sexual fetishes and the most likely places to find knives.
“It’s not hard to see why the vast majority of Americans want AI regulation to protect kids and more,” Kleinman’s piece concludes. “The mystery is why some of those in Congress do not.”
The House Energy and Commerce Committee on Tuesday considered a package of 19 bills designed to protect kids online, including the Kids Online Safety Act, which easily passed the Senate last year, though not without some raising concerns that it could conflict with free speech rights. Alex Whitaker, NASCIO’s director of government affairs, said the plan to bundle state AI law preemption with online child safety bills is a strategic one, but not one that resolves past concerns with congressional attempts to preempt state autonomy.
“The maneuver was to placate Democrats, … and Republicans as well, who had concerns that state preemption is going to put child protection and internet safety at risk,” Whitaker said. “But there is a concern as well that even with these federal laws, this is not solving the specific issue of child safety online, because still at play is the issue that states won’t be able to respond adeptly and proactively.”
Federal attempts to preempt state AI laws have become difficult to count as they morph in Congress or are revived in executive orders. President Donald Trump last week circulated a draft order that would have established an AI litigation task force to challenge state AI statutes and restricted funding for states with “onerous” AI laws, such as those in California and Colorado. That came after the president in July published his AI Action Plan, telegraphing a desire to pull grant funding from states that enforce “burdensome” or “unduly restrictive” AI laws.
The Verge reported Tuesday that a recent draft order would direct federal agencies to punish states for restrictive AI laws, all while consulting directly with the tech venture capitalist David Sacks. Many of the appropriate technology agencies would, reportedly, be omitted or hold greatly reduced roles, including the National Institute of Standards and Technology, the Office of Science and Technology Policy, the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation. Instead, the work of punishing states would be carried out by the Department of Justice, Department of Commerce, Federal Trade Commission and Federal Communications Commission.
Past attempts by Congress to preempt state AI laws were defeated after opponents from both parties complained of overreach, among other concerns, despite insistence by Commerce Secretary Howard Lutnick that such a measure was needed to protect national security and “required to stay ahead of our adversaries and keep America at the forefront of AI.”
A proposal in May was met with opposition from a group of more than 260 state legislators representing 50 states, who said preemption would remove a broad range of protections against hazards ranging from deepfake scams to algorithmic discrimination. Another attempt was defeated in July after Sen. Marsha Blackburn rallied opposition, warning that the “provision could allow Big Tech to continue to exploit kids, creators, and conservatives.”
In addition to opposition from state technology officials to the latest attempt at AI preemption, 36 attorneys general sent a letter to Congress on Tuesday echoing concerns about AI threats: “We are seeing scams powered by AI-generated deepfakes, social media profiles, and voice clones. We are also deeply troubled by sycophantic and delusional generative AI outputs plunging individuals into spirals of mental illness, suicide, self-harm, and violence.” The letter nonetheless stressed the imperative of allowing states to address such issues as they arise.
“Broad preemption of state protections is particularly ill-advised because constantly evolving emerging technologies, like AI, require agile regulatory responses that can protect our citizens,” the AG letter reads. “This regulatory innovation is best left to the 50 states so we can all learn from what works and what does not.”
States have introduced hundreds of pieces of legislation designed to mitigate the potential risks and actual harms that large language models have visited upon society over the last several years. Thirty-eight states have adopted or enacted AI laws this year, according to the National Conference of State Legislatures, addressing a dizzying range of concerns, from collective bargaining agreements to the administration of professional licenses.
But across the many imaginings of how state AI laws could be preempted, the motivation hasn’t changed, according to Travis Hall, director for state engagement with the Washington think tank Center for Democracy and Technology.
“The motivation for this is to limit regulatory oversight of the development and deployment of these tools,” Hall said. “This remains an attempt to essentially create a vacuum of accountability and oversight.”
Beyond preempting state AI laws, Hall warned that the latest plan would also broadly “preempt a whole ream of things in the kid safety space” that “have nothing to do with kids safety but have to do with things like workers’ rights or not being discriminated against or not having prices be done in a way that’s based on algorithms.”
Hall said the nonprofit he works for doesn’t like some of the child protection laws that states are passing, for free speech reasons, but that simply eliminating them, “without any kind of replacement, is completely wrongheaded.”