AI safety bill advancing in Tennessee was narrowed after White House feedback, analyst claims
A bill advancing through the Tennessee legislature that would create new safety requirements for large artificial intelligence companies has reportedly been significantly narrowed in scope after receiving input from the White House.
The Artificial Intelligence Public Safety and Child Protection Transparency Act mainly seeks to regulate large AI companies that build or run advanced AI systems, requiring them to be more transparent and proactive about risks, especially those pertaining to public safety or children. These requirements include developing and publishing general safety plans for AI that could pose “catastrophic risks,” or threats that could cause major harm to large numbers of people, and explaining how the companies will reduce or prevent those risks. After coming to the state’s Senate floor Wednesday, the bill was sent to the chamber’s Commerce and Labor Committee for further evaluation.
The bill requires AI companies that offer AI tools used by minors to develop publicly available child safety protection policies that clearly show how they’ll protect kids from harmful interactions. These plans must specifically address risks like physical harm and emotional distress.
Tennessee’s bill comes as the debate over state-level AI laws under the second Trump administration has snowballed over the last year into a clash pitting the White House and its industry allies against state leaders and advocacy groups. The White House has pushed back against state laws, proposing moratoriums on state AI legislation to rein in a potentially fragmented “patchwork” of regulations. Pro-industry groups have said state laws are burdensome on businesses and stifle innovation. State leaders and advocacy groups have called the proposed moratoriums “dangerous,” arguing that state laws are necessary to protect citizens in the absence of a federal AI law.
The Trump administration has provided carve-outs for some state regulations protecting children in its AI Action Plan, Trump’s December executive order and last month’s National AI Policy Framework. During a presentation Tuesday to Tennessee’s AI Advisory Council, which is co-chaired by Kristen Darby, the state’s chief information officer, Andrew Doris, a senior policy analyst with the nonprofit Secure AI Project, said the bill under consideration falls within those carve-outs because it focuses on child safety and transparency and does not set rules for how models must be built.
Doris added that the White House reviewed the bill and that its feedback was incorporated into the language being heard on the Senate floor.
While it’s not clear which amendments were offered by the Trump administration, they significantly narrowed the bill’s scope, focusing its requirements on large, high-impact AI companies through revenue and user thresholds. The bill defines “frontier developers” as companies that build AI tools and earn more than $500 million in revenue through their product or affiliate products. It defines a “covered chatbot” as a publicly accessible system that generates $25 million in annual revenue and is likely to be used by minors. Another amendment added the stipulation that a chatbot must have at least 1 million monthly users. The bill also excludes certain chatbots, such as those used in video games or for customer service.
The amendments narrowed the definition of a “catastrophic risk” to emphasize extreme, high-consequence harms, and adjusted transparency rules to require public summaries of safety practices rather than full internal disclosures. Other revisions strengthened and more specifically defined protections for minors interacting with AI systems, and provided carve-outs for systems used in academic research.
Doris said the bill also offers “a bridge to potential federal action,” with language that allows it to align with federal policy rather than compete with or preempt it.
“It’s important to note that this bill has a provision that says if Congress ever passes a comparable federal standard for safety incident reporting, the Tennessee Department of Safety and the attorney general and other relevant state officials can designate compliance with that federal standard as sufficient for compliance with Tennessee’s standard,” he said.