Federal preemption could disrupt states’ child safety protections, expert warns
During a House Energy and Commerce subcommittee hearing Tuesday, one expert warned that broad federal preemption provisions included in child safety legislation under review would undermine state protections for children.
The Subcommittee on Commerce, Manufacturing and Trade hearing covered 19 bills that spanned issues of age verification, data privacy, parental controls, artificial intelligence, screen time and digital literacy. House leaders especially focused on updates to the Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act, both of which aim to address the privacy and safety risks minors face online.
“I see a number of gaps that would be left by the legislation. The biggest one that I see is the preemption standard that is within all of the bills,” Kate Ruane, director of the Free Expression Project at the nonprofit Center for Democracy and Technology, told the committee Tuesday. “If we finish this process with the standard in place, we will likely wind up with children having less protections at the state level than they do today, and we will have failed to do our jobs.”
The Kids Online Safety Act, first introduced in 2022, would create a “duty of care” mandate, requiring technology platforms to prevent and mitigate harms like cyberbullying, addiction-like behaviors and content promoting self-harm or exploitation. The legislation would require that app users have options to limit interactions and notifications, disable addictive features and opt out of algorithmic recommendations.
Ruane said the legislation should more clearly define “duty of care,” to avoid legal disputes and the risks of preempting state authority.
“Whatever Congress passes should be equal in scope to the preemption standard that they would attempt to put in place,” Ruane said in an interview. “The problem is that the bills on offer … don’t do that comprehensively. They do not adequately protect kids’ expression rights. They do not adequately protect kids’ and everyone’s privacy rights.”
If Congress wants to keep preemption provisions, Ruane suggested, it could leave room for states to add their own laws on top of the federal standard.
‘Significant concerns for all users’
The Children and Teens’ Online Privacy Protection Act, another bill discussed during the hearing, also called COPPA 2.0, would extend online privacy protections to older kids. Protections already afforded to children under 13 would be extended to teens up to 16 years old. It would also prohibit targeted advertising to teens without their consent and expand data rights for both parents and teens, offering the ability to access, correct and delete personal information.
Experts and lawmakers raised concerns, though, about the potential misuse of private data and the efficacy of age-verification tools.
“Age verification and assurance raise significant concerns for all users’ rights, to name just two,” Ruane told lawmakers during the hearing. “Age assurance techniques mean either more collection or more processing of sensitive data, leading to increased risks of data breaches, which could include people’s IDs or biometric information. Age assurance also chills online engagement with sensitive topics that people want to keep private.”
Jenna Leventoff, senior policy counsel at the American Civil Liberties Union, said online safety should be balanced with access to information, and noted the potential for censorship of information teens and children may need to stay safe.
“We are still concerned that it will scare companies into removing lawful speech that could be valuable or even lifesaving for teens,” Leventoff wrote in an email. “Over-broad language risks censoring everything from jokes and hyperbole to useful information about sex ed and suicide prevention.”
A ‘tremendously ill-advised’ AI moratorium
The potential passage of a moratorium on states’ AI laws loomed large during the hearing. Though previous attempts by the Trump administration and Congress have stalled, Ruane feared the latest attempt would find a vehicle in the child online safety legislation.
“It’s not clear to me that the AI moratorium will become attached to some kids safety package, but if it was, it would be tremendously ill-advised,” Ruane said. “States are doing a lot of work examining how these chatbots are affecting kids’ privacy, affecting kids generally, and thinking about what the right policies are, what the right interventions are, and how to engage in interventions in a constitutional way. A broad preemption of that would interrupt that kind of policymaking.”
Doug Robinson, the executive director of the National Association of State Chief Information Officers, a group that represents states’ top technology officials, sent a letter to congressional leaders last week urging them to reject any such continued attempts at state AI law preemption, even one coupled with laws designed to protect kids.
In October, California Gov. Gavin Newsom vetoed legislation that would have placed new safeguards on AI-powered chatbots. The legislation was inspired by recent news stories of teens who died by suicide after forming unhealthy relationships with companion chatbots.
Ruane also noted that none of the federal bills discussed during Tuesday’s hearing addressed AI tools used to make consequential decisions about adults, specifically in areas that have civil rights impacts. The New York City Council recently passed a collection of bills designed to provide a heightened level of oversight for the city’s use of artificial intelligence tools.
“Congress really should not be enacting a broad preemption of state laws related to AI if they’re not also going to be enacting comprehensive protections related to AI,” Ruane said. “Particularly AI’s impact on consequential decision-making, especially with respect to its use in hiring, employment, credit, health care related decisions, all of these areas that have significant civil rights impacts.”
Ruane said she remains hopeful for enhanced privacy protections for children but warned against searches for a one-size-fits-all answer.
“A lot of the issues that are coming up are contextual. Some conversations that a system has with me aren’t going to harm me, but they might harm the person next door,” she said. “So those are things that we really need to think through, not only what good legal policy is, but what good company policy is for how to design and test these systems going forward.”