
New California bill targets controversial AI ‘companion’ chatbots

New legislation in California would set rules for chatbots that are designed as personalized and sometimes emotionally supportive digital friends.
California state Sen. Steve Padilla speaks during a press conference to promote SB 243 at the Capitol Annex Swing Space in Sacramento, California, on July 8, 2025. He was joined on stage by Rob Eleveld, CEO of the nonprofit Transparency Coalition (far left), Megan Garcia (left), state Sen. Josh Becker (right) and state Sen. Weber Pierson (far right). (Sophia Fox-Sowell / Scoop News Group)

A bill advancing through the California legislature seeks to address the harmful impacts of “companion” chatbots, artificial intelligence-powered systems designed to simulate human-like relationships and provide emotional support. They’re often marketed to vulnerable users like children and those in emotional distress.

Introduced by state Sen. Steve Padilla, the bill would require companies running companion chatbots to avoid using addictive tricks and unpredictable rewards. They’d be required to remind users at the start of the interaction and every three hours that they’re talking to a machine, not a person. And they’d also be required to clearly warn users that chatbots may not be suitable for minors.

If passed, it would be among the first laws in the country to regulate AI companions with clear safety standards and user protections.

“We can and need to put in place common-sense protections that help children, shield our children and other vulnerable users from predatory and addictive properties that we know chatbots have,” Padilla, a Democrat, told reporters at a press conference in Sacramento on Tuesday.


The legislation is partly inspired by the tragic story of Sewell Setzer III, a 14-year-old Florida boy who took his life last year after forming a parasocial relationship with a chatbot on Character.AI, a platform that allows users to interact with custom-built AI personas. His mother, Megan Garcia, told The Washington Post that Setzer used the chatbot day and night and had expressed to the bot that he was considering suicide.

A March study by the MIT Media Lab examining the relationship between AI chatbots and loneliness found that higher daily usage correlated with increased loneliness, dependence and “problematic” use, a term the researchers used to characterize chatbot addiction. The study found that companion chatbots can be more addictive than social media because of their ability to figure out what users want to hear and provide that feedback.

“She didn’t offer him any help,” Garcia said at Tuesday’s press conference. “This chatbot never referred to, never referred him to a suicide crisis hotline. She never broke character and never said, ‘I’m not a human, I’m an AI.'”

The bill requires chatbot operators to have a process for handling signs of suicidal thoughts or self-harm. If a user expresses suicidal ideation, the chatbot must respond with resources like a suicide hotline, and these procedures must be publicly disclosed. Additionally, companies must submit yearly reports—without revealing personal details—on how many times chatbots either noticed or initiated discussions about suicidal thoughts.

Under the bill, anyone harmed by a violation would be allowed to file a lawsuit seeking damages up to $1,000 per violation, plus legal costs.


“The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety and, above all, accountability,” Padilla said.

Some companies are pushing back against the proposed legislation, raising concerns about its potential impact on innovation. Last week, an executive at TechNet, a statewide network of technology CEOs, drafted an open letter opposing the legislation, claiming its definition of a companion chatbot is too broad and that the annual reporting requirements would be too costly.

“What we’re witnessing as well [is] not just a political policy endeavor to sort of choke off any kind of regulation around AI writ large,” Padilla said in response to a question about the opposition. “We can capture the positive benefits of the deployment of this technology, and at the same time, we can protect the most vulnerable among us.”
