Connecticut Senate passes unique bill on private sector AI use

A bill moving through the Connecticut legislature would be the first law in the country to comprehensively regulate the private sector's use of AI.

The Connecticut Senate on Wednesday passed a first-of-its-kind bill that regulates private sector deployment and use of artificial intelligence systems.

The Connecticut AI bill, SB 2, passed in the Senate 24-12, after two years of discussions in the state regarding the technology, the Associated Press reported. If enacted, the bill would become the first law in the country that comprehensively regulates private sector use of AI. It creates requirements for companies that develop or use “high-risk” AI systems in consequential decision-making processes, such as decisions about criminal cases or access to things like education, employment, financial services or government services.

The legislation arrives as Connecticut attempts to get ahead of AI. Gov. Ned Lamont last year signed a law that created an AI “bill of rights” and a group inside the state legislature tasked with recommending how AI should be regulated. The legislation also required the state judiciary to inventory annually how the government is using AI, a provision intended to prevent “unlawful discrimination” and other harmful outcomes.

Connecticut’s proposed AI bill would also provide protections against bias in AI decision-making systems that might rely on discriminatory factors such as race, age, religion, disability or other protected characteristics. Beginning in July 2025, AI developers would be required to take “reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.”

Developers and companies that use AI would also have to disclose how they use AI and what data is used to train their systems, and would have to maintain risk management policies.

The proposal would also penalize the creation and dissemination of non-consensual deepfake content, whether pornographic or political. The bill would additionally require AI-generated content to carry digital watermarks noting its origin.

Tatiana Rice, deputy director for U.S. legislation at the Future of Privacy Forum, called the Connecticut legislation a “groundbreaking step” that could set a framework for national legislation.

“The legislation aims to strike an important balance of protecting individuals from harms arising from AI use, including creating necessary safeguards against algorithmic discrimination, while promoting a risk-based approach that encourages the valuable and ethical uses of AI,” Rice said in a statement on the Future of Privacy Forum’s blog.

The advocacy group Consumer Reports said several aspects of the legislation need strengthening, including its enforcement mechanisms, which rest with the state attorney general.

“With Congress struggling to push forward on setting new standards on AI, Connecticut is stepping up to shape policy,” Grace Gedye, a Consumer Reports policy analyst, said in a press release. “Because this could become the first broad AI bias law in the nation, it is critical to get it right. More work remains, especially around transparency requirements and enforcement.”

The bill now goes to the state’s House of Representatives for a vote. If enacted as written, some of the bill’s regulations would go into effect on July 1, 2025, while other compliance requirements would begin in 2026.

Written by Keely Quinlan

Keely Quinlan reports on privacy and digital government for StateScoop. She was an investigative news reporter with Clarksville Now in Tennessee, where she resides, and her coverage included local crimes, courts, public education and public health. Her work has appeared in Teen Vogue, Stereogum and other outlets. She earned her bachelor’s in journalism and master’s in social and cultural analysis from New York University.
