Connecticut governor signs bill calling for ‘AI bill of rights’

Concerned with the potential for discrimination, Connecticut Gov. Ned Lamont signed a bill creating stronger governance around AI.
Connecticut Gov. Ned Lamont speaks about the state's efforts to get more people vaccinated at Hartford HealthCare St. Vincent's Medical Center in Bridgeport, Connecticut on February 26, 2021 (Joseph Prezioso / AFP / Getty Images)

Connecticut Gov. Ned Lamont on Thursday signed legislation putting tighter governance around the state’s current and future uses of artificial intelligence. 

The new law requires the formation of a working group inside the state legislature, tasked with making recommendations on further AI regulation and an “artificial intelligence bill of rights.” It also requires the state judiciary to conduct annual inventories of Connecticut’s AI use to protect against “unlawful discrimination” and other harmful outcomes, starting next February.

The law comes amid heightened interest from state lawmakers in the dangers of automated systems that could potentially reinforce biases in human decision-making. The text contains a thorough accounting of the types of discrimination that the state’s Office of Policy and Management is required to prevent through the creation of new AI policies and procedures, including discrimination based on “age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income.”

The law also provides a detailed definition of AI itself, describing it as “an artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight or can learn from experience and improve such performance when exposed to data sets” and as any system that is “designed to think or act like a human.”

Connecticut Chief Information Officer Mark Raymond told StateScoop in a recent interview that he sees great potential in artificial intelligence and large language models, which have become more popular since the public launch last fall of ChatGPT, but that government must also stay vigilant.

“Government needs to be transparent, accountable, fair, safe, all those things,” Raymond said. “Those are the premises that we put around our use of the technology. There’s going to be great use-cases. I think the fear of artificial intelligence taking over the world, that possibility exists if we don’t take the premises I talked about and build them in.”

One use of AI that Raymond’s office is currently exploring, he said, is improving the state’s chatbots “to generate a set of more realistic responses and curated responses.” He said the state does not plan to use text-generation models in real time, because officials would be unable to control their output; instead, the models will help designers craft chatbot responses in advance.
