Colorado releases new AI policy framework aimed at revising the state’s 2024 law

A new proposal would make adjustments to Colorado's AI law, including changes to requirements to prevent algorithmic discrimination and a revised approach to liability.

Colorado lawmakers are moving closer to rewriting one of the nation’s first comprehensive artificial intelligence laws, after a state working group on Tuesday released a framework aimed at resolving months of tension between consumer advocates and the tech industry.

The proposal is expected to guide legislative changes to the state’s landmark 2024 AI law, which drew national attention and criticism for its sweeping requirements on businesses and government users of high-risk AI systems. The recommendations focus on clarifying how companies must disclose the use of AI in high-stakes decisions, such as hiring, housing and lending, and how responsibility should be split if something goes wrong.

“I am very grateful to the hardworking members of the Colorado AI Policy Working Group that have reached a unanimous agreement on AI policy to protect consumers and support innovation in our state,” Gov. Jared Polis said in a press release on Tuesday.

The Colorado AI Act established one of the first comprehensive frameworks for governing AI, including requirements to prevent algorithmic discrimination in areas like hiring, housing and government services. But after receiving pushback from technology companies, which argued the law would stifle innovation and balloon their compliance budgets, Polis convened the state legislature for a special session last August, when lawmakers voted to delay enforcement until this June to allow time for revisions.

Under the new proposal, developers would be required to share key details about how their systems work, including data sources and limitations, while businesses and government agencies using the tools would need to notify people in plain language when AI is involved in decisions.

The updated framework also tackles liability. Rather than placing blame solely on one party, as in the original law, the proposal would assign responsibility based on the role developers and deployers each play in AI-driven decisions.

State Rep. Brianna Titone, a sponsor of the original legislation, said questions still remain about whether the recommendations can pass the legislature.

“The work done with the task force is a good starting point, but no legislators were involved in the process,” Titone wrote in an emailed statement. “This was by design because of open meeting laws. There is no telling how a proposed bill will be introduced and changed through the process. … It is encouraging that a lot of compromise was made, but as I understand, there were a lot of yes with caveats among many voting members.”

Colorado lawmakers are expected to take up the revised policy during this year’s legislative session, with the outcome likely to influence how other states approach AI governance.

Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.