
Analysis of state AI orders reveals care, inconsistency

An analysis by the nonprofit Center for Democracy and Technology shows that while governors are being careful in their approaches to AI, each state does it differently.
California Gov. Gavin Newsom stands with actor Danny Trejo, left, and SAG-AFTRA National Executive Director Duncan Crabtree-Ireland at a press conference at Raleigh Studios unveiling a vast expansion of California’s Film and Television Credit Program on October 27, 2024 in Los Angeles, California. (Mario Tama / Getty Images)

A policy analyst from the nonprofit Center for Democracy and Technology (CDT) told StateScoop in an interview that while governors have shown diligence in their efforts to manage the use of artificial intelligence inside their governments, some states are proving more adept at thinking through the technology’s potentially harmful effects.

Madeleine Dwyer, a policy analyst for the center’s Equity in Civic Technology project, told StateScoop that in reviewing the AI executive orders of 13 states and Washington, D.C., she noticed four trends, including that states generally each have their own definition of AI and that only a few point to a common definition established by the federal government.

“I think in terms of wanting to have consistency across different levels of government, from local to state, one of our recommendations was to really align the definitions of AI across state government bodies,” Dwyer said. 

Dwyer’s analysis also showed that state executive orders acknowledge the potential harms of AI, that they suggest agencies start with pilot projects, and that states are prioritizing governance and planning before diving headlong into using the technology.


In a blog post published this month summarizing the research, Dwyer also points out that while governors incorporate “concepts associated with protecting civil rights,” only three states — Maryland, Oregon and Washington — explicitly call out civil rights as a priority in their AI governance efforts.

“Maryland’s EO sets out principles that must guide state agencies’ use of AI, including ‘fairness and equity,’ and states that ‘the State’s use of AI must take into account the fact that AI systems can perpetuate harmful biases, and take steps to mitigate those risks, in order to avoid discrimination or disparate impact to individuals or communities based on their [legally protected characteristics],’” Dwyer wrote.

Dwyer pointed to the AI orders in California, Pennsylvania and Washington as worthy of emulation, for different reasons. CDT praised California for directing agencies to “ensure ethical outcomes for marginalized communities when using AI” and requiring agencies to inventory their uses of AI, a best practice that’s often implemented but not always required.

Pennsylvania is unique in stating that AI policies should not “overly burden end users or agencies,” or detract from the goal of delivering public services.

“A big thing we’re concerned about there is discriminatory use cases where an AI is determining who gets access to benefits in a certain program and it pushes people who are [most in] need of those benefits to the side,” Dwyer said. “I think the biggest concerns when it comes to regulating government use of AI that we’re really focused on in this new year is preventing unintended consequences [and] making sure states are using evidence-based best practices in their governance.”


Dwyer’s blog post concludes with seven recommendations, including calls for states to place senior cross-agency officials — such as chief privacy officers and chief data officers — on their AI task forces, and to ensure pilot projects have “clear goals and appropriate safeguards.”

Dwyer said the most important recommendation is for states to include “robust” risk management practices as they approach AI, a recognition of the risks that using AI poses to service delivery in state government.

“State agencies should really be required to implement appropriate risk management measures, such as pre- and post-deployment monitoring,” she said.

The audio for this interview will be published on Wednesday as part of StateScoop’s weekly Priorities Podcast.
