Civil rights experts warn of AI’s potential to harm the public

Civil rights experts speaking at an event in Washington pointed to recent cases in which government AI systems led to harm.

Panelists at a Georgetown University event in Washington, D.C., Tuesday argued that civil liberties should be part of conversations about artificial intelligence, especially since the emerging technology has recently been linked to several cases that harmed marginalized populations and infringed on constitutional rights.

“Civil rights protections are not new, existing legal bodies can help and it is everyone’s responsibility to ensure AI is used responsibly,” Elizabeth Laird, director of the Washington nonprofit Center for Democracy & Technology, said during a panel at the Digital Benefits Conference hosted by the Beeck Center for Social Impact and Innovation.

Clarence Okoh, a senior attorney at Georgetown’s Center for Privacy and Technology, pointed to a predictive policing program in Pasco County, Florida, discontinued last March, that used an AI tool to comb through student data and flag at-risk youth, leading to increased surveillance and discipline.

“We have reason to believe that students who were subject to this program were subject to higher rates of exclusionary discipline — so suspensions, expulsions, so on and so forth, as well as greater contact with the criminal legal system, so much more likely to have or be referred to law enforcement,” Okoh said.

Okoh said the county’s system also flagged students who fell under the Baker Act, a Florida law that allows involuntary psychiatric detention of minors for up to 72 hours. He said school districts shared those records with law enforcement agencies without an explicit academic purpose, a disclosure that normally requires parental consent, which was rarely obtained.

“There were a number of agencies involved in this program. It wasn’t just the sheriff’s office or the school district. The Department of Children and Families was involved. Local mental and behavioral health care systems were involved. They were all engaged in this data sharing with the sheriff’s office, not attending to or thinking about the implications for civil and human rights for those young people,” Okoh said.

School districts and police departments are not the only government entities accused of violating civil rights using AI.

A long-running class action lawsuit against the State of Idaho, filed in 2015 on behalf of adults with intellectual and developmental disabilities, alleges that the state severely cut their Medicaid assistance. According to the American Civil Liberties Union, the Idaho Department of Health and Welfare defended the cuts in court, but refused to disclose the formula it used to calculate the reductions. The ACLU claims Idaho’s Medicaid program uses an AI-powered tool known as the Supports Intensity Scale, used by roughly 20 other states, which has led to reductions in services for some disabled individuals who had relied on that assistance for years.

Henry Claypool, a technology policy consultant for the American Association of People with Disabilities, argued during the panel that Idaho’s Medicaid program and its AI-powered assessment tool make the lives of the disability community, an already marginalized population, even more difficult.

“One thing we do know is that people in these circumstances are often not their own best advocate,” Claypool said during the panel. “They are really just interested in getting the services they need. And if the state is going to make an adjustment for them, they’re probably more inclined to say, ‘Well, let me see if I can get by with what I’m getting now, I don’t want to make this into a lawsuit.’ So you can see how this pervasive application of a new technology has really disadvantaged a population that isn’t inclined to advocate for itself.”

A federal judge ruled last September that Idaho must disclose its decision-making procedures.

As state and local governments explore how to use AI, Claypool encouraged public officials to build diverse teams.

“Form an inclusive team, get people with really diverse backgrounds, get the people that you probably don’t want to talk to, and talk to them first, and make sure you know this transparency is actually going to serve you well if you go about this,” Claypool advised.

Okoh urged public administrators to also involve civil rights experts when considering the adoption of generative AI tools and to consider whether the emerging technology should be used at all to solve a particular problem.

“Rather than saying, how can technology play a role in addressing X, Y or Z issue, we have to first ask the question: Should technology play a role in this arena, period?” Okoh said.

Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.