
ACLU warns police shouldn’t use generative AI to draft reports

A new ACLU report says police shouldn't use generative AI to write incident reports, citing concerns about biases and transparency.
A police officer writes on a clipboard while talking to someone seated in a car. (ftwitty / Getty Images)

Following news this summer that some police departments started using generative artificial intelligence to write incident reports, the American Civil Liberties Union last week urged against the practice in a new report, citing concerns about the technology’s biases and transparency.

The ACLU’s report, titled “Police Departments Shouldn’t Allow Officers to Use AI to Draft Police Reports,” comes after the police technology company Axon this past April launched a new AI product called Draft One, which is used by a handful of police departments — including in Oklahoma City; Fort Collins, Colorado; and Lafayette, Indiana.

Part of generative AI’s allure for law enforcement agencies is that it can reduce officers’ workloads, including the task of manually writing incident reports, which are sometimes used as sworn affidavits to bring criminal charges.

But leaving this process to generative AI, the ACLU said, presents a variety of worries.


“Because police reports play such an important role in criminal investigations and prosecutions, introducing novel AI language-generating technology into the criminal justice system raises significant civil liberties and civil rights concerns,” the report said. “These concerns include the unreliability and biased nature of AI, evidentiary and memory issues when officers resort to this technology, and issues around transparency. In the end, we do not think police departments should use this technology.”

Draft One allows officers to upload body camera footage and have its audio transcribed. Using OpenAI’s GPT-4 large language model, Draft One turns those transcripts into first-person narratives that officers can use for police reports. While other companies, such as Policereports.ai and Truleo, offer similar products, Axon’s is the most prominent, according to Jay Stanley, a senior policy analyst at the ACLU and author of the new report.
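Axon hasn’t published Draft One’s internals, but the general pattern described above (transcribing bodycam audio, then prompting a large language model to produce a first-person draft) can be sketched using OpenAI’s public API. The model names, prompt wording and file path below are illustrative assumptions, not details from Axon’s product:

```python
# Minimal sketch of a transcribe-then-draft pipeline like the one described
# above. This is NOT Axon's implementation; the model names, prompt and
# file path are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe the body camera audio (hypothetical file name).
with open("bodycam_audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: ask a GPT-4-class model to turn the transcript into a
# first-person draft narrative for the officer to review and edit.
draft = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Draft a first-person police incident report "
                       "narrative based only on the transcript provided. "
                       "Flag gaps or ambiguities instead of guessing.",
        },
        {"role": "user", "content": transcript.text},
    ],
)

# The output is only a draft, not a sworn statement; a human must review it.
print(draft.choices[0].message.content)
```

The ACLU’s critique centers less on this generation step than on what follows it: the officer is expected to review, edit and swear to the draft, a safeguard Stanley argues may not be enough to catch the model’s biases or errors.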

“Policing involves the use of extraordinary powers that non-law enforcement people don’t have, and it’s not just a baloney paperwork side task that police have to write reports in which they describe what happened and why they exercised their powers. It’s a core part of the way the criminal justice system works,” Stanley told StateScoop.

While officers can edit Draft One’s output before swearing to its veracity and submitting it, and Axon has said it created safeguards to protect against errors in the reports its technology produces, Stanley writes that these fail-safes might not be enough to overcome generative AI’s biases.

“Because LLMs are trained on something close to the entire Internet, they inevitably absorb the racism, sexism, and other biases that permeate our culture,” the report reads. “While OpenAI and other companies that create LLMs put in place filters and other tools to try to keep those stark biases from showing up in the AI’s output, they are not entirely effective.”


Stanley notes that in most police cases, officers’ reports are critical pieces of evidence. In minor cases, he said, a report can be the only evidence that a crime was committed. And in the most serious incidents, such as a homicide case that could carry a capital sentence, officers’ first-person accounts should be preserved as independent pieces of evidence.

“It’s important to also capture the officer’s subjective experience and memory of an incident — which may be pivotal to determining whether to file charges and later, in any prosecution — which will be based on all five of an officer’s senses, as well as their perception of human nuances of the situation such as whether somebody is hostile or meek, frightened or bold,” the report reads. “This subjective experience cannot be captured by a bodycam. While the body camera footage is not going anywhere, the officer’s memory may be fleeting and unstable.”

To counter these potential pitfalls, Stanley said, communities and their elected representatives should look closely at what local law enforcement agencies are doing and demand transparency and accountability from those agencies over how they use generative AI.

“If there is to be cautious experimentation with this kind of technology,” the report reads, “it should be more cautious and it should be more transparent, so that the public at large, media and experts can evaluate and explore what kinds of limitations it might have.”


Written by Keely Quinlan

Keely Quinlan reports on privacy and digital government for StateScoop. She was an investigative news reporter with Clarksville Now in Tennessee, where she resides, and her coverage included local crimes, courts, public education and public health. Her work has appeared in Teen Vogue, Stereogum and other outlets. She earned her bachelor’s in journalism and master’s in social and cultural analysis from New York University.
