
AI tool that writes police reports needs better oversight, transparency, report says

The Electronic Frontier Foundation says Axon's Draft One tool lacks features for determining which parts of a police report were written by a human and which were written by a machine.

Axon’s Draft One, a generative artificial intelligence tool that helps law enforcement officers write police reports based on body-worn camera audio, lacks oversight and transparency mechanisms, according to a report published Thursday by the Electronic Frontier Foundation.

The report follows an EFF investigation based on public records the organization obtained from dozens of police agencies that use Draft One, along with the tool’s user manuals and marketing materials. It found that the tool, seemingly by design, lacks meaningful oversight features, potentially allowing it to circumvent audits. That lack of oversight, the report claims, could make it difficult to assess how the tool affects criminal justice outcomes.

Use of the tool has expanded since a handful of law enforcement agencies began using the technology last fall. EFF’s investigation found that the Palm Beach County Sheriff’s Office, which requires a disclosure at the bottom of any police report generated with AI, used Draft One to produce more than 3,000 reports between December 2024 and March 2025.

Draft One relies on a variation of OpenAI’s ChatGPT to process body-worn camera audio, and it generates police reports based only on the dialogue that is captured. EFF’s report notes that reports generated by Draft One include bracketed placeholders where officers are encouraged to add observations or other information, and that officers are expected to edit the draft to correct errors caused by missing context, flawed translations or other potential mistakes.


When officers finish reviewing an AI-generated draft, they are required to sign an acknowledgment that the report was generated using Draft One, that they have reviewed it for accuracy and that they have made any necessary edits. Officers can copy and paste the text into the official report, but once they close the Draft One window, the draft disappears; the tool keeps no record of which parts of the final report were generated by the AI and which were edited by an officer afterward.

EFF claims the lack of such a record is the most worrisome issue: when a report contains mistakes, such as biased language, inaccuracies, misinterpretations or lies, it’s impossible to prove whether the officer or the AI introduced them. According to the report, Axon’s senior principal product manager for generative AI said the decision not to retain records was intentional, as keeping them would “create more disclosure headaches for our customers and our attorney’s offices.”

“Police should not be using AI to write police reports,” EFF Senior Policy Analyst Matthew Guariglia said in a news release. “There are just too many questions left unanswered about how AI would translate the audio of situations, whether police will actually edit those drafts, and whether the public will ever be able to tell what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might lead to problems in an already unfair and untransparent criminal justice system.”
