Police departments are adopting a new GenAI tool to write incident reports

Skye Jacobs

A hot potato: Police departments are adopting a new tool that lets them write reports using generative AI. The software vendor claims its approach has solved some of the more vexing problems associated with generative AI, such as hallucinations, but the technology has yet to face the scrutiny of the courts, which means the debate over its use is far from over. Discussions are expected to center on issues of privacy, civil rights, and justice.

Police departments, long accustomed to using technology in their operations, have recently started integrating generative AI into their report-writing processes. This shift follows the introduction of Draft One, a new tool created by police equipment vendor Axon earlier this year.

Draft One uses Microsoft's Azure OpenAI platform to transcribe audio from police body cameras and then generate draft narratives. To ensure accuracy and objectivity, these reports are strictly based on audio transcripts, avoiding any form of speculation or embellishment.

Officers are required to verify and approve the reports, which are flagged to indicate AI involvement in their creation. Despite using the same underlying technology as ChatGPT, known for occasionally producing misleading information, Axon asserts that it has fine-tuned the AI to prioritize factual accuracy and minimize hallucination issues.

"We use the same underlying technology as ChatGPT, but we have access to more knobs and dials than an actual ChatGPT user would have," Noah Spitzer-Williams, who manages Axon's AI products, told The AP. Turning down the "creativity dial" helps the model stick to facts so that it "doesn't embellish or hallucinate in the same ways that you would find if you were just using ChatGPT on its own," he said.

The selling points of Draft One are clear. It is marketed as a tool designed to significantly reduce the time officers spend on paperwork, potentially cutting down report-writing time by 30-45 minutes per report. This efficiency, Axon says, allows officers to dedicate more time to community engagement and decision-making, which could enhance de-escalation outcomes.

To be fair, officers who have used Draft One report that these AI-generated documents are not only time-efficient but also accurate and well-structured. Moreover, the AI sometimes captures details that officers might overlook.

Police departments across the country have been using Draft One in different ways. In Oklahoma City, AI-generated reports are currently limited to minor incidents that do not involve arrests or violent crimes, following recommendations from local prosecutors to proceed cautiously.

Conversely, in cities like Lafayette and Fort Collins, AI usage is more extensive, even encompassing major cases, although challenges remain, such as handling noisy environments.

Axon has not disclosed the number of police departments currently using its technology. The company is not alone in this market, as startups such as Policereports.ai and Truleo offer comparable solutions. However, given Axon's established connections with law enforcement agencies that purchase its Tasers and body cameras, industry experts and police officials anticipate that AI-generated reports will become increasingly widespread in the near future.

Despite its potential, the use of AI in report writing has sparked concerns among legal scholars, prosecutors, and community activists. They worry about AI's tendency to "hallucinate" or produce false information and the possibility of AI altering critical documents within the criminal justice system. Other issues include embedding societal biases into reports and the risk of officers becoming overly reliant on AI, potentially leading to less meticulous report writing.

"I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing," said Andrew Ferguson, a law professor at American University working on what appears to be the first law review article on the emerging technology.

 
What this means to me at this point is more litigation on many fronts and less capable cops. If they can't write a report, it means training and acceptance standards have gotten very low. I have yet to see computerization make filing data faster, because the folks who write the programs are not cops, and the cops they listen to are not on the beat and may never have been. I suppose this is designed to save their butts in court. I would like to hear from an active patrolman, because this is all conjecture on my part. It is how it was in the medical field, though.
 
I was against this from the headline description. But reading the further description... if it's essentially just transcribing the audio from body cameras and filling in a couple of sentences to describe the video, they should be able to do that safely. I still feel it'd be better for the officers to do so, or maybe they should have secretaries? But the AI will probably do a better job of making a complete report than someone who just writes out the report as quickly as possible because they want to get back out on the street... or clock out and go, as the case may be.

I won't say I'm FOR it, but I'm at least neutral on it at that point.
 
Ah. Another opportunity for AI to hallucinate and send innocent people to jail - until the courts find out that "police reports" are better written by the officers themselves. I would not be surprised if in the process, real criminals are set free.

Just another BS excuse to have officers do more and allow police departments to hire fewer officers in an attempt to save money.
 
"Officers are required to verify and approve the reports, which are flagged to indicate AI involvement in their creation."
Um... Sure!
How many will just see that the report is completed (by the AI) and sign off on it?
<sigh> [shudder!]
 
I'm sure it'll be bumpy at first but I'd like to see this work:

1. It provides strong motivation for officers to keep their body cams on.

2. The matured AI will hopefully be trained for accuracy, whereas officers, intentionally or not, may have become trained mostly for prosecutorial effectiveness. Similarly, AI may be less inclined to omit inconveniently ambiguous evidence.

3. Officers are an expensive and scarce resource in most districts, and transcribing conversations into paper form is not why they were hired. If this process reduces the work by half or more and particularly focuses the officer's attention on the observations reflecting their expertise and experience, we'll be better off for it.

I see I'm against the grain of other posters on this one. I suspect the primary difference may be less about me being more optimistic about AI vs. more pessimistic about the quality of human reports (no better than fair on average with plenty of doses of dismal when the officer feels like it, such as justifying a search or violence.)
 
As to 3 - why not use simple speech-to-text?
As to 2 - that it has to be trained means there is room for error. From general reports about AI, we know that it makes mistakes, sometimes mistakes that humans would not make. That's what scares me. There are enough cases where people are jailed when they are innocent. AI, I'm afraid, will not make things any better.
 
There are enough cases where people are jailed when they are innocent

Yes. These are all examples of humans making mistakes, or being biased, or evil. AI should have potential to be better at all three and certainly the last two.

This represents my hope for the future. I'd certainly want a long interim period where one was validated against the other. Or maybe rather than one replacing the other, it makes sense to always have both, with the human report evolving to specifically supplement the machine one in ways humans do it better while avoiding the time-wasting aspects.
 
As it gets better, the AI should be able to interrogate the officer to clarify things. It can make sure all points are considered.

It's not like the officer is perfect: "Hey Jake, we're off to Hooters in 5."

Unscrupulous officers already lie or omit things on reports; they will learn to trick the AI, or blame the AI when questioned in court.

Just another tool - good, bad, indifferent.

I can see this for AI doctor reporting. Between patients, doctors spend a lot of time updating case notes, and it's very easy to forget or misremember something if they choose to do it later.

Many professionals have not been trained to check everything off with good technique. Companies involved in safety have made lots of mistakes, sometimes with no consequences, sometimes with deaths - e.g., checking off a plane before flight. Just wandering around glancing at the things you remember gets a fat zero for accuracy and best practice, as it addresses none of humans' known weaknesses: seeing what they want to see, missing stuff, laziness (not getting close), etc. A checklist gets a 2 out of 5 compared to best practice.
 
I remember taking law enforcement classes in college. At the time, the majority of new police officers washed out because they couldn't write an accurate report.
 