Clark continues, “This is one type of error that could easily go unnoticed by a reviewing officer given the volume of material required to be reviewed on deadline. And when an officer on the stand alleges that their report is accurate — they will be proven wrong. When they then claim AI made the error, there will be no draft report to confirm that it was AI that made the error.”
In such situations, he explains, the “consequences will be devastating for the case, the community and the officer.”
The situation reflects broader questions nationally over the use of AI technologies by short-staffed law enforcement agencies. Clark’s memo references OpenAI’s ChatGPT and Axon’s Draft One, which uses AI to generate reports from audio recordings.
Update: In a statement, Axon cited safeguards built into its AI model “to minimize speculation or embellishments,” noting that it rigorously tests its products and follows principles for responsible innovation.
The company added that narrative reports should be “edited, reviewed and approved by a human officer, ensuring accuracy and accountability of the information.” See the full statement at the bottom of this post.
Continue reading for the full text of Clark’s memo.
To our Law Enforcement Partners:
Recently we have been asked by a few law enforcement agencies about our position on their proposed use of AI to help generate police reports. Some have questions about Axon’s Draft One, and others about other AI programs such as ChatGPT. The short answer is that our office will not accept any police report narratives that have been produced with the assistance of AI. All reports must be produced entirely by the authoring officer.
There are a number of reasons why we have arrived at this conclusion. Let me start by saying we are keenly aware of how time-consuming it is to write police narratives. We also understand that staffing levels are extremely low in some departments, and there is a real need to free up as much time as possible for officers to be on the street. We are also aware that AI is here, and is already in many products we all use every day (Google Translate, Adobe, etc.). We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now.
In general, most products are not Criminal Justice Information Services (CJIS) compliant. By law, aspects of law enforcement work must remain private and may not be disseminated outside our community – separate from what is available through public disclosure. Publicly available applications like ChatGPT and others take the information submitted to them and use it to learn and to further disseminate. That runs afoul of CJIS prohibitions.
However, there are some products that are CJIS compliant that still pose significant concerns as to how they may negatively impact officers and any case in which these reports are used. Axon Draft One is one such product. There are a number of concerns we have raised with Axon about their product that remain unaddressed. Unfortunately, these concerns will likely result in many of your officers approving Axon-drafted narratives with unintentional errors in them. Axon relies on its technology to review body-worn camera audio to compile a draft narrative. It does not keep a draft of what it produces or what the officer fixed/added. It alone decides what parts of the audio are unintelligible. It has “hallucinations” (errors) both large and small. It does not track its rate of errors, or how many errors an officer fixed in prior drafts. While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the errors are so small that they will be missed in review. In one example we have seen, an otherwise excellent report included a reference to an officer who was not even at the scene. This is one type of error that could easily go unnoticed by a reviewing officer given the volume of material required to be reviewed on deadline. And when an officer on the stand alleges that their report is accurate – they will be proven wrong. When they then claim AI made the error, there will be no draft report to confirm that it was AI that made the error.
For obvious reasons, we do not want your officers certifying false police reports. The consequences will be devastating for the case, the community and the officer. Furthermore, it will subject them to Brady/PID ramifications, and leave them without a way to establish that theirs was an error of oversight, not falsehood.
Members of the King County Prosecuting Attorney’s Office have met with Axon to raise these concerns and others. We also sit on a national committee of prosecutors who are working to address AI concerns – which are being raised nationwide. There will likely come a day when AI can assist our offices in important and time-saving ways. For the reasons outlined, this particular usage is not one we are ready to accept. AI continues to develop, and we are hopeful that we will reach a point in the near future when these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI. Please reach out if you have any questions at all. We are happy to discuss this further.
Best,
Dan
Daniel J. Clark (he/him)
Chief Deputy, Mainstream Criminal Division
King County Prosecuting Attorney’s Office
Here is the full statement from Noah Spitzer-Williams, senior principal product manager for Draft One at Axon:
“Agencies have various considerations when implementing new public safety technology and Axon is dedicated to offering comprehensive resources to support them throughout this process as well as addressing questions or concerns. With Draft One, initial report narratives are drafted strictly from the audio transcript from the body-worn camera recording and Axon calibrated the underlying model for Draft One to minimize speculation or embellishments.
Police narrative reports continue to be the responsibility of officers and critical safeguards require every report to be edited, reviewed and approved by a human officer, ensuring accuracy and accountability of the information. Axon rigorously tests our AI-enabled products and adheres to a set of guiding principles to ensure we innovate responsibly, including building in controls so that human decision-making is never removed in critical moments.
Draft One was created with direct feedback from our Ethics and Equity Advisory Council, and studies examining quality and bias demonstrated that Draft One produces high-quality report narratives. Axon will continue to actively collaborate with police agencies, prosecutors, defense attorneys, community members, and other key stakeholders to gather feedback and perspectives on the use of Draft One and AI technologies in law enforcement and the justice system.”