
Should states treat AI incidents like aviation accidents, and investigate?

A new policy framework from the Aspen Institute would establish clear processes for states to follow when AI produces unwanted or dangerous results.

A new policy framework from the Aspen Policy Academy, a nonpartisan policy training program, is urging state officials to build formal systems to investigate incidents when artificial intelligence tools make mistakes or cause harm.

The guide, published last month, proposes a standardized incident investigation framework specifically designed for Utah’s Office of Artificial Intelligence Policy, a statewide agency that operates one of the nation’s few AI regulatory sandboxes. Regulatory sandboxes (not to be confused with AI sandboxes used for technical testing) allow the state to test technologies under the close watch of regulators checking for legal and policy compliance. According to the office’s website, Utah’s Regulatory Relief program is designed to provide compliance exemptions for AI companies whose tools may benefit the state in the future.

The guide argues that the agency lacks clear processes for responding when those tools produce biased decision-making, unsafe recommendations or other failures with financial, physical or societal repercussions, any of which can erode public trust.

“Trust is not a milestone that you hit, it’s something that you earn and you maintain,” Aspen Policy Academy fellow Michelle Sipics, who authored the report, said in an interview. “Both regulators and members of the public watch what you do when something goes wrong.”


As more state governments turn to generative AI tools, officials are increasingly grappling with how to manage real-world risks, such as algorithmic discrimination in hiring, housing and government services. Colorado lawmakers are still debating legislative changes to the state’s landmark 2024 AI law, including how responsibility should be assigned to developers and deployers in case something goes wrong.

Sipics said the framework would establish a structured investigative process that brings together government officials, developers and industry experts to investigate so-called “GenAI incidents,” cases in which AI systems cause direct harm through their development, deployment or outputs.

She said she modeled the framework after safety practices in aviation and health care, which emphasize root-cause analysis and prevention rather than enforcement.

“Safety has continued to improve over the decades, and one of the reasons for that is the dedication to investigating incidents. From those investigations, the industry feeds what they learn back into everything, from how they train pilots, how they train air traffic control, designing aircraft maintenance operations, everything,” she explained, adding, “I feel like GenAI needs that same discipline.”

The recommendations build on Utah’s broader push to position itself as a national leader in AI governance. A previous Aspen Policy Academy collaboration outlined evaluation standards focused on transparency, accountability and public trust — which, according to the Office of AI Policy’s website, are central to the state’s AI strategy.


The framework also calls for companies participating in Utah’s sandbox to sign a pledge committing to publicly share investigation findings, similar to the incident reports published by the National Transportation Safety Board, the federal agency that investigates aviation accidents. Sipics said this transparency would show the public that companies and government agencies “have earned their continued trust to keep innovating with this technology.”

“People are not using this technology in a vacuum. It exists in the world. It exists for people,” Sipics said. “Everybody should be able to learn the lessons learned as we go along, so that we can improve safety for everyone.”

The guide frames incident investigation as the next phase of AI governance, one that could help states move from reactive regulation to continuous learning and potentially offer a model for federal policymakers seeking more consistent AI oversight. Sipics cautioned, though, that such a future is still “a ways off.”

“Realistically, I think transparency is probably the best path to scale because best practices like this build in a community,” she said. “When people see you being responsible and sharing what you’ve learned and continuously improving the safety of your products, that has value, that gets buy-in.”
