Utah closes potential loophole on AI-aided crimes

New laws soon go into effect that ensure Utahns can be held legally responsible if they use AI to commit crimes.

Starting next week, generative AI users in Utah can be held legally responsible if they use the technology to commit a crime or intentionally prompt it to commit an offense on their behalf.

The legislation, set to take effect on May 1, is part of a suite of new laws the Beehive State recently passed that target the misuse of generative AI by individuals and businesses. The new laws include AI disclosure mandates for political ads and clear definitions of what is considered personal data.

One law creates the Artificial Intelligence Policy Act, which states that “data generated by computer algorithms or statistical models” is not considered “personal data” — an important distinction under the Consumer Protection Act, which governs the private sector’s use of personal data and a Utah resident’s right to have their personal data deleted.

It also requires any person or business that makes individuals interact with generative AI as part of its service, such as via chatbots, to disclose that those individuals are not interacting with a human being. If the technology is used in providing a service that requires a license or state certification, the business must issue that disclosure at the start of the interaction.


Violators face enforcement by the Division of Consumer Protection, whose powers include financial penalties and additional AI-specific fines of up to $2,500 per violation.

Another new law requires political parties, candidate campaign committees, political action committees, political issues committees and individuals to disclose if they used generative AI to create audio or visual content about an election or ballot proposition. The label must include a disclaimer stating that the content is generated by AI, identify the original producer of the content by initial, and list the subsequent content created by artificial intelligence.

Wisconsin, Florida, Michigan and several other states have enacted similar legislation in an attempt to curb potential misinformation ahead of the 2024 presidential election.

Failure to disclose the use of AI risks a civil penalty of $1,000 per violation. However, people or businesses that solely provide the generative AI technology used to create political content are exempt; only the content’s sponsors are liable.

Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.
