
How state lawmakers, election officials are fighting AI deepfakes

New laws penalizing unlabeled AI-generated content and public awareness campaigns are two ways state leaders are attempting to fight deepfakes.
New Mexico Secretary of State Maggie Toulouse Oliver speaks at an event in 2018. (Bipartisan Policy / Flickr)

States are racing to pass legislation that targets the production of AI-generated deepfakes in an effort to curb deceptive information practices ahead of the 2024 presidential election, new research shows. 

Voting Rights Lab, a nonpartisan organization that analyzes election-related legislation, is tracking more than 100 bills in 40 state legislatures introduced or passed this year that intend to regulate artificial intelligence’s potential to produce election disinformation.

Megan Bellamy, vice president of law and policy at Voting Rights Lab, said some of these laws aim to provide transparency around AI-generated content, while others seek to penalize those who use AI to intentionally mislead voters.

“2024 is the first American presidential election year at the intersection of election-related myths and disinformation that have been on the rise and the rapid growth of AI-generated content,” Bellamy told StateScoop in a recent interview about Voting Rights Lab’s legislative analysis, which was released Tuesday.


Deepfakes, a portmanteau of “deep learning” and “fake,” are synthetic audio, images or videos, usually created with AI, that replicate a person’s likeness.

In February, robocalls using an AI-generated voice impersonating President Joe Biden reached thousands of New Hampshire voters ahead of the state’s primary, falsely informing them that they would lose their ability to vote in the general election.

“It’s a very fast-paced, constantly changing landscape when it comes to AI-generated content,” Bellamy said. “So once legislators realized this could really be negatively impactful in a presidential election, they started taking action.”

‘Unknown area’ of AI legislation

Bellamy said legislation in three states, Arizona, Florida and Wisconsin, offers good examples of the different regulatory trends Voting Rights Lab sees gaining momentum across state legislatures.


Wisconsin Gov. Tony Evers last week signed a bill requiring groups affiliated with political campaigns to add disclaimers to any content made with generative AI. Failure to comply is punishable by a $1,000 fine for each violation.

Bellamy said Wisconsin’s law is less restrictive than others and that it doesn’t address misinformation threats from people or groups not affiliated with political campaigns.

“AI-generated content can grab the voter’s attention, reach them faster and spread in more of a viral way than state board of elections and county board of elections and all of these trusted sources can overcome,” Bellamy said. “So they really do have an opportunity to impact the election — plus, they’re persuasive.”

In Florida, a bill awaiting Gov. Ron DeSantis’ signature would also require disclaimers for AI-generated political ads and election-related materials, with penalties of up to one year of incarceration.

In Arizona, two bills that have gained traction aim to balance government regulation of AI-generated election content with the First Amendment and other federal laws, such as Section 230 of the Communications Decency Act, and include exceptions for media, satire or parody, internet providers and public figures.


One of the Arizona bills would make the failure to label AI-generated political media a felony for repeat offenses or for offenses committed with the intent to cause violence or bodily harm. The other would allow an aggrieved party to file a civil suit against the content creator and, in some cases, receive financial restitution.

“The Arizona Legislature is essentially seeking to prevent a scenario like what happens in Slovakia,” said Bellamy, referring to an incident in 2023 when AI-generated audio recordings of fabricated conversations about election rigging were released two days before the country’s election.

Not every state has passed legislation on AI-generated content for political campaigns, but Bellamy said she expects to see more state legislatures taking the issue up. 

“There’s a wide variety of approaches, even among the states that have started to grapple with the issue. There’s not a one-size-fits-all approach at this point,” she said. “It’s really an unknown area that legislators are trying to solve.”

Non-legislative tools


Debbie Cox Bultan, the chief executive of NewDeal, a nonprofit that works with government officials on democratic policies, said legislation addressing AI-generated deepfakes is just one tool states can use to combat election-related misinformation. The organization recently released a report that advises election administrators on how to mitigate disinformation campaigns in their states.

One such measure, Bultan said, is incident response preparation, or tabletop exercises, which can help educate and prepare election workers for real-world scenarios in which they’d need to quickly stop the spread of false information.

“What happens if there is any kind of deepfake or other AI-related thing that sows chaos or confusion to the election? Who’s responsible for what? What’s the communications that needs to happen with voters?” Bultan said. “That’s happening in a lot of states and I think is a super important way that elected officials can be prepared.”

Bultan said several secretaries of state are using the months before the election to lead public information campaigns that educate voters about the threats deepfakes present. New Mexico Secretary of State Maggie Toulouse Oliver has started a public awareness campaign that educates voters about deepfakes and points them to trusted election resources.

“There’s always been efforts to suppress vote or to sow chaos in elections. These are just new tools to do that,” Bultan said. “So I think it’s really important we get on top of this now.”


Bultan said Michigan Secretary of State Jocelyn Benson is also trying to educate the public.

“Benson is engaging trusted community leaders, like faith-based leaders and others, to make sure that they have information that they can share with people in their communities as a trusted voice in those communities,” Bultan said. “We haven’t seen elections where [generative AI] has the potential to really cause some chaos, disinformation, misinformation. And that’s something our secretaries in particular are concerned about. This is an all hands on deck situation.”


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.
