States are working on AI, but some officials say privacy should come first

Several state officials said that before forming AI policies, agencies must first consider the potential privacy implications of adoption.

Many states are considering or have already enacted legislation governing how they’ll use generative artificial intelligence, and several others have executive orders on the books. But regardless of the method to regulate the sprawling tech, some state officials say data privacy needs to be considered at the start of the AI journey.

These regulatory measures for generative AI need to involve data privacy, state officials told StateScoop, because as the technology evolves, so will the risks to any data fed into AI models. While some officials have worked on policies ensuring clean, reliable input data so the latest AI software can make predictions or conduct analyses, others are pushing harder for policies that account for these evolving risks.

Several state technology officials told StateScoop that before they even started creating AI policies, they considered the data-privacy risks of allowing state employees to use generative AI. This includes the practice of data minimization: limiting the collection or use of personal information to only what is relevant and necessary to accomplish a specific task. Most states have also barred employees from feeding any personally identifiable information into their AI software, and many prohibit their prompts from being used to further train AI models.
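To make those safeguards concrete, here is a minimal sketch, in Python, of a pre-prompt PII filter of the kind such policies imply. It is not any state’s actual tooling; the three patterns and the redact helper are illustrative assumptions, not a production-grade detector.

```python
import re

# Illustrative patterns only. A real deployment would pair a vetted PII
# detection library with agency-specific data classifications; these
# three regexes are assumptions for the sake of the sketch.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable PII with typed placeholders so the text
    can leave the agency boundary without personal identifiers."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Case for jdoe@example.com, SSN 123-45-6789, call 919-555-0142."))
# Case for [EMAIL REDACTED], SSN [SSN REDACTED], call [PHONE REDACTED].
```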

North Carolina Chief Privacy Officer Cherie Givens told StateScoop that the state weighed the privacy implications of AI before issuing any guidelines on using the technology. After attending a conference and learning about some of the legal data-privacy issues associated with AI use, Givens said, she recognized that the state’s existing privacy frameworks already had features that could address concerns about data interacting with AI systems.

“Privacy is really well positioned to handle AI governance because we already do that kind of risk assessment. We already have processes in place like the privacy threshold analysis or privacy impact assessment that can easily be converted to — or like in our case, ours already asks questions about AI,” Givens said. “Any kind of a use case is being looked at from a privacy and data protection perspective, as well as an AI-governance perspective.”

Givens said the state’s AI usage guidelines overlap with privacy guidance such as the Fair Information Practice Principles, a set of guiding principles codified by the Department of Homeland Security that inform many online privacy policies in state and federal agencies.

Hawaii Chief Data Officer Rebecca Cai, who’s leading AI governance efforts in her state, said data privacy considerations have become “critical as AI grows.” Her efforts to safeguard data include anonymizing sensitive data to protect privacy and ensuring employees don’t feed any sensitive data into public software, such as ChatGPT.

“Everyone needs to learn to avoid using any privacy data that could feed the AI models,” Cai wrote in an email to StateScoop. “With AI, data that was previously not private can be linked with new information to reveal privacy information and identify individuals. So, data classifications must have sensitive information fields identified and protected.”
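Cai’s point about classification can be sketched in a few lines. The field tags and the minimize helper below are hypothetical, meant only to show how a record might be stripped to its non-sensitive fields, with unclassified fields dropped by default, before anything reaches a public model.

```python
# Hypothetical sensitivity tags for a benefits record, echoing Cai's
# point that data classifications must identify sensitive fields.
# Date of birth is flagged because, as she notes, seemingly harmless
# data can be linked with other information to identify individuals.
FIELD_SENSITIVITY = {
    "case_id": "public",
    "county": "public",
    "benefit_type": "public",
    "full_name": "sensitive",
    "date_of_birth": "sensitive",
}

def minimize(record: dict) -> dict:
    """Keep only fields classified as public; anything unclassified
    is dropped too (deny by default)."""
    return {k: v for k, v in record.items()
            if FIELD_SENSITIVITY.get(k) == "public"}

record = {"case_id": "A-1043", "county": "Maui", "full_name": "Jane Doe",
          "date_of_birth": "1980-02-14", "benefit_type": "SNAP"}
print(minimize(record))
# {'case_id': 'A-1043', 'county': 'Maui', 'benefit_type': 'SNAP'}
```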

In Pennsylvania, which in January became the first state to buy ChatGPT Enterprise licenses for some of its employees, officials said new privacy concerns will continue to emerge as AI evolves. Before kicking off the ChatGPT pilot program, Gov. Josh Shapiro signed an executive order on generative AI in September that laid out privacy as a core value guiding the development of an AI usage framework.

“The agreement the Commonwealth has for ChatGPT Enterprise prohibits the Commonwealth’s prompts from being used to train ChatGPT,” Dan Egan, communications director at the Pennsylvania Office of Administration, said in an emailed statement. “Further, we do not allow employees to enter any personally identifiable information (PII), confidential, or employee information into ChatGPT Enterprise. These steps balance our ability to learn through this pilot while also maintaining privacy for our employees and Pennsylvanians.”

Egan said that these rules for AI were built on existing policies, such as the Pennsylvania employee acceptable use policy and IT policies that govern how all technology is handled and used.

“The Commonwealth will continue to adapt to ensure privacy remains a core value for generative AI work by leveraging the expertise of the Generative AI Governing Board and through our collaborations with two of the nation’s leading generative AI research institutions that are based in Pennsylvania – Carnegie Mellon University and Penn State University,” Egan continued.

Written by Keely Quinlan

Keely Quinlan reports on privacy and digital government for StateScoop. She was an investigative news reporter with Clarksville Now in Tennessee, where she resides, and her coverage included local crimes, courts, public education and public health. Her work has appeared in Teen Vogue, Stereogum and other outlets. She earned her bachelor’s in journalism and master’s in social and cultural analysis from New York University.
