Arizona updates generative AI policies as state’s use evolves
More than six months after publishing its first policies on generative artificial intelligence, the Arizona state government last week announced updates that officials said reflect the state’s evolving needs as its agencies experiment with the technology.
The two new statewide procedure documents include changes such as defining the central role of the State Data and Analytics Office, which hadn't been created when the first version of the policies was published last March. They also place greater emphasis on data governance and data readiness, expand on the roles and responsibilities that agencies and their employees should hold, and underscore the importance of transparency, security and privacy.
State Chief Information Officer J.R. Sloan told StateScoop that the updates are based on feedback his office has collected in recent months as agencies tested generative AI tools.
Sloan said that amid the excitement surrounding new generative AI tools, he wanted agencies to be able to back any purchase of new technology with evidence that it will provide clear benefits to their work. To that end, he said, the Department of Administration spent four weeks last September testing Google’s Gemini AI software with more than 200 employees to quantify any productivity gains. According to a press release, employees using Gemini improved their productivity by two-and-a-half hours per week.
“One thing I’ve learned is that the more you work with generative AI, the more vision you get for what’s possible with it, and maybe where some of its limitations are,” Sloan said. “I wanted to be able to have that kind of information to inform our agencies as they make decisions around whether they’re willing to step up and spend money on technology.”
Arizona has launched a handful of other generative AI projects in recent months, including creating testing environments, or “sandboxes,” with Amazon Web Services, Google and Microsoft. Sloan said the state already uses all three cloud providers and wanted to ensure agencies would be able to test AI technologies from all three vendors. According to a press release, the sandboxes are being used by at least five major departments, including the Department of Revenue and the State Retirement System.
Several agencies are using generative AI to develop chatbots or create knowledge bases that can be queried by call center staff who must answer questions about how policies work. Sloan said one such chatbot developed for use by call center employees fielding questions about child safety issues is boosting efficiency.
“When someone asks a policy question, they can rapidly find correct references and the right policy answer,” he said. “That’s a very simple use case, but it helps to increase efficiencies and time to handle a question and make sure you’re also providing accurate information.”
Arizona’s generative AI policies provide guidance on how employees can use a tool like ChatGPT to draft a memo or a job description, including example prompts they might feed the model. The policies also offer advice on using such tools, such as “Try to be specific in the prompt” and “Edit and review the content carefully.” They include admonishments not to use model output without review and not to feed proprietary or sensitive information to public AI models.
“Be especially careful when drafting scopes of work or requirements for a task order or solicitation, as information may draw from the websites of potential bidders,” one document reads.
Sloan said the state chose to center its policies on generative AI specifically, rather than the broader category of AI tools, because of the recent interest among employees following generative AI’s rapid rise in the market.
“We knew employees would want to use these things,” Sloan said.