State CIO encourages ‘hopeful’ governance of generative AI

While diligently outlining how state government should take care to use generative AI ethically and safely, Connecticut CIO Mark Raymond said he’s projecting optimism about the technology.
New Jersey Chief Technology Officer Chris Rein, left, and Connecticut Chief Information Officer Mark Raymond speak about generative AI governance during the National Association of State Chief Information Officers' midyear meeting on April 29, 2024 in National Harbor, Maryland. (Colin Wood / Scoop News Group)

During a technology conference session about generative AI governance on Monday, Connecticut Chief Information Officer Mark Raymond outlined the many steps his office is taking to ensure that artificial intelligence is used ethically and responsibly. But amid his warnings, he said it’s important for leaders to impart an optimistic and “hopeful” attitude about AI that won’t discourage agencies from pursuing powerful uses of the technology.

Raymond said that Connecticut, which published its AI governance policy in February, has since been briefing agency staff on how they should be integrating AI and introducing them to the state AI board members who can provide guidance when it’s unclear how state policy applies to their projects. Governance, Raymond said, shouldn’t be about shutting down ideas or discouraging potentially constructive work.

“You want to be seen as being hopeful and doing this in a safe manner,” he told the National Association of State Chief Information Officers midyear conference in National Harbor, Maryland. “Because the other perspective is not a good one regularly.”

Raymond and his co-presenter, New Jersey Chief Technology Officer Chris Rein, said they’ve been exploring how to use AI in domains that are considered “low risk.” In New Jersey, Rein said, the state has undertaken a handful of projects that use generative AI, including with the state’s Economic Development Authority, Motor Vehicle Commission and Department of Labor and Workforce Development.

Rein said one project is testing how generative AI can reduce the amount of time it takes for staff to field incoming calls from residents asking about their benefits. Complex procedure manuals make fielding those requests slow, he said, a process AI can accelerate while improving outcomes for residents. He said one project increased the number of calls resolved by 50%.
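
Neither CIO detailed the underlying mechanics, but a common pattern for this kind of call-center assist is to index the procedure manuals so the relevant section can be surfaced while an agent is on the line. Below is a minimal, purely illustrative Python sketch of that idea; the section titles and text are invented, and a production system would use an embedding model or language model rather than naive word overlap.

```python
# Illustrative only: New Jersey has not published its implementation.
# Assumes the procedure manuals have been split into titled sections.

MANUAL_SECTIONS = {
    "Checking benefit status": "Staff should first verify the claimant ID ...",
    "Appealing a denied claim": "Appeals must be filed within 30 days ...",
}

def suggest_sections(question: str, top_n: int = 2) -> list[str]:
    """Rank manual sections by word overlap with the caller's question.
    A real deployment would swap this scoring for a language model."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set((title + " " + body).lower().split())), title)
        for title, body in MANUAL_SECTIONS.items()
    ]
    return [t for score, t in sorted(scored, reverse=True) if score > 0][:top_n]

print(suggest_sections("How do I appeal my denied claim?"))
# ['Appealing a denied claim']
```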

But across all the state’s generative AI work, Rein said, projects are being approached in a careful and controlled manner.

“We’re definitely focusing on some internal uses at agencies first,” he said. “It’s easier to turn off something when it’s controlled internally and [used by] smaller groups. Only one of our use cases goes right out to the public and we’re taking a long, hard look at that.”

The encouraging-but-careful approach is common among state CIOs, but not all public officials have been so measured in recent weeks. New York City Mayor Eric Adams recently defended his decision to keep online a city chatbot that has given the public false information about tenant and worker rights. “You can’t live in a lab,” he said at a press conference this month, after a reporter questioned the tool’s faulty answers.

After his session, Raymond told StateScoop that government should be using generative AI to understand what the public wants to know about, but not necessarily to generate responses on the fly.

“We shouldn’t be using [generative AI] to create new content and responses, because we have to curate that, we have to make sure that that’s good,” Raymond said. “One of the really powerful things large language models can do is you can implement that in a chatbot to understand the question and get them to already-curated content, instead of providing hallucinated content. That’s a great use case that helps improve government in ways that you would really have a difficult time to do on our own.”
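
Raymond’s description matches a retrieval-style pattern in which the model only interprets the question and the public only ever sees vetted text. The Python sketch below is a hypothetical illustration, not Connecticut’s system: classify_topic stands in for the language-model call, and the topics and answers are invented.

```python
# Hypothetical sketch of the pattern Raymond describes: the model
# interprets the question, but every answer shown is pre-curated.

CURATED_ANSWERS = {
    "renew_license": "To renew your driver's license, visit ... (vetted text)",
    "benefit_status": "To check your benefit status, log in at ... (vetted text)",
}

def classify_topic(question: str) -> str | None:
    """Stand-in for an LLM intent classifier; in production this would be
    a model call constrained to return one of the known topic keys."""
    q = question.lower()
    if "license" in q:
        return "renew_license"
    if "benefit" in q:
        return "benefit_status"
    return None

def answer(question: str) -> str:
    topic = classify_topic(question)
    if topic is None:
        # Hand off to a person rather than generate (and possibly
        # hallucinate) a novel response.
        return "I can't answer that; let me connect you with staff."
    return CURATED_ANSWERS[topic]

print(answer("How do I renew my license?"))
```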

Rein said he is also holding generative AI output to a high standard.

“In one of our first implementations, we had an 80-85% accuracy rate, and that’s far too inaccurate to let something go out to the public when you’re a state government,” Rein said.

Before generative AI use in government can mature, Raymond said, there are numerous steps officials should take, including creating an inventory of AI assets that notes whether each tool is involved in making decisions and whether it underwent an “impact assessment.” Raymond said Connecticut uses cybersecurity scanning tools that don’t include a “human in the loop,” but in most other cases, humans supervise Connecticut’s AI.
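
Connecticut hasn’t published its inventory format, but a record along the following lines would capture the attributes Raymond listed. The schema below is an assumption for illustration; the field names and example entries are invented.

```python
# Hypothetical schema for the kind of AI inventory Raymond describes;
# field names are assumptions, not Connecticut's actual format.
from dataclasses import dataclass

@dataclass
class AIAssetRecord:
    name: str                     # e.g. "cybersecurity scanning tool"
    agency: str
    makes_decisions: bool         # does the tool influence outcomes for people?
    impact_assessment_done: bool  # has it undergone an impact assessment?
    human_in_the_loop: bool       # is a person reviewing its output?

inventory = [
    AIAssetRecord("network threat scanner", "DAS", False, True, False),
    AIAssetRecord("benefits triage model", "DSS", True, True, True),
]

# Flag anything that makes decisions without a completed assessment.
flagged = [a for a in inventory
           if a.makes_decisions and not a.impact_assessment_done]
```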

During his presentation, Raymond also directed advice to the many technology vendor representatives in the audience, including an admonition against pitching tools without disclosing information about their data’s provenance. 

“You’ve got to give it the ingredient label,” he said. “You gotta say: Where did your egg come from? How did you train it? How do you test it? How do you make sure the bias isn’t in there? Because we have to answer all those questions, and then after you help us implement it, we still have to continue testing.”

Raymond also told tech companies not to present agreements that would require the state government to engage in ethically questionable acts. He recalled an incident in which a vendor wanted him to sign an agreement for a “standard” AI model that would train itself on the public’s data.

“One of the things that we’re seeing quite a bit is AI sneaking into people’s products. I don’t want to have to pull the plug on the entire system because we don’t like something that’s happening with your AI,” he said. “So give us the ability to say ‘I’m operating without the AI.’ Our law says if we find something that is some kind of bias that’s against what we need to do, we’re required to stop using it, and if our only option is to shut off the whole thing, that’s bad for you, that’s bad for us.”
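
In product terms, Raymond’s ask is a switch that turns off the AI component while the rest of the system keeps running. A hypothetical feature flag along these lines would satisfy it; the function names are invented for illustration.

```python
# Hypothetical illustration of the ask: the AI feature can be disabled
# without shutting off the whole product. Names are invented.
AI_FEATURES_ENABLED = False  # set by the purchasing agency's configuration

def llm_summarize(case_text: str) -> str:
    """Placeholder for the vendor's model call."""
    return "(model-generated summary of: " + case_text[:30] + "...)"

def summarize_case(case_text: str) -> str:
    if AI_FEATURES_ENABLED:
        return llm_summarize(case_text)  # optional AI path
    return case_text  # non-AI fallback: show the record as-is

print(summarize_case("Resident reports a delayed benefits payment."))
```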
