
Maine taps brakes on ChatGPT use

Maine's IT office issued a six-month ban on the use of ChatGPT and other generative AI tools by state agencies.
(Marco Bertorello / AFP via Getty Images)

State government agencies in Maine can’t use generative artificial intelligence tools like ChatGPT for at least six months, under a directive issued last week by the state’s IT agency, which cited the technology’s cybersecurity and privacy risks.

According to the directive, the “adoption or use” of generative AI technologies is prohibited in any state-government business or on any device connected to the state’s network while officials review their potential impact.

“This will allow for a holistic risk assessment to be conducted, as well as the development of policies and responsible frameworks governing the potential use of this technology,” reads the two-page document, which was first reported by Government Technology.

In an emailed statement, a MaineIT spokesperson cited a variety of risks and said state officials are waiting for best practices and regulations to develop.


“The realm of artificial intelligence, in particular Generative AI, is a rapidly growing sector of business technology that can deliver information independently of structured input, meaning it can produce uncontrolled results, which can lead to potential regulatory, legal, privacy, financial, and reputational risks,” the statement reads. “The moratorium gives MaineIT an opportunity to thoroughly assess any of these potential risks, including any cybersecurity concerns. As federal and state regulatory frameworks emerge, we will continue to evolve our understanding of AI and better determine what is allowed and/or prohibited.”

The hype around ChatGPT and generative AI has been unmissable for months — it’s been a contentious topic in everything from cybercrime prevention to the Hollywood writers’ strike — and state tech officials are not immune. But many of those IT leaders have already said these AI tools could have a place in government with cautious, limited use.

An outright ban — even one lasting just six months — may not be the most logical response, though, said Brandon Pugh, director of cybersecurity and emerging threats at the R Street Institute, a free-market think tank. Pugh said that while he understands why government leaders have concerns about data privacy or bias, he’s “naturally skeptical” of Maine’s move.

“I’m not saying there aren’t cyber issues, privacy issues to work through,” he said. “I would question how much is going to be different in six months. There are areas to work through. I don’t think we should let the fears win.”

Pugh also noted that reputable frameworks for AI already exist, including one from the National Institute of Standards and Technology. That agency last week also launched a new working group devoted to building rules and benchmarks for the latest generation of artificial intelligence tools.


“We need a lot of super-smart people working on this,” Pugh said. “On the privacy side there’s concern around what data’s being shared with it. Ultimately AI is all about data. That’s where the effective use of policy comes into play. Perhaps you don’t share private info on citizens with generative AI. But if you’re using it to make your job more productive, it could be beneficial with guardrails.”

The MaineIT spokesperson did not say whether any state agencies there have started tinkering with generative AI software. Pugh noted that even with all the clamor around ChatGPT, generative AI isn’t a brand-new technology — and that there is a body of work officials can turn to.

“This isn’t the Wild West,” he said. “We do have frameworks. NIST is just one example.”
