
AI is catching on in government. The Beeck Center wants to help

After years of crafting policies and surveying risks, many state and local agencies are carefully experimenting with wider deployments of generative AI tools.

It is perhaps more a feeling than a fact that state and local governments are moving into a new phase of their artificial intelligence efforts — it’s hard to say when one era ends and another begins. But after years of developing AI use policies, weighing risks around bias and data privacy, and establishing new offices, task forces and roles, many agencies are venturing into what might be described as the toddler stage of their generative AI lifecycles. They’ve patiently surveyed the environment and they’re ready to start testing their legs.

Sensing the shift, the Beeck Center for Social Impact and Innovation at Georgetown University has joined a growing number of organizations that have rallied new resources in recent months to aid state and local governments’ AI efforts. The Beeck Center’s help comes in the form of Andrew Merluzzi, who for roughly the last six weeks has served as its AI innovation and incubation fellow. With so much AI activity afoot and so many other groups organizing aid, Merluzzi said, his priority so far has been to identify how the Beeck Center can be most useful.

He’s also been thinking deeply about the second- and third-order effects that adding generative AI to government might have: “What are some of those impacts of those [projects that] are successful that might be a little bit hard to anticipate, but that if we thought about it now we might be able to have other systems in place to deal with it?” Coming up with new ideas can be challenging in a three-year (and counting) stretch when seemingly no one will stop talking about AI. But looking ahead is an essential task, Merluzzi said, because some governments are focused mostly on integrating new tools into their organizations and avoiding immediate problems, without fully appreciating the troubles that may lie ahead.

Merluzzi seems to believe that governments are entering a new era of AI, one that caps not only three years of the public’s exposure to ChatGPT and its competitors, but in some cases five years or even a decade of work by agencies drafting AI readiness frameworks and consulting their lawyers. That groundwork has produced a growing willingness among ever-conservative governments to begin some “careful experimentation.”


“I feel like all of that work is starting to bear fruit,” he said.

Los Angeles, the nation’s second-largest city, this week announced it was rolling out Google’s AI productivity tools to tens of thousands of city workers, joining an already well-populated pool of experimenters. Ted Ross, the city’s top tech official, said he’s especially interested in using the tools to expand communications to speakers of foreign languages and, eventually, to optimize the city’s traffic and lighting infrastructure. Maryland, among the states widely offering AI-fueled productivity suites to workers, has for more than six months tested generative AI tools for tasks like designing websites more quickly, building chatbots and drafting talking points for presentations.

Adoption is uneven. At least two state CIOs told this publication they weren’t especially interested in the latest AI products. They had, they intimated, decades-old systems to replace and organizational concerns that trumped any marginal benefits AI might add. And some states are just, quietly, less adept at absorbing the hottest new thing, which happens to be AI right now.

In Vermont, maybe the first state to name an AI chief, officials have positioned themselves as the big kid at the playground, ready to share what they know. “We hit all of the buckets maturity-wise,” said Denise Reilly-Hughes, the state’s chief information officer. The state’s prolonged focus on AI and data governance has allowed it to produce “meaningful outcomes for our partners in state government,” she said, “while at the same time creating some maturity around our data controls, data retention.”

Colorado, meanwhile, is harnessing its “second-mover advantage,” said state CIO David Edinger, content to learn from the mistakes of early movers like Vermont. Colorado’s also among the states that benefit from having top leaders who understand technology. Jared Polis, the state’s Democratic governor, founded and sold three tech firms, for hundreds of millions of dollars, before entering politics. “He said don’t get in the way of the agencies,” Edinger said of his early talks with Polis about how to lead on AI. “I interpreted that as: bullish with guardrails.”


Edinger’s strategy resembles the popular tech industry “fail fast” model, which tests lots of ideas, lets the bad ones fizzle out and keeps the winners. Edinger said some politicians have had a hard time with the idea of something funded by taxpayer dollars “failing,” but he said that finding the best uses of AI — about 5% of the state’s 200 projects worked really well — requires such exploration and that cutting failed projects loose early doesn’t necessarily incur huge costs anyway. One of the most exciting projects, he said, is using AI to train call center staff who handle unemployment insurance claims or field emergency calls.

“We’re putting those people into their positions fully functional in half the time we used to and a lot of those positions turn over pretty rapidly,” he said. “I couldn’t have told you six months ago that this is where we’d see the really big benefit.”

At the Beeck Center, Merluzzi said, one goal will be to find winning uses, solidify an already emerging consensus among states on what works and then begin deploying at a wider scale, across the country.

“How do we share those 5% of uses that are genuinely useful so each state or each locality doesn’t have to pilot and be doomed to make mistakes that others are doing,” he said. “Instead can we have this kind of learning ecosystem where those small number of applications can then be tried by another locality, hopefully to much more great effect.”

Merluzzi said he’s interested in uses of AI suited to the technology’s strengths, like recognizing patterns and summarizing large corpuses of text. He pointed to work by Stanford University’s Regulation, Evaluation and Governance Lab helping San Francisco use large language models to parse its endless regulations and reporting requirements, identify unnecessary or redundant rules and trim down its bureaucracy. City Attorney David Chiu told Politico that a tool the group developed for that task saved “countless hours of work.” The lab has similarly applied LLMs to helping local governments remove racial covenants from their land deeds, in keeping with state laws that strive to strike the offensive passages from official records. (Some versions of the documents retain the covenants, for posterity.)


Many favor such uses of generative AI because, in addition to saving resources — sometimes millions of dollars — on tedious work, they align with a common pledge by government leaders: that AI will be recruited only as a helper working alongside humans, not as a replacement. Merluzzi said there are likely other “high impact, low risk” uses of AI that could improve governments.

He’s still helping the Beeck Center figure out where it fits in the movement to aid such ambitions, and pointed to the work of other organizations with techno-philanthropic missions. There’s the Government AI Coalition, an “AI for social good” group started by the city government of San Jose, California. There’s City AI Connect, a Bloomberg Philanthropies project operated out of Johns Hopkins University that spans 100 cities and at least as many projects. And there’s Humanity AI, a philanthropic project claiming that AI is “neither a poison nor a panacea,” which has put half a billion dollars toward ensuring the technology has a “people-centered” future.

But even with all the attention AI is getting, Merluzzi said, he thinks the dynamics of human-computer interaction remain a critically understudied part of whether governments will succeed or fail with AI. In response to public worry that government will begin permitting its AI tools to automatically make decisions of consequence, such as denying or approving public benefits, officials often explain that a human will always be standing by to make the final call. But the theory of automation bias holds that people tend to favor the conclusions of automated decision-making systems, even when it should be evident the output is faulty. Having a human present may not be enough. Agency veterans who can lean on decades of experience may manage to repel such biases, but for call takers quickly trained into a work environment that already uses AI, trusting the machine could become the only thing they know.

Such concerns are of a flavor that Merluzzi said could come to characterize at least one channel of the Beeck Center’s AI work: spotting knock-on effects that might be far from obvious. When the Chinese chatbot company DeepSeek surprised the technology world in January with its latest LLM’s excellent performance and purportedly ultra-low development cost, the AI community revived in the public’s consciousness a 19th-century English economic theory called the Jevons Paradox, which holds that greater efficiency — of coal burning, originally — leads not to reduced consumption of a resource, but to greater demand and use.

Humans are often this way. Just as rising wages usually lead people to upgrade their lifestyles and pursue even more wealth, rather than simply banking the extra money, energy-efficient lightbulbs have seemingly licensed around-the-clock use. More efficient computer hardware and data centers haven’t found the ceiling for demand; they’ve enabled engineers to build increasingly resource-hungry software and fueled more demand than ever. No matter how abundant the feast, people find a way to increase their appetites.


The lesson for government, Merluzzi said, is that it’s not obvious what will happen when new, more-efficient technologies are introduced into agency processes. If AI tools make it easier for people to apply for business permits — generating additional revenue for the city, stimulating economic growth and providing the public with new and desirable goods and services — it might also create the need to hire additional staff to inspect the new businesses or repair or clean public spaces more often. “These are the kinds of things that are hard to predict,” Merluzzi said, “but I don’t think they’re impossible to predict.” And planning for such problems in toddlerhood may prove a wiser course than waiting until adolescence.
