AI Readiness Project opens doors to state governments
The Rockefeller Foundation and the nonprofit Center for Civic Futures on Tuesday unveiled a new effort, called the AI Readiness Project, aimed at preparing state governments to use artificial intelligence tools to improve how they administer services to the public.
The project expands the CCF’s previous AI work, led through its state chief AI officer community of practice, with $500,000 in new funding from Rockefeller intended to “professionalize” the practice, as one executive with the philanthropy put it. The project was born from a growing sense that many state and local governments, after having spent several years, or sometimes longer, drafting AI policies and taking inventory of datasets, are prepared to step up their efforts and begin testing AI tools at a larger scale and for a wider array of government functions.
Technologies classed as “AI” have been under development and in use in government agencies for decades, but it was the commercial release of OpenAI’s ChatGPT in November 2022 that spurred interest in large language models and widely renewed interest in exploring what additional tasks software might automate. Cass Madison, CCF’s executive director, said the renewed interest among state and local government leaders has led to a “wide variation in capacity, maturity, and risk tolerance” among states when it comes to AI.
The AI Readiness Project is intended to give technology officials a forum, she said, where they can “move from curiosity to capability” and gain “a trusted place to learn, experiment and lead.”
“It’s where Chief AI Officers and their teams can learn by doing, comparing notes, testing tools, and experimenting together in a safe environment,” Madison wrote in an email. “For example, we’re launching an agentic AI workgroup where several states will take on a shared use case. Together they’ll develop not only practical AI applications that improve service delivery, but also the guardrails and decision frameworks that make those experiments safe and replicable.”
In addition to a group for agentic AI, the term for automated software granted some decision-making capacity, or "agency," the effort begins with two other groups, one centered on workforce policies in the AI age and another on "evaluation and monitoring," a way to assess AI models' performance and potential for biases. And the project aims to support at least ten pilot projects in state government aimed at "high-impact use cases," according to press materials, like rewriting old computer code for modern platforms and finding new methods of monitoring AI systems.
The project will emphasize “collective learning” and shared resources, Madison said, in hopes of discouraging the natural tendency of states and localities to independently develop technological solutions to common problems. The group plans for this effort to be embodied next year in the form of a State AI Knowledge Hub, described in press materials as “a public repository of lessons, case studies, and tools designed to help governments at every stage of readiness.”
A new AI fellow at the Beeck Center for Social Impact and Innovation at Georgetown University, which is funded by The Ballmer Group, recently said he similarly hopes to help state IT leaders solidify an already emerging consensus on which AI practices and technologies work well in government, and then see that they're shared widely.
If the CCF’s project succeeds, it will be thanks to its capacity to convene state leaders, vendors and academics for honest discussions behind closed doors. (The group aims to expand its network of more than 30 states to 50 states by next year.) Andrew Sweet, a vice president of innovation who oversees AI work at The Rockefeller Foundation, said such private conversations, enriched by diverse participation from across the country, provide officials latitude to operate in what has proven to be a contentious year.
“It’s a closed-door conversation where states feel that they can actually talk about the challenges in a confidential-type way because it’s just becoming so political and there’s so much pressure to move,” Sweet said. “There’s so much pressure, being able to meet every two weeks to discuss these things, it’s like a state AI self-help group.”
Sweet said the foundation did not conceive of the project but was approached for support by the administration of Maryland Gov. Wes Moore, a connection it had made through the National Governors Association, a group designed to provide states' top executives a policy bridge to the federal government. In 2020, when states and their communities strained under the exigencies of the COVID-19 pandemic, the foundation joined with the NGA to launch STAT, or the State and Territory Alliance for Testing, a mechanism for states to quickly share information about virus testing and vaccination management.
“We’re kind of replicating that model,” Sweet said.
The appearance of chatbots that (mostly) sound like real people, along with automated agents that can deftly handle giant swaths of data, has kicked off a digital gold rush that some have compared to the dot-com boom of the 90s. It’s produced countless new streams of business aiming to capitalize on the public’s excitement with something that’s decidedly novel, but only sometimes useful. Sweet said one of the group’s functions will be to share techniques for selecting vendors that can in fact deliver what they promise and then holding them accountable.
He said that more work to iron out the group’s strategy will take place over an upcoming dinner with Madison and her team at CCF; Timothy Blute, director of the NGA’s Center for Best Practices; and the state CIOs of Georgia and Vermont, roles held by Shawnzia Thomas and Denise Reilly-Hughes, respectively. Of the states that could play formative roles in an AI community of practice, Georgia and Vermont are unsurprising choices. Both states are among the few to have formalized an AI officer title in their executive branches, usually a sign of political support or that AI is being positioned as a long-term fixture. (Some CIOs, though, have supposed that rather than an AI chief, a more general-purpose innovation officer role can just as effectively serve the same function.)
Vermont’s chief data and AI officer, Josiah Raiche, might have been the first to be named to such a role in state government, having been promoted from a senior software developer role to AI chief more than a month before ChatGPT’s release. In 2023, he described to this publication his state’s “center for enablement approach” to AI, in which his team drafted policies that agencies were expected to follow, while also providing frameworks and templates that would encourage constructive and ethical uses of AI. Reilly-Hughes, the state’s CIO, said more recently that the work has paid off: “We hit all of the buckets, maturity-wise.”
In addition to naming its own AI chief (Nikhil Deshpande, formerly the state’s digital services chief, added AI to his title in 2023), Georgia has already served some of the functions that the new group is proposing. At the end of 2023, Georgia hosted an AI summit in Atlanta that attracted hundreds of state and local officials from around the state and beyond, leaders and technologists who were still struggling to wrap their heads around the most effective and prudent strategies for AI in their organizations. “Everyone wants to talk about AI,” Thomas, Georgia’s CIO, said ahead of the 2023 event. “No one’s shying away from it.”
Just over two years ago, Deshpande said in an interview that he believed the new class of AI technologies would become too useful for governments to ignore: “At some point I think expectations are going to change and people will be required to perform at a different scale.” State and local governments may finally be crossing into the era when they’re expected to do something with AI that goes beyond small pilot projects or drafting yet another policy.
Nishant Shah, Maryland’s senior adviser for responsible AI, said his own state is moving into an “enterprise adoption” phase, after two years of “pure ‘foundation building.’”
“We have living policies, tools in hands, methods to ensure use is responsible, and institutional elements,” he wrote in an email, and pointed to the state’s AI “subcabinet,” which includes an AI community of practice, an enablement team and agency-level working groups.
When asked what sort of support states need most to further their AI practices, Shah’s answers were familiar: attracting talented technical workforces, information-sharing and “building on strong basic foundations.”
“One benefit they have in government is that they’re able to compare notes across borders and jurisdictions, copy what works, and avoid reinventing the wheel,” he said. “Mechanisms that allow us to effectively do this across state, local, federal, and international levels will be crucial to ensure the frontier is less jagged.”