
Code for America CEO urges human-centered AI adoption

“I think we have an opportunity for government not to be left behind and to [adopt AI] in a very thoughtful way,” Code for America CEO Amanda Renteria said.

The proactive conversations about adopting generative artificial intelligence, from leaders in the federal government down to local agencies, have been heartening for Amanda Renteria.

The chief executive of Code for America, the nonprofit that helps governments use technology to improve how they deliver services to the public, told StateScoop that she sees this attention as a step in the right direction. But she said she’s also wary of capacity and infrastructure limitations facing state and local governments when it comes to adopting new AI technologies.

Despite these challenges, she’s urging leaders to adopt AI in a manner that is “human-centered.” 

“I think we have an opportunity for government not to be left behind and to [adopt AI] in a very thoughtful way,” Renteria said in an interview. “Our approach, as it is with anything we do, is piloting, experimenting and then testing before we scale in any kind of way.”


She said human-centered AI means making decisions that fully consider the risks associated with using the technology and being clear in what goals a government hopes to accomplish. These considerations are especially important with generative AI, she said, because it has biases and makes mistakes.

“What are the principles and guidelines and what is the goal of this technology? It’s not just about building it, but it’s actually about building it for the person you are serving,” Renteria said. “We shouldn’t just create systems to be faster unless they’re actually serving clients. [Governments should] be super clear about what their goals are and who it’s serving.” 

Once government leaders have identified those goals, she recommended they start small before scaling the technology too quickly. As they begin to scale these technologies up, she said, governments should consistently test outcomes “so that you can see whether it was positive or negative or leading to inequities.”

“I think some of the use of AI in government requires a conversation with people and takes them on a journey of how government is utilizing the technology out there and taking care of people while they do it,” Renteria said.
