Cybersecurity, accessibility take priority as states add AI to digital services
The key to governing artificial intelligence could be found in Indiana Chief Information Officer Tracy Barnes’ philosophy on raising a child: Start restrictive and then ease off as capabilities become more robust.
Barnes told StateScoop that it “drives [him] nuts” that states are allowing the use of AI tools before securing them, and said his state is taking a more restrictive approach, at least initially.
“We actually have blocked generative AI from our network,” Barnes said. “That was a starting point. It was hey, until we understand this more, [until] we figure things out with what’s going on here, let’s at least keep any sensitive state data from getting put into any of these AI models that are out there while we work through figuring out what the right policies are and what the right protections and securities are that we should have in place.”
Barnes isn’t the only one proceeding carefully. Other state technology officials told StateScoop that ensuring the accessibility and cybersecurity of digital services is crucial. AI vendors, meanwhile, are offering to help states identify security risks as they work AI into those services.
‘The big challenge’
Washington, D.C.’s Office of the Chief Technology Officer is prioritizing accessibility for “every single person in the district,” said citywide CTO Mike Rupert. He said it’s important to ensure that residents can reach city services using older devices.
“That’s the big challenge we’ve always had. Unlike Nike or Adidas, not to pick on them, but they can deal with not making things accessible for 10%,” Rupert said. “It’s much like some of the fancy redesigns of apps and stuff like that — we can’t do that because it doesn’t work for everybody. It has to work on an older phone, it has to work on older browsers. Those are the types of things we’ve always been pretty solid in considering as we roll out new tools, and I think even with AI we’re probably going to have that same thing. We can’t take anyone for granted.”
Stephen Miller, D.C.’s interim chief technology officer, told StateScoop that the district’s technology office is “100% focused on transparency,” ensuring the public knows when the city publishes content created by generative AI, including when users are interacting with chatbots.
Rupert said all of the city’s websites meet the federal Section 508 compliance standard and additionally aim to meet WCAG 2.2, the version of the Web Content Accessibility Guidelines that addresses users with low vision, learning disabilities or motor disabilities. He said all city websites are scanned every three days.
Miller predicted that AI will help.
“AI tools in general, they’re going to help. They’re going to help beyond what the district government is doing itself,” Miller said. “These AI tools are going to be built into their personal computers and into their phones, so when it comes to those accessibility aspects, they’ll be able to work on their own device to get advice on how to have something more accessible. … Some of it will be on us to make sure that we’re providing the right information to them and make sure that we’re opening things up. But in general, I think this revolution is going to expand a bit beyond government reach and we’ll be able to take advantage of that as well.”
Owning the algorithm
Maryland is striving for responsible use of AI and generative AI, both internally and in its digital services, officials said. The state in January unveiled four major IT initiatives: an executive order on AI, a new digital services team, a cybersecurity partnership with the National Guard and a digital accessibility policy. Maryland CIO Katie Savage gave the example of using AI to predict what services a community struggling with drug addiction might need.
“We want to make sure that the technology development is secure and accessible, and we want to make sure that any algorithms that we create, we trust the data source,” Savage said. “We want to make sure that we own the data, that we own the algorithms, that we understand the third-party terms and conditions and that’s something that we can really govern internally. But that’ll impact Maryland residents to the extent that they interact with something.”
Barnes, the Indiana CIO, said any content his state’s agencies publish gets “regular rigorous reviews” to ensure that the information is compliant with accessibility standards.
“There’s no question that AI has the ability, and where it’s going, and where the market is fastly adopting to, has the ability to help us make these things a lot more accessible, a lot more available and a lot more consumable,” Barnes said. “One thing that I will tell you is Indiana will not let AI be the full determining factor for whether or not it is ready for that consumption.”
‘Force multiplier’
James Collins, a former Delaware CIO who now works for Microsoft’s state, local and higher education business, told StateScoop that AI enables cybersecurity threats that make it “tougher to defend your data,” but that it can also help with defense.
“[Microsoft has] a security Copilot because our experience is that many of our customers don’t have enough resources to sufficiently protect their enterprises,” Collins said, referencing the company’s chatbot product. “We want to put a tool in their hands that will bring together the right insights and allow them to be taking the most important actions on the intel that they’re getting from their environment. We also want to use AI to automate some of those responses. So it’s a force multiplier in the security space. While it will enable some attacks, we’re actually putting tools in our customers’ hands to mitigate and address those attacks.”
Savage said Maryland is looking to use AI to parse its cybersecurity incident reports. She said her agency does cybersecurity assessments and wants to ensure that the state’s new task force has the support it needs to fix any problems it finds.
“Now we can start to flag in constituent services how urgent is this problem,” Savage said. “[AI] provides an analysis: ‘What are the frequency of problems that we’re having?’”
Officials in the Washington, D.C., city government are also focused on the safety and accountability of AI tools’ output. Miller said AI security is a “core component” of the city’s current work and that humans are kept in the loop as tools are tested ahead of release.
“We’ve worked three years, [Rupert] and I and many others here in OCTO, to make sure that D.C. government is a trusted source of information,” Miller said. “When you come to DC.gov, and you say I’m looking for this particular service, that you’re getting that service from DC.gov. We think with AI, it’s not going to be any different.”
Barnes said he wants Indiana state employees to be equipped with the knowledge and skills to employ AI so that agencies are “consuming [AI] the right way.”
“It is imperative that we, in our agency and other agencies as well, really start to understand what AI means, how AI tools are being developed, where there are concerns and, more importantly, where those opportunities that can engage our teams better and improve the systems and solutions for the services that we are creating and providing,” Barnes said. “We cannot sit back and rely on the message from the vendors that it works and that it’s safe in order for us to say, OK, let’s move forward.”
This story was featured in StateScoop’s Special Report: Digital Services.