Understanding generative AI key to harnessing it, state CISOs say
As states race to harness the power of generative artificial intelligence, concerns about potential misuse or unintended consequences of the technology have pushed chief information security officers to reimagine cybersecurity in their states.
“It’s really an opportunity to really do things right,” Vitaliy Panych, CISO for California, home to 35 of the 50 major AI companies, said during StateScoop and EdScoop’s virtual Cybersecurity Modernization Summit on Tuesday. “Any kind of bolstering of innovation is an opportunity to start thinking about ingraining these processes and technical control mitigation strategies of how things should be done.”
Panych said mapping generative AI use cases into low-, medium- and high-risk categories helps determine the level of cybersecurity protection his office needs to provide. Using a chatbot for a financial transaction, for example, is different from using generative AI to close a dam or manage other critical infrastructure, he said.
“So we’re peeling the layers of the onion backwards to overlay those security controls and measure the efficacy of those security controls in a federated governing environment,” Panych said.
Maryland CISO Greg Rogers said during the event that it’s imperative for government agencies to understand the purpose of generative AI before creating guardrails.
“It’s about having a really strong third-party risk management program and treating this like any other technology, but understanding how it works,” Rogers said. “So that as we go through these processes that we want to put any technology through, we are considering the unique aspects of AI to ensure that we’re protecting our data, our systems and our residents.”