
For generative AI, Illinois task force seeks tech that’s ‘safe, secure, trustworthy’

Illinois Chief Information Officer Sanjay Gupta said his state's AI task force is targeting "low risk, high reward" use cases first.
Illinois state capitol building (Raymond Boyd / Michael Ochs Archives / Getty Images)

Joining a host of other states, Illinois is taking a hard look at the risks and possibilities of generative artificial intelligence.

The Illinois Senate passed House Bill 3563 last August, creating the state’s Generative AI and Natural Language Processing Task Force. It’s focused on developing AI best practices that ensure public safety, privacy and fairness. Several months after the task force’s creation, Sanjay Gupta, secretary of the Illinois Department of Innovation and Technology and the task force’s leader, shared with StateScoop his thoughts on open-source versus closed-source software, generative AI creators and consumers and how best to balance the technology’s benefits with protections for residents.

Generative AI, he said, is at a critical juncture.

“I think generative AI is at an inflection point which will have a profound impact on the industry, the business technology and public sector for sure,” said Gupta, who spent more than four years as chief technology officer of the U.S. Small Business Administration, where his efforts to modernize the agency allowed it to process more than $1 trillion in loans for the nation’s largest economic recovery effort. (He’s also a former board member of the federal Technology Modernization Fund, where he helped government agencies adopt new technologies.)


Safe, secure, trustworthy

Gupta said thoughtfully deployed AI can revolutionize industries, streamline processes and boost economic growth. He said he prefers that the state adopt open-source software, whose source code is publicly available, because it gives creators and consumers greater transparency into how it works and which datasets it's trained on.

“So from our standpoint, it kind of boils down to the consumer,” Gupta said, speaking of Illinois residents. “It needs to be safe, it needs to be secure, it needs to be trustworthy — those are the three most important things we look for in generative AI, or other forms of AI in the state, to ensure that we know what we’re using. Some of the closed sets don’t have as much transparency or visibility into those things.”

The Biden administration last month issued a lengthy executive order directing federal agencies on how they should use artificial intelligence. It sets deadlines for new studies and urges lawmakers to develop new regulations that prioritize safety and security.

Gupta said he’s also prioritizing safety and security and that Illinois will begin with use cases that carry low risk but high reward, such as creating a “richer” experience for users by improving chatbots and boosting productivity by making it easier for staff to process documents and “synthesize” information from various sources.


“Most public sector organizations have a backlog of processing. We have the same situation in the state here as well,” Gupta said. “And so this improves our capacity to process more of these applications in a more timely manner that gives us some extra bandwidth.”

When the state is ready to pursue more advanced use cases, he said, it may eventually look to generative AI to process benefits applications and make eligibility determinations, though he noted the importance of never fully automating such decisions.

Careful automation

Gupta sits on the state’s AI task force with 20 other members, experts from the academic and private sectors who he said can speak to the complexities of generative AI. He said the task force is also exploring opportunities for the state to apply large language models in public school classrooms, the workforce and cybersecurity.

Gupta said that although AI can automate repetitive tasks and improve their accuracy and efficiency, it’s important never to allow software to unilaterally make decisions that affect humans.


“Yes, we can automate things,” he said. “But the level of automation we’ll have to do is in a methodical way, we’ll start with some of the most mundane and basic things to automate. There will be a recommendation that will be presented, if you will, to a human being or human beings, who will then ultimately make the decision on what to do with the recommendation that is provided by an AI-based system.” 

Gupta acknowledged the potential risks associated with unchecked use of generative AI, including bias, job displacement and privacy concerns. He also pointed to its limitations.

“There’s always going to be some error. Even the best LLMs will always have some holes or gaps. They’re not going to be 100% perfect,” he said. “So we have to be mindful that there’s going to be some level of errors [or] gaps in the knowledge that will be there. Over time, as you learn, the systems learn [and] they will become better.”  

Illinois’ task force is required to hold at least five public meetings — in Chicago, Springfield, Metro East, Quad Cities and Southern Illinois — and summarize its recommendations in a report shared with the governor’s office and General Assembly by the end of next year.

Colin Wood contributed reporting.


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.
