
Can AI help states prepare for Medicaid changes coming in 2027?

Danny Mintz, director of safety net policy at the civic tech nonprofit Code for America, said artificial intelligence tools could help agencies meet a tight deadline to modify their safety-net programs.

The clock is ticking for states to prepare for major changes coming to Medicaid next January, especially as programs face tighter federal oversight, higher error-rate scrutiny and new compliance expectations under changes tied to H.R. 1, last year’s contentious federal budget reconciliation law.

As states scramble to find tools that can help them keep up, artificial intelligence is emerging as a way to help Medicaid agencies work faster, catch mistakes earlier and make it easier for eligible participants to apply for and keep health coverage. Danny Mintz, director of safety net policy at the civic tech nonprofit Code for America, said agencies should first conduct pilot studies and develop evaluation criteria and success metrics to ensure these tools are effective and do not exacerbate health disparities between populations.

“The real criteria are how does this tool perform for people with different demographic characteristics and is there something about a particular tool that seems to be systematically disadvantaging a particular segment of the population who’s receiving that intervention,” Mintz said in an interview.

Beginning next year, H.R. 1 will require eligible low-income adults in Medicaid expansion states to complete at least 80 monthly hours of approved activities, like work, community service, education or work programs, or meet an income-based alternative.


The law also creates exemptions for specific groups, such as pregnant women, caregivers, people with disabilities and those in substance use disorder treatment, which necessitate new state systems for tracking compliance at the time of application and every six months thereafter.

Code for America identified several use cases for Medicaid service delivery, including conversational interfaces, data quality analysis, process automation and risk and fraud detection. It found that using AI to extract data, pre-populate case management software and resolve conflicting or duplicate records into trusted profiles could greatly reduce caseworkers’ workloads.
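Code for America’s report does not prescribe a particular implementation, but a minimal sketch of that pre-population pattern might look like the following, where `extract_fields` stands in for whatever document-extraction model an agency adopts and the field names, conflict handling and review flag are illustrative assumptions rather than any specific product’s behavior:

```python
# Hypothetical sketch of AI-assisted pre-population for a Medicaid case record.
# extract_fields() stands in for whatever extraction model an agency uses;
# all field names and the review logic are illustrative assumptions.
from dataclasses import dataclass, field

REQUIRED_FIELDS = {"applicant_name", "date_of_birth", "monthly_income"}

@dataclass
class CaseDraft:
    fields: dict
    conflicts: list = field(default_factory=list)
    needs_review: bool = True  # a caseworker confirms before anything is saved

def extract_fields(document_text: str) -> dict:
    """Stand-in for a model call; returns whatever fields the extractor found."""
    # A real system would call an extraction model here and parse its output.
    return {"applicant_name": "Jane Doe", "monthly_income": "1200"}

def build_draft(document_text: str, existing_record: dict) -> CaseDraft:
    extracted = extract_fields(document_text)
    draft = CaseDraft(fields={})
    for name in REQUIRED_FIELDS:
        new_value = extracted.get(name)
        old_value = existing_record.get(name)
        if new_value and old_value and new_value != old_value:
            # Conflicting values are surfaced for review, never silently overwritten.
            draft.conflicts.append((name, old_value, new_value))
        draft.fields[name] = new_value or old_value  # pre-populate what we can
    return draft

if __name__ == "__main__":
    draft = build_draft("...uploaded pay stub text...", {"monthly_income": "1100"})
    print(draft.fields, draft.conflicts, draft.needs_review)
```

The key design choice in a sketch like this is that extracted values only pre-fill a draft and flag discrepancies; a caseworker still makes the final call.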

Jenn Thom, Code for America’s director of science delivery, said there are benefits to using AI, but that not all tools are suitable for tasks requiring high accuracy or involving complex calculations.

“Generative AI is the right tool if you have a plan to monitor its performance over time, but it’s not something that you can set and forget,” Thom said during a webinar Wednesday.

“If you need a single, 100% accurate, verifiable piece of data from a structured source, like an account balance or a count of inventory, generative AI is the wrong choice for that situation.”
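To illustrate Thom’s distinction: a single verifiable figure, such as an account balance, can be pulled deterministically from the structured source itself. The sketch below assumes a hypothetical SQLite table standing in for a system of record; the table name and schema are illustrative, not drawn from any state system:

```python
# A single verifiable value, like an account balance, comes from a deterministic
# query of the structured source, not from a generative model.
# The table name and schema here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (case_id TEXT PRIMARY KEY, balance_cents INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('CASE-123', 45000)")

# Exact, auditable answer straight from the system of record.
(balance_cents,) = conn.execute(
    "SELECT balance_cents FROM accounts WHERE case_id = ?", ("CASE-123",)
).fetchone()
print(f"Balance: ${balance_cents / 100:.2f}")  # Balance: $450.00
```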


The secretary of Health and Human Services has yet to issue final rules and guidance for Medicaid work requirements and may not do so until the June 1 deadline, which would give states only six months to prepare. Mintz said that might not be enough time for government agencies that can be slow to implement technological changes.

“We’ve seen states take nine months to make text changes,” he said.

‘Easier for doctors’

Mintz said medical professionals can help states strengthen Medicaid compliance by identifying administrative pain points, advising on where AI can safely reduce paperwork or errors, and helping to select tools that improve patient outcomes without compromising ethics or trust. For example, applicants seeking exemptions, such as for substance use disorder, injury or illness, must complete paperwork that a medical professional verifies.

“Doctors may see these forms rarely or they may see it all the time, but they don’t necessarily have a clear sense of what it means,” he said. “Are they being asked to make a diagnosis of some kind? Is this something that exposes them to some sort of liability? Just providing input can be very helpful in helping states make the sort of systems revisions or forms revisions that will make it easier for doctors to complete.”


Mintz said medical and administrative stakeholders can also help improve data quality, infrastructure and sharing practices, with clear consent and data-privacy safeguards built into AI systems.

“[There are] more enterprise-level AI systems available today than there were two years ago that have strong safeguards in place to silo sensitive data, which includes the medical information and other personally identifiable information that’s common in case records, and keep it out of the training set for AI foundation models,” Mintz said.

‘Less risky AI use cases’

There has always been overlap between recipients of Medicaid and the Supplemental Nutrition Assistance Program, but Mintz said H.R. 1’s new requirements could make it harder for low-income people to apply for those benefits. SNAP’s “able-bodied adults without dependents” rules, which now extend to older adults, require 80 hours of work per month to remain eligible. Medicaid’s requirement focuses on “community engagement,” including work or volunteer hours, for eligible adults.

Receiving Medicaid does not automatically qualify someone for SNAP, and vice versa, meaning applicants typically must apply for each program separately.


“Medicaid looks backward from the date that you applied to see if you were compliant before that date. SNAP looks forward from the date that you applied,” Mintz said.

He said state and local agencies should work together to share data, improve operations and increase transparency to help agencies and applicants keep up with the compliance changes and ensure eligible recipients don’t lose benefits.

As states face time constraints to implement these changes, Mintz reiterated the promise of AI to address challenges in benefit systems and ensure people’s basic needs are met.

“There are some areas where we have strong feelings that there is likely too much risk at this stage, like states using an algorithmic product to make final decisions on a case without human intervention,” Mintz said. “But there are a lot of less risky AI use cases that may have real value for people, given the deep resource constraints that public benefits agencies are facing.”


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.
