Why don’t more child welfare agencies use predictive risk models?
In January, the Idaho Department of Health and Welfare plans to launch a predictive analytics model as part of its child welfare program. The goal is to improve case management, reduce unnecessary investigations and enhance prevention services like substance abuse counseling and after-school programs.
The pilot program, which is scheduled to roll out early next year and launch statewide in March, includes three modules: intake, supervision and family mapping. Lance McCleve, a bureau chief at the state health department, said the intake module helps workers prioritize cases by surfacing risk indicators and providing risk scores. The supervision module helps manage casework by showing risk scores and case histories. The family mapping tool identifies family relationships more quickly than manual searches.
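As a rough illustration of what a family-mapping lookup involves, the sketch below walks a small graph of recorded relationships to collect a child's relatives. The record format, the names and the breadth-first approach are hypothetical stand-ins for the kind of record linkage the Idaho module automates, not a description of the actual tool.

```python
# Hypothetical sketch of a family-mapping lookup: given recorded relationships,
# walk the graph to find everyone connected to a child. The records and names
# are invented; real systems link records across large administrative databases.
from collections import defaultdict, deque

# Each pair is a recorded relationship between two people (by case ID).
relationships = [
    ("child_001", "parent_010"),
    ("parent_010", "grandparent_020"),
    ("child_001", "sibling_002"),
    ("parent_010", "partner_011"),
]

graph = defaultdict(set)
for a, b in relationships:
    graph[a].add(b)
    graph[b].add(a)

def mapped_family(person, max_hops=3):
    """Breadth-first search up to max_hops relationship links away."""
    seen, queue = {person}, deque([(person, 0)])
    while queue:
        current, hops = queue.popleft()
        if hops == max_hops:
            continue
        for relative in graph[current]:
            if relative not in seen:
                seen.add(relative)
                queue.append((relative, hops + 1))
    return seen - {person}

print(sorted(mapped_family("child_001")))
```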
“What the model does for us is it says, hey, based on all of these research factors, this case looks like it has a really high likelihood of maltreatment in the near future,” McCleve explained. “And so it helps our workers to be able to pause a little bit and take a more holistic view of the information that we have before making a decision.”
Predictive risk models combine historical data, such as criminal records, hospital visits and substance abuse history, with machine learning algorithms to identify patterns and generate risk scores that help caseworkers decide whether a given child abuse report needs investigation, whether the family needs specific services or whether a child needs to be removed from the home.
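To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a model might be trained and then used to score an incoming referral. The feature names, the synthetic data and the choice of a gradient-boosting classifier are illustrative assumptions, not a description of any agency's actual system.

```python
# Minimal, hypothetical sketch of a predictive risk model for child welfare
# referrals. The features and synthetic data are illustrative only; real
# systems are built on audited historical records with far richer inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-ins for historical administrative data:
# prior referrals, criminal-record flags, hospital visits, substance-abuse flags.
X = np.column_stack([
    rng.poisson(1.0, n),      # prior_referrals
    rng.integers(0, 2, n),    # criminal_record_flag
    rng.poisson(0.5, n),      # hospital_visits
    rng.integers(0, 2, n),    # substance_abuse_flag
])
# Synthetic outcome: whether a substantiated finding followed within some window.
logits = 0.6 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2] + 0.9 * X[:, 3] - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new referral: the model returns a probability that caseworkers
# would treat as a decision aid, not a directive.
new_referral = np.array([[2, 1, 1, 0]])
risk_probability = model.predict_proba(new_referral)[0, 1]
print(f"Estimated risk probability: {risk_probability:.2f}")
```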
Local child welfare agencies implement and deliver services, handling direct family contact, while the state agency sets policy, provides oversight and ensures federal compliance.
In 2021, the National Library of Medicine published a framework for how to use predictive modeling to help identify risks of abuse and maltreatment among vulnerable children and households. However, only a handful of state and local child welfare agencies — including Colorado and Oregon, as well as Los Angeles County and New York City — have since adopted the technology.
To encourage more states to adopt predictive risk models, the Administration for Children and Families at the federal Department of Health and Human Services held a roundtable discussion this month on recent research and successful implementations of predictive analytics. Representatives from 21 state and local child welfare agencies attended. Of those, only four — California, Colorado, Oregon and Pennsylvania — have counties that use them.
“We know child welfare jurisdictions continue to face the urgent challenge of how to protect vulnerable children while making the most of limited resources,” ACF Assistant Secretary Alex Adams wrote in an email. “Every day, caseworkers are tasked with making decisions about which families need immediate intervention and which children are at greatest risk of harm. For too long, these decisions have been made using individual judgement and experience alone.”
The Administration for Children and Families recently launched “A Home for Every Child,” a national initiative aimed at improving the ratio of foster homes to the number of children in the foster care system. The initiative follows President Donald Trump’s November executive order urging states to use prevention measures, such as parenting classes, mental health counseling, housing and community services, often through programs federally funded by the Family First Prevention Services Act, as well as technological solutions — like predictive analytics — to reduce administrative burdens and improve foster care outcomes.
Adams said he hopes to implement these changes within the 180-day timeline set by the executive order.
“This is too big and too important, and there’s too many kids in this system for us to be sluggish,” Adams said in an interview.
‘The gut isn’t good at this’
In the 1990s, child welfare agencies started adopting formal risk assessment, a process to identify hazards, analyze the likelihood and severity of potential harms and determine the best course of action. Years later, they leapt to modern, automated predictive risk models. In 2016, the Allegheny County, Pennsylvania, Department of Human Services began using predictive risk models to inform high-stakes decisions, such as whether to investigate cases of potential child abuse or neglect.
Erin Dalton, director of the Allegheny County Department of Human Services, which serves Pittsburgh, said the tool has reduced racial disparities and helped save time by scoring child welfare cases from 1 to 20. The lower the score, the lower the perceived risk to the child’s safety, guiding decisions on whether caseworkers need to investigate. The model uses data combined from various sources, including behavioral health records and court records.
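The county's exact scoring method isn't spelled out here, but one common way to present model output as a 1-to-20 score is to rank predicted probabilities into twenty equal-sized bins. The sketch below assumes that approach and uses made-up probability values; it shows the general idea rather than the Allegheny tool's actual calculation.

```python
# Hypothetical illustration of turning model probabilities into a 1-20 screening
# score by ranking them into twenty equal-sized bins (ventiles). The Allegheny
# tool's exact method may differ; this only demonstrates the general idea.
import numpy as np

rng = np.random.default_rng(1)
predicted_probabilities = rng.random(1_000)  # stand-in for model output

# Bin edges at the 5th, 10th, ..., 95th percentiles split cases into 20 groups.
edges = np.quantile(predicted_probabilities, np.linspace(0.05, 0.95, 19))
scores = np.digitize(predicted_probabilities, edges) + 1  # scores run 1..20

print(scores.min(), scores.max())                 # 1 20
print(scores[predicted_probabilities.argmax()])   # the riskiest case scores 20
```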
“The gut isn’t just that good at this, it’s just not,” Dalton said, explaining that caseworkers who field calls to the county’s child abuse hotline ordinarily rely heavily on intuition, which can lead to errors. “It’s really difficult to process all of this information, where we don’t know that much about what’s going on. These kind of tools can really help workers sort through that and make better child safety decisions.”
According to a report published in March by the National Association of Counties, local child welfare systems across the U.S. face major challenges with outdated technology and fragmented case management tools. The report found that counties nationwide consistently report that aging systems and outdated data infrastructure slow caseworkers, add frustration and eat into the time staff could otherwise spend directly helping children and families.
“Congress and the administration should provide technical assistance and dedicated funding, for instance through an enhanced federal match, for child welfare agencies to invest in technology and systems upgrades to improve efficiency, cross-systems alignment, and accuracy, recognizing that modernization of technology can further spur innovations that help meet program goals,” the report states.
It’s often difficult for agencies to share data across siloed systems, which can lead to inefficiencies and errors in case planning and reporting.
Dalton said states are hesitant to implement predictive risk models because they fear their data is poor and will produce unreliable results. “Garbage in, garbage out,” the principle that poor-quality input data generally yields poor-quality output, is a common refrain among technologists. In Dalton’s experience, though, the data-sharing infrastructure between agencies matters more.
“We have some of the best integrated data in the country that would allow us to help our workers make better safety decisions at the front door,” Dalton said. “While the data that we have of course has biases in it, it does turn out that these tools actually help our workers make less biased decisions than gut alone or these actuarial tools.”
Jason Baron, an economics professor at Duke University who studies the use of predictive risk models in case management systems at child welfare agencies, said he’s found in his research that predictive risk models exclude race as a predictor and that they have not increased racial disparities in foster care placement rates.
“There’s normally a lot of safeguards that folks implement precisely because of this concern,” Baron said.
In a randomized controlled trial of families in Northampton, Pennsylvania, Baron said, he found that using predictive risk models to manage cases led to a reduction in re-referrals for children in the treatment group, indicating improved safety.
“The tool is purely meant to complement kind of human expertise. We are kind of nudging the supervisor and the workers to pay particular attention to that family, but not in any prescriptive way,” Baron said. “In high-risk cases, we’re not saying, go, remove this child from their home. It just means our model thinks that this child has a very high likelihood of being removed two years later.”
Finding the ‘best option’
Though predictive risk models are lauded by child welfare agencies and researchers, a report published in February by the United Nations examining how child welfare case management works in the United States criticizes IT tools that use data and surveillance to identify and investigate families, arguing they can disproportionately impact low-income and minority families.
More than 75% of children in the U.S. removed from their parents and placed into foster care are taken because of “neglect” allegations, the report shows. But in many cases, those allegations stem from poverty-related conditions like unstable housing, food insecurity or lack of access to medical care — not actual abuse.
“Indeed, far too frequently the system prioritizes removal and family separation over the provision of essential social services,” the report states.
McCleve, Idaho’s health bureau chief, said the state’s pilot program is not intended solely to guide foster care placement, but also, through prevention, to reduce the number of families being repeatedly flagged so that children can stay in their homes.
“Foster care isn’t always the best option for everyone, even if they have some problems in the home. It’s traumatic for children, it’s disruptive for families, and we want to minimize that trauma as much as possible,” McCleve said. “So we’re hoping this will help workers start moving in the direction of prevention.”
Adams, the ACF assistant secretary, said he hopes to change the reporting structure for child welfare agencies, reducing the number of data points required, so that the data collected “truly capture the safety and well-being of children” and are relevant for predictive models.
“Data infrastructure that states are operating under is outdated and slow. When we report our data on state child welfare systems, it’s about two years out of date,” Adams said. “It’s almost impossible to make good, data-driven, evidence-based decisions with data that’s two years out of date.”