
Nonprofit behind FTC complaint about automated fraud-detection software hopes for more responsible AI use

Grant Fergusson, an author of a recent Federal Trade Commission complaint against Thomson Reuters, urged governments to be careful about which information they provide to powerful AI algorithms.

The group behind the recent Federal Trade Commission complaint against Thomson Reuters’ automated public-benefit fraud detection software, which is used by 42 state governments, hopes the filing will bring about more responsible use of artificial intelligence by governments and the vendors they contract with.

The complaint, filed last Tuesday by the nonprofit research organization Electronic Privacy Information Center, or EPIC, targets Thomson Reuters’ Fraud Detect product, AI-powered software that draws on personal data, such as social media information, credit reports and housing records, to predict whether public benefits applications are fraudulent. EPIC’s complaint claims the tool uses this personal data to make predictions in violation of several federal AI rules and that it frequently points to fraud where none exists.

Grant Fergusson, an author of the complaint and an Equal Justice Works fellow at EPIC, told StateScoop that he believes companies, governments and individuals stand to learn from Thomson Reuters’ recent missteps.

“People give a lot of data to the government expecting that the government will use it properly and safely,” Fergusson said. “They don’t expect that a government will turn to a private company that is scraping your data, that is looking at your social media profiles, that is taking every little bit about you to make — ostensibly — government decisions about you.”


‘Some concerning things’

Fergusson said EPIC’s investigation kicked off after it learned about a similar AI system being used in Washington, D.C., to determine the validity of public benefits applications during the COVID-19 pandemic in 2021. EPIC filed records requests for contracts between D.C., several state governments and companies offering fraud-predicting software. At the time, states were contending with unprecedented levels of fraud in pandemic relief programs, fraud that would ultimately see hundreds of billions of dollars stolen.

Once EPIC received the contracts, Fergusson said, it found they were often written by the companies themselves, were difficult to read and offered little to no information about the companies’ data practices or how their algorithms worked. That opacity, he said, is why EPIC’s investigation leading up to the complaint took three years.

“We started to get a trickle of information back about the system from both D.C. and a few other states that we sent similar records requests to, and we found some concerning things,” Fergusson said. “Concerning both because the information we found suggested that they were using data and analytics that didn’t make sense for fraud detection — things like social media profiles, how far you were traveling for groceries — all kinds of random assortments and information. And also concerning because there was information that we weren’t receiving from the government.”

Fergusson said Thomson Reuters in 2020 purchased the software, then called FraudCaster, from a company called Pondera Solutions. After the acquisition, Fergusson said, Thomson Reuters began “roping in” the fraud detection software, connecting it to the company’s already extensive data ecosystem.


‘Taking control away’

Thomson Reuters’ data ecosystem includes the CLEAR investigative platform, its searchable database of billions of public and proprietary commercial records drawn from dozens of sources. The integration, Fergusson said, added new information that the fraud-detecting algorithm could check to make its predictions. The CLEAR platform is currently the subject of a class action lawsuit in California against Thomson Reuters for selling information from its database, without consent, to third parties such as private companies, law enforcement and U.S. Immigration and Customs Enforcement.

However, the massive amount of data available to the software did not make its predictions more accurate, Fergusson said. In California, Thomson Reuters’ algorithm flagged 1.1 million of 10 million unemployment insurance claims as “suspicious.” More than half of those flagged claims were actually legitimate, according to the FTC complaint, and more than 600,000 people had their benefits suspended as a result.

“When you input a lot of this data into an AI system, without understanding what that data means, without understanding how accurate it is, without doing any sort of testing or monitoring around what that data quality looks like — a lot of the issues we see in that data, errors and biases become the outputs of the AI system,” Fergusson said.

Fergusson said there “has been a lot of great work at the state and federal level to set down guardrails” around the use of AI, but state benefits administrators should still exercise caution when feeding data to AI models.


“A lot of these AI systems are taking control away from state agencies without giving them the amount of knowledge and oversight they need to make sure that things are happening responsibly,” Fergusson said. “Don’t be swayed by fancy AI snake oil or fantastical claims about what an AI system could do. We’re not there yet. We won’t be for years.”

He added that while Thomson Reuters’ Fraud Detect isn’t the only AI software making public-benefits decisions (there are dozens of systems, he said, that plug into state-run programs for processing applications), EPIC is hoping for an outcome similar to the FTC’s recent decision about Rite Aid’s use of facial recognition software. The commission banned the pharmacy chain from using the tech to predictively spot shoplifters in some of its stores.

“We see this as the next step in that long, slow march toward responsible AI,” Fergusson said. “Make [these companies] approach this responsibly — to do the bare minimum that we expect our government to do with these AI systems, which is the bare minimum that we as consumers would want.”
