The use of facial recognition technology by local and state law enforcement agencies has risen significantly in the week since the Jan. 6 pro-Trump mob stormed the U.S. Capitol, as officials attempt to identify the rioters who entered and vandalized the building.
With many of the participants posting footage of the riot to social media, officials nationwide are relying on software like Clearview AI — which matches photos of unidentified people against publicly posted images — including in jurisdictions that have recently enacted reforms governing police use of the technology.
During a press conference Tuesday, Massachusetts Gov. Charlie Baker suggested that state and local police departments across the commonwealth are searching facial recognition databases to determine whether any of its residents traveled to Washington to participate in the riot. After Baker vetoed a bill containing an outright ban on the technology, Massachusetts recently adopted law enforcement reforms that require officers to obtain a court order or the approval of the state’s Registry of Motor Vehicles before running a facial recognition search.
“One of the reasons I was so aggressive in maintaining that facial recognition technology was because I believed it was an important tool for dealing with situations like the one that took place in Washington last week,” Baker said Tuesday.
So far, more than 70 people nationwide have been charged over their participation in the Capitol assault, though Baker said none are from Massachusetts. Still, many more arrests are expected in the coming weeks and months; the FBI said Tuesday it has already collected more than 100,000 digital images from the violent insurrection, which killed five people, including a U.S. Capitol Police officer.
Much of the work is being carried out by fusion centers where local, state and federal law enforcement agencies exchange information. Baker said Massachusetts’ center has been “very active in dialogue and conversation over what people are hearing, learning and discovering.”
Facial recognition products like Clearview AI — which has sparked privacy concerns over its sprawling database of at least three billion images scraped from Facebook, YouTube and other sites — are a key component of this work. A police sergeant in Oxford, Alabama, told the Wall Street Journal this week that he’s been able to develop “some pretty good suspect leads” that were then forwarded to the FBI. Clearview executives have also said that use of the company’s software jumped 26% the day after the Capitol riot.
But while the technology is being used to track down people who allegedly participated in the deadly riot, law enforcement use of facial recognition software continues to face stiff criticism from privacy activists. The Electronic Frontier Foundation, which has previously argued against Clearview specifically, said Tuesday that the technology could easily be turned on peaceful, constitutionally protected protest activity.
“Make no mistake: the attack on the Capitol can and should be investigated by law enforcement,” read a blog post from the organization. “The attackers’ use of public social media to both prepare and document their actions will make the job easier than it otherwise might be.”
But, the post continued, “Facial surveillance technology allows police to track people not only after the fact but also in real time, including at lawful political protests,” citing the use of the software in making arrests after Black Lives Matter demonstrations last summer.