
Idaho lawmakers consider banning pro-DEI chatbots inside state government

"You wouldn't think a good state like ours would be spending money on this,” Idaho state Rep. John Shirts said during a recent legislative hearing.

Until recently, a company called Recidiviz, which develops a criminal-justice data platform used by more than a dozen states, hosted a page on its website dedicated to diversity, equity and inclusion. It noted that equity was “core” to the firm’s mission and workplace, “fundamental to our team culture,” and that DEI principles played “an intrinsic role in how we build our team, how we build our tools, and how we evolve our understanding of the issues we work on and our role in addressing them, day after day.”

The company proudly advertised its book club, called Recidireads, in which staff discussed academic studies, books and articles related to criminal justice. It described the company’s regular training sessions on implicit bias, after-hours events centered on criminal-justice reform, and its celebration of Juneteenth as a staff holiday, augmented each year by suggested reading and discussion groups “focused on the intersection of race and the criminal justice system.”

But sometime within the last 11 months, the page was removed from the company’s website, along with all other mentions of D, E or I. Using slightly blander verbiage, the website currently describes a company that continues “to drive better outcomes for people impacted by the criminal justice system.” Another page explains the firm’s “north star: safely and fairly shrinking the criminal justice system.” Recidiviz, the website’s new language explains, builds “technology that reduces the number of people in prison, increases community re-entry success, and overall makes communities safer.”

The company might have changed its website last summer, when an Idaho resident mentioned to his state representative that the state’s Department of Correction was doing business — to the tune of about $1 million annually — with Recidiviz. State Rep. John Shirts described to Idaho’s legislature last month how he’d personally investigated the matter, talking to officials at the corrections department, and reading DEI language on the company website that “doesn’t align with our values as a state. … You wouldn’t think a good state like ours would be spending money on this.”


Speaking before the Idaho House of Representatives’ Environment, Energy and Technology committee, Shirts described a piece of legislation that has since passed the House and will be considered by the state’s Senate. House Bill 687, he said, includes much of the same language used in an executive order President Donald Trump issued last summer, called “Preventing Woke AI in the Federal Government.” Adopting the same spirit as the federal action, Idaho’s bill would only allow state agencies to procure or use large language models, the technology behind modern chatbots like ChatGPT, if they are “developed and implemented” in accordance with two key principles: “truth-seeking” and “ideological neutrality.”

The viability of this proposal (and the White House’s) has been disputed by observers of all levels of technical ability, including members of the Idaho House when they first learned of it. “How would an agency inside our state know that a large language model is doing that?” Rep. Ben Fuhriman, one of Shirts’ fellow Republicans, asked during last month’s committee hearing. Fuhriman recalled the state’s concerns with Twitter, before “Mr. Musk bought them,” then added: “Twitter turned into a large language model through Grok and is doing things. What if it’s bought by somebody else? You don’t know what the algorithms being used are. Or what if Facebook begins to implement large language models? And is the agency not allowed then to communicate or work through these technologies which could morph and change over time?”

Shirts, who has continually referred to LLMs as “language learning models,” was unable to provide a cogent answer. He waved away his colleague’s concerns, suggesting they would somehow be resolved by the state’s contracting process, before concluding: “We don’t want them out there using a DEI AI language learning model, even if it’s free. I think this puts some ground rules in place to say, hey, this is where we are as a state. We want unbiased AI and we don’t even want you using it if it’s free.” Fuhriman later voted in favor of the bill.

Rep. Steve Tanner, a Republican who would also vote in favor of the bill, likewise said during the hearing he was unsure of “the knowability” of DEI in LLMs, and asked if there was an industry standard for evaluating such a thing. Shirts deflected: “I don’t pretend any bill is going to solve everything, but I think it’s good for a start in the right direction.”

Defining neutrality


Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, a Washington nonprofit that advocates for digital rights and freedom of expression, explained in a recent interview with this publication that the absence of industry standards, or even consistent definitions, surrounding principles of “truth-seeking” or “ideological neutrality” largely accounts for why Idaho’s legislation is so problematic: “A lot of the definitions we’re going to be talking about are going to be very slippery. They can be really broad, they can be wielded in specific instances to target certain kinds of speech or certain kinds of viewpoints or certain kinds of tools. But there isn’t necessarily an agreed-upon definition of … what constitutes unbiased.”

Anex-Ries also pointed out the inconsistency of demanding ideological neutrality while simultaneously calling out specific viewpoints as unwelcome. The terms cited in Shirts’ bill read like an encyclopedia of the political right’s favorite cultural punching bags: “critical theory,” “unconscious or implicit bias, microaggressions, internalized racism, cultural appropriation, structural equity, settler colonialism, group marginalization, systemic oppression, social justice, institutional or systemic racism, white fragility, racial privilege, disparate impact, intersectionality, sexual privilege, patriarchy, gender theory, queer theory, neopronouns, transgender ideology, misgendering, othering, deadnaming, heteronormativity, allyship” — and in case any were missed — “or any other related formulation of these tenets or concepts.”
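The practical difficulty Fuhriman and Tanner raised, how an agency would actually detect such concepts, becomes concrete if you imagine the most obvious implementation. A minimal sketch, assuming a hypothetical agency screens vendor documents against an excerpt of the bill’s term list (the function and sample documents below are invented for illustration), shows how crude keyword matching both over- and under-counts:

```python
# Illustrative only: a naive "DEI screen" of the sort an agency might
# improvise absent any technical standard. The term list is excerpted
# from the bill; the function and sample documents are hypothetical.
BANNED_TERMS = [
    "critical theory", "implicit bias", "microaggressions",
    "intersectionality", "systemic oppression", "allyship",
]

def flag_terms(document: str) -> list[str]:
    """Return any listed terms that appear in a vendor document."""
    text = document.lower()
    return [term for term in BANNED_TERMS if term in text]

# The screen flags a forensic-psychology training that merely studies
# bias, yet passes a document expressing the same concept in other
# words. Neither result says anything about the vendor's "ideology."
print(flag_terms("Training covers implicit bias in eyewitness testimony."))
print(flag_terms("Unconscious attitudes can distort hiring decisions."))
```

A screen like this can say whether a phrase appears; it cannot say what a model or vendor “believes,” which is the question the bill actually poses.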

Beyond the legislation’s apparent ripeness for partisan abuse, some AI researchers have rejected the possibility that “ideological neutrality” is an ideal anyone might ever hope to achieve. A group of researchers led by Stanford University’s Institute for Human-Centered AI published a paper last September acknowledging the ills of the internet age’s widespread political biases: “AI-generated messages can influence people’s attitudes toward controversial issues such as gun control and climate action, and affect political decisions such as budget allocations. Politically biased AI systems may hinder people from independently forming opinions and making choices, a key pillar in a liberal democracy.” But they concluded that the commonly proposed solution of “making AI models politically neutral” is “theoretically and practically impossible.”

The researchers, seven academics from the University of Washington, Stanford, UC Berkeley and UC San Diego, explored a series of ways in which AI models might be muscled into a neutral shape, acknowledging that “neutrality is inherently subjective” and that “on the political spectrum, there is no neutral point, as moderate opinions … are political positions in and of themselves.” Each strategy came with its own concessions. “Refusal,” a relatively safe, but withholding strategy ensuring AI models won’t get into much trouble, also renders them less helpful. Another strategy is “avoidance,” which retains more of a model’s utility, but at the expense of potentially confusing users with its oblique answers. “Reflective neutrality” attempts to mirror users’ biases, perhaps avoiding upsetting anyone, but likely not providing accurate responses.
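Rendered as toy code, the trade-offs the researchers describe are easy to see. The sketch below is an illustration, not anything from the paper: the keyword “classifier,” the canned replies and the strategy names are stand-ins, and a real system would rely on trained models rather than string matching.

```python
# Toy rendering of the neutrality strategies described above.
# Everything here is hypothetical: the prompts, the canned replies,
# and the keyword check standing in for a trained topic classifier.
def is_political(prompt: str) -> bool:
    return any(w in prompt.lower() for w in ("gun control", "climate"))

def respond(prompt: str, strategy: str, user_lean: str = "unknown") -> str:
    if not is_political(prompt):
        return f"(ordinary answer to: {prompt})"
    if strategy == "refusal":      # safe, but withholding
        return "I can't discuss political topics."
    if strategy == "avoidance":    # retains utility, risks obliqueness
        return "There are many perspectives on this complex issue."
    if strategy == "reflective":   # mirrors the user's own bias
        return f"(answer slanted toward the user's {user_lean} views)"
    raise ValueError(f"unknown strategy: {strategy}")

print(respond("Should gun control be expanded?", "refusal"))
print(respond("Should gun control be expanded?", "reflective", "stated"))
```

Each branch buys safety at a cost the researchers name: the first withholds help, the second blurs it, and the third trades accuracy for agreeableness. None lands on a “neutral point,” because, as the paper argues, no such point exists.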

Anex-Ries described the dangers of both-sidesism, another potential neutrality strategy, which could “lend legitimacy to fringe views or those that don’t align with clear social consensus.” He used the example of an AI model placing facts about the Apollo 11 moon landing on the same footing as the widespread conspiracy theories that the event was staged on Earth. Misinformation about space missions is probably relatively innocuous, but it’s easy to see how the stakes would grow if the user happens to be an Idaho state government employee who’s tasked with making decisions about issuing benefits or creating a report that might be read by the governor before he makes a major policy decision.


‘I don’t foresee that as a problem’

If the bill passes, adhering to its requirements would become a critical issue for Idaho state employees, determining which tools they’re permitted to use in their daily work. The bill would provide no lead time, requiring agencies to comply with the new rules immediately. Anex-Ries guessed that many agencies, for fear of accidentally violating the rules, would simply disallow staff from using any AI systems. But when another lawmaker recently asked Shirts about the likelihood of this exact scenario, he simply said he didn’t think it would happen, and didn’t explain why.

AI systems, and large language models in particular, are notoriously opaque, even to their own creators. Popular LLMs, like Anthropic’s Claude or Google’s Gemini, are thought to have been trained on petabytes of data — massive amounts of text (and images) that are converted into numbers and fed through predictive algorithms and artificial neural networks to generate output that bears an impressive similarity to the sort of sentences (and images) generated by human brains. When asked how the Idaho state government might hope to sift through the inner workings of such systems for evidence of DEI, particularly in cases in which companies protect their proprietary source code from public view, Shirts said, “I don’t foresee that as a problem,” and did not explain further. His questioner, David Leavitt, also a Republican, was apparently unconvinced; he later became one of 11 representatives to vote against the bill.
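A toy model makes that description, and the opacity problem, concrete. The sketch below is a bigram counter, about the simplest possible “predict the next word” system; the corpus and outputs are invented for illustration, and real LLMs replace the handful of inspectable counts with billions of learned neural-network weights.

```python
from collections import Counter, defaultdict

# A toy "language model": text becomes numbers (co-occurrence counts),
# and output is whichever word most often followed the previous one in
# training. Real LLMs do the same next-token task with billions of
# opaque neural-network weights instead of inspectable counts.
corpus = "the state procures tools and the state audits tools".split()

follow_counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Most frequent successor of `word` in the training text."""
    options = follow_counts.get(word)
    return options.most_common(1)[0][0] if options else "(unknown)"

print(predict_next("the"))    # -> "state"
print(predict_next("state"))  # -> "procures" (ties break by insertion order)
```

Even in this transparent miniature, the model’s “views” cannot be read off directly; they are smeared across the statistics of its training text. Scale that up to billions of weights and the audit Shirts envisions has no obvious place to start.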

Shirts did not respond to requests for an interview. If he has a thorough explanation of how his proposal would work in practice, he has not shared it publicly. The legislation is, by his own admission, more about providing the state with a legal escape hatch, should it mistakenly find itself in a business agreement with an organization that uses concepts like equity or diversity to guide its corporate practices — the state would be able to ensure “that they aren’t using any DEI in their AI code, and if for some reason we find out later that they are, we can break the contract,” he said during a hearing this month. His bill includes a provision that would require vendors to pay for any “decommissioning or transition costs,” should they violate the state’s official DEI taboo.

The White House met similar criticism last summer when the Trump administration offered its latest executive order, which claimed that, “in the AI context, DEI includes the suppression or distortion of factual information,” particularly on issues of race and biological sex. The Office of Management and Budget last December published follow-up guidance directing federal agencies on how to ensure they only procure AI tools that are “truth-seeking” and ideologically neutral, namely by obtaining “sufficient information” from vendors on whether they meet the government’s two criteria. Anex-Ries said the federal guidance “leaves a lot to individual agencies to figure out how to implement this stuff,” and noted that a state like Idaho, equipped with less procurement and technical expertise than the federal government, might struggle even more to implement such a law.


And if vendors are turned away because they don’t meet the government’s DEI rules, he continued, they “are going to be incentivized to contest those award decisions in court. And if there’s not a clear technical rubric that the decision was made on, this could be a really big risk for agencies to make decisions based off of this.”

‘They’re not telling us what to do’

Shirts has said his legislation was inspired by concerns that a company like Recidiviz might smuggle opposing values into his state, given the erstwhile content of its website. But Tina Transue, the Idaho Department of Correction’s government relations adviser, said most of the tools her department has procured from Recidiviz don’t even use AI: “They’re not telling us what to do with our population based on some other algorithm they’re using.” A spokesperson from the company confirmed in an emailed statement that it has “partnered” with the corrections department since 2020 and that “most of the current Recidiviz tools in Idaho don’t use AI.” (The spokesperson did not respond to questions about changes to the company website in time for publication.)

In Idaho, Recidiviz is primarily used to “operationalize our data,” Transue said. She described a project currently underway, in which data from the state’s offender management system is fed through Recidiviz tools to provide alerts to probation officers doing field work: “They’ll get an alert when somebody is due for a home check or they’ll get an alert when somebody’s due for all these other things. Our POs have an average of 80 to 90 person caseload and managing it without the modern tools is complicated.” (The Recidiviz spokesperson confirmed that the company’s tools “implement IDOC’s own policies, standards, and eligibility criteria to streamline staff work.”)
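Based on Transue’s account, that kind of alerting is closer to a deterministic rules engine than to generative AI: the department’s own policy is applied to offender-management records to surface overdue tasks. A minimal sketch, with invented field names, intervals and dates (nothing below comes from Recidiviz’s or IDOC’s actual systems), might look like this:

```python
from datetime import date, timedelta

# Hypothetical sketch of rule-driven caseload alerts. The 90-day
# home-check interval, field names and records are all invented;
# nothing here is taken from Recidiviz's or IDOC's actual systems.
HOME_CHECK_INTERVAL = timedelta(days=90)

caseload = [
    {"name": "Case A", "last_home_check": date(2025, 11, 2)},
    {"name": "Case B", "last_home_check": date(2026, 1, 20)},
]

def due_for_home_check(record: dict, today: date) -> bool:
    """Agency policy as a rule, not a model: due once the interval elapses."""
    return today - record["last_home_check"] >= HOME_CHECK_INTERVAL

today = date(2026, 2, 15)
alerts = [r["name"] for r in caseload if due_for_home_check(r, today)]
print(alerts)  # -> ['Case A']
```

Logic like this “implements IDOC’s own policies,” in the spokesperson’s phrase, with no language model in the loop, which may be why a ban written around LLMs is an awkward fit for the tools the department actually bought.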

Transue said the department looked for alternatives to Recidiviz but discovered there weren’t any. When asked whether she was concerned about DEI language on the company’s website, she explained that “that was during a prior administration when that was more a part of people’s narrative. We didn’t even think to think about it. We were looking at the product and what they can do for our staff and for public safety in Idaho.”


Idaho Gov. Brad Little on Monday signed a bill called the Idaho Recession Act, cutting $192 million from the state’s budget amid conservative leaders’ fears of a shortfall. The cuts are spread across the state’s functions, falling heaviest on education and health programs, though the Department of Correction, along with the state police and judiciary, also saw reductions. Transue called the state’s fiscal position “a crisis” and remarked that corrections won’t “get more staff anytime soon. Finding mechanisms to save our staff their valuable time is so important, so we really appreciate the work Recidiviz has done for us.”
