Chatbots spew conflicting, misleading, ‘egregiously’ wrong info on voting with disabilities

Researchers tested five popular large language models and found that all of them produced answers that lacked nuance or were sometimes outright wrong.
Research published Monday by the Washington nonprofit Center for Democracy & Technology shows that the generative artificial intelligence models powering many chatbots often do not provide reliable information when asked about voting with disabilities.

The report, titled “Generating Confusion,” concludes that a quarter of responses from the most popular generative AI models could “dissuade, impede or prevent” users from voting, because the responses were frequently inaccurate or misleading. After testing five of the most popular large language models — including Gemini 1.5 Pro and ChatGPT-4 — with a series of prompts about voting information relevant to people with disabilities, researchers concluded that responses often lacked nuance and were occasionally egregiously wrong.

“An inaccurate answer to a simple question, such as how to vote absentee, could impede the user’s exercise of their right to vote. There are numerous opportunities for error, including potentially misleading information about eligibility requirements, instructions for how to register to vote or request and return one’s ballot, and the status of various deadlines – all of which may vary by state,” the report reads.

Beyond the risk of misleading individual chatbot users, the researchers also worried that the spread of inconsistent or misleading information could broadly undermine public confidence in elections. These issues are further amplified for people with disabilities, the researchers wrote — “particularly considering that the laws surrounding accessible voting are even more complex and varied than those regulating voting more generally.”

While researchers found that instances of bias or discrimination were “exceedingly rare,” they also observed that all five of the large language models during testing produced at least one “hallucination,” or a piece of information with no basis in fact. Errors ranged in severity from minor to egregious.

“[S]ome factual errors were so severe that the entire answer was incorrect and could interfere with a voter’s ability to cast their ballot. In response to a question about whether a Louisiana voter with a disability could receive their absentee ballot electronically, ChatGPT writes that ‘Louisiana does not provide electronic absentee ballots.’ This answer is indisputably false,” the report reads.

In even more harmful cases, large language models sent users on wild goose chases, as when the Llama 2 model was asked about curbside voting, an option that exists in only 27 states and Washington, D.C.

“Llama incorrectly said that ‘election officials are required to provide curbside voting upon request.’ This is among the most harmful types of incorrect information in the responses we assessed. Instead of creating a barrier to voting, a false positive directs people to a means of voting that does not exist,” the report reads.

The report notes that concerns about chatbots spreading false information are becoming more salient as more people turn to chatbots as everyday sources of information and as the technology is integrated into widely used services, such as search engines. The Washington Post reported last year, for example, that Amazon’s virtual assistant Alexa sometimes falsely stated that there was fraud in the 2020 election and that the election had been stolen from Donald Trump. In fact, investigations into the 2020 election have revealed no evidence of fraud.

For users, researchers recommended avoiding chatbots as a source of information about voting, exercising caution when receiving information from chatbots, and always verifying that information against trusted sources.

For developers, researchers had a longer list of recommendations, including that they direct users to official, nonpartisan sources of election information, and that they prohibit users from conducting political campaigns or demographic targeting.