Advertisement

People deftly extracted ‘passwords’ from AI chatbots during lab test

During tests led by a software company called Immersive Labs, users of various technical skill levels succeeded in tricking a chatbot to hand over secret information.
chatbot illustration
(Getty Images)

A study published Tuesday by the English software company Immersive Labs found that in testing whether people could exploit generative AI chatbots, the technology was no match for human ingenuity.

The report analyzes a public challenge the company launched in 2023 in which participants with various levels of technical skills successfully tricked a generative AI chatbot into disclosing a password, showing the technology has cybersecurity weaknesses that could easily be exploited by bad actors determined to breach the software’s defenses.

The report found the participants’ persistence was similar to that of cyber criminals, who repeatedly probe networks with various attack techniques until they find vulnerabilities.

As state and local governments search for ways to integrate AI into their digital services and back offices, cyber psychologist John Blythe, one of the study’s authors, told StateScoop it’s imperative they first close the knowledge gap with employees through cybersecurity training programs designed that account for human psychology.

Advertisement

Immersive Labs designed 10 levels of generative AI chatbots, each one more difficult to trick into revealing a secret word. Some participants prompted bots to encode its password in a base-64 number (instead of using the usual base-10 number system). Others simply told the bot to write the password backwards, while others asked the bot to express a password in Morse code.

“People were not only creative, but really able to leverage their problem solving skills and cognitive flexibility,” John Blythe, director of cyber psychology at Immersive Labs, told StateScoop. “A technique that might have worked on one level might not have worked on the next level, so they had to adapt the types of techniques that they used to try and trick the bot.”

Most participants — 88% — were able to extract a password from at least the first level. The report said this shows there’s a relatively low barrier to bypassing basic generative AI security protocols. Researchers said the tests underscore the urgency for organizations to implement stronger security measures, including “data loss prevention checks, strict input validation context-aware filtering to prevent and recognize attempts to manipulate GenAI output.”

Cyber psychology

Cyber psychology is the study of how new technology impacts human behavior, such as social media, technology addictions, online-dating relationships and, of course, cybersecurity.

Advertisement

“In cybersecurity, we focus on understanding what prevents people from engaging in cybersecurity best practices and how we can design interventions and training to tackle those psychological barriers,” Blythe said.

He said psychology can be used to understand the psychological tactics involved in social engineering, when a bad actor manipulates or deceives a person to gain system access, such as phishing emails.

“The attacker mindset, the psychology of hackers and attackers, helps us manage that human element more effectively, which we know contributes to a significant amount of security breaches,” Blythe said.

As state and local governments prepare to enact their budgets for the upcoming fiscal year beginning on July 1, many carve out funding for cybersecurity efforts, including employee training.

Most organizations, Blythe said, focus their cybersecurity training on the closing the knowledge gap amongst their employees. He said they are mistaken in assuming that more training will lower the chance of cyberattacks caused by human error, such as a staff member falling for a phishing scheme.

Advertisement

“What we know from psychology is that simply telling people information very rarely changes their behavior,” Blythe said. “You see that in public health, you see it in drunk driving, climate change and safety.”

Blythe said there are many explanations for why people fall prey to social engineering tactics — they may have poor risk perception or lack confidence. Some cybersecurity policies interfere with how employees normally perform their jobs, which could lead them to ignore their training entirely.

A recent audit of Missouri’s cybersecurity practices found that 20% of state employees failed to complete their required monthly cybersecurity training.

“[Cybersecurity] campaigns tend to be more effective when they target people’s personal lives” said Blythe.

He said traditional cybersecurity training programs tend to be overloaded with corporate jargon and often don’t resonate with employees who may better connect with training that discusses cyber threats to families or celebrities.

Advertisement

“It’s having that clear behavior at the forefront of any campaign, it’s not enough to say, ‘Be careful with Gen AI,'” he said. “That might be used a strong password, install software updates, anything that’s going to really create the hook where it’s gonna lead to that attitude change that you’re really wanting to deliver on.”

Fighting human ingenuity with humans

The clever prompts that participants fed to chatbots during the Immersive Labs experiment were akin to traditional prompt injection techniques that have been used by hackers for decades. A web form that allows a user to enter a snippet of PHP code, for example, could allow that user to do something the system’s designer didn’t intend, like get a list of users or passwords.

Generative AI interfaces, which are backed by reams of data, present an additional security challenge. Noting the relative ease with which novice users found ways to extract passwords, though they were poorly secured, the report recommends organizations form interdisciplinary teams of experts to develop comprehensive policies for generative AI.

Dozens of states have established task forces to identify the risks and benefits of generative AI. Many states have formed policies and laws setting the guardrails for how generative AI can be used inside state government.

Advertisement

California this month announced it will test generative AI tools over a six-month trial period in four departments to address various operational challenges. In February, Oklahoma’s task force submitted its final recommendations to Gov. Kevin Stitt on how the state can use AI to make government more efficient.

The Immersive Labs report also encourages organizations to implement contingency plans and failsafe mechanisms, such as automated shutdown procedures, regular backups of data and system configurations, in case a cyberattack breaches a network or a generative AI system malfunctions. “Employing human oversight and intervention mechanisms alongside systems can provide an additional layer of control and resilience,” the study reads.

Blythe stressed the need to not leave software designers out of the conversation, to push the creation of generative AI tools with built-in cybersecurity protections against prompt injection attacks.

“We need better collaboration between governments, academia, also industry to do research so we can understand what these harms are, but also have a more coordinated effort to design out these potential harms,” he said, adding that there’s a catch.

“An inherent flaw of generative AI is that you can actually never fully design out this type of attack, because human ingenuity will always will always win.”

Sophia Fox-Sowell

Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.

Latest Podcasts