After giving wrong answers, NYC chatbot to stay online for testing
After reports last week that a New York City chatbot powered by generative artificial intelligence was providing residents with incorrect information on a range of topics — from housing rights to worker protections — Mayor Eric Adams has said he doesn’t intend to take the tool offline.
The Markup last week reported that the city’s digital assistant, called MyCity Chatbot, had been providing users with misleading, incomplete or sometimes outright false information. As of Wednesday, for example, the bot still tells users that bosses are permitted to “take a cut” of their workers’ tips, which is at odds with workplace laws set by New York City and the U.S. Department of Labor.
Other misinformation spread by the chatbot has concerned concealing funeral home pricing, running cashless stores in the city and landlords' obligations to accept government vouchers.
Rebecca Williams, a senior strategist on privacy and data governance with the American Civil Liberties Union, told StateScoop that leaving the faulty chatbot online is “inexcusable.”
“I think that’s a really ripe example of the inarticulable harms [of AI],” Williams said. “Let’s say somebody went to the chatbot, they tried to know what to do if the landlord isn’t paying for repairs and they get the wrong answer and then they don’t execute their rights. But how do you measure that? It’s hard to even quantify the monetary impact. People expect when they go to a government website for it to be authoritative information that’s accurate, that will help them.”
In addition to displaying a “beta” label, a widespread tech industry practice advertising that a piece of software is not a finished product, the website includes a warning: “Responses may occasionally produce inaccurate or incomplete content. Validate answers on NYC.gov and MyCity Business.”
When asked during a press conference Tuesday why the city hasn't taken down the chatbot for repairs, Adams pushed back at a reporter who characterized the technology as providing answers that are "so wrong."
“It’s wrong in some areas and we gotta fix it. … Any time you use technology, you need to put it in the real environment to iron out the kinks,” Adams said. “You can’t live in a lab. You can’t stay in a lab forever. You must be willing to say I’m going to put it out among the real universe to iron out the next level of perfection. Anyone who knows technology knows this is how it’s done.”
When StateScoop on Wednesday asked the chatbot if businesses “have the right to refuse service to customers who are violent,” the chatbot responded in the negative.
“No, a business does not have the right to refuse service to customers who are violent. The City of New York has a Business Owner Bill of Rights that ensures business owners provide courteous and professional treatment to all customers,” New York’s chatbot responded.
“Only those who are fearful sit down and say, ‘Oh, it’s not working the way we want. Now we have to run away from it altogether.’ I don’t live that way,” Adams said Tuesday. “Every evolution is going to build a better product until the product becomes an excellent, perfect product. And it’s never perfect.”
Adams’ office did not respond to StateScoop’s requests for comment.
“I don’t know what the exact policy solution is,” said Williams, of the ACLU, “but I do know they shouldn’t have had that on the internet. Or as soon as they heard it was wrong, they should have been repairing that situation and taking it offline.”
Corrected April 4, 2024: This story was updated to remove incorrect references to the timeline of the chatbot's "beta" status.