An AI chatbot released by the New York City government and designed to help business owners access information has come under scrutiny for sharing inaccurate and misleading guidelines.
A report from The Markup, co-published with local nonprofit newsrooms Documented and The City, reveals multiple cases where the chatbot gave incorrect advice about legal obligations.
For example, the AI chatbot claimed that employers could take tips from their workers and that landlords could discriminate based on tenants' source of income – both incorrect.
A failing chatbot?
The chatbot, launched by Mayor Adams’ administration in October 2023 as an extension of the MyCity portal, is described as “a one-stop shop for city services and benefits” and is powered by Microsoft Azure services. Despite its intent to serve as a reliable source of information drawn directly from city government websites, the pilot program appears to be generating poor responses.
In one example cited by The Markup, the chatbot claimed that businesses could operate as cashless establishments, despite New York City’s 2020 ban on the practice.
In response to the report, Leslie Brown, spokesperson for the NYC Office of Technology and Innovation, acknowledged the chatbot’s imperfections and highlighted ongoing efforts to refine the AI tool:
“In line with the City’s core principles of trustworthiness and transparency around AI, the site informs users that the clearly marked pilot beta product should only be used for business-related content, tells users that there are potential risks, and encourages them via a disclaimer to both double-check its responses with the links provided and not use them as a substitute for professional advice.”
After a months-long honeymoon period, the cracks are starting to show: companies and government agencies are beginning to question the reliability, safety, and security of artificial intelligence, with many imposing bans and other strict restrictions.