
Italian Consumer Protection Authority Targets DeepSeek for Hallucinations

Jul 1, 2025

Report from Reuters

In Brief – Italy’s joint antitrust and consumer rights regulator, the AGCM, has opened a probe into Chinese AI start-up DeepSeek for allegedly failing to sufficiently warn users about the danger of false information arising from so-called “hallucinations” in its chatbot. The AGCM claims that DeepSeek did not give users “sufficiently clear, immediate and intelligible” warnings about the risk of misleading information in AI-generated content. The regulator describes AI hallucinations as “situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information”. In January, another Italian regulator, the country’s data protection authority, ordered DeepSeek to block access to its chatbot in Italy after the company failed to address concerns that it was not complying with the General Data Protection Regulation. The company argued that it was not subject to local regulation in Italy, and the app is reportedly still unavailable in Italian app stores.

Context – Italy’s data protection regulator made a splash in early 2023 by banning OpenAI’s ChatGPT for a few months for failing to conform with the EU’s privacy law. Among other issues, the regulator questioned how the GDPR applied to hallucinations, a novel concept at the time. The EU has since enacted its comprehensive AI Act to regulate all AI applications, including chatbots. However, regulatory overlaps, in particular those involving member state data protection authorities, remain an industry concern. DeepSeek being based in China is also an issue for some governments. On the topic of AI hallucinations, however, it is worth noting that requiring very clear warnings seems the most reasonable and sensible policy. All Generative AI (GAI) systems sometimes produce false results, and many of the latest and most advanced systems appear to be plagued more by the problem, not less. In a recent court order dismissing a hallucination-based defamation lawsuit against OpenAI in Georgia, the judge cited the company’s “extensive warnings to users that errors of this kind could occur.”

