
Italian Consumer Protection Authority Targets DeepSeek for Hallucinations

Jul 1, 2025

Report from Reuters

In Brief – Italy’s joint antitrust and consumer rights regulator has opened a probe into Chinese AI start-up DeepSeek for allegedly failing to adequately warn users about the danger of false information arising from so-called “hallucinations” in its chatbot. The AGCM claims that DeepSeek did not give users “sufficiently clear, immediate and intelligible” warnings about the risk of misleading information in AI-generated content. The regulator describes AI hallucinations as “situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information”. In January, another Italian regulator, the country’s data protection authority, ordered DeepSeek to block access to its chatbot in Italy after the company failed to address concerns that it was not complying with the General Data Protection Regulation. The company argued that it was not subject to local regulation in Italy, and the app is reportedly still unavailable in Italian app stores.

Context – Italy’s data protection regulator made a splash in early 2023 by banning OpenAI for a few months for failing to conform with the EU’s privacy law. Among other issues, the regulator questioned how the GDPR applied to hallucinations, a novel concept at the time. The EU has since enacted its comprehensive AI Act to regulate all AI applications, including chatbots. However, regulatory overlaps, in particular involving member state data protection authorities, remain an industry concern. DeepSeek being based in China is also an issue for some governments. However, on the topic of AI hallucinations, it is worth noting that providing very clear warnings seems the most reasonable and sensible policy. All Generative AI (GAI) systems sometimes produce results that are false, and many of the latest and most advanced systems seem to be plagued more by the problem, not less. In a recent court order dismissing a hallucination-based defamation lawsuit against OpenAI in Georgia, the judge cited the company’s “extensive warnings to users that errors of this kind could occur.”

