Italian Consumer Protection Authority Targets DeepSeek for Hallucinations

Jul 1, 2025

Report from Reuters

In Brief – Italy’s joint antitrust and consumer rights regulator has opened a probe into Chinese AI start-up DeepSeek for allegedly failing to adequately warn users about the danger of false information arising from so-called “hallucinations” in its chatbot. The AGCM claims that DeepSeek did not give users “sufficiently clear, immediate and intelligible” warnings about the risk of misleading information in AI-generated content. The regulator describes AI hallucinations as “situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information”. In January, another Italian regulator, the country’s data protection authority, ordered DeepSeek to block access to its chatbot in Italy after the company failed to address concerns that it was not complying with the General Data Protection Regulation. The company argued that it was not subject to local regulation in Italy, and the app is reportedly still unavailable in Italian app stores.

Context – Italy’s data protection regulator made a splash in early 2023 by banning OpenAI’s chatbot for a few months for failing to conform with the EU’s privacy law. Among other issues, the regulator questioned how the GDPR applied to hallucinations, a novel concept at the time. The EU has since enacted its comprehensive AI Act to regulate all AI applications, including chatbots. However, regulatory overlaps, in particular involving member state data protection authorities, remain an industry concern. DeepSeek being based in China is also an issue for some governments. However, on the topic of AI hallucinations, it is worth noting that providing very clear warnings seems the most reasonable and sensible policy. All Generative AI (GAI) systems sometimes produce false results, and many of the latest and most advanced systems appear to be more plagued by the problem, not less. In a recent court order dismissing a hallucination-based defamation lawsuit against OpenAI in Georgia, the judge cited the company’s “extensive warnings to users that errors of this kind could occur.”

