Report from Reuters
In Brief – Italy’s joint antitrust and consumer rights regulator, the AGCM, has opened a probe into Chinese AI start-up DeepSeek for allegedly failing to sufficiently warn users about the danger of false information arising from so-called “hallucinations” in its chatbot. The AGCM claims that DeepSeek did not give users “sufficiently clear, immediate and intelligible” warnings about the risk of misleading information in AI-generated content. The regulator describes AI hallucinations as “situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information”. In January, another Italian regulator, the country’s data protection authority, ordered DeepSeek to block access to its chatbot in Italy after the company failed to address concerns that it was not complying with the EU’s General Data Protection Regulation (GDPR). The company argued that it was not subject to local regulation, and the app is reportedly still unavailable in Italian app stores.
Context – Italy’s data protection regulator made a splash in early 2023 by temporarily banning ChatGPT for failing to conform with the EU’s privacy law. Among other issues, the regulator questioned how the GDPR applied to hallucinations, a novel concept at the time. The EU has since enacted its comprehensive AI Act to regulate all AI applications, including chatbots. However, regulatory overlaps, in particular those involving member state data protection authorities, remain an industry concern. DeepSeek’s Chinese origins are also a concern for some governments. That said, on the topic of AI hallucinations, it is worth noting that providing very clear warnings seems the most reasonable and sensible policy. All generative AI systems sometimes produce false results, and many of the latest, more advanced systems appear to be plagued more by the problem, not less. In a recent court order dismissing a hallucination-based defamation lawsuit against OpenAI in Georgia, the judge cited the company’s “extensive warnings to users that errors of this kind could occur.”
