Anthropic Lawyers Admit Filing a Brief with a Hallucinated Case Citation

May 5, 2025

Report from Reuters

In Brief – Lawyers for the AI company Anthropic submitted a court filing that included a citation “hallucination” created by the company’s AI chatbot Claude. The filing was part of expert testimony from one of the company’s data scientists in a copyright lawsuit brought by a group of music publishers who claim Anthropic illegally trained its chatbot on copyrighted music lyrics without authorization. A lawyer representing Universal Music Group told US Magistrate Judge Susan van Keulen that the data scientist’s filing cited a nonexistent academic article to bolster the company’s argument in a dispute over evidence. When the error was pointed out, Anthropic’s lawyers acknowledged that the citation contained mistakes but said the underlying article was real, that its content supported the point for which it was cited, and that the link to the paper was correct. They believe that when Claude was asked to properly format the citations for the filing, it fabricated a title for the paper and attributed it to the wrong authors.

Context – The question of whether it is legal to “train” the neural networks of major generative AI models like Claude on non-licensed copyrighted material is the biggest legal issue surrounding AI. In the US, copyright lawsuits are taking center stage, and judges will determine how to apply the fair use doctrine. It is a complex legal question and both sides have strong arguments. In one of the big cases, involving image-generating services, the judge has said he wants to determine how GAI systems work in order to ascertain whether they store and retrieve some form of copies or whether they create new things. On that key question, the fact that all generative AI systems sometimes produce realistic-seeming fabrications is telling. The developers themselves are not sure how the systems work, and hallucinations are a clear case in point: they limit the utility of AI services in many business fields where accuracy matters, because every output needs to be checked. If the developers knew why the systems sometimes simply make things up and present them like any other output, they would solve the problem.

