Anthropic Lawyers Admit Filing a Brief with a Hallucinated Case Citation

May 5, 2025

Report from Reuters

In Brief – Lawyers for the AI company Anthropic submitted a court filing that included a citation "hallucinated" by the company's AI chatbot Claude. The filing was part of expert testimony from one of the company's data scientists in a copyright lawsuit brought by a group of music publishers who claim that Anthropic illegally trained its chatbot on copyrighted music lyrics without authorization. A lawyer representing Universal Music Group told US Magistrate Judge Susan van Keulen that the data scientist's filing cited a nonexistent academic article to bolster the company's argument in a dispute over evidence. When the error was pointed out, Anthropic's lawyers acknowledged that the citation contained mistakes but said that the underlying article was real, that its content supported the point made in the filing, and that the link to the paper was correct. However, they believe that when Claude was asked to properly format the filing's citations, it apparently invented a title for the paper and attributed it to the wrong authors.

Context – Whether it is legal to "train" the neural networks of major generative AI models like Claude on non-licensed copyrighted material is the biggest legal question surrounding AI. In the US, copyright lawsuits are taking center stage, and judges will determine how to apply the fair use doctrine. It is a complex legal question, and both sides have strong arguments. In one of the big cases, involving image-generating services, the judge has said he wants to determine how generative AI systems actually work in order to ascertain whether they store and retrieve some form of copies, or whether they create new things. On that key question, the fact that all generative AI systems sometimes produce realistic-seeming fabrications is telling. The developers themselves are not sure how the systems work, and hallucinations, which limit the utility of AI services in business fields where accuracy matters because every output must be checked, are a clear case in point. If developers knew why the systems sometimes simply make things up and present them like any other output, they would have solved the problem.

