
Federal Judge Withdraws Opinion Riddled with Likely AI Hallucinations

Aug 1, 2025

Report from Bloomberg

In Brief – A New Jersey US district court judge withdrew his decision in a biopharma securities case after lawyers pointed out that his opinion contained numerous errors, including made-up quotes and misstated case outcomes. Judge Julien Neals withdrew his denial of CorMedix Inc.’s motion to dismiss a shareholder lawsuit and stated that a new opinion and order will be forthcoming. Earlier in the month, lawyers in a separate securities case in the same district had pointed out flaws in Neals’ CorMedix opinion, saying it “contains pervasive and material inaccuracies” and “includes quotations from case law and pleadings that cannot be found in the sources cited.” Although none of the formal filings with the court allege that Judge Neals used an AI chatbot to write his opinion, the flaws in the document, including fabricated quotes, mis-referenced cases, and incorrect interpretations of decisions, carry the hallmarks of the AI “hallucinations” that have come to the fore in legal matters as lawyers increasingly rely on AI services and chatbots to prepare case documents.

Context – Chatbot “hallucinations” in legal briefs first made news in mid-2023, less than a year after ChatGPT’s release, when two lawyers were sanctioned in US District Court. Chief Justice John Roberts led off his 2023 Annual Report on the Federal Judiciary by discussing AI technology in the legal system, warning about hallucinations and urging responsible use. It seemed primarily aimed at the bar. Well, similar cases keep popping up, with lawyers facing various sanctions. Of course, everyone should know that they need to check AI work, but everyone knows they should not drink and drive, too. The UK’s Judicial Office released AI guidance for judges and courts in late 2023, including on the use of chatbots in official court duties. The guidance warned of the limitations of the systems, including “hallucinations”, as well as the fact that most AI training material at the time was US-centric and might not reflect UK legal tradition. Everyone, please always check your AI chatbot work. And please don’t drink and drive.

