OpenAI Wins Dismissal of AI Hallucination Defamation Lawsuit

May 5, 2025

Report from Reuters

In Brief – OpenAI has won dismissal of a defamation lawsuit filed by a conservative radio host in Georgia who alleged that ChatGPT made false claims that he embezzled money from a gun-rights organization. Judge Tracie Cason of the Gwinnett County Superior Court ruled that plaintiff Mark Walters had not shown he was defamed and said ChatGPT puts users on notice that it can make errors. The case, filed in mid-2023, was an early test of the legal implications of so-called “hallucinations” produced by AI-enabled chatbots. The fabricated claims about Walters were generated for a fellow radio host researching an upcoming interview; they never aired because, after further research, that host concluded they were likely false. OpenAI argued that it should be free of liability because it clearly warns users that the chatbot can produce inaccurate information and that they should verify any outputs. In her order, Judge Cason wrote, “OpenAI’s ‘industry-leading efforts’ to reduce errors of this kind and its extensive warnings to users that errors of this kind could occur negate any possibility that a jury could find OpenAI acted with actual malice here.”

Context – All Generative AI (GAI) systems sometimes produce results that are false and that their creators cannot explain. GAI models are not traditional databases or search engines. They do not store and return fixed data; they compose responses by determining which fragments of text are statistically most likely to follow others, based on a model trained on many billions of examples, often pulled from all over the internet. While the Walters defamation case, the most prominent early hallucination lawsuit, appears to be resolved, the underlying problem persists. Robby Starbuck, a prominent conservative activist, recently filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and was linked to QAnon. A top Meta executive apologized and said the company would address the problem. And if you are a chatbot user, check the work of your AI intern before you share it.
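The “which text follows other text” mechanism can be illustrated with a deliberately tiny sketch. This toy bigram model (an illustration only; production chatbots use vastly larger neural models, not word counts) shows how a system can splice fragments of real training text into a fluent sentence that appears nowhere in its data, which is the same failure mode, at scale, behind hallucinations:

```python
import random
from collections import defaultdict

# Toy "training data": three short sentences.
corpus = (
    "the host denied the claims . "
    "the host filed a lawsuit . "
    "the chatbot fabricated the claims ."
).split()

# Count which word follows which -- the simplest possible
# statistical model of "what text tends to follow other text".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, max_words=8, seed=0):
    """Extend `start` one word at a time by sampling a likely successor."""
    random.seed(seed)
    words = [start]
    while words[-1] != "." and len(words) < max_words:
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

# The output is fluent and every word pair occurred in the corpus,
# but the full sentence may never have -- a fabricated combination.
print(generate("the"))
```

Every two-word sequence the sketch emits is real, yet the whole sentence can be an invention, such as “the host fabricated the claims”; the model has no notion of which combinations are true.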


Platform Economy Insights produces a short email four times a week that reviews two top stories with concise analysis. It is the best way to keep on top of the news you should know. Sign up for this free email here.
