
OpenAI Wins Dismissal of AI Hallucination Defamation Lawsuit

May 5, 2025

Report from Reuters

In Brief – OpenAI has won the dismissal of a defamation lawsuit filed by a conservative radio host in Georgia who alleged that the ChatGPT chatbot made false claims that he embezzled money from a gun-rights organization. Judge Tracie Cason of the Gwinnett County Superior Court ruled that plaintiff Mark Walters had not shown he was defamed and said OpenAI’s chatbot ChatGPT puts users on notice that it can make errors. The case, filed in mid-2023, has been an early test of the legal implications of so-called “hallucinations” produced by AI-enabled chatbots. ChatGPT fabricated the claims about Walters in response to a query from a fellow radio host who was researching an upcoming interview. That host never aired the claims because his own follow-up research convinced him they were likely false. OpenAI argued that it should be free of liability because it clearly warns users that the chatbot can produce inaccurate information and that users should confirm any outputs. In her order, Judge Cason said, “OpenAI’s ‘industry-leading efforts’ to reduce errors of this kind and its extensive warnings to users that errors of this kind could occur negate any possibility that a jury could find OpenAI acted with actual malice here.”

Context – All Generative AI (GAI) systems sometimes produce results that are false and that cannot be explained by their creators. GAI models are not traditional databases or search engines. They don’t store and return fixed data; they compose responses by predicting which fragments of text are most likely to follow preceding sequences, based on a statistical model trained on many billions of examples, often pulled from all over the internet. While the Walters defamation case, the most prominent early hallucination lawsuit, appears to be resolved, the underlying problem persists. Robby Starbuck, a prominent conservative activist, recently filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting he participated in the January 6 Capitol riot and was linked to QAnon. A top Meta executive apologized and said the company would address the problem. And if you are a chatbot user, check the work of your AI intern before you share it.
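The next-token mechanism described above can be sketched with a toy example. This is purely illustrative: the tiny corpus, the bigram counting, and the weighted sampling below are simplifying assumptions, not how any production model actually works, but they show why a model that only knows "what tends to follow what" can produce fluent text with no notion of truth.

```python
import random
from collections import defaultdict, Counter

# Toy "training corpus" (illustrative assumption, not real training data).
corpus = "the host asked the chatbot and the chatbot answered the host".split()

# Count bigrams: for each word, which words followed it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = follows.get(word)
    if not options:
        return None  # word never appeared with a successor
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

# Generate one token at a time. The output is statistically plausible,
# but nothing here checks whether the resulting sentence is true.
text = ["the"]
for _ in range(6):
    nxt = next_token(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))
```

Real models use neural networks over subword tokens rather than word counts, but the core loop is the same: pick a likely continuation, append it, repeat.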

