
OpenAI Wins Dismissal of AI Hallucination Defamation Lawsuit

May 5, 2025

Report from Reuters

In Brief – OpenAI has won dismissal of a defamation lawsuit filed by a conservative radio host in Georgia who alleged that ChatGPT falsely claimed he embezzled money from a gun-rights organization. Judge Tracie Cason of the Gwinnett County Superior Court ruled that plaintiff Mark Walters had not shown he was defamed and noted that ChatGPT puts users on notice that it can make errors. The case, filed in mid-2023, has been an early test of the legal implications of so-called “hallucinations” produced by AI-enabled chatbots. The fabricated claims about Walters were generated for a fellow radio host researching an upcoming interview; they were never aired because, after further research, that host concluded they were likely false. OpenAI argued that it should not be liable because it clearly warns users that the chatbot can produce inaccurate information and that they should verify any outputs. In her order, Judge Cason wrote, “OpenAI’s ‘industry-leading efforts’ to reduce errors of this kind and its extensive warnings to users that errors of this kind could occur negate any possibility that a jury could find OpenAI acted with actual malice here.”

Context – All Generative AI (GAI) systems sometimes produce results that are false and cannot be explained by the systems’ creators. GAI models are not traditional databases or search engines. They don’t store and return fixed data; they compose responses by determining which fragments of text are most likely to follow other sequences, based on a statistical model trained on many billions of examples, often pulled from all over the internet. While the Walters defamation case, the most prominent early hallucination lawsuit, appears to be resolved, the underlying problem persists. Robby Starbuck, a prominent conservative activist, recently filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and was linked to QAnon. A top Meta executive apologized and said the company would address the problem. And if you are a chatbot user, check the work of your AI intern before you share it.
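That next-token, statistical character is the root of hallucinations: a model can chain together words that are each individually plausible but jointly false. The toy sketch below illustrates the idea only; the vocabulary and probabilities are invented for this example and bear no relation to how any production chatbot is actually built.

```python
import random

# Invented toy "model": each word maps to candidate next words with weights.
# Purely illustrative; real LLMs learn such statistics from billions of examples.
model = {
    "the": [("defendant", 0.5), ("plaintiff", 0.3), ("court", 0.2)],
    "defendant": [("embezzled", 0.4), ("denied", 0.6)],
    "plaintiff": [("alleged", 1.0)],
    "court": [("ruled", 1.0)],
}

def generate(start, steps, rng):
    """Build a sequence by repeatedly sampling a statistically likely next token.

    Nothing checks whether the resulting sentence is true -- only whether each
    step is probable, which is how fluent-but-false output can arise.
    """
    tokens = [start]
    for _ in range(steps):
        options = model.get(tokens[-1])
        if not options:
            break
        words, weights = zip(*options)
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the", 2, random.Random(0)))
```

The point of the sketch is that "defendant embezzled" can be emitted simply because it is a likely continuation, with no fact-checking step anywhere in the loop.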
