Conservative Activist Sues Meta Over Defamatory AI Hallucinations

May 5, 2025

Report from the Wall Street Journal

In Brief – Robby Starbuck, a conservative activist known for successfully pressuring companies to change their DEI practices, has filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and was linked to QAnon. Starbuck says he discovered the problem last August when the allegations were posted on X. He immediately denied the allegations and contacted Meta about the claims, which he says continued to show up in its AI system responses months later. Starbuck’s lawsuit, filed in Delaware Superior Court, seeks more than $5 million in damages. Joel Kaplan, Meta’s chief global affairs officer, took to X to apologize to Starbuck, acknowledging that the company’s fix “didn’t address the underlying problem” and saying he was working to “explore potential solutions.”

Context – AI “hallucinations”, the reality that all generative AI systems sometimes produce results that are false and that cannot be explained by the systems’ creators, provide valuable context for many AI public policy issues. They illustrate that GAI tools are not like traditional databases or search engines. Rather than storing and returning fixed data, they compile responses by predicting which fragments of text best follow other sequences, all based on a statistical model that has ingested and processed many billions of examples, often pulled from all over the internet. Starbuck’s defamation case joins a growing queue of similar claims, with a noteworthy lawsuit against OpenAI in Georgia state court the farthest along. The fact that all AI systems create fictions presented as real could be raised in AI-related copyright lawsuits as evidence that AI systems create new works rather than copy and paste. The issue will also likely surface when courts decide whether Sec. 230 applies to AI outputs, because developers can never be certain what a system will produce. And hallucinations are occurring at the same time that some entities are trying to influence AI outputs by seeding training data with falsehoods. “Fixing” troubling outputs often ends with an AI developer installing “guardrails”, meaning algorithmically overriding the system in some cases, which has prompted accusations of ideological bias against the companies.


Platform Economy Insights produces a short email four times a week reviewing two top stories with concise analysis. It is the best way to stay on top of the news you should know. Sign up for this free email here.