Conservative Activist Sues Meta Over Defamatory AI Hallucinations

May 5, 2025

Report from the Wall Street Journal

In Brief – Robby Starbuck, a conservative activist known for successfully pressuring companies to change their DEI practices, has filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and was linked to QAnon. Starbuck says he discovered the problem last August when the allegations were posted on X. He immediately denied the allegations and contacted Meta about the claims, which he says continued to appear in the company’s AI system responses months later. Starbuck’s lawsuit, filed in Delaware Superior Court, seeks more than $5 million in damages. Joel Kaplan, Meta’s chief global affairs officer, took to X to apologize to Starbuck, acknowledging that the company’s fix “didn’t address the underlying problem” and saying he was working to “explore potential solutions.”

Context – AI “hallucinations”, the reality that all generative AI systems sometimes produce results that are false and cannot be explained by the systems’ creators, provide valuable context for many AI public policy issues. They illustrate that GAI tools are not like traditional databases or search engines. They don’t store and return fixed data; they compile responses by determining which fragments of text best follow other sequences, all based on a statistical model that has ingested and processed many billions of examples, often pulled from all over the internet. Starbuck’s defamation case joins a line of similar suits, with a noteworthy lawsuit in Georgia state court involving OpenAI the farthest along. The fact that all AI systems create fictions presented as real may be offered in AI-related copyright lawsuits as evidence that AI systems create new works rather than copy and paste. The issue will also likely arise when courts decide whether Sec. 230 applies to AI results, because developers can never be sure of their systems’ outputs. And hallucinations occur at the same time that some entities are trying to influence AI outputs by seeding training data with falsehoods. “Fixing” troubling outputs often ends with an AI developer installing “guardrails”, meaning algorithmically overriding the system in some cases, which has drawn accusations of ideological bias against the companies.


Platform Economy Insights produces a short email four times a week that reviews two top stories with concise analysis. It is the best way to keep on top of the news you should know. Sign up for this free email here.