
Conservative Activist Sues Meta Over Defamatory AI Hallucinations

May 5, 2025

Report from the Wall Street Journal

In Brief – Robby Starbuck, a conservative activist notable for successfully pressuring companies to change their DEI practices, has filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and was linked to QAnon. Starbuck says he discovered the problem last August when the allegations were posted on X. He immediately denied the allegations and contacted Meta about the claims, which he says continued to appear in its AI system's responses months later. Starbuck's lawsuit, filed in Delaware Superior Court, seeks more than $5 million in damages. Joel Kaplan, Meta's chief global affairs officer, took to X to apologize to Starbuck, acknowledging that the company's fix "didn't address the underlying problem" and saying he was working to "explore potential solutions."

Context – AI "hallucinations," the reality that all generative AI systems sometimes produce results that are false and cannot be explained by the systems' creators, provide valuable context for many AI public policy issues. They illustrate that GAI tools are not like traditional databases or search engines. They don't store and return fixed data; they compile responses by determining which fragments of text best follow other sequences, all based on a statistical model that has ingested and processed many billions of examples, often pulled from all over the internet. Starbuck's defamation case joins a growing line of similar claims, with a noteworthy lawsuit in Georgia state court involving OpenAI being farthest along. The fact that all AI systems create fictions presented as real is likely to be raised in AI-related copyright lawsuits as proof that AI systems create new works rather than copy and paste. The issue will also likely surface when courts decide whether Sec. 230 applies to AI results, because developers can never be sure of outputs. And hallucinations are occurring at the same time that some entities are trying to influence AI outputs by seeding training data with falsehoods. "Fixing" troubling outputs often ends with an AI developer installing "guardrails," meaning algorithmically overriding the system in some cases, which has drawn accusations of ideological bias against the companies.

