Robby Starbuck Settles with Meta on AI Defamation and Will Advise on Bias

Aug 1, 2025

Report from the Wall Street Journal

In Brief – Robby Starbuck, a conservative activist who has pressured companies to change their DEI practices, has settled a defamation lawsuit he brought against Meta alleging that the company’s AI tool smeared him by falsely asserting he participated in the January 6 riot and was linked to QAnon. Starbuck will serve as an advisor to Meta on how it can mitigate ideological and political bias in its AI tools. In April, in a lengthy blog post that preceded the Starbuck matter, Meta detailed the development, training, and operation of its openly available Llama AI models, said that leading LLMs “historically have leaned left when it comes to debated political and social topics” based on how they have been trained, and stated that the company’s goal was “to remove bias from our AI models”.

Context – On the issue of AI hallucinations as defamation, the first major lawsuit, a Georgia state court case involving ChatGPT, was recently dismissed, with the judge noting in her order that OpenAI worked hard to reduce errors and extensively warned users that its system could make them. That said, ideological bias and “hallucinations” are different things. The initial Starbuck complaint about untrue statements read like a hallucination suit. Bias, however, especially as Meta defines it, concerns intentional answers: training data sets skewed in ways some consider ideological, reinforcement learning methods some see as ideological, and the “guardrails” all companies use to limit or direct AI models, which can themselves appear biased. No digital issue unites conservatives more than the view that content moderation on the big digital platforms has been slanted against conservative viewpoints because the main companies operate in what is perceived as a progressive Bay Area monoculture. Similar complaints of ideological bias in AI chatbots, especially ChatGPT, quickly became a point of contention, and the Trump AI Action Plan includes an executive order against “Woke AI”. Of course, without agreement between left and right, everything about bias is subjective. A range of platform options seems the best antidote.
