
Google Pulls Gemma AI Developer Tools Following Conservative Criticism

Nov 1, 2025

Report from TechCrunch

In Brief – Google has removed Gemma, a family of AI models the company says were intended for developers, not consumers, from its AI Studio after US Senator Marsha Blackburn (R-TN) accused the model of defamation for fabricating accusations of sexual misconduct against her. In a letter to Google’s CEO, Blackburn recounts how Gemma generated an answer about a supposed incident from 1987 in which none of the claims or supporting links were true. Blackburn also criticized a senior Google official who, at a recent Senate hearing, called similar fabrications by Gemma about conservative activist Robby Starbuck “hallucinations” and said Google is “working hard to mitigate them.” Blackburn argued that Gemma’s fabrications are “not a harmless ‘hallucination,’” but rather “an act of defamation” that adds to “a consistent pattern of bias against conservative figures demonstrated by Google’s AI systems.” Google responded with a letter saying that hallucinations are a common technical issue with LLMs and that similar problems can arise with non-partisan names or those associated with other political views.

Context – All generative AI systems sometimes deliver believable-sounding fabrications. The first noteworthy AI hallucination defamation lawsuit ended with a Georgia state court judge dismissing the complaint against OpenAI, noting that the company worked hard to reduce ChatGPT errors and extensively warns users of the problem. That said, ideological bias and hallucinations are different things. Robby Starbuck’s recent defamation lawsuit against Google over Gemma followed a similar lawsuit against Meta’s AI models. The Meta suit was settled with the company agreeing to include Starbuck in its efforts to reduce bias in its AI, which Meta describes as accurate answers skewed in ways that some people could consider ideological due to training data, reinforcement learning methods, and “guardrails.” No digital issue unites conservatives more than the view that big digital platforms discriminate against conservative viewpoints, and similar charges of bias by “Woke AI” have quickly emerged. Of course, judgments about ideological bias are inherently subjective, and left and right rarely agree.

Latest Blog
Major Brazilian Law to Protect Teens on Social Media Goes into Effect

Report from AP News In Brief – A Brazilian law enacted last September to shield minors from harmful online content has taken effect with experts calling it a milestone in the protection of children and adolescents. The Digital Statute of Children and Adolescents does...

European Commission Expands Their DSA Probe of Online Porn Sites

Report from CBC News In Brief – The European Commission has announced that it has preliminarily found four large adult content platforms to be in breach of the Digital Services Act (DSA) for failing to protect minors from being exposed to pornographic content on...

UK Government Targeting Manosphere Content on Online Platforms

Report from The Guardian In Brief – More than 60 Labour MPs have urged Ofcom, the country’s communications and digital regulator, to use its authority under the Online Safety Act to press platforms to better protect young men from risks they argue are linked to...

Google Proposes a Publisher Opt-Out for AI-Enabled Search in the UK

Report from MediaPost In Brief – Google has outlined plans to give publishers more authority over how their content appears in AI-driven search features in response to the consultation by the UK Competition and Markets Authority (CMA) regarding application of the...

Platform Economy Insights produces a short email four times a week that reviews two top stories with concise analysis. It is the best way to keep on top of the news you should know. Sign up for this free email here.
