Trump Administration AI Plan Fires a Shot in Chatbot Culture War

Aug 1, 2025

Report from the New York Times

In Brief – President Trump’s executive order “Preventing Woke AI in the Federal Government” directs federal agencies to procure only large language models (LLMs) that adhere to two principles: “truth-seeking,” defined as prioritizing “historical accuracy, scientific inquiry, and objectivity,” and “ideological neutrality,” which includes not manipulating responses “in favor of ideological dogmas such as DEI” and not intentionally encoding ideological judgments into outputs. The effort to disincentivize models that the administration believes are slanted toward progressive viewpoints is paired with an overall AI Action Plan that ramps up federal government AI use and the development of computing and energy infrastructure. Critics raised free speech concerns about the government judging LLM content and warned that attempting to remove some types of content from AI training would itself bias models and limit their effectiveness.

Context – No digital issue unites conservatives more firmly than the view that content moderation on all major platforms was slanted against conservative viewpoints, at least before Elon Musk bought Twitter. They believe platform policies reflected a progressive monoculture in the largely Bay Area-based companies, reinforced by advocates in the advertising, media, and entertainment industries, and by an ecosystem of progressive think tanks, advocacy groups, academic researchers, government agencies, and media outlets. The changes Musk made to X were highly disruptive and drew significant criticism from that content moderation ecosystem. The potential for similar ideological bias within AI chatbots quickly became a point of contention after ChatGPT’s release. Bias can enter from many sources, including the training data, the directions given to human reviewers who grade results, and the “guardrails” that instruct the system to do or not do certain things. Every aspect of content moderation, whether on social media or in AI, is subjective, with no agreement on what is politically left, right, fair, accurate, or objectionable, which is why bilateral fights over online content standards, such as US v. EU or US v. Brazil, are so difficult to truly resolve.
