Trump Administration AI Plan Fires a Shot in Chatbot Culture War

Aug 1, 2025

Report from the New York Times

In Brief – President Trump’s executive order “Preventing Woke AI in the Federal Government” directs federal agencies to procure only large language models (LLMs) that adhere to the principles of “truth-seeking,” which it defines as prioritizing “historical accuracy, scientific inquiry, and objectivity,” and “ideological neutrality,” which includes not manipulating responses “in favor of ideological dogmas such as DEI” and not intentionally encoding ideological judgments into outputs. The effort to disincentivize models the administration believes are slanted toward progressive viewpoints is paired with an overall AI Action Plan that ramps up federal government AI use and the development of computing and energy infrastructure. Critics raised free speech concerns about the government judging LLM content and warned that attempting to remove certain types of content from AI training data would itself bias models and limit their effectiveness.

Context – No digital issue unites conservatives more firmly than the view that content moderation on all major platforms was slanted against conservative viewpoints, at least before Elon Musk bought Twitter. Conservatives believe platform policies reflected a progressive monoculture within the largely Bay Area-based companies, reinforced by advocates in the advertising, media, and entertainment industries, and by an ecosystem of progressive think tanks, advocacy groups, academic researchers, government agencies, and media outlets. The changes Musk made to X were highly disruptive and generated significant criticism from that content moderation ecosystem. The potential for similar ideological bias within AI chatbots quickly became a point of contention after ChatGPT’s release. Bias can enter from many sources, including the training data, the directions given to human reviewers who grade results, and the “guardrails” that instruct the system to do or not do certain things. Every aspect of content moderation, whether on social media or in AI, is subjective, with no agreement on what counts as politically left, right, fair, accurate, or objectionable, which is why bilateral fights over online content standards, such as US v. EU or US v. Brazil, are so difficult to truly resolve.
