OpenAI Removes Political Bias Landmine from AI Policy Document

Jan 12, 2025

Report from TechCrunch

In Brief – OpenAI quietly edited its “Economic Blueprint” soon after release to remove language, in the section on “the proper rules” for AI, calling for AI models to be “politically unbiased by default.” The document is part of a public policy campaign stressing the benefits of investment in AI and the infrastructure that will power AI development, including datacenters and the electrical generating capacity they require. The chatbot phenom has a reputation for being pro-regulation, and its government relations team has always been led by Democrats. Allegations that its chatbots are biased in favor of progressive viewpoints have plagued the San Francisco-based company since its first months of public operation. President-elect Trump’s crypto and AI “czar” David Sacks has singled out ChatGPT as “programmed to be woke” and untruthful about politically sensitive subjects.

Context – One big divide in AI policy has been between trying to address so-called existential risks, essentially that some super-powerful AI system will go rogue someday and destroy humanity, and more tangible near-term problems, such as AI systems fostering discrimination because their training data or algorithms are biased, or being used to further hatred, misinformation, or political division. In lieu of any AI-specific regulations policing present-day risks like hateful content and discrimination, many companies building AI tools have put in “guardrails” that essentially police the tools by tailoring or blocking outputs on sensitive topics. It’s not just OpenAI. Google has been heavily criticized, especially by conservative commentators, because its generative AI services appeared to be trained and governed by guardrails that skewed left. This all parallels the experiences of social media companies trying to deal with objectionable content, including repeated charges that their efforts are ideologically biased. OpenAI’s quick retreat from the inherently partisan topic of “political bias” can be seen alongside Meta’s momentous decision to change its content moderation.

Platform Economy Insights produces a short email four times a week reviewing two top stories with concise analysis.