
OpenAI Removes Political Bias Landmine from AI Policy Document

Jan 12, 2025

Report from TechCrunch

In Brief – OpenAI quietly edited its “Economic Blueprint” soon after release, removing language in the section on “the proper rules” for AI that called for AI models to be “politically unbiased by default.” The blueprint is part of a public policy campaign to stress the benefits of investment in AI and the infrastructure that will power AI development, including data centers and the electrical generating capacity to run them. The chatbot phenom has a reputation for being pro-regulation, and its government relations team has always been led by Democrats. Allegations that its chatbots are biased in favor of progressive viewpoints have dogged the San Francisco-based company since its first months of public operation. President-elect Trump’s crypto and AI “czar” David Sacks has singled out ChatGPT as “programmed to be woke” and untruthful about politically sensitive subjects.

Context – One big divide in AI policy has been between addressing so-called existential risks, essentially the fear that some super-powerful AI system will go rogue someday and destroy humanity, and more tangible near-term problems, such as AI systems fostering discrimination because their training data or algorithms are biased, or being used to spread hatred, misinformation, or political division. In the absence of any AI-specific regulations policing present-day risks like hateful content and discrimination, many companies building AI tools have put in “guardrails” that essentially police the tools by tailoring or blocking outputs on sensitive topics. It’s not just OpenAI. Google has been heavily criticized, especially by conservative commentators, because its generative AI services appeared to be trained and governed by guardrails that skewed left. This all parallels the experience of social media companies trying to deal with objectionable content, including repeated charges that their moderation efforts are ideologically biased. OpenAI’s quick retreat from the inherently partisan topic of “political bias” can be viewed alongside Meta’s momentous decision to change its content moderation practices.

