Report from TechCrunch
In Brief – OpenAI quietly edited its “Economic Blueprint” soon after release to remove language, in the section on “the proper rules” for AI, that called for AI models to be “politically unbiased by default.” The blueprint is part of a public policy campaign stressing the benefits of investment in AI and in the infrastructure that will power AI development, including datacenters and the electrical generating capacity to run them. OpenAI, maker of the chatbot phenomenon ChatGPT, has a reputation for being pro-regulation, and its government relations team has always been led by Democrats. Allegations that its chatbots are biased in favor of progressive viewpoints have dogged the San Francisco-based company since its first months of public operation. President-elect Trump’s crypto and AI “czar” David Sacks has singled out ChatGPT as “programmed to be woke” and untruthful about politically sensitive subjects.
Context – One big divide in AI policy has been between addressing so-called existential risks, essentially the possibility that some super-powerful AI system will go rogue someday and destroy humanity, and addressing more tangible near-term problems, such as AI systems fostering discrimination because their training data or algorithms are biased, or being used to further hatred, misinformation, or political division. In the absence of AI-specific regulations policing present-day risks like hateful content and discrimination, many companies building AI tools have added “guardrails” that effectively police the tools by tailoring or blocking outputs on sensitive topics. It’s not just OpenAI: Google has been heavily criticized, especially by conservative commentators, because its generative AI services appeared to be trained and governed by guardrails that skewed left. All of this parallels the experience of social media companies grappling with objectionable content, including repeated charges that their moderation efforts are ideologically biased. OpenAI’s quick retreat from the inherently partisan topic of “political bias” invites comparison with Meta’s momentous decision to overhaul its content moderation policies.
