Report from the Washington Post
In Brief – The State Department is closing an office designed to counter foreign online disinformation, one that had been dogged by conservative criticism that it was part of US Government efforts to block conservative viewpoints online. The Counter Foreign Information Manipulation and Interference office, known as R/FIMI, was built from the remnants of the Global Engagement Center (GEC), a larger office at the State Department that originated in 2011 efforts to counter online ISIS radicalization and saw its remit expanded in 2016 amid charges of Russian efforts to influence that year's elections. It became increasingly active in international anti-disinformation circles and supported organizations that many on the right believed were ideologically slanted. The GEC was closed late last year when congressional Republicans blocked its funding.
Context – Before R/FIMI there was the GEC, and before the GEC there was DHS's Disinformation Governance Board, which flamed out after videos surfaced of its proposed Executive Director singing progressive parody showtunes that lampooned conservative viewpoints as disinformation. Few digital policy issues unite conservatives more solidly than the belief that online content moderation by Big Tech, at least before Elon Musk bought Twitter, relied on slanted rules and punished conservatives for challenging the ideological and cultural views of the largely Bay Area-based corporate leadership. President Trump and several of his cabinet members claim to have been targets of anti-disinformation activists, and countering such efforts is a top priority of the Administration's tech regulators. The ideological conflict is infiltrating the AI ecosystem as well. The largest generative AI companies use "guardrails" that essentially police their tools by tailoring or blocking outputs on sensitive topics; conservative commentators claim the guardrails skew left, while others argue some are slanted rightward. Intentional online disinformation appears to have an AI branch as well: malign actors are reportedly using AI tools to create false content intended to manipulate how large language models operate and respond to user queries.
