State Department Closes Its International Anti-Disinformation Office

Apr 1, 2025

Report from the Washington Post

In Brief – The State Department is closing an office designed to counter foreign online disinformation, one long plagued by conservative criticism that it was part of US Government efforts to block conservative viewpoints online. The Counter Foreign Information Manipulation and Interference office, known as R/FIMI, was formed from the remnants of the Global Engagement Center (GEC), a larger office at the State Department that was established in 2011 to counter online ISIS radicalization and saw its remit expanded in 2016 amidst charges of Russian efforts to influence that year's elections. It became increasingly active in international anti-disinformation circles and supported organizations that many on the right believed were ideologically slanted. The GEC was closed late last year when congressional Republicans blocked its funding.

Context – Before R/FIMI there was the GEC, and before the GEC there was the DHS’s Disinformation Governance Board, which flamed out when videos appeared of its proposed Executive Director singing progressive parody showtunes lampooning conservative viewpoints as disinformation. Few digital policy issues unite conservatives more solidly than the belief that online content moderation by Big Tech, at least before Elon Musk bought Twitter, had slanted rules and punished conservatives for challenging the ideological and cultural views of the largely Bay Area corporate leaders. President Trump and several of his cabinet members claim to have been targets of anti-disinformation activists, and the issue is a top priority of the Administration’s tech regulators. The ideological conflict is infiltrating the AI ecosystem as well. The largest generative AI companies use “guardrails” that essentially police their tools by tailoring or blocking outputs on sensitive topics; conservative commentators claim the guardrails skew left, while others argue some are slanted rightward. Intentional online disinformation appears to have its own AI branch as well, as malign actors are reportedly using AI tools to create false content intended to manipulate how large language models operate and respond to user queries.
