Federal Judge Blocks Another California Election Misinformation Law

Aug 1, 2025

Report from Courthouse News

In Brief – A federal judge has blocked enforcement of California law AB 2655, which requires large social media companies to remove “materially deceptive content” about political candidates and election officials from their platforms. Senior District Judge John Mendez said from the bench that he did not need to rule on First Amendment or other constitutional arguments, instead relying solely on federal preemption under Section 230 of the Communications Decency Act, which shields plaintiffs X and Rumble from liability for deceptive content posted by users. The judge said that no part of the “Defending Democracy from Deepfake Deception Act of 2024” was salvageable. He also again expressed serious doubts about AB 2839, which bans digitally manipulated communications that are false or misleading and target political candidates and election processes in the four months before an election. Mendez imposed a temporary injunction on that law last October so that it could not be enforced during the 2024 election cycle, and he again noted its shortcomings. Although he said his latest ruling rested only on Sec. 230, the judge was very direct about both laws’ clear First Amendment defects, which he said had been pointed out to the State Legislature. “But the Legislature goes ahead and drafts it anyway.”

Context – More than 20 US states have enacted laws addressing AI election deepfakes, but none has been enforced. Notably, California’s election deepfake laws don’t even mention AI; they target misinformation generally, which poses an obvious First Amendment problem. It is also a reminder that digital platforms face different misinformation responsibilities in jurisdictions without a First Amendment. In the UK, the Online Safety Act may not yet require social media platforms to block misinformation, but senior Labour Party leaders have called for toughening the law to address the problem after last year’s UK immigration policy riots and incendiary comments by Elon Musk. In Europe, a decision is expected soon on whether X violated the DSA by using algorithms to promote alleged election misinformation and disinformation.
