Report from Courthouse News
In Brief – A federal judge has blocked enforcement of AB 2655, the California law that requires large social media companies to remove “materially deceptive content” about political candidates and election officials from their platforms. Senior District Judge John Mendez said from the bench that he did not need to reach the First Amendment or other constitutional arguments, relying instead solely on federal preemption under Section 230 of the Communications Decency Act, which shields plaintiffs X and Rumble from liability for deceptive content posted by their users. The judge said that no part of the “Defending Democracy from Deepfake Deception Act of 2024” was salvageable. Mendez also again expressed serious doubts about AB 2839, which bans digitally manipulated communications that are false or misleading and that target political candidates and election processes in the four months before an election. He enjoined that law last October so that it could not be enforced during the 2024 election cycle, and he again noted its shortcomings. Although he said his latest ruling rested only on Section 230, the judge was blunt about both laws’ clear First Amendment problems, which he said had been pointed out to the State Legislature. “But the Legislature goes ahead and drafts it anyway.”
Context – More than 20 US states have enacted laws addressing AI election deepfakes, but none has yet been enforced. California’s election deepfake laws, for their part, don’t even mention AI; they target misinformation more broadly, which is an obvious First Amendment problem. It is also a reminder that digital platforms face different misinformation responsibilities in jurisdictions without a First Amendment. In the UK, the Online Safety Act may not yet require social media platforms to block misinformation, but senior Labour Party leaders have called for toughening the law after last year’s riots over immigration and incendiary comments by Elon Musk. In Europe, a decision is expected soon on whether X violated the Digital Services Act (DSA) by using algorithms to promote alleged election misinformation and disinformation.
