News & Insights

May 2025

X Files Lawsuit to Block Minnesota’s Law Banning Deepfake Political Content

Report from Reuters

In Brief – X has filed a lawsuit in federal court to block enforcement of a Minnesota state law that bans the use of AI-generated “deepfakes” to influence an election and threatens those who disseminate such content with fines and jail terms. The social media platform argues that the 2024 law violates the First Amendment right of social media platforms to determine the content circulated on their sites, empowers government officials to censor political speech, is overly vague in its reach, and is preempted by Section 230, the federal law that shields digital platforms from liability for content posted by users. A similar challenge filed last year by a Republican state lawmaker and a conservative social media influencer was rejected by US District Judge Laura Provinzino, who ruled that the plaintiffs lacked standing.

Context – Election interference and non-consensual pornography are the most frequently cited harms from AI tools that can create so-called deepfakes. Although the biggest AI developers have agreed to identify and label AI-generated images created by their services, many experts consider “watermarking” of limited value because it can be circumvented and some AI tools don’t use it. More than 20 states have enacted some form of prohibition on election deepfakes, but none has been enforced, and California’s 2024 election deepfake law was blocked by a federal judge soon after its enactment. The US Senate recently passed legislation on non-consensual pornography, including through deepfakes, with strong bipartisan backing.

Florida Sues Snap Alleging Snapchat Has Illegal Addictive Features

Report from The Hill

In Brief – The State of Florida has sued Snap, operator of Snapchat, for violating HB 3, a state law enacted in 2024 to protect teens from “addictive” social media features such as infinite scrolling, push notifications, auto-play videos, and “likes” and other feedback metrics. Florida’s complaint further alleges that the platform is marketed as safe for 13-year-olds although it can be used for harmful activity, including viewing pornography and buying illegal drugs. HB 3 requires social media platforms to remove users under 14 and to obtain parental consent for 14- and 15-year-old users, neither of which Snap does. The company and digital platform trade groups have argued that the law violates the US Constitution.

Context – Most state laws regulating social media on the premise that it is harmful to teens have been blocked by federal judges, at least temporarily, often for violating the First Amendment. Recent examples include laws enacted in Ohio, Utah, Arkansas, and California. Because these laws depend operationally on age verification, the Supreme Court’s consideration of constitutional challenges to a Texas law requiring age checks for viewing online pornography will help inform the judges reviewing them. Finally, civil lawsuits targeting social media platforms for harming teens are having much better luck clearing initial court hurdles than the state laws are.

UK Ofcom Issues Rules Requiring Platforms to Protect Younger Users

Report from The Guardian

In Brief – Ofcom, the UK online content moderation regulator under the Online Safety Act (OSA), has announced a set of new rules for online sites likely to be used by young people, including the largest social media platforms, search engines, and gaming sites. Dubbed the “Children’s Code”, the new requirements for the “riskiest” services, such as big social media platforms, include implementing “highly effective” age checks to identify users under age 18, tailoring recommendation algorithms for those younger users to filter out a wide range of harmful material, and instituting effective procedures to report and remove dangerous content quickly. Age verification technology plays a central role in the new regime, with Ofcom’s guidance suggesting that platforms verify ages by checking with banks or mobile network operators, or by using photo-ID matching or facial-age-estimation software. While the UK’s digital minister called the rules a “watershed moment”, some online safety campaigners criticized Ofcom as “overly cautious”. Covered platforms will need to meet the new requirements by July 25.

Context – Efforts to regulate online platforms to “protect” teenagers are an increasingly global phenomenon. Online porn is consistently a top target, but social media more broadly is rapidly facing similar demands. France is requiring age verification for adult sites and is working to impose age-based requirements on social media, including requiring parental approval for users under age 15. The US Supreme Court heard oral arguments in January on a Texas law requiring age checks for online porn, and its decision will further inform US courts scrutinizing the flood of US state laws regulating social media to “protect” teens, laws that would operationally depend on age checks. The EU’s Digital Services Act, called a “regulatory cousin” of the UK OSA, directs platforms on how to address a wide range of objectionable content, and Commission regulators are investigating how TikTok and Meta’s top platforms protect younger users. Countries across Asia are actively considering online age limits, and Australia has set a firm minimum age of 16 for social media, with YouTube exempted.

Yelp Antitrust Lawsuit Targeting Google Search Practices Can Proceed

Report from Courthouse News Service

In Brief – A federal judge has rejected key parts of Google’s motion to dismiss Yelp’s antitrust complaint alleging that the digital giant has a monopoly over local search services and related advertising. Yelp, a platform providing local search and reviews that has been a long-time critic of the digital behemoth, argues that Google uses its dominant general search service to unfairly promote its own specialized local reviews service, directing users away from local search platforms like Yelp regardless of the accuracy or helpfulness of their content. US Magistrate Judge Susan van Keulen wrote in her order that Yelp only needed to show a “dangerous probability” of Google having monopoly power in local search for its claims to survive at this point, and that the local search company met the threshold by presenting a range of data on market shares and growth. Yelp’s California state law unfair competition claims will also proceed.

Context – Yelp’s suit aims to build on US District Judge Amit Mehta’s ruling that Google’s general search service is a monopoly built and reinforced by its anticompetitive business deals. That DOJ-led antitrust case is now in its remedies phase. Yelp’s complaint argues that Google uses its monopoly in general search to monopolize adjacent specialized search markets, often called search “verticals”. US regulators have for years declined to pursue complaints from Yelp and other vertical search businesses, and Judge Mehta dismissed similar vertical search preferencing claims in the case Google ended up losing. However, Google’s vertical search competitors have had more legal and regulatory success in the EU. The “Google Shopping” case, the company’s first antitrust loss in Europe, involved vertical competitors, and the EU Digital Markets Act prohibits Google from self-preferencing its verticals in its main search engine. After months of investigation and stakeholder talks regarding Google’s plan to treat vertical competitors more fairly in its European search engines, the European Commission recently issued formal preliminary findings that Google’s plan fails to comply with the landmark digital gatekeeper law.

Ask A Question! [email protected]