News & Insights

May 2025

Library of Congress and Copyright Office Firings Attributed to AI Legal Fight

Report from the Washington Post

In Brief – President Donald Trump continued his campaign of firing executive branch officials often considered immune to removal without cause when the White House changes partisan hands, dismissing the Librarian of Congress, the top official at the Library of Congress, and soon thereafter the Register of Copyrights, who heads the US Copyright Office, a part of the Library of Congress. The Copyright Office has released a series of reports in recent months on copyright issues raised by the development and use of artificial intelligence, including a pre-publication report on “Generative AI Training” just days before the two senior officials were dismissed. The top Democrat on the House Administration Committee, which oversees the Library of Congress, publicly alleged that the officials were removed because they refused “to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”

Context – You don’t need AI to know that correlation does not equal causation. The legal questions around “training” the neural networks of major GAI models on non-licensed copyrighted material are probably the top legal and regulatory issue surrounding AI, or at least the one with the most lawyers and deep-pocketed companies involved. But do not expect definitive actions by the executive or legislative branches of the US Government. Major copyright lawsuits have taken center stage, and they won’t be resolved quickly. The Copyright Office report on GAI training correctly focuses on how US judges choose to apply the fair use doctrine to the activity. It’s a complex legal question and both sides have strong arguments. But in terms of the two firings, the claim that they were retribution because the report took a strong stand against the AI developers is just wrong. Publishers criticized the report for being too weak in protecting copyright holders. In the EU, with its AI Act, regulators and AI expert groups will play key roles in setting training rules. Even in big markets like the UK and Japan, governments are cautiously deliberating while they wait for legal developments in the US to come into more focus.

US-UK Framework Trade Deal Does Not Include Digital Taxes (Yet)

Report from Politico

In Brief – The framework for a new trade deal between the United States and United Kingdom, hailed by both President Trump and Prime Minister Starmer as the first agreement to forestall much of the tariff increase announced by Trump on April 2nd, does not include a pledge from the UK to remove its digital services tax (DST). The 2 percent tax, imposed in 2020, applies to digital companies generating over $662.37 million in global revenue and $33.12 million from UK users, which primarily impacts major US-based tech companies. Pro-trade groups in the US, including the National Foreign Trade Council, expressed disappointment that the framework agreement did not address the DST, labeling it a key trade irritant. US officials continue to criticize DSTs as discriminatory against American companies and said that the Trump Administration would continue to press the UK to remove the tax in the negotiations to finalize the agreement. A British official acknowledged that the US is pushing for changes but said that no formal process has been established to resolve the dispute.

Context – During the first Trump Administration, France created the first DST to raise taxes on big internet businesses. Other countries, including the UK, followed suit. President Trump argued they discriminated against US companies and responded with tariff threats. The DSTs were delayed, and talks moved to the OECD. During the Biden years, many national DSTs went into effect. The second Trump Administration came out of the gate with executive orders signaling that he would again aggressively fight foreign DSTs with retaliatory tariffs. However, the ability to employ tariffs as a threat against DSTs was overtaken, in a sense, by the US unilaterally imposing massive “reciprocal” tariffs on scores of countries around the world, some with DSTs and some without. India, Brazil, and Italy are examples of countries signaling a willingness to step back on DSTs, while the US Treasury Secretary says that he wants the EU to address the issue in a trade deal, but the EU member states need to arrive at a single position first.

House Republicans Trying to Preempt State AI Regulations in Big Budget Bill

Report from the Washington Post

In Brief – The legislative recommendations proposed by the Chairman of the House Energy & Commerce Committee for the big budget reconciliation bill, which is designed to carry out the major tax and spending policies of President Trump and the narrow Republican majority in Congress, include a provision that would bar states from enforcing laws or regulations governing AI models and systems. It provides some exemptions for state measures that aim to “remove legal impediments” or “facilitate the deployment or operation” of AI systems, as well as those that seek to “streamline licensing, permitting, routing, zoning, procurement, or reporting procedures.” President Trump staked out a strong pro-industry position on AI development and investment during the 2024 campaign, revoked President Biden’s main AI executive order in January, and Administration officials have chided Europe for over-regulating the industry. Supporters of AI regulation criticized the Republican effort. Everything included in the budget reconciliation bill, a special legislative vehicle that can pass the Senate without the usual 60-vote super-majority, must comply with the “Byrd Rule,” which requires each provision to meet a set of tests, including that it primarily involves federal taxes or spending.

Context – Some believe regulatory guardrails will benefit AI development by easing user uncertainty, while others argue they will slow innovation and drive entrepreneurs and investment away. In 2024, the regulatory camp was notching wins. The EU enacted the AI Act. Biden issued his executive order. The UK focused on AI safety, and pressure was building in Japan for AI rules. Colorado was the first state to enact broad AI regulation, and many states pursued narrow AI bills on things like “deepfakes”. But no other state followed Colorado’s lead, and at year’s end California’s governor vetoed broad-based AI regulation. Now, the US, UK and Japan are all moving away from AI regulation, and the large developers are arguing that innovators need freedom to develop lest China dominate AI. Even EU leaders are voicing concern over regulation.

Federal Court Rejects FTC Bid to Overturn Microsoft’s Activision Acquisition

Report from Reuters

In Brief – A three-judge panel of the Ninth Circuit Court of Appeals has rejected the Federal Trade Commission’s effort to overturn District Judge Jacqueline Scott Corley’s 2023 decision denying the regulator’s motion to block Microsoft’s $69 billion deal to buy videogame giant Activision Blizzard while the agency challenged the acquisition in its internal court system. Corley ruled that the FTC failed to prove that Microsoft would have blocked access to popular titles such as Call of Duty on hardware owned by other gaming brands, and rejected its arguments that the deal would have lessened competition in gaming subscription services and cloud streaming. Microsoft’s acquisition of Activision was completed in October of 2023, following approval by the European Commission that May, Judge Corley’s ruling in July, and finally a reversal by the UK Competition and Markets Authority. Nevertheless, the Lina Khan-led FTC continued to battle in court to try to overturn the Corley decision that stymied its effort to block the acquisition.

Context – Microsoft prevailed over challenges to its Activision acquisition in part because of market factors relevant to judicial antitrust review, including that it was not the top provider in any meaningful video game market, that the game creator market had many providers, and that its chief antagonist held a larger market share in game consoles and often made game titles exclusive. At the same time, Microsoft also engaged in a global good behavior campaign aimed at progressive regulators. It helped in Europe, where the company won the key approval from the Commission. Today, Microsoft is a “gatekeeper” under the EU Digital Markets Act, with Windows OS and LinkedIn regulated as “core platform services”. And the company continues to pursue its good behavior strategy, for example recently publicly expressing its commitment to complying with EU laws and regulatory decisions often criticized by other US tech giants and the Trump Administration. That earned plaudits from European Commission Vice President Teresa Ribera, who heads EU competition enforcement and is a digital policy leader.

Apple Appeals “Extraordinary” Judicial Order That Bans App Fees and Rules

Report from the BBC

In Brief – Apple has petitioned the Ninth Circuit Court of Appeals to block a court order from District Judge Yvonne Gonzalez Rogers that requires it to permit all app developers to direct users to external websites for in-app purchases without limitations and prohibits Apple from charging any commissions on those purchases. The company’s motion argues that “a federal court cannot force Apple to permanently give away free access to its products and services, including intellectual property,” and that the judge unlawfully prevents it from controlling “core aspects of its business operations.” Rogers found that Apple willfully failed to abide by her initial injunction ordering the company to stop engaging in “anti-steering” practices that prohibited app developers from informing their users of alternative ways to buy in-app content outside the App Store. Although the judge ruled in Apple’s favor on the federal antitrust claims brought by Epic Games, a giant app developer, she found that the iPhone maker’s anti-steering practices violated California’s unfair competition law. Apple eventually instituted new App Store processes that warned users about pursuing developer prompts to make purchases outside of Apple’s payments system, and charged app developers a 27% commission, which Rogers derided as barely less than the 30% fee she called “supracompetitive”.

Context – Despite ruling that Apple did not violate federal antitrust law, Judge Rogers called Apple an incipient monopolist and has been consistently skeptical that the company deserves to charge 30% commissions. Her initial order used “supracompetitive” 13 times, and in a hearing last year she called Apple’s app fees a “windfall”. An Apple executive responded, “We are running a business.” Although the Epic lawsuit was nominally about payments processing, and the initial injunction about anti-steering, the real issue has always been Apple’s fee level, meaning prices, and who will set them. The Ninth Circuit largely upheld Judge Rogers’ initial ruling in 2023, and the Supreme Court rejected Apple’s appeal of the nationwide reach of the trial judge’s injunction. Maybe the high court will now consider the extent to which a federal judge can take over a company’s pricing.

Oregon Legislature Advances a Forced State Media Funding Bill

Report from the Nelson Star

In Brief – Legislation that aims to force the largest digital platform companies, especially Google and Meta, to pay Oregon media companies when the content they create appears in search results or on the companies’ social media platforms is moving through the Oregon state legislature with backing from key Democratic leaders. SB 686, the Oregon Journalism Protection Act, requires the covered companies either to pay in-state digital journalism enterprises directly, with arbitrators used to set rates when parties don’t reach agreement, or to donate to the Oregon Civic Information Consortium, a non-profit organization established by the bill to support local journalism. Bill backers modeled their effort after Canada’s Online News Act, which was passed in 2023 and recently resulted in Google making the first $22 million of its $100 million in annual payments to Canadian media companies, as well as a similar effort in California, where Canada-style legislation was debated but then set aside after Google agreed to contribute $55 million to a fund to pay California media companies, with California’s state government agreeing to contribute an additional $70 million. As they did in Canada and California, Meta officials have informed Oregon legislators that the company would block media posts on its platforms in the state rather than pay local media companies when the media companies themselves, or users, post the media companies’ content on the Meta platforms.

Context – Google and Meta have faced years of pressure from governments around the world trying to force them to pay local media companies. The most interesting development has been the diverging responses of Google and Meta. When push comes to shove, Google agrees to pay while Meta has been willing to walk away. It happened in Canada and California, and it is playing out in Australia, where the government is proposing to retaliate against Meta with a special tax on large social media companies that don’t pay media companies. Other US states are pursuing taxes aimed at the digital giants, with Minnesota considering a tax on large social media services and Maryland, Washington, and Rhode Island pursuing digital advertising taxes.

Conservative Activist Sues Meta Over Defamatory AI Hallucinations

Report from the Wall Street Journal

In Brief – Robby Starbuck, a conservative activist notable for successfully pressuring companies to change their DEI practices, has filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and that he was linked to QAnon. Starbuck says he discovered the problem last August when the allegations were posted on X. He immediately denied the allegations and contacted Meta about the claims, which he says continued to show up in its AI system responses months later. Starbuck’s lawsuit, filed in Delaware Superior Court, seeks more than $5 million in damages. Joel Kaplan, Meta’s chief global affairs officer, took to X to apologize to Starbuck, acknowledging that the company’s fix “didn’t address the underlying problem” and saying that he was working to “explore potential solutions.”

Context – AI “hallucinations”, the reality that all generative AI systems sometimes produce results that are false and cannot be explained by the systems’ creators, provide valuable context for many AI public policy issues. They illustrate that GAI tools are not like traditional databases or search engines. They don’t store and return fixed data; they compile responses by determining which fragments of text best follow other sequences, all based on a statistical model that has ingested and processed many billions of examples often pulled from all over the internet. Starbuck’s defamation case gets in line, with a noteworthy lawsuit in Georgia state court involving OpenAI being farthest along. The fact that all the AI systems create fictions presented as real should be raised in AI-related copyright lawsuits as proof that AI systems create new works rather than copy and paste. The issue will likely also appear when the question of applying Sec. 230 to AI results is decided, because developers can never be sure of outputs. And hallucinations happen at the same time some entities are trying to influence AI outputs by seeding training data with falsehoods. “Fixing” troubling outputs often ends with an AI developer installing “guardrails”, meaning algorithmically overriding the system in some cases, which has prompted accusations of ideological bias against the companies.

Google Makes First Payments to Canadian News Media Companies

Report from the Toronto Star

In Brief – The Canadian Journalism Collective, an organization distributing some of the funds contributed by Google to pay Canadian news media outlets in exchange for Google being exempted from the media payments scheme created by Canada’s Online News Act, has announced that it has distributed the first $22 million provided by the online giant. Broadcasters will reportedly receive about 30 per cent of the $100 million per year from Google, while publishers will split the remainder. Qualifying news organizations had to meet several criteria, including that they must operate in Canada, have two or more journalists in the country, and be a member of a recognized journalistic association or follow a journalist code of ethics. Canada’s payments regime, which compensates media companies when their news content appears on very large search or social media platforms, currently applies to Google and Meta, but Meta chose to block Canadian news on Facebook and Instagram to avoid having to make payments.

Context – In 2023, Canada was ground zero in the global campaign by media companies to have governments push the largest digital platforms to pay them. The most interesting development was the policy divergence between Google and Meta. Google agreed to make media payments while Meta chose to block news rather than pay a government-set rate for media content. In 2024, the two companies pursued their separate paths in California, and Meta also announced that it would stop media payments in Australia, which was home to the first media payments scheme in 2021. The Australian Government responded to Meta’s plan to stop paying by threatening to enact a new tax in 2025 on large social media companies that won’t voluntarily pay media companies, meaning Meta. One of the Trump Executive Orders threatens trade retaliation against foreign governments that employ taxes or regulations “designed to transfer significant funds” from American tech companies to “favored domestic entities”, which very likely includes the national forced media payments schemes.

DOJ Plans Further Breakup of Google in AdTech Antitrust Case Remedies

Report from TechCrunch

In Brief – The US Department of Justice has informed the federal judge who recently ruled that Google was liable for “willfully acquiring and maintaining monopoly power” in the digital ad space that it will ask that Google be forced to sell two major parts of its ad tech business. The DOJ wants Google to sell AdX, its ad exchange product, straightaway, while carrying out a “phased” sale of DoubleClick for Publishers. The agency also wants Google blocked from operating an ad exchange for 10 years and required to open its ad buying tools, including AdWords, to work equally well with third-party ad tech products. The DOJ filing says that the comprehensive set of remedies “is necessary to terminate Google’s monopolies, deny Google the fruits of its violations, reintroduce competition into the ad exchange and publisher ad server markets, and guard against reoccurrence in the future.” Google opposes breaking up its advertising business and said in its own filing to US District Judge Leonie Brinkema that doing so would “go well beyond the Court’s findings, have no basis in law, and would harm publishers and advertisers.” The company proposed a more limited set of conduct remedies and has indicated it would appeal Judge Brinkema’s initial decision after the remedies trial, which is set to begin on September 22.

Context – Google is on an antitrust losing streak. Federal Judge Amit Mehta found that Google has a monopoly in general internet search and that its use of overly long business deals with phone makers, especially Apple and Samsung, to preference Google Search was illegal. Google must have taken heart last fall when candidate Donald Trump expressed reticence about breaking up the company, but that’s long past, as his DOJ has been pressing Mehta to force Google to sell its Chrome browser in the search case’s remedies trial that wraps up this week. As with the DOJ’s ad tech business breakup plan, Google argues that a massive forced divestiture far exceeds the monopoly law violations and would harm online users. Judge Mehta is expected to issue his remedy decision in August, and Google’s appeal will proceed from there.

President Trump Indicates He Would Again Extend TikTok Deal Deadline

Report from Bloomberg

In Brief – President Donald Trump said he would give social media video platform TikTok another extension of a deadline to sell to a US owner. Trump has said many times that he likes the platform because it helped his 2024 presidential campaign appeal to younger voters. The federal law that requires TikTok’s US operations be sold by China-based ByteDance to address national security concerns was enacted with overwhelming bipartisan support in April 2024. However, US-China relations have become increasingly tense, with the Trump Administration imposing new tariffs on products from China that can reach 145%, while the Chinese Government has responded with a range of trade retaliations, including tariffs of up to 125% on some US products. Trump has said that a deal to sell TikTok to US buyers was very close before the new tariffs were imposed and that he would consider reducing the levies on China somewhat to facilitate a TikTok sale.

Context – There is no legal clarity surrounding the unprecedented TikTok matter. Instead, the most relevant fact appears to be that President Trump went from being the first President to propose banning the app to being its chief US backer. The federal law forcing the sale of the platform survived a review by the Supreme Court, which rejected the First Amendment-based challenges of the company and a group of content creators. The law’s deadline for a sale was January 19, but it gave the President the authority to extend for 90 days the provisions that prohibit app stores and hosting providers from supporting TikTok’s US services if he certified that the time was needed to wrap up a qualified deal to divest the company. Trump did so on January 20. Conservative Republican “China Hawks” condemned the initial extension order, which they said exceeded the law’s provisions. Trump granted a second extension on April 4, with more criticism coming from Democratic backers of the law. Questions remain regarding how the app could operate without the current TikTok algorithm engineered by ByteDance, as well as the implications of the worsening US-China trade relationship for the prospects of a transaction.

US Government Challenges AI Act Code of Practice on General Purpose AI

Report from Bloomberg

In Brief – The US Government is calling on the European Commission to delay the release of the AI Code of Practice for “General Purpose” AI, which is scheduled to be finalized in May and is intended to guide developers of the largest and most capable AI models in complying with the EU’s AI Act. The code, which is being developed by the EU’s new AI regulators and multistakeholder “expert groups” that include representatives of tech firms, copyright industries, academia, and civil society groups, aims to provide AI developers with recommendations to meet the law’s standards for transparency, risk mitigation, and copyright that go into effect in August. Although the code is voluntary, not following the recommendations could mean greater regulatory scrutiny for an AI developer. Critics argue the code sets guidelines that go beyond the bounds of the law and will undermine AI development by imposing onerous regulatory burdens. The US Administration’s letter to the Commission reportedly detailed concerns with both the code of practice and the AI Act at large, and called for the entire AI Act implementation process to be put on hold to address the issues.

Context – When the EU was crafting its landmark AI Act, the Parliament changed course in response to the emergence of ChatGPT and chose to regulate large “foundation models” such as the biggest chatbots. This shift was divisive, and EU-based AI innovators pushed for lesser mandates. The final version planted the flag globally for AI regulation, but that was not without risk. The US, UK, and Japan have each shifted their AI policies in a deregulatory direction since last year, and there are top European leaders calling for the EU to “resynchronize with the rest of the world” to better incentivize investment and innovation in Europe. That intra-European debate is playing out over the Code of Practice, with its third and likely final draft criticized from across the spectrum of stakeholder groups, including by MEPs who led the legislative effort and argue it weakens standards below what the bill’s drafters ever intended.

Congress Passes Bill Targeting Non-Consensual Intimate Images Online

Report from the New York Times

In Brief – The TAKE IT DOWN Act, bipartisan legislation criminalizing the online posting of nonconsensual intimate imagery (NCII) and forcing platforms to quickly take it down, has been overwhelmingly passed by the US Congress and sent to President Trump. The measure, billed by backers as a response to so-called “deepfake” AI nudes, was first introduced last year by Sens. Ted Cruz (R-TX) and Amy Klobuchar (D-MN) and was passed by the Senate in the final days of the 118th Congress. This year, it was quickly refiled, passed the Senate in the opening weeks of the 119th Congress, and won the backing of First Lady Melania Trump, and President Trump called for the House of Representatives to pass the bill in his March address to Congress. Despite broad general agreement on the need to better address NCII, especially as AI-enabled image creation services have become more widely available, some free speech advocates criticized aspects of the bill, including the requirement for platforms to take down AI-generated fakes, the short 48-hour window for platforms to remove content challenged under the law, and provisions they claim could be used to challenge the use of strong end-to-end encryption by private messaging platforms.

Context – Non-consensual pornography and election “disinformation” are the most cited harms from AI-related technology tools that can create “deepfakes”. Although the biggest AI developers have agreed to identify and label AI-generated images created by their services, “watermarking” is considered of limited value by many experts because it can be circumvented and there are AI tools that don’t use the technology. More than 20 states have legislated in some way to prohibit election deepfakes, but none of those laws have been enforced, and California’s 2024 election deepfake law was quickly blocked by a federal judge after its enactment. X recently sued the State of Minnesota to block enforcement of its election content law. Many Take It Down Act backers are certain to try to build on their success in protecting teens from malicious AI uses by next seeking to protect teens from malicious social media uses.

Federal Judge Hammers Apple for Violating Her App Store Fees Court Order

Report from the Wall Street Journal

In Brief – Federal Judge Yvonne Gonzalez Rogers hammered Apple for violating her 2021 unfair competition ruling related to App Store restrictions, issued an order prohibiting Apple from charging any fees on in-app sales made by app developers outside of Apple’s payments service, and took the extraordinary step of referring Apple’s conduct to federal prosecutors for a criminal contempt investigation. “Apple willfully chose not to comply with this court’s injunction,” Judge Rogers said, specifically chiding CEO Tim Cook and alleging misconduct by other company executives. The order is the latest in a legal dispute kicked off in 2020 when the giant game developer Epic Games sued Apple for monopolizing in-app payments on the iPhone. In 2021, Rogers found in Apple’s favor on the federal antitrust claims but ruled that the iPhone maker had violated California’s unfair competition law by prohibiting app developers from informing their users of alternative ways to buy app content outside the App Store, and issued a one-page nationwide order requiring Apple to stop those practices. Apple eventually instituted new App Store processes that extensively warned users if they pursued prompts from app developers to make purchases outside of Apple’s payments system, and charged those app developers a 27% commission, barely less than its regular 30% fee. Apple says it will comply with, and appeal, the latest order.

Context – Despite ruling that Apple did not violate federal antitrust law, Judge Rogers has been consistently skeptical that the company deserves a 30% commission rate. In a hearing last year, she called the fees a “windfall”, to which an Apple witness said, “We are running a business.” From the start, we’ve been saying that Epic’s lawsuits and other app developer complaints about in-app payments were always disingenuous. The issue was fees. App developers want much lower fees. In Europe, the DMA regulators are pressing the same issue. Rather than regulate nuanced Apple policies, Judge Rogers has ordered that Apple can no longer charge any fees on any purchases made outside its payments system. And Apple won its antitrust case.

The US Wants Digital Services Tax Relief in a US-EU Trade Deal

Report from Bloomberg

In Brief – US Treasury Secretary Scott Bessent singled out national Digital Services Taxes (DSTs) imposed by some EU member states on the largest digital companies, which are most often US-based, as something the Trump Administration wants to address in trade negotiations that could reduce the level of tariffs that the Administration will impose on EU exports. “We want to see that unfair tax on one of America’s great industries removed,” Bessent said. Referring to the fact that some EU member states like France have national DSTs while others like Germany do not, Bessent added, “They have some internal matters to decide before they can engage in an external negotiation.”

Context – During the first Trump Administration, France created the first DST to increase taxes on the biggest internet businesses. Other countries followed suit. President Trump responded with tariff threats, the DSTs were delayed, and talks moved to the OECD. The second Trump Administration came out of the gate with two executive orders signaling that he opposed the OECD plan and would again aggressively fight foreign DSTs with retaliatory tariffs. And a further White House directive even more broadly threatens trade retaliation, covering any tax or regulatory actions that are “discriminatory, disproportionate, or designed to transfer significant funds or intellectual property from American companies to the foreign government or the foreign government’s favored domestic entities.” However, in the meantime, President Trump imposed large “reciprocal” tariffs on scores of countries around the world, including the EU, and then paused their application for 90 days to allow for trade talks that could result in agreements to lower trade barriers to US exports. India, Brazil, and Italy are examples of countries signaling a willingness to step back on DSTs, while Poland is moving forward. The states of Minnesota and Washington are pursuing state variations of DST-type levies despite the fact that they likely violate the federal Permanent Internet Tax Freedom Act because the states do not impose equivalent taxes on non-digital versions of the services.

X Files Lawsuit to Block Minnesota’s Law Banning Deepfake Political Content

Report from Reuters

In Brief – X has filed a lawsuit in federal court to block enforcement of a Minnesota state law that bans people from using AI-generated “deepfakes” to influence an election and threatens those who disseminate such content with fines and jail terms. The social media platform argues that the 2024 law violates the First Amendment rights of social media platforms to determine the content circulated on their sites, empowers government officials to censor political speech, is overly vague in its reach, and is precluded by Section 230, the federal law that protects digital platforms from being held liable for content posted by users. A similar motion filed last year by a Republican state lawmaker and a conservative social media influencer was turned down by US District Judge Laura Provinzino, who ruled the plaintiffs lacked standing.

Context – Election interference and non-consensual pornography are the most cited harms from AI-related technology tools that can create so-called deepfakes. Although the biggest AI developers have agreed to identify and label AI-generated images created by their services, “watermarking” is considered of limited value by many experts because it can be circumvented and there are AI tools that don’t use the technology. More than 20 states have legislated in some way to prohibit election deepfakes, but none have been enforced, and California’s 2024 election deepfake law was quickly blocked by a federal judge after its enactment. The US Senate recently passed legislation on non-consensual pornography, including through deepfakes, with strong bipartisan backing.

Florida Sues Snap Alleging Snapchat Has Illegal Addictive Features

Report from The Hill

In Brief – The State of Florida has sued Snap, operator of Snapchat, for violating HB 3, a state law enacted in 2024 to protect teens from “addictive” social media features, including infinite scrolling, push notifications, auto-play videos, and “likes” and other feedback metrics. Florida’s complaint further alleges that the platform is marketed as safe for 13-year-olds although it can be used for harmful activity, including viewing pornography and buying illegal drugs. HB 3 requires social media platforms to remove users under age 14 and obtain parental consent for 14- and 15-year-old users, none of which is done by Snap. The company and digital platform trade groups have argued that the law violates the US Constitution.

Context – Most of the state laws regulating social media on the premise that they are harmful to teens have been blocked by federal judges, at least temporarily, often for violating the First Amendment. Recent examples include laws enacted by Ohio, Utah, Arkansas, and California. Given that these laws operationally depend on age verification, the Supreme Court’s consideration of constitutional challenges to a Texas state law requiring age checks for viewing online pornography will help inform judges. Finally, civil lawsuits targeting social media platforms for harming teens are having much better luck getting past initial court hurdles than the state laws are.

UK Ofcom Issues Rules Requiring Platforms to Protect Younger Users

Report from The Guardian

In Brief – Ofcom, the UK online content moderation regulator under the Online Safety Act (OSA), has announced a set of new rules for online sites likely to be used by young people, including the largest social media platforms, search engines, and gaming sites. Dubbed the “Children’s Code”, the new requirements for the “riskiest” services, such as big social media platforms, include implementing “highly effective” age checks to identify users under age 18, tailoring recommendation algorithms for those younger users to filter out a wide range of harmful material, and instituting effective procedures to report and remove dangerous content quickly. Age verification technology plays a central role in the new regime, with Ofcom’s guidance suggesting that platforms verify ages by checking with banks or mobile network operators or by using photo-ID matching or facial-age-estimation software. While the UK’s digital minister called the rules a “watershed moment”, some online safety campaigners criticized Ofcom as “overly cautious”. Covered platforms will need to meet the new requirements by July 25.

Context – Efforts to regulate online platforms to “protect” teenagers are an increasingly global phenomenon. Online porn is consistently a top target, but social media more broadly is rapidly facing similar demands. France is requiring age verification for adult sites and is working to impose age-based requirements on social media, including requiring parental approval for users under age 15. The US Supreme Court heard oral arguments in January on a Texas law requiring age checks for online porn, and its decision will further inform US courts scrutinizing the flood of US state laws regulating social media to “protect” teens that would operationally depend on age checks. The EU’s Digital Services Act, called a “regulatory cousin” of the UK OSA, directs platforms on how to address a wide range of objectionable content, and Commission regulators are investigating how TikTok and Meta’s top platforms protect younger users. Countries across Asia are actively considering online age limits, and Australia has set a firm minimum age of 16 for social media platforms other than YouTube.

Yelp Antitrust Lawsuit Targeting Google Search Practices Can Proceed

Report from Courthouse News Service

In Brief – A federal judge has rejected key parts of Google’s motion to dismiss Yelp’s antitrust complaint alleging that the digital giant has a monopoly over local search services and related advertising. Yelp, a platform providing local search and reviews that has been a long-time critic of the digital behemoth, argues that Google uses its dominant general search service to unfairly promote its own specialized local reviews service, directing users away from local search platforms like Yelp regardless of the accuracy or helpfulness of their content. US Magistrate Judge Susan van Keulen wrote in her order that Yelp only needed to show a “dangerous probability” of Google having monopoly power in local search for its claims to survive at this point, and that the local search company met that threshold by presenting a range of data on market shares and growth. Yelp’s California state law unfair competition claims will also proceed.

Context – Yelp’s suit aims to build on US District Judge Amit Mehta’s ruling that Google’s general search service is a monopoly built and reinforced by its anticompetitive business deals. That DOJ-led antitrust case is now in its remedies phase. Yelp’s complaint argues that Google uses its monopoly in general search to monopolize other adjacent specialized search markets, which are often called search “verticals”. US regulators have for years rejected pursuing complaints from Yelp and other vertical search businesses. Judge Mehta dismissed similar vertical search preferencing claims in the case Google ended up losing. However, Google’s vertical search competitors have had more legal and regulatory success in the EU. The “Google Shopping” case, the company’s first antitrust loss in Europe, involved vertical competitors. The EU Digital Markets Act prohibits Google from self-preferencing its verticals in its main search engine. After months of investigation and stakeholder talks regarding Google’s plan to treat vertical competitors more fairly in its European search engines, the European Commission recently issued formal preliminary findings that Google’s plan fails to comply with the landmark digital gatekeeper law.
