Archive – 2023

November 2023

Unlike With Uber Drivers, UK Supreme Court Rules That Deliveroo Riders are Independent

Report from Reuters

In Brief – The UK Supreme Court has ruled that Deliveroo riders cannot be unionized because they are self-employed individuals and not company workers. The Independent Workers’ Union of Great Britain (IWGB) had tried since 2016 to unionize a group of Deliveroo riders. The Central Arbitration Committee (CAC) ruled against the IWGB, highlighting that the Deliveroo riders had a broad and unfettered right of substitution, meaning delivery people could arrange for a different individual to carry out their assignment, and that Deliveroo allowed the practice. The panel also noted that the Deliveroo contract was very flexible, allowing delivery people to work or not work as convenient to them, and confirmed that delivery people were free to, and in fact did, work simultaneously for Deliveroo and its delivery competitors. The CAC concluded that the Deliveroo contract with delivery personnel genuinely reflected the reality of an independent contractor relationship rather than an employment relationship, a decision now upheld by the UK Supreme Court.

Context – Gig work is one digital policy issue where the UK has struck a middle ground between the US and EU. This may be because the UK has a three-tier rather than binary worker classification regime. In 2021, the UK Supreme Court ruled that some Uber drivers are “workers” in the middle UK tier and therefore warrant some employee-type benefits. Meanwhile, the US and Europe continue to diverge. European courts continue to support classifying many rideshare and delivery drivers as employees, and both Spain and Portugal have enacted laws regulating Gig driver and food delivery services. The big kahuna is the European Commission’s massive legislative initiative on digital labor platforms that directs a Europe-wide change on worker classification for platforms that control the performance of workers. The Council and Parliament still need to reconcile their versions. On the other hand, reclassifying platform workers as employees has largely been stymied in the US since voters in deep blue California exempted Gig drivers from the state’s worker reclassification law in 2020.

Federal Judge Rejects Meta’s Attempt to Shut Down FTC Reopening Privacy Challenge

Report from Bloomberg

In Brief – Federal Judge Timothy Kelly has rejected Meta’s effort to block the Federal Trade Commission (FTC) from attempting to impose aggressive new sanctions on the company for allegedly failing to abide by a 2019 settlement agreement reached with the regulator. Kelly dismissed Meta’s argument that only his court, and not the FTC itself, had the jurisdictional authority to take additional enforcement actions. In May, the FTC accused Meta of failing to abide by the 2019 settlement, which addressed years of alleged privacy and data security shortcomings in the wake of a 2011 settlement regarding similar concerns. The May action was highlighted by a proposed ban on any of the company’s businesses earning revenue from the data of users under age 18. Meta called the move a “political stunt” that aimed to “usurp the authority of Congress to set industry-wide standards” on data policy related to teen internet users, and appealed to Judge Kelly, who approved the 2019 settlement, arguing that only he had the authority to rule that Meta failed to abide by its terms. Kelly ruled to the contrary, determining that the FTC’s action did not violate the specific terms of the 2019 settlement and that Meta would need to bring a new federal court challenge against the new action. Meta says it will appeal Kelly’s ruling.

Context – Meta is probably right that the FTC Democrats are using this enforcement action to make a statement on policy issues they want to highlight, including behavioral advertising and how social media companies serve teens, that have not been addressed by Congress to their liking. Chair Lina Khan is an unabashed Big Tech critic, and while her tenure has been somewhat unsettled, the agency does not appear to be pulling back. Contentious FTC undertakings on “unfair methods of competition”, employer non-compete clauses, data privacy and “commercial surveillance”, Gig work, rules mandating “all in” pricing, and merger reviews that appear more aligned with European regulators than US court rulings are drawing increasingly aggressive criticism from business groups and Republicans. But none of that seems to be helping Meta.

EU Releases Statement of Objections to Amazon’s iRobot Deal

Report from TechCrunch

In Brief – The European Commission has released formal objections to Amazon’s $1.4 billion deal to acquire iRobot, the maker of Roomba robot vacuums. EU officials announced initial market impact concerns in July, including the prospect that Amazon might undermine rival robot vacuum makers such as Dyson and Shark on its marketplace, as well as use iRobot user data to gain advantage over current or future ecommerce marketplace competitors. They have followed up with more specific concerns about strategies they believe Amazon could pursue to undermine robot vacuum rivals, including reducing their visibility in organic search and advertising on the marketplace, limiting their access to certain beneficial features or labels, or raising robot vacuum advertising rates. The Commission’s current deadline to approve or reject the deal, which now likely needs concessions from Amazon, is February 14, 2024.

Context – Since its decision to approve Microsoft’s massive deal for Activision, which proved the lynchpin in overturning objections from the UK CMA and the US FTC, the Commission has taken a harder line than the UK regulator on a couple of tech acquisitions. The CMA approved Amazon’s iRobot acquisition in June without conditions. A key takeaway was that the CMA understood that Amazon’s most profitable and least risky ecommerce business model involves third-party sellers who purchase Amazon logistics and on-site advertising. In those cases, Amazon commissions often reach 50%, and profit margins far exceed those of first-party retail operations. These margins explain why Amazon has been shifting away from first-party retail for years. This is also why Amazon’s recent antitrust settlement in the EU (and one in the UK as well) tries to limit Amazon practices that push third-party sellers to purchase logistics services. Does the Commission think that settlement won’t work for robot vacuums? Or is it that they think it will work, lowering Amazon’s commissions on third-party vacuum sales and incenting the company to penalize them and preference Amazon’s own vacuums? And how will DMA regulation fit in?

Spanish Competition Authority Rejects Allegation That Top Platforms Promote Fake Reviews

Report from Reuters

In Brief – Spain’s competition authority, the CNMC, has dismissed a complaint filed by a consumer advocacy group alleging that leading online ecommerce and travel sites, including Amazon, Booking, and TripAdvisor, intentionally published fake reviews of sellers on their websites. The regulator said that it did not find evidence that the platforms have participated in or facilitated the publication of fake reviews, noting that the companies instead invested resources in a range of tools and efforts to detect and combat fraudulent reviews, including collaborating with government investigations of the problem. The CNMC did note that the complaint filed by the Organisation of Consumers and Users provided evidence that some online sellers themselves, as well as various intermediaries, do contact users with enticements to leave fake reviews in exchange for money, gifts, or other compensation, and it forwarded the complaint to the Spanish consumer affairs department.

Context – While consumer reviews have been one of the most enduring benefits of the internet, fakes, frauds, and deceits have always been a nagging online problem. There is a fake review industry, with illicit enterprises, often based outside the largest ecommerce markets, soliciting phony customer reviews in exchange for money or other benefits, offering to boost businesses with positive reviews and harm competitors with fake negative reviews. Regulators and leading online companies are engaged. The US FTC released updated online “Endorsement Guides” in June and is pursuing regulations on unfair and deceptive practices in digital marketing, including paying for positive (or negative) reviews, suppressing negative reviews, or buying or selling online followers. The UK CMA updated its guidance for online influencers last December, and the draft UK Digital Markets, Competition and Consumers Bill includes a section aimed at policing fake online reviews. Finally, a collection of the leading platforms featuring consumer reviews have formed the Coalition for Trusted Reviews to help advance both corporate and government efforts to combat the problem.

IRS Again Delays Dramatically Expanded Tax Reporting for Online Payments

Report from the New York Times

In Brief – For the second year in a row, the Internal Revenue Service will delay implementing a major expansion of the tax reporting required for individuals and small businesses who receive payments through platforms such as Venmo, PayPal, CashApp, eBay, and Etsy. Since 2011, digital payments platforms have been required to file 1099-K reports with the IRS, and send copies to users, at the end of the year for every user that received more than 200 total payments adding up to at least $20,000. The intent of the reporting was to push people operating online businesses to pay income taxes, while not treating a wide range of non-commercial online transactions as business activity. In early 2021, 1099-K reporting was dramatically expanded as part of the $1.9 trillion stimulus legislation, lowering the threshold for the business tax reports to cover every user receiving at least $600 in payments in a year. Backers claimed it would increase federal tax payments by $8 billion over a decade. However, the dramatically lower threshold would result in tens of millions more people getting tax reporting notices when they don’t run businesses, with many complaining that non-business activities would mistakenly be perceived as taxable income, raising tax complexity with little return. In announcing its latest decision not to implement the $600 threshold for this year’s tax filings, the IRS said that it planned to set the 2024 tax year threshold at $5,000.
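
To make the scale of the change concrete, here is a minimal sketch, in Python, of the reporting-threshold logic described above. The function name and structure are illustrative rather than drawn from IRS guidance, and the assumption that the $600 threshold applies after the 2024 tax year simply reflects the schedule described above.

```python
def triggers_1099k(payment_count: int, gross_payments: float, tax_year: int) -> bool:
    """Illustrative sketch: would a platform user's activity trigger a
    1099-K report under the thresholds described above? (Not IRS guidance.)"""
    if tax_year <= 2023:
        # Original 2011 rule, still in effect after the IRS delays:
        # more than 200 payments totaling at least $20,000.
        return payment_count > 200 and gross_payments >= 20_000
    elif tax_year == 2024:
        # The IRS's planned interim threshold for the 2024 tax year.
        return gross_payments >= 5_000
    else:
        # The stimulus law's $600 threshold, with no payment-count test
        # (assumed here to apply in later years).
        return gross_payments >= 600

# A casual seller with 50 payments totaling $1,200 is not reportable under
# the original rule, but would be under the $600 threshold.
assert not triggers_1099k(50, 1_200, 2023)
assert triggers_1099k(50, 1_200, 2025)
```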

Context – From the earliest years of the commercial internet, many tax officials were certain that people were earning money online and not paying all the applicable taxes. However, given that the taxpayers targeted by 1099-K filings are the farthest thing from billionaires or giant corporations not paying their full taxes, the issue is likely to remain politically salient. The worry that people earn money on digital platforms without paying all their taxes is not just a US phenomenon. The OECD has prepared a series of reports since 2019 aimed at national government tax ministries encouraging the adoption of standardized tax reporting obligations and formats for digital platforms that empower independent, freelance, and “Gig” work.

Streaming Giants Accused of Self-Censoring in the Face of Indian Pressure

Report from the Washington Post

In Brief – US-based streaming giants such as Netflix and Amazon’s Prime Video are increasingly seen by anti-establishment Indian movie and video creators as engaging in self-censorship in India to avoid butting heads with government regulators and political influencers. The corporate decisions are an outgrowth of a regulatory regime established in 2021 that directed video streaming companies to resolve viewer complaints within 15 days or else face regulation by a committee of government ministries. Officials claim the goal was to curb profanity and sexual content, not to squash criticism of the government or ban content drawing attention to India’s social and religious rifts. However, the process is now being used by Hindu-aligned political and social conservatives who influence the government of Prime Minister Narendra Modi to pressure the companies to pull down and de-fund content that is alleged to be “anti-Hindu” or critical of Indian nationalism. The streaming companies dismiss the allegations.

Context – Modi’s India, Erdogan’s Turkey, Putin’s Russia, and, of course, Xi’s China are seen as various shades of authoritarian efforts to implement online censorship. China’s Great Firewall is the crème de la crème for scope, control, and staying power. Part of the compliance equation for digital platforms is clearly the size and value of the market, and on that score, India is unquestionably the top rung because many of the US-based digital platforms are not operating in China. The Modi Government’s willingness to ban many Chinese-based apps in 2020, including TikTok, for so-called national security risks was likely a “He Means Business” moment that impacted US digital giants. India also passed a robust social media content moderation law in 2021 that is criticized for similar online censorship concerns. X is challenging it in Indian court. Of course, the EU’s Digital Services Act deserves the same censorship scrutiny, and its market size drives company compliance. EU officials have charged X, Meta, and TikTok with not blocking pro-terrorist and antisemitic content and misinformation following the Hamas attack on Israel.

TikTok Is the One Gatekeeper Arguing It Should Be Exempt From the DMA

Report from Reuters

In Brief – TikTok is challenging its designation as a “core platform service” under the EU’s Digital Markets Act, which regulates the major platforms of the largest digital “gatekeepers”. Enacted in 2022, the law establishes a set of 18 behavioral “Do’s and Don’ts” for the largest digital platforms that are modeled on competition law principles but enforced proactively by regulators rather than through after-the-fact enforcement actions. In September, the European Commission announced that six companies were gatekeepers with a combined total of 22 core platform services. The qualifying criteria for gatekeepers and core services include both quantitative and qualitative factors. ByteDance, TikTok’s Chinese-based parent company, was designated as a gatekeeper, with TikTok as its one covered core platform service. TikTok argued to the Commission before the initial determination that it should not be considered a covered service, despite a very large number of European users, because it was an emerging competitor to dominant social media and video platforms like Facebook, Instagram, and YouTube, and as such was enhancing competition. In appealing the Commission’s initial decision, TikTok is again making that argument, as well as claiming that ByteDance does not qualify as a gatekeeper company because its European revenues do not meet the law’s quantitative gatekeeper thresholds.

Context – Only TikTok and ByteDance are attempting to be removed completely from the DMA regime. Early estimates of gatekeeper companies projected a dozen or more, including EU-based Booking, but in the end, it was five US-based giants (Amazon, Apple, Google, Meta, and Microsoft) and China’s ByteDance, maybe there to prove it was not an anti-US endeavor. Google and Microsoft are not challenging any of the designations. Meta is arguing that its Marketplace and Messenger services should be regulated as part of the Facebook platform rather than separately. Apple is attempting to modify how the law applies to its App Store and arguing that iMessage should not be added as a core platform service, an addition Google is lobbying for.

DoJ v Google Search Antitrust Trial to Reconvene for Closing Arguments in May

Report from the Wall Street Journal

In Brief – US District Judge Amit Mehta has announced that he will reconvene the antitrust trial pitting the US Department of Justice (DoJ) against Google next May for closing arguments. Google has antitrust trials stacking up. In this one, the DoJ is accusing the digital giant, which processes upwards of 90% of search queries on the internet outside China, of using illegal anticompetitive tactics and payments to maintain and grow its search monopoly, with its multi-billion-dollar business arrangement with Apple to be the default iPhone search engine as the main example. Google counters that the agreements are legal, akin to standard retail shelf slotting deals, and are neither exclusive nor mandatory. In his wrap-up, Mehta said, “I can tell you as I sit here today that I have no idea what I’m going to do.” If the judge rules against Google, he will oversee a separate proceeding to impose remedies on the company.

Context – By the time this antitrust trial reconvenes next May, Google’s lawyers may have battled out two more antitrust cases in US court. They are in the midst of their second trial, a 1-on-1 face-off with Epic Games over allegations that anticompetitive practices in its Android ecosystem harm consumers and app developers through overly high fees for in-app purchases. That case parallels Epic’s battle with Apple, which Apple largely won, although both parties have appealed to the Supreme Court. Epic v Google is a jury trial, a twist in the recent Big Tech antitrust cases. While Google’s Android is clearly more open than Apple’s walled garden, its brand rules and business deals are very complicated. Who knows how a jury might react. Then come two big antitrust challenges to Google’s sprawling and complex adtech business. Another DoJ-led complaint, filed in US District Court in Virginia, is likely up next, with a scheduling hearing in January 2024 and a trial, also in front of a jury, as early as March 2024. Finally, an adtech-focused complaint from State Attorneys General, led by Texas, has been delayed for years by a venue fight, but the AGs have succeeded in moving it to federal court in Texas, where the pace should pick up.

YouTube Announces New Generative AI Video Disclosure Rules

Report from Bloomberg

In Brief – YouTube has announced new rules requiring content creators to label videos that include “synthetic” content that looks realistic. While the new policy does not apply specifically or only to video created with artificial intelligence tools, the rules are clearly intended to address widely held concerns that rapidly advancing Generative AI (GAI) tools threaten to dramatically increase the prevalence of manufactured but realistic-looking video content. In the blog post announcing the policy, YouTube said, “This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.” Creators who repeatedly violate the synthetic content policy may be subject to content removal, suspension from the program that allows them to earn ad revenue, or other penalties. Besides the new labeling policy, the company announced that it is planning to implement a content removal request process allowing artists, musicians, and others to ask that content, even properly labeled content, be removed for copying or mimicking an individual’s face, voice, musical style, or other characteristics.

Context – AI-generated content that realistically depicts events that never happened or shows people saying or doing something they didn’t do, often referred to as “deep fakes”, has long been considered an AI-enhanced threat by cybersecurity experts. Policies and standards around the labeling of AI-created or manipulated content are therefore a regular part of so-called safe or responsible AI. It is also a major concern of many governments in the context of political advertising. Meta recently announced new policies. When GAI services automatically label their output or otherwise allow reviewers to know that a product was created by AI, it is often called “watermarking”. While experts claim that watermarking tools are not foolproof, encouraging developers of large GAI services to institute them is a priority of many safe AI efforts, including the G7’s AI Code of Conduct. In that light, products created by Google-developed GAI video services will be auto-labeled when posted to YouTube.

A New Meta Whistleblower Speaks to the Senate Judiciary Committee

Report from the Washington Post

In Brief – Two years after Frances Haugen testified in front of the US Congress, a new Meta whistleblower has testified in front of the Senate Judiciary Committee, charging that the company ineffectively measures negative experiences of teens on Instagram and therefore fails to combat them. Arturo Bejar, a former Facebook security engineer and Instagram consultant, pushed the company to change how it measured and responded to a wide range of offensive and objectionable communications directed at teens. The company focused its content moderation efforts on using digital tools to identify and block content that clearly violated high-priority platform rules, but he argued this methodology missed a very large number of more ambiguous negative comments and messages, including sexual innuendos, propositions, bullying, and criticism. He told executives that data showed large numbers of teens regularly received or saw comments they found offensive or did not like, but the company did not change its overarching moderation policies. Committee members were uniformly critical of Meta and Big Tech companies, with many calling for legislation to regulate social media platforms and reform Sec. 230.

Context – Congressional legislation to regulate social media use by teens has been proposed but not enacted. States have had more success passing age-based internet laws, but they are running into First Amendment problems. The most impactful actions in the US are likely to be lawsuits from a coalition of 42 State Attorneys General and from private plaintiffs, including parents and school districts, attempting to circumvent Sec. 230 with complaints against Meta and other social media sites arguing they were negligent in developing and operating their platforms. Mr. Bejar is serving as a consultant to the AGs in that litigation. Although the committee did not endeavor to have a serious discussion of the nuances, merits, and challenges of different emphases in content moderation, including policing comments that are upsetting to some but squarely legal, the big social media design negligence suits are proceeding to trial at the federal and state level. Expect a battle over whether the platforms are really prone to causing mental health harms.

EU AI Act Negotiations Face Standoff Over Regulating Foundation Models

Report from Euractiv

In Brief – Efforts to resolve differences between the Council and Parliament versions of the EU AI Act before a December 6 trilogue meeting have hit a major snag over the controversial issue of regulating AI “Foundation Models”. The AI Act originally proposed by the European Commission in 2021, and the one adopted by the Council in 2022, had a tiered, risk-based regulatory system that imposed mandates on enterprises using AI-enabled applications. The level of regulation was linked to the risks associated with the type of customer-facing service. The bills did not attempt to regulate the underlying AI technologies. However, following the boom created by ChatGPT, the Parliament changed the AI Act by proposing to regulate general foundation models and their creators as well. That change has been an ongoing sticking point. The Spanish Government, current President of the Council, has attempted to resolve the deadlock with a tiered system that imposes more mandates on the largest foundation model companies, which are today mostly US-based, with lesser burdens on smaller firms. However, in a recent meeting of the Telecom Working Party, the French and German governments, reportedly driven by concerns of domestic AI startups such as Mistral and Aleph Alpha, objected to regulating foundation models and argued for the original application-based risk model. The EU Parliament’s leaders say dealing with foundation models is a must-have, and failure to reach agreement might send the whole enterprise back to the drawing board.

Context – Advocates of AI regulation have considered the EU’s AI Act a near certainty that would inspire legislation globally. However, some European leaders who are not shy about regulating tech, including French President Emmanuel Macron and European Commission Vice President Věra Jourová, have spoken words of caution about over-regulation harming AI innovation. Ireland’s trade minister recently added his voice. Unlike the EU’s proposal for sweeping AI regulation, talks at the G7 show more support for generalized principles of safe and responsible AI use and corporate commitments on practices like AI watermarking.

Nepal Bans TikTok for Disrupting Social Harmony and Family Structures

Report from AP

In Brief – Nepal’s government has announced that it is banning the highly popular Chinese-owned social media app TikTok. Foreign Minister Narayan Prakash Saud made the initial announcement, saying “the government has decided to ban TikTok as it was necessary to regulate the use of the social media platform that was disrupting social harmony, goodwill and flow of indecent materials.” Communication Minister Rekha Sharma attributed the ban to TikTok “disrupting our social harmony, and the impact it’s having on our family and social structures.” Neither official provided further specifics, and the company did not make a public statement. The government has asked social media companies that are not based in the country to register and open a liaison office in Nepal, pay taxes, and abide by the country’s laws and regulations.

Context – Chinese-owned TikTok’s explosive popularity is leading to a diverse and growing collection of regulatory challenges globally. The company has faced sanctions in many countries for past data practices related to young users, as well as data security and censorship accusations related to its links to China, most notably the ongoing CFIUS standoff in the US. Similar accusations have kept the app shut down in India since 2020. In Europe, TikTok (and the other large social media platforms) have been pushed to better police so-called disinformation and hate speech related to the ongoing Israel-Hamas conflict. In Africa, Senegal and Somalia, each wracked by domestic strife, banned the platform amid government claims that it was used to foment discord. The company was also recently forced to shutter its growing ecommerce shopping service in Indonesia amidst charges that it was harming traditional small business shops, as well as change its live-streaming service in Kenya due to the proliferation of adult content, a concern that seems akin to the comments from the Nepalese officials. Finally, in TikTok’s biggest market, the United States, all eyes are on the federal court in Montana that is hearing the company’s constitutional challenge to the state’s first-in-the-nation law banning TikTok downloads.

Non-Traditional Canadian News Media Fear Disruption from Forced Payments Law

Report from Politico

In Brief – Non-traditional media outlets in Canada, including freelancers and small media companies focused on online distribution, have been sharing their concerns with the Canadian Government that they will be seriously harmed by the Online News Act, which forces Google and Meta to pay Canadian media companies when their content appears on their platforms. The law was passed by the Canadian Parliament in June, and regulations were released by the Department of Canadian Heritage in September. Both companies have consistently objected to paying media companies for content they don’t actively place on their own platforms. With Meta, that’s news posted by users, especially by media companies themselves. With Google, it’s links surfaced in basic search results. Both have said they would block covered news content rather than make mandatory payments. Meta instituted that policy in August, claiming it wanted its Facebook and Instagram algorithms to be accurate when the law goes into effect in mid-December. Google has reportedly been engaged in negotiations with the government but threatens to block Canadian news in search. The smaller and newer Canadian enterprises have reported to the government that the platforms blocking news hurts their businesses, with some asking for direct government payments in compensation.

Context – Canada is now ground zero for forced media company payments. The most interesting developments involve Meta. They are being straightforward in all jurisdictions that they will block news rather than pay a government-set rate. They say big media companies overvalue their content and undervalue Meta’s free distribution platforms. Some claim it’s a big power play. But reports indicate that blocking news content in Canada is not hurting Meta’s businesses. Seriously, consider the headaches “news” brings them. Meta might be learning that news is a net loser even when “free”. Google will have to decide soon. Media company champions in the US Congress and Sacramento must be watching. I wonder if they find it as entertaining as I do.

Google and Telecoms Want iMessage to be a Covered Platform for EU’s DMA

Report from the Financial Times

In Brief – Google and some of Europe’s largest telecoms operators are encouraging the European Commission to designate Apple’s iMessage messaging service as a “core” service under the Digital Markets Act (DMA). When the Commission designated six digital giants as “Gatekeepers”, including Apple, it identified 22 of their platforms as core services that are required to comply with the DMA’s regulatory requirements. For messaging services, a key obligation is to connect seamlessly with competitors’ messaging services. Apple’s iMessage has proudly never done so, with the company’s “blue bubble” texts and many features only operative when all participants in a message thread are using Apple devices. This is widely seen as creating a type of peer-pressure-enhanced brand loyalty for iPhone. The only messaging services included in the initial 22 platforms were Meta’s Facebook Messenger and WhatsApp, but the Commission said that it was further studying Apple’s iMessage. Google, Vodafone, Deutsche Telekom, Telefónica, and Orange argue that iMessage meets the qualitative thresholds of the DMA and that applying the rules would “benefit European consumers and businesses” by allowing all users, especially businesses, to reach Apple customers with so-called enriched messages.

Context – Given widespread concerns with teen mental health, the impact of digital services, and cyberbullying, it is remarkable that Apple does not face more pressure from public officials to make changes that would stop widely reported bullying over its iMessage service, including commonplace ostracization of teen Android users by teen iPhone users. When a reporter asked Apple’s CEO about not being able to send his Android-owning Mom a video, his answer was to buy her an iPhone. Is that Apple’s answer when iPhone-owning teens shut Android-owning teens out of groups and chats to avoid “turning the chat green”? Shockingly, adults do it too. None of this is new behavior and Apple sees it as good business. And what’s wrong with a little bullying between friends, right?

DSA Regulation – Don’t Worry, It’s Not Just About Controversial Online Speech

Report from Reuters

In Brief – The European Commission has ordered Alibaba’s AliExpress to provide it with information on how it stops the sale of illegal products such as counterfeit goods and fake medicines. The platform was given until Nov. 27 to respond to an inquiry based on the new Digital Services Act (DSA), which regulates how digital platforms counter illegal and harmful online content. The largest digital platforms, those with at least 45 million users in the EU, are designated as Very Large Online Platforms (VLOPs) by the DSA and face the strictest mandates. Their DSA regulator is the European Commission rather than a member state authority. AliExpress is one of four ecommerce marketplaces, along with Amazon, Google Shopping, and Zalando, among the initial 19 DSA VLOPs designated by the European Commission.

Context – European Commissioner Thierry Breton, who leads the bureau enforcing the DSA for the VLOPs, has been very public in announcing DSA regulatory actions in the first months of the law being fully online. Within days of the brutal Hamas terror attack and Israeli military response, major social media platforms were inundated with objectionable content and calls to block, screen, or otherwise restrict material for reasons including being overly violent, hateful, false, deceptive, or fabricated. Breton publicly challenged X (formerly Twitter), Meta, and TikTok to meet the DSA requirements to deal with such content quickly and transparently to protect the EU’s citizenry and preserve freedom of expression. He also issued a formal inquiry to X. Despite repeated claims to the contrary throughout the DSA legislative process, government regulation of how platforms deal with highly political content is, of course, going to operate as an online censorship regime in practice. Announcing the AliExpress DSA inquiry, Breton made clear that the law does not just regulate how platforms deal with “hate speech, disinformation & cyberbullying” but also ecommerce. The main takeaway is that the DSA, and soon the DMA, will have European regulators dealing with digital platforms in a manner more akin to how they deal with banks or telecommunications companies.

Federal Judge Says Social Media Giants Must Face Liability Suit for Teen Harms

Report from the Courthouse News Service

In Brief – Federal Judge Yvonne Gonzalez Rogers, who decided the massive federal antitrust trial pitting Epic Games against Apple in 2021, has ruled that Meta, Google, TikTok, and Snap must face off in court against plaintiffs alleging that the companies knowingly designed their platforms to be addictive to young users and impossible for parents to moderate. In her 52-page order, she dismissed some of the complaints but is allowing claims of design defect liability, including failure to warn and negligence, to proceed. Rogers said that the companies know that young users are on their platforms, which are in turn demonstrably harmful to young people using them. The companies argued that the suits should be dismissed based on Section 230 of the Communications Decency Act, which protects them from liability for third-party content, and the First Amendment, which protects their own rights to free expression. But the judge rejected the argument that all the companies’ conduct was speech and that all the potential harms were caused only by third-party content. However, the judge did reject the claim that the platforms are defective because they provide compelling short-form content that plaintiffs find problematic. “To hold otherwise would essentially be to hold that any website that generates revenue by maintaining the interest of users and publishes content with the intent of meeting this goal, would no longer be entitled to Section 230 immunity,” said Gonzalez Rogers.

Context – Critics of social media platforms have been trying for years to construct legal strategies to circumvent Sec. 230. Two of the most common avenues are arguing that the harms are caused by the conscious design of the platforms to encourage longer and more intensive use, which the companies knew, or should have known, would lead to harms, or that the use of algorithms by the platforms is an activity that is not protected by the federal statute. This is their biggest win yet and promises a long and costly court fight, or massive company payouts. This federal court ruling follows shortly on the heels of a similar ruling in California state court, as well as dozens of State Attorneys General suing Meta based on similar accusations.

A Second Federal Judge Wants to Narrow the AI Copyright Debate to Training

Report from Reuters

In Brief – A second federal judge appears intent on focusing a federal lawsuit brought by copyright holders against AI developers onto the core question of whether the underlying “training” of Generative AI (GAI) systems with copyrighted material violates copyright law. The case involves three high-profile authors, including comedian Sarah Silverman, who sued Meta Platforms (and, separately, OpenAI), alleging a wide range of copyright law violations from copying their works from the internet without consent and using them to train AI systems. Meta argued that all the claims, except the direct infringement claim based on training Llama with copyrighted works, should be dismissed because copyright law does not protect facts or the types of linguistic information that might have been extracted from the books by the large language model. In a pre-trial hearing, US District Court Judge Vince Chhabria said he found most of the authors’ claims and arguments to be exasperating and very hard to follow and highlighted that Meta had not attempted to dismiss the complaint over GAI training. On the other claims, the judge directed the plaintiffs to address whether Meta’s AI system had created copies of the authors’ works.

Context – Everyone now knows that huge volumes of copyrighted material have been used to train neural networks. But they are not like traditional databases. They learn from data and then produce new output. There have been many lawsuits alleging copyright violations, but input from judges is only now coming in. Judge William Orrick, overseeing a similar AI training suit from visual artists against Stability AI, recently dismissed all the artists’ claims except the core allegation that training with copyrighted material itself violates copyright law, a ruling that largely parallels Chhabria’s thinking. Judge Orrick also noted that producing actual copies is usually a very important fact in copyright infringement cases and questioned moving forward with most of the complaints absent copied output. On the main training question that both judges appear inclined to allow to proceed, it seems to be lining up for the expected showdown over AI training as “fair use”.

Ofcom Releases First Guidance for Implementing the UK Online Safety Act

Report from TechCrunch

In Brief – Ofcom, the UK online content moderation regulator under the new Online Safety Act (OSA), has announced the release of its first set of draft Codes of Practice as part of the regulatory consultation process. The initial focus is on how user-to-user services will be expected to respond to illegal content designated in the law, with the agency highlighting protecting children as its top priority. Social media platforms are a clear focus, with suggested strategies including not presenting kids with lists of suggested friends, not recommending child users be added to others’ connection lists, and not making young users’ connection lists visible to others. However, a collection of 15 different kinds of illegal harms is set out in Ofcom’s draft risk assessment guidance, including child sexual exploitation and abuse, encouraging self-harm, hate, harassment, promoting illegal drug use, firearms offenses, human trafficking, and fraud. The first guidance is over 1,500 pages long (helpful summary table here) and the rulemaking process will run approximately 18 months.

Context – The EU enacted the Digital Services Act (DSA), its regime to regulate how platforms deal with illegal and harmful online content, in mid-2022. The largest platforms, those with at least 45 million users in the EU, are designated as Very Large Online Platforms (VLOPs). They face the most extensive mandates and are regulated directly by the European Commission rather than a member state authority. Unlike the UK’s detailed draft guidance and formal rulemaking process, the VLOPs were notified of their status in April and told to comply with the requirements of the law within four months and report their first risk assessments to the Commission. In recent weeks, Commission regulators asked TikTok, Meta, X (formerly Twitter), YouTube, and AliExpress for various compliance reports. Smaller platforms will deal with a member state digital regulator starting next spring. Ofcom’s head of online safety says that the OSA and DSA are more “regulatory cousins” than “identical twins,” but that the regulators are trying to work together to make it easier to comply with the proliferating regimes.

Meta Announces Paid, Ad-Free Options in Europe to Meet Legal Obligations

Report from the New York Times

In Brief – Meta has formally announced that it will begin offering paid, ad-free versions of Facebook and Instagram in Europe. The company says that the service is made to comply with “evolving European regulations”. The subscriptions are the first time the company has offered users core services that are not free and ad-supported. They will be available in the European Economic Area (EEA), which comprises the EU along with Iceland, Liechtenstein, and Norway, as well as in Switzerland. Users will have the option to purchase ad-free versions of Facebook and Instagram for 9.99 euros a month, although prices on apps downloaded from the Apple and Google app stores will be 30% higher to account for those companies’ commissions. In the statement announcing the plan, Meta reaffirmed its support for free, ad-supported online business models, arguing that they are an excellent value for consumers and support an advertising ecosystem that greatly benefits small enterprises with small ad budgets that need to be able to find willing consumers across the otherwise massive internet.

Context – Although the free, ad-supported online business model is not unique to Meta’s services, Facebook (and more recently Instagram) have been at the forefront of years of legal and regulatory battles over how the EU’s General Data Protection Regulation (GDPR) applies to behavioral advertising. Meta’s lead GDPR regulator, the Irish Data Protection Commission (DPC), recently ruled that the company needed to get explicit user consent to use data for advertising. The Norwegian data authority recently won a ruling from the European Data Protection Board ordering the DPC to ban Meta’s targeted advertising across the EEA this month absent a valid consent regime. Meta claims that the European Court of Justice has indicated that paid subscriptions are a valid ad-free alternative. Spotify and YouTube are two examples of ad-free paid offerings. Nevertheless, the privacy champion who has led the anti-Meta campaign for years objects and will again go to court, arguing that privacy should not be contingent on financial resources. Good stuff should be free.

Big Tech Antitrust Reform on the UK Government Agenda for 2024

Report from Reuters

In Brief – The King’s Speech to the UK Parliament, which lays out the government’s policy priorities for the coming year in a manner akin to a State of the Union Address in the United States, has reiterated the Sunak Government’s intention to enact the Digital Markets, Competition and Consumers Bill (DMCC) and fully establish the Digital Markets Unit (DMU) within the UK Competition and Markets Authority. The draft legislation released earlier this year gives the regulator authority to designate very large digital businesses as having “strategic market status” (SMS) and to impose a code of conduct backed up by fines of up to 10% of global turnover. Merger guidelines are toughened for the digital giants, including lower notification thresholds and more stringent tests to assess the likelihood of deals causing future harm. Finally, the bill adds to consumer protection rules, including on data use, terms and conditions, fake online reviews, and the cancellation of subscription contracts.

Context – This year’s King’s Speech promises more of the UK’s schizophrenic effort to pair aggressive regulatory intervention with aspirations to be a pro-innovation alternative to the EU. The DMCC Bill largely parallels the EU Digital Markets Act, enacted in 2022 and being implemented now. The UK’s Online Safety Bill (OSB), passed in September after years of discussion, regulates how platforms address objectionable content including child sexual abuse, promoting self-harm, animal cruelty, and terrorism, a UK take on the EU’s Digital Services Act. Widespread tech industry concern that the OSB would allow UK regulators to outlaw end-to-end encryption led secure messaging app providers including WhatsApp and Signal to threaten to leave the market. The standoff was defused by a government minister stating that no change on encryption was forthcoming. But the King’s Speech now proposes an Investigatory Powers Amendments Bill that includes new authority requiring companies rolling out new or improved encryption technology to notify the government in some manner, a proposal that already has Signal’s CEO reiterating concerns about the company’s ability to operate in the country.

UK CMA Accepts Concessions from Amazon and Meta to Settle Marketplace Cases

Report from AP

In Brief – The UK Competition and Markets Authority (CMA) has agreed to settle separate antitrust investigations into the marketplace practices of Amazon and Meta. The CMA’s focus on Amazon involves allegations that it preferences the products of merchants who pay the company for logistics services like storage, packaging, and delivery. Amazon is agreeing to use non-discriminatory processes to determine which products are presented as the top offer to shoppers. Amazon also commits to ending its use of non-public information gathered from merchants to benefit Amazon’s retail offerings and to improving the ability of third-party logistics providers to serve merchants in the Seller Fulfilled Prime program. Given the technical nature of ongoing compliance with the commitments, Amazon has agreed to establish and cooperate with an independent Monitoring Trustee. The Meta settlement addresses the regulator’s concerns that the company used data collected from advertisers on Facebook to unfairly compete with those advertisers on the Facebook Marketplace. Meta has agreed that competitors of Facebook Marketplace can “opt out” of their ads data being used to improve Facebook Marketplace.

Context – We’ll focus on Amazon. It’s more timely. They have now settled similar antitrust cases with the European Commission and the CMA. Each addresses long-running concerns that Amazon unfairly uses data from merchants to compete with them as a retailer, but the real news is that both target Amazon’s most profitable ecommerce business, which is sales by merchants who buy Amazon’s logistics services. They often pay up to 50% in fees, and Amazon incents this by driving buyers to them. Amazon is agreeing to stop unfairly preferencing those sellers. Their EU settlement includes a plan to surface a second Buy Box offer with different shipping. The CMA settlement is less specific and hinges on the independent Trustee. The same Amazon practices are central to the FTC’s recent antitrust lawsuit. At some point we may learn if Amazon made similar settlement offers or will try to defend practices they say they’ve abandoned in Europe.

GOP Big Tech and Trump Critic Ken Buck Leaving the US House at Term’s End

Report from the Washington Post

In Brief – US Rep. Ken Buck (R-CO), an aggressive conservative critic of tech giants who formed a bipartisan partnership with Rep. David Cicilline (D-RI), then Chair of the House Antitrust Subcommittee, to push for regulation of the industry, has announced that he will not run for reelection in 2024. Buck’s pending departure from Congress follows Cicilline’s unexpected resignation last June. The two will be remembered for working together on a landmark set of antitrust bills to combat allegedly anticompetitive practices by tech giants. The policy ideas emerged from the progressive antitrust reform movement, and some believed they had gained enough conservative backing to prevail. However, the House Republican Leadership objected to most of the bills, and their disagreements with Buck became increasingly fractious. Buck upped the ante by encouraging his colleagues to follow his lead and publicly renounce political fundraising from Big Tech, criticizing Republican leaders for accepting their support. The measures fell short amidst heavy lobbying through the end of the 117th Congress. When the Republicans narrowly prevailed in the 2022 election and took over the House majority, Buck was passed over as Chairman of the Antitrust Subcommittee, with the role going to a strong opponent of the Cicilline-Buck bills. Big Tech antitrust was clearly not on the agenda while Republicans were in charge. Buck also emerged as an increasingly vocal dissenting voice among House Republicans on issues such as the legitimacy of the 2020 Presidential Election.

Context – When David Cicilline announced that he was leaving his safe Rhode Island House seat mid-session to take over as the president of a major foundation in his home state, and Buck was clearly on the outside looking in on Republican tech policy, we said here that we expected both to eventually leave the House to run for higher office from their respective home states. We still do. However, as Buck has become an increasingly strident critic of former President Donald Trump, he might choose to wait for a while. He’s only 64.

Supreme Court Hears First in Series of Social Media 1st Amendment Cases

Report from the Washington Post

In Brief – The Supreme Court heard arguments in a pair of cases involving the constitutional limits on government officials blocking people from their social media accounts. A challenge facing the justices is defining when government employees are acting in an official capacity online, and are therefore bound by First Amendment restrictions on censorship, and when they are acting as private citizens online, with their own free speech rights. The two cases involve local officials, one a city manager from Michigan who faced critics blasting local coronavirus policies, the second school board members from California who faced parental critics. However, the arguments occurred against the backdrop of a similar case involving President Donald Trump blocking people on his Twitter account, in which a federal appeals court determined in 2019 that users could not be blocked. The justices debated questions such as whether the amount of official business compared to personal matters was important from a constitutional perspective, as well as whether social media accounts are appropriately considered an official government communications platform, or a private venue used by officials, which is often accessed by invitation even if government business is discussed. The Biden Administration backed the government officials’ right to block users from their private accounts, arguing they were not acting in an official capacity. The court is expected to rule early next year.

Context – This is the first of three issues involving the First Amendment and social media in front of the Supreme Court. Next up is whether states may regulate social media content moderation, driven by laws in Florida and Texas that backers say are designed to stop censorship of conservative viewpoints. After that, the court will deal with the question of how to determine when communications by government officials to encourage social media platforms to restrict some online posts or speakers amount to state censorship. Altogether, the big question is whether a majority of justices will rule that the largest social media platforms are not themselves traditional private communications platforms and require novel First Amendment rules.

Testy Exchanges Over AI Extinction Talk as a Big Tech Competition Strategy

Report from Insider

In Brief – A growing number of top AI technologists are accusing companies such as OpenAI and Google of stoking talk of AI “existential risks” to foster government regulation that would make it difficult for small AI innovators to compete with the current industry giants. “There are definitely large tech companies that would rather not have to try to compete with open source, so they’re creating fear of AI leading to human extinction,” said Andrew Ng, a cofounder of Google Brain who works with startups. “It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community.” Meta’s chief AI scientist Yann LeCun, who recently said that machines are not even as smart as a dog, believes that some large AI tech companies have a “superiority complex” that leads them to believe that only they can be trusted to develop AI safely, while others want to engineer “regulatory capture under the guise of AI safety.” Former associates and colleagues are on both sides of the debate: OpenAI co-founder Sam Altman, who was Ng’s student, often calls for regulation, as does ex-Googler Geoffrey Hinton, who jointly won a Turing Award with LeCun for groundbreaking work on neural networks.

Context – Think of the AI safety and regulation debate from two perspectives. First, the company competition. OpenAI, Microsoft, and Google have the largest Generative AI services, with models that are closely held. And they are tech leaders saying AI is risky, asking for regulation, and expecting to craft plans at the adults’ table with regulators. Meta is the one relative giant that has championed an open-source AI vision. They push back on talk of doom, and not just LeCun. Then there are governments. Rhetorically, leaders rarely downplay talk of doom. Who wants such comments dredged up if something bad happens? But on the concrete action front, just China, protecting its Great Firewall regime, and the EU, finalizing an AI Act, are creating laws. The US, Japan, and UK talk about AI safety, but they are not legislating. Instead, versions of voluntary AI safety regimes are proliferating, as are forums to study risks. Many leaders clearly fear pushing innovators elsewhere.

Meta Faces European-Wide Targeted Advertising Ban from Privacy Regulators

Report from Reuters

In Brief – Meta faces a ban on targeted ads on its largest platforms in Europe, driven by the data privacy regulators of the European Economic Area (EEA), who allege that the digital giant has not conformed its data practices to the General Data Protection Regulation (GDPR). The striking order comes from the European Data Protection Board (EDPB), a panel made up of the privacy regulators from the EEA member states and the European Commission, which agreed to a petition from Norway’s data protection authority. The Norwegian regulator had imposed a temporary ban on the company doing targeted ads in Norway and fined Meta for not complying. The regulator then appealed to the EDPB and requested that it use its emergency authority to extend the ban across the EEA, make it permanent, and threaten a fine that could reach 4% of global revenue. The EDPB has ordered the Irish Data Protection Commission (DPC), which is technically Meta’s lead regulator for the GDPR, to impose the ban within two weeks. Meta responded with disappointment, said the regulators had ignored recent moves it has made to give European users more control over their data and advertising, and is likely to challenge the decision.

Context – Meta is stuck in a quagmire involving privacy advocates frustrated with the GDPR and the lack of huge penalties being imposed on tech giants, the law’s One-Stop-Shop regime that empowered allegedly lax regulators from Ireland, and the slow pace of change in online practices. And many of them fundamentally oppose targeted advertising. The Irish DPC, responding to years of pressure from other national regulators, has recently imposed large fines on Instagram and TikTok. In addition, the GDPR is no longer state of the art in EU tech regulation, having been superseded by the Digital Services Act and the Digital Markets Act. Both measures abandoned the One-Stop-Shop for very large platforms, putting the European Commission in the regulatory driver’s seat. One major compliance response from Meta is its plan to roll out a paid, ad-free version of Facebook and Instagram in Europe modeled after ad-free versions of services such as YouTube and Spotify.

Match and Google Settle Payments Dispute Before Trial Leaving Epic 1-on-1

Report from TechCrunch

In Brief – Match Group and Google have settled Match’s lawsuit alleging that Google “illegally monopolized the market” for app distribution and imposed an “extortionate tax” with its fees on in-app transactions. Match’s lawsuit, similar to a complaint filed by Epic Games in August 2020, was set to be considered in tandem with Epic’s in a trial scheduled to begin November 6. Match now withdraws from the case. A coalition of State Attorneys General who had also filed complaints against the Android operator settled in September. Match will adopt Google’s new user-choice billing system that offers app developers the choice to use a set of Google-approved payments services for in-app purchases, including the Google payments service. When a non-Google payments service is used, Google’s commission will be 4% lower (11% for subscriptions and 26% for purchases). In addition, Match will retain the $40 million it put into escrow to cover unpaid commissions during the litigation.

Context – Google’s now 1-on-1 face-off with Epic is its second big antitrust trial this fall. Its battle with the US Department of Justice, over practices the antitrust enforcers allege Google used to protect its dominant search engine, including tens of billions of dollars paid to Apple, started in September. The Epic Games v Google battle is focused on Google’s Android ecosystem and the fees it imposes on in-app purchases. It parallels Epic’s legal battle with Apple, which moved at a much faster pace, with a Federal District Court trial in May 2021 and a decision that August. Apple prevailed on Epic’s federal antitrust complaints but was ordered to drop its payments anti-steering policies under California’s unfair competition law. The Ninth Circuit appeals panel upheld the full decision, and both parties have appealed to the US Supreme Court. Epic v Google will be a jury trial, a twist among the recent Big Tech antitrust cases. While Google’s Android is clearly more open than Apple’s walled garden, its brand rules are complicated. Who knows how a jury might react. Two big antitrust challenges to Google’s adtech business are also in the queue.

New European Commission Deadline for Decision on Amazon Buying iRobot

Report from Reuters

In Brief – The European Commission has pushed back to February 14 its deadline to approve or reject Amazon’s proposed acquisition of iRobot, the maker of the Roomba robotic vacuum. The ecommerce giant announced the $1.7 billion deal in August 2022 but cut the price to $1.42 billion in July after iRobot took on more debt. When the Commission’s competition authority announced that it was proceeding to an in-depth review of the acquisition, it said it was concerned that the deal might threaten competition in the market for robot vacuum cleaners, for example by giving Amazon an incentive to tilt search results on its massive marketplace to benefit iRobot products and disadvantage competitors such as Dyson and Shark. The regulator also raised concerns about how Amazon might broadly use iRobot user data to gain an advantage over current or future marketplace competitors.

Context – More antitrust enforcement roulette with US, EU, and UK regulators. In financial terms, Amazon’s bid for iRobot pales in comparison to Microsoft’s purchase of Activision-Blizzard. That mega deal closed despite initial rejections by the UK CMA and US FTC, with the European Commission’s approval key to Microsoft prevailing. Since then, the Commission has taken a harder line than the CMA on a couple of tech acquisitions. On Amazon-iRobot, the UK regulator signed off, directly rejecting the argument that Amazon had a financial incentive to undermine competitor vacuum sales on its massive ecommerce marketplace given the very high commissions Amazon earns on third-party vacuum sales. Another example is the Commission’s hard line against Booking’s purchase of flight OTA eTraveli, a deal the CMA also approved. In that case, the EU regulator can rest on the fact that Booking is the top OTA in Europe and claim the deal will add to its dominance. The FTC has still not made its final call on Amazon-iRobot, but following federal court setbacks on Meta’s acquisition of Within and on Microsoft-Activision, it might just wait for this one to play out in Brussels, as it did with Illumina-Grail.

Federal Judge Prepares the Way for an AI Training Data Copyright Court Fight

Report from the Hollywood Reporter

In Brief – In an early federal court ruling on the question of whether Generative AI (GAI) systems trained on copyrighted material violate the rights of copyright holders, US District Court Judge William Orrick dismissed most of the complaints brought by three visual artists but is allowing the core allegation regarding GAI training to move forward. The case involves Stable Diffusion, a GAI service that creates digital images, developed by Stability AI and used by companies including Midjourney and DeviantArt. Judge Orrick ruled that it was too early to decide the key question of whether training AI systems with copyrighted material is a copyright violation, noting that Stability AI does not contest that it made copies of copyrighted works and used them to train Stable Diffusion. However, he indicated that the plaintiffs would need to better explain whether billions of images are stored in the program, and he questioned the strength of any claims where the plaintiffs cannot identify near-identical copies created by the AI systems. The judge gave the plaintiffs 30 days to amend the complaints he dismissed.

Context – Every week, it seems, another lawsuit is filed by copyright holders against AI companies, but there are still too few rulings to draw conclusions. We know that huge volumes of copyrighted material have been used to train neural networks, but they are not like traditional databases; they learn from data and then produce new output. Of course, as Judge Orrick noted, producing copies is often an important fact in copyright infringement cases. To wit, a GAI-related lawsuit by music publishers against Anthropic’s AI service claims it creates direct copies of copyright-protected song lyrics. Absent the production of copies, however, many legal analysts expect training to be considered “fair use”, a relatively robust exception in US copyright law. The newly released Biden Executive Order on AI does not propose to answer the contentious question. The EU Parliament’s version of the AI Act does regulate GAI foundation models, including a requirement to identify copyrighted materials used in training.

Malaysia Warns TikTok and Meta Not to Block Pro-Palestinian Content

Report from Reuters

In Brief – Malaysia’s Communication Minister Fahmi Fadzil has warned Meta and TikTok that they should not discriminate against pro-Palestinian content on their massive social media platforms. Fahmi took to X, formerly known as Twitter, claiming that officials had heard from many parties that Facebook, Instagram, and TikTok were blocking pro-Palestinian content following the Hamas atrocities of October 7th, which have led to weeks of Israeli bombing, threats of a major military attack to remove Hamas from its rule of the territory, and widespread fears that the violence could spark a wider regional war. A Meta spokesperson said there was “no truth” to the claim that its platforms were disproportionately blocking pro-Palestinian content. TikTok, which Fahmi had publicly criticized two weeks earlier for allegedly failing to adequately curb defamatory or misleading content in Malaysia, did not respond to the charges regarding its Middle East moderation policies.

Context – In the weeks since the brutal Hamas attack on Israel, passionate viewpoints have been expressed in demonstrations across the globe and on all the major social media platforms. One side or the other often sees some of that torrent of content as overly violent, hateful, false, deceptive, or fabricated. Large social media platforms have long done content moderation with a combination of rules and standards, digital scanning tools, and human moderators who check content in the grey areas. It is an inherently imperfect process. The European Union’s recently enacted Digital Services Act regulates how platforms do content moderation, requiring clear rules, fair and transparent enforcement, protection of freedom of expression, and effective measures in the event of a crisis. European Commissioner Thierry Breton has publicly accused X, Meta, and TikTok of falling short in the current crisis by not effectively blocking pro-terrorism content and misinformation. He has been especially tough on X, the platform accused of scaling back moderation under its new management. Does anyone think Breton and Fahmi agree on what’s misleading, fair, proportionate, or effective?

Biden USTR Withdraws Long-Standing Digital Policies at WTO Digital Trade Talks

Report from Reuters

In Brief – The Biden Administration’s US Trade Representative (USTR) has informed the World Trade Organization that it is no longer calling for e-commerce rules that back cross-border data flows or prohibit national data localization requirements and government reviews of software source code. The move reverses years of US leadership in support of policies promoting data flows, ecommerce, and digital trade. Progressive critics of so-called “Big Tech” and “globalization” praised the policy change, while digital economy and trade advocates, both in the US business community and on Capitol Hill, decried it. Senator Ron Wyden (D-OR), Chairman of the Senate Finance Committee, which oversees trade policy, harshly rebuked the USTR, saying it was abandoning leverage against China’s digital policy expansionism.

Context – Chalk this up as an early salvo by the Biden Administration to energize its left flank ahead of the 2024 election. Officials likely tell tech friends behind closed doors to relax because the WTO digital trade talks weren’t going anywhere soon, and the US is winning real gains on cross-border data flows with the EU and the UK. Tech policy fights keep spilling over into US trade debates in part because lobbyists on both sides look for things to do when legislation is stalled. We saw it in 2019 with the US-Mexico-Canada Agreement, and again in 2021 with the China competition bill. The Indo-Pacific Economic Framework for Prosperity (IPEF) talks are the most recent opportunity. Tech and trade critics, often one and the same, claim that trade agreements let US tech companies push the US Government to fight foreign digital regulations. A more tortured argument is that trade commitments could tie the hands of Congress in reforming US digital policy laws. At this point, US progressives are envious of EU laws regulating digital “gatekeepers”, online content moderation, privacy, and potentially AI, and are frustrated that the US Congress won’t follow suit. Besides Senator Wyden, who still supports free trade and the open internet, digital trade backers can look to recent letters from the Senate and House supporting their views.

EU AI Act Remains Stuck on a Few Key Issues

Report from Euractiv

In Brief – The fourth official EU AI Act trilogue session ended without agreement on a few sticking points standing in the way of the bloc’s landmark AI regulation effort. The toughest issues continue to be how to handle foundation models, such as the one underpinning ChatGPT, that could power a myriad of AI-enabled services, and the ability of Member State security services to use biometric analysis and surveillance. The European Commission released a draft AI Act in 2021 with a tiered regulatory system that applied mandates to enterprises using AI-enabled applications based on the risks associated with the specific service itself, rather than attempting to regulate the underlying technologies. However, following the boom created by ChatGPT, the EU Parliament made a big change by proposing to regulate foundation models and their creators rather than just those who use AI technology in specific applications. In addition, the Parliament continues to press for more limits on the ability of state security agencies to use biometric surveillance, including facial recognition. Spain, which holds the Council presidency through year’s end, hopes to wrap up agreement at a December 6 trilogue session.

Context – EU officials are generally not shy about championing their leadership on regulatory issues, especially the so-called “Brussels Effect” in the digital economy. However, regulating foundation models and other core AI technologies is drawing words of caution from figures otherwise comfortable with tech regulation, including French President Emmanuel Macron and European Commission Vice President Věra Jourová. While the EU proposes sweeping AI regulation, talks at the G7 show more support for voluntary Safe AI principles and corporate commitments, such as AI watermarking standards and third-party audits of AI systems. Japan is leading the G7’s Hiroshima AI Process, and Prime Minister Kishida intends to press for guidelines coordinating how companies building Generative AI services help distinguish AI-created content, especially images and video, from authentic material (i.e. “watermarking”).

Lawsuits from State AGs Targeting Meta Add to Parade of Plaintiffs Suing Social Media

Report from the Wall Street Journal

In Brief – A bipartisan group of 42 State Attorneys General (AGs) is suing Meta, alleging that features on its Facebook and Instagram platforms are addictive, aimed at kids and teens, and cause significant mental health harm. AGs from 33 states filed a single lawsuit in federal court in California, while nine AGs are filing separate lawsuits in their state courts. The new legal complaints argue that Facebook and Instagram have been designed to keep young users on the platforms longer and to bring them back repeatedly, including through alerts, notifications, and the “infinite scroll” of platform feeds, and that they include features, such as “likes” and photo filters, that the AGs allege negatively impact teens’ mental health.

Context – More than two years ago, Frances Haugen stole thousands of documents from inside Facebook and became the face of an international campaign to regulate social media. Among the charges that most resonated with public officials was the claim that Instagram knew it was harmful to younger users, especially teen girls. The company responded that its internal research efforts were highly misrepresented and defended its work to identify and address social media challenges. Haugen addressed legislative committees in the US, UK, and EU, calling for regulation. The EU and UK were already well on their way to enacting the now-passed Digital Services Act and Online Safety Act. Legislation to regulate social media use by teens has been proposed but not enacted in the US Congress. States have had more success passing age-based laws, but those are running into First Amendment problems in the courts. Finally, US lawsuits from private plaintiffs, including parents of harmed children and school districts, are targeting Meta, TikTok, YouTube, and Snap, making many of the same arguments as the AGs. The judge overseeing the federal court cases is looking for complaint-by-complaint defenses against specific charges rather than a blanket free speech defense. A state judge in California likewise recently ruled that one of those class action suits can proceed on the claim that the social media platforms were negligent in their design.
