News insights

June 2024

Judge Refuses Google’s Request to Reject US DoJ AdTech Antitrust Suit

Report from Bloomberg

In Brief – Federal Judge Leonie Brinkema has rejected Google’s motion to toss out the US Department of Justice (DoJ) antitrust complaint accusing the company of monopolizing the AdTech market. The ruling came unexpectedly in a pre-trial hearing focused on challenges to expert witnesses. The trial, scheduled for September, was recently changed by Judge Brinkema from a jury trial to a bench trial, a switch requested by Google and opposed by the DoJ. During back-and-forth in the latest hearing, Brinkema announced that she was rejecting Google’s motion for summary judgment, in which the company argued that the DoJ had failed to show that Google controls at least 70% of the market for display advertising and therefore qualifies as a monopoly, saying “We’re going to let this go to trial” and that there are “way too many facts in dispute.”

Context – Google emerged as the leading AdTech giant following its 2007 acquisition of DoubleClick. Antitrust scrutiny ramped up in 2020. A lawsuit from State AGs led by Texas came first but was delayed for years over a venue battle; it is now set for trial in a federal court in Texas in spring 2025. Then came the DoJ complaint in January 2023, and finally preliminary antitrust charges from the European Commission in June of that year. In all these cases, traditional antitrust analysis, including defining the relevant markets, measuring shares, and trying to prove market power, will be at the heart of the matter. However, the underlying technologies and markets, and therefore the facts, are very complex and in dispute. In addition, the complaints involve conduct stretching back years to when Google’s market shares were higher. New entrants have been joining and growing while Google’s share has been falling, with reports pinning it below 30% of the overall market. Therefore, antitrust enforcers are focusing on narrow types of AdTech services, arguing that the company holds much larger shares in some of them and still exercises great influence overall. Finally, Google’s AdTech platform is now governed in Europe by the Digital Markets Act, so their conduct will need to appease regulators there regardless of the EU antitrust case.

Meta Pauses AI Rollout in Europe as Irish Data Commissioner Reviews Data Policy

Report from Euractiv

In Brief – Meta has announced that they are pausing the rollout of their AI features in Europe and the training of their large language models with publicly shared content posted by their users in Europe, following a request from the Irish Data Protection Commission (DPC). The regulator is reviewing complaints filed by Austria-based privacy advocacy group NOYB, a long-time Meta antagonist, that the company does not have legal justification to use the data for AI training without express opt-in consent from individual Facebook and Instagram users. Meta argues that training AI models on European content is needed for AI features to “accurately understand important regional languages, cultures or trending topics” and that “Europeans will be ill-served by AI models that are not informed by Europe’s rich cultural, social and historical contributions.” The company expressed clear frustration with the regulatory intervention, which came just two weeks before they planned to release their AI products in Europe, saying that they had been informing EU data protection authorities about the work since March and incorporating their feedback, as well as noting that Google and OpenAI both have “already used data from European users to train AI” without the level of transparency Meta claims they are providing users.

Context – The EU AI Act has been celebrated by AI regulation advocates for setting up a comprehensive AI regulatory regime, including creating a specialty EU AI Office. The AI Act’s treatment of large foundation models, including training, was a key compromise leading to a final deal. Now, despite the regulatory regime not going fully into effect for two years, privacy regulators are already stepping in and proposing to use the GDPR to regulate AI model training and services. The European Data Protection Board recently issued a report on Chat-GPT claiming that AI companies need a valid legal basis for all stages of personal data processing, including the collection and use of training data, as well as raising the concern that chatbot hallucinations could violate the GDPR’s “principle of data accuracy”.

US DoJ and FTC Sue Adobe Over Cancellation Fee and Practices

Report from the New York Times

In Brief – The Federal Trade Commission (FTC) and US Department of Justice (DoJ) have together sued Adobe for making it overly difficult and expensive for consumers to cancel software subscriptions. The company, a leading provider of specialized software for graphics, photography, illustration, and video, moved largely to a monthly subscription revenue model in the early 2010s. The complaint from the regulators alleges that the software giant included a sizeable Early Termination Fee in their subscription agreement, but the fee was largely hidden from shoppers “in fine print and behind optional text boxes and hyperlinks” at the time of purchase. The regulators also argue that the company used several techniques to make it difficult for subscription holders to cancel, including telling unsuspecting users that they had agreed to the expensive cancellation fee. “Adobe knows about the barriers consumers face when attempting to cancel their subscriptions,” the government said in the lawsuit, which charges the company and two of its executives who had oversight of the program with violations of the Restore Online Shoppers’ Confidence Act. Adobe’s General Counsel said in a statement that, “We are transparent with the terms and conditions of our subscription agreements and have a simple cancellation process.”

Context – This is the second federal lawsuit to make news in recent weeks targeting corporate subscription cancellation processes and the deceptive website practices often called “dark patterns”. The FTC filed suit against Amazon in June of 2023 targeting Amazon’s use of dark patterns to sign up users to the Prime subscription program and dissuade them from cancelling their monthly subscription. Amazon’s practices were the target of a consumer group campaign in Europe in early 2021 and the company settled with EU regulators in 2022 by making major changes there. Amazon largely kept their original processes in the US for another year despite FTC scrutiny, making changes days before the FTC sued. That complaint recently survived Amazon’s motion to dismiss and has been scheduled for a June 2025 trial.

Federal Cybersecurity Official Wants “Deepfake” Law to Protect Elections

Report from the Washington Post

In Brief – The Director of the US Cybersecurity and Infrastructure Security Agency (CISA) believes that voluntary commitments from AI companies to identify and label AI-generated images on their platforms won’t be enough to keep the technology from being used to interfere with US elections and argued that legislation will be needed to put rules in place that have “real teeth”. The largest digital platforms and AI developers have made a series of voluntary commitments to identify and label realistic-looking AI-generated images as well as act against deceptive political content created by AI tools, but Congress has not approved any major federal legislation on AI, including restrictions on AI in campaigns.

Context – The folks at the Cybersecurity and Infrastructure Security Agency of the Department of Homeland Security (DHS) seem to have a tough time understanding that regulating so-called “misinformation” is asking for political and legal trouble. Back in 2022, CISA and DHS created a “Disinformation Governance Board” intended to coordinate DHS activities to combat online misinformation, including foreign election interference and threats to critical infrastructure. Conservatives and Republicans did not trust Biden Administration appointees to be nonpartisan, sensed online censorship, and the whole endeavor quickly fell apart and was scrapped. The fact that AI and so-called “deepfakes” are part of the equation doesn’t change the political risks, First Amendment challenges, or suspicions of partisan motivations. The FEC has been considering a proposal since last summer that would prohibit using AI deepfakes to deliberately misrepresent an opponent in a political ad but has not finalized any rules, and deliberately misrepresenting an opponent already violates FEC rules regardless of whether AI is used. The FCC will consider new rules requiring political ads aired on television or radio to include a disclosure when AI is used, but the agency cannot regulate ads on online platforms. Finally, quick action in Congress is not likely, and an AI legislative roadmap from the Senate Majority Leader was recently lambasted by progressives championing aggressive regulation.

Japanese Parliament Passes Bill to Regulate Apple and Google App Stores

Report from Kyodo News

In Brief – Japan’s parliament has passed targeted digital market competition legislation to regulate the largest mobile app store businesses rather than rely on traditional antitrust enforcement processes. The Act on Promotion of Competition for Specified Smartphone Software is intended to push Apple and Google to comply with a set of requirements, including opening their mobile ecosystems to third-party app stores and payments systems. The Japan Fair Trade Commission (JFTC) will have the authority to fine the two mobile app store giants up to 20% of their domestic app store sales if they do not comply with the app store competition rules, with fines potentially reaching 30% for repeat violations. The two mobile ecosystem giants will be permitted to oversee security, data management, and protections for young users provided by the alternative app stores and app providers. The level of fines in the new legislation far exceeds the 6% authorized for similar “exclusion-type private monopolization” behavior under Japan’s Anti-Monopoly Law. The app store competition law must next be formally approved by the Cabinet and is expected to go into effect by the end of 2025.

Context – This app store bill is being compared to the EU’s Digital Markets Act (DMA). They are not that alike. The DMA is much broader in scope, regulating far more than mobile phone app store rules, and it already covers seven digital giants. That said, as the European Commission determines whether Apple’s DMA compliance plan is adequate, it will cover much of the same ground as this law. What is most noteworthy about how Japan is proposing to regulate digital platforms is that it has chosen to focus so narrowly that the new regulation reaches only non-Japanese digital giants. There are five digital giants in Japan – Amazon, Apple, Google, Rakuten and Yahoo Japan. Two are Japan-based. Heretofore, Japan has pursued more moderate digital regulation than the EU, where no digital giants are European. Japan picking a digital market to regulate that impacts only US-based Apple and Google is a change.

Privacy Rules Backers Want FTC Action Now Fearing Time Is Running Out

Report from the Washington Post

In Brief – A coalition of more than 30 privacy, consumer, and civil rights advocacy groups has signed a public letter urging the Federal Trade Commission (FTC) to quickly carry out formal rulemaking to establish federal privacy regulations governing what the agency’s Democratic commissioners call “Commercial Surveillance and Data Security”. The progressive public interest organizations argue that corporations have been relentlessly increasing their efforts to “extract even more personal, sensitive data from individuals”, criticize the regulator for “the lack of action” to deal with the problem, and call for the agency to “act now to protect the public at large”. Backers of strong new federal privacy regulations are reported to be concerned that Republican gains in the November elections could end hopes for the kind of rules they want to see, believing fast action at the FTC is needed because a return of President Trump in 2025 would almost certainly shut down the commercial surveillance rulemaking that is widely opposed by business groups.

Context – The impatience and angst of the privacy advocates are understandable. While many tech commentators have argued for years that federal privacy legislation was inevitable, we’ve consistently recommended betting the under. Last Congress, a compromise bill earned the backing of three of the four key committee leaders, with only Sen. Maria Cantwell (D) holding out, and that measure still fell victim to complicated political and policy crosscurrents. This year’s Cantwell-Rodgers compromise includes more than enough to raise major Republican and business concerns. And there are big operational hurdles as well, including a short legislative calendar, very narrow House and Senate majorities, Republican and business antipathy for the current FTC, and GOP reluctance to give President Biden a big legislative win. In terms of go-it-alone FTC rulemaking, a federal judiciary that is receptive to “Major Questions” doctrine challenges to expansive regulatory activity is waiting even if former President Trump does not prevail in November.

New York Joins States Trying to Regulate How Teens Use Social Media

Report from AP

In Brief – The New York state legislature has passed a bill regulating how social media platforms serve teenage users and Governor Kathy Hochul (D) is expected to sign it into law. The legislation follows the increasingly popular model of regulating various platform features such as the use of algorithms to recommend content, auto-play and scrolling features to keep users engaged, and notifications that remind a user of posts on the platform. Critics of social media argue that these and other technical features make the platforms “addictive” to teens and are employed to maximize user engagement and ad revenue despite the companies knowing them to be harmful. Restrictions in the bill include prohibiting platforms from showing users under age 18 posts from accounts they do not already follow, requiring that content be presented in chronological order rather than ranked by some other algorithm, and barring notifications between midnight and 6 am. The bill allows teens to get recommended posts if they obtain “verifiable parental consent” for the change. Critics charge that the measure violates the First Amendment and say it will be challenged in federal court.

Context – Efforts to cordon off parts of the internet, especially social media platforms, from teens are only gathering steam despite the lack of data showing that social media use causes worse mental health in teens. The New York law will get into the litigation queue with others from both red and blue states, including Arkansas, Ohio, Utah, Texas, Florida, California, and Maryland. Federal judges have not been sympathetic so far, blocking them on First Amendment and privacy grounds (the latter due to the age verification needed to bypass restrictions). The recent legislative trend is to try to avoid strict legal scrutiny on free speech grounds by focusing regulation on “content neutral” app and website features, such as recommendation algorithms and auto-play, rather than on harmful content like bullying, disordered eating, and suicide. In the EU and UK, absent a First Amendment, regulators are starting to directly regulate content moderation.

Booking Fails to Reach Settlement with Spanish Competition Authority

Report from LaRazon

In Brief – The Spanish National Markets and Competition Commission, the country’s competition authority, has rejected the effort of Booking, the leading digital platform for online hotel and lodging reservations, to settle an investigation that accuses the company of abusing its dominant position. The regulator reached a provisional decision in February that the company prevented domestic hotel groups from offering deals on their own websites that were cheaper than those they offered on Booking and proposed a 486 million euro fine. Booking, which defends its pricing policies by arguing that allowing hotels to set higher prices on its platform would harm consumers, is reported to have offered a range of commitments to the competition authority to change practices, which would have gone along with a reduction in the fine, but the two sides did not reach an agreement. Booking reacted to the regulator’s decision to proceed to a final decision against the company by saying that they do “not agree with the justification set out in the draft” and will challenge the conclusions.

Context – Booking has long been the top hotel reservation intermediation platform and has had repeated run-ins with antitrust regulators over “price parity” policies that allegedly block hotels from offering lower prices on other online platforms, which strike critics as especially problematic when the alternative platforms charge the hotels lower fees. In Europe, the regulatory scrutiny the company faces recently expanded when the European Commission designated it a “gatekeeper” under the Digital Markets Act (DMA). Booking was the first company added to the DMA following the original six – Amazon, Apple, Google, Meta, Microsoft, and ByteDance. The five initial US giants all exceed $1 trillion in market capitalization, with three valued beyond $2T. ByteDance is estimated to be worth around $230 billion. Booking is valued at “just” $130 billion. The Commission is also investigating X as a DMA “gatekeeper”, and X is worth less than $10 billion, which would stretch a regulatory scheme meant for digital platforms that are supposed to be dominant.

Federal Ninth Circuit En Banc Panel Rejects Uber Challenge to AB 5

Report from Courthouse News Service

In Brief – An en banc panel of the federal Ninth Circuit Court of Appeals has rejected Uber and Postmates’ effort to overturn California law AB 5, which changed worker classification standards for Gig-style rideshare and delivery platforms. The 11-judge panel’s ruling overturned an earlier appeals court decision that sided with the platform companies’ argument that the law violated the Equal Protection clauses of the US and California constitutions. AB 5 was enacted by the California Legislature in 2019 to apply the ABC Test for worker classification, narrowing the ability of businesses to hire workers as independent contractors rather than employees. High-profile Gig ridesharing and delivery platforms were a top target of the labor initiative, but many professions that traditionally operated as independent contractors were caught up and lobbied to be exempt. Some were exempted, many were not. In 2020, the legislature expanded the exemption list, adding many professions, artists, and freelancers, and even non-ridesharing and delivery platforms. Writing for the panel, Judge Jacqueline Nguyen said that the appropriate test for the law’s narrow targeting was the rational basis test, meaning the law stands if there could be a rational reason for the legislature’s decision, and the panel concluded there could be. The judges argued that focusing on the ridesharing and delivery platforms because of their scale, leadership in developing the Gig work model, and employment impact was not irrational, and therefore the law was not unconstitutional.

Context – For those who thought platforms like Uber and Postmates were exempt from AB 5 because Prop. 22 was enacted in 2020, you remember correctly. However, the judges and the litigants agreed that this litigation was not moot because of continued legal challenges to Prop. 22 and some enforcement for conduct between enactment of AB 5 and Prop. 22. That said, this decision does not change anything about AB 5 for the Gig platforms. Prop. 22 litigation could. Independent truckers on the other hand, who are not covered by Prop. 22, have been fighting in federal court since 2020 to be exempted and keep losing.

Utah Sues TikTok Alleging TikTok LIVE Leads Teens to Create Adult Content

Report from CNN

In Brief – The State of Utah has sued TikTok alleging that its TikTok LIVE feature promotes sexual solicitation and exploitation of young people. The lawsuit in Utah state court claims that TikTok LIVE, which was introduced in 2019 and allows users to live-stream performances, includes a monetization feature where viewers can purchase virtual currency to reward streamers. TikTok earns a commission on virtual currency purchases. According to the Utah complaint, minors are sometimes streamers and they can be incentivized by the currency to take part in sexually explicit acts. A company spokesperson responded to the complaint with a statement saying that, “Creators must be at least 18 years old before they can go LIVE, and their account must meet a follower requirement. We immediately revoke access to features if we find accounts that do not meet our age requirements.”

Context – Utah’s Republican leaders have been early and aggressive critics of the social media giants. The state was the first to enact legislation regulating how social media platforms deal with teen users, something that is increasingly commonplace in states dominated by a single political party. Like the others, Utah’s legislation is being challenged in federal court. This is also not the state’s first legal challenge to TikTok, with the Attorney General suing TikTok last fall alleging “addictive design” and insufficient protections of young users. But that was nothing compared to fellow mountain west state Montana, which banned the app outright only to see the ban blocked by a federal judge. Of course, the issue of banning TikTok is now centered on the legal battle surrounding the federal legislation forcing the divestment of the US TikTok business. However, Utah is once again stepping out front by going after TikTok LIVE for adult content, at least in the US context. Accusations that live streaming on TikTok was undermining public morality caused major problems for the app in a growing number of developing country markets last year, leading Nepal to ban the app and Kenya to consider strict regulation.

Microsoft Says More Coming to Address EU Teams Antitrust Concerns

Report from Bloomberg

In Brief – Microsoft President Brad Smith has said that while the company has “done a lot” to address the concerns of European competition regulators regarding how its Teams product is integrated with its Office 365 and Windows software suites, it will do more. Slack filed an antitrust complaint with the European Commission accusing Microsoft of illegally tying Teams to Office and Windows, and the Commission opened a formal investigation in mid-2023. A formal antitrust complaint from the Commission is reportedly being prepared. Last fall, Microsoft offered to provide versions of Office and Windows in Europe without Teams, as well as offer a stand-alone Teams product. In April 2024 the company announced that it would roll those changes out globally, saying that the expansion “addresses feedback from the European Commission by providing multinational companies more flexibility when they want to standardize their purchasing across geographies.” Pricing gaps between the different offerings have plagued the proposals. For example, stand-alone Teams was priced at $5.25 per month, while a software buyer would save only $2.25 per month by buying Office 365 without Teams.

Context – Microsoft won Commission approval of its massive Activision acquisition with a regulatory good behavior campaign. Addressing concerns with company practices integrating its market-leading Windows OS and Office 365 software, component applications, and its massive cloud business has been at the top of the list. Along with the Teams investigation, European cloud services providers complain that Microsoft uses software licensing to pull customers into its cloud infrastructure, and NextCloud has complained that Microsoft unfairly bundles its OneDrive cloud system with Windows. The Windows OS is a gatekeeper core platform service under the Digital Markets Act (DMA), and pricing features are already part of the Commission’s DMA investigations of Apple and Meta. Teams-related pricing could eventually be dealt with through the DMA.

Some AI Company Employees Want New Whistleblower Protection

Report from the New York Times

In Brief – A group of 13 current and former employees from top AI companies, including 11 from OpenAI, one current employee of DeepMind who previously worked for Anthropic, and one former employee of Anthropic, have signed an open letter asking AI companies to foster transparency about risks from their AI systems and protect employees from retaliation if they speak out about their AI safety concerns. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” says the letter. They call for a “verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise.” Some of the letter writers have ties to the “effective altruism” movement as well as to notable past tech whistleblowers. They cite AI risks including exacerbating inequality, misinformation, and even the loss of human control potentially resulting in human extinction, and warn that “silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

Context – OpenAI has been at the forefront of AI mania since it publicly released its human-sounding chatbot, and it has openly talked about super-powerful AI causing huge harms, backed calls for regulation, and had soap opera-like internal operations. Tech industry whistleblowers are an increasingly common occurrence. Microsoft recently had an AI-focused one to add to Facebook, Amazon, TikTok, Uber, and Instagram. Now they are announcing themselves ahead of time. More than a year ago, the AI doomsday contingent had an open letter calling for a six-month global AI moratorium or regulation. It came to nothing. The recent spat between Elon Musk and Meta’s Yann LeCun is a high-profile clash between the AI doomers and those who think Large Language Models might have inherent flaws that will keep them from super-intelligence.

US Antitrust Agencies Agree to Split Up Top AI Investigations

Report from New York Times

In Brief – The US Department of Justice (DoJ) and Federal Trade Commission (FTC) have agreed to split up what they believe are the top antitrust investigations in the AI sector, with the DoJ taking the lead to investigate chip giant Nvidia while the FTC investigates the antitrust implications of Microsoft’s investments in, and relationship with, OpenAI and AI startups such as Inflection AI. Agency talks began months ago to determine which would take the lead looking into the unusual Microsoft-OpenAI relationship that was highlighted by Sam Altman jumping to Microsoft a day after he was deposed as CEO of OpenAI. Nvidia’s emergence as the dominant global supplier of top-end chips and processors for many AI applications, which has resulted in the company growing into the world’s second most valuable company (behind only Microsoft), created a second major AI sector antitrust concern and led to the DoJ Antitrust Division taking the lead on that portfolio. The two agencies engaged in similar jurisdiction negotiations in 2019 over four digital giants, with a division of leadership that has cases involving Google and Apple at the DoJ, and Amazon and Meta at the FTC. The AI agreement will see DoJ continuing its leadership role related to Google AI activities and the FTC related to Amazon’s AI activities.

Context – One mantra of tech regulation advocates is that governments should not make the “same mistakes” with AI that they made with social media, including more activist competition policy. However, startup OpenAI emerged as the leader in LLM chatbots, while Clearview AI pulled off a similar surprise beating the incumbent digital giants on facial recognition. Conventional wisdom might hold that having the largest data sets and biggest stock of GPU “compute” confers an unbeatable advantage to the largest incumbents, but some experts see the largest models potentially hitting a point of diminishing returns from more data and processing power already while smaller, targeted, specialized systems may prove more efficient. Innovation might continue to be key. Regardless, the EU Competition Authority and UK CMA are on the AI beat too.

UK Retailers File Class Action Lawsuit Targeting Amazon

Report from the Silicon Republic

In Brief – The British Independent Retailers Association (BIRA) has filed a class action-style lawsuit against Amazon in the country’s top antitrust court. The UK retailers are claiming £1bn in damages for what the association describes as illegal misuse of non-public retailer data to engage in a “product entry strategy” to unfairly compete with the retailers, as well as manipulation of the “Buy Box” to influence shoppers to purchase products that brought greater profits to Amazon. The CEO of BIRA said, “If small business want to sell online, Amazon is the dominant marketplace in the UK. While the retailers knew about the large commissions charged by Amazon, they did not know about the added risk of their trading data being used by Amazon to take sales away from them.” In 2023, the UK Competition and Markets Authority (CMA), the country’s competition regulator, accepted voluntary commitments from Amazon to give third party sellers a “fair chance” to be featured in the Amazon Marketplace Buy Box, along with promises that it will not use Marketplace data obtained from third-party sellers to give itself an unfair competitive advantage.

Context – Amazon is the largest online retailer, the largest ecommerce marketplace provider, and the largest ecommerce fulfilment services provider. Unlike true marketplaces, Amazon handles the goods for most of their top marketplace sellers as if they were Amazon retail goods. And through the Prime program and the Buy Box algorithm, Amazon directs most consumer purchases to goods that involve FBA fulfillment and very high fees. Complaints about Amazon misusing third-party seller data to grow its own retail business are years old, but they now largely miss the point. Amazon’s most profitable ecommerce business is not its own retail sales. Third-party sellers using FBA logistics bring far higher margins. The Amazon Buy Box settlement with the European Commission involving logistics, and the Buy Box component of their CMA voluntary commitments, are more important than Amazon promises not to use seller data to unfairly grow their own low margin retail business.

Department of Justice v Google AdTech Antitrust Suit Will Be a Bench Trial

Report from Reuters

In Brief – Google has prevailed in its effort to battle the US Department of Justice’s AdTech antitrust complaint in front of a federal judge rather than a jury. The antitrust complaint, which targets Google’s core revenue generator, alleges that the digital giant has built an illegal online advertising monopoly and may lead to a breakup of its collection of online advertising businesses. The case is scheduled to go to trial in September in federal court in Alexandria, Virginia and was set for a jury trial at the request of the government antitrust regulators. Google asked Judge Leonie Brinkema to switch to a bench trial, arguing that it is unprecedented for a jury to decide a federal antitrust case brought by the government, and that the case involves “a complicated, intricate technology ecosystem, which DOJ has acknowledged to this Court is ‘highly technical, often abstract, and outside the everyday knowledge of most prospective jurors.’” In an interesting legal twist, Google argued that the DoJ had a stronger argument for demanding a jury in a case where it sought monetary damages, and so the company sent the government a $2.3 million check covering three times the federal government’s claimed losses to address that aspect of the case, a move the judge accepted in making the case her sole responsibility.

Context – You can trace this procedural conflict back to Google’s decisive defeat at the hands of Epic Games last December in a lawsuit alleging that Google’s Android rules and practices violated federal antitrust law. Apple prevailed in a similar Epic lawsuit in 2021. It struck many observers as odd that the “closed” Apple ecosystem, which generally holds larger market shares than Google’s, was on the right side of antitrust law while the more open Android was not. One big difference was that Epic v Google was a jury trial, while Epic v Apple was a bench trial. Plaintiffs in all the Big Tech antitrust suits, including the DoJ, are now asking for jury trials. Google argued here that the constitutional right to a jury trial is intended to protect citizens against the federal government, not the other way around.

California Judge Dismisses School District Social Media Liability Claims

Report from the Bloomberg

In Brief – Los Angeles Superior Court Judge Carolyn Kuhl has dismissed legal claims made by hundreds of public-school districts that big social media companies should be held liable for damages the schools allege were caused by the companies creating “addictive” online services and marketing them to teen users. The schools argue that Google, Meta, TikTok, and Snap drove up the cost of education over many years by making students more distracted and disruptive, adding to the cost of in-school discipline, employee training, and communication with parents. Kuhl, who ruled last fall that the same social media companies could not rely on federal Section 230 to dismiss damages claims from teens and their parents alleging direct harms from the platforms, ruled in this case that the claims of school districts are too remote and there must be limits on liability, otherwise any company could be held responsible when “emotional harm” it inflicts on individuals then causes those individuals to “act out.” The school districts were not given an opportunity to amend their claims.

Context – Social media critics have been trying for years to construct legal strategies to circumvent Sec. 230. Two of the most common avenues are to allege that the harms are not due to the dangerous third-party content itself, but instead to the design of the platforms to encourage longer and more intensive (“addictive”) use, or that the use of algorithms by the platforms to direct dangerous content to users is not an activity protected by the federal statute. Along with the lawsuits in California state court being overseen by Judge Kuhl, there is a similar set in federal court being overseen by Judge Yvonne Gonzalez Rogers. Last November she, like Kuhl, rejected the companies’ argument that the suits should be dismissed en masse based on Sec. 230 and the First Amendment, although she did dismiss some of the plaintiffs’ claims. Similar addictive design arguments are also being included in state laws attempting to regulate teen social media use as well as lawsuits filed by dozens of State AGs targeting Meta.

EU Court of Auditors Criticizes Government on EU AI Investment Policy

Report from Courthouse News Service

In Brief – The European Court of Auditors, an EU agency that audits public spending to improve transparency, accountability and trust in government, has released a major report on public efforts to develop Europe into an AI power and found them severely wanting. Examining European Commission AI initiatives stretching back to 2018, including efforts to expand public and private investment, research, and development, the agency said that Europe is falling further behind the US and China on AI investments rather than catching up. Both member states and the European Commission come in for criticism, as do public AI investment initiatives and efforts by government to encourage private sector AI investments. The gap in private sector AI investment between the US and EU was especially striking, with the US reportedly home to €62.5 billion in private AI investment in 2023, seven times more than the EU and UK’s €9 billion, and private investment in AI estimated to have fallen by 44% from 2022 to 2023. The auditors found that AI investment goals and research priorities were too vague, responsibility for investment initiatives was spread across too many national agencies and authorities, and the European Commission was unsuccessful in effectively coordinating the many efforts. “Going forward, stronger governance and more – and better targeted – public and private investment will be paramount if the EU is to achieve its AI ambitions,” the report said.

Context – The EU’s AI Act, enacted this year and bringing online the first comprehensive AI regulatory regime over the next two years, was first proposed in 2021, but the European Commission’s focus on AI began in earnest in 2018. The AI Act, like the GDPR, Digital Services Act, and Digital Markets Act, has proven that Europe is the digital regulation leader. But business breakthroughs have been lacking. For example, investments in European venture-backed startups fell dramatically from 2021 to 2022 and again from 2022 to 2023. There is no consensus on why; it’s one of those chicken-and-egg questions.

X’s Lawsuit to Silence Progressive Critic Shot Down by Federal Judge

Report from the Washington Post

In Brief – US District Judge Charles Breyer in California has dismissed the lawsuit filed by X against the Center for Countering Digital Hate (CCDH) last July alleging that the group violated the platform’s terms of service and compiled false criticisms to harm the company’s business. In his ruling, the judge did not mince words in finding that the suit lacked merit and appeared to be a blatant attempt to intimidate critics: “X Corp. has brought this case in order to punish CCDH for CCDH publications that criticized X Corp. — and perhaps in order to dissuade others who might wish to engage in such criticism.” Breyer dismissed the suit under California’s strict anti-SLAPP law designed to stop “strategic lawsuits against public participation”. X sued the UK and DC-based non-profit after it published a report alleging that the social network was profiting from hate after Twitter reinstated scores of previously suspended accounts of “neo-Nazis, white supremacists, misogynists and spreaders of dangerous conspiracy theories.” X alleged that the group improperly gained access to data about X and that its claims influenced advertisers to spend less money on the site, costing X tens of millions of dollars in lost revenue.

Context – When Musk acquired Twitter and shook up its content moderation rules and practices, many in the left-leaning Twitter “Indigo Blob” rebelled. The CCDH initiated a campaign criticizing the platform and its principal owner that continues. Of course, it’s deeply ideological. Elon Musk’s robust defenses of “free speech” and criticisms of the prior Twitter regime’s content moderation practices have upset many progressives and delighted many conservatives. Agreement is rare on where to draw the lines on concepts like “hate speech” and “disinformation”, let alone when and how to enforce them. In the US, the First Amendment is key. It gives X and other platforms great leeway to impose editorial standards in content moderation. However, in Europe, social media content moderation practices are now regulated by the Digital Services Act and X is under investigation by the European Commission for not meeting the law’s standards.

European Court of Justice Reaffirms Key Principle of the Digital Single Market

Report from EU News

In Brief – In a series of advisory rulings, Europe’s top court has strongly upheld the “country of origin” principle for regulating ecommerce businesses in the EU that is enshrined in the eCommerce Directive and the Digital Single Market initiative. The European Court of Justice addressed a series of legal challenges raised by top digital platforms, including AirBNB, Google, and Amazon, to Italian regulations enacted in 2020 and 2021 that imposed obligations and fees on their operations exceeding the regulatory requirements imposed by the member states where the platforms have their European headquarters. The EU judges affirmed that online services companies are regulated by the laws of the country where they are based, which for most of the companies is Ireland, with Amazon headquartered in Luxembourg, not by the laws of the countries where their consumers are located, and set a high bar for getting an exemption from the e-Commerce Directive to regulate platforms. The ECJ rulings will now be used by the Italian courts to decide the legal challenges brought by the digital intermediaries that have established EU headquarters in a member state other than Italy, while sending a broader signal to other EU national governments that the High Court is skeptical of country-by-country regulation of digital services.

Context – These rulings follow a similar one in December that rejected an Austrian law regulating social media content moderation challenged by Google, Meta, and TikTok. The ECJ referenced that ruling in this case. A 2022 French law imposing a minimum shipping fee for books sold online to protect physical bookstores may be up next, with a French court asking EU judges whether the law is compatible with EU single market rules. While Ireland and Luxembourg have faced years of criticism for supposedly lax regulation of the digital giants, especially on privacy and taxes, the EU legislative response has been to centralize the regulation of digital giants with the European Commission, such as through the Digital Services Act and Digital Markets Act, rather than empower regulators and legislators country-by-country.

Temu is the Latest eCommerce Platform Designated a DSA VLOP

Report from The Verge

In Brief – Temu, one of China’s top online shopping platforms, is set to be designated a Very Large Online Platform (VLOP) under the EU’s Digital Services Act (DSA) and will face the highest level of scrutiny under the law that regulates how digital platforms moderate illegal and objectionable online content. Platforms with more than 45 million monthly active users in Europe are VLOPs, and Temu has reported to the European Commission that it has crossed that threshold. VLOPs are regulated directly by the European Commission, face stricter criteria for dealing with objectionable material, and must submit regular risk assessments that detail their responsive measures. The Commission initially designated 19 platforms to be VLOPs, added three popular online pornography sites to the roster in December, and most recently announced that Temu’s fellow ecommerce platform Shein was a VLOP. Among the “systemic risks” addressed by the DSA that are most relevant to ecommerce platforms are the offer and sale of counterfeit, unsafe, and illegal products, adherence to consumer protection standards, and transparency regarding sellers, goods, on-site advertising, and recommender algorithms. Temu will have four months to provide the Commission with its initial systemic risk assessment report.

Context – Temu is the sixth ecommerce platform to be regulated as a VLOP, following initial inductees AliExpress, Amazon, Google Shopping, and Zalando, as well as Shein. While Amazon and Zalando have challenged their designation, the Commission argues that the law is clear in its coverage of very large ecommerce marketplaces and the European Court of Justice ordered Amazon to comply with the law’s advertising transparency rules while its challenge is being adjudicated. The Commission has opened full scale DSA investigations of five VLOPs: X, TikTok, Facebook, Instagram, and AliExpress. Telegram, a messaging platform considered central to disinformation campaigns, claims to have just 41 million EU users, but the platform is reportedly being scrutinized by the Commission for inclusion under the DSA. Expect it relatively soon.

Federal Court Sets September Trial for Challenges to Federal TikTok Ban

Report from Reuters

In Brief – The US Court of Appeals for the District of Columbia Circuit has scheduled oral arguments in September on the legal challenges to the recently enacted federal law requiring China-based ByteDance to divest its US TikTok business by January 19, 2025, or face a ban. TikTok, as well as a group of successful US-based online creators who use TikTok, have each sued to block the law from going into effect. The litigants and the Department of Justice all asked the court for a quick schedule, and the court responded by calling for the briefs from TikTok and the creators to be filed by June 20, the brief from the Justice Department by July 26, and reply briefs by August 15. TikTok and the Justice Department have asked the court for a ruling by December 6 to allow them to seek review from the Supreme Court if needed.

Context – The US Government attempting to ban TikTok, whether through CFIUS or federal legislation, was always setting up for a historic battle in federal court. The showdown was diverted in 2021 when President Biden abandoned the initial executive order from President Trump to ban TikTok and dropped the related court fights. However, the Biden Administration never dropped the CFIUS review, never accepted TikTok’s efforts to resolve the Chinese-influence concerns, and strongly backed the eventual congressional legislation, including giving classified briefings to Congress on the risks of China using TikTok to malignly influence US society. This is going to be a massive case at the intersection of the First Amendment and national security at the mega-scale of the internet. The First Amendment has traditionally protected even adversarial speech, including anti-American propaganda, and a federal judge also blocked the Trump effort to ban the Chinese WeChat app, which is definitely a part of the Chinese online control regime. But there has never been an internet service like TikTok, which claims 170 million US users, has the ability to non-transparently control the news and public affairs messaging they see, and is based in an authoritarian country under the sway of a government seen as the US’s top global security rival.

Federal Judge Rejects Amazon Motion to Dismiss FTC’s Dark Patterns Lawsuit

Report from Courthouse News Service

In Brief – US District Judge John Chun has rejected Amazon’s motion to dismiss the FTC’s lawsuit accusing the ecommerce and logistics giant of using deceptive online practices to sign up users to the Prime subscription program and dissuade them from cancelling their monthly subscription. Chun’s order notes that courts rarely grant motions to dismiss when deciding whether a disclosure, such as the terms of enrollment in Amazon Prime, is clear and conspicuous to a reasonable consumer as a matter of law, and that he was bound to review the FTC’s allegations in the light most favorable to the agency. This FTC lawsuit focuses on deceptive website practices that are often called “dark patterns”. Consumer protection complaints targeting Amazon’s use of dark patterns to dissuade Prime members from cancelling started in Europe in January 2021 and the company settled with EU regulators in 2022 by making major changes there. But Amazon kept the original over-the-top processes in the US for another year, making changes days before the FTC filed their lawsuit. The FTC’s complaint says that “Amazon still requires five clicks on desktop and six on mobile for consumers to cancel from Amazon.com.”

Context – Judge Chun is going to be a central figure in the application of antitrust and regulatory policy to Amazon in the US. He is not just overseeing this FTC dark patterns complaint, he is handling the bigger FTC antitrust lawsuit alleging that Amazon harmed online consumers and small business sellers by pushing online sellers to raise the prices they offer on other websites to match the higher prices they set on Amazon’s dominant marketplace to account for high Amazon fees, creating an elevated virtual price floor on the internet. Chun has set a fall 2026 trial date for the big FTC antitrust lawsuit. Chun is also handling a similar private antitrust suit targeting Amazon for their price floor policies, and rejected the company’s motion to dismiss. Not unlike the situation with Prime dark patterns, Amazon settled EU complaints regarding how third-party sellers are treated on their marketplace in 2022, but not yet in the United States.

French Court Pushing X to Pay Media Companies for News Stories in Tweets

Report from France 24

In Brief – The Judicial Court of Paris has ordered X to provide a dozen top media companies with data to help determine the revenue generated for X by the sharing of their news content on the platform. The media companies are seeking licensing payments from X based on the “neighboring rights” provision added to French copyright law in 2019 to funnel some ad revenues from digital platforms to news media companies. X has been given two months to share data including the number of views of the media company stories, the number of clicks on the content, data about reader engagement, and the revenue generated in France by X on those pages, as well as to describe how its algorithms lead it to publish the media content. X argues that it is not required to pay because users post the content, not X.

Context – Google and Meta have been the main targets in the global battle by media companies to have governments compel digital platforms to pay them when news content appears on their platforms. From the government perspective, Australia and France have led. Australia passed legislation in 2020 compelling Google and Meta to pay media companies or have rates set by a government arbiter. Both companies made deals with media companies. France enforced neighboring rights linked to the “snippets” Google served alongside links. When Google threatened to simply stop using snippets, as it had done in Spain and Germany, the French competition authority ruled that stopping snippets to avoid payments would violate antitrust law. Google relented, signed payment deals, and was still fined millions by the competition regulator. Meta then signed voluntary licensing deals in France. Canada enacted an Australia-style law last year. Meta refused to participate, blocked media stories in Canada, and appears to have been no worse for wear. Meta may next do the same in Australia. In France, X will likely need to pay unless it blocks links with snippets. Then the question is whether X, which has never made much profit, will nevertheless be ruled a dominant platform like Google search. And media companies will look for new targets like TikTok.

European Data Authorities Issue Report on Chat-GPT’s GDPR Problems

Report from Reuters

In Brief – The European Data Protection Board (EDPB) Taskforce on Chat-GPT has issued a report that presents high level areas of agreement among the region’s privacy authorities regarding concerns with the development and operation of AI chatbots. The regulators, who play a central role in the enforcement of the EU’s landmark General Data Protection Regulation (GDPR), argue that despite the novelty of the new technology, Chat-GPT needs a valid legal basis for all stages of personal data processing, including the collection of training data, model training, treatment of user prompts, and Chat-GPT outputs. The taskforce noted that the service carries “peculiar risks” for people’s fundamental rights, such as from the vast volumes of personal data being ingested through web scraping, including sensitive “special category” personal data, as well as difficulties posed by chatbot “hallucinations” of made-up outputs. The regulators warn that the GDPR’s “principle of data accuracy must be complied with”, suggesting that OpenAI provide users with an “explicit reference” that generated text “may be biased or made up” and that the “probabilistic output” of the chatbot has a “limited level of reliability”. Several data protection regulators have also raised concerns with OpenAI’s claim that it cannot correct inaccurate information used to train the model, but instead can only offer to block certain outputs, with the report saying that it is “imperative” that people are able to exercise their right to rectify personal data.

Context – The EU AI Act has been celebrated by AI regulation advocates, including for creating a specialty EU AI Office. It doesn’t go fully into effect for two years, and its treatment of foundation models like Chat-GPT was a key compromise leading to a deal. But privacy regulators are already stepping in, and the European Commission, in its role as DSA enforcer, is raising similar AI issues with Microsoft. Despite the regulatory gun-jumping, it is refreshing to see someone noting that the remedy for hallucinations is likely to be AI companies being very clear with users that incorrect outputs are part of how the systems operate.

Full Ninth Circuit Moves to Review Internet Service Liability Jurisdiction Case

Report from Reuters

In Brief – The Ninth US Circuit Court of Appeals has agreed to an en banc review of a decision by a three-judge panel that ruled that online commerce platform Shopify could not be sued in California for violating state privacy laws because it did not specifically target its online services to California-based businesses. The class action lawsuit, rejected at the District Court level and by the appeals court panel, involves California consumers who claim that Shopify violated state privacy laws by selling data collected by its online payments business as it processed transactions for online retailers. The appeals court panel ruled that while the consumers and the online retailers who used Shopify may be based in California and subject to state law, Shopify itself was not, because its payment service was offered online generally without targeting customers in the state. The case highlights the different standards for “jurisdiction” in state law liability cases: companies that sell physical products online that are then delivered into a state do create jurisdiction there, the Ninth Circuit has held, while third-party services offered generally on the internet without targeting a state have heretofore not established jurisdiction for the service provider.

Context – Legal “jurisdiction” questions may seem like dry legalese from the early days of ecommerce, but from the perspective of the potential impact on online businesses, changes can be immensely important. For platforms offering third-party business services, big changes could be coming. A majority of Ninth Circuit judges voted to revisit the established rules. A similar fundamental change in US state legal jurisdiction for ecommerce occurred with the US Supreme Court’s Wayfair decision in 2018 that overturned the long-established precedent that states could only enforce their sales tax laws on businesses operating in the state. That change has solidified the online commerce dominance of the largest ecommerce businesses, Amazon and Walmart, who now leverage their local logistical facilities without additional tax burdens.

FCC to Consider New Rules Requiring Transparency for AI Used in Political Ads

Report from TechCrunch

In Brief – The Chairwoman of the Federal Communications Commission (FCC) has proposed that the agency consider and seek public comments on a rule that would require political advertisements aired on television and radio to disclose if they use content created by artificial intelligence. Under the proposed framework, the rules would apply to both candidate ads and so-called issue ads and would require an on-air disclosure as well as a public filing with the agency. The rules would not apply to online platforms and streamers because their services are not regulated by the FCC. The rulemaking process would require the agency to develop a definition of AI-generated content. The FCC first dabbled in regulation of AI-enabled services earlier this year when it unanimously voted that AI-created voices in robocalls are “artificial” and therefore violate the Telephone Consumer Protection Act, which governs the use of pre-recorded and artificial telephone messages.

Context – The Federal Election Commission has been considering a proposal since last summer to prohibit the use of AI deepfakes to deliberately misrepresent an opponent in a political ad but has not finalized any rules. That said, deliberately misrepresenting an opponent already violates FEC rules regardless of AI. Progressive advocacy groups and Democrats on Capitol Hill have been more aggressive in calling for new rules and regulations around AI, including in the political realm. The FEC is unique in having a six-person board that is equally divided between Democrats and Republicans, so any partisan differences can really hamstring action. Other agencies, such as the FCC and the Federal Trade Commission, which has also been very public about policing AI violations of existing federal laws and regulations, such as for consumer protection and antitrust, have Democratic majorities and more ability to act. President Biden’s AI Executive Order details many ways that federal agencies can promote safe and responsible AI using existing laws and authority, but Congress has not enacted new AI legislation, and a roadmap from the Senate Majority Leader was lambasted by champions of aggressive action.
