News Insights

August 2024

Telegram CEO Arrested in Escalation of Fight to Protect Children Online

Report from New York Times

In Brief – Telegram founder and CEO Pavel Durov has been arrested in France and charged with complicity in managing an online platform to enable illegal transactions. The arrest warrant was issued by OFMIN, an arm of the National Police created in 2023 to combat serious offences against minors, including online sexual exploitation. The Paris prosecutor pursuing the case has ordered Durov to post 5 million euros in bail and check in at a Paris police station twice a week. Telegram is a hybrid platform that includes direct messaging, like Meta’s WhatsApp, as well as social media features that facilitate group communications. It is alleged to be popular among criminals, but also among those seeking to avoid repressive governments. Telegram’s social media service engages in less content moderation than most other popular platforms. Most of its 900 million global users use direct messaging, and it claims fewer than the 45 million European social media users that would trigger content moderation oversight by the European Commission under the Digital Services Act (DSA).

Context – “Free speech”, content moderation, and even encryption are part of the story, but above all this is a flashpoint in the effort by governments to more aggressively combat alleged online harms to young people. The EU’s DSA, the UK’s Online Safety Act, a raft of legislation enacted by US states, and bills in the US Congress are all examples. But they are generally not criminal statutes. High-profile criminal prosecutions of digital business executives in Western democracies have been rare. Ross Ulbricht, the founder of “dark web” platform Silk Road, was jailed by the US in 2015 for creating the platform to facilitate illegal conduct and for directly engaging in illegal drug sales. Kim Dotcom, who created and ran Megaupload, a site allegedly made to facilitate entertainment IP piracy, was arrested in 2012 in New Zealand and has been engaged in an extradition battle since. He is likely to be heading to a US courtroom soon. But neither of those platforms ever had the massive user base or arguably legitimate uses of Telegram. The developments in this case could be similarly groundbreaking.

NLRB Determines Amazon is a Joint Employer of Delivery Partner Drivers

Report from Bloomberg

In Brief – A regional director of the National Labor Relations Board (NLRB) in southern California has determined that Amazon was a “joint employer” of delivery drivers who worked for Battle-Tested Strategies, a small delivery company based in Palmdale, California, that worked as a “Delivery Service Partner” (DSP) of Amazon. The labor regulator’s decision followed Amazon’s cancellation of the delivery company’s contract in 2023. The DSP had voluntarily recognized a union after several dozen of its drivers, complaining about working conditions, sought to organize with the Teamsters. Amazon terminated the contract ten days before the DSP’s owner formally recognized the union, and the NLRB official dismissed the allegation that Amazon cancelled the contract in retaliation against the union drive. However, the official determined that Amazon and Battle-Tested Strategies were joint employers of the drivers and unlawfully failed to bargain over the effects of Amazon canceling the delivery contract, effectively firing union employees without the advance notice, severance, and other conditions often negotiated by unions. Amazon is expected to challenge the NLRB decision in federal court.

Context – The proper classification of contract workers is an ongoing tech industry issue, and not just for Gig-style platforms. Most large digital companies use contract workers employed by third-party firms. The recent NLRB decision involving Amazon DSP drivers is not the first “joint employer” decision involving a tech giant. Last year, the NLRB deemed Google a joint employer of 50 contract workers employed by Cognizant Technology Solutions in Austin, Texas, who voted to unionize. Google argues that the labor relationship is solely between the workers and Cognizant and is challenging the NLRB in court. A Kenyan labor court similarly ruled last year that Meta was a proper joint defendant alongside Sama, a California-based contracting firm that employed nearly 200 workers in Nairobi to do online content moderation for the digital giant before dropping the Meta work and terminating the roles.

DC’s Antitrust Suit Targeting Amazon Pricing Practices Back in Play

Report from Courthouse News Service

In Brief – A three-judge panel of the DC Court of Appeals has overturned a 2021 DC Superior Court decision dismissing the DC Attorney General’s antitrust complaint, which alleges that Amazon engaged in anticompetitive price fixing that harmed online shoppers by pressuring third-party sellers on Amazon’s marketplace to raise their prices on other ecommerce websites to match their prices on Amazon, even when the Amazon sales involved much higher commissions and fees. The lawsuit will now be returned to the lower DC court for adjudication. The appeals panel found that the AG’s allegations, if true, were sufficient to suggest that Amazon’s seller agreements, including its Fair Pricing Policy, created anticompetitive effects, and so the trial court judge applied the wrong standard in granting Amazon’s motion to dismiss. Amazon’s seller contracts and policies penalizing sellers who do not price match their offerings on lower-fee platforms are now a component of a growing number of antitrust challenges.

Context – The charge that Amazon is engaged in anticompetitive “price fixing” is gaining traction in US courts. Critics argue that Amazon’s product search algorithms penalize retailers who offer lower prices on other websites in order to protect its high seller fees from lower-fee competition. Many sellers say they could profitably sell products at lower prices on other lower-fee venues, but if they do, they lose their sales on Amazon, which has an 80% share of marketplace sales. So they raise prices elsewhere instead. A similar consumer class action complaint has survived Amazon’s motion to dismiss in federal court in Washington State, a complaint brought by the California AG survived Amazon’s motion to dismiss in California state court, and the same set of Amazon policies is a part of the FTC’s major antitrust complaint scheduled to go to trial in October 2026. Amazon’s role as the largest ecommerce logistics services provider is a central part of the complex debate, with Amazon’s marketplace algorithms rewarding sellers who buy Amazon logistics services that are themselves a major component of Amazon’s fees.

Unanimous FTC Releases Final Rule Targeting Purveyors of Fake Reviews

Report from The Hill

In Brief – The Federal Trade Commission has released rules prohibiting a range of deceptive practices used to promote products and services online. The Final Rule on the Use of Consumer Reviews and Testimonials had bipartisan, unanimous backing from the five FTC commissioners and will go into effect later this fall. The rule prohibits a range of abusive practices by online marketers, endorsers, and businesses that sell online, including creating, purchasing, or selling fake reviews and testimonials, incentivizing a consumer to write a review expressing a particular positive or negative sentiment, failing to clearly disclose when a review or endorsement is made by a company officer or employee, and suppressing negative reviews through intimidation or threats. Violations of the new rule could result in FTC enforcement action leading directly to fines, a priority of the FTC following the Supreme Court’s decision in AMG Capital Management v. FTC that pared back the agency’s authority to impose certain fines. While the new rules only apply to the direct participants in the process of creating and using fake reviews and deceptive testimonials, sparing the digital platforms that host consumer reviews from new duties or liabilities, the regulator clearly believes that digital platforms should continue to invest in efforts to help deal with the problem, noting that they bear “significant responsibility for combatting fake reviews.”

Context – Consumer reviews have been a top benefit of the internet, but frauds and deceits, often powered by an international fake review industry, have been a chronic problem. Regulators and leading online companies are engaged. The new FTC rules follow the agency’s updated online “Endorsement Guides” released in June. The UK CMA updated its guidance for online influencers last December, and the Digital Markets, Competition and Consumers Act enacted in May includes a section on fake reviews, with an initial regulatory consultation expected this fall. Finally, a collection of the leading review platforms has formed the Coalition for Trusted Reviews to further corporate and government efforts.

Visual Artists’ IP Lawsuit Moving Forward to an AI-IP Law Reckoning

Report from Reuters

In Brief – US District Court Judge William Orrick produced a mixed decision as four AI companies operating visual image generation services based on Generative AI (GAI) tried to dismiss legal claims brought by a collection of visual artists. The judge is permitting claims alleging copyright violations in the training and operation of the systems, as well as various trademark violations, to proceed. On the other hand, he dismissed claims of unjust enrichment, breach of contract, and alleged violations of the Digital Millennium Copyright Act. The artists’ lawsuit targets Stability AI, the company that created the Stable Diffusion GAI service, and three other companies that use the Stable Diffusion model in their own image-generating services. Last October, Judge Orrick appeared to have narrowed the case down to the key question of whether training AI systems with copyrighted material violates the federal Copyright Act, but he is now allowing a broader set of complaints to proceed, including whether the Stability AI model stores “compressed copies” or “effective compressed copies albeit stored as mathematical information”. He believes the questions should be tested at summary judgment after discovery produces more facts.

Context – Massive amounts of copyrighted material have been used to train the neural networks that power all the GAI models. However, as the prevalence of “hallucinations” shows, GAI models are not like traditional databases. They don’t store and retrieve copies. They learn from data and then produce new output. Very often the output is what’s expected, but sometimes it is not. The rapidly proliferating AI copyright lawsuits seemed to be moving toward a showdown over AI training and the fair use exception. But Judge Orrick is raising more technical questions, like the meaning of “effective compressed copies albeit stored as mathematical information” and “how the diffusion models operate”. It will be interesting to see a court react to the fact that GAI operators don’t know exactly how their models operate, which is why unwanted hallucinations exist at all.

Margrethe Vestager Won’t Be in the Next EU Commission

Report from Financial Times

In Brief – Margrethe Vestager will not be renominated for a third term as Denmark’s member of the European Commission because her domestic political party is no longer part of the country’s coalition government. Following a stint as Deputy Prime Minister of Denmark, she served for a decade as an unusually high-profile leader of the EU Commission’s antitrust authority and was also named by Commission President Ursula von der Leyen as an Executive Vice President focused on digital policy. During her Commission tenure, she was seen as a persistent critic of US-based tech giants, although she consistently argued that she only demanded they operate fairly and abide by European law like all EU-based companies. In 2023, Vestager looked to move from her commissioner role to take over as President of the European Investment Bank, but she was reportedly blocked by opposition from the French Government, where she is said to have accumulated opponents by challenging various industrial policy projects involving French corporate interests. Speculation on who President von der Leyen will name this fall to succeed Vestager as the Commission leader on competition policy currently includes Belgian Commissioner Didier Reynders, Dutch Commissioner Wopke Hoekstra, and French Commissioner Thierry Breton.

Context – A top digital policy development of the decade in which Vestager served as the EU’s top antitrust enforcer has been the rise of ex-ante regulation of digital giants to address competition concerns. The EU’s Digital Markets Act (DMA) is the preeminent example, although Germany’s 2021 revision of its competition law to address the largest digital platforms blazed the trail. When Vestager took over the Commission’s competition helm, she ramped up enforcement on Big Tech, with three cases targeting Google being especially noteworthy. The fact that those cases dragged on for more than a decade, with fines still under appeal, was a driving force behind the DMA’s continuous regulation model. In the United States, progressive champions of antitrust reform and tech regulation will miss her.

X Announces It Is Closing Its Office in Brazil in Fight with Federal Judge

Report from Wall Street Journal

In Brief – The months-long standoff between social media platform X and Brazilian Supreme Court Justice Alexandre de Moraes has escalated and hit a new low point, as the company and its owner announced that the platform is closing its operations in Brazil “to protect the safety of our staff.” Elon Musk added that the move was made due to Justice de Moraes’ “secret censorship and private information handover demands.” De Moraes has been leading a multi-year investigation of online disinformation and hate speech in Brazil, including related to the 2022 presidential campaign. Musk announced in April that X would restore many blocked accounts, and de Moraes threatened legal action against the company and Musk directly. The company then reversed course and said it would comply with the judge’s orders, although it is no longer doing so. The latest move by X does not turn off the service for its 20 million Brazilian-based users, but only shutters its offices and in-country operations.

Context – Elon Musk, the most followed person on X, is a vocal defender of “free speech,” but it’s key to recognize that he describes the company’s policy as complying with “local laws” on legal speech. The local laws in places like India, Türkiye, Australia, and the EU are meaningfully different from each other and from the US First Amendment standard, and X generally complies with each. However, there are ongoing pain points, and Musk is quite comfortable engaging in sparring matches with lawmakers. Most recently, EU Commissioner Thierry Breton linked a live interview of former President Trump by Musk to the EU’s Digital Services Act, eliciting a sharp rebuke from Musk and some backtracking from other EU Commissioners. In a standoff in Australia, a government regulator tried to force X to block a violent video globally rather than just in Australia; Musk intervened, and an Australian judge eventually walked back the global ban. Which side blinks next in Brazil is an open question. Will Justice de Moraes next attempt to force domestic network companies to block X in the country?

California Child Online Protection Act Again Suffers a Court Setback

Report from Washington Post

In Brief – A three-judge panel of the US Ninth Circuit Court of Appeals upheld the preliminary injunction imposed on a key provision of the California Age-Appropriate Design Code Act (CAADCA) while vacating the decision to block other aspects of the law, directing the lower court to more thoroughly examine those other provisions. The opinion backs the view that the portion of the law that requires tech companies to assess potential harm to children before releasing their products likely runs afoul of the First Amendment “because it clearly compels speech” by forcing companies to express opinions on harmful content. However, the panel ruled that the trial court failed to adequately test whether other provisions of the law, such as the mandate that companies estimate the age of users and provide teenagers under 18 with a “high level of privacy” by default, would also likely fail a First Amendment test. The judges therefore sent the case back to the lower court for a more thorough review of those provisions, including whether they are severable from the sections with clearer constitutional deficiencies.

Context – Efforts to cordon off parts of the internet from teens are only gathering steam. A common legislative tactic in US states is to try to avoid strict First Amendment scrutiny by focusing regulations on “content neutral” app and website features, such as recommendation algorithms and auto-play, rather than classes of content. The litigation queue is already filled with laws from red and blue states. Federal judges have generally not been sympathetic to the social media teen safety laws, blocking them from going into effect, often on First Amendment and privacy grounds, concerns that any form of online age verification also raises. The Supreme Court has accepted a case challenging a Texas law requiring age verification for internet porn sites. Outside the US, unconstrained by the US Constitution, online age verification mandates are having more success.

European Commission Scrambles to Claim Breton Went Rogue on Musk Warning

Report from Politico

In Brief – Some in the top leadership of the European Commission hastily backtracked from Commissioner Thierry Breton’s public warning to X’s owner Elon Musk to ensure that the platform fully complies with the EU’s Digital Services Act (DSA), which prohibits large platforms from amplifying the spread of harmful content, in the context of a one-on-one interview Musk was holding with US Presidential Candidate Donald Trump later that day. A Commission spokesperson said that Breton did not inform Commission President Ursula von der Leyen about the public letter ahead of its publication on X, and four separate EU officials, each speaking anonymously, said the letter surprised many within the Commission. One said, “DSA implementation is too important to be misused by an attention-seeking politician in search of his next big job.” On the other hand, unnamed officials close to Breton claimed that the letter had been planned for some time, that the interview seemed an appropriate “trigger point”, and that neither Breton nor his staff were backing away, with one “EU Official” saying that Breton had “taken note” of Musk’s insulting reaction to the letter and still expected a formal response.

Context – We’re glad we abandoned our regular email cadence to report on Breton’s striking letter explicitly linking a live interview with a major US Presidential candidate to the DSA’s “due diligence obligations” for large platforms. Such a public explanation of how the new law is a censorship cudgel is highly unusual in western democracies. Also, as confirmed by unnamed sources, while Breton’s tweet alluding to Spiderman didn’t require staff support, we were correct that the formal warning letter was a detailed product of the Commission’s DSA staff. Comments from the Breton camp indicate he is not backing down. However, while he has been renominated by French President Macron for another term, Commission President von der Leyen has not yet assigned new commissioner briefs, and if she is truly upset by his politically motivated DSA enforcement tendencies, he could be assigned to a less digitally relevant role. We’ll be watching.

UK May Re-Open Online Safety Act to Target “Legal But Harmful” Content

Report from Reuters

In Brief – The UK Labour Government of Prime Minister Starmer, responding to allegations that disinformation and hate speech on social media platforms fueled anti-immigration riots in the wake of a mass stabbing of children, is considering ways to toughen up the Online Safety Act (OSA). The legislation, which regulates how digital platforms and internet search engines police a range of objectionable online content, was enacted last October after years of work by a series of Conservative governments. Ofcom, the UK agency that will enforce the OSA, is currently in the regulatory process of establishing the specific duties and responsibilities of different types of platforms for the different types of illegal content covered by the law, which includes child sexual abuse material, support for terrorist organizations, incitements to violence, and hate speech. The Starmer Government is said to be considering adding so-called “legal but harmful” content, such as misinformation, to the list of content covered by the OSA. The Conservatives had spent many months tweaking the bill to strike the right balance between protecting the free speech of online users and forcing digital platforms to block online harms, eventually dropping mandates against legal but harmful speech as too threatening to freedom of expression.

Context – Sadiq Khan, the Mayor of London and a member of the Labour Party, called the Online Safety Act “not fit for purpose” to effectively address online falsehoods that contributed to the UK riots, and urged the government to revisit the law “very, very quickly”. Not surprisingly, the owner of X inserted himself into the UK controversy, sharing content from right-wing and anti-immigration sources and drawing the ire of Labour Party leaders and political commentators, including some calling for legal changes to create personal criminal liability for company heads like Elon Musk. When European Commissioner Thierry Breton warned Musk that the EU’s Digital Services Act required X to police harmful content coming from his interview with former US President Donald Trump, he also cited content related to the UK riots.

Did EU Commissioner Thierry Breton Really Warn X That a Live Interview with Donald Trump Might Violate the DSA? (Yes)

Report from Politico

In Brief – European Commissioner Thierry Breton reminded Elon Musk, owner of the X social media platform, that the EU’s Digital Services Act (DSA) requires his platform to take appropriate action to block the dissemination of “content that promotes hatred, disorder, incitement to violence, or certain instances of disinformation”. Breton’s post on X and official warning letter referenced two specific political topics: the recent riots in Great Britain related to crime and immigration, and a one-on-one live interview with US presidential candidate Donald Trump planned for X later that day, both of which the letter notes would “be accessible to users in the EU.” Musk responded to the letter with a crude movie reference and proceeded with the live interview with former President Trump, while X’s CEO described the intervention from the EU Commissioner as “an unprecedented attempt to stretch a law intended to apply in Europe to political activities in the US.”

Context – The direct and explicit linking of a live interview with a major US Presidential candidate to the “due diligence obligations” of a major digital platform under the EU’s DSA is the biggest and most shocking digital platform policy news likely to emerge this August. (The verdict in the Google search monopoly case was not that surprising, and the remedies will be bigger news.) A top EU leader laying out how the DSA is a censorship cudgel is very unexpected, especially linked to the most consequential political campaign in the world. Very thoughtful observers of digital policy such as Mike Masnick raised the clear censorship risks of the DSA when Breton warned platforms to police content following the Hamas attack on Israel. But raising content moderation enforcement law and policies related to a live interview in the US presidential campaign? Yikes. Commissioner Breton summarizes X’s obligations as “ensuring, on one hand, the freedom of expression and of information, including media freedom and pluralism, are effectively protected and, on the other hand, that all proportionate and effective mitigation measures are put in place regarding the amplification of harmful content in connection with relevant events, including live streaming…”, but only reading the highly lawyered letter (it’s here… don’t worry, it’s just one single-spaced page) can provide a true taste of how the EU Commission believes the DSA provides constant after-the-fact authority for the regulators overseen by Breton to judge whether big platforms handled content as the authorities wanted in the obvious grey areas between those two conflicting high-level goals. On clearly and obviously political topics. It’s August, so news is slow. But expect this Breton interjection to raise real hackles, at least among US Republicans and conservatives. It will be interesting to watch the reaction, if any, of progressive free speech and open internet supporters.

Google Loses Federal Antitrust Case on Default Search Engine Deals

Report from New York Times

In Brief – In a landmark antitrust decision siding with the US Department of Justice, Federal District Judge Amit Mehta ruled that Google held a monopoly in the market for general internet search and engaged in illegal conduct to maintain that search monopoly. In his ruling, Judge Mehta found that Google’s $26 billion in payments to companies like Apple and Samsung to make its search engine the default option on smartphones and web browsers “foreclose a substantial portion of the general search services market and impair rivals’ opportunities to compete.” The judge found anticompetitive harm in the related market for the online text ads that appear on the Google search engine results pages. “Google’s monopoly power, maintained by the exclusive distribution agreements, has enabled Google to increase text ads prices without any meaningful competitive constraint,” he wrote. Google said it would appeal the ruling, with its president of global affairs saying, “This decision recognizes that Google offers the best search engine, but concludes that we shouldn’t be allowed to make it easily available.”

Context – It’s a pet peeve here at PEI to read media reports saying Google was found to be an “illegal monopoly”. It wasn’t. Google was found to be a monopoly because it carries out an overwhelming share of general online searches. That’s not illegal. It was then found to have engaged in anticompetitive deals to protect that search monopoly. Those deals are illegal. And the harm is alleged in the prices for text ads. It is noteworthy that the judge found that Google doesn’t have a monopoly in the market for general online advertising, or even in the market for general search advertising, where he noted that competitors like Amazon and Walmart offer advertising related to searches on their websites. However, with Mehta finding that Google controls upwards of 90% of general search, and text ads really only appearing on general search results pages, a monopoly in that related ad market was almost guaranteed. Besides Google’s appeal, the court now turns to remedies, which could prove more controversial and complicated.

Apple Again Changes Developer Policies to Comply with the EU DMA

Report from Reuters

In Brief – Apple has made more changes to its app developer rules and App Store policies in the EU in a bid to bring its iPhone and iPad businesses into compliance with the Digital Markets Act (DMA). The latest version aims to address the complaints raised by the European Commission in June against the last revised Apple policies. The DMA regulator made a preliminary determination that Apple failed to allow developers to freely steer consumers to alternative buying channels outside the Apple ecosystem to purchase content for lower prices with lower fees, including not allowing developers to provide off-Apple pricing information within their iPhone apps. The Commission also questioned the level of the fees proposed by Apple for off-Apple purchases. Apple’s new policies will allow developers to use their apps to promote offers that are available anywhere, rather than just on their own websites. Apple is also changing its fees, creating a 5% initial acquisition fee when an app developer acquires a new user on an Apple device, and a 10% store services fee for any sales made by app users on any platform within 12 months of the app installation. A European Commission official said that the regulator would assess Apple’s changes and consider “feedback from the market, notably developers.”

Context – The DMA challenges Apple’s core user proposition more than that of the other “gatekeepers”. Rather than using complex, technical, or non-transparent practices to shape its platforms, Apple built an openly restrictive “walled garden” as a core value proposition that, the company argues, promotes privacy, security, and a better user experience. And its devices have been very popular. Apple is trying to set boundaries on DMA mandates, especially those against self-preferencing, those requiring interoperability, and most of all, efforts to avoid its fees. Its app industry antagonists, on the other hand, want to circumvent Apple fees. As with Apple’s previous changes, they have cried foul about the latest version. The Commission appears sympathetic to the developers. Expect a court showdown.

UK Ofcom Encourages Social Media Platforms to Restrict Hate Speech Spurring Riots

Report from Bloomberg

In Brief – Ofcom, the UK telecommunications regulator that will be tasked with regulating how digital platforms address illegal and harmful content under the Online Safety Act (OSA), has urged social media platforms to act now to stop users from inciting violence in the UK. Although the legislation’s regulatory regime isn’t scheduled to be in effect until 2025, Ofcom’s Director for Online Safety issued a public letter encouraging platforms to address content that depicts “hatred and disorder” and promotes violence or disinformation. In the two weeks since a 17-year-old wielding a knife killed three in an attack on a children’s dance class in Southport, UK, accounts on platforms including X, Facebook, TikTok, and Telegram have been used to stoke racism and violent demonstrations across the UK, often claiming that the teenage assailant was a Muslim immigrant. For the first week, officials cited privacy laws related to youth offenders to limit information to the fact that the alleged perpetrator was born in Wales, although officials later confirmed that he was born in the UK to immigrant parents from Rwanda. X’s owner has personally engaged in the public debate, posting that civil war was “inevitable” in Britain, eliciting a rebuke from the spokesperson of Prime Minister Keir Starmer.

Context – Ofcom’s draft OSA guidance for Codes of Practice sets out the responsibilities of digital platforms to address 15 types of illegal and harmful content, including child sexual exploitation and abuse, encouraging self-harm, hate, and harassment. The Digital Services Act is the EU’s version of content moderation regulation; Ofcom has described the two laws as more “regulatory cousins” than “identical twins”. Following last October’s Hamas attack on Israel that ignited the ongoing war in Gaza, EU Commissioner Thierry Breton took to X saying that “we have indications of X/Twitter being used to disseminate illegal content & disinformation”. X remains under investigation by the Commission for not properly restricting alleged hate and misinformation, raising censorship concerns.

Apple and Amazon Get First Directives From Japan Under Digital Platform Law

Report from Kyodo News

In Brief – The Japanese Ministry of Economy, Trade and Industry (METI) has directed Amazon and Apple to address problems related to the terms, conditions, and practices they impose on third-party businesses that use their online platforms. The corrective orders are the first ones issued by the regulator based on legislation enacted in 2020 to improve the transparency and fairness of digital platforms, which obliges five designated IT giants, including Amazon and Apple, to clearly notify the many businesses that use their platforms of all business terms and to notify them of any changes in advance. Amazon has been told to clearly explain to business users the system that determines selling fees, as well as how Amazon sometimes assigns a seller’s product to a different category than the one selected by the seller, which can involve higher fees. As for Apple, METI found that the company failed to meet its deadline to provide app providers with Japanese translations of the terms and conditions of contracts and gave the iPhone giant three months to provide the documents to the ministry in Japanese.

Context – Japan’s Act on Improving Transparency and Fairness of Digital Platforms (TFDPA) resulted in the designation of five digital giants in Japan – Amazon, Apple, Google, Rakuten, and Yahoo Japan. Two are Japan-based. This is a big difference from the EU effort to regulate digital giants, where none of the initial Digital Markets Act (DMA) gatekeepers were EU businesses. This may be a factor in Japan heretofore pursuing a more moderate path on digital regulation. However, Japan’s parliament recently passed targeted digital market competition legislation to regulate the largest mobile app store businesses, which means just Apple and Google. Both will be required to open their mobile ecosystems to third-party app stores and payments systems. Although the Act on Promotion of Competition for Specified Smartphone Software has been compared by some to the DMA, they are not that alike, because the EU regime is much broader in scope and, if applied in Japan, would likely draw in the largest Japanese platforms as well.

Malaysia Instituting a Social Media Licensing and Regulatory Regime

Report from Reuters

In Brief – The Malaysian Government has announced a regulatory regime for large social media and messaging platforms that will go into effect on January 1, 2025. The Malaysian Communications and Multimedia Commission has said that the online companies must comply with national laws aimed at fighting online scams, cyberbullying, and sexual crimes. The government says that it submits “take-down requests” to the large platforms and that responses have been “good but largely not been sufficient”, including compliance rates of 79 to 88 percent for the three large Meta platforms, 76 percent for TikTok, and 25 percent for X. Platforms with more than 8 million users in the country, a list likely to include Facebook, Instagram, WhatsApp, YouTube, TikTok, and Telegram, will be required to obtain a government license to continue to operate in the country in 2025. One requirement will be the establishment of a formal in-country business presence. Civil society organizations have expressed deep concerns with the government’s social media regulation proposals, arguing that they will likely restrict freedom of expression online.

Context – Modi’s India, Erdogan’s Turkey, and Putin’s Russia intermittently make news for their latest efforts restricting online speech, and China’s Great Firewall seems to be an aspirational goal for authoritarian governments around the world. But the trendline is up almost everywhere, including Vietnam, Indonesia, and Singapore in Asia. Africa is a hotbed, with a 2021 report claiming that 31 of 54 African countries had blocked social media platforms in some manner since 2015. But Western democracies are also active. The UK, Germany, France, and Australia have been regulatory champions. The EU’s Digital Services Act deserves meaningful censorship scrutiny, as evidenced by the fact that EU officials quickly challenged X, Meta, and TikTok to block content that officials found objectionable related to the Hamas attack on Israel. Malaysia, one of the world’s largest Muslim-majority countries, pushed back on the platforms in the opposite direction. Platform compliance often seems linked mostly to market size, which likely means the platforms will comply in Malaysia.

Federal Judges Block Net Neutrality Rules Over “Major Questions” Doctrine

Report from Bloomberg

In Brief – A three-judge panel of the US Sixth Circuit Court of Appeals has extended an order blocking the net neutrality (NN) rules adopted by the Federal Communications Commission in April from going into effect while a legal challenge brought by broadband providers is litigated. In their new order, the judges found that the broadband providers showed that they will likely succeed in arguing that the FCC exceeded its regulatory authority, in particular noting that “net neutrality is likely a major question requiring clear congressional authorization” and that federal lawmakers have never granted the FCC the power to decide the issue. The panel said that oral arguments will be heard in either late October or early November by a new, randomly drawn trio of judges. Federal Communications Commission Chair Jessica Rosenworcel said, “Today’s decision by the Sixth Circuit is a setback but we will not give up the fight for net neutrality.”

Context – It was very surprising that the Biden FCC was not able to vote to restore NN rules until barely more than six months before the next presidential election. The President supported the policy as a candidate, it is a progressive fixture of internet policy, and his federal agencies have been motivated by progressive activism throughout his term. However, the broadband providers were able to delay confirmation of a third Democratic FCC commissioner until September 2023, tying up matters of partisan disagreement. Now a series of decisions by the US Supreme Court paring back the regulatory authority of federal agencies has changed the legal dynamic. The High Court overturned the principle of “Chevron deference” this year, meaning federal judges, not agencies themselves, are to determine the regulatory scope granted by federal law. The Court had already resuscitated the “Major Questions Doctrine”, which rejects agencies regulating on important topics absent direct statutory authority. Expect something similar if the US Department of Labor enforces Gig worker classification regulations, another partisan regulatory football that goes back and forth based on party control of the White House.

CPSC Rules That Amazon Is the Same as a Retailer When Recalled Products Are in FBA

Report from CNN

In Brief – The US Consumer Product Safety Commission (CPSC) has unanimously ruled that Amazon is required to abide by the recall requirements imposed on a retailer, called a “distributor” in this matter, when recalled products are sold on its site by third-party sellers but are also handled in the ecommerce giant’s massive logistics network of fulfillment centers. The ruling applies to more than 400,000 defective items, including faulty carbon monoxide detectors, children’s pajamas, and hair dryers, that were sold on Amazon’s shopping site and handled by Fulfillment by Amazon (FBA). The CPSC noted that its ruling follows a decision by a federal administrative law judge that backed the Commission’s view that when a single company serves as both the online marketplace and the fulfillment service for a third-party product, that business is a retailer-style distributor. The agency says that Amazon must now develop and submit proposed plans to notify purchasers and the public about the product hazards, and to provide refunds or replacements. Amazon said it will appeal the order in federal court.

Context – Amazon has basically reimagined what it means to be a retail industry wholesaler. Third-party sellers have accounted for more than 60% of the sales on Amazon since 2019, when Jeff Bezos proudly claimed that third-party sellers were “kicking our first-party butt”. But the truth is that most of those third-party sales involve goods housed and handled by Amazon, coming from what would look like wholesalers in the traditional retail world. Those new-style wholesalers are not beating Amazon. Their sales, with fees that often exceed 50%, are more profitable for Amazon than first-party sales, are often preferred over them, and still leave Amazon in control of the customer experience. When its Marketplace and FBA are considered unrelated businesses, think Etsy and the US Postal Service, there are big liability benefits. But in the real world, the Amazon Marketplace and FBA function as one. The CPSC, the FDA, and several product liability lawsuits are pushing the ecommerce giant on this point.

Senate Passes Bill Targeting AI-Generated Nude Images of Individuals

Report from Mashable

In Brief – The US Senate unanimously passed the DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits), a federal bill that would allow victims of nonconsensual sexually explicit deepfakes to sue people who create, share or receive them. The measure does not criminalize the activity, but instead establishes a federal civil remedy for identifiable victims. In the case of sexually explicit images, increasingly capable digital tools often merge an unsuspecting person’s face with a body in a pornographic image or video, while some generative artificial intelligence models can also create audio, videos and images that are entirely fake but look and sound realistic. The production of nonconsensual sexually explicit deepfakes has reportedly skyrocketed since 2023, first becoming popular with the likenesses of female public figures and celebrities, but cases have also sprung up at schools around the world, with most but not all victims being female.

Context – Dealing with AI-generated fakes, generally through some type of digital watermark, is widely seen as a top “responsible” AI practice. When OpenAI announced rules for political uses of its services, they included new image watermarking, and Meta has announced an AI image identification plan as well. Microsoft is calling for a broad “deepfake fraud statute” that will give law enforcement officials a legal framework to prosecute AI-generated scams and fraud. The company also wants federal and state laws on child sexual abuse material to cover entirely AI-generated content. While the DEFIANCE Act does not impose new liability or duties on digital platforms that are often used to circulate the images in question, Sen. Ted Cruz (R-TX) has introduced the “Take It Down Act” to penalize platforms that fail to remove such content within 48 hours. In the UK, the new Starmer Government’s AI plans include imposing “binding regulation on the handful of companies developing the most powerful AI models” and “banning the creation of sexually explicit deepfakes.”

Coalition of WTO Countries Without US Agrees on Digital Trade Facilitation

Report from Reuters

In Brief – Over 80 countries have signed on to a preliminary agreement to promote ecommerce through standardized rules and processes to facilitate trade, including digital customs processes, recognition of e-documents and e-signatures, and legal safeguards against digital fraud. The “stabilized text” of the E-Commerce Joint Initiative of the World Trade Organization (WTO) is the product of more than five years of talks. The US did not sign on, a striking reflection of how far it has stepped back from its long-time role as the preeminent global champion of internet-enabled trade and global digital services freed from taxes and regulations. The coalition that picked up the ball when the US stepped back was led by delegations from Australia, Singapore, and Japan, and focused on technical trade facilitation issues aimed at using digital tools to move goods across borders and through ports, avoiding more controversial regulatory issues, an approach backed by the European Union, Great Britain, and China. The US Ambassador to the WTO described the agreement as an important step forward but said that it still fell short and would hopefully be improved through further negotiations, while countries including Brazil, Indonesia, and Turkey also continued to have reservations.

Context – Going back many years, advances on rules supporting digital trade were led by the US. Not anymore. Last fall, the Biden Administration’s US Trade Representative formally told the WTO that it was no longer calling for e-commerce rules to back cross-border data flows or to prohibit national data localization requirements and government reviews of software source code. The move was the most explicit sign that the US is no longer the global champion of unencumbered data flows, ecommerce, and digital trade. Progressive critics of so-called “Big Tech” and “globalization” praised the policy change while digital economy advocates in the US business community and on Capitol Hill decried it. The US lining up with open trade skeptics such as Brazil, Indonesia, and Turkey on digital matters at the WTO speaks volumes.

UK CMA Announces Investigation of Google-Anthropic Relationship

Report from Bloomberg

In Brief – The UK Competition and Markets Authority (CMA) has opened an initial review of Google’s $2 billion investment in US-based AI startup Anthropic, the creator of Claude, one of the leading Generative AI chatbots. The regulator is asking market participants for information and views on whether the digital giant’s relationship with the chatbot startup should be reviewed as a merger or acquisition. Google built up its stake in the chatbot company, reported to amount to a 10% ownership share last fall, over a series of investments since 2022. Anthropic, which was founded by seven former developers from chatbot phenom OpenAI, has received major funding from both venture capital firms and established tech businesses, including a total of $4 billion from Amazon. The Amazon investment in Anthropic is already the subject of a Phase 1 investigation by the CMA, and the opening of the Google inquiry puts the two on a par.

Context – Digital regulation advocates are aligned in the view that AI poses major risks of anticompetitive conduct, despite the top breakthroughs coming from small firms. The antitrust regulators of the European Commission, Great Britain, and the United States recently signed a joint statement that sets out fair dealing, interoperability, and choice as key principles to support AI competition. The agencies agreed that “foundation models” are of particular concern, noting that firms could entrench or extend their market power in AI-related markets, including through investments and partnerships between incumbents and newcomers. The CMA has been especially focused, releasing two reports on AI foundation models as well as opening initial inquiries into Microsoft’s more than $10 billion in investments in OpenAI and the two Anthropic investments. Finally, the UK’s new Starmer Government has pledged to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”.
