News insights

February 2024

European Court of Human Rights Objects to Encryption Backdoors

Report from ArsTechnica

In Brief – The European Court of Human Rights (ECHR) has ruled that giving governments broad-based access to encrypted messages is a human rights violation. The case was brought by a Russian Telegram user who challenged a 2017 order by the Russian security service, issued under Russian law, requiring Telegram to turn over information allowing it to decrypt the messages of six users accused of supporting terrorism. The claimant was not a suspect and argued that Telegram’s compliance would undermine his privacy and that of all Telegram users. Although Russia stopped being a party to the European Convention on Human Rights in 2022, the court still heard the case and backed the contention that Telegram could not selectively provide authorities with decryption keys; tools capable of accessing any encrypted message would enable access to all encrypted messages. “In the digital age, technical solutions for securing and protecting the privacy of electronic communications, including measures for encryption, contribute to ensuring the enjoyment of other fundamental rights, such as freedom of expression,” wrote the ECHR, ruling that the decryption mandate was a disproportionate harm.
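
The court’s technical premise, that a key capable of unlocking one targeted conversation can unlock them all, follows from how end-to-end encryption works. As a minimal illustration only (a generic sketch in Python using the PyNaCl library, not Telegram’s actual MTProto protocol), note that only the two endpoints ever hold the private keys a decryption order would need:

    # pip install pynacl -- illustration only, not Telegram's protocol
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()   # stays on Alice's device
    bob_key = PrivateKey.generate()     # stays on Bob's device

    # Alice encrypts for Bob; a relay server sees only ciphertext.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # Decryption requires one of the two private keys.
    assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at noon"

    # A "lawful access" mechanism means the service retains or escrows
    # private keys -- and a key store that can open six users' messages
    # can open anyone's.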

Context – The battle over encryption between defenders of privacy and civil liberties on one side, and those arguing that overly strong encryption protects criminals on the other, is decades old. The point of the spear is often the charge that encryption protects perpetrators of child sexual abuse material. Similar efforts to protect children by requiring that messaging services be able to access encrypted communications have been underway in the US with the EARN IT Act, in the EU with draft child protection legislation that critics call the Chat Control law, and in the UK with the Online Safety Bill (OSB) enacted last year. Widespread tech industry concern that the OSB would outlaw end-to-end encryption nearly derailed the bill, but the standoff was defused by a government minister stating that no change on encryption was forthcoming. Soon after, however, the King’s Speech setting out priorities for 2024 proposed an Investigatory Powers (Amendment) Bill that threatens to reopen the UK debate.

Microsoft Investment in French AI Leader Mistral to Draw Regulator Scrutiny

Report from Bloomberg

In Brief – European Commission regulators will analyze the competition implications of the proposed “multiyear partnership” between Microsoft and Mistral AI, a French AI startup that is one of the continent’s AI industry leaders. Microsoft has described the arrangement as a €15 million “investment” that includes plans to make Mistral’s generative AI models available to customers of Microsoft’s Azure cloud. Microsoft’s investment will be converted into equity as part of Mistral’s next funding round. Along with developing its own AI models, Microsoft has been a top investor in OpenAI, investing approximately $13 billion in the company behind ChatGPT. The relationship between Microsoft and OpenAI is already being reviewed by antitrust regulators in the EU and UK. In the US, it is the subject of a jurisdictional tug-of-war between the Federal Trade Commission and US Department of Justice.

Context – One mantra of tech regulation advocates is that governments should not make the “same mistakes” with AI that they made with social media, meaning regulate more and sooner. Acquisition scrutiny is one focus, which increasingly worries EU-based startups. The EU DMA regulating “Gatekeepers” (Microsoft is one) requires the companies to notify the Commission of all acquisitions, which regulators are likely to interpret as including most investments. EU antitrust officials are generally concerned about competition related to AI, both directly in terms of AI services themselves and in terms of related impacts, such as on the development of EU-based cloud services providers. Nvidia, the global leader in the development and manufacture of AI-optimized processors, has publicly reported that French regulators are concerned that its processor sales to US-based cloud services giants may undermine prospects for smaller EU cloud competitors. Google, considered the third-largest provider of cloud services, continues to criticize Microsoft’s policies linking its software services to its cloud business, claiming that Microsoft’s growing AI offerings, including those of partners like OpenAI, expand the problem.

Indonesia Requires Digital Platforms to Pay Media Companies for News

Report from Reuters

In Brief – The outgoing President of Indonesia has promulgated a regulation that purports to require large social media platforms to compensate news media companies when their content appears on the platforms. The new rules, which go into effect in six months, are expected to apply to Google, Meta, and the largest domestic digital services. The platforms are directed to enter into agreements with “verified news publishers” that will compensate them through means such as “paid licenses, revenue sharing, sharing of aggregate user data of news, and/or other agreed forms.” Google responded by reiterating its concern with a provision that requires it to prioritize news content from verified news publishers in search results, which it argues could restrict public access to diverse news sources. The next President takes office in October and could modify the regulation.

Context – The biggest development in the years-long global campaign to force Google and Meta to pay “news” media companies has been Meta’s reaction. They have been increasingly straightforward that they will block news on their platforms rather than pay a government-set rate for content they don’t themselves place on Facebook and Instagram. They say media companies overvalue their content and greatly undervalue Meta’s free distribution platforms. Throughout 2023, Canada was ground zero. When Canada enacted a forced-payments regime, Google negotiated a CAD 100 million annual payment into a media fund. Meta blocked media stories, and the move does not appear to have hurt Meta’s traffic in Canada. In fact, Meta increasingly acts like often-contentious news content is not a positive, even when it doesn’t carry an added cost. With that context, the biggest news from Indonesia is Meta saying that it believes the new regulation does not require it to pay media publishers when the publishers post their own content on Meta’s platforms, only when Meta chooses to post media company content. That is consistent with Meta’s policy position in Canada and elsewhere.

Nvidia Reports that the UK Has Joined the Regulatory Review Party

Report from Bloomberg

In Brief – Nvidia, the global market leader in the manufacture of specialized processors that power most of the top Generative AI technology research and services, has reported that the UK Government has joined the queue of regulators inquiring about the company’s policies and practices. The information was included in a filing made with the US SEC as part of a Q4 earnings report that exceeded expectations and has helped the company grow into the third most valuable in the world. The UK joins regulators from the EU and China asking the company about “sales of GPUs, our efforts to allocate supply, foundation models and our investments, partnerships and other agreements with companies developing foundation models”. The company’s risk disclosures list a range of potential legal, legislative, and regulatory challenges for the company and the broader development of AI technology and services.

Context – Why is Nvidia so important in AI? First, know that the GPU (Graphics Processing Unit), a kind of “parallel” digital processor initially designed to efficiently handle digital video processing, such as videogame graphics, ended up proving to be the most efficient technology to build, train, and operate Large Language Models and other Generative AI systems. Nvidia has been the top GPU company for many years. But the key thing to understand about their current position at the top of the GPU-AI market is how they built and maintain CUDA, a parallel programming platform with a massive community of AI developers, which is tightly integrated with their processors. If they have a “moat”, this is it. Really take notice if a regulator starts asking about CUDA. That said, there are not enough top-end, new Nvidia processors to go around, so antitrust officials are asking how Nvidia is determining who to sell to, concerned that favored buyers get a leg up on everyone else. French regulators are interested in whether Nvidia processor sales to US-based cloud services giants, ahead of smaller EU competitors, will undermine the smaller firms’ prospects. Finally, China is opposed to US efforts to restrict the sale of Nvidia’s top chips to the country.
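
To make the “parallel” point concrete, here is a minimal sketch (ours, not Nvidia’s documentation) using Numba’s Python bindings for CUDA. It assumes a CUDA-capable GPU, and it illustrates the programming model that ties developers to Nvidia hardware: thousands of threads each handle one array element at the same time.

    # pip install numba; assumes an Nvidia GPU with CUDA drivers installed
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(out, x, a):
        i = cuda.grid(1)           # each GPU thread gets one index
        if i < x.size:
            out[i] = a * x[i]      # elements are processed in parallel

    x = cuda.to_device(np.arange(1_000_000, dtype=np.float32))
    out = cuda.device_array_like(x)
    threads = 256
    blocks = (x.size + threads - 1) // threads
    scale[blocks, threads](out, x, 2.0)   # compiled to a CUDA kernel
    result = out.copy_to_host()

The same data-parallel pattern, scaled up to matrix multiplications across thousands of cores, is why GPUs dominate LLM training, and the CUDA toolchain around it is the developer lock-in regulators would be probing.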

Indonesia Claims that TikTok Is Violating Social Media-Ecommerce Ban

Report from TechInAsia

In Brief – The Indonesian Minister for Cooperatives and SMEs has criticized TikTok’s plan to address the government’s recent ban on direct online commerce over social media apps, saying that the company continues to violate the policy. Indonesia, with one of the largest ecommerce sectors in Southeast Asia, was the first and largest market for TikTok Shop, an ecommerce marketplace integrated into TikTok’s core short-video-based service. Launched in 2021, it quickly grew into the country’s fifth-largest ecommerce platform, disrupting online and traditional retail in the country. October’s regulatory change was clearly aimed at TikTok, and the ByteDance subsidiary said that it was committed to complying with local laws and regulations. It shut down TikTok Shop Indonesia when the ban went into effect. In December, TikTok and Tokopedia, an online marketplace unit of Indonesia’s GoTo Group and the second largest in the country, announced that TikTok would invest $2.2 billion in the development of a new joint online shopping service that would operate separately from the two existing platforms. However, when the deal closed at the end of January, it was reported as TikTok acquiring 75% of Tokopedia. The government contends that the investment itself is not problematic, but that most sales are still occurring directly over the TikTok social app in violation of the law.

Context – TikTok’s massive global popularity, expanding business aspirations, and exploding user base uploading content that stretches far beyond short music and dance videos are leading to a diverse range of regulatory challenges. In some markets, such as India and the US, China-related security concerns over data and content are preeminent, but not in Indonesia. Its third-largest ecommerce platform, Lazada, is owned by China-based Alibaba. Like Indonesia, Malaysia is reported to be considering a regulatory change to prohibit the app from being a shopping site. Finally, adult-themed streaming on TikTok is reportedly a factor in efforts to ban the platform in Kenya and Nepal.

EU Commission to Fine Apple 500 Million Euros in Spotify Antitrust Case

Report from CNBC

In Brief – The European Commission (EC) is reportedly set to impose a nearly 500 million euro fine on Apple in an antitrust case filed against the iPhone giant by Stockholm-based music streaming giant Spotify. The original complaint was submitted in 2019, the Commission’s initial statement of objections was released in 2021, and the case was narrowed last March to focus on Apple’s anti-steering rules, which prohibited app developers from informing customers that there were often-cheaper alternatives to making purchases directly in their apps. The result largely parallels the outcome of Apple’s lengthy battle with Epic Games in US federal court, where Apple prevailed on the antitrust charges but has been forced to end its anti-steering policies because they violate California’s Unfair Competition Law.

Context – Apple’s largest app developer critics, including Spotify, Epic Games, and Meta, are focused on government actions that will force Apple to lower its fees on in-app purchases. The biggest stick they are currently attempting to wield is the EU’s landmark Digital Markets Act that regulates the largest digital “Gatekeepers”. Apple is one of the six. Wrapping up the Spotify-inspired antitrust complaint seems mostly to be a clearing-the-decks exercise before the real action regarding DMA compliance. Apple’s core DMA plan is to open its “walled garden” and allow app “sideloading” and the use of alternative payments services. But it is establishing a set of rules and processes for sideloading app stores that parallel the App Store rules to “protect” users. Again, fees are the top issue. Apple is changing its fee structure in Europe, reducing its general commission from 27% to 17% (and from 15% to 10% for subscription renewals), but also establishing a new 50-euro-cent charge for every app download after an app is downloaded one million times in a 12-month period. If app developers choose to use Apple’s payment service, it will cost an additional 3 percent commission. Apple’s critics are denouncing the plan, while Apple is equally critical of developers like Spotify, and this will be a major early test of DMA enforcement processes.
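
To see why developers are focused on the new per-download charge, here is a rough, hypothetical calculation (our illustration using only the headline figures above; Apple’s actual EU terms carry more conditions):

    # Hypothetical EU app: 2,000,000 EUR in annual in-app sales,
    # 1,500,000 annual downloads, using Apple's payment service.
    sales = 2_000_000
    downloads = 1_500_000

    commission = sales * (0.17 + 0.03)                    # 17% + 3% payments add-on
    per_download = max(downloads - 1_000_000, 0) * 0.50   # 0.50 EUR beyond 1M
    total = commission + per_download

    print(commission, per_download, total)   # 400000.0 250000.0 650000.0

For a free-to-download app with a huge install base and modest revenue, the per-download charge dominates the bill, which is exactly the critics’ complaint.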

Supreme Court Oral Arguments on State Laws Regulating Social Media Viewpoint Discrimination

A Report from Platform Economy Insights

Special Report 

In Brief – The Supreme Court today heard nearly four hours of oral arguments in a pair of cases involving the constitutional limits of states regulating how large social media companies engage in content moderation. The cases involve laws passed by Florida and Texas in 2021 that their Republican backers said were designed to stop the censorship of conservative viewpoints by the largest social media platforms. The Florida law was blocked by a federal judge and the 11th Circuit Court of Appeals; Florida challenged that decision, and that was the first case argued. The Texas law was likewise blocked by a Federal District Court judge, but that decision was overruled by the 5th Circuit Court of Appeals. The Supreme Court then imposed an emergency injunction pending its decision on the appeal. That was the second case argued.

This is the second of three sets of social media-related cases that the High Court is hearing this term. The first, argued in November, was a pair of cases on the constitutional limits of government officials blocking people from their social media accounts, in particular defining when government employees are acting in an official capacity online, and are therefore bound by First Amendment restrictions on censorship, and when they are acting as private citizens online, with their own free speech rights. Finally, later this year, the court will deal with the question of whether communications by top government officials encouraging social media platforms to restrict some online posts or speakers amount to state censorship.

Here is a good running timeline of thoughts from The Verge during the arguments.

A few post-game thoughts — 

  • Last spring, when the Supreme Court heard arguments in the cases Twitter v Taamneh and Gonzalez v Google, many expected a tightening of Sec. 230. Those cases involved the potential liability of social media companies for terrorist actions because some terrorist content and communications occurred on the platforms. In the end, the Twitter case was decided 9-0 in favor of the platform. The court then simply remanded the Gonzalez case to be decided based on the Twitter decision, widely expected to lead to a Google win. Some were surprised by the unanimity because three of the conservative justices, Thomas, Alito, and Gorsuch, have been overtly skeptical of Sec. 230 and the platforms. Justice Thomas even wrote the opinion. Well, today’s arguments played out a lot more like what was expected last year, with those three justices obviously sympathetic to the states and opposed to the platforms.
  • Speaking of Sec. 230, one theme repeatedly expressed by those three justices and other Sec. 230 critics is that the law was initially premised on the promise of so-called neutral platforms. The argument goes that either the platforms should not be able to moderate content based on viewpoint or they should not be protected by Sec. 230. Paul Clement, representing the platforms, and Solicitor General Elizabeth Prelogar, representing the Federal Government, both succinctly and clearly explained that Sec. 230 was never based on the idea that online platforms would be neutral, and in fact Sec. 230 was specifically designed to promote content moderation by the platforms based on what each determined was best for their platform and their users. That said, the myth of Sec. 230 being premised on neutral platforms will die hard.
  • As expected, a major point of contention surrounded the question of whether the giant social media platforms are appropriately considered “common carriers”, like telegraph or phone companies, or are instead like newspapers, exercising editorial control over the selection and placement of content on their pages. The distinction is linked to 1st Amendment court precedents related to those businesses and their ability to regulate communications.
  • To Sum Up – Don’t expect 9-0 this time. Instead, the three social media skeptics seemed solidly opposed and aligned. Justices Sotomayor, Kavanaugh, Kagan, and Chief Justice Roberts seemed most focused on the core platform argument that 1st Amendment scrutiny against censorship is designed to limit government, not private platforms. Barrett and Jackson were hardest to read. To again harken back to last year’s Sec. 230 cases, Justice Barrett wanted to dispose of the claim that the platforms could be liable for the terrorist attacks without having to decide the Sec. 230 question in the Google case involving YouTube algorithms. She foretold the eventual outcome, which was to decide for Twitter and punt on Sec. 230 and algorithms. Today, there seemed to be similar interest in protecting the ability of social media platforms to moderate content but finding some way to carve out some types of communications, like email and private messaging, and apply a different standard.

FTC v Amazon Antitrust Trial Scheduled for October 2026

Report from Bloomberg

In Brief – Federal Judge John Chun has set October 13, 2026, as the date to begin the antitrust trial pitting the Federal Trade Commission against Amazon. The case will be heard in the US District Court for the Western District of Washington. The FTC sued Amazon in September alleging anticompetitive practices that harmed consumers and the small business sellers who use Amazon’s massive online marketplace. The regulator contends that Amazon pushes sellers to raise the prices they offer on other websites to match the higher prices they set on Amazon to account for high Amazon fees, creating an elevated virtual price floor on the internet. Amazon moved for Judge Chun to dismiss the case in December, but the judge’s schedule for quarterly status conferences through the trial date doesn’t address a ruling on the motion to dismiss.

Context – Regulators have had a tough time focusing on Amazon given the kaleidoscope of its interconnected services. However, the links between Amazon’s massive logistics business, its treatment of third-party marketplace sellers, and how Amazon’s algorithms influence who makes sales are increasingly getting the most attention. This is because Amazon commissions on third-party sellers often approach 50% due to high logistics fees, and Marketplace algorithms reward sellers who buy Amazon logistics to the point that most sellers see doing so as necessary to make meaningful sales. In the EU and UK, Amazon has settled antitrust investigations by promising to break that link. The FTC’s lawsuit also alleges that Amazon unfairly preferences sellers who buy Amazon logistics and penalizes sellers who offer lower prices elsewhere, even when lower logistics costs would allow lower prices. Those “price fixing” complaints are largely contingent on Amazon being such a dominant platform for small online retailers that sellers cannot afford to forgo sales on Amazon, and instead raise prices elsewhere to protect their sales on the giant’s marketplace. A similar private antitrust suit and one from the California Attorney General have survived Amazon’s motions to dismiss. It looks like Judge Chun sees the FTC’s suit surviving as well.

Canadian Legal Panel Determines that Air Canada is Responsible for Its Chatbot

Report from the Washington Post

In Brief – The Civil Resolution Tribunal of British Columbia has ordered Air Canada to stand behind the information provided to a customer by the chatbot on its website despite the information being incorrect. The customer, Jake Moffatt, who lived in British Columbia, interacted with the chatbot in November 2022 and purchased tickets to attend his grandmother’s funeral in Ontario. He later filed for a partial refund amounting to over $800 CAD based on the Air Canada bereavement policy as explained by the chatbot. However, the airline rejected the claim because the chatbot was wrong; the company webpage describing the bereavement rules showed that the process the customer followed did not qualify. The civil dispute resolution panel sided with the customer and rejected the company’s argument that the chatbot was an independent agent separate from the company, instead describing the chatbot as just another part of the company’s website, for which the company is responsible. The panel likewise rejected the argument that it is the customer’s responsibility to determine which part of the company’s website, the chatbot or the bereavement page, was accurate.

Context – To be clear, this civil dispute resolution panel decision lacks the depth of many judicial rulings. Air Canada did not even provide the panel a detailed description of the technology behind its chatbot, instead arguing that the company was not responsible for incorrect information given out by its agents, human or digital. However, as more customer service chatbots become AI-enabled, the case is a helpful datapoint. A key learning since the release of ChatGPT is that generative AI chatbots are designed to compile realistic-sounding responses to queries. They often deliver accurate facts and credible ideas, but they also get things wrong. We’ve now heard plenty about chatbot “hallucinations”, realistic-sounding fabrications such as those that peppered a ChatGPT-prepared legal brief and led to two lawyers being sanctioned in US District Court, or those that fuel chatbot defamation liability questions. Pro Tip: Check your AI-enabled work.

Mexican Antitrust Authority Calls for Amazon and Mercado Libre Changes

Report from Fortune

In Brief – The Mexican competition authority (COFECE) has issued a preliminary report on the country’s ecommerce market that identifies barriers to competition and calls on market leaders Mercado Libre and Amazon to make changes. COFECE says the two ecommerce giants hold a combined 85% share of the market, and that their dominance is harming consumers and the small businesses that sell in their marketplaces. The regulator proposes that the platforms change a range of policies that reinforce barriers to competition, including separating video streaming from online shopping loyalty programs such as Prime, increasing transparency in marketplace algorithms so that sellers better understand how products are selected and ranked in search results, and ending the preferences given to products from sellers who use the companies’ own logistics services. The regulator is also calling on the two companies to enhance transparency about their own logistics standards so third-party logistics services can better integrate into their platforms. COFECE plans to issue a final report to the government in six months, which could be used as the basis for more formal antitrust action.

Context – Amazon is a complex conglomerate with digital services, hardware divisions, loss leaders that drive super-valuable Prime subscriptions, an original first-party retail business that is giant but low-margin, a third-party marketplace that is larger with much higher margins, and a unique and massive logistics business that is its biggest moat. Regulators have had a tough time understanding and focusing. Even FTC Chair Lina Khan, who made her name breaking down the Amazon business, was slow to shift the focus from first-party retail to the third-party marketplace and logistics. The COFECE report follows the trend of homing in on links between marketplace ranking algorithms and company logistics services, as the EU and UK have done with Amazon. This is also part of the FTC’s antitrust suit targeting Amazon that will take years to resolve. The Mexican regulator is breaking ground with its call for streaming video to be pulled out of shopping loyalty programs.

EU Report Calls Out Lack of “Influencer” Transparency Around Sponsorships

Report from Reuters

In Brief – The European Commission and the consumer protection regulators of 22 EU member states, Norway, and Iceland released the results of a review of top online influencers in Europe, finding that only 20% properly disclose commercial content as advertising as required under EU law. Ninety-seven percent of the 576 influencers screened for the study posted commercial content on platforms such as Instagram, TikTok, YouTube, Facebook, X, Snapchat, and Twitch. More than three-quarters of them were engaged in a level of commercial activity warranting registering as a business, but only 36% did so. Most failed to provide company details on their posts, use the labels and other tools provided by the platforms to identify commercial content, or make their sponsorship disclosures visible in the manner required. Nearly half of the influencers operated their own websites, with many engaged in direct online commerce. The review is part of an effort to promote compliance with EU consumer law by online influencers, and 358 of them have been earmarked for follow-up by national authorities, which will request that they comply with their regulatory duties and take additional action if necessary. The Commission will also analyze the results in light of the legal obligations the largest platforms have under the Digital Services Act.

Context – In 2023, the global social media influencer industry was valued at $21.1 billion. The US FTC released updated online “Endorsement Guidelines” last June and is pursuing regulations on unfair and deceptive practices in digital marketing. The UK CMA updated its influencer guidance in late 2022 and the draft UK Digital Markets, Competition and Consumers Bill includes a section on fake online reviews. The Commission launched an Influencer Legal Hub in 2023 providing information to help with legal compliance in the EU. And France has been most aggressive, enacting legislation last year to prohibit influencers from promoting some highly objectionable practices and heavily regulating others, while requiring influencers to adopt business practices such as holding liability insurance.

Judge Blocks “Unconstitutional” Ohio Social Media Parental Consent Law

Report from the Washington Post

In Brief – Federal Judge Algenon Marbley has found an Ohio law requiring children under 16 to get parental consent to use social media apps to be “unconstitutional” and blocked it from going into effect while a legal challenge works its way through federal courts. The state’s Parental Notification by Social Media Operators Act, which targets social media sites like YouTube, TikTok, and Instagram, requires digital platforms to verify whether users are 16 or older and to get parental consent for younger teens to gain access. NetChoice, a trade group representing digital companies that has played a central role in many legal challenges of state internet laws, filed suit in January to block Ohio’s law from going into effect, alleging that it violates the 1st and 14th Amendments. As he did in his January ruling, which temporarily paused the law, Marbley again focused on 1st Amendment concerns, saying, “Foreclosing minors under 16 from accessing all content on websites that the Act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media’s harm to children.” He also again noted the problematic lack of clarity in defining the types of digital services covered by the law, and questioned exceptions such as those for “established” and “widely recognized” news media outlets.

Context – Although the data showing causality between social media use and teen mental health harms is thin, the effort to cordon off parts of the internet and regulate how platforms serve teens is proceeding apace, globally. Congressional legislation has bipartisan backing but has been tripped up by partisan and ideological division over the kind of content that should be walled off from teens. US states under Republican leadership were quick off the mark with parental consent laws, but California and New York are also engaged. The 1st Amendment is unsurprisingly proving a tough hurdle in the US. Those trying to change the way social media companies operate might find the design-negligence class actions trying to circumvent Sec. 230 more impactful. Meta’s recent changes to content practices for teen users are likely a response.

EU Opens Formal Investigation of TikTok Under the Digital Services Act

Report from the New York Times

In Brief – The European Commission has announced a formal investigation into whether TikTok is violating the Digital Services Act (DSA), which regulates how digital platforms handle objectionable content. The law was enacted in 2022 and went into effect in mid-2023 for the largest online platforms, called Very Large Online Platforms (VLOPs). Currently numbering 23, they are regulated centrally by the Commission. The investigation of TikTok includes reviewing whether the service appropriately protects teen users, including from algorithms that are addictive or create what the Commission calls ‘rabbit hole effects’, has proper age verification methods to keep objectionable content away from children, maintains a searchable database of advertising content, and gives proper access to outside researchers. TikTok is the second VLOP to face a formal Commission investigation, following X, which saw one opened last fall after the Hamas attack on Israel, the resulting regional violence, and accusations that X was not meeting its obligations to keep violent and extremist content off its platform. Formal DSA proceedings, which do not face specific deadlines, allow the Commission to take enforcement actions such as direct interim measures and non-compliance decisions, as well as to accept company remedy commitments.

Context – A huge digital policy trend you must be tracking is the EU heading fully down the path of unprecedented platform regulation under the DSA and DMA. The DSA’s mandates impacted only VLOPs in its first year, but all digital platforms now fall under its general requirements, with the Digital Services Coordinator of a company’s EU member state of establishment serving as its regulator. Add regulators expounding on the digital service “addictive” design trope to regulators directing how platforms moderate controversial topics in ways that are totally not censorship. In the US, the claim that social media platforms are intentionally addictive is being fought out in class action lawsuits, and the US Supreme Court is set to begin considering the 1st Amendment constraints around government directing content moderation.

Modified “Kids Online Safety Act” Gets Bump in Senate Support

Report from the Washington Post

In Brief – The lead sponsors of the Kids Online Safety Act (KOSA), legislation touting “safety by design” that would require digital platforms to ensure younger teens do not use services with a set of design features that backers argue harm young users, have tweaked their bill to address concerns of some progressive advocates for marginalized youth and added 15 additional cosponsors, including the Senate Majority Leader. The key change is to limit enforcement of the “duty of care” portion of the bill, the part that most directly implicates so-called harmful content, to the Federal Trade Commission, cutting out State AGs. Some critics have argued that conservative State AGs would pressure platforms to restrict teen access to LGBTQ or reproductive health content. State AGs can still enforce rules against design features including autoplay, notifications, personalized recommendations, and in-game purchases.

Context – The top takeaway from the KOSA tweaks is that power and money, in this case of internet companies, were not the problem. That claim is pretty much always just whining. There are deeper issues. Progressives were clearly a huge problem. And some super smart progressive tech policy experts argue that KOSA is still problematic for marginalized youths, both because the President and FTC will be conservative someday, and because legal attacks on design features can easily be a proxy to target disfavored content. Take “autoplay”. Will feeding a teen endless math or vocabulary questions be a problem? Privacy advocates still don’t like the fact that age-based design rules point to either data-heavy age verification or shutting down features for everyone. And free speech advocates still see censorship. In terms of votes, that might not matter much. But the 1st Amendment and the courts do matter. Federal judges have mostly been ruling against state laws involving similar age-based restrictions. Finally, given the deep frustrations among some top House Republicans with the current FTC, it will be interesting to watch the reception to a bill that sets the FTC up as the national decider of what constitutes online harm.

A Defective Design Class Action Lawsuit Targets Dating Apps

Report from NPR

In Brief – Six plaintiffs filed a federal lawsuit on Valentine’s Day accusing popular Match Group dating apps, including Tinder and Hinge, of designing their services to “transform users into gamblers” who engage in compulsive behavior and pay the companies through ongoing subscriptions and various add-ons. Users are described as “addicts” who fall victim to apps that “gamify” the search for love and use hidden algorithms to lock users into a pay-to-play loop. The complaint, which is seeking class action status, claims that Match Group has violated state and federal consumer protection, false advertising, and defective design laws. While many popular dating apps like Tinder and Hinge are free to download and use, paid subscriptions offer premium features such as unlimited swipes. A Match Group spokesperson denied the allegations and said the lawsuit is “ridiculous and has zero merit.” In 2022, the Pew Research Center found that 10% of American adults who are married, living with partners, or in committed romantic relationships report meeting their partner on a dating site or app, while a 2019 Stanford University study found that more than half of new couples now meet online.

Context – Class action lawsuits accusing digital services of being “addictive” and using “algorithms” to engage in non-transparent nefarious things… Where have we heard that? OK, social media. Critics have been trying for years to construct legal strategies to circumvent Sec. 230. Two of the most common avenues are arguing that the harms are caused by the conscious design of the platforms to encourage longer and more intensive use, which the companies knew, or should have known, would lead to harms, or that the use of algorithms by the platforms is an activity not protected by the federal statute. Those suits are starting to gain some traction at the federal and state level. Most recently, the City of New York sued TikTok, Meta, Snap, and YouTube in California Superior Court alleging that the companies intentionally manipulate and addict younger users to keep them on their platforms.

Meta Downranking Political Issues on Instagram, Reels, and Threads

Report from the Washington Post

In Brief – Meta has announced changes to the recommendation policies of Instagram, Threads, and Facebook’s short-video service Reels to de-emphasize political issues and commentary. Users will still see posts about political and social issues from accounts they follow, but the company won’t show users posts focused on laws, elections, or social issues from accounts the users don’t follow. Meta also said it plans to develop tools to allow users to opt into seeing more political content, and it is including political topics in its new Threads “top topics” service. In recent years, Meta officials have made it increasingly clear that the company was finding that serious news and political commentary were detrimental to its platforms, bringing misinformation and politicized content moderation. But most of all, they increasingly claimed that many users did not want the contentious content. For example, when Meta announced the rollout of the Twitter-like Threads service and many progressive Twitter critics hailed the prospect of an alternative, Instagram head Adam Mosseri said that the company would not encourage politics and hard news on the platform, as they were not worth the scrutiny and negativity.

Context – The public evolution of Meta’s thinking about the value of hard news and politics to its business has been clear for well over a year. We’ve been highlighting it in the context of legislation to require Meta (and Google) to pay news media companies. The standoff in Canada has been front and center. Canada required the platforms to pay government-set licensing fees to media companies when their news content appeared. Meta argued that most of its users don’t value news, that it is shifting its platforms more to entertainment content, and that the media companies chronically undervalue the ability to reach people over the Meta platforms. So, Meta blocked news in Canada and says it intends to do the same wherever it is forced to pay media companies a government-set rate. Maybe Meta’s leadership really does see “hard news” as a net negative. And when you add a direct tax into the mix, they are happy to be out of the business.

Once Again, an AI Copyright Lawsuit Is Heading to a Fair Use Battle

Report from Bloomberg

In Brief – Federal Judge Araceli Martinez-Olguin largely sided with OpenAI in her ruling to dismiss most of the copyright claims made by a collection of authors led by comedian Sarah Silverman. The plaintiffs alleged six claims, and the judge dismissed four of them: vicarious copyright infringement, violations of the Digital Millennium Copyright Act, negligence, and unjust enrichment. As has been the case in the other major AI training copyright infringement lawsuits, the claim that training is itself a violation of federal copyright law is proceeding; OpenAI did not move to dismiss it. In addition, Martinez-Olguin did not dismiss the plaintiffs’ claim that OpenAI violates California’s Unfair Competition Law (UCL), noting that its business may prove to be “unfair”. However, she said that if the first claim eventually goes OpenAI’s way, and training is determined not to violate federal copyright law, then the California UCL claim will likely be dismissed based on federal preemption. The plaintiffs have 30 days to amend their claims to address the judge’s ruling.

Context – Federal judges continue to narrow the lawsuits filed by copyright holders challenging AI models down to the basic question of whether unauthorized training is a violation of copyright law. The rulings point to an eventual battle royale over AI training and the “fair use” exception. The same plaintiffs filed a similar lawsuit against Meta, and Federal Judge Vince Chhabria reacted similarly, dismissing the expansive claims and focusing the case on the core training issue, which Meta likewise did not move to dismiss. As the increasingly voluminous cases reach judges, a key question continues to be whether the AI models produce copies in their outputs. In these authors’ cases, the judges say that they do not. In the New York Times suit against OpenAI, the paper contends ChatGPT does produce copies, but OpenAI argues that it is a rare occurrence, a bug that is only triggered when a user goes to extraordinary lengths to entice the result. So, add one more ruling to those ushering us toward a fair use showdown, and reiterating that actual copied outputs are key to moving beyond it.

Biden-Harris Reelection Campaign Sets Up on TikTok

Report from the Washington Post

In Brief – The Biden-Harris campaign released its first post on TikTok during the Super Bowl, asking the President who he was backing and whether the game was rigged for political purposes. The campaign, which utilizes memes to connect with young people, clearly sees the platform as central to that effort as well, and uses a black and white photo of the President with laser eyes, popularized by the “Dark Brandon” meme, as its TikTok avatar. Although the campaign’s use of TikTok is independent of the ongoing review of the platform’s national security implications by the Committee on Foreign Investment in the United States (CFIUS), in a nod to security issues the campaign said that it is taking safety precautions around its devices and incorporating a protocol to ensure the security of the account. The White House itself has often used TikTok as part of its official communications channels on issues as varied as the war in Ukraine and the President’s infrastructure initiatives, as well as briefing TikTok content creators in the Oval Office in the run-up to the 2022 mid-term elections. Several congressional Democrats are particularly active on the platform, as is the Democratic National Committee.

Context – TikTok is the social media platform that US politicians, especially Republicans, most love to hate, which is saying something given the vitriol many conservatives direct at Meta and Google. Even before the latest Biden-Harris campaign move, the chances that legislative or regulatory action would shut down the platform were slim and getting slimmer. Federal judges have been very sympathetic to claims by the company and its users that the service is protected by the 1st Amendment, with courts rejecting Trump Administration efforts in 2020 and Montana’s recent state-level ban. CFIUS has been the biggest risk, potentially by backing a reported call from Biden Administration security officials for ByteDance to sell its US TikTok operations. Speaking of the Super Bowl, we can’t help but think of wagering. We suspect that the odds of negative federal regulatory rulings between now and November just got much longer.

FTC Beats Motion to Dismiss in Second Attempt to Prosecute Location Data Broker

Report from The Record

In Brief – Given a second chance by US District Court Judge B. Lynn Winmill after he dismissed the Federal Trade Commission’s lawsuit against Kochava, a major geolocation data broker, the agency’s amended complaint survived a motion to dismiss from the company and will proceed towards a trial. In May, Judge Winmill said the initial FTC complaint failed to show evidence that the sale or use of the data collected by Kochava caused any actual substantial injuries. He described the agency’s claims as theoretical, but left the door open by agreeing that the sale of location data could be an intrusion on consumer privacy. This time, the judge concluded that it is possible that the ability of Kochava’s customers to purchase data that is granular and non-anonymized “invades consumers’ privacy and exposes them to significant risks of secondary harms” and could be a violation of Section 5 of the FTC Act and an “unfair” practice as claimed by the FTC. The regulator claims the company combines geolocation data with a wide range of other information including names, addresses, age, gender, ethnicity, marital status, income, and the apps installed on a consumer’s phone. The company’s CEO responded saying, “This case is really about the FTC attempting to make an end-run around Congress to create data privacy law.”

Context – The FTC is fully engaged in a campaign to protect consumers from what privacy advocates and its Democratic commissioners call “commercial surveillance”, in particular the collection and use of detailed physical location data gathered by individuals’ smart phones. In recent weeks, the agency reached settlement agreements with two data aggregation companies accused of deceptive and unfair practices. InMarket agreed to not sell, license, or share any product that targets users based on sensitive location data, while Outlogic agreed to not sell location data that could reveal a person’s medical visits. The FTC’s regulatory activism is increasingly being criticized by business trade groups.

Meta Challenges DSA Funding Mechanism Based on Profits Not Burdens

Report from Politico

In Brief – Meta and TikTok have filed lawsuits in the EU General Court challenging the funding mechanism used by the European Commission to enforce the Digital Services Act (DSA). Meta is arguing that the DSA’s “supervisory fee”, which is calculated based on the users and profits of the “Very Large Online Platforms” (VLOPs), imposes unfair and disproportionate burdens on some platforms while allowing others to avoid paying into the enforcement kitty. At the heart of Meta’s challenge is the policy of basing the fees on a business’s profits, which leaves some platforms, including some bringing significant regulatory burdens, contributing nothing because they are not profitable. Amazon, Snap, Pinterest, and X (formerly Twitter) are among those not charged any fees because they did not report a net profit in 2022. Google and Meta were expected to be charged a combined €31 million in fees, almost three-quarters of the total. TikTok is challenging the use of third-party estimates of user numbers. A Commission spokesperson responded to the suits saying, “Our decision and methodology are solid.”

Context – The DSA imposes new rules on digital platforms to address illegal and objectionable content. The largest, the VLOPs, face the strictest mandates. All other platforms deal with a Digital Services Coordinator in their EU member state of establishment, like under the GDPR. Unlike the Digital Markets Act, which has the Commission regulating the 22 largest “core platforms” of the six largest digital “gatekeepers” but does not include any specific funding mechanism, the DSA at least attempts to fund its regulator. Among the VLOPs not contributing, X has drawn attention because its content moderation practices have brought the most initial scrutiny, while Amazon is taking the Commission to court for even being included in the DSA despite the law directly covering marketplaces. Amazon’s ability to generate huge revenues but show minimal taxable profits by reinvesting so much in logistics centers is the main reason that Digital Services Taxes in EU countries like France tax company revenues, not profits.

Internet Works Becomes a Formal Trade Association for Mid-Tier Companies

Report from Axios

In Brief – Eighteen top internet companies that describe themselves as “mid-tier” have announced the establishment of a new trade association called Internet Works to formalize their collective work representing the policy interests of small and mid-size digital platforms and their users. The group, many of whose members have been cooperating in recent years through a less formal coalition with the same name, is dedicated to pushing policymakers to look beyond the handful of “Big Tech” corporate giants when considering key internet policies such as Sec. 230 of the Communications Decency Act, which allows all platforms to moderate content in a manner that best serves their users. For example, Internet Works and its member companies recently submitted an amicus brief to the US Supreme Court in opposition to the Texas and Florida laws regulating how social media platforms engage in content moderation, arguing that content moderation and curation by digital platforms, including by small and mid-size firms, is expressive activity that is protected by the First Amendment and benefits users.

Context – Back in December 2021, the Internet Association (IA), the first DC trade association that claimed to “exclusively” represent internet companies, shut down after nine years of operation. It had more than 40 members, including giants like Google, Facebook, Amazon, and Microsoft, as well as many of the mid-size firms now in Internet Works. When IA started, it sent a message that all “tech” companies were not alike. Internet platforms were different from software, hardware, network, and content companies. When IA folded, it was a signal that digital platform businesses, like the internet itself, had become vast and varied. In the end, the very largest members, among the biggest companies in the world, appeared to have outgrown their ability to work together. They were also increasingly subject to antitrust criticism and investigations and may not have been willing to fund a group diverse enough to include many members who held competition policy concerns or simply did not want to be drawn into the so-called Techlash. Internet Works is the mid-tier firms’ successor effort.

UK CMA, Google’s Regulatory Co-Pilot, Pauses Privacy Sandbox Rollout

Report from MediaPost

In Brief – In its latest report on the status of Google’s “Privacy Sandbox” plan to phase out the use of third-party cookies in the Chrome browser, the UK Competition and Markets Authority (CMA) says that Google must pause its timeline and address concerns that have surfaced since the digital giant began market testing last fall. In 2019, Google announced its intention to replace third-party cookies with targeting tools it called more respectful of user privacy, citing shifting user expectations and the similar policies of other major web browsers, especially Apple’s. However, Google’s digital ad businesses are far larger than the other browser companies’, and Chrome is the browser market leader. Many ad industry participants quickly voiced concerns that Google could use the changes to bolster its market position. Google promised to work with the industry and reached an agreement with the CMA in 2022 to have the agency serve as a kind of lead regulator, certifying that the new ad targeting tools do not unfairly benefit Google. The CMA says that Google complied with its regulatory commitments in Q4 but that further rollout of the Privacy Sandbox will enter a “standstill period” while they resolve competition questions.

Context – “Targeted advertising” creates a policy maelstrom with crosscurrents between privacy advocates who hate it and a digital ad industry that continues to grow and diversify. When giants like Apple and Google change AdTech in the name of “privacy”, competitors are skeptical. Apple has been the chief driver, criticizing ad-based business models while rapidly growing their own ad business. Their 2021 policy to require third-party apps to get up-front user approval to track them for ads is under investigation by competition regulators in France, Italy, Germany, and Poland. The Google relationship with the CMA is interesting because of its role as a new kind of government co-pilot for a deeply technical business. In addition, the EU’s DMA applies to some of the advertising platforms of gatekeepers and the Commission included Google’s, Meta’s, and Amazon’s in its initial list. Two full-time government co-pilots for Google’s AdTech teams?

The Last Bid for a Deal on the EU Digital Labour Platform Directive

Report from EuroNews

In Brief – The on-again, off-again negotiations to wrap up the EU Digital Labour Platforms Directive regulating Gig-style work platforms have reached a pared-back compromise that appears to be the last throw of the dice to enact the legislation before June’s elections. The Commission proposed a draft in December 2021, and the European Parliament approved its version a year ago, but member states have been divided between those aiming to strictly regulate Gig-style work and those championing the greater flexibility of independent work models. Although a top goal of labor advocates throughout has been to standardize so-called “worker classification” to determine when platform workers are considered platform employees with applicable rights and benefits, the final compromise bid crafted by Belgium, the current Council President, dropped that key component and leaves it to national governments to set their own in-country criteria for a rebuttable presumption that a platform’s workers are employees. Alongside worker classification, the Directive also proposed new rules regulating the use of personal worker data and algorithms to manage workers. These rules proved less contentious and remained in the final compromise, including a prohibition on workers being dismissed by automated systems.

Context – Balancing an end to models based on so-called “phony” independent contractors with expanding opportunities for non-traditional workers and giving skilled freelancers flexibility and independence has proven very challenging. In the US, federal and state legislation, even in Democrat-controlled areas, has been stymied since California voters exempted Gig-drivers from the state’s worker classification law in 2020. In the run-up to the 2024 election, the Biden Administration is pressing regulatory action at the Department of Labor and the FTC that will appeal to progressive labor advocates while not likely changing actual conditions for platforms or workers before November.

Google Wants Judge to Overturn Jury Decision in Epic Games Antitrust Case

Report from Reuters

In Brief – Google has filed a motion asking Federal Judge James Donato either to overturn the December jury verdict that sided with videogame giant Epic Games and found that Google’s Android ecosystem violated federal antitrust law, or to order a new trial, which Google says should be a bench trial. Google argues that the jury’s decision was not reasonable based on the evidence and the law, and that numerous rulings and instructions by Donato, the presiding judge, were contrary to federal court precedents and justify nullifying the decision. Epic’s jury trial win over Google came more than a year after a federal judge rejected Epic’s similar lawsuit targeting Apple, ruling that the iPhone maker, whose mobile ecosystem imposes far more limits on app developers, did not violate federal antitrust law.

Context – Google’s motion argues that the jury and the presiding judge got things very wrong. It starts with, and regularly cites, the decision in the Epic v Apple case. It certainly struck legal laymen as odd that the more closed Apple ecosystem, which holds larger market shares than Google’s Android, could be on the right side of federal antitrust law while the more open system with a smaller market share violated the law. But Epic v Google was a jury trial, while Epic v Apple was a bench trial. Plaintiffs in all the Big Tech suits are now asking for jury trials. Second, Judge Donato was very upset about Google’s conduct in deleting intra-company messaging in violation of legal hold orders. Very upset. And he instructed the jury in a manner strongly averse to Google. Now, to be clear, Google’s motion is being made to the same judge who presided over the case and trial. On most points, they are asking him to rule “My Bad”. Not likely. But the real audience is the Ninth Circuit Federal Court of Appeals. That court did not just uphold the Epic v Apple ruling; it upheld the determination that the Apple and Google ecosystems directly compete. Donato refused to allow Apple-Google competition to be part of the case, including the results in Epic v Apple, which is the first point argued by Google and a lynchpin of their motion and appeal.

Meta Oversight Board Backs Ruling on Manipulated Video But Wants New Rules

Report from CNN

In Brief – The Meta Oversight Board (OB) has ruled that the company properly adhered to its manipulated content policy when it refused to take down a video of President Biden that accused him of inappropriately touching one of his granddaughters, but urged the company to expand the scope of its “incoherent” and “confusing” rules. The Board noted that Facebook’s rules targeted so-called “deep fakes”, prohibiting AI-created or AI-manipulated videos that attempt to fool viewers by showing people saying things they did not say. The video in question was not created with AI, was obviously manipulated, showed the President doing something he did (placing an “I Voted” sticker on his granddaughter’s chest, but on a loop), and did not include any fake speech by the President. However, while backing Facebook’s decision, the Board also called on Meta to revise its manipulated media rules to better police efforts to mislead users. In particular, the experts urged a shift away from limiting action to AI-enabled fakes to instead focus on whether content is manipulated with an intent to deceive or misrepresent in order to further a harm Meta aims to prevent.

Context – Ironically, the Board calls on Meta to move away from limiting its manipulated media rules to just AI-generated "deep fake" content precisely because AI is expected to help people create so many more kinds of fakes. To step back, dealing with AI-generated fakes, generally through some type of digital watermarking, is widely seen as a top "responsible" AI practice. OpenAI's recent announcement of new rules and restrictions on using its services in politics includes new image watermarking, and Meta announced an AI image identification plan in parallel with the Oversight Board's announcement. At the end of the day, the Oversight Board simply wants Facebook and Instagram to combat misleading content more aggressively. Many people do. But many people want less intervention. As tech policy expert Mike Masnick says, the job of content moderation is "impossible to do well" and there is no correct answer. At least X is unlikely to mimic Meta.

Google Loses Appeal of KFTC "Anti-Fragmentation" Decision in Seoul High Court

Report from Yonhap News Agency

In Brief – The Seoul High Court has dismissed Google’s appeal of the Korean Fair Trade Commission’s 2021 decision that Google violated competition law by prohibiting digital device manufacturers from developing so-called Android “forks” while also manufacturing fully Android-compatible devices. The KFTC imposed a fine of 224.9 billion won (about $171 million today) and ordered the digital giant to stop enforcing their Anti-Fragmentation Agreements on device manufacturers. The KFTC argues that freeing device manufacturers like Samsung and LG to develop modified Android systems for phones, smart watches, and smart TVs will expand innovation and consumer choice. Google, which defends its policies as promoting Android device compatibility that benefits consumers, is expected to appeal the decision to the South Korea Supreme Court.

Context – Google has a more "open" device ecosystem than Apple, yet it is running into more legal and regulatory problems. Apple does not license its operating systems, third parties cannot build Apple devices, and it maintains strict "walled gardens". Google's Android ecosystem is a kind of hybrid. It is open to third-party device manufacturers and app developers but is restrictive in many non-transparent ways. The base Android OS code can be used by developers in an "open source" manner, but device manufacturers that produce officially branded Android devices (think green robot logo) must follow strict contractual mandates. Amazon's Fire devices are an example of an Android fork. They use modified Android OS code but do not comply with Google's rules for official Android devices. So, Amazon cannot manufacture Android-branded devices. Samsung and LG have likely refrained from aggressively developing devices based on Android forks so as not to threaten their huge Android-branded device sales. Google faces similar challenges to its Android policies in India and Japan, and Google's recent loss to Epic Games in US federal court, a suit Apple largely won, turned in part on its complex Android policies.

TikTok May Have US Data Protection Problems but They Aren’t Going Anywhere

Report from the Wall Street Journal

In Brief – TikTok, the short-video mega-phenom owned by Chinese digital giant ByteDance, continues to face credibility challenges in its effort to argue that its arrangement with Oracle to protect the data of US users, and oversee the app's recommendation algorithm, walls off the company's US operations from China. Dubbed "Project Texas", TikTok's management claims that it has implemented technology and policy firewalls between its US and global operations, especially blocking access for TikTok and ByteDance employees in China. The top two security concerns of US officials are the possibility that the Chinese Government gains access to data on US users that could be used for intelligence or influence operations, and the separate issue of China influencing TikTok's massively important algorithm that recommends the videos users see, including on sensitive news and political topics. While Oracle employees are supposedly in a position to police both risks, leaks have repeatedly undermined company credibility, including ongoing claims that China-based employees receive data related to US users and charges that Oracle employees lack the bandwidth to track and understand the constant stream of algorithmic code changes from China-based workers.

Context – TikTok is the social media platform that US politicians most love to hate, especially Republicans (which is saying something given the vitriol directed at Meta and Google in particular). Regardless, the prospect that legal, legislative, or regulatory action will shut them down anytime soon seems remote. Yes, Iowa's Attorney General is the latest to sue TikTok to force it to change its operations, but federal judges have been very sympathetic to claims by the company and its users that the service is protected by the 1st Amendment, whether in courts rejecting Trump Administration efforts to shut down the app or blocking Montana's legislation to ban it. Yes, they are part of the class action lawsuits targeting all the big social media platforms for negligent design, and they also face various shutdown threats in foreign markets.

Corporate Minimum Taxes Moving Ahead While Digital Tax Deal Lags

Report from the Wall Street Journal

In Brief – Dozens of countries led by EU member states, Japan, South Korea, and Switzerland have enacted legislation to adopt the 15% corporate minimum tax plan included in the Organization for Economic Cooperation and Development (OECD) 2021 global tax agreement, but the United States has not. The result is that some US-based multinational corporations (MNCs), including Boeing, Pfizer, and Johnson & Johnson, may face increased global tax burdens. The two-part OECD agreement aims to address two long-running concerns – the right way to tax digital giants like Google and Apple, and frustration with "tax havens". The "tax haven" half of the deal, dubbed "Pillar 2", commits countries to set a minimum corporate tax of 15% on large MNCs and allows them to impose a top-up tax when an MNC operates in a country that does not tax it at least 15% on its domestic earnings. For example, if a subsidiary's profits are taxed at just 9% locally, an implementing country can collect the remaining 6 percentage points. The OECD estimates governments will raise up to $192 billion more globally, although many analysts expect the actual figure to be lower.

Context – We keep tabs on the OECD tax agreement because of the link to Digital Services Taxes (DSTs) intended to increase taxes on 20 or so large, mostly US-based, digital companies. Many governments, especially in Europe, argue that digital companies are not properly taxed where they make their sales because they can operate across borders so easily. During the Trump Administration, efforts by France to impose a DST were thwarted by threats of tariff retaliation. Talks at the OECD dragged on. President Biden changed tack in 2021, shifting the US priority to setting a 15% global minimum tax and winning agreement to replace DSTs with a tax on highly profitable consumer-facing companies beyond just digital firms. That’s dubbed “Pillar 1”. Agreement was reached at the OECD but countries need to legislate changes. Both pillars stalled in the US Congress. A few countries have implemented national DSTs, but most have agreed to refrain until the end of 2024. Canada is an outlier and is threatening to implement a DST this year. They face bipartisan threats of retaliation in the US.

Meta’s Zuckerberg Joins Top App Developers Lambasting Apple’s DMA Plan

Report from TechCrunch

In Brief – Apple’s plan to modify its app and app store rules to comply with the EU’s Digital Markets Act (DMA) are being criticized, often harshly, by a growing number of industry elites, most recently Meta CEO Mark Zuckerberg. The iPhone giant has announced that it will roll out separate versions of its iOS operating system and App Store in Europe to allow for the “sideloading” of apps and the use of alternative payments services. It is establishing a set of rules and processes to protect users under the new regime, including approving and monitoring sideloaded apps using App Store standards. There are also major fee changes, including reducing Apple’s general commission from 27% to 17% (and 15% to 10% for subscription renewals), but also establishing a new 50-euro cents charge for every app download after an app is downloaded one million times in a 12-month period. Like in other markets that are mandating in-app payments options, Apple is giving app developers a 3% lower fee when they use payments alternatives. Zuckerberg said Apple’s new rules were at “odds with the intent of” the DMA, “so onerous” that he doubted if any developer adopted them, and that Facebook was not currently planning its own app or game store.

Context – Among the DMA’s 18 mandates are non-discrimination and interoperability requirements expected to force big changes to Apple’s “walled garden”. Many of the largest app developers, including Spotify and Epic Games, as well as fellow digital giants like Microsoft and Meta, have been criticizing Apple for years, and have all come out against Apple’s plan. The primary focus is on Apple fees. They argue that Apple is not lowering fees enough to meaningfully help consumers or increase app store options. We’ve been saying for years that the fight over in-app payments choice was based on a phony premise that app developers wanted payments options. They don’t. They want to pay lower fees. If Apple won’t lower them a lot, they want regulators to force it. The DMA compliance system is unclear and untested. This Apple fight will be a learning experience.

The CPSC May Pin Down Amazon as Being Like a Retailer for Product Liability

Report from the Wall Street Journal

In Brief – The US Consumer Product Safety Commission (CPSC) is preparing an order to classify Amazon as a "distributor" of goods, like a retailer, when the company operates both the marketplace where a good is sold and handles that good in its distribution centers. Amazon has long argued that it is not like a retailer when a third party sells goods on its marketplace, even when Amazon stores, handles, and distributes the goods just like a traditional retailer. It claims that it is instead a marketplace, like Etsy or eBay, and simultaneously a distinct third-party logistics business, like UPS or FedEx. The dispute matters because the product safety and liability responsibilities of retail distributors differ from those of true third-party marketplaces and logistics providers. The CPSC has been engaged in the dispute for years, with Amazon agreeing to promote product safety to a greater degree, including on recalled items, but consistently fighting any designation as a distributor.

Context – Amazon is unique. They combine first-party retail sales, some true third-party marketplace sales, and a hybrid form of retail where products from third-party sellers are handled by Amazon in the same way it handles its own goods. It has basically reimagined what it means to be a retail industry wholesaler. Third-party sellers have accounted for more than 60% of the sales on Amazon since 2019, when Jeff Bezos proudly claimed that third-party sellers were "kicking our first-party butt". But the truth is that most of those third-party sales involve goods housed and handled by Amazon. They are not beating Amazon. The company often prefers them over first-party sales. Seller fees often exceed 50% of the sale price, and those sales are more profitable for Amazon than first-party sales. Plus, Amazon controls the products, the customer experience, and, some contend, the sellers. When its Marketplace and FBA (Fulfillment by Amazon) operations are considered unrelated businesses, there are big liability benefits. But in the real world, the Amazon Marketplace and FBA function as one. None of the other major online marketplaces or third-party logistics services are similar. The CPSC, the FDA, and several product liability lawsuits are really pushing the ecommerce giant on this point.

EU AI Act Clears Last Major Hurdle as Member States Approve Deal

Report from Euractiv

In Brief – The EU’s landmark AI Act creating the first comprehensive regulatory regime for AI technologies has passed what is expected to be its last big hurdle when the governments of the EU’s 27 Member States came together and unanimously approved the final version hammered out in December negotiations. The top two substantive disagreements separating the EU Parliament and the Council of Member States were the regulatory scheme for Foundational Models such as ChatGPT and EU startups such as Mistral and Aleph Alpha, and the ability of national law enforcement services to use biometric surveillance outside the strict limits imposed on the private sector. France and Germany, home to Mistral and Aleph Alpha, were reluctant to sign off on regulating foundation models, expressing opposition to December’s deal. However, Germany came around in the week leading up to the meeting of the Member State ambassadors and France did not attempt to block the measure in the end. Parliament is expected to approve the final measure this spring and it phases into full effect over a two-year period.

Context – The EU spent five years putting together the AI Act. The vision was for a tiered, risk-based system imposing rules on AI applications that did things, not on the underlying technologies. AI-enabled services would be designated as unacceptable risk, meaning prohibited (such as social credit scoring); high risk, requiring review and pre-approval from regulators (such as self-driving cars or medical devices); or limited risk, carrying transparency and testing requirements (such as photo touch-ups). Basing regulation on concrete risks is generally applauded. The Parliament changed the underlying tenor of the effort by adding regulation of general foundation models following the release of ChatGPT. The final compromise imposes more regulatory burdens on the largest foundation model companies (US-based today) and less on smaller ones (which include the current EU-based leaders). Don't expect other major governments to legislate similarly broad regulation anytime soon, given fears that such rules would drive away even very small startups.

Forget Parental Consent, Florida House Passes Social Media Age Limit of 16

Report from Politico

In Brief – The Florida House of Representatives has passed legislation that sets a strict 16-year-old age limit for holding an account on a social media service. The measure, which passed by an overwhelming vote of 106-13, attempts to define social media platforms based on their tracking of users' activity and their targeting of content using information derived from that tracking, while excluding a wide range of digital platforms, including those primarily used for email, direct messaging, streaming, news, sports, non-user-generated entertainment, online shopping, gaming, photo editing, and professional networking. Unlike social media bills enacted in states like Utah, Texas, and Ohio, the Florida bill does not require some form of parental consent for teen users; it simply bans the services for anyone under 16. The platforms would be required to use a "reasonable age verification method" to ensure that new and existing users meet the age standard. Echoing a growing number of lawsuits claiming that social media platforms are intentionally addictive and harmful, especially to younger users, the bill's definition of a social media platform includes that it "utilizes addictive, harmful, or deceptive design features… to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform."

Context – Although the data showing causality between teen mental health problems and social media use is thin, the effort to cordon off large parts of the internet and regulate how platforms serve teens is proceeding apace. Constitutional problems are often not a deterrent to state legislators, but the 1st Amendment is proving a tough hurdle. The words of Federal District Judge Algenon Marbley when he recently blocked Ohio's social media parental consent law are noteworthy. He focused on 1st Amendment problems, said the measure was not narrowly tailored enough to survive constitutional scrutiny, criticized the criteria used to define social media as undefined and malleable, and questioned the law's "eyebrow-raising" exceptions. Expect more of that.

Amazon Reduces Police Access to Stored Ring Camera Videos

Report from the New York Times

In Brief – Amazon has announced that it is ending a feature of its Neighbors app that allowed law enforcement agencies to easily access Ring surveillance camera videos from users who had downloaded the app and signed up for law enforcement sharing. Instead, the company will require law enforcement in all cases to obtain a judicial warrant for video from a camera and present it to Amazon. Although the digital giant did not explain the reason for the major change in policy, the use of Ring videos by law enforcement has been criticized by civil libertarians and advocates for racial minorities.

Context – So much attention is given to AI chatbots and large language models that other AI policy areas like surveillance and facial recognition can fall through the cracks. Don't forget that as the EU worked to wrap up its landmark AI Act, the two final controversies were regulating "Foundation Models" like ChatGPT and the guardrails around surveillance and biometric identification such as facial recognition, in particular the leeway given to law enforcement and national security. Before OpenAI burst on the scene, Clearview AI was the best-known, or at least most notorious, AI start-up. When the New York Times produced its exposé on Clearview AI, the firm had just a handful of employees and limited financing, but it had built a "search engine for faces" that rivaled the capabilities of tech giants like IBM, Microsoft, and Google. Data protection authorities in France, Canada, Australia, Italy, Greece, and the UK quickly challenged the start-up, as did US states including Illinois and Vermont. Eventually, the company drastically pared back its business aspirations, agreeing to offer its face-based search service only to government authorities in select countries, particularly the United States. US security agencies and police use it, and federal legislation to pare that back remains stalled. A further step in the web of AI and facial recognition was a report that law enforcement has used a service that claims to create face drawings from DNA and then run them through facial recognition services.

Biden Executive Order on AI Expected to Face Legal Challenge for Overreach

Report from Politico

In Brief – Conservative advocacy organizations and tech industry trade groups are preparing to launch legal challenges to the Biden Administration’s massive executive order on artificial intelligence if the Administration enforces mandates on unwilling tech companies. The White House is using the decades-old Defense Production Act (DPA) to impose several unprecedented requirements on companies developing the largest AI foundation models, such as sharing information with the federal government when training their models and providing results of so-called “safety” tests. The Administration argues that large AI systems pose serious risks to national security, economic security, and public health and safety, justifying application of the DPA. Republican lawmakers and conservative policy thought leaders, with backing from some in industry, are criticizing some requirements as interfering with research and undermining innovation. They are threatening court challenges arguing some mandates exceed federal authority under the DPA or any other federal statute.

Context – The AI Executive Order is a large and varied collection of mandates and policy directives. Some may be vulnerable to legal challenges from companies or researchers directly impacted, especially from a federal judiciary more receptive to arguments under the “Major Questions” doctrine and other challenges to expansive regulatory activity. But much of the order is on more solid ground by using the federal government’s buying power to promote safe AI practices, directing federal agencies to use AI to bolster cyber defenses, and calling for agency and congressional action to increase digital privacy and combat algorithmic bias. The biggest macro policy question when it was released last fall was whether governments were moving in the direction of strict AI regulation or more cooperative efforts with big AI companies to promote safety while providing flexibility for research and innovation. That remains the case today. While the EU continues to plod toward regulation, the tide is more toward the latter, including at the G-7 and in the US Congress.

Senate Show Trial for Social Media CEOs Is Everything That Was Expected

Report from CNBC

In Brief – The CEOs of five social media platforms accused of harming teen users testified before the Senate Judiciary Committee and were the targets of criticism and abuse. The four-hour event featured Meta's Mark Zuckerberg, a relative veteran of congressional hearings, TikTok's Shou Zi Chew, who is quickly catching up, and first-timers Linda Yaccarino of X, Evan Spiegel of Snap, and Jason Citron of Discord. Last year, the Committee reported out a collection of bills to regulate how digital platforms serve teens, and many of the Senators warned the executives that Congress is poised to pass legislation to stop what they consider a scourge of online harms to young people. Some Senators threatened to repeal or drastically pare back Sec. 230. TikTok was harshly singled out by several Republican Senators for links to China, and Meta's Zuckerberg was confronted many times regarding internal documents showing company debates over how to address harms to teen users.

Context – The Senators were trying to recapture the hype of the blockbuster 2020 hearing of the House Antitrust Subcommittee that saw the CEOs of Amazon, Apple, Google, and Facebook grilled about alleged competition abuses. The Senate Judiciary Committee includes some of the most strident, and loud, tech critics in Congress. But their standard flamethrower styles are nothing new. Instead, the feature that really made for a show trial atmosphere was filling the room with friends and family members of teens who suffered a range of harms, including self-harm, while using social media platforms. Performance art. But the 2020 CEO hearing did not lead to any major Big Tech antitrust reforms. Backers blamed evil tech money and lobbyists. In reality, volume on Capitol Hill is often not a sign that legislation is on a fast track; it's often a sign that it's not. Instead, states are rushing in, especially with social media parental consent and age limit laws. But they are running into 1st Amendment problems. Potentially more expensive threats are the major civil lawsuits gaining traction that aim to circumvent Sec. 230 and pin liability on the social media platforms via claims of negligent design.
