Archive – 2023

October 2023

Biden Executive Order on AI Aligns Federal Agencies with Safe AI Practices

Report from the New York Times

In Brief – President Biden has issued an Executive Order (EO) on Artificial Intelligence that invokes the longstanding Defense Production Act to require companies developing AI foundation models that pose a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training their models and share the results of safety tests. The national security standards will be released next year, although many specifics may remain classified. The EO also aims to leverage the federal government’s buying power to promote safe AI practices, modifies H-1B worker visa standards to prioritize applicants with AI expertise, directs agencies to use AI to bolster cyber defenses, and calls for agency and congressional action to bolster digital privacy and combat algorithmic bias.

Context – The Biden Administration EO is being released just prior to the AI summit hosted by the UK Government. The biggest question continues to be whether governments are moving in the direction of strict AI regulation or more cooperative efforts with big AI companies to promote safety while providing flexibility for research and innovation. The tide continues moving toward the latter. As we’ve been saying for months, if the US Congress legislates, expect the focus to be on how the US Government itself uses and evaluates AI tools. That aligns with the new EO. The EU, in the middle of legislating a far more regulatory AI Act, is at the other end of the spectrum. The Commission proposed a tiered, risk-based approach to regulate AI-enabled services, not underlying technology, but the European Parliament’s version expanded the regime to regulate foundation models. Senior European officials are now raising concerns about over-regulation, and negotiations to finalize the AI Act may be slipping. Finally, more tech policy schizophrenia in the UK. The UK Government has called out Brussels for over-regulation, but Prime Minister Sunak regularly talks up potential big security risks. That said, his Bletchley Park AI summit is not a forum for regulation, his government has released a compendium of big company AI safety policies that are basically safe AI commitments, and rather than legislate, the UK is creating an AI Safety Institute to study the global state of the technology.

Intuit’s AI Content Moderation Looks Like Social Media Content Moderation

Report from the Wall Street Journal

In Brief – Intuit, maker of TurboTax, is building a Generative AI (GAI) tool to offer financial advice based on a customer’s data. Along with using algorithmic guardrails, the company is building a team of content moderators to help ensure the system is not abused by users and does what the company wants it to do.

Context – From The More Things Change, The More They Stay the Same Department, let’s start with the phenomenon of GAI “hallucinations”, which is made-up or false AI output. Hallucinations highlight how GAI creators don’t fully understand why their systems produce what they do in any individual instance. And they suspect the phenomenon is a feature of the model. But hallucinations are just one type of bad output. There are many kinds of undesirable results. Take the story of Amazon’s AI-enabled Alexa saying that the 2020 Presidential election was stolen. Not desirable to them. And that’s just one of potentially many thousands of bad results from a system producing almost unlimited results overall. How to solve that problem? User-generated content offers the closest parallel to AI-generated content: massive volumes of material from creators the platform does not fully control. The solutions have been technological and human. “Guardrails” are basically algorithms used by the AI operator to directly instruct the system to do certain things, overriding results and blocking various words. Digital platforms have been using them for decades. Commerce platforms scan for key words associated with banned products. Social media sites scan for key words related to illegal and objectionable activity. Amazon will likely add a guardrail to tell Alexa what to say about the 2020 election. But there has always been a second “Trust and Safety” moderation tool, which is human reviewers. Algorithmic scans are empowered to block things that are black & white prohibited, but they “flag” things in the many grey areas. Humans often check them. Ironically, large digital platforms have been claiming for some time that they are using AI to improve the tech side of content moderation processes. And with all that, nobody does it close to perfectly.
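
For the technically curious, here is a minimal sketch of that two-tier block-or-flag pattern in Python. The term lists and sample inputs are entirely hypothetical, and real moderation systems use trained classifiers rather than simple word matching, but the division of labor is the same: the algorithm acts alone on clear-cut cases and routes the grey areas to humans.

```python
# Illustrative sketch only: hypothetical term lists, not any platform's rules.

BLOCKED_TERMS = {"banned-product-x"}        # black & white: auto-block
FLAGGED_TERMS = {"election", "diagnosis"}   # grey areas: human review

def moderate(text: str) -> str:
    """Return 'block', 'flag' (route to a human reviewer), or 'allow'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKED_TERMS:
        return "block"   # the algorithm acts on its own
    if words & FLAGGED_TERMS:
        return "flag"    # a Trust & Safety reviewer decides
    return "allow"

if __name__ == "__main__":
    for sample in ["Buy banned-product-x now!",
                   "Here is my take on the election.",
                   "Here is a cookie recipe."]:
        print(moderate(sample), "<-", sample)
```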

Speculating on What Speaker Mike Johnson Means for Digital Platform Issues

Report from the Washington Post

In Brief – As Rep. Mike Johnson (R-LA) unexpectedly ascends to the position of Speaker of the House, experts and analysts are wondering what it means for policy. Based on his service on the House Judiciary Committee as a seemingly loyal lieutenant of Chairman Jim Jordan (R-OH), some speculate that he won’t support major antitrust reforms, will pressure big platforms to avoid restricting conservative viewpoints, and will threaten Sec. 230 protections if the platforms moderate aggressively.

Context – Johnson has the shortest track record in Congress for someone elected Speaker since 1883. He is unquestionably conservative, especially on social and cultural issues. The Judiciary Committee is home to highly partisan and ideological hot button issues, including abortion, immigration, and gun control. His Republican colleagues put him there for a reason. Chairman Jordan (R-OH), another outspoken cultural conservative, was nominated for Speaker before Johnson. On the committee, Johnson has largely aligned with Jordan on digital policy issues. That could be because they agree. It could be because Johnson was loyal to his committee leader. Probably some of both. That said, Jordan, backed by Johnson, criticized social media platforms led by progressives for (allegedly) censoring conservative viewpoints, a long-simmering frustration among conservatives that rose in prominence over issues such as Covid policies, election integrity, and so-called hate speech. Committee Republicans rejected Big Tech antitrust reforms empowering federal agencies, especially the FTC, and instead focused efforts on stopping ideological content moderation, especially when pushed by government officials. This included threats to repeal Sec. 230. The result has been that most antitrust reform, content moderation regulation, and Sec. 230 changes have been stymied by partisan differences, and thoughtful policy analysts did not see meaningful bills being enacted this Congress regardless of who was Speaker. It is noteworthy that Johnson co-hosted a bipartisan congressional forum on AI. He’s probably interested. Many AI regulation bills have the FTC enforcing rules. I wonder what he thinks about that.

The JFTC Joins the Global Investigation of Google’s Android and Search

Report from CNBC

In Brief – The Japan Fair Trade Commission (JFTC), the country’s antitrust regulator, has announced that it has opened a probe of Google policies that allegedly push smartphone makers to use Google’s search and web browser applications as the defaults on their devices. The competition enforcer is asking third parties to provide information and comments by November 22nd regarding Google’s practice of requiring Android device makers to install the Google Search and Chrome browser as defaults if they pre-install the popular Google Play app store on their devices, as well as agreements made by Google to share some search-based ad revenue with device manufacturers when Google Search is set as the search default on their phones. The JFTC claims that it is asking for third party input at an earlier stage than usual for a single-company investigation in keeping with its June 2022 policy statement on competition in digital markets.

Context – Google’s antitrust problems began earlier than the other digital giants. They are a decade into major EU competition cases involving their Search, Advertising and Android businesses. Most Google antitrust challenges still generally fall into one of those buckets. In this case, the JFTC is looking at the kind of Android practices that led the EU to impose a massive $5.1 billion fine in 2018. While Google argues that its Android Operating System is open source and can be used for free by developers and manufacturers, manufacturers of official “Android” devices (think green robot logo), have been required to follow contractual mandates including regarding apps. Antitrust regulators in South Korea and India have been targeting a range of Google Android practices, including restrictions on manufacturers developing Android “forks”, as well as how Google uses its payments service to collect its commissions. The huge and lengthy antitrust trial pitting the US Department of Justice against Google also targets Google policies intended to build market share for its search service, in particular financial incentives the company uses to be the default search engine for other companies’ services.

Google Asks a Federal Court to Dismiss One of the Many AI Training Lawsuits

Report from Reuters

In Brief – Google has asked a federal court in California to dismiss a proposed class action lawsuit that claims the company’s use of data available on the internet to train generative artificial intelligence systems violates millions of people’s privacy and property rights. Google’s motion argues “Using publicly available information to learn is not stealing. Nor is it an invasion of privacy, conversion, negligence, unfair competition, or copyright infringement.” The suit alleges that Google misused content posted to social media and shared on Google platforms to train its AI systems. Google’s General Counsel denied that the company used non-public information from services like Gmail in AI training without permission.

Context – Every week another lawsuit is filed against large companies that built Generative AI systems alleging copyright or privacy violations in the “training” of their neural networks. Copyright holders have been especially litigious, including writers, visual artists, and musicians. It is now well known that large neural networks have processed unimaginably large bodies of data collected from the internet, including texts, images, and music. Huge amounts of it were copyrighted material; much of the rest was posted by individuals on all manner of digital platforms. On one hand, neural networks are not like traditional databases. They don’t store or retrieve training data. They learn from existing material and then produce new output. Of course, as with human creators, anything created by an AI system may resemble something that already exists and was reviewed during training, and outright copying can cross legal lines. For example, a suit brought by music publishers claims Anthropic’s AI service produces directly copied song lyrics. Absent copied results, many legal analysts expect the use of legally obtained copyright works in training to be considered “fair use”, which is a relatively robust exception in US copyright law. In the US, federal courts will be key, as Congress is unlikely to legislate anytime soon. But in the EU, the Parliament’s version of the AI Act regulates generative AI foundation models including a transparency requirement to identify copyrighted materials used in AI training.

Amazon Offered to Settle Price-Fixing Antitrust Suit in California in 2022

Report from Bloomberg

In Brief – On September 8, 2022, Amazon unsuccessfully offered to settle an antitrust investigation by the California Attorney General’s office into company policies that the state alleges push sellers on the Amazon marketplace to raise prices on other websites to match their price on Amazon, harming consumers by setting the price on Amazon as an artificially high price floor. Amazon’s lead competition lawyer reportedly offered to have Amazon change its seller notices and pricing policies to make clear that it doesn’t require price parity with other websites. The offer did not lead to a settlement and the California AG filed its antitrust lawsuit in California Superior Court a few days later. In March of 2023, Superior Court Judge Ethan Schulman tentatively rejected Amazon’s motion to dismiss the case.

Context – The charge that Amazon is engaged in anticompetitive “price fixing” is gaining traction in US courts. Most critics argue that the product search algorithms on the Amazon marketplace penalize retailers who offer lower prices on other websites, protecting Amazon’s high seller fees from lower-fee competition. Many sellers say they could sell products at lower prices on other lower-fee venues, but if they do that, they lose their sales on Amazon, which is the largest online marketplace. So they raise prices elsewhere instead. Along with the California lawsuit, a similar consumer class action complaint has survived Amazon’s motion to dismiss in federal court in Washington State, and the same set of Amazon policies is a part of the FTC’s recently filed antitrust complaint. Central to the debate is Amazon’s role as the largest ecommerce logistics services provider, especially for small online sellers. Just as Amazon’s algorithms push sellers to keep their prices off Amazon no lower than their Amazon price, they also reward sellers who buy Amazon logistics. Amazon settled an antitrust investigation in the EU with concessions related to its search algorithm and logistics and is proposing a settlement in the UK. Amazon’s September 2022 offer to the California AG appears to fall short of the EU settlement by not involving its product search algorithm or Buy Box results.

Top Consumer Review Companies Launch Coalition for Trusted Reviews

Report from AP

In Brief – A collection of the leading and most trusted digital platforms using consumer reviews to inform people as they make many types of purchase, travel, and other decisions, including Amazon, Booking, Expedia, and Tripadvisor, has formed the Coalition for Trusted Reviews to help advance corporate and government efforts to combat fake and deceptive online reviews. The group, which came together after members met last October at a Fake Review Summit hosted by Tripadvisor, will look to identify and promulgate a range of best practices for hosting trustworthy online reviews and share methods for detecting and avoiding fake ones. It is widely understood that there is a commercial fake review industry, with illicit enterprises, often based outside the United States, soliciting phony customer reviews at scale in exchange for money or other benefits, offering both to boost businesses with positive reviews and harm business competitors by engineering fake negative reviews.

Context – One of the great benefits of the internet has been the ability to apply “the wisdom of crowds” to countless consumer decisions, from the everyday to major purchases and experiences, with online reviews consistently being cited among the most trusted sources to help make decisions. At the same time, fakes, frauds, and deceits have always been part of the ecosystem. The US FTC has been providing truth-in-advertising direction to the online influencer sector since 2017 and released its updated “Endorsement Guides” in June. The agency is also proposing to move beyond guidelines and recommendations and issue binding regulations regarding unfair and deceptive practices in digital marketing, including regulating fake reviews and practices such as paying for positive (or negative) reviews, suppressing negative reviews, or buying or selling followers or other means of online influence. The UK Competition and Markets Authority updated its guidance for online influencers last December and the UK Government’s draft Digital Markets, Competition and Consumers Bill includes a section aimed at policing fake online reviews as well.

EU DSA Supervisory Fee Mechanism Rewards Not Being Profitable

Report from Bloomberg

In Brief – The EU’s landmark Digital Services Act (DSA) imposes new rules on how digital platforms address illegal and highly objectionable content, with the strictest mandates falling on the largest platforms. While most digital platforms will deal with a Digital Services Coordinator in their member state of establishment in the EU as their DSA regulator in a manner akin to the GDPR’s One Stop Shop mechanism, the largest platforms, categorized as Very Large Online Platforms (VLOPs), will have the European Commission serve as their DSA regulator. The DSA legislation included a funding mechanism, framed as a bank-style supervisory fee, to provide the Commission with resources for that task. The Commission estimates it needs €45.2 million ($47.7 million) to enforce the Digital Services Act this year and has determined that each VLOP will pay a fee not to exceed 0.05% of its 2022 profit. Of the 19 initial VLOPs, platforms including Amazon, Snap, Pinterest, and X (formerly Twitter) will not be charged any fees because they did not report a net profit in 2022, while Google and Meta are expected to carry about €31 million, almost three-quarters of the total. Some are reportedly critical of the methodology, alleging that X’s laxer content moderation practices and Amazon’s lawsuit challenging its designation as a DSA VLOP are taxing the Commission’s enforcement resources to a disproportionate degree while neither contributes to the enforcement funds.
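
To make the mechanics concrete, here is a stylized sketch of how a profit-capped fee produces exactly this outcome. The allocation key, redistribution rule, and all the numbers below are my own assumptions for illustration, not the Commission’s actual methodology; the one feature taken from the reporting is the cap at 0.05% of 2022 profit, which zeroes out the loss-makers and pushes the burden onto the profitable giants.

```python
BUDGET = 45.2e6      # EUR, the Commission's 2023 enforcement cost estimate
CAP_RATE = 0.0005    # fee capped at 0.05% of a platform's 2022 net profit

# name: (assumed share of the allocation key, made-up 2022 net profit in EUR)
platforms = {
    "BigAdCo":  (0.40, 60e9),
    "SocialCo": (0.30, 23e9),
    "ShopCo":   (0.20, -2.7e9),   # reported a loss, so its cap is zero
    "ChatApp":  (0.10, -1.4e9),   # ditto
}

def allocate(budget: float) -> dict[str, float]:
    """Hand out the budget pro-rata, re-spreading anything above a cap."""
    caps = {n: max(p, 0.0) * CAP_RATE for n, (_, p) in platforms.items()}
    fees = {n: 0.0 for n in platforms}
    remaining, open_set = budget, set(platforms)
    while remaining > 1e-6 and open_set:
        total_share = sum(platforms[n][0] for n in open_set)
        hit_cap = set()
        for n in list(open_set):
            ask = remaining * platforms[n][0] / total_share
            room = caps[n] - fees[n]
            fees[n] += min(ask, room)
            if ask >= room:
                hit_cap.add(n)
        remaining = budget - sum(fees.values())
        open_set -= hit_cap
        if not hit_cap:   # nobody capped out: budget fully distributed
            break
    return fees

# With these made-up numbers the caps bind, the loss-makers pay nothing,
# and the Commission collects less than its full estimate.
for name, fee in allocate(BUDGET).items():
    print(f"{name}: {fee / 1e6:.2f}M EUR")
```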

Context – It is interesting to see Amazon on the list. Reinvesting its massive revenues, often into its logistics business, to the point that the company has regularly reported scant profits, has often resulted in Amazon paying little to no corporate income taxes in Europe. Amazon’s ability to avoid showing profits is the major reason that when EU member states like France created Digital Services Taxes they taxed company revenues not profits. On the other hand, the social media companies on the list to pay no DSA fees have taken revenue hits in recent years from Apple’s policy to block user tracking for ads.

High Court to Determine When Government Pressure is Social Media Censorship

Report from the Washington Post

In Brief – The Supreme Court will hear arguments in a social media censorship case involving efforts by senior Biden Administration officials to influence how digital platforms moderate content and will allow the government to continue communicating with the companies while it decides the case. The court’s move follows a ruling of the Fifth Circuit Court of Appeals that senior federal government officials likely violated the First Amendment by coercing social media platforms to restrict posts regarding issues like the coronavirus pandemic and elections. The appeals court imposed an injunction on the White House, Office of the Surgeon General, Centers for Disease Control, FBI, and the Cybersecurity and Infrastructure Security Agency (CISA), blocking them from actions to “coerce or significantly encourage social-media companies” to restrict online posts. The Biden Administration appealed to the Supreme Court and the high court’s stay of that injunction will be in effect until the case is decided by the full court, likely sometime next year. Three of the court’s conservative justices, Samuel Alito, Clarence Thomas, and Neil Gorsuch, dissented in the decision to stay the injunction.

Context – This case is following the same trajectory as the challenges to laws enacted by Florida and Texas in 2021 that regulate large social media platforms. All the cases rest on allegations that social media platforms have censored predominantly conservative viewpoints. The Biden Administration successfully appealed to the Supreme Court to take up both sets of cases, first arguing that digital platforms have the right to make content moderation decisions and now that the Federal Government has the right to express its views on those decisions. In both instances, the same three conservative justices objected to the majority’s decision to put a controversial new policy on hold until the matters are fully decided: first the stay of Texas’s social media law, and now the injunction blocking the Biden Administration from pressuring platforms. The other three conservative justices will be most in the spotlight on both cases next year.

Clearview AI Wins Appeal in UK to Avoid Privacy Sanction

Report from the BBC

In Brief – Controversial facial recognition company Clearview AI has won an appeal in UK court overturning a £7.5 million fine from the Information Commissioner’s Office (ICO), the country’s data protection and privacy authority. The company, which built and trained its service using billions of photos scraped from public social media profiles, has faced numerous legal and regulatory challenges since it emerged from obscurity in 2020. Although the company argued that the UK regulator did not have jurisdiction over the business because it has no operations or customers in the country, the court did not accept that argument. Instead, the company avoided sanction because its clients are now exclusively foreign government agencies, and British data protection law provides an exemption for foreign law enforcement authorities. In response to the decision, the ICO said, “It is important to note that this judgement does not remove the ICO’s ability to act against companies based internationally who process data of people in the UK, particularly businesses scraping data of people in the UK.”

Context – Clearview AI might have been the most widely known, or at least notorious, AI start-up prior to OpenAI publicly releasing Chat-GPT. When the New York Times produced its expose on Clearview AI, the firm had just a handful of employees and limited financing, but it built a “search engine for faces” that seemed to outperform tools from tech giants including IBM, Microsoft, and Google. Data protection authorities in France, Canada, Australia, Italy, Greece, and the UK quickly challenged the start-up, as did US states including Illinois and Vermont. Eventually, the company drastically pared back its business aspirations, agreeing that it would only offer its face-based search service to government authorities in select countries, in particular the United States. Facial recognition used by law enforcement continues to bedevil debates around AI policy. The EU’s draft AI Act significantly limits biometric monitoring but divisively exempts many uses by security services, and a recent GAO report notes that many US Government agencies are still accessing Clearview AI.

A New Way to Influence Digital Policy If You Have the Money – Directly Fund Staffers!

Report from Politico

In Brief – Open Philanthropy, a non-profit organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms, is using an affiliated non-profit called the Horizon Institute for Public Service to fund the salaries of more than two dozen “fellows” to work on the staffs of key congressional offices, federal agencies, and think tanks. The staff fellows work on Artificial Intelligence (AI) and Biosecurity policy. All three Senators who have been tapped by Majority Leader Chuck Schumer (D-NY) to assist him on developing AI legislation — Sens. Martin Heinrich (D-NM), Mike Rounds (R-SD) and Todd Young (R-IN) — have a Horizon fellow on staff, as does Sen. Richard Blumenthal (D-CT), a leading advocate of digital regulation who is proposing a national AI licensing regime. Along with placing policy professionals in congressional offices, including on key committees, current and former Horizon fellows are now working at the Department of Defense, the Department of Homeland Security, and the State Department, as well as in leading think tanks contributing to the debate on AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology.

Context – As someone who has worked for more than 30 years in public policy and lobbying, both on Capitol Hill and in the middle of a wide range of influence campaigns, it’s rare when I come across something I’ve not really seen before. This is one — directly funding experts to work as “fellows” inside congressional offices, government agencies, and think tanks on policy you care about. The report says the first-year budget was $3 million and the organization identifies 25 fellows, but how much more could it scale? Honestly, the last public policy influence strategy in the digital space that I similarly thought, “Wow, that’s smart… If you have the money” was when the Wall Street Journal reported in 2017 on Google’s massive years-long and multi-million-dollar project to fund antitrust policy work by leading law professors across the country. That was not technically reported as lobbying either. If you’ve seen this kind of paying for staff before, at scale, let me know at .

California Judge Buys Argument That Social Media Might Be Defectively Designed

Report from Bloomberg

In Brief – A Superior Court Judge in Los Angeles is allowing hundreds of plaintiffs to proceed with a lawsuit in California state court alleging that Facebook, Instagram, Snapchat, TikTok, and YouTube were negligent in the design of their services in ways that led to users, especially children, becoming addicted and suffering harms including depression, anxiety, self-harm, and suicide. Judge Carolyn Kuhl rejected outright six of the eight claims brought by the plaintiffs but is allowing the negligence claim to proceed, along with a second claim against Meta for concealing known risks to children on Instagram and Facebook. In her order, Judge Kuhl’s rejection of company arguments that Sec. 230 of the Communications Decency Act and the First Amendment protected them from the negligence claim drew heavily on the Federal Ninth Circuit Court of Appeals ruling in 2021 that Section 230 did not protect Snap, Inc. from facing a claim of negligent product design from the parents of teenagers who died in a fatal car accident while using Snapchat’s “Speed Filter”.

Context – Critics of social media platforms have been trying for years to construct legal strategies to circumvent Sec. 230. Two of the most common avenues are that the harms are caused by the conscious design of the platforms to encourage longer and more intensive use, which the companies knew, or should have known, would lead to harms, or that the use of algorithms by the platforms is an activity that is not protected by the federal statute. Similar cases regarding negligent design are also proceeding in federal court in California. The argument that algorithms are not covered by Sec. 230 was teed up for the US Supreme Court in the case of Gonzalez v. Google with oral arguments in February. The High Court passed on the opportunity to decide that question after ruling unanimously in a related case that Twitter was not liable for terrorist attacks despite some pro-terrorist material appearing on its platform. A growing number of US States are passing laws claiming to protect kids from online harms, including age limits and parental approval requirements. They are running into First Amendment problems.

Space Force Prohibits Employees from Using Internet-Based Generative AI Services

Report from Bloomberg

In Brief – The US military’s Space Force has informed its staff that they are not permitted to use Generative AI (GAI) tools found on the internet while on duty or using government data. Although the Space Force memorandum included positive messages about the value of AI tools, it cited cybersecurity concerns regarding the data handling practices of publicly available GAI services as the reason for the policy, which was described as temporary. The memorandum elicited pointed criticism from officials with Ask Sage, a GAI tool that claims to be based on Chat-GPT and similar services “fine-tuned with a vast amount of government-related data, including FAR, DFARS, DoD policies and governance documents, technical content, biographies, and more” and that is reportedly used by employees across the federal government, including at the Department of Defense and within the Space Force.

Context – How governments “safely” use AI tools is just one of the many challenging issues related to GAI, large language models (LLMs), and other neural network-based AI services. Issues related to data security are not dissimilar to legal challenges surrounding copyrighted works being used to “train” neural networks. LLMs are not traditional databases that copy and store searchable data and respond to inquiries with fixed results. Instead, they are statistical models that have ingested and processed massive amounts of data that “learn” to compile high quality responses to queries by determining which fragments of data best follow other sequences. Developers are not certain themselves exactly why their model responds as it does in any specific case, hence the chatbot “hallucinations” issue. It’s likewise never clear if data used in training will appear in a model’s response at some future point, including data from queries. That is one reason for concerns from the Space Force and other security agencies and it is leading to the development of corporate-focused specialty AI tools that claim greater data security. Finally, if the US Congress legislates on AI policy, expect directions on how government uses AI rather than overall regulation.  
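
A toy example helps illustrate the retrieval-versus-generation distinction the paragraph above draws. The following is a deliberately crude simplification (a bigram counter over made-up training text), nothing like a real LLM’s neural network, but it shows the core mechanic: output is produced token by token from learned statistics, with no stored record being looked up, yet fragments of training data can still resurface verbatim, which is exactly the security and copyright worry.

```python
from collections import Counter, defaultdict

# Made-up "training data" for illustration only.
training_text = "the mission data is secure the mission data is classified"

# "Training": count which token tends to follow which.
counts: defaultdict[str, Counter] = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 5) -> str:
    """Emit the statistically most likely next token at each step."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# The model never "looks up" a document; it just continues a sequence,
# and pieces of the training text can still reappear in the output.
print(generate("the"))  # -> "the mission data is secure the"
```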

Korean Regulator Fines Google and Apple Over In-App Payments

Report from TechCrunch

In Brief – The Korean Communications Commission (KCC) has announced that it will fine Google $35.4 million and Apple $15.2 million for violating the country’s in-app payments law. Enacted in 2021, the legislation requires mobile app store providers to allow app developers to use third-party payment options for in-app purchases. Both Google and Apple eventually implemented changes to their South Korea app stores to comply with the law, although both were criticized by some developers for structuring their new policies in ways that continued to discriminate against developers that chose alternatives and continued to impose unfairly high fees. Both Google and Apple responded to the KCC’s announcement with statements saying that they disagreed with the decision, believed that they were complying with the in-app payments law, and will continue to engage constructively with the regulator.

Context – Apple and Google have long used their payments services to conveniently collect their app commissions. The app developers who pay the commissions don’t like it. Legislative and regulatory challenges in South Korea and the Netherlands have been particularly out front, but India and the EU are fully engaged at this point as well. Android is the dominant mobile ecosystem in South Korea and India and so Google has been more in focus there, but Apple has also faced scrutiny and criticism. Since 2021, both Google and Apple have responded to legislative and regulatory demands to allow payments options by rolling out plans that certified approved alternatives and reduced fees by 3-to-4% when an app uses a payments alternative. However, both companies still propose collecting most of their commission on in-app purchases, which can reach 30 percent. The argument that providing payments is a service worth 3 to 4% is strong, and the rest of their commissions are in line with fees on a wide range of digital markets. But app developers want to pay much less. This issue will be part of the implementation of the Digital Markets Act in Europe and is at the heart of November’s Epic Games v Google antitrust trial.

Japan Forges Ahead on Leading G7 Effort to Create International AI Principles

Report from the Japan Times

In Brief – Japan is continuing to use its year leading the G7 nations, a diplomatic forum made up of the US, Japan, the UK, France, Germany, Italy, and Canada, to foster agreement on codes of conduct to promote the safe and responsible development and use of Artificial Intelligence (AI) technologies. Prime Minister Fumio Kishida has announced his intention to follow up on the mid-year creation of the G7’s Hiroshima AI Process by proposing guidelines to coordinate how companies building Generative AI services help distinguish AI-created content, especially images and video, from authentic material. Other goals for non-binding codes of conduct to help protect against potential societal harm created by AI systems include cybersecurity controls and risk management systems.

Context – The big question is whether AI regulation globally is moving in the direction of the EU’s AI Act, the bloc’s prescriptive legislation governing the development and use of AI-enabled services and that would then spread globally through the so-called regulatory “Brussels effect”, or not. The answer is Not. The global trend is clear, and it looks like the US approach of AI principles and gathering corporate commitments to adhere to “responsible” practices like AI watermark standards and third-party audits of AI systems. Kishida’s plan appears to fit into watermarking standards. The UK Government is hosting a global AI Safety Summit in November that is noteworthy for including China. China is even ahead of the EU in regulating AI, but its focus is on ensuring AI services respect the so-called “Great Firewall” content limits, hardly a global model. The Canadian Government is also talking about leading on AI governance, but the CEO of Canadian tech giant Shopify attacked the idea if it meant actual regulation. Finally, while European Commissioner Vera Jourova said on her way to the latest meeting in Japan that there was much high-level agreement between the EU and Japan, not only is Japan not on board with a legislative plan, but Asia’s ASEAN nations are consolidating around a flexible, principles-based model to promote business innovation.

Utah Sues TikTok Alleging Deceptive Practices That Harm Children

Report from Reuters

In Brief – TikTok has been sued in Utah state court by the state’s Attorney General who alleges that the short-video social media phenom violates state consumer protection laws by deceptively using “highly powerful algorithms and manipulative design features” to hook young people into excessively using the service in ways that harm their “mental health, their physical development, their family, and their social life.” The lawsuit, which parallels similar efforts from Indiana and Arkansas, seeks civil penalties and an injunction prohibiting TikTok from engaging in the alleged deceptive business practices. TikTok responded to the suit saying that it “has industry-leading safeguards for young people, including an automatic 60-minute time limit for users under 18 and parental controls for teen accounts.”

Context – This is a convergence of two ongoing global trends: age-based challenges to social media and governments targeting TikTok. Utah’s state government leaders have been trying to get out front of both. In March, Utah enacted legislation strictly regulating social media for those under 18, including mandatory age verification and parental approval to set up an account. The age-gating phenomenon extends from a growing list of Republican-led US States to progressive jurisdictions including California, France, and the UK. But the US laws are running into First Amendment problems. Chinese-owned TikTok’s explosive popularity is leading to a diverse collection of regulatory challenges. They have faced sanctions in many countries for past data practices related to young users, as well as data security and censorship accusations related to links to China, most notably the ongoing CFIUS standoff in the US, but links to China also shut the app down in India. They were recently forced to shutter their growing ecommerce shopping service in Indonesia, as well as change their live-streaming service in Kenya due to adult content. Finally, all eyes are on the federal court in Montana that is hearing TikTok’s constitutional challenge to the state’s first-in-the-nation law actually banning TikTok downloads in the state.

EU Presses X (formerly Twitter) on Israel-Hamas Content Moderation

Report from the Washington Post

In Brief – Led by Commissioner Thierry Breton, the European Commission has opened an investigation of X, the platform formerly known as Twitter, to determine if its efforts to deal with the massive amount of content being posted to the platform related to the Hamas-Israel conflict meets the standards of the EU’s recently enacted Digital Services Act (DSA). Within days of the brutal Hamas attack and Israeli military response, X and other social media platforms have been inundated with objectionable content and calls to block, screen, or otherwise restrict material for reasons including being overly violent, hateful, false, deceptive, or fabricated. The DSA regulates how platforms address illegal and highly objectionable content, imposing mandates on large social media platforms to quickly and transparently deal with it in a way that complies with the law, protects the citizenry, and preserves freedom of expression. Breton has publicly accused X, Meta, and TikTok of falling short, reminding them of the new law and the authority to enforce it with significant fines. He has now opened an investigation of X and is demanding a full reply by the end of October. X’s owner Elon Musk and its CEO have said that the company is complying with the law and making content decisions in an open transparent manner.

Context – As platform policy expert Mike Masnick has often said, “content moderation at scale is impossible to do well”. And he is no friend of Musk’s leadership of X. The DSA requires platforms to have clear rules about acceptable content, enforce them fairly and transparently, protect freedom of expression, and take proportionate and effective measures in the event of a crisis. This is the first DSA-era crisis. Agreement on what is misleading, false, overly violent, and otherwise objectionable, let alone what’s fair, proportionate, or effective? The Commission spent a year reviewing whether the platforms’ responses to pro-Russian propaganda after Russia’s invasion of Ukraine would have been DSA compliant and said they failed. X now has less than a month to explain its policies and practices, with the EC then positioned to direct content moderation on the platform.

As Expected, UK CMA Approves Microsoft-Activision and the Deal Closes

Report from Reuters

In Brief – The UK Competition and Markets Authority (CMA) has signed off on the revised acquisition plan from Microsoft and Activision and the massive $69 billion deal has been completed. The CMA rejected the deal in April arguing that Microsoft could use preferential access to Activision games to dominate the emerging market for cloud-based videogame platforms. However, the European Commission then approved the deal based on Microsoft agreements to grant access to Activision games to a wide range of gaming platform providers, followed by a US Federal Court ruling against the Federal Trade Commission who was trying to block the deal. Microsoft and Activision then submitted a new offer to the CMA to address its cloud market concerns, proposing to sell Ubisoft, a French-based game developer, the rights to market Activision games on cloud platforms for 15 years. It was judged to be acceptable by the regulator. The companies then proceeded to close the deal.

Context – Such a big acquisition was always going to spark scrutiny from enforcers who regularly criticize Big Tech. But did Microsoft really beat the odds? Most of all, they benefited from the fact that they are not the top provider in any meaningful video game market, that the game creator market has many providers, and that their chief antagonist held a larger market share in game consoles and often makes game titles exclusive. Microsoft also engaged in a global good behavior campaign aimed at progressive regulators, including acquiescing to unprecedented labor organizing, supporting antitrust reforms and app store regulation, and offering software license changes to promote EU-based cloud services providers. If antitrust regulators are going to block an acquisition based on the idea that a giant platform should not be able to use an acquisition to grow and improve, look to the European Commission’s effort to block Booking from buying eTraveli. In that case, the regulator can sit confidently on the fact that Booking is the top OTA in Europe and claim the deal will add to its dominance. Also, the regulator from the largest of the three markets might be less susceptible to pressure to revise their thinking.

UK CMA to Investigate Cloud Services “Hyperscalers” Led by Amazon & Microsoft

Report from CNBC

In Brief – Ofcom, the UK’s communications regulator, has published the findings of its study of the country’s cloud services sector initiated in 2022, and asked the Competition and Markets Authority (CMA) to open a competition investigation of the sector. Ofcom claims that the market is dominated by Amazon and Microsoft, with a combined market share of between 70 and 80 percent, with Google as the third largest provider at 5 to 10 percent, and that these cloud “hyperscalers” allegedly engage in practices that make it difficult for customers to switch to another provider or use multiple cloud services at the same time. Among those practices were fees to move their data out of a provider’s cloud, discounts that incentivize using a single cloud provider, and technical barriers to switching or integrating services among multiple providers. Ofcom specifically called out Microsoft, but not Amazon, which is reported to be the market share leader. The CMA has opened an investigation with a target date of April 2025.

Context – Mid-year, Ofcom released cloud services market share estimates for 2018 to 2021 in the UK. Amazon fell from 41% to 40% but was the leader in 2018 and 2021. Microsoft’s share grew from 17% to 25% and Google’s grew from 12% to 16%. Ofcom’s final report claims that confidentiality concerns preclude releasing specific numbers for 2023, but the ranges indicate Microsoft’s share further increased and Google’s fell. Microsoft practices that allegedly link its cloud to its other services are a feature in a number of competition investigations, including the CMA’s rejection (and now expected approval) of its acquisition of videogame developer Activision and European Commission scrutiny of Microsoft allegedly preferencing its cloud services with Office 365 and Windows OS customers. Although cloud services are in scope for the EU’s new Digital Markets Act regulatory regime, and Amazon, Microsoft, and Google are each designated as “gatekeepers”, none of their cloud businesses have been named as regulated “core platform services” in the initial DMA determination.

European Commission Funds New Study of Apple and Google Mobile Ecosystems

Report from Reuters

In Brief – The European Commission’s competition authority is taking bids for a study of the Apple and Google mobile ecosystems “to support the supervision and enforcement of the DMA vis-a-vis the gatekeepers”, according to the tender for the project. Apple and Google are two of the six “gatekeepers” designated by the European Commission under the landmark new Digital Markets Act (DMA), and both companies’ mobile operating systems and mobile app stores have been designated “core platform services”, meaning that the law’s regulatory mandates apply to their operation. Some expect the two companies, but Apple in particular, to push back on the application of the unprecedented regulatory regime and see the new study as an effort to bolster the European Commission’s effort to establish new rules and procedures as well as defend against potential legal challenges.

Context – The European Commission named six DMA “gatekeeper” companies – Amazon, Apple, ByteDance, Google, Meta, and Microsoft – and designated 22 of their platforms as “core platform services” meaning that the law’s 18 so-called “Do’s and Don’ts” apply. Eight of Google’s services are covered, the most of any gatekeeper, while Apple saw only three named, the App Store, iOS mobile operating system, and Safari browser. However, Apple’s strict “walled garden” mobile ecosystem, a hallmark of its very popular iPhone business from its earliest days, is expected to provide a major challenge to the DMA regime, which mandates that gatekeeper platforms be open to competitor offerings in ways that run directly contrary to many core Apple mobile policies that the company aggressively defends as pro-consumer by maximizing user security, privacy, and customer experience. Thierry Breton, a lead European Commissioner on tech policies, recently said after meeting with Apple’s CEO that the DMA will require Apple to “open up its gates to competitors” including for “the electronic wallet, browsers or app stores.” So, it’s a study to help the unstoppable force deal with the immovable object.

EU Telecom Company “Fair Share” Plan Pushed to 2025 by Commission

Report from Reuters

In Brief – It looks increasingly likely that the European Commission will delay until at least 2025 further action on the European telecom industry’s “fair share” plan to require the largest digital platforms to pay a new fee to help fund the ongoing rollout of 5G and broadband. That would leave the release of a legislative proposal to the next Commission, as the current Commission’s five-year term ends in November 2024. It has long been expected that Thierry Breton, Europe’s Internal Market Commissioner who oversees the telecoms industry and sympathizes with their sender-pays proposal aimed at video streamers such as Amazon, Apple, Google, Netflix, and TikTok, would propose legislation in 2024 after formally seeking feedback from both sides this year. However, Breton, a former chief executive at France Telecom, faced wide-ranging blowback, including from fellow commissioners, Member State governments, EU Parliamentarians, industry regulators, and consumer advocates.

Context – Telecom executives are envious of their industry colleagues in South Korea, the one major market with a “fair share” style regime of “data usage fees” paid by digital content companies to network providers. The South Korean regime technically only applies to in-country digital content companies, with overseas-based online companies such as Netflix and Facebook long pushing back against the network company charges. Legislation in the Korean National Assembly to clarify that the obligations also fall to large non-South Korean online companies faces divided consumer opinions. Netflix and SK Broadband, one of South Korea’s largest internet service providers (ISP), recently announced that they are ending their long-running legal battle over whether Netflix should pay SK “network usage fees” and instead are creating a strategic partnership where the streamer would pay SK Broadband for a range of service enhancements. Expect concerns over ways big content streamers could pay big network companies for better service to be a topic of upcoming net neutrality discussions at the US FCC.

Musk and Breton Spar Over DSA Rules and Hamas Attack Content

Report from CNBC

In Brief – European Commissioner Thierry Breton has issued a stern public warning to Elon Musk, owner of X, the platform formerly known as Twitter, alleging that illegal content and disinformation regarding the Israel-Hamas conflict is spreading on the platform in violation of the Digital Services Act (DSA). Breton, who leads the DSA enforcement effort, said in a letter to Musk (shared on X) that his office has “indications” that groups are spreading misinformation and “violent and terrorist” content on X, and called for a response within 24 hours. Many online analysts have alleged that significant amounts of misleading, false, violent, and otherwise questionable content relating to the conflict is spreading on X, often criticizing changes in its content moderation practices. In an exchange over the platform, Musk replied saying: “Our policy is that everything is open source and transparent, an approach that I know the EU supports. Please list the violations you allude to on X, so that that [sic] the public can see them. Merci beaucoup.” Breton further replied: “You are well aware of your users’ — and authorities’— reports on fake content and glorification of violence. Up to you to demonstrate that you walk the talk.”

Context – Keep in mind the admonition of super-smart digital policy expert Mike Masnick that “online content moderation at scale is impossible to do well”. That said, the DSA requires very large platforms to have clear rules about acceptable content, enforce them fairly and transparently, protect freedom of expression, and take proportionate and effective measures in the event of a crisis. Simple. And when it comes to the Arab-Israeli conflict, everyone agrees what is misleading, false, overly violent, and otherwise objectionable, right? The European Commission recently issued a report on what they consider failures of the major social media platforms to control the wide-ranging pro-Russian online propaganda campaign following the invasion of Ukraine. X, along with Telegram, were top targets of criticism and again are being highlighted for not intervening enough in this Hamas-initiated “crisis”. To be clear that he was not picking on X and Musk, Breton followed up the next day with a similar letter to Meta’s CEO criticizing Hamas conflict moderation and election misinformation.

Google Agrees to Data Policy Changes with German FCO Showing Interplay with DMA

Report from TechCrunch

In Brief – Google has reached an agreement with the German competition authority to allow users to choose when Google can combine data collected on each of its services together to create a comprehensive user profile for the individual. The settlement is based on the new regulatory authority granted to the Bundeskartellamt in 2021 that allows it to designate very large digital platforms as companies “of paramount significance on competition across markets” and proactively establish rules to protect competition in any of the markets they occupy. Currently Google presents its users with a single choice to accept data processing across all its services. The Bundeskartellamt’s announcement says Google will now give users in Germany the ability to give “free, specific, informed and unambiguous consent to the processing of their data” for each service and do so without using manipulative “dark patterns”.

Context – The German law allowing its competition authority to proactively regulate digital giants was a harbinger of the EU-wide Digital Markets Act (DMA). Differences between the two regimes and their potential interplay have been an issue. Some German officials questioned differences in the DMA model, including the possibility that it would try to regulate too many big companies while at the same time being limited in the number and type of services covered, as well as the demands that can be imposed on them (the 18 DMA Do’s and Don’ts). The German regulator is using this agreement to address those concerns and illustrate that it believes the regimes can work together. Five companies are covered in Germany — Amazon, Apple, Google, Meta, and Microsoft. The European Commission eventually named just six DMA “gatekeepers” (the same five plus ByteDance). However, while every digital service of a covered business is in scope in Germany, the DMA applies to 22 platforms from the six companies, including 8 from Google. In this case, the German regulator is applying their settlement to all the Google services not covered by the DMA, which is expected to apply similar rules to Google’s largest platforms.

5th Circuit Court of Appeals Adds One More Federal Agency to Its Social Media Ruling

Report from the Washington Post

In Brief – A three-judge panel of the US Fifth Circuit Court of Appeals has amended its ruling that senior federal government officials likely violated the First Amendment by coercing or significantly encouraging large social media platforms to restrict posts regarding the coronavirus pandemic and elections, adding the Cybersecurity and Infrastructure Security Agency (CISA) to a list of offenders that included the Biden White House, Office of the Surgeon General, Centers for Disease Control, and the FBI. The appeals court again chose not to cover the departments of State, Homeland Security, and Health and Human Services, as well as the Census Bureau. The injunction imposed by the appeals court says that the covered agencies cannot “coerce or significantly encourage social-media companies” to act against content containing protected free speech. The Supreme Court is expected to maintain a stay on the injunction pending its decision on whether to hear an appeal by the Biden Administration.

Context – This case is very likely to end up in front of the Supreme Court. And when it does, it will be the second to spring from Republican arguments that social media platforms censored conservative viewpoints. The first involves First Amendment challenges to laws enacted by Florida and Texas in 2021 that regulate the content moderation activities of large social media platforms. The Biden Administration encouraged the Court to take up those cases and protect the right of digital platforms to make content moderation decisions. The Court did. Oral arguments are expected in early 2024. Should the Court agree that the First Amendment protects platforms when they make content moderation decisions, this complaint, brought by the Republican AGs of Missouri and Louisiana (Murthy v Missouri at this point), will test the boundaries of the principle that the First Amendment, which prohibits government censorship, also prohibits government from coercing private parties to engage in censorship, in particular what “coerce or significantly encourage” means and when “jawboning” crosses those lines.

TikTok Shutters Successful Ecommerce Service in Indonesia Following Ban

Report from the New York Times

In Brief – TikTok has shut down TikTok Shop in Indonesia following a regulatory change to prohibit ecommerce transactions from occurring over social media platforms. Indonesia, with one of the largest and fastest-growing ecommerce sectors in Southeast Asia, was the first and largest market for TikTok Shop, a live-streaming and video-based ecommerce marketplace feature launched in 2021 and integrated into TikTok’s super-popular short-video-based social media app. The company claims that over 6 million Indonesian sellers and content creators were already using the service. TikTok is developing an ecommerce business within its core social media app that parallels a similar move into ecommerce by Douyin, parent company ByteDance’s version of TikTok that operates inside the Chinese market. The Indonesian Government announced on September 25th that it was planning an immediate ban on the use of social media platforms to engage in direct ecommerce transactions, limiting social media services to be used only for the promotion of goods and services, as well as imposing a minimum import transaction of US$100 on goods purchased from abroad. The move was cheered by many offline retail business owners who object to the very low prices offered on the streaming shopping service, which included Indonesian sellers but also offers directly from overseas manufacturers. In its statement announcing that it would close the shopping service, TikTok said, “Our priority is to remain compliant with local laws and regulations. As such, we will no longer facilitate e-commerce transactions in TikTok Shop Indonesia.”

Context – TikTok’s massive global popularity is leading to a diverse collection of regulatory challenges. There are, of course, concerns over data security and censorship due to Chinese influence, most notably the ongoing standoff in the US and the concerns that shut the app down in India. But the platform’s emergence as an ecommerce hub in Indonesia and as an adult entertainment live-streaming platform in African countries like Kenya is adding very different challenges in important developing markets.

European Commissioner Jourova Warns of AI Act Over-Regulation

Report from the Financial Times

In Brief – Commissioner Vera Jourova, who is the European Commission’s co-leader on the AI Act effort, is warning that paranoia and “dystopian concerns” with AI’s future might lead to too much regulation and undermine positive technological innovation in Europe. She defended the risk-based regulatory model that was the basis of the Commission’s initial AI Act with a tiered system that applied mandates to enterprises using AI-enabled applications based on the risks associated with the specific service itself rather than attempting to regulate the underlying technologies. However, following the boom created by Chat-GPT, the EU Parliament made a big change by proposing that the AI Act regulate “foundational models” and the creators of underlying AI technologies rather than just those who use AI technology. Jourova said that she would attend the upcoming G7 forum on AI in Kyoto and promote the EU’s model of AI regulation.

Context – American advocates for aggressive digital regulation often look enviously to Europe. The AI Act is another example, but they generally mean only the Parliament’s expansive version. Regulating foundation models is what inspired Jourova’s comments, as well as those of French President Macron, who is not usually a voice of caution on regulating tech. Champions of digital regulation often say the EU leads global trends, but AI talks at the G7 show more support for the US approach, such as the Biden Administration’s Safe AI principles and gathering corporate commitments to adhere to “responsible” practices like AI watermark standards and third-party audits of AI systems. If the US Congress legislates, it will be highly targeted, such as addressing how federal agencies use AI-enabled tools. But even that can be a major challenge. A recent GAO report highlights the many federal and law enforcement agencies still accessing Clearview AI’s controversial facial recognition service. Meanwhile, facial recognition and biometric surveillance used by Member State security services remains the other big issue, besides foundation models, dividing the EU Parliament and the Council of Member States in the AI Act “trilogue” negotiations.

Apple to Face Federal Class Action Antitrust Lawsuit on Tap-to-Pay Restrictions

Report from Reuters

In Brief – Apple has failed to convince a federal judge to dismiss a class action lawsuit alleging that the iPhone giant violates federal antitrust law by not allowing other digital wallet and payments companies to access the NFC-scanning hardware on its devices. The suit, brought by three credit unions, argues that competitor payment apps, forced to use QR codes rather than the more convenient NFC technology, are unable to effectively compete with Apple Pay, allowing Apple to charge overly high fees on its service. In his order, Judge Jeffrey White agreed that it is plausible to claim that Apple has a monopoly in the “aftermarket” for iOS tap-to-pay payment services and that Apple blocking third-party apps from NFC access has resulted in a lack of competition in that market that harms consumers by inhibiting innovation that would likely lead to lower fees. However, the judge did reject the plaintiffs’ tying charge because Apple does not require iPhone users to use Apple Pay as a condition of using an iPhone.

Context – Apple has restricted access to NFC (Near Field Communication) capabilities for payments since 2015, when the company launched Apple Pay. Apple claims that its policies do not restrict competition in digital payments, citing thousands of banks and other payments providers working through Apple Pay to serve customers, and argues that restricting access to NFC technology protects iPhone users from fraud. In Europe, Apple is in the middle of an antitrust fight with the European Commission on this issue, with the regulator initially finding that Apple’s NFC policies are an abuse of its dominant position and violate competition law. In the EU, this Apple policy is just one aspect of its “Walled Garden” that is likely to face significant challenges from the emerging regulatory structure of the EU Digital Markets Act policing “gatekeepers”. Thierry Breton, a lead European Commissioner on tech policies, recently said after meeting with Apple’s CEO that the DMA will require Apple to “open up its gates to competitors” including for “the electronic wallet, browsers or app stores.”

The FTC Says “I’m Not Dead Yet” on Microsoft-Activision

Report from Bloomberg

In Brief – The three Democratic commissioners of the Federal Trade Commission have voted to continue their challenge to Microsoft’s $69 billion acquisition of giant videogame developer Activision Blizzard despite a federal judge’s decision to reject the FTC’s request to halt the deal. In July, Federal District Court Judge Jacqueline Scott Corley turned down the regulator’s request for an injunction blocking the deal while the agency adjudicated the case in its internal court system. In her opinion, Corley said that the FTC had failed to show that Microsoft owning Activision games would harm competition in the videogame console or cloud-gaming markets, and even indicated that the deal would more likely lead to “more consumer access to Call of Duty and other Activision content.” The FTC is appealing that ruling. Its latest order says the agency still plans to fight the acquisition in its internal court system, with proceedings beginning 21 days after the 9th Circuit Court of Appeals issues its opinion on the FTC appeal.

Context – Yes, it seems like Monty Python’s “I’m not dead yet” skit. The Activision acquisition is almost certain to “close” on October 18. The low point for Microsoft was the UK Competition and Markets Authority (CMA) decision in April to reject the deal. But the tide turned in May when the European Commission approved the plan based on Microsoft market access commitments that the CMA and FTC had rejected. Then Judge Corley ruled against the FTC. The UK CMA was left isolated and appears to have executed a scripted walk-back. Microsoft and Activision quickly came back to the CMA with a “substantially different” offer based on selling French videogame publisher Ubisoft the cloud-streaming rights to Activision games. The CMA has provisionally accepted that deal. The best way to read FTC leadership’s insistence that it still opposes the tie-up, and still plans to challenge it if it prevails in the federal appeals court, is that the agency is not surrendering on its long-shot appeal and will wait for the judges’ ruling.

Epic and Apple Ask the Supreme Court to Intervene (Apple Has a Better Shot)

Report from Reuters

In Brief – Apple and Epic Games have each appealed to the US Supreme Court with final bids to overturn the aspects of their antitrust battle that they lost. Federal Judge Yvonne Gonzalez Rogers’ initial ruling in 2021 was a split decision that largely went Apple’s way, including determinations that Apple did not violate federal antitrust law, was not an illegal monopoly, and could justify its payments policies based on user safety and security. However, Judge Gonzalez Rogers ruled that Apple’s “anti-steering” policy, which prohibits app developers from telling customers that cheaper payment options are available outside Apple’s ecosystem, violated California’s Unfair Competition Law, and she imposed a permanent injunction prohibiting the company from enforcing that policy against all app developers. The Ninth Circuit Court of Appeals upheld the full District Court decision in 2022. Epic Games’ appeal to the High Court (488 pages) is a comprehensive challenge to the courts’ interpretation of federal antitrust law, while the Apple appeal (37 pages) is far more focused. Apple argues that the judge’s use of an injunction to set national policy from the bench is an unconstitutional overreach, and that the injunction violates federal law by applying to a large “class” of businesses not represented in the lawsuit and by conflicting with a settlement Apple reached in a certified federal class action brought by developers on the same topic.

Context – Apple has a more straightforward shot at High Court review. Despite Epic mustering an impressive amici coalition for its appeal, the 9th Circuit panel ruled that Apple’s justification for its walled garden model is strong, that consumers are aware of its restrictions, and that Epic did not offer an alternative model that met those needs while compensating Apple for its investments. The justices would need to wade into a wholesale review of digital antitrust. Not likely. Apple’s appeal is focused on narrower existing questions about federal courts issuing national injunctions and circumventing federal class action processes. While most cert petitions are longshots, some are less so than others. Epic does get back into federal court in November against Google.

Meta Is Proposing Subscription Plans in Europe Without Ads or Data Processing

Report from the Wall Street Journal

In Brief – Meta is discussing with European officials plans to offer paid, ad-free versions of Facebook and Instagram to comply with regulatory requirements of the bloc’s General Data Protection Regulation (GDPR), as well as the newly enacted Digital Services Act and Digital Markets Act. Privacy advocates have challenged Meta’s business model of offering free services to users while allowing advertisers to efficiently target those users with ads based on data collected from their online activity. Critics of Meta’s targeted advertising model have made more headway in Europe than in other major markets. The company is calling the plan Subscription No Ads (SNA) and proposes a cost of 10 euros per month for one service, and 6 euros for an additional service, when purchased on a computer (30% more on a mobile phone due to Apple and Google commissions). So, Facebook and Instagram together, purchased on a computer, would cost a user 16 euros per month. SNA users would not see ads, and their data would not be collected and processed by the company’s advertising business. It remains to be seen whether a paid, ad-free offering will be acceptable to all the European regulators who could claim jurisdiction over Facebook and Instagram, or whether regulators will object and argue that Meta must offer free, or meaningfully less expensive, versions that are ad-free or based on non-personalized ads. The plan is currently envisioned only for users in the EU and is being shared with privacy and digital regulators in Ireland, Brussels, and EU member states.

Context – Facebook and Instagram are Very Large Online Platforms under the DSA and core “gatekeeper” platforms under the DMA. A growing number of European privacy regulators and competition authorities have been insisting that Meta gain consent from users for the data collection and processing behind its advertising services. As digital platforms learned from Apple’s policy requiring app developers to get user consent for ad-based data collection, many users will say no when declining costs them nothing. Meta is arguing that ad-free options need to earn an alternative revenue stream, reportedly citing the paid, ad-free tiers of Spotify and YouTube as examples.

Booking.com to Challenge European Commission Rejection of ETraveli Deal

Report from Reuters

In Brief – Booking.com will challenge the decision of the European Commission to block its proposed $1.83 billion acquisition of Sweden-based “Online Travel Agency” (OTA) ETraveli and is encouraging member state competition authorities to oppose the Commission’s move. The regulator’s decision is based on its finding that Booking is the dominant hotel OTA in Europe, that the acquisition is likely to strengthen that dominance, and that the concessions offered by Booking were insufficient. The Commission claims to have reached out to over 15,000 hotels for their views, and many were concerned that the transaction would strengthen Booking’s dominant position, reduce OTA competition, and increase prices for hotels directly and possibly for consumers. Booking’s proposal to address those concerns by surfacing hotel offers from competing hotel OTAs on its flight check-out page using Booking’s Kayak algorithm was rejected as insufficiently transparent (a “black box”) and because it would not apply to many of the touchpoints Booking could use to sell hotel OTA services to flight OTA customers. Booking’s response included its CEO saying, “The European Commission’s decision not only departs from settled law and precedent but it deprives consumers of travel options that they are entitled to have.”

Context – Just another example of disjointed, sequential competition policy roulette. The UK CMA announced its decision not to challenge Booking-ETraveli last September. The US FTC also did not challenge the deal. Unlike Microsoft-Activision, where the European Commission’s approval was key to overturning the others’ objections, in this case the Commission is taking the hardest line. One difference is Booking’s reportedly high share of the hotel OTA market in Europe, much higher than any Microsoft share in videogames. There is also the fact that Booking is one of the few EU-based VLOPs under the Digital Services Act and might soon be the first EU-based DMA “Gatekeeper”. The European Commission likely appreciates the opportunity to hold EU-based digital platforms to a tough standard.

TikTok Facing Expanding Threats to Ban Service in African Countries

Report from the Wall Street Journal

In Brief – Coming on the heels of the Senegalese Government blocking TikTok over charges that it was being abused to foment political discord, government officials in Kenya and Somalia have added to the Africa-based problems of the global short-video social media phenom. In Kenya, one of Africa’s most populous countries and home to one of the continent’s most robust digital economies, growing outcry over sexually explicit content proliferating on the platform, in particular a live-video feature that critics argue is being used for solicitation, has led to calls for a country-wide ban. The service continues to operate in Kenya after pledging to the government that it would make changes to its live channels feature. In Somalia, TikTok and the social media platform Telegram were officially banned over violent extremist content as the country deals with the terrorist group Al-Shabaab.

Context – Governmental threats to ban TikTok only reinforce the fact that it has fully joined, if not passed, US-based social media platforms like Facebook and Twitter in its impact on global audiences. It’s also a reminder that Africa has been a hotbed of government efforts to restrict online activity for political purposes, ranging from complete internet shutdowns to restrictions on social media to targeted actions against specific platforms. A 2021 report claimed that 31 of 54 African countries had blocked social media platforms since 2015. Highlights include a seven-month standoff between the Government of Nigeria and Twitter, the Ugandan Government shutting down the country’s entire internet during the 2021 national elections and blocking Facebook for months, and intermittent shutdowns of all major social media platforms in Ethiopia. China’s model of comprehensive digital communications control seems an aspirational goal for authoritarian governments around the world. And considering that TikTok’s corporate culture was formed within that Chinese version of the internet, one can expect the company to be willing to work with government demands that are technologically feasible.

As Expected, US Supreme Court Will Review Texas and Florida Social Media Regulation

Report from the New York Times

In Brief – As expected for more than two years, the US Supreme Court has accepted a pair of cases involving the ability of state governments to regulate the content moderation activities of social media platforms. The laws in question were enacted by Florida and Texas in 2021. Republican backers argued that the giant social media platforms were censoring conservatives, while bill opponents argued that digital platforms can and should intervene to restrict hate speech, misinformation, and other objectionable content on their platforms. The Biden Administration’s Solicitor General encouraged the Court to take up the cases and protect the right of digital platforms to make content moderation decisions. The Court’s grant of certiorari focuses the cases on whether the laws’ content moderation restrictions and their requirement to explain content moderation decisions to impacted users comply with the First Amendment. The Court has chosen not to address the laws’ general transparency provisions or the issue of “viewpoint discrimination”. Oral arguments are expected to be set for early next year.

Context – The Florida and Texas laws suffered different fates in the 11th and 5th Circuits, with Florida’s law put on hold while Texas’s law was backed by the 5th Circuit Court of Appeals. However, that 5th Circuit decision was itself put on hold by a five-justice majority of Roberts, Breyer, Sotomayor, Kavanaugh, and Barrett. Justices Alito, Thomas, Gorsuch, and Kagan dissented, with Alito submitting a five-page opinion joined by Thomas and Gorsuch. Kagan did not join the Alito dissent and did not explain her vote. The public views of those three conservative justices on social media content moderation made last May’s unanimous ruling in Twitter v Taamneh, written by Justice Thomas, something of a surprise. Other changes worth noting since May 2022 include Justice Breyer’s replacement by Justice Jackson as well as the change in leadership and content moderation philosophy at Twitter, now called X. For example, Elon Musk’s X is suing to block California’s content moderation transparency law, which is focused on progressive concerns.

Norway’s Data Authority Continues Battle Against Meta and Irish Data Protection Commission

Report from Reuters

In Brief – Norway’s data protection authority has asked the European Data Protection Board (EDPB) to make its three-month emergency order blocking targeted advertising by Meta’s Facebook and Instagram in Norway permanent and to extend it across the European Economic Area (EEA). The standoff involves the EU’s General Data Protection Regulation (GDPR) and challenges the law’s “One Stop Shop” process, which puts the data protection authority of the member state that is home to a company’s EU headquarters in the lead regulatory role. The Irish Data Protection Commission (DPC) is the lead authority for Meta’s businesses. The Norwegian regulator claimed to exercise emergency authority granted by the GDPR to circumvent a lead regulator, in this case the DPC, for up to three months. Its emergency ban on Meta tracking and profiling Norwegian users, and a related daily fine, was issued in July and has still not been implemented by the company. That order was upheld by a Norwegian national court. The national regulator is now calling on the EDPB, an umbrella organization of national data protection authorities and the European Data Protection Supervisor, to exercise its own emergency authority under the GDPR to make the temporary national order permanent and extend it across the EEA. A Meta spokesperson said the company is surprised by the decision and continues to work with the DPC to resolve the issue.

Context – Ireland, the European home to many global digital giants, has been widely criticized for going easy on Big Tech. Recent GDPR decisions and large fines against Instagram and TikTok highlight how the DPC has been forced to accept changes, and ramp up fines, pushed by other national regulators. For example, a part of the DPC’s TikTok order regarding deceptive “dark patterns” was reportedly pushed onto the regulator by the German and Italian authorities. Also, frustration with the GDPR One Stop Shop led to the European Commission being put in the regulatory driver’s seat for very large platforms like Facebook and Instagram in both the Digital Services Act and the Digital Markets Act.
