News insights

March 2024

Meta Oversight Board Urges Change in Treatment of Arabic Term for Martyr

Report from Reuters

In Brief – The Meta Oversight Board, funded by Meta but operating independently to advise the company on its content moderation policies, has called for the company to change its policies regarding the use of the Arabic term “shaheed”, which often but not always means “martyr” in a praiseworthy way. The board recommends that posts containing the word “shaheed” be removed only when they are linked to clear signs of violence or when they separately break other Meta rules. Meta has been criticized for years for its handling of content involving conflicts in the Middle East. An internal study commissioned in 2021 expressed concerns about the impact of its policies on Palestinians and other Arabic-speaking users, and the company asked the Oversight Board in February 2023 to rule on its policy after it could not reach an internal consensus. The board was reportedly prepared to release its recommendation last fall but delayed its decision following the Hamas terror attack on Israel and the subsequent Israeli military response to gauge how its recommendation would hold up in such a crisis. Meta currently removes any posts using “shaheed” to refer to people it designates on its list of “dangerous organizations and individuals,” which includes members of Islamist militant groups, and can accept or discard the Oversight Board’s general policy recommendations.

Context – The Meta Oversight Board is an innovative attempt to improve the process of platform content moderation. However, as tech policy expert Mike Masnick says, the job is “impossible to do well”. There is no right answer for content in the grey areas of many public policy controversies. Rather than attempt to impose the same rules on all online platforms, advocates of free speech should applaud when major platforms have somewhat different policies on controversial topics. That diversity certainly expanded after Elon Musk purchased Twitter. But the European Commission is using the DSA to investigate X after criticizing the platform for not properly moderating content related to the Hamas-Israel confrontation.

Google Fined 250 Million Euros for Not Properly Paying French Media for Snippets

Report from the Wall Street Journal

In Brief – The French antitrust authority has fined Google 250 million euros (around $270 million) for failing to fully adhere to its 2022 agreement to properly remunerate French publishers for using their content. That settlement ended an antitrust investigation, begun in 2020, into the use of so-called “snippets” of news articles in Google’s search services. France updated its copyright law in October 2019 to incorporate “neighboring rights”, and French publishers used the law to demand compensation from Google. When the search giant threatened to instead end the use of news snippets, the antitrust regulator responded that doing so would likely be ruled an abuse of Google’s dominant position and warned the company not to take that action. Google’s settlement included agreeing to engage in structured, good faith negotiations with the publishers, as well as to share relevant revenue information with a monitoring trustee. The French agency says the latest fine is based on Google’s processes not fully conforming with the agreement, as well as on Google failing to properly inform the media companies that their content was used to train Google’s AI foundation models, including Gemini, and failing to provide an acceptable opt-out that did not also pull them out of Google search.

Context – Besides adding another European regulator to the EU AI beat, the third in recent weeks, the biggest news in the long campaign to force Google and Meta to pay “news” media companies has been the evolution of Meta’s strategy, from trying to negotiate acceptable arrangements to concluding that news is not content its users much value and banning it rather than be compelled to pay. That played out in Canada after that country followed Australia’s model, and now Australian publishers fear Meta will do the same there. That would leave only Google paying, as in Canada. France followed a different course, only targeting Google over neighboring rights. We wonder if TikTok, being prodded to make payments in Australia, will follow Meta, given that much of the concern with potential Chinese influence operations involves “news” content on the platform.

Giant App Developers Rally Around Epic Games Challenging Apple’s New App Policies

Report from the Wall Street Journal

In Brief – Four of the largest app developers in the world, Microsoft, Meta, X, and Match, have submitted a legal brief backing the effort of Epic Games to have Federal Judge Yvonne Gonzalez Rogers intervene and block Apple’s plan to permit alternative in-app payments only under numerous rules and restrictions and with only a small reduction in Apple fees. Epic sued Apple for anticompetitive App Store policies in 2020. Judge Rogers ruled for Apple on the federal antitrust charges but issued a one-page order requiring Apple to end its anti-steering policies prohibiting alternative payment options for in-app purchases. She said that she would be watching how Apple complies and could decide at some point to amend her ruling. After the US Supreme Court refused to hear Apple’s appeal of Rogers’ anti-steering order, Apple announced new App Store rules for the US market. Many developers decried the new rules, and Epic Games petitioned the judge to intervene.

Context – The long-awaited Federal Government antitrust lawsuit accusing Apple of operating an iPhone ecosystem that violates antitrust law in a wide range of product and service markets stole the thunder from the ongoing effort by giant app developers to use Judge Rogers’ anti-steering order to simply get a court-ordered reduction in Apple fees. As we’ve been saying for years, “payments options”, the subject of Epic Games’ antitrust complaint in 2020 and the global legal and regulatory campaign built around it, always hid the ball. Epic never wanted to compete in the market for payments processing services. Neither do Microsoft, Meta, X, or Match. They all object to Apple’s fees. More to the point, they say the fees are way too high. So they object to Apple’s new payments plan because it offers only slightly reduced fees. In the US, Apple is rolling out its new payments and fees plan due to Rogers’ anti-steering order. In Europe, it is the heart of Apple’s Digital Markets Act (DMA) compliance plan. Not surprisingly, the same app developers are howling mad about that plan as well and are pressing the European Commission to get around to simply regulating Apple’s fee structure.

Utah Changes Social Media Age Limit Bills from Mandates to Parent Lawsuits

Report from the Associated Press

In Brief – Last March, Utah became the first state to attempt to severely restrict the ability of teens to use social media, enacting legislation mandating that teens under 18 have parental approval to set up an account, establishing approved age verification methods, requiring that the approving parent be given a separate password to access the account, and requiring that access to teen accounts be disabled between 10:30 pm and 6:00 am, unless waived by the parent. As more states followed with various laws endeavoring to regulate teen social media use, and those tested in federal court were blocked by judges raising First Amendment and privacy concerns, Utah’s Republican Governor and legislative leaders entered the year proposing to amend the initial statutes and have delivered on their promise. While the new versions require social media companies to verify the ages of their users, disable certain features on accounts owned by teens, and set default privacy settings for minors to restrict access to direct messages and some sharing features, they remove the parental consent requirement. Instead, the state created a new private right of action allowing a parent to sue a social media company if their child’s mental health worsens from excessive use of an algorithmically curated app. Social media companies can then escape liability by implementing a range of restrictions from the previous bills, such as limiting minors to three hours of use a day, disabling so-called addictive design elements such as autoplay, and giving parents the ability to independently access their teen’s full account.

Context – Early indications from the litigants that have been successfully challenging the state teen social media laws are that Utah will simply get back into the litigation queue with laws from Arkansas, Ohio, Texas, California, and most recently, Florida. We will see what federal courts think of shifting from legal directives to lawsuit-enforced “options” to avoid liability, given the First Amendment and privacy problems with laws that judges have tended to see as content-based and overly broad.

Meta’s Ad-Free Subscriptions Show Price Regulation has Arrived in the EU

Report from Reuters

In Brief – Meta is discussing with regulators in Europe dropping the price of its ad-free subscription option from 9.99 euros per month per service to 5.99 euros per month. Meta began offering the plans in the EU last year to bring its services into compliance with the growing raft of EU digital regulations. The talks regarding prices are underway with the Irish Data Protection Commission, which is Meta’s lead regulator for the General Data Protection Regulation (GDPR), as well as with the European Commission officials who regulate Meta’s top platforms under the Digital Markets Act (DMA), and likely with the European Commission officials who regulate Meta under the Digital Services Act (DSA). Privacy advocates who strongly object to targeted advertising have been engaged in a years-long legal battle to force Meta to offer users the option to reject the practice. The EU’s top court backed the position that online users should have that choice under the GDPR, but also pointed to the acceptability of online services offering consumers a paid, ad-free alternative, leading Meta to offer its no-ads subscription options. Privacy advocates have objected that consumers should not be forced to pay to not be tracked, and also claimed that Meta’s prices were far too high.

Context – Many European leaders backing the DMA and DSA have been very clear that digital platform regulation was needed because the services are so important, equating it to regulating banking, telecommunications, and utilities. We’ve been saying for years that price regulation was inevitable, and it is appearing already, especially in the context of the DMA. For example, the years of antitrust conflict between large app developers and Apple (and Google) over “payments choice” has really always been about Apple and Google fee levels, and Apple’s new DMA-based fee regime is now squarely in the EU regulators’ laps, with the same app developers complaining. Commissioner Vestager has confirmed that price levels are a clear DMA compliance issue with Apple and Meta straight away, and once price regulation starts, it’s part of the regime.

Uber Pays Damages to Australian Cab Drivers for Hurting Their Business

Report from CNN

In Brief – Uber has agreed to settle a class action lawsuit in Australia, paying more than 8,000 taxi and car-for-hire drivers more than 271 million Australian dollars ($179 million USD) for losses they suffered in the early years of Uber’s operations in the country. Uber entered Australia in 2012 and the lawsuit was filed in 2019, with the taxi drivers claiming that Uber’s operations violated the cab laws and regulatory systems in four of Australia’s six states, diverting business from cabs, reducing compensation for cab drivers, and reducing the value of taxi medallions and cab business licenses. The settlement, described by the plaintiffs’ lawyers as the fifth largest in the country’s history, is characterized by Uber as putting “these legacy issues firmly in our past”. The ridesharing company said that when it started its business there were no ridesharing regulations in Australia or anywhere else, and noted that it now operates within the regulatory scheme of every Australian state and territory.

Context – Uber has faced more legal and regulatory challenges in markets around the world than any other Gig work company (Airbnb is perhaps the runner-up). This settlement is unique because Uber appears to be taking financial responsibility for a business model that captured consumer demand by offering lower prices and better service, paying some compensation to suppliers in the defeated business model. In most other markets, the focus has been trying to force Uber to conform to the regulatory structure of the taxi industry, especially by classifying Uber drivers as employees of the platform and applying employment rules and benefits. The EU appears on the brink of approving the Platform Labor Directive that was intended to do that to Uber and other Gig work platforms, but the final deal instead leaves the key worker classification issue in the hands of the individual member states, meaning more political battles. The situation in the UK is different because of its three-level worker classification regime, with Uber, which recently signed a cooperation agreement with London’s black cabs, seeing its drivers categorized by the courts as middle-tier “workers” with some, but not all, employee benefits.

US Department of Justice Files Long-Awaited Apple Antitrust Lawsuit

Report from the Washington Post

In Brief – The US Department of Justice (DoJ) has filed a major antitrust complaint against Apple for a broad range of anticompetitive practices in its popular iPhone-centered ecosystem of devices and services. Sixteen states joined the effort. A Federal Government antitrust challenge against the company has long been expected given the Biden Administration’s public commitment to activist antitrust enforcement and criticism of Big Tech. The wide-ranging complaint argues that Apple built “a dominant iPhone platform and ecosystem” and then battled competitive threats through restrictive rules allowing the company “to extract higher fees, thwart innovation, offer a less secure or degraded user experience, and throttle competitive alternatives”, harming consumers and developers across many technologies, products, and services, including super apps, text messaging, smartwatches, and digital wallets. Apple has responded saying that the lawsuit threatens “the principles that set Apple products apart in fiercely competitive markets” and “our ability to create the kind of technology people expect from Apple—where hardware, software, and services intersect.”

Context – The DoJ and FTC divvied up the then-Big 4 tech giants for increased antitrust scrutiny in 2019. Apple is the final target to face a major complaint. The DoJ did not limit itself to a single market like the App Store or one Apple practice like withholding NFC from payments processors or undermining Android messaging. They went after the whole ecosystem. The biggest elephant in the room is the 2021 decision in the Epic v Apple antitrust lawsuit that Apple’s restrictive App Store rules did not violate federal antitrust law, and the subsequent backing by the 9th Circuit Court of Appeals of Apple’s arguments that their integrated and restrictive ecosystem is pro-consumer by protecting user safety and customer experience, and that consumers know this when they buy in. Then there is the matter of whether iPhone and Android ecosystems truly compete. Finally, given that the FTC v Amazon antitrust trial is scheduled for fall 2026, don’t expect a decision here soon.

The EU Commission is Using the DSA to Regulate Artificial Intelligence Now

Report from Euractiv

In Brief – The European Commission announced that it is requesting information from eight of the largest digital platforms, including YouTube, Facebook, Snapchat, TikTok, and X, regarding their risk assessments and mitigation measures concerning generative AI’s impact on a range of concerns including electoral processes, illegal content dissemination, mental well-being, personal data protection, and intellectual property. The inquiries, made under the Commission’s authority as the regulator of Very Large Online Platforms (VLOPs) under the Digital Services Act, pertain to both the dissemination and creation of generative AI content. A request for information is an investigative measure and does not predetermine any subsequent actions the Commission may undertake, but fines can be imposed for providing inaccurate, incomplete, or deceptive information in response. The companies have been given until April 5 to respond to questions regarding generative AI and elections, and until April 26 on the other issues.

Context – Yes, the EU’s AI Act is a big deal. The European Parliament recently passed the final version, and the Council will soon sign off as well. It creates a tiered, risk-based system for AI-enabled services and a bi-level regime for generative AI foundation models, with the strictest burdens on the largest models (all US-based today). While there are some worries about overregulation, the Parliament says the law’s requirements are not fully applicable for 24 months, meaning mid-2026, with some exceptions such as rules for foundation models in 12 months and obligations for high-risk systems in 36 months. In the meantime, the new EU AI Office also oversees the “voluntary” AI Pact that allows businesses to engage with the Commission and other stakeholders on AI best practices. However, it’s increasingly clear that there are plenty of European regulators already on the AI beat. If the DSA already regulates how the biggest platforms deal with AI, then Thierry Breton does not need to wait for the AI Act. And some Member State privacy authorities, for example Italy’s, are already keen to regulate generative AI models as well.

Microsoft Whistleblower Criticizes OpenAI and Microsoft AI Image Systems

Report from Bloomberg

In Brief – Shane Jones, a Microsoft software engineer who engaged in what he described as “personal red teaming work” on OpenAI’s image generator DALL-E 3 and Microsoft’s Copilot Designer, has sent letters to Microsoft’s board, his congressional representatives, Washington State’s Attorney General, and the Chair of the Federal Trade Commission, telling them that the image generators can be used to create abusive, violent, or otherwise objectionable material, and that Microsoft refused to follow his recommendations to withdraw its Copilot service from public availability, press OpenAI to do the same, or change the description of each service to recommend it only for mature users. In his communications, some of which he posted on his public LinkedIn profile, he discussed circumventing suspected “guardrails” created by OpenAI and Microsoft to deal with violence, political bias, underage drinking and drug use, conspiracies, and copyright violations, as well as claiming that the image generators tend to add questionable content, including material objectifying women, when given some generic prompts. Unlike nearly all the other recent “whistleblowers” involving major tech companies, Jones has carried out his campaign while employed by Microsoft, and the company said it is “committed to addressing any and all concerns employees have in accordance with our company policies”.

Context – Add Microsoft to the list with Facebook, Amazon, TikTok, Uber, and Instagram. Are tech employees less loyal or more easily offended than in other industries, or do the companies just garner more attention? On so-called AI guardrails, like with social media content moderation, there is no right answer for everyone. Google suffered major criticism, especially from conservative commentators, because its generative AI services appeared to be trained and governed by guardrails that skewed left. Jones’s fallback to ask Microsoft to better describe “risks” and market the services just to adults raises an interesting question for the FTC regarding deceptive trade practices. How should AI hallucinations be fairly described?

European Commission Opens Formal DSA Investigation of AliExpress

Report from CNBC

In Brief – The European Commission has opened a formal investigation of AliExpress under the Digital Services Act (DSA), legislation enacted in 2022 to regulate how digital platforms address illegal and objectionable content. The largest platforms are designated Very Large Online Platforms (VLOPs) and face enforcement by the Commission itself. There are currently 22 VLOPs, including three online commerce marketplaces: AliExpress, Amazon, and Zalando. The Commission announcement says that its initial inquiry raised questions about AliExpress’s compliance with ten sections of the law, including on managing and mitigating risks, the effectiveness of content moderation practices and the consistent enforcement of terms of service prohibiting dangerous products, processes for handling user complaints, the level of transparency of its advertising and recommender systems, the ability to trace sellers, and whether outside researchers have necessary data access. The investigation will also determine if the platform is meeting its obligation to block minors from accessing pornographic material.

Context – AliExpress is the third VLOP to face a formal DSA investigation by the Commission, following X in December and TikTok in February. The Digital Markets Act (DMA) is now operating side-by-side with the DSA. It aims to foster fair and competitive online markets by regulating the largest “gatekeeper” platforms. Like the VLOPs, the DMA gatekeepers are regulated directly by the Commission. The number of DSA VLOPs and DMA gatekeepers is likely to grow. Shein, a China-based marketplace like AliExpress, looks like a VLOP, and Booking and X, both already VLOPs, may meet the DMA’s gatekeeper thresholds. The DMA and DSA aim for ongoing regulation of digital platforms, like the financial services or telecoms sectors. As DSA and DMA investigations proceed, the next thing to watch is the level of fines meted out. The lack of huge fines imposed on Big Tech in the first years of the GDPR was a big criticism of that law and a motivation for the Commission-centric enforcement model for big platforms in the DSA and DMA.

Chinese Marketplace Platforms Drawing South Korean Government Response

Report from The Korean Economic Daily

In Brief – South Korea is planning a set of legislative and regulatory measures to pressure Chinese marketplace businesses such as AliExpress, Temu, and Shein to better address suspected counterfeits and knockoffs and to provide better dispute resolution procedures. Chinese-based online marketplaces have rapidly grown their share of the robust Korean ecommerce market, overtaking US marketplace sales and putting major pressure on the historically very successful domestic ecommerce leaders. The plan includes amending the Electronic Commerce Act to require large foreign online platforms to establish a local presence in the country, which will increase the Korean Government’s ability to enforce laws and regulations on the marketplaces and their often China-based sellers. The move to require domestic establishment of marketplaces is especially aimed at Temu and Shein, who are both relative newcomers to the Korean market but have rapidly expanded their sales by connecting China-based online sellers with South Korean shoppers, with the goods shipped directly from China. Neither has offices or logistics operations in Korea. AliExpress, a subsidiary of long-time Chinese ecommerce giant Alibaba, has a more established presence in the country, and has attempted to lure Korean companies to its platform by offering zero-commission sales, recently wooing South Korea’s top domestic food brand to join the platform and abandon Coupang, Korea’s top local ecommerce platform.

Context – The EU now has a standing regulatory regime in place, the Digital Services Act, to govern how digital platforms moderate illegal and objectionable online content, including how online marketplaces address issues like fraud, counterfeiting, and the sale of illegal goods. Platforms with more than 45 million monthly active users in Europe are designated Very Large Online Platforms (VLOPs) and face the most stringent demands, direct oversight by the European Commission, and the submission of regular risk assessments. AliExpress is already a DSA VLOP and Shein is on the path to that designation.

The EU AI Act Clears the European Parliament on the Way to Enactment

Report from the Wall Street Journal

In Brief – The final version of the EU’s landmark AI Act, creating the first comprehensive regulatory regime for AI technologies, has passed the European Parliament on the way to enactment. The effort started five years ago and was built around a tiered, risk-based system imposing rules on AI-enabled applications based on the service they provide. Applications will be designated as unacceptable risk, meaning prohibited (such as social credit scoring); high risk, requiring review and pre-approval from regulators (such as self-driving cars or medical machines); or limited risk, carrying transparency and testing requirements (such as touching up photos). Basing regulation on concrete risks has been applauded. Following the release of chatbot phenom ChatGPT last year, the Parliament changed the tenor of the law by directly regulating general foundation models rather than just the services built on them. Months of negotiations between the member states and parliamentarians landed on an agreement to bring foundation models into a bi-level regulatory regime, imposing the strictest burdens on the largest foundation model companies (all US-based today) and less on smaller ones (which include the current EU-based leaders). The European Council is fully expected to clear the measure this spring, and various aspects of the law start to come into effect in 2025.

Context – As the EU wraps up the AI Act, the big question is whether it will prove a global trailblazer or an outlier. Note that even after the “final” deal was struck, French President Emmanuel Macron was expressing concerns about over-regulation, and leaders of top European start-ups such as Mistral and Aleph Alpha have only reluctantly acquiesced to the plan. The rest of the world, especially the US Congress, UK, and Japan, does not seem to be following and likely wants to be seen as more friendly to AI investment and entrepreneurs. The Guidelines for Secure AI System Development, high-level recommendations for safe AI use from the US’s top national security and surveillance partners, appear to be the kind of action with more backing right now.

Supreme Court Rules on Public Officials Blocking People on Social Media

Report from NBC News

In Brief – The Supreme Court issued a pair of unanimous rulings in cases involving government officials blocking people from accessing their social media accounts. Justice Barrett’s opinion outlined a fact-based framework for courts to determine whether a public official’s social media presence was primarily personal or part of their public duties, as well as to consider the nature of the communication in question and whether the method of blocking prevents a person from seeing future public announcements. In short, the Court said that sometimes blocking is permitted, sometimes it is not, and that “it can be difficult” to determine when social media postings are official or private. The two cases involve local officials, one from Michigan, the other a pair from California. The Barrett opinion was rendered in the Michigan case, but both were sent back to their respective federal district courts to be reconsidered in the manner outlined by the justices. Overshadowing the two cases was a similar one involving President Trump blocking people on his Twitter account, in which lower courts ruled against the President but the High Court dismissed the case as moot after Trump was banned by Twitter and then left office. This opinion points to the justices believing that a President blocking users on a platform like Twitter (now X) would not be permitted.

Context – The High Court is considering three issues this term applying the 1st Amendment to social media. Three weeks ago, the justices heard arguments challenging the ability of states, in those cases Florida and Texas, to regulate how social media platforms moderate content. Yesterday brought the third, Murthy v Missouri, a case in which the Court may try to clarify the line between acceptable government use of the “bully pulpit” to encourage social media platforms to make content moderation decisions and pressure to censor-by-proxy that violates the 1st Amendment. The oral arguments revealed a skeptical court overall, particularly toward the argument made by the states that government officials “encouraging” publishers to restrict speech is often unconstitutional, with several justices indicating that such encouragement has long been commonplace with traditional media.

Amazon Wins Dismissal of a Consumer Class Action Focused on Logistics Practices

Report from Bloomberg

In Brief – US District Judge John Chun sided with Amazon and dismissed a prospective federal class action complaint brought by two Prime subscribers alleging that the company undermined competition for ecommerce fulfillment services, causing them to pay more for products they purchased on Amazon. Chun, who is overseeing a series of antitrust cases targeting Amazon’s marketplace practices, including a complaint filed by the Federal Trade Commission last fall, ruled that the consumers had failed to show that they had suffered injury in the market for Amazon’s logistics services, and that they did not effectively argue that the Amazon Buy Box was itself a cognizable market. Instead, the customers for Amazon logistics who may have paid inflated prices were the product sellers. The consumers were “indirect purchasers” and as such are precluded from bringing antitrust suits in federal court. This is the second time the plaintiffs have had their complaint dismissed by a federal judge, and Chun’s ruling closed the door on amending their complaint and trying again.

Context – Judge Chun is learning that Amazon is a unique and complicated hybrid of a third-party marketplace and an ecommerce logistics provider. The relationship between Amazon’s logistics business, how it treats third-party sellers on its marketplace, and how Amazon’s algorithms determine who makes sales is increasingly the focus of antitrust scrutiny, in particular charges that Amazon pushes sellers to buy high-priced logistics as a condition of making large sales on Amazon. In the EU and UK, Amazon has settled antitrust cases promising to break that link. In the US, a series of antitrust complaints allege various harms due to the same set of practices. Chun himself rejected Amazon’s motion to dismiss in a different consumer class action last September, and a suit from the California AG survived Amazon’s motion to dismiss in California state court. Chun is also overseeing the FTC’s Amazon antitrust case, which he recently scheduled for trial in fall of 2026.

EU Platform Labor Directive Deal Wins Enough Support to Cross Threshold

Report from Bloomberg

In Brief – Negotiations to wrap up the EU Digital Labor Platforms Directive regulating Gig-style work platforms appear to have crossed their final big hurdle, and the measure looks set for enactment this spring. The European Commission proposed its draft in December 2021, and the Parliament approved its version last year, but member states remained divided between those looking to strictly regulate Gig-style work platforms and those championing flexible and independent work models. Belgium, holding the Council Presidency and a strong supporter of the Directive, broke the deadlock by dropping the section setting EU-wide standardized criteria for “worker classification” to determine when a person using a platform would be considered an employee of the platform, with employee rights and benefits. Instead, the deal leaves it to national governments to establish their own criteria for the rebuttable presumption that a platform’s workers are its employees, meaning more country-by-country policy battles. Even that compromise appeared to fall short, with France, Germany, Greece, and Estonia not in support, but Greece and Estonia came around at a meeting of EU Labor Ministers, setting the stage for final adoption by the Council and Parliament this spring. Beyond that contentious issue, the Directive also includes new EU-wide rules regulating the use of personal data and algorithms to manage workers, including a prohibition on workers being dismissed by automated systems.

Context – Finding a balance between ending so-called “phony” independent contractor arrangements, while also expanding opportunities for non-traditional workers and giving skilled freelancers flexibility and independence, has proven very challenging. In the US, federal and state legislation has been stymied since California voters exempted Gig-drivers from the state’s worker classification law in 2020. Looking to November, the Biden Administration is pressing regulatory action at the Department of Labor and the FTC that it surely hopes will energize progressive labor advocates but that is not likely to change actual conditions for platforms or workers before votes are cast.

Fast Growing Chinese Online Marketplace Shein Under Review as a DSA VLOP

Report from Reuters

In Brief – EU officials are in contact with Shein, a rapidly growing Chinese-based online marketplace, regarding the likelihood that it will be designated as a Very Large Online Platform (VLOP) under the EU Digital Services Act (DSA). Shein has grown from being a low-price, slow-delivery, “fast fashion” retailer into a hybrid retailer-marketplace platform like Amazon and Zalando, and recently reported 108 million EU users. The DSA regulates how digital platforms moderate illegal and objectionable online content. Platforms with more than 45 million monthly active users are VLOPs. They face a more stringent regulatory regime, including oversight by the European Commission rather than a national regulator, stricter criteria for dealing with objectionable material, and the submission of regular risk assessments. The European Commission initially designated 19 digital platforms, including two search engines, as VLOPs, and added three popular online pornography sites to the roster in December.
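The VLOP designation itself turns on a single user-count threshold. A minimal sketch of that test (the function name and the 30-million figure are illustrative, not from the DSA text):

```python
# DSA VLOP threshold: more than 45 million average monthly active EU users.
VLOP_THRESHOLD = 45_000_000

def is_vlop(monthly_active_eu_users: int) -> bool:
    """Illustrative check: does a platform cross the DSA's VLOP line?"""
    return monthly_active_eu_users > VLOP_THRESHOLD

# Shein's reported 108 million EU users is well past the threshold.
print(is_vlop(108_000_000))  # True
print(is_vlop(30_000_000))   # False
```

By that yardstick Shein’s designation looks like a formality; the open questions are the compliance obligations that follow, not the arithmetic.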

Context – Along with the current 22 VLOPs, six of the largest platform companies are “gatekeepers” under the Digital Markets Act (DMA), which aims to maintain fair and competitive online markets. Amazon is on that list as well. Like the DSA VLOPs, the DMA gatekeepers are regulated directly by the European Commission. And as with the VLOPs, the number of gatekeepers may soon grow, as Booking and X, both already VLOPs, have informed the Commission that they may meet the DMA’s gatekeeper thresholds. The fact that the DMA and DSA rosters are likely to keep growing is another example of how the new EU digital regulatory regime is nothing like a one-off; it is permanent regulatory oversight. Shein surely knows that both Amazon and Zalando have challenged their designations as DSA VLOPs, and that the Commission has responded that the scope of the DSA is very clear, covering all platforms that expose their users to potentially illegal content, including online marketplaces for physical or digital goods, which can be dangerous, fraudulent, or illegal.

Italian Privacy Authority Initiates Review of OpenAI’s Sora Video Generator

Report from Reuters

In Brief – The Italian Data Protection Authority, the Garante, which made news last spring by banning OpenAI’s chatbot phenom ChatGPT in Italy for a month, has announced that it has asked OpenAI to clarify how its new video generating system Sora operates, including how it processes the personal data of users. OpenAI’s video generator has not yet been released to the public but was showcased by the company in February. When the Italian regulator imposed its ban, ChatGPT had already been operating for five months. The questions it is asking the developer about Sora include the timing of its plans to make the service available in Italy or the EU, how the model was trained, whether it plans to collect and use data connected to users once it is released publicly, its legal basis for data processing, whether the company plans to offer the public the choice to not have their data processed, and the planned treatment of sensitive personal data such as that related to religious beliefs, health, and sexuality.

Context – The EU spent five years on the AI Act. The plan was a tiered, risk-based system imposing rules on what AI applications do, not on the underlying technologies. Services would be designated as unacceptable risk, meaning prohibited (such as social credit scoring); high risk, requiring review and pre-approval (such as self-driving cars or medical machines); or limited risk, with transparency and testing requirements (such as touching up photos). Basing regulation on tangible risks is generally applauded. The release of ChatGPT upset that approach, with the EU Parliament adding regulation of general “foundation models”, something initially opposed by officials such as Commissioner Vestager. The final version, expected to be enacted this spring, imposes tougher regulatory burdens on the largest foundation model companies (US-based today) and less on smaller ones (which include the current EU-based leaders). But Italy’s Garante is reminding everyone that while AI Act rules are not scheduled to go fully into effect for more than two years, there are EU regulators on the job now.

The FTC Finally Gets Some Republican Commissioners

Report from Bloomberg

In Brief – For the first time in nearly a year there will be Republican commissioners at the Federal Trade Commission, as the Senate has finally confirmed Andrew Ferguson of Virginia and Melissa Holyoak of Utah to serve as commissioners of the antitrust and consumer protection regulator. The Senate also approved a new term for Democratic Commissioner Rebecca Slaughter. Ferguson and Holyoak each served as Solicitor General of their respective states under Republican Attorneys General, and Ferguson has also served as a counsel to longtime Senate Republican Leader Mitch McConnell. The FTC has not had any Republican commissioners since Christine Wilson resigned last March 31, which followed fellow Republican Noah Phillips stepping down from the agency the previous October. While the three nominations were reported out favorably by the Senate Commerce Committee last October, they were held up by Sen. Josh Hawley (R-MO) over concerns he claimed to have about Ferguson’s commitment to what the Senator called “robust antitrust enforcement”.

Context – The tenure of Lina Khan as FTC Chair has been highly partisan and divisive. Wilson made an especially “noisy exit” from the Commission, harshly criticizing Khan, the other Democratic Commissioners, and the Chair’s senior staff for policy and operational failings. The three Democrats wished her well in her next endeavor and continued stepping into contentious debates on policy issues where Congress has refrained from legislating. Meta accuses the FTC of trying to make policy on behavioral advertising and how social media serves teens, and Kochava accuses the agency of legislating on the use of location data. Add in rulemaking on “unfair methods of competition”, employer non-compete clauses, “commercial surveillance”, Gig work, “all-in pricing”, and merger reviews that appear more aligned with European regulators than US court rulings. Business groups are increasingly critical. With two Republican Commissioners seated, the Democratic majority will now move forward on 3-2 votes rather than 3-0 votes.

Bill Banning Chinese-Owned TikTok Speeds Through the US House

Big 1st Amendment Issues Await
Report from Platform Economy Insights

In Brief – The House of Representatives passed H.R. 7521, the Protecting Americans from Foreign Adversary Controlled Applications Act, by a vote of 352 to 65. The bill requires Beijing-based ByteDance to sell TikTok’s US operations to an owner that is not based in China (or Russia, Iran, or North Korea) or face being banned from app stores, such as Apple’s and Google’s, and web hosting services, in the United States. The measure moved very rapidly through the House. TikTok and its team of US-based advisors and advocates were reportedly surprised by the pace of the effort, and TikTok’s response of helping its users contact their Members of Congress, a now common “grassroots” lobbying tactic by internet companies, backfired, energizing support for a ban among some House Members because it seemed to confirm that the app could be used as a political influence tool by a company some argue is under the sway of the Chinese Government based on the location of its parent company. The bill faces a less clear path in the Senate, including some very vocal opponents such as Sen. Rand Paul (R-KY); a group of Senators, such as Mark Warner (D-VA) and Marco Rubio (R-FL), with long-held concerns over TikTok’s Chinese ownership who may have their own views on the legislative specifics; and the continued lack of clarity regarding the views of former President Donald Trump, whose administration tried to ban the app from app stores in 2020 but who has made recent comments that appear to oppose the latest effort.

Context – Many security officials have long raised two major concerns with TikTok being owned by a Chinese-based parent: (1) personal data collected by the app might be accessed and exploited by Chinese authorities, and (2) the service, which is increasingly a major platform for news and policy debates in the US, could be influenced by the Chinese regime to push various issues and messages. Focus on the second concern. Foreign actors can already buy huge volumes of personal data on Americans, and hacking by state actors happens. But the media influence of TikTok is unprecedented for a platform connected to a foreign policy adversary. This is also the issue that raises the biggest 1st Amendment questions. The Trump Administration tried to ban Chinese-based TikTok and WeChat. WeChat is an active participant in China’s digital censorship and influence regime. The Trump efforts were challenged by TikTok, TikTok users, and WeChat users. Federal judges sided with each, largely on 1st Amendment grounds. Similar issues will be central to the eventual legal challenges to this legislation, or even a CFIUS order to divest. It is an interesting question whether a major US media outlet under the thumb of the USSR during the Cold War would have been protected by the 1st Amendment. Remember, there was a US Communist Party during the Cold War, and it had a daily paper. It just wasn’t that popular. These are serious constitutional questions.

Gatekeepers Turn in Their DMA Assignments – Investigations Expected

Report from Reuters

In Brief – The Digital Markets Act, which currently regulates 22 large platforms of six digital “gatekeepers”, imposes 18 competition law “Do’s and Don’ts” on the digital giants, such as requiring interoperability with competitors and simple data portability, while prohibiting practices like self-preferencing or requiring users to have their data shared between different services. March 7 marked a key deadline, with the companies submitting their initial compliance reports to the European Commission “describing in a detailed and transparent manner the measures they have implemented to ensure compliance” with the obligations. The Commission has posted summaries of the reports. Based on the tenor of stakeholder reactions to company DMA announcements, Apple, Google, and Meta seem especially ripe for investigation.

Context – Of course, there will be “investigations”. The unique DMA regulatory structure required the companies to determine for themselves that their platforms comply with the often general requirements of the law, including explaining changes that they believe were needed to do so. They all say they have done so. European Commission staff now have a year to review the plans, including highly technical changes and justifications. These are 22 of the largest digital platforms in the world. They are all highly complex and all try to balance countless competing interests. The Commission, with a reported 80 officials dedicated to the task (just 80!), will apply its own technical expertise, but it is certain to gather, even lean on, lots of feedback from third-party stakeholders, including various competitors, customers, and advocates. Then the Commission’s regulators will sign off on a company’s plan, or more likely, prod the giant to make further or different changes. The DMA was borne of European frustration with decade-long antitrust cases, especially involving Google. The new system basically jumps forward to an ongoing remedies phase, with the Commission given a year to get agreement or mete out penalties. The companies can then challenge those outcomes in court. So a faster pace, but still years. And new reports come out annually, so an endless process.

Florida Legislature Sends the Governor a Revised Social Media Age Limit Bill

Report from the AP

In Brief – As expected, the Florida legislature has sent Governor Ron DeSantis a revised bill to prohibit social media services from allowing teen users under age 16 without parental permission. The state’s House and Senate quickly passed an amended version after the Governor vetoed the initial bill that set a strict 16-year-old age limit for social media. DeSantis wanted to add a parental consent option for teens aged 14 and 15, as well as soften the bill’s age verification requirement. It is not clear which social media platforms are covered by the new bill. To be covered, 10% or more of a platform’s daily active users must be younger than 16, and those under-16 users must average at least two hours per day on the service. Speculation is that Facebook, X, Instagram, and Reddit don’t meet the 10% test, and while Snapchat and TikTok might, there is no readily available data on the average time spent on the services by under-16 users. The revised bill dropped the requirement that social media companies all adopt an age verification system, although it does require platforms that contain a “substantial portion of material harmful to minors”, generally considered pornography sites, to verify each user’s age and block those under age 18. The bill’s sponsors are hopeful that the revised version is better positioned to survive expected court challenges.
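The bill’s two-part coverage test can be sketched as a simple check (the function and example figures are hypothetical, and the statutory wording may differ in detail):

```python
def florida_bill_covers(daily_active_users: int,
                        users_under_16: int,
                        avg_daily_hours_under_16: float) -> bool:
    """Illustrative reading of the coverage test: 10% or more of daily
    active users under 16, with those users averaging 2+ hours a day."""
    if daily_active_users == 0:
        return False
    share_under_16 = users_under_16 / daily_active_users
    return share_under_16 >= 0.10 and avg_daily_hours_under_16 >= 2.0

# A platform where 20% of users are under 16, averaging 2.5 hours: covered.
print(florida_bill_covers(10_000_000, 2_000_000, 2.5))  # True
# One where only 5% of users are under 16: not covered, whatever the hours.
print(florida_bill_covers(10_000_000, 500_000, 3.0))    # False
```

As the speculation above suggests, the second input is the hard one: average time-on-service figures for under-16 users are not publicly available.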

Context – Efforts to cordon off parts of the internet, especially social media platforms, from teens are only gathering steam, despite the lack of data linking social media use to worse mental health among teen users. The Florida legislature’s first effort was interesting because it was a blanket prohibition under age 16. The version that will become law will instead simply get into the litigation queue with laws from states like Arkansas, Ohio, Utah, Texas, and California. Federal judges have not been sympathetic to the state social media regulation efforts, with concerns on both 1st Amendment and privacy grounds. The Florida bill’s backers can take hope for one part of their bill, with a recent 5th Circuit Court of Appeals panel restoring a Texas law establishing an 18-year-old age limit with age verification for online pornography sites.

Congressional Bill Aims to Drive Chinese-Owned TikTok Out of US App Stores

Report from CNN

In Brief – Bipartisan legislation to pressure Chinese digital giant ByteDance to divest its US TikTok business was passed by the House Energy and Commerce Committee by a unanimous vote following a closed-door session to discuss classified national security information. The Protecting Americans from Foreign Adversary Controlled Applications Act imposes a civil penalty of $5,000 per download on app stores, such as those operated by Apple and Google, that permit downloads of TikTok if it remains part of ByteDance, with the US Attorney General authorized to bring enforcement actions. The legislation gives TikTok’s parent company 180 days to divest its US business before the penalties for the app stores and hosting services come into effect. The legislation also gives the President the authority to apply the same treatment to any social media website, desktop application, mobile application, or augmented or immersive technology application that has over one million US users if the service or its parent company is based in China, Russia, Iran, or North Korea. It may reach the House floor this week.
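The penalty structure makes app-store exposure easy to estimate. A back-of-the-envelope sketch (the download figures are hypothetical):

```python
# Civil penalty, as described above: $5,000 per download an app store
# permits after the 180-day divestment window closes.
PENALTY_PER_DOWNLOAD = 5_000  # USD

def app_store_exposure(downloads_after_deadline: int) -> int:
    """Total civil-penalty exposure for a non-compliant app store."""
    return downloads_after_deadline * PENALTY_PER_DOWNLOAD

# Even one million post-deadline downloads would mean $5 billion.
print(app_store_exposure(1_000_000))  # 5000000000
```

Which is the point: the penalty is sized to make carrying an undivested TikTok untenable for Apple and Google, not to be litigated download by download.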

Context – Federal judges and the 1st Amendment remain the biggest hurdles. When the Trump Administration banned TikTok and WeChat in 2020, federal judges said no. TikTok won an injunction. So did a TikTok content creator. Even WeChat users won an injunction on 1st Amendment grounds, and WeChat is a part of the Chinese internet censorship regime. TikTok insists it is not. Most recently, the Montana state law banning TikTok downloads was blocked. Congress legislating does not dispense with the 1st Amendment problems. And pushing a bill through Congress remains a big hurdle, especially with public support for a ban receding, although TikTok encouraging large numbers of users to call their Members of Congress did not help in committee. Then throw into the mix that some big US investors have deep conservative ties and that Donald Trump announced he opposes the bill because it would help “enemy of the people” Facebook. Go figure. The years-long CFIUS review of the TikTok acquisition that created the service, begun by the Trump Administration, is probably still the company’s biggest threat.

Judge’s Ruling Makes 3 — Google to Face Ad-Buyer Class Action Suit

Report from the Reuters

In Brief – A federal judge in New York has ruled that Google must face a class action lawsuit brought by small online advertisers that claims the digital giant monopolized the ad exchange market. Judge Kevin Castel narrowed the case by dismissing a number of related claims, including those focused on ad-buying tools used by large advertisers, and a claim that Google conspired with Facebook to give the social networking giant preferential access to advertising market data through a program called “Jedi Blue”, but said the plaintiffs “plausibly allege antitrust standing as to injuries they purportedly suffered from anti-competitive practices in the ad-exchange market and the market for small advertisers’ buying tools.” The private class action suit will now join the queue of other antitrust litigation targeting the biggest company in the digital advertising industry.

Context – Google is in the middle of a gauntlet of federal antitrust trials. Their first result went very badly. A jury in California sided comprehensively with Epic Games, ruling that Google violated antitrust law in the operations of its Android app store and in-app payments service. Judge Castel’s ruling returns the focus to a series of antitrust complaints targeting Google’s advertising businesses, which have always been their top moneymaker. They now face major adtech antitrust suits in three Federal Circuits. The United States Department of Justice, led by a head of the Antitrust Division who is a long-time Google adversary, filed an adtech-focused antitrust complaint in January 2023 that is scheduled for trial in Virginia in September. An adtech-focused antitrust complaint filed by a coalition of State AGs led by Texas, caught up in a venue battle since late 2020, was finally assigned to federal court in Texas last year and is scheduled for trial in spring 2025. In Europe, Google is the subject of a similar antitrust investigation of its advertising business by the European Commission and was recently hit by a $2.3 billion antitrust lawsuit from a coalition of 32 news media companies for illegal adtech practices.

Canadian Online Harms Act Aims to Protect Kids and Target Hate Speech

Report from the BBC

In Brief – Years after promising legislation to better address online harms in Canada, the Government of Prime Minister Justin Trudeau has unveiled the Online Harms Act targeting seven types of objectionable content on social media sites, live-streaming services, and adult content platforms. The bill targets content that sexually victimizes a child, bullies a child, induces a child to harm themselves, communicates intimate content without consent, foments hatred, incites violence, or incites violent extremism or terrorism. In addition to addressing online harms, the bill amends Canadian criminal law to make it a separate offence to carry out a crime motivated by hate, amends the Canadian Human Rights Act to classify hate speech as discrimination, and sharply raises the penalties for those found guilty of advocating or promoting genocide, increasing the maximum sentence to life in prison. The leader of the New Democratic Party indicated that the party would support the measure, while the Conservative Party leadership criticized the Liberal Prime Minister’s “woke authoritarian agenda” and effort to regulate so-called “hate speech” which they describe as speech he does not like.

Context – The EU is committed to being the global leader, at least among democracies, in regulating digital markets, platforms, and now, AI. Their DSA directs platforms on how to address objectionable content, their DMA reorients digital competition policy by replacing antitrust enforcement with regulation, and they are on the cusp of enacting their AI Act. The UK may criticize EU regulation, but it recently enacted its Online Safety Act, a take on the DSA, and the Sunak Government has proposed a DMA-like Digital Markets, Competition and Consumers Bill. Canada is really playing catch-up with online harms regulation. They are in a similar place on Digital Services Taxes. Countries like France and the UK are collecting DST revenues while Canada threatens to implement a DST in the face of threatened US retaliation. At least they can learn from European experiences.

Another OpenAI Copyright Suit from Media Companies

Report from the New York Times

In Brief – Three online-only media outlets have sued OpenAI in federal court in New York for copyright infringement in the training of its AI chatbot. The Intercept, Raw Story, and Alternet add to the steady stream of lawsuits being filed by media companies, authors, artists, and musicians accusing OpenAI, its primary corporate backer Microsoft, and other AI leaders of violating numerous provisions of federal copyright law. The lawsuits argue that OpenAI scraped huge volumes of copyrighted content off the internet without authorization from, or compensation to, the copyright owners, and used it to build commercial services that replicate the copyrighted works without attribution or compensation for the copyright holders. The progressive publishers are seeking damages of at least $2,500 per violation, as well as asking OpenAI to remove all their copyrighted articles from its training sets. OpenAI says it is committed to building an AI ecosystem that works for copyright holders and argues that including copyrighted material in AI training data is permitted under fair use.

Context – As copyright holders keep filing AI suits filled with voluminous claims of infringement, federal judges are narrowing them down to what should be a battle royale over training and the “fair use” exception. If there is a second key consideration, it involves whether an AI system produces actual copies in its outputs. These latest publishers do not make that claim, but the biggest publisher suing OpenAI, the New York Times, does. OpenAI’s response is that the Gray Lady intentionally misused the service in violation of the terms of use to cause it to create copies. Now, for something completely different on the OpenAI litigation front, multibillionaire Elon Musk, an initial founder, and funder, of OpenAI in 2015, has sued OpenAI and its CEO (and fellow founder) Sam Altman in California court for breach of contract, claiming they violated the startup’s founding mission of creating open-source AI technology for the benefit of humanity, instead turning OpenAI into a “closed-source de facto subsidiary of the largest technology company, Microsoft.”

European Commission Looking Into Meta’s Ad-Free Subscription Plans

Report from TechCrunch

In Brief – Regulatory scrutiny of Meta’s digital advertising business model now includes the European Commission requesting information on its “Subscription for no Ads” options for Facebook and Instagram. The inquiry comes under its authority as Meta’s regulator under the Digital Services Act (DSA), which is primarily focused on how platforms deal with objectionable content but also establishes transparency rules for digital ad systems and bans targeted advertising based on some special categories of personal data. Meta instituted paid, ad-free versions of Facebook and Instagram in Europe last fall to comply with a range of EU laws, including the General Data Protection Regulation (GDPR), the DSA, and the Digital Markets Act (DMA). Meta’s targeted ad business model has faced years of legal challenges under the GDPR from EU privacy advocates, and eight consumer organizations have filed new complaints with national privacy authorities arguing that the “pay or consent” plans violate GDPR principles of fair processing, data minimization, and purpose limitation, and that 9.99 euros per month is unfair to low-income users.

Context – Fights over “behavioral” advertising are highly ideological. Backers argue that efficient targeting helps small businesses and those with small budgets. Many opponents see effectiveness as a sign that unwary users are manipulated. One conundrum is that online users tell pollsters that they strongly support advertising “privacy”, but few will pay to avoid ads. GDPR challenges keep ending up in front of EU judges, and Meta’s subscription plan is based on a reading of court decisions that point to setting up paid services as a legally valid alternative to “free” ad-based options. At the same time, the Commission is now in the regulatory mix with its direct authority over Meta. Frustration with the Irish DPC and the GDPR One Stop Shop is a major reason the biggest platforms are regulated by the Commission under the DSA and DMA. Given the arguments around the price of Meta subscriptions, it will be interesting to see if the Commission steps in as old-school price regulators.

Promising Better, DeSantis Vetoes Bill Setting Strict Social Media Age Limit in Florida

Report from Politico

In Brief – Florida Governor Ron DeSantis vetoed legislation passed by the state legislature that would have set a strict 16-year-old age limit to have an account on a social media service in the state, promising that a “superior” bill would be enacted soon. The measure that overwhelmingly passed the State House and Senate did not follow the model of social media bills enacted by states like Utah, Texas, and Ohio, which each provided an option for parents to consent to their teens using social media services. Instead, it simply banned the services for those under age 16 and required the covered platforms to use a “reasonable age verification method” to ensure that new and existing users met the age standard. Governor DeSantis called for legislation to protect kids while also “supporting parents’ rights and maintaining the ability of adults to engage in anonymous speech.” Operating under a tight legislative schedule in the state’s 2024 legislative session, an amended version of the legislation is quickly advancing that allows parents to give consent for 14- and 15-year-olds to have social-media accounts, as well as doing away with the requirement that platforms adopt age-verification technology. Backers admitted that the revised bill was likely to be better positioned to withstand expected First Amendment court challenges.

Context – Data showing causality between teen mental health and social media use is thin. But efforts to cordon off parts of the internet from teens are only gathering steam. Congressional legislation has bipartisan backing and appears to be advancing, although it may be running into new procedural challenges. US States under Republican leadership were quick off the mark, but California and New York are also engaged. As the backers of the Florida bill know, the First Amendment is proving a tough hurdle in US courts, including concerns with the seemingly selective nature of the platforms being implicated and judicial wariness of age verification. The design negligence class actions trying to circumvent Sec. 230 might still prove the most impactful efforts to change the social media platforms in the US.

Apple’s Huge EU Fine Sets the Stage as the DMA Comes Online

Report from the Wall Street Journal

In Brief – The European Commission (EC) has fined Apple over 1.8 billion euros for abusing its dominant position in the market for the distribution of music streaming apps to iPhone and iPad users through its App Store. The decision focused on Apple’s “anti-steering provisions” that prevented app developers from informing their users that there were cheaper ways to buy services. The case originated in 2019 with a complaint from Spotify. The eventual fine exceeded estimates, and Commissioner Margrethe Vestager noted that the penalty went far past what a traditional calculation would have arrived at, saying, “if you are a company who’s dominant, and you do something illegal, you will be punished.” Apple said that the Commission failed to “uncover any credible evidence of consumer harm” and it will appeal. The CEO of Epic Games, a notorious Apple critic, decried Apple’s “bitter griping.”

Context – Getting this decision out the door just days before the Digital Markets Act (DMA) comes online is clearing the decks and sending a message. The DMA is the big regulatory stick. It provides some specificity with 18 mandates, but its enforcement mechanisms are nebulous. The “gatekeepers” are charged with determining how they bring their covered platforms into compliance. Then a chorus of stakeholders critique and complain, with the Commission staff weighing in, trying to arrange changes, and potentially fining the giants. Key historical context includes the EU trying to avoid the decade-long pace of the Google antitrust cases and avoiding the anger of Big Tech critics who criticized the beloved GDPR for not quickly hitting tech giants with mega-fines. So, the DMA basically dispenses with the initial phase of Big Tech antitrust cases, creating a permanent “remedies phase”. But the outcomes won’t likely be fast or clear. Apple’s complex DMA plan is eliciting howls from giant app developers who just want far lower fees. Google’s plans to change search results pages are dividing potential winners and losers. Privacy advocates strongly oppose Meta offering ad-free paid subscriptions as part of its compliance plan.

Germany Believes the DMA Needs a Supervisory Fee Like the DSA

Report from Reuters

In Brief – Germany is proposing that a funding mechanism be created to provide the European Commission with the resources needed to carry out its ongoing enforcement activities under the Digital Markets Act (DMA), which is set to regulate the 22 largest platforms of six Big Tech companies – Amazon, Apple, ByteDance, Google, Meta, and Microsoft. The law directs the companies to bring their covered platforms into compliance, and the European Commission is assigned the task of determining the adequacy of the plans. A team of Commission officials from across several bureaus is said to be engaged. The German Government is proposing the creation of a DMA supervisory fee modeled after the one already in place to fund the Commission’s compliance activities under the Digital Services Act (DSA). The DSA requires the Very Large Online Platforms (VLOPs) regulated by the Commission to pay 0.05% of their annual worldwide net income to fund that oversight.
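The DSA fee Germany wants to copy is a flat percentage of net income, which is also the source of the controversy over loss-making VLOPs. A sketch of the formula (the company figures are hypothetical, and the actual DSA fee rules include further caps and details):

```python
# DSA supervisory fee: 0.05% of a VLOP's annual worldwide net income.
DSA_FEE_RATE = 0.0005

def dsa_supervisory_fee(annual_worldwide_net_income: float) -> float:
    """Fee owed to fund Commission oversight; a platform with zero or
    negative net income owes nothing under this structure."""
    return max(annual_worldwide_net_income, 0.0) * DSA_FEE_RATE

print(dsa_supervisory_fee(20_000_000_000))  # 10000000.0 on $20B of profit
print(dsa_supervisory_fee(-1_000_000_000))  # 0.0 for a loss-making platform
```

Basing the fee on profit rather than, say, EU revenue is exactly the design choice Meta has challenged: low-profit giants contribute little or nothing.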

Context – Both the DMA and the DSA create ongoing regulatory regimes to govern digital platform businesses. With both, regulating the largest platforms is assigned to the European Commission, avoiding the GDPR’s One Stop Shop model. Leave it to the Germans to speak plainly. It was always odd that the DSA had a regulatory funding mechanism while the DMA did not, as if overseeing how large platforms deal with objectionable material is a meaningfully more arduous task than determining if mega-giants comply with non-discrimination and fair competition mandates on their complicated platforms. Not that the DSA fee has been without controversy. Meta has challenged the DSA funding mechanism because it is based on company profits, and a number of the DSA VLOPs, including Amazon and X, show minimal if any profits. So, they barely contribute, or not at all. The DMA gatekeepers are a more profitable lot, although Amazon is there again. Also, the pool of gatekeepers might be growing, as Booking and X have reportedly notified the Commission that they may have hit the DMA’s size thresholds.

Australian Media Companies Concerned Meta Will Drop News Payments

Report from The Australian Financial Review

In Brief – As Australian news media leaders look toward negotiating a second round of agreements with Google and Meta over payments under the once-novel Australian News Media Bargaining Code, there is increasing concern that Meta might walk away from the regime that currently has them paying $70 million AUD to the country’s media companies. Although both Google and Meta (then called Facebook) threatened to block news-related services during the tense legislative negotiations in 2021, and Facebook did block news for a few days, the measure was eventually enacted, and both firms negotiated enough media payment deals to avoid the arbitration process. In aggregate, those confidential payment agreements amount to $200 million AUD, with Meta carrying $70 million. They are set to be renegotiated in 2024, but despite outreach from the media companies, Meta is reportedly not engaging in a meaningful way and recently announced that they are ending the Facebook News service in the country. There are growing concerns that the company will choose to follow their playbook from Canada, where Meta blocked news content on their platforms after the country enacted similar legislation. Another possible change would push TikTok into the regime. Although the law was enacted under the prior government, the Albanese Government has said that they are committed to supporting the news media sector.

Context – The Australian media companies should be concerned that Meta will drop news from their platforms. Facebook’s move in 2021 did not go well technically, with their tools blocking more than news, but they appear to have improved the process in Canada, where news media company content has been blocked for months without any apparent harm to the platforms’ usage. Meta now consistently takes the position that they will not pay when third parties, including media companies, post news content, and that they will pay only when Meta itself does the posting. They argue that the media companies undervalue Meta’s distribution platforms and overvalue their content. In Canada, Google eventually agreed to pay $100 million CAD rather than excise media content from search results. If TikTok is brought into the Australian regime, it will be a first.

DMA Compliance Team Looking at Apple Plan to Ax Progressive Web Apps

Report from the Financial Times

In Brief – The European Commission has begun preliminary work that could lead to a formal investigation of Apple’s decision to end support for Progressive Web Apps (PWAs) as part of their massive Digital Markets Act (DMA) compliance plan. PWAs allow companies to build applications that run as webpages but install with an icon on a mobile user’s home screen, bypassing traditional app stores. Apple has allowed some PWAs in the past, but they are disabling them as part of their DMA changes, claiming that the unrelated DMA requirement to allow alternative web browsers can create privacy and security risks from malicious PWAs operating over third-party browsers. They said that building the technical capabilities to deal with PWAs over multiple browsers is too complex an undertaking at this stage. The company said that they regretted any negative impact on PWA developers and “a small number of users”, and that EU users would still be able to access websites directly from their Home Screen through a bookmark “with minimal impact to their functionality”. Open Web Advocacy, a not-for-profit advocacy group, has been organizing opposition to Apple’s policy, claiming that its true intent is to undermine a potentially popular alternative to App Store downloads, which bring Apple significant fees.
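The installable-webpage behavior described above is driven by a small web app manifest that a site serves alongside its pages; a minimal, hypothetical example (all names and paths are illustrative):

```json
{
  "name": "Example News Reader",
  "short_name": "NewsReader",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Browsers that support PWA installation read this manifest to place the icon on the home screen and launch the page without visible browser chrome (the “standalone” display mode), which is what makes a PWA feel like a native app rather than a bookmark.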

Context – Among the DMA’s 18 mandates are non-discrimination and interoperability requirements forcing Apple to open its “walled garden”. Apple has never been shy about how they control the ecosystem, telling users in clear terms that they design the system to promote privacy, security, and controlled user experiences. The headline DMA issue for Apple has been allowing alternative app stores and “sideloading”. Apple claims their DMA plan meets that demand, but the plan includes new fees and safety standards, and Apple will vet apps taking advantage of the new openness. Apple’s critics, including many of the largest app developers, are denouncing the plan. Apple’s plan will be a high-profile test of the nebulous DMA enforcement processes. With PWAs, we’ll be watching whether Commission staff will tell Apple’s engineers that their security concerns are off base.

KCC Tells Twitch “Don’t Let the Door Hit You on the Way Out”

Report from Yonhap News Agency

In Brief – Twitch, the popular live video streaming service owned by Amazon and widely used by gamers, has been fined 435 million won ($327,067) by the Korea Communications Commission (KCC) for changes the service said it made to deal with rising bandwidth costs that the company attributed to the high “network usage” payments mandated by South Korean law. The country has a unique telecom payments regime requiring online service providers to pay local network companies based on the bandwidth that their customers use on the online services. As those costs rose, Twitch reduced video quality from 1080p to 720p in 2022 and terminated its Video on Demand (VOD) offering in early 2023. In December, the company announced that it was leaving the market entirely due to the fees. As the service prepares to shut down, the regulator imposed fines for ending the VOD service, reducing video streaming quality without proper justification, and failing to implement a system to block the distribution of illegal content. The KCC also ordered Twitch to prepare various user protection measures, including refunds, to accompany the ending of its operations in Korea.

Context – Twitch executive Dan Clancy said that operating in South Korea was “prohibitively expensive” with network usage fees 10 times greater than in other countries, and that the company saw “no pathway forward for our business to run more sustainably”. Nevertheless, the decision to exit South Korea, a major video game and competitive “esports” market, was a surprise to many. Digital giants have rarely walked away from a meaningful market due to costs imposed by legislation or regulators. Amazon may be more willing than most. They recently abandoned their offer to buy robot vacuum company iRobot rather than make marketplace changes to appease EU officials. The highest profile recent example is undoubtedly Meta’s willingness to stop the circulation of news on Facebook and Instagram when a government mandates payments to news media companies, as they’ve done in Canada and are threatening elsewhere.

Meta Sets Up Disinformation Office to Protect This Year’s EU Elections

Report from Reuters

In Brief – Meta has announced that it is ramping up its efforts to address disinformation and other risks related to the upcoming European Parliament elections, including setting up an EU Elections Operations Center. The company’s efforts are guided by the Digital Services Act (DSA) and commitments under the EU Code of Practice on Disinformation. Combating misinformation will include banning ads that contain “debunked content” as well as those that discourage people from voting or question the legitimacy of the elections, priorities that align with the EU law on political advertising. Meta currently works with 26 fact-checking organizations across the EU and will add three new partners. Many see election security risks increasing with the recent growth of AI services available to the public, and Meta said that it plans to reduce the ranking of fake or manipulated audio, videos, or photos in social media feeds so fewer people see them. The company’s own AI services label the photorealistic images they create, and Meta is working in a coalition with 19 other tech companies, including OpenAI, Microsoft, Adobe, TikTok, and X, to keep deceptive AI-generated content from interfering with elections globally.

Context – The DSA went into effect in late August 2023 for the official Very Large Online Platforms (VLOPs). Currently numbering 23, they are regulated centrally by the European Commission, with Thierry Breton serving as lead regulatory stick wielder. Within days of the brutal Hamas attack on Israel and the corresponding Israeli military response, he publicly accused X, Meta, and TikTok of falling short of their duties under the new law to control illegal content and disinformation, raising legitimate concerns that the DSA was already a tool to censor controversial views. He has since opened a formal investigation of X over its content moderation practices. Breton’s public letter to Meta CEO Mark Zuckerberg in October also included a lengthy exposition on Meta’s responsibilities to deal with disinformation and so-called deep fakes in the context of elections. Fortunately, nobody questions the objectivity of fact-checkers.

Ask A Question!