Archive – 15 Sep 2023

August 2023

Facebook and Instagram to Allow Chronological Feeds in EU (Like TikTok)

Report from TechCrunch

In Brief – With the EU’s landmark Digital Services Act (DSA) coming online August 28th, Meta has announced that European users of Facebook and Instagram will be offered the option of receiving non-personalized content feeds, bypassing the algorithmic feeds currently provided to users on both platforms. The DSA imposes new rules for moderating a wide range of illegal and objectionable online content, prohibits certain advertising practices, and requires the sharing of data with researchers and authorities. The 19 largest platforms, including TikTok, Amazon, Apple, Google, Meta, Microsoft, and X (formerly Twitter), face the most stringent standards. One specific requirement is that users of the largest platforms must be given the option of receiving content that is not based on tracking or personalized data, such as content displayed in reverse chronological order or ranked by local popularity. Meta’s changes follow a similar announcement from TikTok that it would offer EU users the ability to “turn off personalization” behind the videos in their “For You” feed.

Context – The EU is relentlessly putting in place a digital platform governance framework with wide-ranging rules and mandates enforced by regulators. The level of regulatory oversight reminds many of the financial sector. And as with big banks, EU regulators have been carrying out “stress tests” on the largest platforms. With banks, regulators try to project how the institutions would handle various economic or business shocks. However, unlike in the long-regulated banking sector, standards and practices for various kinds of digital platforms are not widely established, so the process is being developed on the fly. Some advocates of free speech and civil liberties are concerned about governments directing platforms on how to police speech, as well as the prospect that platforms could be shut down during times of political unrest. European Commissioner Thierry Breton has responded by claiming that the DSA “is the most progressive rulebook in the world for governing the responsibilities for online platforms.”

News Organizations Developing Policies on How They Deal With AI Chatbots

Report from Axios

In Brief – Leading news businesses are adopting policies governing their use of AI chatbots and image-generating services. Most allow some use of AI to assist in writing under editorial supervision and apply extra scrutiny to AI-generated images and video. News media businesses that have released policies include the Associated Press, The Guardian, Insider, and Reuters. The organizations are generally announcing principles rather than hard and fast rules, making clear that they believe their writers are fully responsible for the accuracy of their work. As the AP standards note, material from generative AI “should be treated as unvetted source material”, and Insider tells its writers that they are “responsible for the accuracy, fairness, originality, and quality of every word in your stories.” Media companies are also considering a range of policies to govern and monetize the use of their stores of written material to “train” AI systems. AP has signed a deal with OpenAI to allow its content library to be used in training, while the New York Times is reported to be considering joining the legion of IP rights owners taking legal action against OpenAI for using material for training without consent.

Context – A key lesson since the public release of ChatGPT is that generative AI tools are nothing like traditional databases or internet search engines. They don’t store and return fixed data. Instead, they compile realistic, human-sounding responses to queries by determining which fragments of text best follow other sequences, all based on a statistical model that has ingested and processed billions and billions of examples, often pulled from all over the internet. They often deliver accurate facts, credible ideas, and responses that seem sensible or creative. But being “correct” is not the goal of the tools, and we’ve all heard plenty about chatbot “hallucinations”. Realistic-sounding fabrications, such as those that filled a ChatGPT-prepared legal brief that led to two lawyers being sanctioned in US District Court, or those at the center of chatbot defamation liability concerns, are basically a design feature rather than one-off mistakes.
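To make that point concrete, here is a minimal, purely illustrative Python sketch (a toy word-pair counter with made-up example text, nothing like a production LLM): it counts which word tends to follow which in its “training” text, then generates fluent-sounding continuations by sampling from those counts, with no notion of whether the output is true.

import random
from collections import defaultdict, Counter

# Toy "training data" standing in for the internet-scale text real models ingest.
corpus = (
    "the court ruled that the law applies to large platforms "
    "the regulator said that the law applies to online ads "
    "the company said that the court was wrong"
).split()

# Count which word follows which (a bigram table, vastly simpler than an LLM).
follows = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[prev_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Produce a fluent-sounding continuation by sampling likely next words."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Sample in proportion to how often each word followed the previous one.
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the court ruled that the law applies to online" -- grammatical and
# plausible, generated with no check on whether it is factually correct.

Real systems model far longer contexts with neural networks rather than word-pair counts, but the underlying objective is the same: produce a likely continuation, not a verified fact.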

Microsoft and Activision Make “New” Offer Expected to Resolve CMA Opposition

Report from Reuters

In Brief – Microsoft and Activision continue to move toward what is increasingly expected to be approval of Microsoft’s $69 billion acquisition of the giant videogame developer by presenting a modified deal to the UK regulator that rejected the tie-up in April. Despite that setback at the hands of the Competition and Markets Authority (CMA), the European Commission approved the deal in May with conditions based on Microsoft committing to give competitor platforms access to top Activision games, and a US Federal Judge then rejected the Federal Trade Commission’s effort to halt the deal. To facilitate a new merger review process by the CMA, Microsoft and Activision submitted a “substantially different” deal to the competition authority, in particular proposing to sell French videogame publisher Ubisoft Entertainment the cloud-streaming rights for Activision games. Ubisoft will have exclusive rights outside the European Union to sell cloud-streaming rights to existing and new Activision games for 15 years, and non-exclusive rights to streaming sales inside the EU. The CMA has now set a deadline of October 18 for an initial ruling on what it considers a new deal, the same date the companies have set for their deal closing.

Context – Antitrust regulators in the US, EU, and UK have expressed concerns about the digital giants for years, including deep skepticism over acquisitions. Microsoft-Activision has been the biggest deal in that time frame and so promised to test everyone involved. In the end, it highlighted the increasingly disjointed game of sequential competition policy roulette. The CMA rejected Microsoft-Activision but then found itself isolated, with the UK Government clearly feeling pressure over the perception of anti-tech regulatory policies. The months-long walk-back seems highly scripted, with the CMA completely reaffirming its initial rejection and the companies having a “new deal” ready to go immediately. Expect approval. Meanwhile, the European Commission has taken three harder-line tech antitrust actions since it was key to turning the tide for Microsoft in May. The US FTC court losses are another story.

Federal Judge Backs US Copyright Rule that AI Works Cannot Gain Copyright

Report from Politico

In Brief – Federal District Court Judge Beryl Howell has affirmed the determination of the US Copyright Office that works created entirely by an AI-enabled digital system cannot be awarded a copyright. The case was brought by computer scientist Stephen Thaler, who owns an AI computer system that created a piece of visual art called “A Recent Entrance to Paradise”. He attempted to have the AI’s work granted copyright, and then to have the copyright transferred from the AI system to himself. The Copyright Office had refused the initial grant of the copyright. Judge Howell agreed with the Copyright Office that human authorship is “a bedrock requirement” of copyright, and ruled that absent granting of the initial copyright, the issue of transferring the copyright to Thaler was moot. Thaler has attempted to expand intellectual property protections for computer-generated works before, including unsuccessfully challenging the US Patent and Trademark Office policy of refusing to grant a patent to a computer-made innovation.

Context – Judge Howell’s ruling continues the clear trend that, absent a change in law, the answer to whether AI systems can be granted IP rights for new works is likely to be no. One big unanswered question is how much human involvement in co-creating with an AI system will be enough to qualify. While the plaintiff continues to try to win IP protection for work he explicitly claims was created solely by his AI system, artists and writers are certain to increasingly use AI systems as they do other digital tools. How much is too much, and how do you prove how much the AI contributed? The issue of copyrighted works being used to train neural networks that then create works that are not copies is gaining attention as some copyright owners argue that training is a copyright violation and demand payment. Many analysts believe that using copyrighted works in training is likely to be considered “fair use” in the US, although that will be up to courts to decide. Using AI to mimic but not copy an artist or writer might be better addressed by creating a federal Right of Publicity rather than amending copyright law.

UK Researchers Show ChatGPT Leans Left on Public Policy Responses

Report from the Washington Post

In Brief – Researchers at the University of East Anglia in the UK have published a paper claiming that ChatGPT has “a strong and systematic political bias” and is clearly inclined to the left side of the political spectrum in the three markets studied: the US, UK, and Brazil. The researchers asked the chatbot to answer a battery of public policy questions from a range of ideological perspectives, as well as without any ideological prompting. The unprompted default results were then compared to the responses the chatbot crafted when asked to adopt an ideological perspective. The chatbot’s supposedly neutral default responses consistently leaned to the left in all three jurisdictions. The researchers pointed to bias in the training data and bias in the algorithm itself as potential causes, claiming both likely contribute. They also asked the chatbot about the source of its biased responses and noted ChatGPT “was categorical in affirming that every reasonable step was taken in data curation, and that it and OpenAI are unbiased.”

Context – The potential for political “bias” in AI-enabled chatbots became a point of contention soon after ChatGPT crashed into the public consciousness. Political bias is just one example of objectionable chatbot content, not unlike the Center for Countering Digital Hate (CCDH) claiming that three major chatbots, including ChatGPT, often reply to requests about eating disorder topics with unhealthy responses. Generative AI chatbots involve Large Language Models that are “trained” on massive volumes of text found on the internet. A key part of the training process includes human reviewers ranking responses to further teach the algorithmic system. Bias can come from the choice of training data or from the human reviewers. Then there are “guardrails”, where the chatbot operator instructs the system to do or not do certain things. Those guardrails operate a lot like social media content moderation standards, as the simplified sketch below illustrates. CCDH is complaining about the guardrails. Of course, every aspect of content moderation policy is subjective, with no agreement on what is politically left, right, accurate, or objectionable.
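As a simplified illustration of how a “guardrail” can resemble a content moderation rule, the toy Python filter below (entirely hypothetical, not any vendor’s actual implementation; the topic list and function name are invented for the example) screens a chatbot’s draft response against an operator-defined blocklist and substitutes a refusal when a restricted topic appears.

# A toy, hypothetical guardrail: the operator's policy is just a list of
# restricted topics, and enforcement is a check run on the model's draft reply.
RESTRICTED_TOPICS = ["extreme dieting tips", "dangerous weight loss drugs"]  # operator-chosen

REFUSAL = "I can't help with that topic, but here are some support resources."

def apply_guardrail(draft_response: str) -> str:
    """Pass the draft through unless it touches a restricted topic."""
    lowered = draft_response.lower()
    for topic in RESTRICTED_TOPICS:
        if topic in lowered:
            return REFUSAL
    return draft_response

print(apply_guardrail("Here are some extreme dieting tips ..."))  # blocked
print(apply_guardrail("Here is a balanced meal plan ..."))        # allowed

As with platform moderation standards, everything in such a layer is a policy choice: which topics go on the list, how aggressively matches are detected, and what the refusal says.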

Canadian Leaders Call on Meta to Turn Off News Media Ban for Wildfire News

Report from Reuters

In Brief – In the face of ongoing wildfires and evacuations in the country’s Northwest Territories, top Canadian leaders have called on Meta to reverse its recent ban on posting Canadian news company content. Meta’s decision to block the posting of news media stories in Canada was instituted in response to the recently enacted Online News Act, which allows government arbiters to mandate payments from Meta to Canadian media companies when their content appears on its platforms. Meta has said for months that it would block news media content rather than pay when users, including the media companies themselves, post or share stories. Meta has argued throughout the debate that the media companies gain more value from using Meta’s platforms as free distribution outlets than Meta gets from advertising related to the content. Although the new law will not be implemented until early 2024, Meta began instituting technical changes in early August to prepare for it. Media companies and political leaders have been expressing outrage for weeks, with the wildfire issue only the latest point of contention. Meta has responded by noting that Canadian users can continue to use the platforms to connect with their communities and access reputable wildfire information, including from official government agencies, emergency services, and nongovernmental organizations.

Context – The standoff in Canada is the top flashpoint in the global media company campaign to force Meta and Google to pay government-set rates when “news” appears on their platforms. Both companies created “curated” media programs several years ago to pay media companies in hopes of reducing pressure. It didn’t work. Google has increased its efforts, but Meta has moved in the other direction, arguing that its users don’t value news and that its platforms are moving toward entertainment-style content. Meta is increasingly straightforward about blocking news when forced to pay a government-set rate. Canada is clearly a testbed. It is noteworthy that Google is also threatening to change search in Canada to exclude news.

Google’s Latest Appeal to Keep Texas AG Antitrust Suit in New York Circuit

Report from Reuters

In Brief – Despite a recent string of judicial setbacks, Google continues to try to keep the federal antitrust suit brought by Texas and 16 other states from being transferred from federal court in the Southern District of New York to the Eastern District of Texas. The company’s latest maneuver is an emergency appeal to the US 2nd Circuit Court of Appeals to block the case’s transfer back to Texas. The Texas AG-led, advertising-focused antitrust suit was filed in late 2020. Google tried to transfer it to federal court in California, while the AGs argued for Texas. The federal Judicial Panel on Multidistrict Litigation (JPML) chose to consolidate it in the Southern District of New York with several similar private complaints. In June, the JPML ruled that the case should be reassigned to the Texas-based circuit based on the provisions of the State Antitrust Enforcement Venue Act of 2022, federal legislation enacted by Congress last December that gives State Attorneys General the right to prosecute federal antitrust suits in the district court of their choosing. The JPML ruled that the legislation applies to pending state-led antitrust suits, a decision it confirmed at its meeting in late July. However, the panel stayed the move, allowing Google to appeal to the 2nd Circuit Court of Appeals. If Google is unsuccessful, it will be battling antitrust lawsuits targeting its digital advertising practices in federal courts in New York, Texas, and Virginia.

Context – The Texas-led AG suit from 2020 was joined by a similar DoJ-led suit (that included six more States) in January 2023. Google’s effort to move the DoJ complaint from Virginia to New York was quickly turned down, and the pending remand back to Texas illustrates that while most of last Congress’s Big Tech antitrust fervor came to naught, giving State AGs the ability to pick their federal court venues will impact the evolution of case law, likely by engineering more circuit splits. On the substance, all these cases will increasingly need to address the fact that the overall digital ad market shares of Google, and number two Meta, are falling, with Google’s now below 30 percent.

“Amazon not scared at all,” Says Analyst After Company Imposes New 2% Logistics Fee

Report from Bloomberg

In Brief – Amazon has announced that third-party sellers will be charged a new 2% fee if they fulfill orders themselves as participants in the company’s Seller Fulfilled Prime (SFP) program. The new fee comes amidst reports that Amazon officials have met with Federal Trade Commission leaders as the agency moves toward filing a long-expected federal antitrust complaint. The Amazon practices that have raised the most legal and regulatory concern in recent years, including in major overseas markets, are those that push third-party sellers to buy its Fulfillment By Amazon (FBA) logistics service, which gives Amazon control over seller inventory and costs the sellers high commission fees. Nearly all products that qualify for “Prime” designation on Amazon, which leads to the most sales, are stored, packed, and shipped in Amazon’s FBA system. In 2015, the company created a limited SFP program to allow sellers to purchase alternative logistics services and still qualify for Prime, but new enrollment in SFP was suspended in 2019, pushing all new sellers interested in Prime to use FBA. Amazon announced this summer that SFP was selectively reopening. In the face of FTC scrutiny of Amazon seller fees and logistics practices, Jason Boyce, an online sales advisor, said the new SFP fee “shows Amazon is not scared at all.”

Context – Years ago, Amazon was seen as a low-price online retailer trading profits for market share. That’s out-of-date thinking. Smart analysts now focus on how Amazon operates its marketplace and pushes third-party sellers to use its massive FBA logistics service. Those sales are growing, carry very high fees that often exceed 50%, and deliver high margins. Amazon’s marketplace practices that push sellers into FBA were central to the antitrust settlement between Amazon and the European Commission earlier this year, and the company is trying to wrap up a similar investigation in the UK. The biggest questions surrounding a potential FTC antitrust suit are whether the Marketplace-FBA links will be a focus, whether Amazon will make a settlement offer similar to the one it made in Europe, and whether this new 2% fee will be extended to Europe as well.

Indian High Court Rules That Google Can Face Trademark Claims Over Ad Keywords

Report from TechCrunch

In Brief – A two-judge panel of the Delhi High Court has upheld a single-judge ruling delivered in 2021 that held that the use of registered trademarks as search keywords in Google’s Ads Program violates Indian trademark law. The decision came in a case brought by Agarwal Packers and Movers, a logistics business that argued that the use of its trademark and variations as Google Ads keywords led to a diversion of traffic from its website to those of competitors. The High Court panel also ruled that Google could not fall back on the safe harbor provided to intermediaries by Section 79 of the IT Act, as the search giant had argued in its appeal, because the company was an active participant in the program that offered trademarked keywords.

Context – Google’s search-based ads, a foundation of the company’s business since its earliest years, allow advertisers to bid to have their short text ads appear on the Google search results page when a user searches for particular words or phrases, called “keywords”. Allowing advertisers to bid on trademarked terms held by competitors has been a controversial issue for many years. Initially, Google prohibited competitors’ trademarks from being used within an ad’s text and allowed trademark owners to ask Google to block their trademarks from being used as keywords in ad auctions. Google changed that policy in 2009, focusing solely on prohibiting trademark violations in the ad texts themselves. Google argues that allowing competitors to bid on a trademarked term in a keyword auction does not amount to a trademark violation because the consumer does not directly encounter the third party’s registered trademark on the search results page, and that consumers are able to discern when an ad is not from the trademark holder. Google also argues that consumers benefit from competition in online advertising. Google announced its latest global policies on trademarks in the Ads Program in June, which include continuing to generally allow the use of trademarks as keywords, although in the EU the company will review the combination of a trademarked keyword and related ad text to determine potential confusion.

Center for Countering Digital Hate Says AI Services Promote Eating Disorders

Report from The Register

In Brief – The Center for Countering Digital Hate (CCDH), a UK-based advocacy group policing online platforms for objectionable content, has released a report claiming that six popular “Generative AI” services often respond to requests related to eating disorder topics with dangerous and unhealthy responses. The group is calling on the companies to improve their “guardrails” to better protect users. CCDH tested the chatbots ChatGPT from OpenAI, Bard from Google, and My AI from Snapchat, and the text-to-image generators Dall-E from OpenAI, DreamStudio from Stability AI, and Midjourney. They provided each service with 20 prompts about eating disorders. They also tried to circumvent the chatbots’ safety protocols. The chatbots returned objectionable responses 23% of the time, with none coming from Snap’s My AI, but all three were negatively impacted by intentional circumvention, with objectionable responses jumping to 67%. The image generators created objectionable results 32% of the time.

Context – CCDH was recently sued by X, formerly Twitter, over a report on “hate” on the platform. Other recent targets include “climate denial” on YouTube and “anti-choice” ads on Google. It’s all content they object to. Their AI chatbot claims offer an opportunity to distinguish between two very different concerns. To back up, Generative AI systems based on Large Language Models are creative writing systems built to turn out text that sounds realistic, based on material found on the internet. And like the people who created all that material, they can create bad stuff, especially since “bad” is such a subjective concept. Advocates like CCDH want the companies involved to intervene in the systems and block certain topics or certain answers. The models each already do this to some degree, and groups, including some sponsored by governments, are testing how the restrictions work. Of course, given the subjectivity of objectionable content, speech interventions like those create controversies, like charges that OpenAI intervenes to make ChatGPT “left leaning”. Then there are the “hallucinations”, or realistic-sounding fabrications made by the AI system, which are basically a design feature rather than mistakes.

Internet Archive Agrees to Settlement in Federal Court Loss to Speed Appeal

Report from Reuters

In Brief – A coalition of major book publishers and the Internet Archive (IA), a non-profit led by digital innovator Brewster Kahle and best known for the Wayback Machine, have agreed on a negotiated settlement that accepts the key findings of Federal Judge John Koeltl’s March ruling that the IA was engaged in copyright infringement through its “controlled digital lending” activities. The agreement speeds the process for the IA to appeal that decision and puts off financial penalties until after the appeal is completed. The proposed settlement bars the IA from lending unauthorized scans of in-copyright, commercially available books, or assisting others in those endeavors. Controlled digital lending involves a book owner, often a library, making a digital scan of a book and then lending out the digital copy, one person at a time, in the same manner that a library lends out the physical copy of a book it owns. IA claims that controlled digital lending is legally acceptable based on the copyright law concepts of fair use and the first sale doctrine. Publishers argue that scanning paper books into digital copies for lending is not legal, and that they produce and sell access to digital versions of books separately from physical books, including granting special licenses to libraries for digital books.

Context – Platform Economy Insights readers following the debates around Artificial Intelligence, especially Generative AI tools like chatbots, know that intellectual property, copyright, and “fair use” are a part of the debate, as many copyright holders claim that the creators of “Large Language Models” violate copyright when they “train” their models on copyrighted works and should pay them a license fee. The LLM creators generally argue that training is a fair use, that creating the generative AI system is transformative, and that the systems don’t create copies in the end. This litigation pitting publishers against IA is a reminder that copyright fights are an internet policy staple, including a federal court win for Google in 2015 allowing it to scan books for indexing and providing snippets (but not lending).

Google Argues Match Not Setting Aside Enough While Violating Payment Policies

Report from TechCrunch

In Brief – In a preliminary skirmish ahead of the upcoming federal antitrust trial pitting Google against app developers Epic Games and Match Group, Google has accused Match of setting aside insufficient funds in escrow while the two sides battle over Google’s Play Store and in-app payments policies. While Epic Games is the highest-profile app developer alleging that Google’s app store policies violate antitrust laws, Match Group joined Epic’s suit against Google claiming the company charges exorbitant fees. In May 2022, Google agreed that Match would be able to process in-app payments outside the Google payments service in violation of Google policies, but that Match would place $40 million in escrow to cover the prospective commission payments should Google prevail at trial. Google now alleges that $40 million falls far short of the fees that Google would be due because Match has earned more than expected. Google is asking the court to require Match to place a total of $84 million into escrow. The jury trial is currently scheduled to begin on November 6.

Context – App developers have been campaigning for years for governments to regulate Apple and Google fees. Apple has been more in the spotlight, particularly in the US, where the antitrust complaint brought by Epic went to trial very quickly. Apple won on the federal antitrust charges. However, Apple lost a California state law-based anti-steering ruling that it is appealing to the US Supreme Court. A big difference is that, unlike the Apple case, which was argued before a judge, the Google trial will be in front of a jury. While Android has always been a more “open” system than Apple’s “walled garden”, it’s not clear how consumers or a jury will understand it. Finally, as governments and regulators globally have pressured both giants to allow alternative in-app payments options, both Apple and Google have rolled out plans that allow payments choices but retain their general fee levels, which leaves many app developers unhappy. And before the Android payments trial starts, Google faces the US Department of Justice in a huge antitrust trial over its search practices beginning in September.

Canada Moving Forward on Digital Services Tax Despite US Opposition

Report from Wall Street Journal

In Brief – Canada is moving forward with its 3% Digital Services Tax (DST) on large social media, ecommerce, and digital advertising companies starting January 2024 despite strong opposition and trade sanction threats from the US Government. Canada’s plan runs contrary to the recent agreement of 138 countries at the Organization for Economic Cooperation and Development (OECD) to delay national DSTs until 2025 in hopes of finalizing the stalled OECD global corporate tax reform effort. Canada, which first proposed its DST in 2020, delayed it under strong objection from the US Government, which argues that DSTs discriminate against US-based tech companies. Several European countries, as well as others including India and Turkey, proceeded with national DSTs at that time, with many going into effect in 2021. EU countries and India reached agreements with the US in 2021 to rescind their DSTs if the OECD plan went into effect by January 2024, refunding to the companies any DST payments that exceeded what the OECD plan would have required. Canada opposed the recent OECD agreement, arguing that it is unfair that some countries can continue to collect DSTs while it cannot.

Context – “Pillar 1” of the two-part OECD tax reform deal would replace national DSTs with a tax imposed on 100 or so highly profitable consumer-facing businesses, including large digital platforms, allowing countries to impose a new tax on companies that largely operate from other countries. “Pillar 2” aims to undermine tax havens by having countries agree to tax multinational companies at a rate of at least 15 percent. In 2020, the Trump Administration delayed the implementation of national DSTs, including Canada’s, by aggressively threatening trade retaliation. The Biden Administration changed the global tax reform dynamic by focusing US attention on Pillar 2. However, the overall effort has hit hurdles in the US, in Europe, with some developing countries, and now with Canada. The showdown over Canada’s DST is happening at the same time Canada is also battling Google and Meta over new payments to its media companies.

Justice Kagan Maintains Anti-Steering Stay While Apple Appeals to Supreme Court

Report from Bloomberg

In Brief – Supreme Court Justice Elena Kagan has rejected Epic Games’ emergency request to overturn the decision of the 9th Circuit Court of Appeals to stay Federal District Court Judge Yvonne Gonzalez Rogers’s anti-steering order that would allow app developers to offer iPhone users alternative payments options for in-app purchases. Judge Gonzalez Rogers’s 2021 ruling in the landmark Epic v Apple trial was a split decision, determining that Apple did not violate federal antitrust law, was not an illegal monopoly, and could justify its payments policies based on user safety and security, but ordering those payments policies to be opened to alternatives based on California Unfair Competition Law. Epic appealed the bulk of Judge Gonzalez Rogers’s ruling, while Apple appealed the anti-steering order, arguing it overturned the core antitrust ruling. A three-judge panel of the Federal Ninth Circuit Court of Appeals affirmed the full 2021 ruling but stayed implementation of the anti-steering order pending the US Supreme Court’s decision to accept or reject Apple’s appeal to the High Court.

Context – Despite mustering an impressive amici coalition for its appeal, Epic Games mostly lost again at the 9th Circuit. The appeals panel ruled that Apple’s justification for its walled garden model is strong, that consumers are aware of its restrictions, and that Epic did not provide an alternative model that met those needs and compensated Apple for its investments. With US judges proving resistant to political winds, the digital regulation and antitrust regimes in the US and EU are moving far apart. The EU DMA comes online in less than a year and will likely regulate the big app stores. New in-app payments systems created by Apple and Google that allow alternatives but collect the bulk of their commissions will only become more prevalent and important. And it will be interesting to see how Epic’s federal trial challenging Google, kicking off in November, plays out. While Android has always been more open than Apple’s iOS, it’s less clear how consumers (or in this case, a jury) understand it.

Canadian Media Companies Appeal for Antitrust Relief Against Meta

Report from Reuters

In Brief – Canadian media companies are calling on the Competition Bureau of Canada, the country’s antitrust authority, to force Meta to carry, and therefore pay for, Canadian news media content on Facebook and Instagram. The move came in response to Meta’s decision to block the posting or sharing of news media stories created by Canadian media companies in response to the country’s recently enacted Online News Act. Meta had said for months that it would do so if the company was mandated to pay media companies when users, including media companies themselves, post or share stories. Meta insists that media companies gain more value from using its platforms as free distribution outlets than Meta gets from advertising related to the content itself. Google has also said it would make changes to its core services in Canada, threatening to drop Canadian media stories from basic search results. Canadian Government and media reactions have been very critical of both companies for refusing to carry Canadian media stories and pay fees set by the government.

Context – Canada is the latest hotspot in the global effort by news media companies to extract payments from digital platforms, especially Google and Meta, when their “news” content appears on the platforms. The appeal to antitrust regulators harkens back to the media payments fight in France, where Google threatened in 2020 to drop news “snippets” from search results rather than pay media companies under a new “neighboring rights” copyright law. The French antitrust authority intervened, forcing Google, which had a 90% search market share, to carry snippets and pay. Google backed down. A key question in Canada is whether Meta is dominant in any relevant market given the rise of platforms like TikTok. Meta claims its users don’t particularly want news content, preferring entertainment content. In France, news service AFP is now suing X (formerly Twitter) to force it to pay neighboring rights fees (joining Google and Meta) when news appears on that platform. If X is not adjudged a dominant platform in France, it could likely choose to block links instead.

TikTok Changes Algorithm and Ad Policy in Europe in Light of the DSA

Report from TechCrunch

In Brief – With the EU’s landmark Digital Services Act (DSA) coming online August 28th, Chinese-based short-video social media phenom TikTok has announced a series of changes to its policies and service in Europe. The DSA imposes new content moderation rules on digital platforms to deal with a wide range of objectionable online content, such as child sexual abuse material, terrorism, disinformation, and hate speech, prohibits certain advertising practices, and requires data sharing with authorities. The 19 largest platforms, including TikTok, Amazon, Apple, Google, Meta, Microsoft, and X (formerly Twitter), face the most stringent standards. European Commissioner Thierry Breton, who is overseeing the implementation of the new law and has been holding voluntary “stress tests” with many of the largest platforms to help determine their ability to comply, warned TikTok after its test that it had more work to do. The recent set of policy changes by the platform includes allowing users in Europe to “turn off personalization” behind the videos in the “For You” feed, making it easier to report illegal content, banning targeted advertising for users aged 13 to 17, and increasing transparency related to content moderation decisions.

Context – The EU is relentlessly putting in place a digital platform governance framework with wide-ranging rules and mandates enforced by regulators. Stress tests are regulatory exercises most notably used in the financial services sector to try to project how banks and other regulated institutions would handle various economic or business shocks. However, unlike in the long-regulated banking sector, standards and practices for online content moderation by various kinds of platforms are not widely established, so the process is being developed on the fly. Some advocates of free speech and civil liberties are concerned about governments directing platforms on how to police speech, as well as the prospect that platforms could be shut down during times of political unrest. In response, Breton claims that the DSA “is the most progressive rulebook in the world for governing the responsibilities for online platforms.”

Amazon-FTC Meetings Indicate Big FTC Antitrust Case is Finally Coming

Report from Bloomberg

In Brief – Amazon officials are expected to meet with Federal Trade Commission leaders next week in so-called “last rites” discussions held prior to the agency filing a long-expected federal antitrust complaint. The FTC has been led since mid-2021 by progressive antitrust activist Lina Khan, who rose to prominence in 2017 as a law school student accusing Amazon of anticompetitive abuses that she argued were not appropriately addressed by decades of misplaced federal enforcement practices. Amazon, Google, Facebook (now Meta), and Apple were the targets of a lengthy and detailed investigation by the House Judiciary Committee’s Antitrust Subcommittee in 2019 and 2020 that Khan helped staff. Google and Meta are battling major federal antitrust lawsuits filed by federal antitrust enforcers and state attorneys general in 2020, and Apple is under investigation by the Department of Justice.

Context – Ironically, Khan’s law school focus on Amazon may have put the FTC behind the curve. When she wrote her paper, Amazon was seen as a low-price online retailer trading profits for market share. Today, first-party retail is a shrinking part of the business with low margins. Smart thinking focuses on how Amazon operates its marketplace, populated mostly by small third-party sellers, and pushes those sellers to use its massive FBA logistics service. That business is growing with high fees and high margins. Links between Amazon’s marketplace and FBA logistics were part of a major antitrust settlement between Amazon and the European Commission earlier this year, and the company is trying to wrap up a similar investigation in the UK. So, the biggest question is whether the marketplace-logistics links will be the FTC focus, and whether Amazon will make a similar offer to settle. In addition, rather than a 2017 view of Amazon trading low prices for market share, Amazon increasingly faces “price fixing” lawsuits in federal and state courts accusing it of pushing third-party sellers to raise prices elsewhere on the internet to shield the high Amazon marketplace and FBA logistics fees from low-price competition.

Federal Judge Narrows the Upcoming Google Search Antitrust Case

Report from the New York Times

In Brief – Federal Judge Amit Mehta has delivered a split decision on Google’s motion for summary judgment to dismiss the antitrust complaints brought by the Department of Justice and a coalition of State Attorneys General. The DoJ-led lawsuit from 2020 focused on allegedly anticompetitive efforts by Google to prevent alternative search engines from gaining meaningful scale by arranging for Google search to be the default on a range of digital products and services, especially through Google’s business deals with Apple. The complaint from a coalition of 38 States led by Colorado AG Phil Weiser (D) alleged that Google engaged in anticompetitive conduct to harm specialized “vertical” search businesses such as Yelp and Expedia. Most notably, Judge Mehta is allowing government lawyers to move forward with the DoJ’s primary claim against Google. However, similar allegations by the DoJ regarding a range of other Google policies related to Android, Google Assistant, and IoT devices will not proceed to trial. The biggest win for Google was the judge’s rejection of the State AGs’ claim that Google’s practices illegally weakened specialized vertical search businesses, saying that the states’ allegation “relies not on evidence but almost entirely on the opinion and speculation of its expert”. The stage is now set for a September kickoff of the biggest tech monopoly trial brought by the federal government since the Microsoft clash in the 1990s.

Context – If you remember back to 2021, Federal Judge James Boasberg dismissed the initial antitrust complaints targeting Facebook (now Meta) from both the FTC and a coalition of State AGs. The FTC amended its complaint, which later passed muster with Boasberg, and is now headed to trial. On the other hand, the State AGs lost their initial appeal. Here, the State AGs again appear to have suffered a big setback. But the biggest losers appear to be “vertical search” businesses like Yelp that have been at the forefront of the campaign targeting Google’s search abuses. Expect them to be even more focused on the EU, where the new Digital Markets Act may offer some relief as early as 2024.

Senegalese Government Blocks TikTok in Bid to Undermine Protests

Report from Reuters

In Brief – The Senegalese Government has attempted to block access to the popular social media app TikTok as part of a growing internet services crackdown amid the escalating power struggle between President Macky Sall and Ousmane Sonko, the leader of the main opposition party. Protests and demonstrations, sometimes violent, have been happening throughout the year amid charges that the government is trying to disqualify Sonko from running for President in 2024. These developments have raised concerns about Senegal’s reputation as the most stable democracy in West Africa, a region facing a series of military coups in recent years, including most recently in Niger. In announcing the ban, Communications Minister Moussa Bocar Thiam said TikTok was being used to disseminate “hateful and subversive messages.”

Context – On one hand, this is another coming-of-age moment for TikTok, joining established US-based social media platforms like Facebook and Twitter as a top target of politically motivated internet services shutdowns in Africa. It is also an opportunity to remember that the continent has been a hotbed of governmental efforts to restrict online activity for political purposes, ranging from complete internet shutdowns, to restrictions on social media broadly, to targeted actions against specific online services. A 2021 report claimed that 31 of 54 African countries had blocked social media platforms since 2015. Highlights include a seven-month standoff between the Government of Nigeria and Twitter, the Ugandan Government shutting down the entire internet in the country during national elections in 2021 and blocking Facebook for months, and intermittent shutdowns of all major social media platforms in Ethiopia. China’s model of comprehensive digital communications control seems an aspirational goal of authoritarian governments around the world. And France’s President Emmanuel Macron suggested during weeks of anti-government protests earlier this summer that his government might need to be given the ability to block social media access during riots.

X, formerly Twitter, Sues Online Speech Monitoring Advocacy Group

Report from the Washington Post

In Brief – X, formerly known as Twitter, has sued the Center for Countering Digital Hate (CCDH), a UK-based advocacy group that opposes what it considers online hate speech and disinformation, alleging that the organization violated X’s terms of service and engaged in a “scare campaign to drive away advertisers.” The novel lawsuit, filed in federal court in California, claims that the CCDH improperly collected data from the platform, including by scraping, aggressively supports censorship, and misrepresents X’s efforts to combat illegal speech. The company asks the court to block CCDH from accessing X’s platform and to award unspecified financial relief. The CCDH, which is very public in its criticism of the platform’s operations under Musk’s control, is represented by high-profile counsel who responded to X in writing, reiterating that the group would continue to engage in its public interest research and reporting activities, cautioning X about the risks of engaging in frivolous litigation, and demanding that the company retain all documents regarding its content moderation policies and practices, instances of hate speech and disinformation on the platform, and its advertising business and revenues.

Context – This is all deeply ideological. Elon Musk’s robust defenses of “free speech” and criticisms of the prior Twitter regime’s content moderation practices have upset many progressives and delighted many conservatives. Agreement is rare on where to draw the lines on concepts like “hate speech” and “disinformation”, let alone when and how. In the US, the First Amendment is key. It gives X and other platforms great leeway to impose editorial standards in content moderation. It also protects the ability of organizations like CCDH to disseminate their critical reports, short of evidence of some business scheme or conspiracy. What the First Amendment does limit is government entities engaging in the kind of content moderation advocacy undertaken by the CCDH. We are waiting for the US Fifth Circuit Court of Appeals to offer some guidance on those parameters in the coming weeks.

X Appeals Court Loss on Content Moderation Orders in India

Report from Reuters

In Brief – X, formerly known as Twitter, has appealed to the Karnataka High Court in Bangalore to overturn a late-June judicial ruling that found the company had not complied with valid government orders to remove objectionable content and imposed a fine of 5 million rupees. The company’s appeal continues its legal efforts to challenge the parameters of a series of government orders to remove content that it received in mid-2022. At the time, Twitter filed a legal challenge in which the company claimed it complied with valid takedown requests filed under the Indian social media law enacted in 2021 but argued that requests to block user accounts rather than restrict specific posts were overbroad and beyond the scope of the law. That law, which imposed new duties on digital communications platforms with at least 5 million users to cooperate with law enforcement to address unlawful content, including content the government determines affects “the sovereignty and integrity of India”, was followed up with demands for Facebook, Instagram, and Twitter to take down dozens of social media posts, especially criticism of the government’s handling of the pandemic, including from opposition politicians.

Context – Elon Musk’s robust defense of “free speech” and criticism of the prior regime’s content moderation practices have upset many progressives and delighted many conservatives. But more relevant to content moderation policy have been his repeated references to the need for the platform to follow “local laws” regarding legal speech. In the US that is built on the First Amendment, but following local laws means entirely different things in many markets around the world, whether in India and Turkey, where governments criticized as authoritarian ask social media platforms to take down critical posts, or in the EU and Australia, where democratic governments mandate content moderation practices. Given all his talk of complying with local speech laws, as well as Musk’s other business interests in the Indian market, the company’s interest in continuing this legal challenge is noteworthy.

AFP Sues X (formerly Twitter) Under Neighboring Rights Copyright Law

Report from TechCrunch

In Brief – French media company Agence France-Presse (AFP) has sued X, formerly known as Twitter, for failing to work with it to establish a license payment regime for AFP-created content appearing on the platform. AFP is asking a Paris court to intervene and demand that X share information on the number and types of impressions of AFP-created content that appeared on the social media platform and negotiate a payment regime consistent with the “neighboring rights” established by France in 2019. Under the new law, media companies are to be paid when snippets of their stories appear on digital platforms or in Internet search results. X owner Elon Musk posted a reply to the suit saying, “This is bizarre. They want us to pay *them* for traffic to their site where they make advertising revenue and we don’t!?”

Context – This is a new front in the global media company campaign to compel big digital companies to pay them when their content appears on the platforms. Google and Facebook, the two largest digital advertising platforms, have been the primary targets. In Australia, legislation was enacted in 2021 requiring dominant digital ad platforms to negotiate payments to media companies, backed by a threat of government-led arbitration. This forced arbitration model is spreading as far afield as New Zealand and Indonesia, as well as to California and Canada. Canada is front and center, with targets Meta and Google both threatening to block Canadian news from their services rather than be forced to pay for search result links (Google’s objection) or posts from users and the media themselves (Meta’s objection). The issue evolved differently in Europe. After France created “neighboring rights”, Google proposed simply ending news “snippets” in search rather than paying. Media companies objected and the French competition authority intervened, ordering Google, as the dominant search platform, to maintain snippets and pay the companies. After a two-year legal battle, Google agreed to pay a fine and to pay the media companies. But X is not a dominant platform, and so it may just block AFP snippets from its platform as a potential resolution.

Talk of Internet Legislation in the Senate after the August Break

As Senators wrapped up their summer session and prepared to take their August break, a handful of senior leaders touted major digital policy legislation for action this fall. Most notable were two “online child protection” bills passed by the Senate Commerce Committee.

Online Child Privacy and Safety Legislation

Report from the Washington Post

In Brief – The Senate Commerce Committee reported two bills that would dramatically increase federal regulation of how digital platforms deal with young users. Both passed without any votes against. The Kids Online Safety Act, the product of vocal Big Tech critics Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), touts “safety by design”, with digital platforms having to vet their products for risks to children and take steps to address them, including giving parents greater control and allowing users under age 13 to opt out of algorithmic recommendations and other features. The Children and Teens’ Online Privacy Protection Act, led by Sens. Edward J. Markey (D-MA) and Bill Cassidy (R-LA), expands the Children’s Online Privacy Protection Act (COPPA) from age 13 up to age 16, banning companies from collecting the data of users up to 16 years old without their consent and creating an “eraser” button allowing children to remove their data from digital services.

Context – Both of these bills were passed by the Commerce Committee very late in the last Congress, but neither was voted on by the full Senate or considered in the House of Representatives. Yes, “protecting kids” and hitting out at “Big Tech” is popular on both sides of the aisle, but the bills failed to make it across the finish line for real reasons. Most important, there was political risk aversion during the election season in both parties. In addition, there was a legislative standoff between backers of broad-based online privacy legislation and backers of narrower child-focused measures. Champions of the comprehensive legislation feared success on narrow bills would bleed away energy and backing. Finally, broad privacy legislation faced broad opposition, especially from progressive Democrats from California. This time around, the Senate Committee has moved earlier, which generally helps, and a progressive Democrat is no longer the Speaker of the House. But other hurdles remain in place. The most relevant are progressive groups who believe digital platforms are especially valuable for marginalized young people, as well as privacy advocates concerned about the impact of broad-based online age verification.

Focus On – Fall is too soon, but these bills are in play for 2024. One new feature is the impact of states banning younger teens from social media without explicit approval from parents. A handful of Republican-led states have already done so, and federal legislation has been introduced as well. The reaction of federal judges to the state laws will inform the federal debate.

Lightning Round of Other Senate Legislative Talk

The Chair of the key Commerce Committee said she planned to add a new bill to the debate over security risks posed by TikTok, and two of the most polarizing Senators came together with a big plan to create a new federal regulator to govern the largest digital platforms.

Cantwell Plans to Enter Debate Over TikTok and China-Based Apps

Report from the Washington Post

In Brief – Commerce Committee Chair Maria Cantwell (D-WA) has indicated that she plans to step more directly into the legislative debate over TikTok and the potential risks of digital services from companies based in countries seen as national security threats. TikTok, while vociferously disputing charges that it is beholden to the Chinese Government, has offered to make operational changes in the context of a now years-long federal security review, but those offers have not assuaged Biden Administration concerns. Congressional legislation to regulate or outright ban TikTok and other apps, especially from China, is in play, but no proposal has clear momentum. The Senate Commerce Committee would be at the center of any substantive legislative effort.

Context – Cantwell believes the current bills need work. And the most relevant action is not even on Capitol Hill. If the Administration takes more forceful action, especially from the Committee on Foreign Investment in the United States (CFIUS), that’s news. Don’t expect it this fall. But if it does, the real action will be in the federal courts. When the Trump Administration attempted to impose bans on TikTok and WeChat, federal judges responded positively to legal challenges.

Focus On – The lawsuits in federal court challenging Montana’s effort to ban TikTok in the state. They will provide more insight into whether the next shoe to fall comes from CFIUS or congressional action.

Senators Warren and Graham Back a Powerful Regulator for Big Tech

Report from the New York Times

Summary – Senators Elizabeth Warren (D-MA) and Lindsey Graham (R-SC), two of the most strident partisans in the Senate, have teamed up to propose the Digital Consumer Protection Commission Act, which would create a new uber-regulator for online platforms and data processors. The bill covers most of the digital hot buttons, including competition and antitrust, transparency for terms of service and content moderation, privacy, duty of care, limits on targeted advertising, and national security.

Context – Smart money sees the Big Tech antitrust agenda as mostly dead this Congress due to partisan divisions. Even more dead are the prospects for empowering a new federal regulator while the fights over the FTC and CFPB only ramp up.

Focus On – Why some media outlets treat this Warren-Graham bill like a serious thing.

The Canada Trainwreck Over Forced Media Payments is Coming

Report from the Wall Street Journal

In Brief – Meta has begun to block the posting or sharing of news media stories created by Canadian media companies on Facebook and Instagram in response to the country’s recently enacted Online News Act. The move follows months of warnings from the company that it would do so if it were legally mandated to pay media companies when users, including media companies themselves, post or share stories. During a 2021 showdown in Australia over similar legislation, Meta and Google drew lines in the sand. Meta rejects paying for media content posted by users and media companies themselves. Google objects to paying for basic search results and links. Both companies claim that the Canadian law crosses those lines and that they will change their operations in the country. Meta admits that when it temporarily blocked news in Australia, the process was technically flawed and captured other content, and it says it will institute a more technically effective ban in Canada. Google, which has been more reluctant to threaten changes to search, is also threatening to drop Canadian media stories from basic search results in Canada. Canadian Government and media reactions have been very critical.

Context – The standoff in Canada is now the top flashpoint in the global campaign by big media companies and their backers in government to force Google and Meta to pay them when “news” is found, posted, or shared online. The two digital giants initially responded by creating “curated” media content programs to pay media companies in hopes of reducing pressure. It didn’t work. Google has increased its efforts, but Meta has moved in the other direction. Meta argues that its users don’t value news. The company is pushing entertainment-style content and is increasingly straightforward about being willing to block news when it is forced to pay a government-set rate. Along with Canada, similar fights are playing out in New Zealand, California, and more broadly in the US. Meta almost seems to want to block news, with Canada a good test bed. It is also noteworthy when Google threatens to change search, as in Canada. A similar conflict is brewing in Indonesia.

Open-Source AI Companies Offer Suggestions to Improve the EU AI Act

Report from VentureBeat

In Brief – A collection of small companies that develop AI services based on open-source foundation models has shared recommendations with EU policymakers as they work to finalize the AI Act in trilogue negotiations. The companies are concerned that “overbroad obligations” threaten to disadvantage the open AI ecosystem relative to the closed, proprietary AI development pursued by many of the largest corporate AI developers, such as Microsoft, OpenAI, and Google. Their suggestions include clearer definitions of AI components, clarification that hobbyists and researchers working on open-source models are not commercially benefiting from AI, allowances for limited real-world testing of AI projects, and proportional requirements for different foundation models.

Context – The EU is far ahead of other Western governments in efforts to regulate AI. The initial AI Act plan was built on a risk-based regulatory model that polices AI-enabled applications in “high risk” fields rather than the underlying technologies. But the public release of chatbot phenom ChatGPT kicked talk of existential AI threats into high gear, and the European Parliament responded by expanding its version of the AI Act beyond high-risk applications to also regulate “foundation models”. Outside the EU, the push to legislate and regulate is more restrained. The Biden Administration is rallying US-based companies to voluntarily implement “responsible” AI practices, such as helping combat AI-generated fakes with industry-standard digital watermarks and allowing outside experts to test AI system security, capabilities, and vulnerabilities before public release. At the G7, Japan is leading AI policy talks aimed at promoting principles of safe AI development rather than imposing regulation. The UK Government’s AI policy framework criticizes the EU for being overly regulatory. Are EU leaders listening? French President Macron, prompted by concerns from French AI start-up Mistral (which also builds on open-source models), has said that the AI Act might be moving too far in a direction that would undermine prospects for EU-based innovators.

Italian Competition Authority Reaches Deal with Google on Data Portability

Report from Reuters

In Brief – Italy’s competition authority has accepted commitments from Google to improve its user data transfer practices, ending an antitrust complaint that the company undermined the market for data portability services. The regulator opened the investigation last year following a complaint from Italian startup Hoda, which operates a direct marketing service that encourages individuals to transfer their online data to the startup. Hoda says the data is anonymized and then used for commercial purposes such as targeted display advertising, with a meaningful portion of the revenue shared with the user. Hoda accused Google of hindering users’ ability to share their personal data by making its data portability process overly complicated and difficult to use. Google, which says it has long supported the concept of user data portability, proposed a series of changes to make its current data transfer programs easier for third-party companies and Google users, and pledged to make a test version of a new data transfer tool it is developing for release next year available to third-party service providers this October. A Google spokesperson welcomed the regulator’s decision and expressed the company’s support for data portability “in a way that improves user experience while protecting user privacy and security”.

Context – Giving smaller competitors access to large platforms’ data, and giving users greater ability to move between platforms, is receiving serious attention in a growing number of markets. The EU, with the Digital Markets Act (DMA), is furthest along: interoperability and data portability are both among the new law’s requirements for digital “gatekeepers”. Other examples include the recommendations from the Japan Fair Trade Commission in its mobile operating system market study and the data privacy law enacted this spring by the State of Iowa. However, interoperability and data portability are complicated, and there is no consensus on how to implement them, especially given concerns over trade secrets and privacy risks. Both issues were raised at the FTC’s 2020 data portability workshop and by the Biden Administration in regard to the DMA.

Booking Offers Concessions to European Commission to Close Acquisition

Report from the Financial Times

In Brief – Booking.com is offering concessions to the European Commission competition authority to win approval of its $1.83 billion acquisition of Sweden-based “Online Travel Agency” (OTA) Etraveli. The regulator’s preliminary conclusion is that Booking is the dominant hotel OTA in Europe and that the acquisition may strengthen its dominance in that market and reduce competition in other online travel services markets. In 2021, Booking announced its plan to acquire the OTA, which was already powering Booking’s then-nascent flight reservation offering; adding flight reservations was seen as an important expansion of its accommodations-focused platform. The Commission expressed concerns that the deal would increase Booking’s bargaining power over hotels that use its platform, divert demand from cheaper alternative sales channels for accommodations, and make it harder for competing OTAs to build and grow hotel OTA businesses. To address those concerns, Booking is reported to be offering to show users who book flights through Etraveli accommodation offers from other OTA services besides Booking. The companies reportedly hope to resolve the regulator’s objections quickly because Margrethe Vestager, who heads the European Commission’s competition portfolio, may soon leave the Commission to lead the European Investment Bank, which could delay ongoing competition matters.

EU Commission Rejects Meta Settlement Offer to Limit Use of Marketplace Data

Report from Reuters

In Brief – European Commission officials have rejected an offer from Meta to resolve concerns that the social media and advertising giant breached EU antitrust rules by distorting competition in the market for online classified ads to benefit its own Facebook Marketplace service. The company launched Facebook Marketplace in 2016 and quickly integrated it into its massively used core social media platform, and Marketplace has since grown into one of the top classified ads platforms in many markets, including in Europe. The Commission initially flagged two practices as undermining competition: the tying of Facebook Marketplace to the dominant Facebook social network, and the imposition of unfair terms and conditions, including regarding access to user data, on third-party classified ads and marketplace businesses that advertise on Facebook, in ways that benefit Facebook Marketplace in competing with those services.

Context – On one hand, Facebook’s ability to grow Marketplace into a classifieds giant using data from, and links to, its core platform is an example of the platform “self-preferencing” targeted by advocates of more robust Big Tech antitrust regulation. On the other hand, this is another example of the increasingly disjointed competition policy standards of the US, EU, and UK. The UK’s Competition and Markets Authority opened a parallel investigation of Facebook and its Marketplace in 2021, and that regulator recently accepted the very settlement offer from Meta that the EU has now rejected. This is the third time since May that the EU has played the bad cop. That month, the Commission accepted Microsoft’s package of concessions and approved its massive acquisition of Activision Blizzard, a call that appears to have turned the tide on the mega-deal in the face of objections from the CMA and the US FTC. Since then, however, the Commission has objected to Amazon’s acquisition of iRobot (after the CMA cleared it), turned down the Meta Marketplace settlement offer that the CMA accepted, and brought a complaint against Microsoft for tying its Teams service to Office 365.

Vietnam Proposing Even More Aggressive Internet and Social Media Controls

Report from The Diplomat

In Brief – Vietnam’s Ministry of Information and Communications has announced a draft decree on internet services and online information that attempts to further tighten governmental control over online services. Digital service providers, including telecommunications firms, web storage companies, and data centers, will be subject to orders from the Ministry to suspend services for organizations and individuals who violate the country’s cybersecurity laws, including through social media livestreams. The draft decree also requires social media services to verify user accounts through a Vietnamese phone number and to suspend the accounts of anyone who does not comply. The decree is the latest effort by the Communist Party of Vietnam (CPV) to pressure social media networks, including large global platforms, to remove politically sensitive content. Vietnam’s 2018 cybersecurity law requires platforms to take down posts within 24 hours of receiving a government request, and the CPV already requires foreign tech firms to have a representative office in Vietnam and store users’ data locally.

Context – Like much of Southeast Asia, Vietnam, with its large and relatively young population dominated by “digital natives”, is experiencing rapid gains in a wide range of digital services, including e-commerce and “social commerce”. While these trends are helping spur growth and investment, including by major regional and global platforms, national governments are attempting to exercise more control over digital services and the forces they unleash, aiming to balance economic gains with a range of domestic political and social goals. Vietnam, with an online ecosystem rated “Not Free” by Freedom House, is more repressive than many in the region, but it remains an important growth market for many firms. And while social media regulation in places like Vietnam, China, Russia, Turkey, India, and Nigeria is easy to criticize on censorship grounds, Germany, Australia, and France were early adopters of social media content mandates as well, and the EU’s Digital Services Act and the UK Online Safety Bill also dictate digital platform conduct.

Google Starts Widespread Rollout of Privacy Sandbox Digital Ad Tools

Report from the Verge

In Brief – Google has begun the widespread rollout of changes to the Chrome browser dubbed the “Privacy Sandbox”, which are designed to end third-party cookies and replace them with ad targeting tools deemed more respectful of user privacy. The plan to replace third-party cookies in Chrome was first announced by Google in 2019. The company claimed that the huge change to the digital advertising ecosystem was driven by shifting privacy expectations and paralleled cookie policies in other major web browsers, especially Apple’s. However, Google’s digital advertising businesses are far larger than those of the other browser companies, and its proposal to phase out a core digital ad technology on its market-leading browser generated major concerns among ad industry participants that the changes could be used by Google to bolster its market-leading adtech businesses. Google has repeatedly said it would work with the industry to address concerns and reached an agreement with the UK Competition and Markets Authority to have the agency serve as a kind of lead regulator certifying that the new ad targeting tools do not unfairly benefit Google’s ad businesses. The CMA reports that Google intends to quantitatively test the effectiveness of its Privacy Sandbox technologies between Q4 2023 and Q2 2024 and keep the CMA assessment team informed.

Context – “Targeted advertising” creates a policy maelstrom, with crosscurrents between privacy advocates who hate it and a digital ad industry that continues to grow and diversify. Many in the industry sense anticompetitive intentions when giants like Apple and Google make “privacy” changes. Apple has been the chief driver, aggressively criticizing ad-based business models while growing its own ad business, and its 2021 policy change requiring apps to get up-front approval to track users across the web is under investigation by a growing number of competition regulators, including in France, Italy, Germany, and Poland. Expect digital ad technology policies to be a major topic of interest for the EU’s incoming Digital Markets Act regulators.

As Expected, House-Senate Showdown Coming Over Antitrust Funding

Report from the Washington Post

In Brief – Funding for the Biden Administration’s antitrust agencies, the Federal Trade Commission and the Antitrust Division of the US Department of Justice, is increasingly looking like a partisan flashpoint in this year’s budget and appropriations cycle. The agencies are led by high-profile advocates for the kind of aggressive antitrust enforcement championed by the progressive wing of the Democratic Party, with particular emphasis on pushing back against the digital giants. While there was meaningful Republican support for getting tough with “Big Tech” in 2021, when, for example, Lina Khan was confirmed as an FTC Commissioner, that bipartisanship is long past. She and her FTC colleagues have come under fire from Republicans at recent House hearings, and Republican leaders have been raising red flags for months over the FTC’s budget request, which asks for increases to challenge a wide range of tech company practices and mergers. Those objections are likely to manifest in the House, where Republicans hold a slim majority, but the slim Democratic majority in the Senate is another matter altogether.

Context – Funding for the Biden Administration’s antitrust agencies is primed to be more than just another politically tinged appropriations flashpoint, and that is especially true of the FTC. The “independent” consumer protection and antitrust enforcement agency has been a central player in public pushes from the Biden Administration on a range of progressive policy issues, many of which go beyond Big Tech. FTC initiatives on employee “non-compete clauses”, “unfair methods of competition”, merger review guidelines, privacy and data use, and Gig Work platforms, as well as its cooperation with European regulators implementing antitrust and merger policies supported by progressive reformers in the US but rejected by Congress, have created broad-based concerns within the US business community. The FTC has also lost key court challenges and suffered employee discontent. Republicans likely see a battle over FTC funding as a chance to show the business community how they are aligned against aggressive progressive regulators.
