Archive – 2021-22

News & Insights

December 2022

Big Tech Bills Fail in Lame Duck but Narrow Antitrust Bills Get Across Finish Line

Report from The Hill

In Brief –  All of the major anti-Big Tech bills failed to be enacted in the Lame Duck session that brought the 117th Congress to a close. However, three narrow measures, one allowing state attorneys general to keep their federal antitrust lawsuits in the federal circuit court they choose, the second increasing corporate merger filing fees for large transactions, and the third requiring merging parties to disclose any economic support they receive from foreign “entities of concern”, were included in the final “Omnibus” government funding bill. (See H.R. 3843 for text.) The Big Tech measures that failed to make the cut included headliners such as the American Innovation and Choice Online Act, prohibiting the giant platforms from preferencing their own products and services; the Open App Markets Act, regulating the Apple and Google app stores; the Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act, regulating how digital platforms serve internet users between ages 13 and 17; and a bill to force Google and Meta to pay media companies when their news content appears on the two platforms.

Context –  Once the EU wrapped up the DMA and DSA, the biggest policy question left in 2022 was whether Congress would rewrite the rules governing the tech giants in the US. As we expected, the answer ended up being No. While some progressives argued that the narrow bills were a major accomplishment, they were more of a parting gift. Looking forward, a few claim the big bills will be back on track in the 118th Congress, but most see the divided Congress as a major barrier due to the massive partisan divide over platform content moderation. The venue bill will likely prove the more impactful of the enacted measures. Under current law, antitrust enforcement actions filed by the US Government cannot be transferred between circuits by the US Judicial Panel on Multidistrict Litigation, which consolidates complex cases to promote judicial efficiency, but State AGs have not enjoyed that benefit. Going forward, AGs can forum shop, meaning their digital platform antitrust suits will play out less often in the Northern District of California and the Southern District of New York.

EU Quietly Drops Investigation of Google-Meta “Jedi Blue” Adtech Deal

Report from EU Law Live

In Brief –  The European Commission has dropped the antitrust investigation opened last March into “Jedi Blue”, a 2018 business agreement between Google and Facebook that came to light through the December 2020 federal antitrust complaint brought against Google by a coalition of US State Attorneys General led by Texas AG Ken Paxton (R). It was alleged that Google gave Facebook special terms and access to its tools allocating advertising space across the Internet in return for Facebook abandoning a rival advertising technology called “Header Bidding” that some argued threatened Google’s online ad services. The companies aggressively disputed the allegations. A note in the European Commission’s December 12 Daily News stated that “following a careful assessment of all relevant evidence,” it had “concluded that the evidence did not confirm its initial concerns and has therefore decided to close its investigation.” The Commission’s decision follows a similar ruling from a US Federal District Court judge that allowed the multi-state antitrust lawsuit against Google spearheaded by Texas to proceed, but dismissed the claims related to Jedi Blue because the states failed to show that the agreement harmed competition. The probe by the UK’s Competition and Markets Authority remains open.

Context –  Digital technology revolutionized the ad industry by delivering more effective targeting and more quantifiable returns. Google was the first digital adtech giant and remains the leading services provider, while Meta (then Facebook) emerged as the second digital ad giant, leading in display advertising. Both have been subject to digital ad industry competition investigations in the US, UK, EU, Australia, and Japan. As those investigations have proceeded, a key development is the shrinking share that Google and Meta hold in the digital advertising market, as Amazon, TikTok, Microsoft and Apple have grown, with the combined share of Google and Meta appearing to have peaked in 2017 and falling to 48.4% in the US market and 49.5% internationally in 2022.

Australia’s AGs Propose Defamation Law Reform to Relieve Media Comment Board Liability

In Brief –  Australia’s State Council of Attorneys General has thrown its support behind amending defamation law to clarify that media companies are not liable for third-party comments left on the comment boards of the companies’ social media pages. Last year, the High Court of Australia ruled that news media companies that post stories on Facebook are liable for defamatory user comments in the same manner as if the newspaper had published them in traditional Letters to the Editor. The AGs are proposing an innocent dissemination defense available to a person running a news social media page, shielding them from liability until they receive a written complaint about allegedly defamatory third-party comments. They would then have seven days to remove the material.

Context –  Australia and the EU are breaking ground in the application of defamation laws to social media and internet search. The AU High Court’s ruling on Facebook comment boards led many sites to shut down social media comments. Former PM Scott Morrison, a vocal Big Tech critic, proposed legislation shifting liability to the social media sites themselves for comments from anonymous “online trolls” if the platform refused to turn the user’s identity over to a court. That social media anonymity bill failed to clear the AU parliament before Morrison was defeated in May. On search, the State of Victoria court of appeals ruled in 2020 that Google was liable for including links in search results to an online article that had been ruled defamatory; however, that decision was overturned by the AU High Court in August in a ruling stating that the offending article was written and published by independent content producers and Google “does not own or control the internet.” On the other hand, the European Court of Justice has just expanded the “right to be forgotten”, which allows Europeans to require search engines to delist web pages that include outdated personal information, to also require Google to keep pages with “manifestly inaccurate” information from search results when asked.

Tech Trade Group Sues to Block California Law to Regulate Online Services to Teens

Report from New York Times

In Brief –  NetChoice, a trade group that includes Amazon, Google, Meta, Pinterest, and TikTok, has sued California to block implementation of a law requiring many online services to restrict services to users under age 18, including shielding them from potentially harmful content and turning off features such as video auto-play and friend-finder tools that could be accessed by adult strangers. The California Age-Appropriate Design Code Act, based on website rules outlined in Britain’s Age-Appropriate Design Code in 2020, was enacted in September with unanimous votes in the state legislature. In Britain, many social media and video game platforms, including companies that are members of NetChoice, began complying with the UK rules in 2021 by turning on their highest privacy settings for British users under age 18, as well as turning off auto-play. In a legal complaint filed in US District Court in California, NetChoice argued that the California law would require online services to act as content censors, violating constitutional free speech protections and harming minors by hindering their access to free and open online resources. Asked why NetChoice is challenging a law based on rules many of its members are complying with in the UK, the trade group responded that Britain has neither a First Amendment nor a long tradition of protecting free speech online. California’s law is scheduled to take effect in 2024.

Context –  “Protecting kids” has always been the top justification for internet regulation. California’s law slots in with regulatory proposals in the UK, EU, and France. An under-18 “Splinternet”, backed up by regular online age verification, is a real possibility, and many civil society groups believe the restrictions would threaten vulnerable teens. Nevertheless, two such bills, passed by the Senate Commerce Committee earlier this year and now being tweaked and pushed for action in the end-of-the-Congress “Lame Duck” session, appear to be the last anti-Big Tech bills standing at year’s end.

Musk-Twitter Disbands Previous Regime’s Trust and Safety Council

Report from Wall Street Journal

In Brief –  Elon Musk-led Twitter has disbanded a Trust and Safety Council of outside experts first established by the company in 2016 to advise the platform on policies intended to address a wide range of objectionable content. The council, made up of volunteers, met with a wide range of civil society groups, think tanks, and tech critics, and would regularly be briefed by Twitter executives developing new company products and policies. Advisory groups in past years included Online Safety and Harassment, Human and Digital Rights, Suicide Prevention and Mental Health, Child Sexual Exploitation, Content Governance, and Dehumanization. Although Musk initially said he would create a content moderation council to advise the company, he later scrapped the idea claiming progressive groups were attacking the business.

Context –  Social media content moderation is a highly ideological and highly partisan issue. Musk made it clear throughout his dealings with Twitter that he disagreed with how he thought the platform handled the issue. Whether one accepts, rejects, or thinks there is a nugget of truth behind claims that Twitter (and other giant platforms) have been biased in their moderation values and practices, it’s no surprise that he’s changing the prior regime’s institutions. Besides the entertainment value of the daily Musk-Twitter happenings, there are some potential substantive public policy impacts. First, the harder Musk-Twitter pushes against the narrative that the giant platforms are all progressive, the less likely Republicans and Democrats are going to align on any social media legislation. Second, governments outside the US, not constrained by the First Amendment, are increasingly willing to tell Twitter and other platforms how to moderate content. Musk has indicated his flavor of Twitter “free speech” is based on enforcing local law, which veers close to (and sometimes fully into) censorship in some places. Twitter seeming to zigzag on content moderation could face serious regulatory challenges in Europe, for example. Finally, how will the US Supreme Court react to Musk-Twitter in its upcoming Sec. 230 cases?

Omegle Judge Continues Sec. 230 Circumvention Based on Negligent Product Design

Report from Bloomberg

In Brief –  Federal District Court Judge Michael Mosman in the US Ninth Circuit has doubled down on his ruling that Omegle, a small chat service that randomly pairs anonymous users with each other to engage in one-off text or video communications, can be sued for liability for sexual abuse suffered by a girl who, starting at age 11, met and communicated with an abuser in his late thirties on the service. In July, Judge Mosman rejected Omegle’s use of Sec. 230 of the Communications Decency Act to protect itself from liability for the abuser’s actions, arguing that liability in the case is based on Omegle’s negligent product design rather than the company’s content moderation decisions. In his latest ruling, he rejected Omegle’s argument that its chat service is not a “product” as defined by product liability law. However, Mosman also indicated he will likely again dismiss the plaintiff’s amended complaint based on the Fight Online Sex Trafficking Act (FOSTA), citing a recent win for Reddit in another FOSTA case.

Context –  Advocates for those who have suffered harm using online services have argued for years that courts have applied Sec. 230 over-broadly. The Snap speed filter case is the most prominent product-liability crack in Sec. 230 and is at the heart of Judge Mosman’s rulings. Cases like Omegle’s point to the prospect that, judge-by-judge, nearly every bad online outcome could be considered avoidable if platforms were simply designed to stop bad outcomes. This line of reasoning is tightly linked to Sec. 230 challenges that are framed as being about “algorithms”, which are basically “the product” for digital platforms ordering and presenting information to users. While federal judges have rejected efforts to circumvent Sec. 230 based on algorithms being different, including TikTok recently winning dismissal of a liability suit arguing its algorithm surfaced “blackout challenge” content to a young girl who killed herself, the Supreme Court itself has accepted a Sec. 230 case that looks squarely at algorithms and liability. Justice Thomas has been very clear about what he thinks.

Indiana’s Attorney General First to File State Lawsuits Targeting TikTok (Expect More)

Report from Washington Post

In Brief –  The Republican Attorney General of Indiana has filed a pair of lawsuits in Indiana state court accusing TikTok of violating Indiana’s consumer protection laws. One complaint claims that the super-popular social media company deceived parents about the amount of sexual and drug-related content accessible to young users and failed to appropriately shield young users. The other complaint argues that TikTok, a wholly owned subsidiary of Chinese digital giant ByteDance, has misled users about the authority of the Chinese Government to access sensitive user data. The two suits are asking for fines of up to $5,000 for every violation. They also ask a state Superior Court to order an end to the allegedly deceptive claims about the company’s data handling, and to order a stop to the marketing of the app to young teens.

Context –  State Attorneys General have been active in the highly politicized legal and regulatory actions targeting the largest digital platforms. The two AG suits in Indiana are the first targeting TikTok, but more are expected. TikTok is the only Chinese digital giant with hundreds of millions of rabid users outside China. Security officials in countries that see themselves in strategic competition with China are raising concerns that the Chinese Government, which is heavily invested in digital surveillance, has the legal authority and political power to gain access to user data held by Chinese companies. TikTok and the Committee on Foreign Investment in the United States have been negotiating for years to address security concerns over potential Chinese access to user data, as well as its ability to engage in political influence operations by impacting TikTok’s recommendation algorithms. A deal putting US-based Oracle in a key intermediary role was thought to be close, but now reportedly has hit roadblocks. A series of leaks from TikTok employees claiming Chinese-based company officials tracked the location of US-based users, could access US user data regardless of where it was stored, and have always had authority over the US-based executives, has only increased the challenges to resolving the issues.

Facebook Oversight Board Shocked That The Company Protected VIP Users

Report from Wall Street Journal

In Brief –  The quasi-independent Facebook Oversight Board, created in 2020 as a Supreme Court-like panel to adjudicate company content moderation decisions, has released a report criticizing Facebook’s “Cross Check” program that reviewed actions taken against high profile users. The fact that the company had a special program, not reported to the public, to review potential content moderation penalties imposed on famous or otherwise popular Facebook users was the subject of the first installment of the Wall Street Journal’s “Facebook Files” based on Frances Haugen’s massive document heist. The documents revealed that the company did not apply its highly automated content moderation practices to famous and otherwise high-profile users in the same manner as it did to most users, instead keeping a running list of users, 5 million at one point, that would receive some manner of relief from the automated systems. Critics charged that these VIPs, including politicians, entertainers, athletes, and other influencers, were allowed to get away with harmful conduct. The Oversight Board claimed that the program went beyond the stated intent of protecting users from erroneous automated content moderation decisions and instead often protected “important” users from facing appropriate penalties to protect company business interests.

Context –  When the Oversight Board, made up of noted academics, journalists, lawyers, and other free speech experts, rules on a few dozen specific content moderation decisions, the company promises to implement the rulings. When the Board speaks on a broad company policy, Facebook will consider but not always abide by the recommendations. In this case, the charges around VIP special treatment never really resonated with government officials who focused on the Journal’s second topic dealing with Instagram and kids. Cross-check was about famous, powerful, or otherwise important people, including in government, getting special treatment. Government officials were shocked like Casablanca’s Louis Renault.

The FTC Sues to Block Massive Microsoft-Activision Gaming Industry Acquisition

Report from Washington Post

In Brief –  Following months of review, the Federal Trade Commission (FTC) has sued to block Microsoft’s massive $69 billion acquisition of video game giant Activision Blizzard. The FTC action follows earlier initial determinations by the UK Competition and Markets Authority and the European Commission Competition Authority that the acquisition threatens competition in a range of videogame-related markets. Despite reports that the FTC would focus on the deal’s labor impacts, the complaint, backed only by the Commission Democrats, raises more traditional antitrust concerns over competition in consoles, game development, and emerging gaming platforms such as subscription services and cloud gaming. Despite Microsoft’s aggressive policymaker outreach campaign and assurances that it would not restrict videogame industry competitors from accessing top Activision games like Call of Duty, the FTC suit calls the company’s promises into question with claims that Microsoft did not follow through on similar assurances related to earlier game developer acquisitions. With the probes in the UK and Europe running to spring 2023, the FTC did not need to go to Federal District Court to try to win an injunction to block the acquisition and instead filed suit in its administrative court.

Context –  Although the top competition regulators in the US, EU and UK all express deep concerns with Big Tech acquisitions, until now they have only challenged deals for small startups, such as the CMA unwinding Meta’s deal for Giphy, the FTC suing Meta to stop it from buying VR-developer Within, and the EU blocking the Illumina-Grail deal. Some critics have called out the activist enforcers for shying away from the big fights. Microsoft’s Activision deal is the biggest digital acquisition of all time, so the FTC has crossed that hurdle. The reaction of US judges is a big question mark. The Biden Administration’s antitrust enforcers admit to wanting to change precedents by trying to block deals where they very well might lose. That goes for both the vertical Microsoft-Activision deal and the Meta-Within challenge.

The EU Right to Be Forgotten Is Expanded to Cover Delisting Inaccurate Information

Report from Reuters

In Brief –  The European Court of Justice (ECJ) has expanded the “right to be forgotten” (RTBF) it created in 2015, which allows Europeans to require search engines to dereference web pages that include outdated but accurate information about themselves, to also include the right to require search engines to dereference pages with “manifestly inaccurate” information of a timelier nature. The decision involved two financial advisors accused of misdeeds by an online fraud-fighting website. The pair demanded that Google stop listing the site’s pages, claiming that the charges were false and defamatory. The German Federal Court of Justice presented three options to the EU’s top court, with the plaintiffs arguing their request alone should be enough, Google calling for the plaintiffs to deal directly with the publishing website, and the German court recommending the plaintiffs get a court order. While the ECJ says that a person’s right to the protection of their private life must be balanced against the right of internet users to access information, it holds that the right to freedom of expression and information gives way where the information is inaccurate. When a search engine is presented with clear evidence of inaccuracies about a person, it should dereference the site. If, after reviewing the evidence, it concludes the evidence is not clear and chooses not to, the person can bring the matter before a judicial authority to obtain an order forcing the dereferencing, and during those proceedings the search engine must warn internet users that the accuracy of the content is being challenged in court.

Context –  “I can’t come to bed, someone is wrong on the Internet!” says a longtime internet meme. Google sees solving that problem as beyond even its reach. The ECJ disagrees. However, back in August, the Australian High Court ruled that Google was not liable for including links in search results to an online article that had been ruled defamatory.

Biden Administration Tells Supreme Court Sec. 230 Does Not Cover Content Recommendations

Report from Bloomberg

In Brief –  The Biden Administration believes that Sec. 230 of the Communications Decency Act, the landmark digital platform liability law that is a foundation of the commercial internet, does not protect digital platforms from liability for harms that can be attributed to the content recommendations of their algorithms. Their arguments are laid out in the Department of Justice’s amicus brief filed with the US Supreme Court in the case of Gonzalez v Google. They stand in stark opposition to a united internet industry. Sec. 230 clearly protects digital platforms from civil liability due to the content of most user-generated material, a view the DoJ claims to back. However, legal advocates pushing platforms to more aggressively police their sites have increasingly shifted to claiming that Sec. 230 does not protect platforms from liability due to how their algorithms handle said content.

Context –  The Supreme Court will hear a pair of cases in the spring challenging the scope of Sec. 230. Both deal with whether social media platforms can be held liable for terrorist attacks. Both involve ISIS-inspired violence. The amicus briefs (available here for Gonzalez v Google and Taamneh v Twitter) offer a great overview of the latest arguments on the meaning, purpose, and effects of Sec. 230. The law, and social media platforms, have been under attack from Democrats and Republicans for years. But congressional action has been stymied because the two sides want opposite changes. Democrats demand that platforms police more. Republicans demand platforms police less. We know where Justice Clarence Thomas stands. He has filed personal amicus brief-like opinions saying Sec. 230 has been interpreted far too broadly. The DoJ brief cites him. So does Sen. Ted Cruz (R-TX). And waiting in the wings are Republican social media laws from Texas and Florida that are likely headed to the High Court. The Justices could decide that platforms can be liable for harms if they don’t effectively restrict objectionable content and then allow states to block platforms from restricting objectionable content.

The European Commission and Amazon Appear to Agree on Antitrust Settlement

Report from Financial Times

In Brief –  Reports increasingly point toward European Commission acceptance of Amazon’s offer to settle a pair of antitrust investigations by making policy changes that the company also hopes will mollify the regulatory team that will oversee the new Digital Markets Act. The two Commission antitrust investigations involve alleged abuse of third-party seller data to unfairly compete as a retailer with those sellers, and unfair preferencing on its marketplace of products sold by third-party sellers who use Amazon’s logistics services. Along with agreeing to wall off third party seller data from its own retail business, the company is proposing to address charges of “Buy Box” discrimination by creating a second “Buy Box” that offers a similar product but with slower, and likely cheaper, delivery. Amazon’s settlement offer has generated opposition from consumer groups which could still delay final approval.

Context –  Amazon is the largest online retailer, the largest ecommerce marketplace, and the largest ecommerce fulfilment center services provider. Unlike true marketplaces, Amazon physically handles the goods supplied by most of its top marketplace “sellers” the way a retailer handles wholesaler goods. For example, unlike other logistics providers, which store an online retailer’s unique products and then fulfil the orders made to that retailer, Amazon holds products for hundreds of thousands of sellers in its massive network of fulfilment centers, often in “commingled bins” holding interchangeable goods from many sellers, and then ships the closest product to a customer, rather than the product supplied by the retailer making the sale. This unique and opaque business model has been roiling the regulatory environment for years. While complaints about Amazon misusing third-party seller data to grow its own retail business have generated interest for years, they increasingly miss the point. Amazon’s core ecommerce business is not its own low-margin retail. So, when the European Commission added the Buy Box and logistics to its case, it improved it. If they settle, it will be interesting to see how the dual-Buy Box regime operates.

European Privacy Board Rules That Meta Can’t Require Targeted Ads on Its Platforms

Report from the Wall Street Journal

In Brief – The European Data Protection Board (EDPB), which includes representatives of the privacy regulators from all EU member states, has ruled that Meta Platforms cannot require users to accept personalized ads based on their activity on the company’s own platforms as a condition of using those services. The board also ruled that the company’s underlying terms of use violated the EU’s General Data Protection Regulations (GDPR) and warranted a significant fine. The EDPB’s rulings do not apply directly to Meta, but instead direct the Irish Data Protection Commission (DPC) to issue rulings to Meta that reflect the decisions. The DPC had largely concurred with Meta’s argument that its advertising practices were justified, but the overall board overturned the Irish decision. The company will be able to appeal the eventual formal decision, potentially delaying its application, but if upheld, it could drastically reduce the ability of digital platforms in Europe to show users ads based even on activity on their own services.

Context – The intersection between data collection, privacy, and advertising is a morass of conflicting interests and user expectations. “Targeted advertising” is one of the most contentious topics in digital policy. Many privacy advocates want to get rid of it altogether. They seem to think less effective, more expensive advertising is better. They also discount the evidence that personalized advertising benefits small, specialized businesses with small advertising budgets. For years, Meta has allowed users to opt out of having data collected from other websites and apps used for ads. But data collected from a user’s activity on Meta’s own sites was a different matter. Apple has contentiously forced app developers to offer users an opt-out of targeted ads, and ad-financed platforms like Meta and Snap have seen meaningful revenue impacts. While the two major EU tech bills enacted earlier this year did not overturn targeted advertising, if its privacy regulators require similar user consent, the business model may be significantly constrained, hurting ad-based platforms and small business advertisers.

Forced Media Payments Scheme Dropped from Defense Bill (But There’s Always New Zealand)

Report from Washington Post

In Brief – After a flurry of reports that a bill to force the largest digital platforms, in particular Google and Meta, to pay news media companies more money when their content appears on the platforms was slated to be added to an end-of-the-year defense bill, the deal fell through. The Journalism Competition & Preservation Act (JCPA), sponsored by a bipartisan collection of anti-Big Tech reformers, is a US version of a government-mandated media payments regime that initially gained traction in Australia and France and is spreading globally as more governments and their domestic media companies look to join the money grab. The measure, which required the digital companies to negotiate compensation deals with media collectives, backed up by federal arbitration under a process where the arbiter would choose one side’s final offer, drew opposition from a range of progressive groups who saw the effort as a threat to internet communications and coerced financial support for elite media companies.

Context – As the 117th Congress races to a close, advocates for a range of different digital policy bills have been engaged in the expected “Lame Duck” scramble. The annual defense authorization bill is generally thought of as a “must pass” bill (which is almost true) and therefore often attracts year-end legislative “riders”. Two Big Tech antitrust bills, and a pair of bills regulating how digital platforms serve users under age 17, have also been top of the Lame Duck list for tech reformers. Despite the talk and hype, Lame Duck strategies for controversial legislation are big longshots, and the final version of the defense bill did not include any of the last-gasp tech proposals. That said, bill advocates can at least look to momentum in Canada and New Zealand for holiday cheer. And it may offer the rest of us an opportunity to see if Meta will follow through on its threat to drop news media sharing from Facebook, which might be a smart business strategy. The company is clearly trying to move toward TikTok-style entertainment content, and “news” brings a huge amount of strife and content moderation trouble.

Ongoing Security Concerns Plaguing TikTok Deal with Federal CFIUS Panel

Report from the Wall Street Journal

In Brief – A long-reported deal between TikTok and the Committee on Foreign Investment in the United States (CFIUS) to address security concerns over the super-popular social media video app appears to be stymied by ongoing uncertainty over the unprecedented influence of the Chinese-owned digital giant. CFIUS, a multi-agency federal government panel with powers to block foreign investments in the US, began investigating TikTok in 2020 as the Trump Administration ratcheted up national security concerns about Chinese-owned tech companies. While President Trump’s executive orders to block TikTok and WeChat were sidetracked by federal courts, CFIUS continued its investigation of TikTok parent company ByteDance’s 2017 acquisition of Musical.ly, an app they merged with TikTok. The tentative agreement, intended to address concerns with Chinese access to sensitive American user data, as well as the potential political influence implications of TikTok’s recommendation algorithms, is said to require the company to store US user data in the United States on Oracle servers, and give Oracle engineers the ability to monitor TikTok’s recommendation algorithms.

Context – Divisions are reported between the government agencies on CFIUS, between senior officials within some agencies, and among security experts on both sides of the political aisle. Regardless of where TikTok data on US users is stored, leaks point to data access by China-based executives and indicate that top US-based TikTok officials who are American actually report to Chinese bosses who pull the ultimate levers and would likely bow to pressure from the Chinese Government in a pinch. When TikTok recently faced reports, again based on leaked documents, that the company’s China-based corporate security team tracked the location of US-based users, TikTok’s carefully worded rebuttal claimed the company does not collect “precise GPS location information from US users” and that “TikTok has never been used to ‘target’ any members of the U.S. government, activists, public figures or journalists”.

Progressive Groups Criticize Biden White House Lobbying Leader for Big Tech Links

Report from The Hill

In Brief – Progressive advocacy groups have called on the Biden White House to recuse Louisa Terrell, the Administration’s top Capitol Hill lobbyist, from the last-ditch Lame Duck campaign to enact two key antitrust bills targeting tech giants. The Revolving Door Project and six other groups pointed to Terrell’s past work as public policy director at Facebook and a media report that she rarely mentioned the antitrust legislation during her meetings on Capitol Hill, instead pivoting conversations to other tech issues, such as data privacy. The two antitrust bills that Big Tech critics are pushing for in the final weeks of the 117th Congress are the American Innovation and Choice Online Act and the Open App Markets Act, bipartisan bills passed by the Senate Judiciary Committee. The first prohibits the five largest platform companies from preferencing their own products and services; the second regulates the Apple and Google app stores. Opposition from the House Republican leadership makes them major longshots next year.

Context – Once the EU wrapped up the DMA and DSA, the biggest policy question left for 2022 was whether the Congress would also rewrite the rules governing the tech giants in the United States. The House and Senate Judiciary Committees both reported out bills. The Senate, with its 60-vote requirement and arcane scheduling rules, has been considered the bigger hurdle, so the anti-Big Tech lobbying team, which includes the Biden Administration, has focused their Lame Duck efforts on the two Senate bills. Despite repeated claims by bill supporters that they “had the votes”, they never got their day on the floor. Lame Duck strategies for controversial legislation are always longshots, especially when at least one House of Congress changes hands. But lobbying campaigns don’t give up easily. A similar effort is underway with two Senate bills regulating how digital platforms serve users under age 17. And reports indicate that a bill forcing Google and Meta to pay media companies when their news content appears on the platforms may be the Big Tech bill that squeezes through during the Lame Duck.

Forcing Google and Facebook to Pay Media Companies May Squeeze into Defense Bill

Report from the Wall Street Journal

In Brief – Federal legislation to force the largest digital platforms, in particular Google and Meta, to pay news companies, both newspapers and broadcasters, more money when their content appears on the platforms, may be added to a defense bill that is likely to be passed in the final weeks of the 117th Congress. The Journalism Competition & Preservation Act (JCPA), sponsored by the Capitol Hill trio spearheading Big Tech antitrust reform — Sen. Amy Klobuchar (D-MN), Rep. David Cicilline (D-RI), and Rep. Ken Buck (R-CO) — is a US version of government-mandated media payments that initially gained traction in Australia and France. The latest version requires the digital platforms to engage in good faith negotiations with the media collectives, and in the event of an impasse, news publishers can demand baseball-style arbitration — an all-or-nothing process where federal arbitrators choose one side’s offer to settle a dispute but cannot consider the free distribution value that media companies receive from having their content on giant platforms. Meta argues that the one-sided process may lead them to drop news content sharing from the platform.

Context – Anti-Big Tech backers have been pursuing Lame Duck strategies for many of their bills. The JCPA benefits from the lobbying might of giant media companies and the fact that every politician has relationships with local media outlets that hope to financially benefit from the bill. Interestingly, some progressive groups continue to oppose the measure, claiming it will undermine the internet and primarily benefit large media companies. A late change exempting giant newspapers carves out just three papers, and even the largest TV and radio companies will benefit. The global media payments train continues to pick up steam even as Google and Facebook operate curated news offerings that pay media companies hundreds of millions. The most interesting prospect would be for Meta to eventually follow through on its threat to simply drop news media sharing from Facebook. The company is clearly trying to move toward TikTok-style entertainment content, and news brings a huge amount of strife and content moderation trouble.

Federal Court Challenge to MD Digital Ads Tax Might Be Mooted by State Court Win

Report from The Daily Record

In Brief – US District Judge Lydia Kay Griggsby will soon decide if a federal court challenge to Maryland’s first-in-the-nation digital advertising tax will continue forward after a Maryland state circuit court judge found the tax law unconstitutional in October. The US Chamber of Commerce and a handful of tech company trade groups filed federal lawsuits challenging the digital advertising tax after it was enacted in early 2021, but the suits were largely shifted to Maryland state court under the authority of the Federal Tax Injunction Act (TIA). However, Judge Griggsby ruled that the law’s “pass-through prohibition”, which bars companies from telling consumers that a portion of the price of their services is attributable to the new tax, was a law governing speech rather than a tax covered by the TIA. In the meantime, State Judge Alison Asti comprehensively struck down Maryland’s tax on multiple grounds, including violating the federal Permanent Internet Tax Freedom Act (PITFA) and the Constitution’s Commerce Clause. Griggsby will now decide if the federal court challenge to the law’s speech provision is mooted by the state court decision.

Context – Governments around the world have been trying hard in recent years to increase taxes on the largest US-based digital companies. A massive corporate tax reform effort centered at the OECD built real momentum but has hit turbulence in the US and Europe. A handful of US States have tried to join the tech giant tax grab. Maryland was first in the nation to act, crafting a tax on digital ad revenues aimed to maximize the hit on Google, Facebook, and Amazon, but bills have been circulating in other states including Massachusetts, Connecticut, and Texas. The most compelling legal argument against the state digital taxes is that they clearly violate PITFA, which explicitly prohibits states from taxing commercial activity on the internet that is not taxed the same way offline. Although state governments often seem to enjoy a home court advantage in state courts, Judge Asti applied PITFA in a way that aligns with its clear reading and purpose.

Japanese Labor Commission Recognizes Uber Eats Delivery Gig Worker Labor Union

Report from the Japan Times

In Brief – The Tokyo Metropolitan Government’s Labor Relations Commission has recognized Uber Eats delivery personnel as workers under Japanese labor law, ordering the company to hold collective negotiations with an Uber Eats worker union. The ruling is the first in Japan on behalf of “Gig” workers attempting to form a union and could set a precedent for other digital labor platforms. The Uber Eats delivery union, formed in October 2019, has been calling for delivery workers to be covered by accident compensation and employment insurance programs. Uber Eats claimed that delivery personnel are customers using their platform, not workers. The commission’s written order argued that the company provided more than a platform for delivery workers but was also involved in the performance of their duties like employees.

Context – The legal and regulatory treatment of Gig workers and platforms in the US and Europe has been diverging for the past few years, and this decision shifts Japan in the direction of the Europeans. In October, the European Commission announced new guidelines to allow Gig workers to engage in collective bargaining without violating EU competition rules. In addition, the European courts continue to support classifying ridesharing and delivery workers as employees, and both Spain and Portugal enacted Gig driver and food delivery worker laws last year. Finally, the European Commission has proposed legislation directing a Europe-wide change on worker classification for platforms that strictly control the performance of workers. Gig worker legislation has been relatively quiet in the US since California voters rejected their state Gig worker law and the issue has been stymied in Congress. However, the Biden Administration’s Department of Labor has announced a new rule proposal changing the criteria that businesses should use when they determine if a person doing work should be classified as an employee or an independent contractor, and the three Democratic FTC commissioners are promising aggressive action against deceptive claims from Gig platform companies.

UK Tech “Watchdog” Group Urges CMA to Challenge Amazon-iRobot

Report from Forbes

In Brief – Foxglove, a UK-based tech watchdog group, is calling on the UK Competition and Markets Authority to fully investigate Amazon’s planned $1.7 billion acquisition of iRobot, the manufacturer of the popular Roomba robot vacuum. The group, which accuses Amazon of a “copy-acquire-kill” strategy, alleges a number of potential anticompetitive impacts: on head-to-head competition in household robots, as Amazon also sells the Astro, a high-end robot for household tasks; on the connected smart-home device market, where Amazon is a strong competitor with its Ring security devices and Alexa digital assistant; and on the retail market generally. An Amazon spokesperson said the company doesn’t have a history of shuttering businesses it has acquired, pointing to Whole Foods and Ring as examples.

Context – Competition regulators in the biggest markets have repeatedly talked up their concerns with Big Tech acquisitions. The UK CMA has followed through more than most, in particular challenging and eventually overturning Meta’s acquisition of GIF-startup platform Giphy. However, some critics of the activist competition enforcers argue they are willing to challenge smaller deals, such as the Giphy deal or the FTC challenging Meta’s effort to buy VR-startup Within, because it is legally and politically safer. Amazon’s iRobot deal is hardly big, especially compared to Microsoft’s $70 billion bid for Activision Blizzard, or even Amazon’s recent $8 billion acquisition of MGM. The FTC has already issued a “second request” on the iRobot deal, as well as on Amazon’s $3.9 billion bid for One Medical. The UK CMA and the European Commission’s Competition Authority are both engaged in deep-dive reviews of the massive Microsoft-Activision deal, and the FTC is reported to be moving toward challenging it as well, which may be taking priority at the agencies at this point.

Musk-Twitter Joins Battle Against Apple Over Advertising and Claims of Censorship

Report from the New York Times

In Brief – Elon Musk has attacked Apple, claiming that the iPhone giant is an opponent of “free speech”, and jumped into the anti-monopoly battle surrounding the company. Progressive advocacy organizations and political leaders have pressured online advertisers to stop advertising on Musk’s Twitter, claiming that offensive content is on the rise as the platform has changed some policies and reduced resources directed at content moderation, which could result in brand advertising appearing alongside such material. The new owner of Twitter claims that Apple, which has historically been one of the top advertisers on Twitter, has bowed to political pressure and dramatically cut back on their advertising spend. Musk also claims that Apple is threatening to pull the Twitter app from the company’s App Store for alleged violations of Apple policies that require apps to moderate content to effectively block a range of objectionable behavior. Republican leaders who have led the conservative efforts to rein in Big Tech quickly joined the fray and called for an antitrust crackdown to reduce Apple’s ability to restrict speech through its App Store policies.

Context – Feuds between the tech giants, including the CEOs, are nothing new. Microsoft v Google has been such an open battle that they publicly agreed to a truce in 2015 and announced an end to that peace in 2021. Apple and Meta have had the most public battle in recent years, including at the CEO level, with very direct criticisms of each company’s business model. Meta and Microsoft have aligned with app developers who decry Apple’s (and Google’s) control over app distribution and fees, including support for highly regulatory legislation. Musk’s criticism of Apple includes railing against its fees, and Epic Games’ CEO, a champion of the anti-Apple (and Google) antitrust campaign, has jumped in. Given the extreme political sensitivity of ideological content moderation, Apple is not going to pull the Twitter app like it pulled the Parler app after the 2021 Capitol riot. On the other hand, advertising spending is different.

UK Government Drops “Legal But Harmful” from Online Safety Bill

Report from the Financial Times

In Brief – The UK Government of Rishi Sunak remains committed to the Online Safety Bill (OSB) requiring digital platforms to police objectionable online content. However, in line with comments made by many Conservative Party leaders over recent months, the legislation will be changed to address concerns that the initial bill threatened “free speech”. Provisions applying to “legal but harmful” content viewed by adults will be removed, and the bill will instead require greater transparency in content moderation policies and enforcement. The bill introduced to Parliament in March was the product of three years of work that was initially framed as targeting terrorism recruitment and child sexual abuse material but became much broader in scope. The OSB will still require internet companies to develop comprehensive risk assessments and remove illegal content, including racism, terrorism advocacy, and online child sexual abuse material. Platforms will face higher standards to police content harmful to young people, including bullying and the encouragement of self-harm, and the bill clarifies age verification processes. While the government claims the bill strongly protects young internet users, some child advocates oppose the new changes.

Context – The UK OSB is relevant in three ways. First, like the EU’s Digital Services Act (DSA) and an expected US Supreme Court battle over US state social media content moderation laws, it sets up the question of whether government can regulate online content moderation without effectively becoming a censor. Second, the UK drive to “protect” Internet users under age 18 is only getting stronger. With the UK, France and California pressing for strict new liability related to younger internet users, an under-18 “Splinternet” is coming into view and could harm some at-risk teens. Finally, although the UK is trying to differentiate itself from the EU on technology industry regulation to promote tech investment and economic growth, the OSB is not meaningfully less regulatory than the EU’s DSA.
