Archive – 15 Sep 2023

May 2023

<------------------------->

Google CEO Commits to Working with EU on AI Rules Before AI Act Is In Place

Report from TechCrunch

In Brief – Google’s CEO has committed to help establish what top European Commission leaders are calling an “AI Pact” of “voluntary” rules and standards for companies and labs developing AI technologies. The commitment came as Sundar Pichai was in Brussels meeting with European Commissioners Thierry Breton and Margrethe Vestager, who direct digital technology policy for the Commission. The EU is five years into its effort to create a comprehensive regulatory framework around AI technologies, which includes its AI Act legislation and related liability law revisions. The European Commission and Council of Member States have both passed versions of the AI Act, and the European Parliament is expected to wrap up its initial work in June. Then the three parties aim to agree on a final legislative position, which advocates hope to complete by year’s end. However, to speed AI oversight, the AI Pact is envisioned to adopt and implement new standards of conduct before the AI Act passes, let alone is fully implemented. In addition, it is described as open to non-European parties.

Context – Breton often says that digital companies need to follow all the rules in Europe, and following his meeting with Pichai he included the AI Act on a list with the GDPR, DSA, and DMA. Both Breton and Vestager have called for “universal rules” governing AI before the AI Act is finalized, and the AI Pact is apparently that effort. The digital giants that appear farthest ahead on AI, or at least “large language model” AI, are Microsoft (with partner OpenAI) and Google, and each appears comfortable with regulatory oversight, although OpenAI perhaps less so. Of course, the two giants are well on their way to operating under unprecedented European regulatory oversight as “gatekeepers” under the DMA and “very large online platforms” under the DSA. In that context, partnering with EU regulators on AI rules, even enforcing them on smaller enterprises, might be a business necessity. This comes just a few weeks after the G7 leaders failed to agree on how to achieve rules for “trustworthy” AI, with Japan, the UK, and the US taking notably less regulatory tacks at this point.

Minnesota’s Democratic Governor Vetoes Rideshare Platform Wage Bill

Report from the New York Times

In Brief – Gov. Tim Walz (D) of Minnesota vetoed legislation to impose a new driver minimum wage on digital ridesharing platforms such as Uber and Lyft. The companies strongly opposed the bill, which passed the state House and Senate by narrow margins, arguing that it would dramatically drive up the price of rides, hurting riders, especially those with lower incomes, and reducing demand in ways that would harm drivers. Uber threatened to shut down its services in Minnesota outside the Minneapolis-St. Paul region on August 1 if the law went into effect, arguing that the pay mandates, with minimums that were even higher outside the Twin Cities region than in the cities, were unworkable. In the letter announcing his veto, the first of his governorship, Walz said that “Ride-share drivers deserve safe working conditions and fair wages,” but that this “is not the right bill to achieve these goals.” The Governor focused on transportation access, especially for lower income and disabled people, and announced that he was establishing a working group of stakeholders, including legislators, drivers, company representatives, and riders, to better address rideshare issues.

Context – First and most importantly, the Minnesota bill on rideshare drivers was not a Gig Worker reclassification bill. It did not propose to turn rideshare drivers into employees of the digital platforms. That organized labor priority has been stalled since late 2020, when California voters overturned worker reclassification law AB 5 with 63% of the vote. Federal bills did not move even when Democrats held slim majorities in both houses of Congress. The Biden Administration Labor Department has proposed a new rule changing the criteria that businesses are advised to use when classifying a worker as an employee or a contractor, but court challenges are expected and past regulatory efforts to change independent contractor models have suffered setbacks. At the state level, Washington State has been breaking ground on compromise ridesharing regulation that foregoes reclassification but expands pay and benefits in ways the large Gig driving platforms support.

FTC Proposes New Regulation of Data by Wellness Apps and Trackers

Report from Fierce Healthcare

In Brief – The Federal Trade Commission (FTC) is proposing to revise the Health Breach Notification Rule to regulate how health-related digital service providers, such as health, wellness, and fitness apps and trackers, handle user data. The current rule dates to 2009 and stipulates that covered entities must disclose to consumers when there have been leaks of unsecured user health data. The agency has claimed since 2021 that it needs to address the boom in digital health and wellness products and services that generate voluminous user data and often fall outside the federal HIPAA Privacy Rule for traditional medical providers. The proposed rule broadly expands the definition of health data beyond traditional medical information like diagnoses and medications to also include data from apps and trackers, as well as “emergent health data”, such as health-related information inferred from things like location data and purchases. Vendors not covered by HIPAA would be required to notify individuals, the FTC, and sometimes the media, when there is a breach of such data. Finally, the proposed rule also expands the meaning of breach beyond unauthorized access, such as leaks and hacks, to include the business use of data, such as for advertising purposes, without the user’s permission.

Context – Led by Lina Khan, the FTC, which currently has just three Democratic commissioners, continues working to expand its regulatory reach, even in the face of increasing Republican and business opposition, as well as court setbacks. Expanded FTC activity on health-related apps and devices began in mid-2021 with a policy statement announcing the agency’s new views on data uses. The agency followed up with enforcement actions against prescription drug discount provider GoodRx, period tracking apps Premom and Flo, and online therapy company BetterHelp. Add this campaign to its efforts on “unfair methods of competition”, employer non-compete clauses, data privacy, and Gig work, and charges that the FTC is working in concert with European antitrust regulators to stifle US tech acquisitions that survive scrutiny in US courts.

President Biden Nominates Anna Gomez as Third Democrat on FCC

Report from the Washington Post

In Brief – President Joe Biden has nominated Anna Gomez, a veteran telecom lawyer with significant government experience, to serve as the third Democratic commissioner on the Federal Communications Commission (FCC). Gomez has served in multiple federal government agencies, including the FCC and the National Telecommunications and Information Administration, and is currently a communications policy adviser at the US State Department. She has also served as a Senate staff member, a senior government relations official for Sprint Nextel, and an attorney in the law firm Latham & Watkins. The Gomez nomination comes more than two months after progressive consumer advocate Gigi Sohn withdrew her nomination after a year-and-a-half of strident opposition from Senate Republicans and segments of the telecommunications industry. If confirmed, Gomez would be the first Latina FCC commissioner in over two decades, and both the Congressional Hispanic Caucus and the National Hispanic Media Coalition recommended her for the agency. Confirming a third Democratic commissioner would break a deadlock of two commissioners from each party that has stretched more than two years, potentially allowing the new Democratic majority to move forward on fractious issues like net neutrality rules.

Context – Gigi Sohn should have won the support of all the Democrats in the Senate. She was highly qualified, and while outspoken on social media on political issues, the same can be said about many Biden Administration nominees who were confirmed. The whole affair illustrated that the major telecom companies still carry significant influence on both sides of the aisle. Given Biden’s support for net neutrality, including in his 2020 campaign, and the level of progressive activism in his federal agencies, it’s hard to believe that Gomez won’t serve in a very similar manner to Sohn. And while some in industry might try to keep the commission locked up 2-2 until 2025 by stalling the latest nomination, that would be a huge longshot. However, I suspect Gomez has not been that active on Twitter. We’ll be certain to hear soon.

G7 Leaders Agree That AI Should Be Trustworthy but Not How To Get There

Report from Reuters

In Brief – The leaders of the Group of Seven (G7) nations discussed the issue of artificial intelligence (AI) at their meeting in Hiroshima, Japan, and called for the development of technical standards to keep AI “trustworthy”. However, the leaders recognized that while the countries’ various approaches to achieving the “common vision and goal of trustworthy AI may vary”, they agreed that their rules for digital technologies like AI should be “in line with our shared democratic values”. The leaders also agreed to create a ministerial-level forum by the end of this year, dubbed the “Hiroshima AI process”, to discuss issues around generative AI systems, such as copyright and disinformation.

Context – ChatGPT has shaken up discussions of AI regulations everywhere, but there is no agreement on details. Back in April, over 28,000 self-identified experts, including big names like Elon Musk and Steve Wozniak, signed an online letter calling for a voluntary six-month AI-training pause backed up by governments stepping in to “institute a moratorium”. Of course, that did not happen. Hopefully the G7 meeting reminds people how democratic governments actually operate. Besides, tech leaders are deeply divided on the state of AI and whether large language models and their neural networks are close to anything like Artificial General Intelligence. If you are interested, read page 93 of this report by Microsoft experts billed as hopeful that ChatGPT may be nearing a major threshold. They summarize how far away the model actually is. If you are keeping score at home, the EU, in its deliberative but plodding manner, is the one major jurisdiction that may regulate a wide range of AI technologies anytime soon. Its AI Act has largely embodied a “risk-based” framework focused on AI applications, but the European Parliament is expanding it to directly regulate general “foundational models” like ChatGPT. The UK has criticized the EU’s model and proposes a lighter touch. Japan, host of the G7 AI forum, is particularly supportive of AI developments. And in the US, the Biden White House released two “Fact Sheets” on AI activities in May alone.

TikTok Sues Montana to Block Statewide Ban

Report from the New York Times

In Brief – As expected, TikTok has filed suit in federal court in Montana to block the state’s unprecedented statewide ban of the app from going into effect on January 1, 2024. The new law makes it illegal to allow the download of the app to someone in the state, imposing penalties of up to $10,000 on entities such as the Apple and Google app stores, or TikTok itself. Five Montana-based TikTok content creators have already filed a federal lawsuit challenging it on First Amendment grounds. TikTok’s lawsuit claims that the state law violates the First Amendment, the Commerce Clause, and is an impermissible bill of attainder that targets TikTok for sanction directly. A company spokesperson said, “We believe our legal challenge will prevail based on an exceedingly strong set of precedents and facts,” while a representative of the State Attorney General said, “We expected legal challenges and are fully prepared to defend the law that helps protect Montanans’ privacy and security.”

Context – Montana’s effort to ban TikTok faces high legal obstacles. When the Trump Administration tried to ban Chinese-based TikTok and WeChat, federal judges were very sympathetic to lawsuits challenging the app bans. TikTok won an injunction, as did a TikTok content creator. WeChat users even won an injunction largely on First Amendment grounds, and WeChat is unquestionably part of the Chinese internet censorship regime, while TikTok insists it is not. A federal ban would face high bars on First Amendment grounds, and the Cold War-era Berman Amendments limit some other federal government legal avenues. And given how much of the justification to block TikTok is based on foreign policy and national security concerns, state laws will be even harder to defend. The now years-long CFIUS review, which could try to unwind the 2017 acquisition that created what is now TikTok, probably remains the biggest threat. If CFIUS orders the company to be broken up, it would kick off an unprecedented legal battle over the limits of the body’s legislative backing in the post-Cold War environment.

EU Telcos Pick 5% of Internet Traffic as Threshold for “Fair Share” Payments Plan

Report from Reuters

In Brief – Two lobbying groups representing 160 telecoms companies in Europe, including Deutsche Telekom, Orange, Telefonica, and Telecom Italia, have submitted a proposal to the European Commission that calls for big tech companies that account for more than 5% of a telecoms provider’s peak average internet traffic to pay new fees to help fund the telecoms’ rollout of 5G and broadband. Europe’s telecom operators have lobbied for years to have the biggest digital platforms, in particular the video streamers, charged some manner of new fees, often referred to as “fair share” payments in a digital “sender pays” model, to help fund network upgrades. The European Commission opened a formal consultation on the matter in February. The new proposal would likely capture Amazon, Apple, Google, Meta, Netflix, TikTok, and possibly Microsoft, and is the first time in years of calling for fair share payments that the telecoms proposed a specific digital platform size threshold.

Context – Telecom companies have long accused large internet platforms of “free riding”, despite every internet user paying for the bandwidth they use. These sender pays proposals have often been part of net neutrality debates because telecom companies often proposed that bandwidth-intensive platforms be permitted to pay more so that their consumer users get better network service, while users of online services that don’t pay extra get worse experiences. The EU fair share campaign claims that the paying platforms would not receive any direct benefits, so it operates more like a targeted tax. The battle lines are forming, with consumer advocates, smaller internet network infrastructure firms, industry regulators, and several Member States expressing concerns with the idea, while top EU Commissioners continue to express sympathy for the EU telecom companies they oversee. Most recently, Germany’s Minister for Transport and Digital Affairs publicly criticized establishing a new sender pays model, saying “The free and open internet is a valuable asset that needs to be protected.”

Speaking of AI, Copyright, and Fair Use – The Supreme Court on Warhol v Goldsmith

Report from the Wall Street Journal

In Brief – The US Supreme Court sided with photographer Lynn Goldsmith and ruled 7-2 that Andy Warhol’s silkscreen portrait of musician Prince, used for a Vanity Fair cover in 2016, violated Goldsmith’s copyright. The decision is seen by some as limiting the copyright “fair use” defense by applying the “purpose and character” factor in a manner that focuses on the intended use of the work created, rather than how different the new work is. The Court’s majority deemed that the Warhol painting had the same “purpose” as the original photographic work, being sold as magazine cover art. The opinion highlighted that copyright owners have the exclusive right to make “derivative works” of the original: “If the new work achieves the same or similar purpose to the original work, and the new use is of a commercial nature, the first fair use factor likely weighs against a finding of fair use, absent another justification for copying.” The dissent focused on how Warhol clearly transformed the original piece into a new work of art, thus changing its character, and criticized the majority for threatening the advance of creativity and art.

Context – Copyright and fair use are important concepts in the rapidly emerging debates surrounding “generative AI systems” like ChatGPT (and similar systems that create images and music) because they are built on “neural networks” that “train” on often unimaginably large bodies of data, such as most of the texts, images, or music on the internet. But the systems generate outputs that are not copies. The training data is often filled with copyrighted works that are freely available on the open web. A recent hearing of the House IP Subcommittee discussed whether including copyrighted works in AI training databases should require some new form of payment, even if viewing the material on the internet is otherwise free, or whether that use is “fair use”. One insight from the Court’s Warhol v Goldsmith decision is that it may prove critically important whether the Court sees the purpose of using copyrighted works as creating a distinct “large language model” database, or whether the purpose is seen as the creation of the system’s outputs, which compete in the market with the unnumbered works that helped build them.

Apple App Tracking “Privacy” Policy Challenged by Italy Competition Regulator

Report from Bloomberg

In Brief – Italy’s antitrust agency has opened a probe into Apple’s highly publicized 2021 policy change that required third-party apps to get explicit approval from iPhone users to collect online browsing data for use in digital advertising. The AGCM announced that it believes the digital giant may have abused its dominant position in the apps market by applying a more restrictive ad-related data policy to third-party app developers than it applies to itself, undermining the quality and detail of data available to third-party apps for both user profiling and measuring ad effectiveness, which are essential to the attractiveness of online advertising offerings. This, the agency believes, may diminish the advertising revenue third-party developers can earn as advertisers increasingly turn to Apple’s ad business. The agency is also concerned that reducing the ability of third-party app developers to successfully advertise on the Apple ecosystem will reduce competition in app development, where Apple is also a major competitor. Apple says that it applies privacy rules “equally to all developers, including Apple”.

Context – “Targeted advertising” creates a policy maelstrom, with crosscurrents between privacy advocates, who want to get rid of it despite clear evidence that it benefits small, specialized businesses, and ad industry participants, who sense anticompetitive intentions when giants like Apple and Google make “privacy” changes. The Italian investigation of Apple’s IDFA change follows similar action in Germany and Poland. Market results appear to bear out claims made by Apple’s ad industry opponents, with major ad platforms like Facebook and Snap seeing big negative impacts on their ad revenues while Apple’s ad services market share has been growing. Google faces similar charges of anticompetitive intent related to its “privacy sandbox” and is working with the UK CMA as a competition policy regulatory partner to sign off on changes. Finally, these issues look set to fall squarely in the lap of EU DMA regulators.

More Republican-Led States Enact State Data Privacy Laws

Report from Mondaq

In Brief – Montana and Tennessee are set to become the eighth and ninth US states to enact comprehensive privacy legislation. The bills continue a trend of state privacy laws generally following the models set forth by Virginia and Connecticut, particularly in terms of having limited enforcement mechanisms. Neither bill contains a private right of action, with enforcement residing solely with each state’s Attorney General’s office. Neither bill includes a provision allowing for AG rulemaking or the creation of a separate privacy enforcement entity. Outside the general template, the Tennessee bill creates a safe harbor for businesses that implement a privacy program that aligns with the National Institute of Standards and Technology (NIST) privacy framework, and the Montana bill requires businesses to recognize opt-out preference signals, such as from internet browsers, by January 2025.

Context – For years, some commentators have argued that chances for US federal privacy legislation would go up as more states enacted their own legislation. At PEI, we’ve been focused on whether class action lawsuits were in the enforcement mix. In that light, state privacy laws are a sign of the progress that can be made when Democrats pull back from insisting on a “private right of action”. Successes include Democrat-controlled states like Virginia, Colorado, and Connecticut in 2021 and 2022, and now the biggest string of wins yet, all in Republican-led states. The state bills all differ somewhat, but compliance complexities are not boiling over. And the class action tsunami from Illinois’ biometric privacy law is an ongoing warning. Last year, the House Energy & Commerce Committee started to break the deadlock with the bipartisan American Data Privacy and Protection Act (ADPPA). It included a compromise on class actions that went farther than Republicans and business groups had been willing to go in the past, but the bill was stopped by strong opposition from progressives and California Democrats. The ADPPA also now appears to be serving as a tactical counter to teen-focused online privacy and regulation bills.

Meta Fined $1.3 Billion for US Government Digital Surveillance Policies

Report from Bloomberg

In Brief – Meta has been fined a record $1.3 billion for violating the General Data Protection Regulation (GDPR) by processing European customer data in the US. The company is being held responsible based on the top EU court repeatedly ruling that US laws allowing national security agencies to review non-US user data to combat crime and terrorism violate the EU’s privacy law. Meta will appeal the fine and argue that it has complied with bilateral government agreements.

Context – Meta is involved in multiple major legal and regulatory battles in Europe involving the GDPR. One is a challenge to the legal justification it uses to direct targeted ads to its users. That one is squarely aimed at Meta business practices; the platform was fined $414 million by the Irish data protection authority in January and may be forced to drastically reduce its targeted advertising. This is not that case. Here, Meta finds itself stuck in the middle of a decade-long US-EU dispute over “Cross Border Data Flows” following the Snowden revelations and the two “Schrems” decisions by the European Court of Justice. The US Congress has refused to pare back US intelligence authorities, and the US and EU have attempted to address the issue through executive branch agreements. The most recent US-EU deal was reached in March 2022. It will be challenged in European courts, and a majority of the European Parliament recently agreed that it would fall short again. Although thousands of US and EU businesses have regularly transferred European user data to the US throughout the long legal and policy battle, claiming various legal bases, Facebook has been most directly implicated because it was the one targeted in the litigation and for many years processed nearly all its EU user data in the US. Company plans to build enough data centers in Europe to work itself out of the problem have been sidetracked by a wide range of opposition, including from farmers, anti-development activists, environmentalists concerned that too much clean electricity will be used to power data centers, and threats to EU energy sources during the ongoing Russian invasion of Ukraine.

State of Montana Bans TikTok – Lawsuits Incoming

Report from the Wall Street Journal

In Brief – The State of Montana has broken new ground in the legal and policy battle surrounding TikTok, one of the most popular and controversial social media apps in the United States and globally, by banning the app. The new law, which goes into effect on January 1, 2024, makes it illegal to allow the download of the app to someone in the state, imposing penalties of up to $10,000 a day for any entity, such as the Apple and Google app stores or TikTok itself, that makes the app available to residents. Lawsuits challenging its constitutionality are already being filed, including one by five Montana-based content creators who are active on the video platform. If the service is sold to a company that is not based in an “adversarial nation” then the ban would not take effect.

Context – Montana’s effort to ban TikTok likely faces even higher legal obstacles than the Trump Administration did when it attempted to ban TikTok (and Chinese “super app” WeChat) in 2020, but it’s worth reviewing that history. An irony of the Chinese app fight is that advocates for banning them base their concerns in large part on the absence of the rule of law or independent judiciary in China to limit the government’s power to manipulate TikTok’s algorithms or access user data. But Chinese-based companies and their users can turn to independent US courts for protection here. Back in 2020, federal judges were sympathetic to lawsuits challenging the app bans, including one by TikTok itself and one by a TikTok content creator. WeChat users even won an injunction on First Amendment grounds, and WeChat is unquestionably part of the Chinese internet censorship regime. So, to be clear, federal efforts to ban TikTok will face high bars from the First Amendment and the Cold War-era Berman Amendments. And given how much of the justification to block TikTok is based on foreign policy and national security concerns, state action will face still higher bars. The most promising, although legally under-developed, avenue is likely aggressive action by CFIUS to try to force a sale of TikTok’s US business.

Anti-Tech Republicans Join the Call to Keep Big Tech from Trade Talks

Report from Bloomberg

In Brief – A group of congressional Republicans who support antitrust legislation to drastically restrict the operations of the largest tech platforms and expand federal regulation of digital services are calling on the Biden Administration to stop cooperating with that industry on trade talks. Led by anti-tech maven Sen. Josh Hawley (R-MO), freshman Sen. JD Vance (R-OH), and Rep. Ken Buck (R-CO), who was the top House Republican champion for last year’s tech-focused antitrust bills, the conservatives argue that tech giants should not influence the US Government discussions on the Indo-Pacific Economic Framework for Prosperity (IPEF). They specifically oppose any effort to use the talks to limit the ability of other countries to enact antitrust legislation regulating the US-based companies, as well as any attempt to use the trade talks to bind the United States to competition policies that Congress may change.

Context – Advocates for new digital regulations keep complaining that impacted companies lobby against bills that they oppose. Shocking, yes? Add trade talks to lobbying on federal social media bills that undermine Sec. 230 and encryption, state legislation setting online age limits and mandating parental access to teenage accounts, and the Big Tech antitrust legislation. Bill supporters claim popular mandates and decry “lobbying”. But with nine big US tech companies accounting for a quarter of the S&P 500 market cap and half the portfolio of retail investors, might there be sincere wariness about overturning industries? The IPEF intervention of the anti-Big Tech conservatives follows similar entreaties from progressive public interest groups and like-minded Democrats led by Sen. Elizabeth Warren (D-MA). Tech policy fights spilling into trade debates is not a new phenomenon. Historically, there has been huge overlap between the anti-Big Tech and anti-global trade movements on the left and the right. At this point, advocates for major digital regulation in the US are clearly frustrated with a lack of domestic success and are applauding aggressive actions in foreign markets on “gatekeepers”, acquisitions, online content moderation, privacy, app stores, and AI. They do not want pushback based on US laws and policy.

Federal Judge Dismisses FTC Suit Targeting Data Broker for Geolocation Data Practices

Report from the New York Times

In Brief – The Federal Trade Commission (FTC) suffered another setback in federal court in its campaign to expand digital regulation as US District Court Judge B. Lynn Winmill dismissed the agency’s lawsuit against Kochava, a major geolocation data broker. The mobile analytics firm collects location data from about 35 million mobile devices and uses it to help online marketers target and measure ad campaigns. In the complaint filed last August, the FTC argued that the geolocation data could be used to track people’s visits to a wide range of sensitive locations and was an unwarranted intrusion into their private lives that was likely to cause substantial injury. The judge agreed that the sale of location data could be an intrusion on consumer privacy, but he dismissed the agency’s complaint for failing to show evidence that the sale or use of the data caused any actual substantial injuries, instead describing the agency’s claims as theoretical. The ruling deals a blow to the efforts of the FTC Democrats to expand federal privacy law through regulatory action, but the judge gave the FTC 30 days to amend its complaint and try again.

Context – When Lina Khan was nominated to the FTC she received bipartisan praise, but her tenure as chair has been unsettled inside the agency and on Capitol Hill. In addition, the agency has suffered significant court setbacks, including on efforts to block tech acquisitions. But neither she, nor the agency she leads, appears to be pulling back. Contentious FTC undertakings on “unfair methods of competition”, employer non-compete clauses, data privacy, and Gig work, and claims that the FTC is working in concert with more aggressive European regulators to impose digital policies rejected by the US Congress, are drawing increasingly aggressive criticism from business. Kochava’s CEO says that the FTC approached the company shortly after the Supreme Court’s Dobbs decision, raising questions about abortion-related location data and demanding an open-ended admission of guilt, which is likely to add to concerns about politicization of the agency agenda.

Federal TSA is Testing and Training Airport Facial Recognition System

Report from the CBC

In Brief – The federal Transportation Security Administration (TSA) is training a facial recognition tool that confirms that a person is, in fact, the person who matches their photo ID. A TSA employee still reviews the ID and signs off on the digital facial recognition tool. The trial is running in 16 major US airports. The system is not mandatory, meaning that passengers can choose to opt out, and the TSA says that the person-by-person facial data is not stored or used for future personal identification purposes. Although the TSA defends the program as voluntary and accurate, critics raise concerns about evidence of bias in facial recognition technologies, repercussions for passengers who choose to opt out, and the civil liberties and privacy risks that they believe are inherent in future biometric tracking systems.

Context – AI. AI. AI. And a reminder that AI is far broader than “large language models” and “AI chatbots”. Before ChatGPT captured public, media, and policymaker attention and imagination, people were shocked by Clearview AI, a small, little-known digital company that jumped into the public limelight with an apparently top-tier facial recognition tool, “trained” on images scraped largely from social media sites, that claimed to be able to take a photo and find matches on the internet. Back in 2021, the US Government Accountability Office issued a report finding that 18 of the 24 federal agencies it surveyed were using facial recognition technologies, six used them to generate leads in criminal investigations, and five used them as part of real-time surveillance. Many planned to do more. Clearview AI, which has faced regulatory and litigation challenges in markets globally, was being used by at least four agencies, and many admitted that their employees were asking state or local law enforcement entities that were Clearview AI customers to run pictures through the system and share results. The EU’s AI Act was developed on a “high risk uses” model, which treats real-time biometric surveillance as a sometimes-high-risk use, although law enforcement and national security carveouts have been included.

European Commission to Approve Microsoft-Activision Acquisition Rejected by the UK

Report from CNBC

In Brief – European Commission competition regulators have approved Microsoft’s $69 billion deal to buy giant video game maker Activision Blizzard based on the set of multi-year game licensing deals that the tech giant is offering to game platform competitors. Regulators in three key markets, the US, EU, and UK, have been scrutinizing the deal for more than a year, focused on the prospect that Microsoft could restrict access to top Activision games. In recent months, the EU and UK competition authorities have focused most intently on the nascent cloud gaming business. Microsoft has repeatedly rejected suggestions to sell off major properties like Call of Duty but has instead proposed 10-year licenses to console, subscription, and cloud service providers. These access commitments have apparently resolved the European Commission’s concerns. However, the UK Competition and Markets Authority (CMA) and the US FTC continue to oppose the deal.

Context – The new era of sequential acquisition reviews, especially in the US, EU, and UK, is creating an environment that calls into question established norms of competition policy and the rule of law. Multiple regulators review the same deal and sometimes appear to partner and venue shop in ways that are causing heartburn among business advocates. As the biggest digital acquisition of all time, Microsoft’s bid for Activision Blizzard was always going to face reviews in all three markets (and more). The UK CMA is setting a mark for aggressiveness on tech deals, but the UK is the smallest of the markets, and the EU’s willingness to approve deals with conditions is certainly calling into question the UK’s pitch as a tech investment hub. The US FTC and DoJ are aggressively challenging deals, but their record in court is pretty bad. Lastly, there is almost no bipartisan support in the US Congress for big changes to acquisition standards, so the perception that anti-merger antitrust agencies are working with overseas regulators to torpedo deals that fly under US law could quickly escalate partisan conflict.

Meta Appears to be Very Serious About Cutting Off News in Canada

Report from the CBC

In Brief – Meta officials in Canada continue to publicly discuss plans to block the posting or sharing of news media stories on Facebook and Instagram in the country if Parliament enacts the Online News Act. The bill is modeled after legislation enacted in Australia in 2021. Meta and Google would be directed to reach agreements with large Canadian media companies to pay them when links or posts of their stories appear on their platforms, and absent such deals, face a binding arbitration process led by a Canadian regulator. Both digital giants drew lines in the sand in Australia, with Meta refusing to pay for news media content that was posted on its platforms by users, including by the news media companies themselves. In Australia, Facebook temporarily blocked users from posting or sharing media content, and they say they plan to do the same in Canada. However, the company admits that their technical tools to block covered media pieces in Australia were rushed and not sufficiently accurate, impacting many posts not covered by the draft law. Meta officials say that they are working to have systems in place that will more effectively block covered media in Canada without impacting other posts.

Context – We learned in 1979 that video killed the radio star. Did the internet kill newspapers? Canada claims that more than 450 news outlets have closed since 2008 and one third of Canadian journalism jobs have disappeared. France and Australia have most aggressively come to the aid of media companies, but the desire to force Google and Meta to pay is a global phenomenon. In response, and to reduce political heat, Google and Meta both created curated media services and paid hundreds of millions of dollars to media companies. But Meta’s thinking is changing. They claim news posts are not economically valuable to their platforms. Most social media users don’t much want news. They prefer TikTok-style entertainment. Plus, “news” brings misinformation, angst, and political strife. So, turning off news in the face of a payments mandate might be a real possibility.

UK Good Tech Cop Mode – Online Safety Bill to Pare Back on Company Fraud Liability

Report from Financial Times

In Brief – The UK Government plans to pare back provisions in the Online Safety Bill (OSB) that would require digital platforms to pay financial compensation to victims of scams and frauds that targeted consumers through online ads and messages. The changes are driven by concerns over the negative financial impact on the tech sector and over denting efforts to promote the country as a hub of digital innovation and investment. In parallel with changes to the fraud liability provisions in the OSB, the Sunak Government announced a National Fraud Strategy that aims for greater cooperation between government agencies, law enforcement, and the private sector, including major tech companies pledging to take a more proactive approach to combating digital frauds, including through texts, calls, and online ads.

Context – The OSB began with the UK Government announcing the country would become “the safest place in the world to be online”. In 2017. The resulting Online Harms White Paper was focused on illegal content, especially child sexual material (CSM) and terrorism advocacy. Over years of legislative development, the effort expanded to cover more and more bad content, some illegal, some to be made illegal, and finally a general class of “legal but objectionable” material. That open-ended mandate was roundly criticized as too great a threat to free speech, and last fall the OSB was trimmed back related to adult users. But promises were made to further protect younger users. The current bill threatens jail time for an expanded range of executives if digital companies fail to police CSM, has prompted messaging services including WhatsApp and Signal to threaten to abandon the country if the OSB is enacted without specific protection for end-to-end encryption, and includes online age verification requirements that worry privacy advocates. The back and forth over more or less interventionist digital regulation is truly schizophrenic. Besides the tortuous path of the OSB and DMU bills, the UK is increasingly the place where tech acquisitions approved by the EU are blocked, all while the government calls for a less regulatory AI regime than Brussels.

Open AI CEO to Testify on Capitol Hill About AI Regulation

Report from the Washington Post

In Brief – Sam Altman, the CEO of OpenAI, which created and operates chatbot phenom ChatGPT, will headline a May 16 hearing of the Senate Judiciary Committee’s Subcommittee on Technology on “Rules for Artificial Intelligence”. The emergence of ChatGPT, with its dizzying ability to spit out seemingly accurate responses while including quixotic “hallucinations” to keep humans on their toes, has shaken up discussions of digital regulations just about everywhere, including on Capitol Hill. Many reports have indicated that Altman supports some manner of AI regulation, but a top OpenAI lawyer recently said the company believes industry leaders should develop standards.

Context – Congress is not remotely close to regulating AI. It is such a nebulous concept that even deciding what AI is, is tough. So, what’s the point of the hearing? I asked ChatGPT why Congress holds hearings, and it gave a nice AP US Politics answer regarding “Oversight”, “Legislation”, “Confirmations”, and “Investigations”. Golf clap. But that’s not what’s behind this hearing. As anyone who’s worked on the Hill knows, hearings like this one are designed to provide media and PR opportunities to the Members. ChatGPT is in the news every day. The White House just had a big meeting. The EU is legislating. Altman means guaranteed media. And if he hadn’t agreed to come in and bring all the media coverage with him, think about a hearing with ChatGPT testifying! And on the list of questions more interesting than how people can use AI to mislead people: What of the EU’s AI Act and the merits of application-focused and risk-based regulatory models? And their move to regulate “foundational models”? Why are there no operational self-driving cars? And are they more dangerous than chatbots? Why won’t TV and movie writers work with AI systems? How do small labs and companies make such big breakthroughs? Why does ChatGPT show left-leaning bias on public policy? Does ChatGPT “copy” copyrighted material in its training data? Is Justice Neil Gorsuch right that generative AI systems should not qualify for Sec. 230 liability protection because they create new works?

Google in Brazilian Government Crosshairs for Opposing Fake News Bill

Report from Reuters

In Brief –  The debate over the Brazilian Fake News Law that would hold digital platforms accountable when objectionable content appears online led to a confrontation pitting Justice Minister Flavio Dino against Google over the search giant’s effort to inform users of its view that the bill threatens online speech. Bill 2630 amends the Marco Civil da Internet, which protects digital platforms from liability for third-party content unless they fail to respond to a court order. The government of President Lula da Silva is proposing the change to combat what it calls a rise in harmful online content, radicalization, and misinformation, including in the run-up to the January storming of the capital. Top digital platforms, including Google and Facebook, publicly oppose the change. As part of its efforts to mobilize opposition, Google placed a link on its main Brazilian search page alerting users to the bill and linking to a page criticizing the legislation. Minister Dino threatened Google with fines of one million reais ($198,000) per hour if it did not label the link as an advertisement. The company soon dropped the link altogether. The Supreme Court of Brazil followed up by calling for executives from Google, Meta, and Spotify to testify on their opposition, and the country’s antitrust regulator said it would investigate Google’s and Meta’s campaigns. Although initially scheduled for consideration on May 2, the measure was temporarily pulled from the legislative calendar.

Context – The Marco Civil da Internet was a landmark of open internet legislation. Brazil, now under a progressive populist government, undermining that regime would be an equally important landmark as governments around the world, ranging from authoritarians in Turkey and India to the EU and UK, push to direct online platforms on how to moderate content. The most noteworthy example of websites, big and small, using their home pages to alert users to objectionable legislation came in 2012, when a hundred thousand sites went dark for a day to protest US legislation called SOPA and PIPA that would have imposed new copyright liability on websites. The resulting user reaction helped stall the bills.

French Competition Regulator Orders Meta to Allow Open Access to Ad Verification Data

Report from TechCrunch

In Brief – The French competition authority has ordered Meta to grant Adloox, a French digital advertising verification platform, fair access to its advertising verification data pool, finding that the digital giant may have abused its dominance in refusing prior access. Ad verification companies offer services to online advertisers to measure the effectiveness of advertising campaigns, including how many views online ads receive, detecting fraudulent online traffic, and ensuring their clients’ ads do not appear on websites or pages that harm their brand. The regulator says that Adloox attempted from 2016 to 2022 to gain access to relevant Meta data in the same manner as several Adloox competitors, and that its last request in 2022 went unanswered. The interim measures issued by the regulator give Meta two months to change the criteria it uses to determine businesses that can access its advertising data for verification services as part of Meta’s “viewability” and “brand safety” partnerships, requiring that the new criteria must be transparent, objective, non-discriminatory and proportionate.

Context – Targeted advertising is one of the most contentious topics in digital policy. Many privacy advocates want to get rid of it altogether. They seem to think less effective, more expensive advertising is better, and ignore the benefits of highly targeted advertising to small, specialized businesses. On the other hand, small ad industry players sense anticompetitive intent when giants like Apple, Google, and Meta, who operate major platforms that carry huge advertising businesses, justify actions based on protecting data or “privacy”. For example, the French antitrust authority has been knee-deep in reviewing, and re-reviewing, Apple’s ad data opt-out requirement. Several French digital ad businesses raised concerns when the plan was announced, and the regulator cleared it, but as Apple’s ad business grows, and others don’t, there are reports that the regulator may open a formal investigation. Or maybe these types of conflicts will all end up in front of the DMA regulators in Brussels by 2024.

FTC Aims to Ban Meta from Monetizing Data from Teens

Report from New York Times

In Brief – The Federal Trade Commission (FTC) has formally accused Meta of failing to abide by its 2019 settlement agreement and is moving to impose aggressive new sanctions, including a complete ban on the use of data of users under age 18 to earn revenues. Meta’s 2019 settlement, which included the largest ever privacy-based fine of $5 billion, was itself the result of the company failing to live up to a 2011 FTC settlement. The FTC now alleges that the “independent assessor” tasked with reviewing the company’s privacy compliance identified several weaknesses, with specific examples drawn largely from before 2021. The new demands of the FTC would apply to all Meta businesses, including Facebook, Instagram, and Horizon Worlds, and would bar the company from monetizing data collected on under-18 users even after they turn 18. The new move was approved by a 3-0 vote of the commissioners, which currently only includes Democrats after the resignations of the agency’s two Republicans. A Meta statement called the action “a political stunt” that aimed to “usurp the authority of Congress to set industry-wide standards”, reiterated that it had worked for years with the agency to implement the settlement, and said that it would vigorously fight the action. The process provides the company with 30 days to respond, a final ruling from the commissioners, and then the ability to challenge that decision in federal court.

Context – When Lina Khan, a high-profile progressive critic of Big Tech, was nominated to the FTC in 2021, she received noteworthy bipartisan praise, but her tenure as chair has been unsettled. But neither she, nor the agency she leads, appears to be pulling back. Contentious FTC undertakings on “unfair methods of competition”, employer non-compete clauses, data privacy, Gig work, and acquisition reviews that appear coordinated with more aggressive European regulators even in the face of US court setbacks, are drawing increasingly aggressive criticism from business. Congressional funding for the FTC may become a flashpoint, and Meta is certain to attempt to exploit growing frustrations.

Top White House Leaders Hold Meeting with Top Company Leaders to Show AI Focus

Report from the Washington Post

In Brief – Ten senior Biden Administration officials led by Vice President Harris met with the CEOs of ChatGPT creator OpenAI, Microsoft, Alphabet, and Anthropic to discuss the state of AI development, risks, and possible regulation. President Biden dropped by the meeting to reiterate to the four executives that the Administration views AI as “very, very important” and that “What you’re doing has enormous potential — and enormous danger.” While the Biden Administration has indicated it is interested in some manner of AI regulation or legislation, no specifics were revealed beyond $140 million from the National Science Foundation to further fund national AI research institutions and a forthcoming Office of Management and Budget directive to develop rules on federal government use of AI.

Context – With so much talk about regulating AI, keeping things grounded is key. The most interesting thing about the White House meeting is the short list of corporate attendees: the CEOs of two giants, of OpenAI, which is largely funded by Microsoft, and of Anthropic, a “public benefit corporation” founded by a former OpenAI researcher. That’s it. And AI is such an amorphous, nebulous concept that even deciding what it is, is very difficult. But it’s way more than chatbots. Second, despite the endless talk (proving AI is not needed to produce more words), Congress is not about to regulate general AI technology. FTC Chair and progressive regulatory champion Lina Khan is calling for aggressive regulation, but the ills she cites, and the foreboding of domination by giants, are nothing new. And the reality is that very small labs and companies have proven at least as likely to make AI-related breakthroughs as giants. Clearview AI? A memo from a senior Google AI researcher highlights the fact that algorithmic innovation often comes from small places. That will prove super hard to regulate at the source. While US legislation is very, very unlikely, the EU will likely pass an AI Act later this year. And some even talk up the Chinese being the ones to create AI regulation that catches on globally. But that seems more like a chatbot hallucination.

Workers at Sega America’s HQ Announce Support for a Labor Union

Report from Engadget

In Brief –  Workers at Sega of America’s US headquarters in Irvine, California, have announced that they are forming a union and have asked the company to recognize their organization, which is affiliated with the Communications Workers of America. They claim to cover 144 workers and have filed for a union representation election with the National Labor Relations Board. While it would not be a particularly large union in the context of recent videogame company labor drives, it would be unusually broad, covering workers in departments beyond the Quality Assurance teams.

Context – It only seems like everything is AI (and the End of the World), Social Media Harms (Teens or Democracy), Microsoft-Activision, or Tech Industry Labor Organizing. Videogame companies have been a nexus of tech labor activity. The key question is whether there is a breakthrough happening beyond Quality Assurance workers (“game testers”) who test (play) games in the development stage to find problems. They often work hourly for around $20 per hour. Many are temporary employees. The big videogame company labor “wins” (Raven, Blizzard, Zenimax) have involved game testers, a decidedly different type of worker than software engineers and developers. It would be much bigger news if tech company union drives were gaining ground with higher skilled, better compensated workers. Through that lens, the Sega union appears to extend to workers beyond game testers. Programmers, developers, and engineers are not mentioned, but job categories like “marketing”, “product design”, and “localization” are. Earlier in the year, workers across Proletariat, a small game studio owned by Activision Blizzard, including programmers, designers, and engineers, appeared to be moving to organize. But the effort fizzled out. So, Sega might be different, although it’s not so clearly high-tech workers. And if you are keeping score at home, the biggest labor movement success with coders and other decidedly high skilled digital workers in the US was the contentious battle at the digital division of the New York Times last year.

Federal Appeals Court Reaffirms Dismissal of State AG Antitrust Suit Against Facebook

Report from Bloomberg

In Brief –  A panel of the Federal DC Circuit Court of Appeals has comprehensively upheld the June 2021 decision of US District Court Judge James Boasberg to dismiss the antitrust complaint filed by 48 State Attorneys General in 2020 that alleged Meta (then Facebook) had illegally maintained monopoly power in the social networking market through its acquisitions of Instagram in 2012 and WhatsApp in 2014, as well as through a range of platform policies. Judge Boasberg ruled that the states had waited too long to challenge the acquisitions and that the Facebook policies they had cited were likely not illegal under federal antitrust law. The State AGs appealed, but the appeals court reaffirmed Boasberg’s opinion, calling it “comprehensive and well-reasoned”. They backed up his ruling on the statute of limitations related to the acquisitions, the determination that the platform policy allegations failed to state a cause of action, and that the States’ “exclusive dealing” theory failed as a matter of law, as app developers were free to develop applications for competitors. A similar complaint filed by the Federal Trade Commission was also heard by Judge Boasberg and initially dismissed as well. But the FTC complaint was later restored in an amended form by the judge as the statute of limitations does not apply to the federal agency.

Context – When Judge Boasberg dismissed the FTC and State AG cases against Facebook in June 2021, it came out of left field. It was also a strong affirmation of the independence of US Federal Courts. With the FTC complaint, he ruled their case was weakly developed and ordered them to come back with some evidence. They did. It is now proceeding, and like in many antitrust cases, defining the market Facebook operates in will prove key. If it is what the FTC calls “personal social networking,” which appears to be social media services designed like Facebook and not like TikTok or YouTube, then Facebook is dominant and the case flows from there. Meta argues that they are not remotely dominant in the online competition for user attention and advertiser dollars. The rise of TikTok seems to back that up.

Senators Introduce a Bipartisan Social Media Age Verification and Regulation Bill

Report from CNN

In Brief – A bipartisan bill has been introduced in the US Senate that would require large social media platforms to verify users’ ages, stop anyone under 13 years of age from signing up, and set up special site rules for minors over 13, such as barring algorithmic recommendations. The “Protecting Kids on Social Media Act” would also require the US Department of Commerce to design and test a nationwide age verification system that would use official records and IDs to check people’s ages online. The sponsors of the bill, who claim the measure is needed to allow parents to better control children’s online activities, argue that teenagers need to be restricted from unfettered online activity to prevent alleged harms including social media addiction and a range of mental health ills that are increasing among children and teens.

Context – The politics of “protecting” teenagers from online harms, including social media use, is unquestionably good. US States with undivided Republican control seem especially receptive, or at least quick off the mark, with Arkansas and Utah already enacting strict statutes requiring social media platforms to get parental approval for teens under age 18 setting up social media accounts. Porn sites are unsurprisingly also a target. But California, France, and the UK are also prime examples where far more progressive governments are pushing to create a separate, more regulated version of the internet for teenagers. If there is a “splinternet” coming to Western countries, this is what it looks most like. We don’t have the space here to critique too deeply, but super-smart analyst Mike Masnick does here, here, and here. Internet-wide age verification worries most privacy advocates, both for its effect on teens and on adults. Then there are serious concerns of advocates for at-risk youth who warn of their inability to access online services and support. Finally, actual studies of teen use of social media show it is not generally harmful and is often a positive force. Of course, in the US, there will be constitutional challenges to age-based online restrictions, including this one.

Microsoft Aims to Resolve Slack’s EU Antitrust Complaint About Bundling of Teams

Report from Financial Times

In Brief – To avoid an official antitrust probe by the European Commission, Microsoft has reportedly agreed to stop requiring European customers of its popular Office software to have the Microsoft Teams video conferencing and messaging app automatically installed on their devices. Recent reports indicated that the Commission was planning to open a formal antitrust investigation based on a 2020 antitrust complaint from Slack, a leading chat-style remote collaboration platform, alleging that Microsoft unfairly bundles its Teams product with its market-leading Office 365 software package. As part of that process, the Commission had informed Teams rivals and customers of some of the data and evidence it holds and intends to use in making its case. The new offer from Microsoft would allow customers who buy Office 365 to do so with or without Teams, but the mechanism for doing this remains unclear.

Context – Despite being the second most valuable company in the world, trailing just Apple, Microsoft largely avoided the early years of the antitrust “techlash”. However, the company’s acquisition of video game giant Activision Blizzard has sparked scrutiny from competition enforcers globally. Besides engaging in a full-throated defense of the deal, Microsoft has also pursued a global good behavior campaign aimed at progressive regulators, especially in the EU, US, and UK. It has included acquiescing to unprecedented labor organizing, support for major antitrust reforms and app store regulation, and software license changes coordinated with the European Commission to promote European cloud services providers. To win over the EU on the Activision acquisition, Microsoft has reached a string of 10-year game licensing deals with cloud gaming firms, which are currently being “market tested” by the EU. However, in the new world of sequential regulatory reviews, the UK CMA appears to be rejecting those types of “behavioral” remedies and is going the route of just saying no.

Google Faces NLRB Battle Over Contract Workers Claiming to be YouTube Employees

Report from Bloomberg

In Brief –  A group of fifty employees of staffing agency Cognizant Technology Solutions, who work as contractors for YouTube Music in Texas, have voted to unionize through the Alphabet Workers Union. The workers contend that they are also employees of Google, a claim that both Cognizant and Google reject. However, a regional director of the National Labor Relations Board (NLRB), which has certified the election, has agreed with the workers, and deemed Google a “joint employer” that is obligated to negotiate with them. Alphabet, Google’s parent company, is appealing the NLRB ruling and has repeatedly denied that it employs the sub-contracted staff.  “We have no objection to these Cognizant workers electing to form a union,” said a Google spokesperson. “We simply do not control their employment terms or working conditions – this matter is between the workers and their employer, Cognizant.”

Context –  The topic of unions organizing in Big Tech companies increasingly requires filters to track different developments. A key one is to segregate out organizing by “non-tech” workers. Videogame “game testers”, Tesla “image analysts” who check how automobile camera software labels images, Apple retail store employees, and Amazon distribution center workers, all tend to be hourly employees, not programmers or developers. The biggest union success with coders is still likely at the New York Times of all places. The Cognizant (and maybe Google) employees checking video labeling and doing customer service appear to fit the “non-tech” mold. The appropriate classification of contractor workers is another category. We’ve seen this before with Google. In 2019, 80 employees of contractor firm HCL Technologies, who work alongside Google employees in Pittsburgh, voted to unionize. They signed their first contract, with HCL, in 2021. Finally, there is the Alphabet Workers Union. Formed in 2021, it’s been a non-traditional “minority union” that did not negotiate contracts but instead facilitated worker activism. It now claims over 1400 members, including at contractors and vendors.

European Parliament to Amend Their AI Act to Regulate Generative AI

Report from Euractiv

In Brief – The European Parliament has reached a deal on its version of the AI Act and is expected to enact it in June, allowing the EU’s three governing institutions to attempt to hammer out a final version by year’s end. The bloc is now five years into its effort to establish a comprehensive legal regime to govern Artificial Intelligence (AI). The European Commission and Council of Member States have produced similar versions built around a risk-based framework that is focused on AI applications rather than general models of AI technology. The most stringent regulations apply to “high risk” uses that threaten fundamental rights. However, the Parliament’s consideration coincided with the explosive popularity of ChatGPT, causing a major rethink on the regulatory treatment of “generative AI” and “foundation models” upon which some AI applications are built. Parliamentary negotiations resulted in changes that include stricter obligations for foundation models, a requirement that generative AI tools disclose the use of copyrighted material in their training data and results, and an expanded definition of high-risk applications.

Context –  The emergence of ChatGPT, with its dizzying ability to spit out seemingly accurate responses while including quixotic “hallucinations” to keep humans on their toes, has shaken up discussions of digital regulations just about everywhere. The EU AI Act shows “No plan survives first contact with the enemy.” When the Italian data protection authority banned ChatGPT (since lifted), EU Commissioner Margrethe Vestager said she still supported regulating AI uses, not policing underlying technology. She now applauds the Parliament’s changes, sees a final version wrapped up by the end of the year, and was part of a G7 discussion of AI regulation that saw the leaders agreeing on the importance of “trustworthy AI” without knowing how to get it. And while there are calls for global AI summits and government-mandated research pauses, keep in mind that AI is not really definable and uncounted tools, tweaks, and applications are being incrementally created in university and company research labs around the world. And check your work when using ChatGPT.

Another Encryption Fight as EARN IT Act Backers Hope Third Time is a Charm

Report from Engadget

In Brief –  For the third Congress running, senior members of the Senate Judiciary Committee have introduced the “EARN IT Act”, legislation to push digital platforms to aggressively block user-generated content linked to child sexual abuse. The bill again threatens platforms with the loss of Sec. 230 immunity against civil lawsuits and opens websites up to lawsuits under state online child protection laws. The measure creates a national commission to prevent online child sexual exploitation with authority to determine “best practices” for internet websites and apps to address the problem. Since its initial crafting in 2020, the bill has faced charges that it will undermine the ability of websites and technology providers to offer users strong encryption that protects their online communications from outside monitoring, rallying a diverse coalition of privacy, civil society, cybersecurity, and technology company advocates against it.

Context –  Very similar debates pitting secure end-to-end encryption against efforts to combat repugnant child sexual abuse are underway in the US, UK, and EU. In the UK, where combatting online child sexual abuse has been the lead justification of the Online Safety Bill (OSB) since its earliest days, privacy advocates see the current bill as a threat to strong encryption. Like the proposed national commission in the US EARN IT Act, the OSB empowers Ofcom, the UK communications regulator, to set standards and practices for digital communications platforms, which critics argue could include requiring the ability to monitor user communications, a de facto ban on strong end-to-end encryption. Platforms including WhatsApp and Signal have said they would abandon operations in the UK before doing that. The European Commission (EC) has proposed legislation to require online services to more aggressively screen, identify, and remove CSAM. The measure would establish an EU Centre on Child Sexual Abuse to work with law enforcement and platforms, as well as allow Member States to request that courts order companies to implement systems that can detect abusive content on their platforms.

Google Payments Plan in India Hits More Troubles

Report from Medianama

In Brief –  The Madras High Court has imposed an injunction on Google to block the company from removing the apps of Matrimony.com, an Indian online matchmaking platform, from the Google Play Store for refusing to adopt the new Google in-app payments regime that was implemented to comply with an antitrust ruling of the Competition Commission of India (CCI). One of the remedies ordered by the CCI was for Google to allow developers to use alternative payments processors. Google’s response is to allow developers to choose from vetted non-Google payments providers, and when they do, Google’s fees of 15-to-30% are reduced by four percentage points. Matrimony.com argues that the remaining fees of 11% and 26% are still unfairly high, a position shared by many Indian developers, and it is trying to use a payments provider without paying the reduced fees to Google. Google’s effort to enforce its new rules has been suspended by the court pending further judicial action.

Context –  The number one takeaway from the individual skirmishes in the years-long global battle over “in-app payments” pitting Google and Apple against app developers is that the developers are not really interested in payments processing. Matrimony.com objects to Google’s underlying fees. To Apple and Google, meanwhile, payments are just the best way to collect those fees. But their ability to mandate easy fee collection is ending, and both will need to accept payments processing alternatives. They will also defend their right to charge app developers commissions. South Korea and India, two markets where Android has massive market shares, have been driving the change for Google. The company argues that payments processing is a stand-alone service worth 3 to 4%. They are not wrong. They also argue that the rest of their commission is in line with fee levels on a wide range of digital markets. Again, correct. But, of course, developers would prefer to pay less. Google’s payments offer is in front of the UK CMA right now. And we are intrigued to see which country tries to impose public utility-style price regulation on app store fees first. Will it be India? The EU with the upcoming DMA regime?
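The fee arithmetic behind this dispute is simple enough to lay out explicitly. A minimal sketch, with hypothetical names (nothing here is drawn from Google's actual billing code; the tier names and function are illustrative only):

```python
# Hypothetical sketch of the commission arithmetic described above.
# Google's 15% and 30% commission tiers drop by 4 percentage points
# when a developer opts to use a vetted alternative payments processor.
STANDARD_RATES = {"small_business": 0.15, "standard": 0.30}  # illustrative tier names
ALT_BILLING_DISCOUNT = 0.04  # reduction for using a non-Google processor

def effective_commission(tier: str, uses_alt_processor: bool) -> float:
    """Return Google's commission on a sale, as a fraction of the price."""
    rate = STANDARD_RATES[tier]
    if uses_alt_processor:
        rate -= ALT_BILLING_DISCOUNT
    return round(rate, 2)

# The 15% and 30% tiers fall to 11% and 26% -- the figures Matrimony.com
# argues are still unfairly high, since the developer then also pays its
# third-party processor separately (a service Google values at roughly 3-4%).
print(effective_commission("small_business", True))   # 0.11
print(effective_commission("standard", True))         # 0.26
```

The point of the sketch is that opting out of Google's payments processing does not opt a developer out of most of the commission, which is why the fight is really about the underlying fee, not the processor.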
