News & Insights

October 2024

California Law on Political Deepfakes Quickly Blocked by Federal Judge

Report from Politico

In Brief – A federal judge has blocked enforcement of a California state law that banned knowingly sharing deceptive election-related deepfakes within 120 days before an election as well as 60 days after. The temporary injunction came just two weeks after the law was signed by Gov. Gavin Newsom (D). The measure was one of three election-related bills passed by the state legislature in August that targeted the use of AI to create misleading content that could impact elections, but AB 2839 was the only one that went into effect before January 1, 2025. It was quickly challenged as an unconstitutionally broad restriction on free speech by a content creator who had posted on X an AI-generated video that was a self-described parody of a Harris for President campaign ad. Although the law technically exempted parodies, Governor Newsom had specifically referenced the post as the kind of content that would be blocked. Judge John Mendez’s injunction strongly criticized the law, describing it as “a blunt tool” that “unconstitutionally stifles the free and unfettered exchange of ideas.” He did exempt a portion of the law that requires verbal disclosure of digitally altered content in audio-only recordings.

Context – Election interference and non-consensual pornography are the most frequently cited harms from AI tools used to create misleading “deepfake” content. Although the biggest AI developers have agreed to identify and label AI-generated images created by their services, so-called “watermarking” is considered of limited value by many because it can be circumvented and because there are readily available AI tools that don’t use the technology. The Director of the US Cybersecurity and Infrastructure Security Agency believes that federal legislation is needed to give “real teeth” to deepfake rules, but she skims over the huge constitutional and political hurdles. On deepfake nudes, the US Senate recently passed a bill that would allow victims of nonconsensual sexually explicit images to sue people who create, share, or receive them, and the UK’s Starmer Government plans to ban “the creation of sexually explicit deepfakes.”

DoJ Informs Judge of Its Thoughts on Google Search Antitrust Remedies

Report from Reuters

In Brief – The US Department of Justice (DoJ) has submitted a high-level framework of remedies in the antitrust case that found that Google acted illegally to maintain its monopoly in online search. The filing to Federal Judge Amit Mehta says that fully remedying the harms “requires not only ending Google’s control of distribution today, but also ensuring Google cannot control the distribution of tomorrow.” Mehta ruled in August that Google held a monopoly in the market for general internet search and engaged in illegal conduct to maintain that search monopoly, especially through large payments to Apple and Samsung to make its search engine the default option on smartphones and browsers. The judge found harm in the related market for the online text ads that appear on search results pages. Despite the judge’s relatively narrow findings of illegality, the DoJ has laid out a broad set of potential remedies hitting Google’s search and advertising businesses, and the company’s accumulation and use of data, saying that remedies for each “could include contract requirements and prohibitions; non-discrimination product requirements; data and interoperability requirements; and structural requirements.” Google responded saying that the government was outlining “radical changes” that “go far beyond the specific legal issues” with proposals that would “risk hurting consumers, businesses, and developers.” The company was especially critical of suggestions to split off Chrome or Android from Google’s search-based business, warning that it would “change their business models, raise the cost of devices, and undermine Android and Google Play in their robust competition with Apple’s iPhone and App Store.”

Context – Google’s search, advertising, and app store platforms are all facing antitrust legal challenges in the US, and all three are now regulated in the EU under the Digital Markets Act. Google continues to argue that it faces major competition in mobile-related markets from Apple, in digital advertising from Amazon and Meta, and in search from TikTok and AI companies, with years of falling market shares in digital advertising and more recently in search.

A Few Digital Platform Economy Cases on the Supreme Court Docket

Report from Reuters

In Brief – As the Supreme Court begins its latest round of cases after a three-month break, the highest profile ones include a few coming out of the high-tech community. Top of the list is Free Speech Coalition v. Paxton involving a 2023 Texas law requiring age verification for users to access pornography sites on the internet. A federal judge ruled in 2023 that the law likely violated the First Amendment, but a divided panel of the Fifth Circuit Court of Appeals rejected that argument, applying a more relaxed form of judicial scrutiny that seems at odds with the High Court’s 2004 decision in Ashcroft v. American Civil Liberties Union that struck down a similar federal law. Two other top cases are tech-adjacent, both involving investor lawsuits alleging securities fraud, one involving Meta and the other Nvidia. Finally, the Supreme Court is expected to be confronted this term with the outcome of TikTok’s legal challenge to the federal law that forces ByteDance to sell its US operation.

Context – The coming Supreme Court term does not include the internet policy headliners of the past two, with no case involving Sec. 230 and platform liability for algorithms like in 2022-23, or last term’s cases involving state laws regulating how digital platforms engage in content moderation and when government encouraging platforms to moderate online content becomes state action that violates the First Amendment. But applying the First Amendment to online platforms remains the big theme. The Texas age verification law is a big deal because, while it’s about porn sites, the number of states passing laws regulating how social media sites deal with minors, which ends up requiring age verification, is big, growing, and bipartisan. Those social media laws are pretty much all getting blocked on First Amendment grounds as lawsuits proceed. Clarity on age verification would quickly work its way into those efforts. However, while the court has been leaning in the direction of a robust First Amendment in the online context, it has also generally shied away from big, clear decisions.

German Competition Authority Says Microsoft Has “Market Significance”

Report from the Wall Street Journal

In Brief – The German Federal Cartel Office (FCO) has announced that Microsoft falls under the legal authority granted to it under Section 19a of the Competition Act to regulate the largest digital platforms when they are deemed to be “of paramount significance for competition across markets”. Microsoft is the fifth US-based digital giant to be so designated, joining Amazon, Apple, Google, and Meta. The German regime authorizing proactive regulation of digital giants by competition regulators, rather than traditional case-by-case antitrust investigations, was a major break with standard antitrust policy when it was enacted in 2021. In announcing that Microsoft was now covered by Sec. 19a, the FCO highlighted that while the company was also regulated by the EU’s Digital Markets Act (DMA), that law only applied to the Windows Operating System and LinkedIn, while the German law empowers the FCO to “stop anticompetitive practices” involving any other Microsoft service.

Context – The German law was a harbinger of the EU’s DMA. Some German officials questioned the DMA model as trying to regulate too many companies while being too limited in the number of actual services covered and the range of anticompetitive conduct the regulator could prohibit (the DMA has 18 Do’s and Don’ts). The FCO is clearly pitching its Sec. 19a authority as addressing the latter concern, that the DMA is too limited in the types of gatekeeper conduct covered. There are seven DMA “gatekeeper” companies. They are the same five US-based giants covered by the German law plus ByteDance and Booking. However, the DMA rules actually apply to 23 designated “core platform services” of the seven companies. Google leads the pack with 8, while ByteDance (TikTok) and Booking have only one each. In the case of Google, its auto-related services are not a covered DMA platform, but the FCO is regulating Google’s conduct involving auto services. Microsoft has faced multiple complaints that it uses highly popular software packages like Office 365 to benefit products like Teams and its cloud services. They may not be in scope for the DMA but they are for the FCO.

More DSA Regulator Questions About Social Media Recommender Algorithms

Report from Euractiv

In Brief – The European Commission has announced that Snapchat, TikTok and YouTube have received requests for information from Digital Services Act (DSA) regulators to explain how their recommender systems are designed and use generative AI to direct content to users. The DSA is the EU’s landmark law directing how digital platforms engage in content moderation, and the Commission directly regulates the largest platforms, which are designated as “VLOPs”. The Commission’s questions focus on how recommender systems may drive users to extreme or harmful content, what controls are in place to allow users to opt out of these systems, and the effectiveness of those controls. All three platforms are being asked about harmful content related to “civic discourse”, such as information about elections, while YouTube and Snap are also asked about protecting youths and mental health, and Snap alone about illicit drugs. The companies have until November 15 to produce documents based on their DSA compliance efforts.

Context – Placing the Commission in charge of regulating the VLOPs, platforms with at least 45 million European users, was a key DSA provision. No more deferring to a single member state regulator in Ireland under the “one stop shop” of the GDPR. VLOPs also face stricter criteria for dealing with objectionable material. The Commission initially designated 19 platforms as VLOPs, later added three popular online pornography sites to the roster, and then ecommerce platforms Temu and Shein. The Commission has already opened full scale DSA investigations of five VLOPs: X, TikTok, Facebook, Instagram, and AliExpress. Meta was not included this time because the questions are already part of its DSA investigation, and TikTok’s open investigation already covers some of these questions as well. While member state DSA regulators got underway months after Commission regulators, Ireland’s DSA agency recently asked 12 major platforms based in Ireland about their reporting processes for illegal content, including mid-size services like Etsy not designated as VLOPs, but several VLOPs as well. Apparently the more regulators the better.

Epic Files Another Antitrust Lawsuit Against Google (and Samsung)

Report from The Hill

In Brief – Epic Games, which defeated Google in an antitrust trial last December, has filed a lawsuit alleging that Google and Samsung are violating federal antitrust law by conspiring to create an “Auto Blocker” feature designed to stop smartphone users from downloading apps from third-party app stores. Google and Samsung both responded to the suit with statements that Google did not request that Samsung design the feature, and both argued that downloads from unknown sources on the internet have long been recognized as a security risk and that giving users the capability to block such downloads by default is pro-consumer. The Auto Blocker blocks a wide range of downloads, including apps that don’t come from Samsung’s Galaxy Store or Google’s Play Store. The default setting on new Samsung phones is on, but users are asked during initial setup if they want it on or off, and can disable it temporarily or permanently later.

Context – Epic Games filed antitrust suits against both Apple and Google in 2020 alleging that their 30% commissions were monopoly rents. Apple largely prevailed in court. Google decisively lost. It struck many observers as odd that the “closed” Apple ecosystem with a larger market share was on the right side of US antitrust law while the more open Android was not. But Apple had a bench trial while Google faced a jury of individuals potentially primed against Big Tech, and Apple’s closed system was straightforward and had long claimed to protect user security while Google’s system was complicated. Judge James Donato has issued his remedy order requiring Google to fully open Android to third-party app stores and prohibiting any Google conduct that incentivizes device makers to undermine those competitors, so evidence to that effect will be key here. There are many moving parts on app store regulation in many global markets, including the European Commission’s DMA regulators determining whether Apple’s and Google’s new EU app store rules comply with the new law, and the federal judge who decided the Epic v Apple case is being asked to make a similar decision to Donato’s on Apple’s US anti-steering compliance plan.

Federal Judge Rejects Amazon Effort to Dismiss FTC’s Antitrust Lawsuit

Report from Bloomberg

In Brief – US District Judge John Chun, who is overseeing a collection of antitrust cases targeting Amazon’s marketplace practices, has rejected the company’s bid to have the Federal Trade Commission’s major complaint dismissed. The judge’s order, which does discard some charges, was sealed for a few days to give the two sides time to ask for redactions. However, on the big question of whether the main federal charges will proceed, the answer is yes. The order permits the FTC to continue prosecuting its claims that the company violated federal antitrust laws, while tossing out some of the claims brought by state attorneys general about alleged breaches of state laws. Chun said that Amazon’s arguments that its marketplace policies are pro-competitive need to be argued at the trial that is currently scheduled for October 2026.

Context – When news of Judge Chun’s mixed ruling broke there were reports that it was a win for Amazon. That was fake news. Mixed results on motions to dismiss are par for the course, especially with complaints involving several allegations and theories. For example, it was huge news in 2021 when a federal judge completely dismissed the FTC’s antitrust complaint alleging that Meta monopolized the social media market by acquiring Instagram and WhatsApp. And even that shock result was overturned when the judge later accepted the FTC’s amended complaint. Partial dismissals are generally not news and mean things move forward. For example, Google was able to get some State AG charges dismissed from the DoJ’s search-related antitrust complaint, but the suit proceeded. Google ended up losing. Sometimes judges do dismiss entire cases. eBay recently had a DoJ complaint tossed. Chun himself recently dismissed an antitrust case targeting Amazon. It was brought by consumers, and the judge ruled that their claims of harm as “indirect purchasers” of Amazon logistics improperly applied federal antitrust law. However, this decision adds to earlier rulings by Chun and a California judge that allow suits to proceed that allege Amazon illegally pushed sellers to raise their prices on other online forums.

Judge Orders Google to Open Android to Competing App Stores in US

Report from VentureBeat

In Brief – Federal Judge James Donato has issued an order requiring Google to open its Android operating system to third-party app stores following last December’s jury verdict that Google illegally monopolized Android app distribution and in-app payments. Donato gives Google eight months to implement changes allowing third-party developers to operate alternative app stores, distribute their app stores within Google’s Play Store, and give their app stores access to all the apps in the Play Store. Developers are also allowed to use their own in-app payments alternatives. Finally, the judge prohibits Google from using any incentives or policies to encourage developers and device manufacturers to preference Google apps, including the Play Store, or undermine third-party apps or app stores. The order applies only to the US market and runs for three years, and Epic and Google will collaborate to appoint a three-person technical committee to handle disputes. Google responded that it would appeal the original verdict, saying it was based on a “flawed finding” that Android was a market in itself and ignores competition between Android and Apple, and that it will ask for the remedy plan, which it claims will cause unintended harm to consumers, developers, and device makers, to be paused while the appeal proceeds.

Context – Epic Games filed antitrust suits against both Apple and Google in 2020 alleging that their 30% commissions were monopoly rents. Apple largely prevailed in court. Google decisively lost. It struck many observers as odd that the “closed” Apple ecosystem with a larger market share was on the right side of US antitrust law while the more open Android was not. But Apple had a bench trial while Google faced a jury of individuals potentially primed against Big Tech, and Apple’s closed system was straightforward and had long claimed to protect user security while Google’s system was complicated. The judge who oversaw the Epic v Apple case is currently being asked to make a similar ruling on Apple’s anti-steering policies, while both companies are facing demands to open up in Europe, Japan, South Korea and other markets.

FTC Crackdown on Deceptive Businesses That Use AI Reveals Partisan Split

Report from Reuters

In Brief – The Federal Trade Commission has announced enforcement action against five companies that it says used artificial intelligence to engage in deceptive or unfair conduct. The five FTC Commissioners unanimously backed four of them, including three companies operating bogus ecommerce schemes and one that boasted “the world’s first robot lawyer”. The fifth action, against Rytr, a company offering a Generative AI writing tool that included a capability to quickly draft consumer product reviews based on a few prompts from the user, divided the commissioners 3-2 along partisan lines. The Democratic commissioners alleged that the review writing tool’s only use is to facilitate massive numbers of deceptive scam reviews, while both Republicans argued that the service clearly had potential legitimate uses and that holding the company responsible for potential wrongdoing by users, especially without evidence of that wrongdoing, was bad AI policy and clearly exceeded the FTC’s legal and regulatory authority.

Context – Only the FTC’s Rytr action warrants a second thought. It reflects a meaningful distinction between the Republicans and Democrats on AI regulation, or at least Progressive Democrats like the current FTC leadership. The debate within Democratic circles is most on display in California, where the state legislature passed SB 1047 in August, a bill that would hold AI companies legally liable if they don’t take regulator-mandated safety measures and their technology is later used by others to cause major harm. Governor Gavin Newsom (D) has been heavily lobbied by supporters and opponents, including tech and AI companies on both sides. The massive AI Executive Order issued by President Biden last year focuses on “soft law” efforts, such as voluntary codes of conduct and safe AI practices, but it also mandates some unprecedented information sharing between AI companies and government agencies and “safety” testing for large AI models. The Trump campaign has called for repealing that AI Executive Order and argues for less restrictive policies to promote AI innovation.

Australian Government Proposes Bill to Control Misleading Information

Report from Government News

In Brief – The Australian government has introduced legislation to combat online misinformation. The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 will give the Australian Communications and Media Authority (ACMA) the authority to require online platforms to have acceptable processes to block online content that the regulator considers harmful misinformation and to regularly report on their practices for dealing with harmful information. The Albanese Government views misinformation as a serious threat to safety, democracy, and the economy, asserting that inaction on the problem is not viable. The most recent bill is the government’s second take, with an earlier draft being pulled back following widespread concern over its impact on free speech. The current measure follows a public consultation on the issue. Under the new bill, if a covered online platform, such as a social media company, does not appropriately deal with problematic content, such as material designated misleading by official fact checking panels, it could face penalties imposed by the ACMA, including fines that can reach 5% of its global revenue. The government says that the bill addresses earlier concerns by requiring the same platforms to also protect freedom of expression.

Context – Under former Prime Minister Scott Morrison, Australia was at the forefront of efforts to aggressively regulate online platforms. His government mandated takedowns of abhorrent images following the 2019 Christchurch massacre, enacted an Online Safety Bill in 2021 to begin regulating objectionable content on social media, instituted a regime to force Google and Facebook to pay big Aussie media companies, and proposed an earlier version of misinformation legislation. Free speech objections from conservative commentators will be even more pronounced for the Albanese Government’s bill. However, this time the EU, with its Digital Services Act, and the UK, with the Online Safety Act, are already well down the path of government regulating online content moderation for various harms.

EU Tasks Experts with Developing AI Act Rules for General Purpose AI

Report from Euractiv

In Brief – The European Commission announced the chairs and co-chairs of the working groups that will develop the first Code of Practice for “general purpose AI systems” like ChatGPT required by the EU’s AI Act, and held the first plenary meeting. The code will flesh out the rules for providers of general-purpose AI models that will apply starting next August. Four working groups will focus on data transparency and copyright, risk identification and assessment, technical risk mitigation, and internal risk management and governance of AI developers. The 13 leaders bring significant technical, legal, social, and scientific expertise, include citizens of the EU, United States, and Canada, and are heavily weighted toward representatives of academic and research institutions. The first online plenary session was on September 30 and three more are planned before a closing plenary in April. Among the chairs are Yoshua Bengio, a professor at the University of Montreal who is often referred to as one of the three ‘godfathers of AI’, who will lead the risk mitigation group, and Marietje Schaake, a former MEP and now a fellow at Stanford’s Cyber Policy Centre, who will lead the governance group.

Context – The AI Act was initially designed as a tiered risk-based regulatory regime for applications using AI technology, not the AI technology itself. The stir created by the release of ChatGPT caused the EU Parliament to change course and include regulation of large “foundation models” regardless of application. This shift was divisive, and EU-based AI innovators pushed for lesser mandates on developers of smaller models. Many business leaders continue to argue that Europe imposing strict rules will drive investment and innovation to countries that hold back on AI regulation. The legislation pushed many specifics to the regulatory process, including rules that will emerge in this code. Some especially divisive issues include the application of copyright law and data privacy to AI model training, the transparency of AI models, and responsibility for harms caused by downstream AI applications and users.

eBay Wins Dismissal of Department of Justice Lawsuit Over Unsafe Products

Report from Reuters

In Brief – US District Judge Orelia Merchant has dismissed a lawsuit filed by the US Department of Justice (DoJ) that accused eBay of unlawfully selling “hundreds of thousands of products” that violated a range of environmental laws. The products were all sold by third-party sellers using the eBay marketplace, with the sellers holding possession of the products. The judge largely based her ruling on Sec. 230 of the Communications Decency Act, rejecting the DoJ arguments that the platform was liable for the actions of its users and that eBay was a seller of the products because it was involved in the exchange of the money, noting instead that eBay never held title to or possessed the products at issue. Merchant added that eBay’s administrative and technical support to sellers “does not materially contribute to the products’ alleged unlawfulness” and does not make the company itself a “publisher or speaker” on sellers’ behalf. eBay said maintaining a “safe and trusted marketplace” was fundamental to its business and it would continue to invest significant resources to prevent prohibited items from being sold on its platform.

Context – This decision reinforcing the long-standing understanding that Sec. 230 liability protection applies to online marketplaces is an obvious win for third-party ecommerce services providers. It’s also noteworthy that Judge Merchant highlighted that eBay was operating as a marketplace platform rather than possessing and handling the products in question. This is a very different scenario from Amazon in a recent ruling by the US Consumer Product Safety Commission (CPSC) that requires the ecommerce giant to abide by retailer-like recall requirements when products are sold on its ecommerce site by third-party sellers and are also physically handled in the company’s fulfillment centers. Those products look more like items provided by wholesalers in the traditional retail context rather than independent retailers selling on a true third-party marketplace. The CPSC, the FDA, and several product liability lawsuits are all now pushing the legal and regulatory argument that when Amazon operates both a marketplace and a logistics network that holds products, it is a retailer-like business.

Google Files EU Antitrust Complaint Against Microsoft for Cloud Practices

Report from the Wall Street Journal

In Brief – Google said it has filed a formal complaint against Microsoft with the European Commission antitrust regulator accusing the software giant of abusing its market power in enterprise software to push businesses to use its Azure cloud platform and keep them locked in. Microsoft is the #2 cloud services provider in Europe, with Amazon first, Google third, and Alibaba fourth. Google is not the first cloud services competitor to complain to EU officials about how Microsoft uses software licensing terms, pricing, and other practices to incentivize business customers who use its market-leading software packages to also use Microsoft cloud services. EU-based cloud providers, each smaller than the top US and China-based companies, filed similar complaints against Microsoft in 2022, and the two sides settled in July after Microsoft agreed to make changes to its software practices. However, those Microsoft policy changes only apply to software customers who use the EU cloud providers, not Amazon, Google and Alibaba. While Google and Microsoft had a legal and public policy truce for a while, that has long since passed.

Context – Allegations that Microsoft leverages its dominant software packages, including Office 365 and the Windows OS, to preference its other digital services extend beyond Azure cloud services. The European Commission recently found that Microsoft has engaged in anticompetitive conduct by bundling its Teams service into Office 365, nearly four years after Slack filed an antitrust complaint. German software company NextCloud accuses Microsoft of unfairly bundling its OneDrive cloud system into Windows. The EU’s Digital Markets Act (DMA) regulatory regime will likely impact some or all of these issues at some point. Microsoft is a DMA gatekeeper (as are Amazon and Google). Microsoft’s Windows OS and LinkedIn are currently designated as regulated core platforms. And while the DMA does apply to cloud services, none of the gatekeepers’ cloud businesses have so far been designated as regulated core platforms.

FTC Releases Staff Report Detailing Lots of Data Collected and Used Online

Report from the Washington Post

In Brief – The US Federal Trade Commission has released a 129-page staff report that sums up the findings of a four-year study of the data collection and use practices of nine of the largest social media and video platforms. Besides noting that the companies collect immense volumes of user data, the FTC staff criticized them for using the data to power advertising services, direct content designed to keep users engaged on their platforms, and train AI systems. The fact that many young people, including teens, use the services also comes in for much criticism. The report includes calls for Congress to extend COPPA regulations up to the age of 18 and also enact “comprehensive federal privacy regulation”.

Context – In the spirit of Casablanca’s Captain Louis Renault, the FTC is shocked, shocked to find that data collection and targeted advertising is going on over the internet. As with most things involving social media, there is a thin sheen of bipartisanship covering deep partisan fissures. All five FTC commissioners voted to release the report but there is little substantive agreement beyond that. The process began during the Trump Administration and many Republicans continue to complain that the platforms discriminate against conservative viewpoints. The two Republican commissioners submitted statements that are quite critical of the report. The FTC’s Democratic majority is fully committed to the view that online advertising is anti-privacy and anti-consumer, deriding data collection as “commercial surveillance”. Moving beyond the political theater of the report, it’s worth taking the temperature of privacy-related federal bills. “Comprehensive” privacy legislation seems unlikely. A deal between House and Senate committee chairs has hit a House stumbling block. Other big hurdles include the legislative calendar, business antipathy for the current FTC, and GOP reluctance to give President Biden a big win. So, progressives are calling on FTC Democrats to enact federal privacy regulations now. Finally, some manner of online “child safety” legislation, such as expanding COPPA, is sure to be pitched in a post-election Lame Duck.

Meta Responds to Polish Copyright Law by Changing News Links

Report from Notes From Poland

In Brief – Facebook has changed how news links appear on its site in Poland, announcing that it will no longer include images and text summaries when third parties post media stories. Only the title and link will appear. The company says the policy is in response to the new Polish copyright law, which includes so-called “neighboring rights” for copyright holders. Facebook argues that the proper treatment of non-commercial posts from third parties is not clear and therefore it will not add images or summaries that could trigger payment requirements. Media companies are directed to ensure that their own posts are properly labeled so that Facebook’s systems include images and summaries. Media companies reacted harshly, accused Facebook of exploiting its dominant position, and said they would explore legal action.

Context – This is one more front in the global media company campaign to have governments force Google and Meta to pay them when their content appears on the platforms. One thread involves government-supervised binding arbitration imposing licensing rates, which started in Australia, was picked up by countries including Canada, and has been considered by the State of California. While Google and Facebook agreed to pay media companies in Australia, Meta has since diverged from Google, blocking media posts in Canada to avoid the payments regime, and threatening to do so in California and Australia. Google pays in all those locales. In the EU, the 2019 Copyright Directive created neighboring rights to require license payments for news snippets. France changed its copyright law and called for Google to pay media companies. When Google threatened to stop using snippets, France’s antitrust regulator determined that stopping snippets was an abuse of Google’s search dominance and directed Google to pay. Poland is the last EU member state to implement the Copyright Directive, and its final version addressed media complaints. Meta’s reluctance to pay media companies when users, rather than Meta itself, post news content is in line with its recent policy. The next big decision likely rests with Polish competition regulators.

Google Likely to Get a Negative DMA Preliminary Finding on Search Plan

Report from Bloomberg

In Brief – The European Commission is reportedly planning to issue preliminary findings that Google is not meeting the requirement of the Digital Markets Act (DMA) to treat vertical search rivals fairly. The formal chargesheet is expected in October, although the upcoming changeover in Commission leadership could delay the release. How Google’s main search service treats so-called “verticals”, specialized websites to search for things like hotels, airfares, retail products, local services, or jobs, has been a top concern regarding Google’s DMA compliance plan. Many vertical search providers believe that they have long been penalized by the digital giant, which was intent on bolstering its own specialized search offerings. The concern sparked the EU’s first antitrust case against Google, and vertical providers continue to call on regulators to demand better treatment from Google. At the same time, many companies that aim to sell directly to online consumers, such as airlines, hotel chains, and retailers, argue that they should not be penalized in Google search results. Commission officials recently held a series of workshops with search market stakeholders, followed by a meeting with Google representatives. One proposal reportedly floated by the search giant would give users the option of navigating to alternative vertical search platforms or to directly search suppliers, such as hotel and airline websites. The Commission is due to issue its final decision on Google’s DMA plan by March.

Context – There are currently 7 DMA gatekeepers, and they operate 28 “core platform services” that are required to comply with the law’s regulatory mandates. In March, the Commission opened compliance investigations of Apple’s App Store, Meta’s plan to offer an ad-free subscription alternative for Facebook and Instagram, and Google’s treatment of vertical search. The Commission has already issued preliminary findings against Apple and Meta and looks set to add Google search to the mix. EU referees are certain to hear endless advice on how Google should do search in a fair way from here to the end of time.

California Governor Vetoes Landmark AI “Safety” Regulation Bill

Report from the Washington Post

In Brief – In a much-anticipated decision forced by the California State Legislature passing a bill to create the first major artificial intelligence regulatory regime in the US, Governor Gavin Newsom (D) vetoed SB 1047. The bill was the subject of intense lobbying for and against, serving as a proxy debate over AI regulation in the US as the Congress refrains from backing meaningful AI legislation. Many of the biggest California-based tech companies and leaders of the state’s venture capital sector argued that the bill would reduce AI innovation and investments in the state. Advocates of so-called AI “safety” claimed that the bill only required large AI developers to implement AI testing and reporting policies that they have already committed to. The bill applied the most stringent mandates to AI models that involved at least $100 million in investment, and Newsom argued that this focus on large AI models ignored the possibility that smaller AI systems could involve highly sensitive data or be used in high-risk situations, while bigger models might be used to handle low-risk activities.

Context – The veto is likely about investment and is best understood in the context of the EU’s landmark AI Act. It was initially designed as a “risk-based” technology-neutral regime with mandates based on the purpose of the application using AI technology, not the underlying AI technology. However, the EU Parliament changed course following the sensational release of ChatGPT and the final version of the AI Act regulates large “foundation models” regardless of application. Newsom’s critique of SB 1047 harkens back to the risk-based theory rather than special mandates for the largest systems. US progressives generally applaud aggressive tech regulation that is enacted in Europe but fails in the US Congress, including the expansive AI Act. They were among the top backers of the California bill. Just as critics of the EU regime argue it will nudge European AI innovators and entrepreneurs to move to the US to grow businesses, critics of the California bill were saying the same about pushing AI commercialization to less regulated states. They prevailed in Sacramento.

Flipkart Sellers Suing to Delay Indian Marketplace Antitrust Case

Report from Reuters

In Brief – Three large Flipkart sellers have filed suit to block the Competition Commission of India’s (CCI) investigation of Amazon and Flipkart that found the two ecommerce marketplaces and several of their largest sellers in violation of competition laws. The antitrust agency, which began an investigation of the two leaders in Indian ecommerce in 2020, found that both US-owned platforms violated competition law by giving a few large-volume sellers preferences to boost their sales compared to other smaller retailers. A former high-volume apparel seller on Amazon’s marketplace, the largest in the country, filed a similar suit last week and won a temporary injunction from the High Court in Karnataka state. The three mobile phone sellers on Flipkart, which is owned by Walmart and is the country’s second-ranked marketplace, allege that they were called to submit data to help the CCI’s investigation and cooperated, but were later named among the accused, which they argue violates due process.

Context – The Indian ecommerce market, hyped for long-term growth potential, has followed a unique development path due to the country’s strict foreign investment laws that prohibit foreign ownership of multi-brand retail businesses, online or offline, but allow third-party marketplaces for independent sellers to be owned by non-Indian businesses. Therefore, the country’s US-owned ecommerce leaders structure operations as marketplaces providing a platform for independent retailers. But they have faced years of accusations from Indian retailers that they violate the FDI laws and operate as retailers, with a few giant sellers actually serving more like wholesalers. The charges against Amazon were highlighted by a 2021 Reuters exposé based on leaked documents that detailed how Amazon operated its business without any formal first-party retail. Flipkart appealed to the Indian Supreme Court, arguing that there was no similar evidence surrounding its seller practices, but was turned down. Throughout, Amazon has also been engaged in a pitched legal and business competition with Reliance Industries, an emerging domestic ecommerce powerhouse owned by Mukesh Ambani, India’s richest man.
