News insights
May 2024
Google Pitching for Switch to a Bench Trial in DoJ AdTech Antitrust Case
Report from AP
In Brief – Google is arguing that the antitrust complaint filed by the US Department of Justice and eight states alleging that the digital giant has built an illegal online advertising monopoly, which is currently scheduled to go to trial in September in a federal court in Virginia, should be a bench trial rather than argued before a jury as currently planned. Google claims that it is unprecedented for a jury to decide a federal antitrust case brought by the government, and that the case involves “a complicated, intricate technology ecosystem, which DOJ has acknowledged to this Court is ‘highly technical, often abstract, and outside the everyday knowledge of most prospective jurors.’” The company also argues that the constitutional right to a jury trial does not apply to a civil suit brought by the government, and that the right “protects citizens against the federal government, not the other way around.” In an especially interesting twist, Google’s court filing acknowledges that the DoJ has a stronger argument for demanding a jury in a case where it seeks monetary damages, which the company claims is the reason the government’s complaint includes a claim of less than a million dollars in damages for US Department of Defense online advertising costs. Google has therefore already sent the government a check for three times the claimed losses, aiming to take that factor off the table and narrow the case down to the complex technical and legal issues that are appropriately addressed by a federal judge.
Context – Last December, a federal court jury in California ruled decisively against Google in a lawsuit filed by Epic Games alleging that Android rules and practices violated federal antitrust law. Apple won a similar case. It struck many as odd that the “closed” Apple ecosystem, which generally holds larger market shares than Google’s, was on the right side of antitrust law while the more open Android was not. One big difference was that Epic v Google was a jury trial, while Epic v Apple was a bench trial. Plaintiffs in all the Big Tech antitrust suits, including the DoJ, are now asking for jury trials. It raises an interesting legal question.
Colorado Passes Broad-Based AI Regulation in Sharp Contrast to US Congress
Report from The Hill
In Brief – Colorado has become the first US State to enact broad-based legislation to regulate artificial intelligence systems. The law requires AI developers to “use reasonable care to avoid algorithmic discrimination”, and both AI developers and deployers must complete system impact assessments and make disclosures to regulators and the public. The measure, which was opposed by several leading business and tech trade groups who argued it threatened innovation in the state, was ultimately signed by Governor Jared Polis (D), himself a tech entrepreneur before entering politics. However, Polis qualified his support for the bill in a letter to the legislature that accompanied his signature, noting that while he appreciated “the sponsors’ interest in preventing discrimination and prioritizing consumer protection”, he encouraged them to significantly improve on it before the law takes effect in 2026.
Context – The biggest question in AI public policy is whether governments are going to directly regulate AI or, to adopt terminology from two Japanese companies who recently offered thoughts on the debate in Japan, will advance governance through “soft laws” such as voluntary “best practices” and “safe AI” recommendations. In one corner is the EU with its AI Act, the standard for concrete regulation and champion of those who want governments to be far more activist than they were with the internet. But the soft law model has had the upper hand. The US Congress has not legislated, the massive AI Executive Order from President Biden is filled with soft law style principles and safe AI practices reinforced by federal contracting rules, and a legislative roadmap from the Senate Majority Leader was lambasted by champions of aggressive action. The UK Government initiated a global AI safety discussion, with South Korea recently hosting the second session, but no legislation. Japan has led the G-7’s “Hiroshima AI Process” that created International Guiding Principles and an AI Code of Conduct for developers. There is a clear reticence to have regulatory mandates push AI entrepreneurs to other jurisdictions.
UK Digital Markets Competition Bill Wrapped Up as Parliament Heads Out
Report from TechCrunch
In Brief – In the rush to wrap up business before the upcoming parliamentary elections that are expected to see a change in government control after 14 years of Conservative rule, the UK Parliament has approved the Digital Markets, Competition and Consumer Bill (DMCC). The measure fully establishes the Digital Markets Unit within the Competition and Markets Authority and gives the regulator the authority to designate very large digital businesses as having “strategic market status” (SMS) and impose on each a code of conduct backed up by fines of up to 10% of global turnover. Merger guidelines are toughened for the digital giants, and media companies are hoping the largest platforms will be forced to pay for media content, as has been done in Australia. Finally, the bill creates new consumer protection rules that apply across-the-board, including on data use, terms and conditions, fake online reviews, and secondary market ticket sales. The Labour Party also backed the bill, putting it in prime position to be approved during the “wash up” before Members of Parliament headed to the campaign trail.
Context – While the UK Conservatives have portrayed their digital policy as a pro-innovation alternative to the EU, their Online Safety Act (OSA) is a close cousin of the Digital Services Act (DSA) and the DMCC largely parallels the Digital Markets Act (DMA). The UK policies are not so much less regulatory as a few years slower paced. One meaningful difference is that the UK model appears more committed to explicit rule-making processes with specific obligations spelled out in detailed regulatory documents. The latest draft of OSA directions for social media platforms and search engines to protect minors covers more than a thousand pages, while the DSA directs platforms to protect minors and is letting the Commission staff hammer out what platforms like TikTok, Facebook, and Instagram specifically need to do as they investigate them. Similarly, the UK DMU will go through a regulatory process to determine which companies have SMS status and what they each must do, while the DMA covers 7 companies and applies 18 high-level do’s and don’ts.
EU Commission Learning That Everyone Can’t Like Google Search Results
Report from Reuters
In Brief – Trade groups representing Europe-based airlines, hotels, and retailers have called on top European Commission digital regulators to protect the interests of their members as Google changes its search engine algorithms in Europe to comply with the EU’s Digital Markets Act (DMA). Their letter to Margrethe Vestager and Thierry Breton calls for the Commission to consider broad industry impacts as regulators investigate Google’s plan to comply with the DMA’s “Do’s and Don’ts”. The industry groups are concerned that the Commission will focus too heavily on fair treatment of third-party intermediary services that compete directly with Google’s own specialized-search services for hotel, airline, and retail offerings, without appropriately considering the potential negative revenue impacts on European businesses if their direct online offers are downranked on Google search results pages in relation to the intermediaries’ sites.
Context – The “fairness” of search results has been a morass plaguing the Internet ecosystem for more than 20 years. Nearly every business generates a large share of its web traffic through Google search. Changes to the algorithms have always posed a risk of major hits to website traffic, and therefore business. Google has always argued that changes were intended to improve user experiences. And it has been relatively non-transparent about the specific reasons a site’s traffic fell, claiming it needed to avoid helping websites “game” search algorithms. “Vertical search” businesses that competed with Google’s own specialized search offerings have long accused the digital giant of downgrading vertical competitors, a charge at the heart of the long-running “Google Shopping” antitrust case that helped lead to the DMA. If changes mean some websites see their position go up, others will feel aggrieved. There are only a few prime positions. Google has said this would happen as a result of its DMA compliance changes. The EU referees will now hear endless advice on how Google should operate search in a fair way. And adding Generative AI “answers” to search will only add to the problem. The answers are always judgement calls.
Microsoft Pressed for More Info on AI Risks Under EU Digital Services Act
Report from TechCrunch
In Brief – The European Commission has asked Microsoft to provide additional evidence that the company has carried out adequate risk assessments and risk mitigation measures to ensure that its Generative Artificial Intelligence (GAI) tools, such as those integrated into the Bing search engine, will not be used to mislead voters in the upcoming European elections. Back in March, the Commission, as the regulator of the largest online platforms and search engines under the Digital Services Act (DSA), asked eight of them, including Microsoft, Google, Facebook, TikTok, and X, to submit their required risk assessments and mitigation measures regarding GAI and impacts on a range of issues covered by the DSA, including electoral processes, illegal content dissemination, personal data protection, and intellectual property. Electoral impacts of GAI were highlighted as an especially timely concern, including the production of ‘hallucinations’, the creation and dissemination of deepfakes and other forms of misinformation, and the prospect that AI could automate services that can mislead voters. The regulator is not satisfied with the initial response from Microsoft and has given the digital giant until May 27 to provide sufficient materials or face a fine of up to 1% of global turnover.
Context – The Commission is formally investigating five of the 23 very large platforms under the DSA for issues including violent and hateful content, political misinformation, and protecting younger online users from harm. Microsoft may be the sixth. The standards set by the DSA are very high level. Specifics will largely be up to the regulator. For example, TikTok and Meta are being investigated for not adequately protecting minors, which means to “ensure a high level of privacy, safety, and security”. Clear and actionable. With AI, “hallucinations” are a fact of life for all GAI chatbots. If developers knew when they would occur, they would prevent them. The EU AI Act was celebrated as the world’s first comprehensive AI regulation, including a specialty EU AI Office. It does not go into full effect for two years. But other European regulators are already stepping into the role.
Minnesota’s Rideshare Minimum Pay Deal Might Signal National Shift
Report from KARE 11 News
In Brief – Following two years of contentious battles over legislation to set a minimum wage for rideshare drivers, Minnesota lawmakers, Governor Tim Walz (DFL), and leading ridesharing platforms Uber and Lyft, have reached agreement on a rate floor that is expected to give drivers a 20% pay increase. The deal is for $1.28 per mile and $0.31 per minute. The latest back-and-forth kicked off in March when the Minneapolis City Council enacted a city rate of $1.40 per mile and $0.51 per minute. Both companies announced that they would discontinue operations in the city. The state legislature then proposed a statewide minimum rate of $1.27 per mile and $0.49 per minute, and the city leaders agreed to lower their minimum rate to match. However, the companies were not involved and announced that they would respond by ending their operations statewide. The latest deal, which is the first to include the two companies, sets a statewide minimum and preempts the authority of the Twin Cities to go higher. A representative of Uber said, “While the coming price increases may hurt riders and drivers alike, we will be able to continue to operate across the State under the compromise brokered by the Governor.”
Context – This year’s game of MN rideshare pay chicken followed the state legislature passing a floor last year of $1.45 per mile in Minneapolis-St. Paul, and $1.34 per mile elsewhere, with $0.34 per minute in both. The companies threatened to shut down statewide and that bill was vetoed by Gov. Walz (D). He then directed the state Department of Labor to study the pay issue, and its report said that $0.89 per mile and $0.49 per minute would bring driver pay up to the Twin Cities minimum wage of $15.57 an hour. The new agreement is noteworthy because while Minnesota is a solidly blue state and Minneapolis-St. Paul are progressive bastions, ridesharing regulation, in particular setting pay floors, has been limited to the very wealthy coastal cities of New York (2019) and Seattle (2022) and states of California (2020) and Washington (2022). A much larger number of city and state elected officials nationwide are likely to take note and explore similar moves.
Arizona Sues Amazon Over Dark Patterns and Antitrust Abuses
Report from AZ Mirror
In Brief – Arizona Attorney General Kris Mayes (D) has filed two lawsuits against Amazon in the state’s Superior Court alleging years of unfair and anticompetitive practices. One lawsuit accuses Amazon of violating Arizona’s Consumer Fraud Act by engaging in deceptive website practices, often called “dark patterns”, to trick Prime subscribers who wanted to cancel their subscription into not following through. The lawsuit follows similar complaints from the Federal Trade Commission, which filed a federal lawsuit challenging similar Amazon dark pattern practices in July 2023, as well as complaints from European consumer protection authorities. The second lawsuit accuses Amazon of breaking Arizona’s Uniform State Antitrust Act through “price parity” policies that pressured third-party sellers to not offer their products for lower prices on other ecommerce platforms that have lower fees than Amazon. Amazon is also accused of promoting products to consumers from third-party sellers who use Amazon’s logistics services, by putting them in the Buy Box even when they are the higher-priced option. The antitrust suit covers similar ground to an antitrust complaint filed in California state court by the California Attorney General in 2022 that has survived Amazon’s motion to dismiss and is also a component of the FTC’s big federal antitrust complaint.
Context – Arizona’s two lawsuits highlight the increasing divergence between the digital landscape in the EU and the United States. This Amazon Prime “dark patterns” complaint started in Europe in January 2021 and the company settled in 2022 by making major changes. But they kept the original over-the-top processes in the US for another year, until days before the FTC lawsuit. Now Arizona is targeting the years of conduct before the change, even citing how the company addressed the problem in the EU earlier. The charge that Amazon preferences sellers who use Amazon logistics has also been raised and settled in the EU, with Amazon agreeing in mid-2022 to offer a second Buy Box with an alternative shipping option. Again, Amazon has not reached a similar agreement with the FTC and appears to be headed to a 2026 trial.
Meta Joins TikTok as Target of EU DSA Teen Safety Investigation
Report from Reuters
In Brief – The European Commission has announced a formal investigation into whether Meta is violating the Digital Services Act (DSA) by not appropriately protecting younger users. The DSA, enacted in 2022, regulates how digital platforms handle objectionable content. It places the most stringent requirements on the largest platforms, called VLOPs, and they are directly regulated by the European Commission. Facebook and Instagram are two of the 23 current VLOPs. Article 28 of the DSA directs platforms to “put in place appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service.” Commission investigators will now be reviewing the two Meta platforms’ compliance with Article 28, including whether their algorithms stimulate addictive behavior in young people, so-called rabbit-hole effects where a user is inundated with similar content, the use and effectiveness of the platforms’ age-verification tools, and the privacy settings for minors.
Context – The European Commission is now investigating five VLOPs: X, TikTok, AliExpress, Facebook, and Instagram. The two Meta platforms were already being scrutinized for not adequately addressing political misinformation. The Commission’s investigation of TikTok also cites Article 28 and raises similar concerns about protecting minors from recommendation algorithms and rabbit holes. The European Commission is not Europe’s only regulator trying to police online platforms. The UK Online Safety Act covers similar grounds to the EU DSA. The UK’s digital regulator calls them “regulatory cousins” rather than “identical twins” and the regulatory processes are a big difference. The UK regulator recently released its draft Child Safety Codes of Conduct. They cover similar ground to DSA Article 28 but they run over 1200 pages and include 40 specific operational directions. The UK regulatory process is likely to run through next year. DSA Article 28 has 20+ words saying do a good job protecting minors. The Commission regulators will determine what that actually means as they investigate.
TikTok Creators File Federal Lawsuit to Halt Effort to Ban the App
Report from New York Times
In Brief – A group of content creators who successfully use TikTok have filed a lawsuit in the US Court of Appeals for the District of Columbia Circuit to block enforcement of the federal legislation to force ByteDance to sell the app within a year or face TikTok being blocked from US app stores and hosting service providers. The eight individuals, who collectively claim approximately 14 million followers, say that they could lose money earned through TikTok’s Creator Fund or by promoting their small businesses, and that their attempts to use other platforms have proven less fruitful. The First Amendment-focused lawsuit, which is being funded by TikTok and parallels one brought directly by the company, describes the ban as “content-based, viewpoint-based, and speaker-based,” citing multiple instances of congressional backers suggesting that TikTok disseminates Chinese government propaganda or promotes “anti-American” and “anti-Israel” messaging. A spokesperson for the US Department of Justice said, “This legislation addresses critical national security concerns in a manner that is consistent with the First Amendment and other constitutional limitations. We look forward to defending the legislation in court.” Both sides are asking the court for decisions by December 6th.
Context – Back in 2020, federal lawsuits were filed by both TikTok and TikTok users to block President Trump’s app ban. Federal judges granted temporary injunctions in both cases, citing First Amendment concerns, and questioning the legal authority cited by the Trump Administration. Backers of the current bill hope that the national security-focused law strengthens the government’s legal case. National security agencies provided classified briefings to Congress on the risks of China using TikTok to malignly influence US society and those concerns will next be pitched to judges. However, the First Amendment has protected even adversarial speech, including anti-American propaganda. But the US has never had such a huge mass media or propaganda platform potentially in thrall to a national security adversary.
Google’s AI “Answers” on Search Page Creating Big Website Traffic Concerns
Report from Washington Post
In Brief – Google’s plan to begin a major rollout of a new AI-enabled product it calls “Search Generative Experience,” or SGE, which aims to directly answer user queries with chatbot-style multi-paragraph replies, has online creators, website operators, and others who rely on traffic from Google worried about the impact of this remaking of online search. While SGE includes some attribution links, the length of answers can push them down below the visible space of the SGE box. In addition, adding SGE near the top of the search results page pushes links further down the page, when they are often already depressed by other Google “one boxes” as well as ads. A wide range of experts and analysts have been speculating about the impact on websites, including Gartner predicting traffic from search falling 25% by 2026, and search engine optimization consultancy Siege Media estimating a 10% to 20% hit, with some publishers facing far worse. The head of MediaNews Group, which is suing OpenAI and Microsoft for copyright infringement on behalf of eight of its major papers, said the Google change could have an even bigger effect on newspapers than the loss of their classifieds businesses in the mid-2000s.
Context – In late April, Tuta Mail, a German encrypted email service, filed a complaint with the European Commission alleging that Google has violated the EU’s new Digital Markets Act (DMA) by changing how its search service treats the email company’s website, leading to a fall in traffic of 90% to its site. This was a reminder that for nearly 20 years every web-based business has generated a large share of its new traffic through Google search, that unexpected changes to the Google algorithms regularly posed a serious risk of a major hit to traffic, and that web operators feel shocked and aggrieved when their traffic from Google suddenly falls. The search giant has always been pretty non-transparent about specific reasons traffic falls, claiming it was necessary to avoid people “gaming” search algorithms. In Europe, regulators might soon insert themselves in the morass to rule on what’s fair and legal. Chatbot-style SGE potentially poses that problem on steroids.
UK Ofcom Releases Online Safety Act Draft Teen Safety Codes
Report from TechCrunch
In Brief – Ofcom, the UK online content moderation regulator under the new Online Safety Act (OSA), has announced the release of its draft Children’s Safety Codes of Practice. The OSA, intended to “make the UK the safest place in the world to be online”, imposes strict new duties on services that can be accessed by teens under 18, including social media sites, apps, and search engines. The platforms are required to “prevent” young users from encountering the most harmful content, such as pornography and material promoting suicide, self-harm, and eating disorders, and “minimize” their exposure to harmful content, such as violent, hateful, or abusive material, online bullying, and content promoting dangerous online challenges. Ofcom sees age checks playing a major part. Services that do not effectively ban harmful content will be expected to implement “highly effective age-checks” to prevent those under 18 from seeing it, either by keeping young users off the entire service or away from the parts of a site that include harmful content. Website recommendation algorithms also come in for criticism, being described as “children’s main pathway to harm online.” Services with a recommender system and harmful content will be required to use age checks and configure their algorithms to, again, block the most harmful content from children’s feeds, and reduce the prominence of other harmful content. The regulator is seeking responses to its draft codes until mid-July and expects to publish final versions within a year, with the new measures coming into force in the second half of 2025.
Context – The EU enacted the Digital Services Act (DSA), their regime to regulate how platforms deal with illegal and harmful online content, in mid-2022. Ofcom’s head of online safety has said that the OSA and DSA are more “regulatory cousins” than “identical twins”, but that the regulators are trying to work together to make it easier to comply with the proliferating online content moderation regulatory regimes. TikTok and Meta are now both being investigated by the European Commission for violating DSA rules intended to protect young users.
Aussie Judge Lets Order Expire Calling for X to Globally Block Violent Video
Report from Washington Post
In Brief – Federal Court judge Geoffrey Kennett, who had initially granted a temporary injunction backing the Australian eSafety Commissioner’s effort to order X to globally take down 65 posts containing footage of an Assyrian church bishop being stabbed in Sydney, has decided to let the injunction expire in a turnabout siding with X. Kennett explained that the order from the eSafety Commission was “unreasonable” and clashed with the international legal concept of the “comity of nations,” which recognizes that countries’ laws have territorial limits. The knife attack occurred during a live-streamed church service and clips soon appeared online. Commissioner Julie Inman Grant called on platforms to take down the footage, and many complied, including X. However, X only blocked the video in Australia, not globally. The Commissioner argued that clips could still be viewed on X by Australians using VPN technology that hid their real location, so the company needed to extend their ban globally to meet Australia’s online safety standards. X refused and a public spat ensued between owner Elon Musk and top government officials, including Prime Minister Albanese, who harshly criticized Musk directly. Along with the government possibly appealing the latest court ruling, top officials have discussed amending Australian social media law to align with the eSafety Commissioner’s interpretation.
Context – Musk often defends “free speech”, but X’s policies are more nuanced. He also says the platform complies with “local laws” on legal speech, which means very different things in different countries. In this case, X quickly complied with Australian law in Australia, but not everywhere else. In Brazil, when a top judge recently ordered X to block over 100 Brazilian accounts as part of an anti-democratic activity probe, Musk initially objected, but the company then backtracked. In the EU, X is under investigation for how they operationally handle content moderation. YouTube has announced a similar decision in Hong Kong to that of X in Australia by agreeing to block the protest song “Glory to Hong Kong” locally.
Minnesota Considering Statewide Ridesharing Minimum Wage “Compromise”
Report from CBS News
In Brief – Following enactment of a Minneapolis city law in March setting a minimum wage for rideshare drivers that resulted in Uber and Lyft announcing that they would shut down their services in the city, legislation is moving in the Minnesota State House to instead set a statewide ridesharing minimum wage. The state measure is billed as a “compromise” involving the State House, Senate, and Minneapolis City Council, but has not included Uber or Lyft. Both have said that the new measure extends the problems with the city mandate statewide and will force them to shut down across Minnesota. The city minimum wage for drivers, which was initially scheduled to go into effect on May 1 and has been pushed back to July 1, is $1.40 per mile and $0.51 per minute, while the State House bill would set a statewide minimum wage rate of $1.27 per mile and $0.49 per minute as of January 1, 2025. As part of the current talks, the City Council agreed to lower the city wage to match the state rate. While the new measure does not propose to classify drivers as platform employees, rideshare companies would face new mandates including providing drivers with vehicle insurance, compensation for injuries incurred while driving, transparency into pay and discipline, and the right to appeal deactivation of an account.
Context – Minnesota has been a hub of rideshare platform minimum wage activism. Last year, state legislation setting a wage floor of $1.45 per mile driven with a passenger in the Minneapolis-St. Paul region, $1.34 per mile elsewhere in the state, and $0.34 per minute statewide, was passed by the legislature but then vetoed by Gov. Tim Walz (D). Uber threatened a statewide shutdown in that standoff as well. In the aftermath, Walz directed the state Department of Labor to study the issue, and its report said that $0.89 per mile and $0.49 per minute would bring driver pay up to the Twin Cities minimum wage of $15.57 an hour. Ridesharing minimum wages have been enacted in two cities, New York (2019) and Seattle (2022), and two states, California (2020) and Washington (2022).
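The per-mile and per-minute floors above only translate into hourly pay under assumptions about how a driver’s hour breaks down, since drivers are paid only while carrying a passenger. A minimal sketch of that arithmetic, using the Department of Labor study’s rates but purely hypothetical values for miles driven per engaged hour and the share of time spent with a passenger:

```python
def gross_hourly_pay(rate_per_mile, rate_per_minute,
                     miles_per_engaged_hour, utilization):
    """Gross pay per clock hour under a per-mile/per-minute rate floor.

    An 'engaged' hour is one spent with a passenger on board; `utilization`
    is the (hypothetical) fraction of each clock hour that is engaged.
    """
    # Pay for a full engaged hour: mileage component plus 60 paid minutes
    engaged_hour_pay = rate_per_mile * miles_per_engaged_hour + rate_per_minute * 60
    # Scale down by the share of the hour actually spent on trips
    return engaged_hour_pay * utilization

# Department of Labor study rates: $0.89/mile and $0.49/minute.
# 20 miles per engaged hour and 60% utilization are illustrative guesses.
pay = gross_hourly_pay(0.89, 0.49, 20, 0.60)
print(f"${pay:.2f} gross per clock hour")  # before vehicle expenses
```

Note that this is gross revenue to the driver; the study’s $15.57/hour target is net of vehicle and operating expenses, which is why the per-engaged-hour rates sit well above the minimum wage itself.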
Former Twitter CEO Jack Dorsey Seems to Believe In X-Twitter Again
Report from Fast Company
In Brief – Jack Dorsey, a founder and longtime CEO of Twitter who helped bring about the company’s acquisition by Elon Musk, has indicated through a handful of cryptic tweets that he believes that the platform, now called X, promotes “freedom”. The relationship between Dorsey and Musk has swung wildly between mutual support and apparent disdain. They appear to be back in each other’s good graces, or at least Dorsey is back to thinking X-Twitter is promoting broad and open discourse, calling it “freedom technology”. Dorsey also revealed that he is no longer affiliated with BlueSky, a small social media platform based on a non-traditional content moderation model that allows users to utilize various third-party content moderation services rather than having content moderation directed by the platform’s central leadership. In 2019, Dorsey helped create BlueSky while still CEO of Twitter and served on its Board until earlier this year.
Context – Dorsey has been a true believer in the open internet’s power to empower people by reducing the control of gatekeepers. And also a very sincere and serious thinker about how to balance the many interests and complexities of content moderation, the job that Mike Masnick rightly says is impossible to do well at scale. He always seemed conflicted by Twitter decisions in the nearly unlimited grey areas being seen as distorting public discourse and appeared to believe that Musk, another believer in open online discourse, could make decisions that were less tied to business demands. When Musk acquired Twitter and shook up its content moderation practices, many in the left-leaning Twitter “Indigo Blob” rebelled and Dorsey seemed to share many of their concerns. But that seems past. And while many in the Blob claimed to look elsewhere, whether Threads, Mastodon, or BlueSky, most appear to have remained on X. And while experiments in creative content moderation continue, such as BlueSky’s protocols or Nostr’s (now a Dorsey favorite), the fact that X under Musk means the top social media platforms are not a content moderation monoculture is a huge development.
Apple Tweaks DMA Compliance Plan to Reduce Fees on Small Developers
Report from TechCrunch
In Brief – Apple has announced further tweaks to its new app rules and fees intended to bring its iPhone regime into compliance with the EU Digital Markets Act (DMA). The company’s popular products and services, well-known as a tightly controlled “walled garden” that Apple says is designed to protect privacy while delivering seamless user experiences, appear vulnerable to many of the DMA’s mandates, including interoperability with competitors and not preferencing one’s own products. For Apple, that means allowing “sideloading” of apps, and alternative app stores. So, the company proposed new rules for app developers using alternative downloading options, including applying App Store-equivalent security standards, as well as proposing major fee changes. On one hand, Apple is offering a reduction in its general commission from 27% to 17% (and 15% to 10% for subscription renewals), but it comes with a new €0.50 per-download “core technology fee” (CTF) charged on every download after an app is downloaded one million times in a 12-month period. The latest revision from Apple exempts apps from the CTF if the app earns no revenue, and delays for three years the application of the CTF to new developers crossing the million-download threshold. Apple reiterated that adopting the alternative fee arrangement is an option for developers who could also choose the current App Store system and fees, and that the new regime will also apply to the iPad starting this fall.
Context – The biggest app developers aggressively criticized Apple’s DMA compliance plan straightaway, quickly calling on the European Commission regulators to reject it and mandate meaningfully lower fees. Similar lobbying is underway in the US, where app developers are calling on Federal Judge Yvonne Gonzalez Rogers to drive down Apple fees by enforcing her order demanding that Apple end its anti-steering rules. The Commission is formally investigating Apple’s DMA plan (and Google’s and Meta’s) and the regulator has indicated that fee levels are on the table, and they don’t appear sympathetic.
More Newspapers Sue OpenAI and Microsoft for Copyright Infringement
Report from the Washington Post
In Brief – Eight major US daily newspapers owned by Alden Global Capital have sued OpenAI and Microsoft for copyright infringement in the training of their AI chatbots. The papers, which include the San Jose Mercury News, Denver Post, and St. Paul Pioneer Press, add to the steady stream of lawsuits being filed by media companies, authors, artists, and musicians accusing corporate AI leaders of violating numerous provisions of federal copyright law. The lawsuit alleges that both OpenAI and Microsoft used millions of copyrighted articles to train their AI models without compensation, that their chatbots regularly surfaced the entire text of articles held behind subscription paywalls, and that their AI tools often did not prominently link back to sources. A representative of the papers said, “we can’t allow OpenAI and Microsoft to expand the Big Tech playbook of stealing our work to build their own businesses at our expense.” Spokespeople for OpenAI and Microsoft both responded to the litigation with statements claiming that the companies support news organizations around the world.
Context – Everyone following “Generative AI” now knows that today’s top AI chatbot systems were “trained” on much of the text on the internet, which obviously includes massive amounts of copyrighted material. While copyright holders keep filing AI suits filled with voluminous claims of infringement, federal judges are narrowing them down to what should be a battle royale over AI training and the “fair use” exception. While there are strong arguments that the tech companies will prevail, others see the rights holders in a strong position. If there is a key issue beyond fair use it involves whether an AI system produces actual copies in their outputs. The big initial rulings in the AI copyright infringement cases saw judges noting that the systems were not creating copies. When the New York Times became the first major newspaper to sue OpenAI, they alleged that ChatGPT did in fact make full copies, although it’s reported that doing so took some aggressive maneuvers, including submitting verbatim opening paragraphs of stories as a chat prompt.
Senate AI Roadmap Confirms That AI Legislation will Follow Piecemeal Route
Report from the Washington Post
In Brief – Senate Majority Leader Chuck Schumer (D-NY) has released an AI policy roadmap (summary, report) that points to a process where any AI legislation is likely to proceed in a piecemeal fashion, with targeted measures addressing specific policies, rather than anything big and broad. The legislative roadmap highlights a very wide range of issues, including the impact of AI on intellectual property, elections, security, and labor rights, and provides high level advice on how congressional committees could proceed. The proposal also calls for $32 billion in additional federal spending on AI research and development. Sen. Schumer has highlighted risks from China getting ahead of the US on AI development.
Context – To be clear, major AI legislation like the EU AI Act is not being enacted by the US Congress this year. Most sober analysts have long known that was the case despite many months of ChatGPT-inspired media craze. However, the AI policy roadmap offers a good reality check. If there is legislation enacted this year, and time is very short, we continue to think something like what Sen. Gary Peters (D-MI) has been developing regarding the federal government’s own AI policies, or a grab-bag funding measure, is most doable and likely. Those are things that align well with President Biden’s massive Executive Order on AI, which mostly aims to influence AI through public investment, “voluntary” company principles, and reinforcing federal government policies and contracting rules. All the US-based AI giants are on board and integrated with the effort and will remain so as long as federal officials don’t force tech transfers or revelations that they are not comfortable with. Speaking of the AI giants, not only have they been very well represented in Sen. Schumer’s AI Insight forums, but they have reportedly been very successful in tamping down fears on the Hill of an AI-powered apocalypse, as well as reinforcing the view that having the US lead in AI innovations is key for both economic and national security. Note Schumer mentioning China. Finally, on IP rights, namely the issue of copyrights and AI training, that will be one for the courts in the US at this point.
Booking Designated an EU Digital Markets Act Gatekeeper
Report from Bloomberg
In Brief – The European Commission has formally added Booking to the digital “gatekeeper” platforms under the Digital Markets Act (DMA), meaning it will face the regulatory regime’s 18 duties. The law applies to eight kinds of digital services, including social networks, ads, browsers, operating systems, and “intermediation” platforms. The Commission initially designated 27 platforms of six digital giants – Amazon, Apple, ByteDance, Google, Meta, and Microsoft – in mid-2023, and they recently added Apple’s iPad operating system. Booking is the first European-based digital company added to the DMA. Along with their decision to designate Booking as a gatekeeper, the Commission announced that they did not believe TikTok’s ad service and X’s ad service were presently gatekeeper platforms, while they continue to study whether X is a social media gatekeeper.
Context – A big DMA question from the start was how many digital companies would enter the new world of ongoing regulation. Analysts identified around a dozen prospects, including EU-based Booking. Booking’s CEO was a strident critic of expanding beyond the very largest companies. The Commission’s first tranche ended up at six and looked a lot like the recommendation from leading EU Parliamentarian Andreas Schwab who said, “Let’s focus first on the biggest problems, on the biggest bottlenecks. Let’s go down the line — one, two, three, four, five — and maybe six with Alibaba.” ByteDance ended up the Chinese entry instead. Now Booking makes seven. It has long been the top hotel reservation intermediation platform with many run-ins with antitrust regulators over “price parity” policies that allegedly block hotels from offering lower prices on lower fee platforms. As to market caps, the five US giants all exceed $1 trillion, with three beyond $2T. ByteDance is estimated to be worth around $230 billion. Booking is “just” $125 billion. As to X being a “gatekeeper”, the idea that it has market power when it can’t seem to even stabilize its revenue, and is worth less than $10 billion, would redefine the whole regulatory scheme.
Microsoft’s Ongoing Cloud Services Antitrust Problems – Spanish Version
Report from Reuters
In Brief – A trade association representing startups in Spain has filed a complaint with the country’s antitrust regulator accusing Microsoft, #2 in cloud services globally, of anticompetitive conduct pushing its business software customers to adopt its cloud services. The Asociación Española de Startups, which represents 700 companies, counts global cloud services leader Amazon Web Services and #3 Google among its members. The group has joined the campaign to force Microsoft to change technical and contractual practices that critics argue leverage its dominance in business software to grow its cloud business. Specifically, the complaint alleges that Microsoft makes it difficult and expensive to operate its Office 365 suite on other cloud platforms, including through barriers to data portability and higher priced licenses.
Context – In the wake of Microsoft’s regulatory good behavior campaign that helped win approval of its massive Activision acquisition, the company needs to manage a collection of other competition policy issues in Europe where antitrust regulators might be wanting to prove that they can be tough on the giant. Allegations that Microsoft leverages its dominant Office 365 software to benefit other services are not limited to cloud services; they are also at the heart of a complaint from business collaboration services providers such as Slack. Microsoft is trying to resolve a European Commission investigation by offering Office 365 without its Teams service, as well as offering a stand-alone Teams without Office 365. The European Commission is also investigating a complaint filed in mid-2022 by three smaller European cloud services providers that parallels the complaint in Spain, as well as a complaint from German software company NextCloud accusing Microsoft of unfairly bundling its OneDrive cloud system into Windows. The Windows OS and LinkedIn have been designated gatekeeper core platforms under the Digital Markets Act and the company thinks the OneDrive issue has been addressed through its compliance plan.
FCC Fines Mobile Carriers for Sharing Location Data in Case From 2018
Report from The Hill
In Brief – The Federal Communications Commission has fined Verizon, AT&T, T-Mobile, and Sprint a combined total of nearly $200 million for sharing customer-location data without appropriate user consent. The agency’s action is based on an investigation that started in 2018 and found the carriers selling access to location data to data aggregators who then resold the data to third-party service providers. Rather than get consent from their users directly, the wireless companies relied on the assurances of the data aggregators that they would get user consent as needed. Many failed to do so. The regulator further claims that even after the wireless carriers were made aware of those failures, they sometimes went up to a year before ending unauthorized access to the data, with the carriers often arguing that the data was used for critical services such as roadside assistance and emergency response. Although the agency wrapped up its investigation in 2020, the formal order was delayed for three years as the FCC operated with only two Democratic and two Republican commissioners. All the carriers have said that they disagree with the decision and will challenge the fines.
Context – From an institutional perspective, there was never a doubt that the FCC would want to wrap up this matter and issue the fines. The agency considers itself a privacy regulator for the businesses it regulates and there is long-running tension between the FCC and the FTC, with the latter thought to be aiming to be the sole federal privacy enforcer. The FTC is aggressively pushing to cut back on the collection and commercial use of detailed physical location data gathered by smartphones, including reaching settlement agreements with two data aggregation companies accused of deceptive and unfair practices, and suing Kochava, a major geolocation data broker, for unfair trade practices. So, while the institutional impetus behind the FCC action seems clear, the roots of the multi-year partisan deadlock are less clear given that the investigation was a product of the Trump Administration FCC.
Federal Judge Thinks Apple Is Making It Hard to Circumvent Their Fees
Report from the Washington Post
In Brief – Federal Judge Yvonne Gonzalez Rogers, who ruled in 2021 that Epic Games did not prove that Apple violated antitrust law, but ordered the iPhone giant to stop prohibiting app developers from telling their customers that there are other ways to buy things through their apps, is now hearing arguments over Epic’s claim that Apple’s new payments policy does not comply with the spirit of her payments order because their fees remain too high. The judge used 185 pages to detail why Apple was not an illegal monopoly, but her order requiring payments options was one page with 50 operative words and no mention of fees. Read it here. Epic appealed her overall antitrust decision and Apple challenged her authority to impose a permanent national injunction based on one case and a California state law. The Ninth Circuit Court of Appeals backed the initial rulings and the Supreme Court rejected both sides’ appeals. Apple then rolled out new rules for in-app payments that reduced their fees by just 3 percentage points when app developers process payments themselves and required informing users that there could be security risks with payments done outside the iPhone ecosystem. Throughout the trial, and now in the payments order compliance hearing, Judge Rogers has been openly skeptical that Apple deserves their sometimes-30% commission rate. In the hearing, when Rogers said the Apple fees, which the company says average 18%, seemed to be a “windfall”, the company witness said, “We are running a business.”
Context – In the hearing, Judge Rogers said she wants more from Apple on why only 38 developers have set up new payment alternatives. As we’ve been saying, it’s quite simple: most app developers never wanted payment options, they wanted much lower Apple fees. Whether Judge Rogers aims to be Apple’s US regulator remains to be seen. Meanwhile, the same fight is underway in the EU, where the European Commission is now Apple’s regulator under the Digital Markets Act. They are reviewing a similar Apple payments plan with many app developers arguing that the iPhone giant’s fees are too high.
Judge May Entertain Broader Copyright Battle Over AI Image Generators
Report from Reuters
In Brief – US District Court Judge William Orrick appears inclined to allow a broader range of copyright violation claims than expected against four Generative AI (GAI) image creation services. The lawsuit by visual artists targets Stability AI, a company that created the Stable Diffusion GAI service, and three other companies that use the Stable Diffusion model in their own image-generating services. Last October, Judge Orrick appeared to have narrowed the case down to the key question of whether training AI systems with copyrighted material violates the federal Copyright Act. But in preparation for a final hearing on motions to dismiss the plaintiffs’ amended complaint, the judge’s “tentative ruling” indicates he may allow more claims to proceed, writing that, “Plaintiffs have plausibly alleged facts to suggest compressed copies, or effective compressed copies albeit stored as mathematical information, of their works are contained in the versions of Stable Diffusion identified,” and that “facts regarding how the diffusion models operate, or are operated by the defendants, should be tested at summary judgment against various direct and induced infringement theories and precedent under the Copyright Act.”
Context – Massive volumes of copyrighted material have been used to train the neural networks that power GAI models. However, as the prevalence of “hallucinations” shows, GAI models are not like traditional databases. They don’t store and retrieve copies. They learn from data and then produce new output. Often the output is what’s expected; sometimes it’s totally not. It had seemed that Orrick and other judges were moving the proliferating AI-related copyright cases toward a showdown over AI training and the fair use exception. But now Orrick appears to be raising more technical questions, such as what he means by “effective compressed copies albeit stored as mathematical information” and “how the diffusion models operate”. It will be especially interesting to see a court react to the fact that the operators of GAI systems don’t know exactly how their models operate, which is a reason unwanted hallucinations exist at all.
Japanese Lawmaker Group Proposes Stepping Up AI Regulation
Report from Nikkei Asia
In Brief – A group of lawmakers from Japan’s governing Liberal Democratic Party is calling for increased regulatory mandates to be imposed on the developers of the most powerful AI foundational models, moving beyond principles-based codes of conduct, and instead legislating a more concrete regulatory regime. Proposed rules include requirements to disclose risk information to the government, investment in cybersecurity, third-party access to models for security tests and reporting of vulnerabilities, deploying AI watermarks, internal and external red-teaming, disclosing AI capability limits, and research on social risks. Modeling their proposals on Japan’s online platform business transparency law, the plan envisions that covered companies would submit compliance reports to the government, or the AI Safety Institute, with noncompliance resulting in administrative fines or even criminal penalties. The proposal is expected to be discussed at the May meeting of Prime Minister Kishida’s AI Strategy Council.
Context – The big question in AI public policy is whether governments are moving toward legislation and direct regulation, or, as two big Japanese companies recently described it, governance through “soft laws”. The EU’s AI Act is the standard for concrete regulation. Soft laws, such as voluntary “best practices” and “safe AI” recommendations, have been in the ascendancy. Japan has been a leader of the soft law model, including through the G-7’s “Hiroshima AI Process” that created International Guiding Principles and an AI Code of Conduct for developers. None of the initiatives directed national governments to enact concrete regulations, and Japan, with its longstanding demographic challenges, has been expected to look to AI and robotics innovations to help address problems in coming years. President Biden’s massive executive order on artificial intelligence also includes many soft law efforts, but also proposes some unprecedented information sharing and “safety” testing mandates for companies developing the largest foundation models. Legal challenges are likely if the US Government attempts to enforce those policies on an unwilling AI developer.
Microsoft Updates Restrictions on Facial Recognition for Law Enforcement
Report from TechCrunch
In Brief – Microsoft has reiterated that its top commercial AI service cannot be used for facial recognition by police departments in the US, as well as globally prohibiting law enforcement from using the technology for “real time” facial recognition, such as through officer body cameras or police cruiser dashcams. The policy is part of an update to the Code of Conduct for the company’s Azure OpenAI Service that provides a suite of OpenAI tech to enterprise customers. The rules apply to police departments directly and to third-party contractors that serve police departments. Microsoft, and its close partner OpenAI, have both been attempting to grow their AI business with law enforcement and national security agencies, including Microsoft adding the OpenAI-related services to its Azure Government cloud services product with added compliance and management features geared to government agencies.
Context – Back before OpenAI released ChatGPT to the public, facial recognition created the most AI policy angst, especially with progressives and civil libertarians. In 2020, amidst the heightened racial justice concerns, corporate giants including Microsoft, IBM, and Amazon all stepped back from plans to provide facial recognition services to law enforcement, although small innovators like highly controversial Clearview AI were the market leaders anyhow. And law enforcement, especially in the US, along with several federal government agencies, continue to work with them. The other key point is that the Microsoft Azure AI terms of service, and Microsoft’s broader Safe AI policies, tightly align with President Biden’s massive executive order on AI. Most of the EO aims to influence AI through a combination of “voluntary” principles reinforced by government policies and contracting rules. All the US-based AI giants are on board and integrated with the effort and will remain so as long as federal officials don’t force tech transfers or revelations that they are not comfortable with. The fact that LLM-based generative AI produces hallucinations, which is deeply problematic for many national security uses, is a different challenge.
TikTok and ByteDance Sue to Block US Federal TikTok Ban
Report from the Wall Street Journal
In Brief – As expected, TikTok and its China-based parent company ByteDance have filed a lawsuit in US federal court to block enforcement of the federal legislation forcing ByteDance to sell the app within a year or face being blocked from US app stores and hosting services providers. TikTok’s corporate leadership, which has consistently argued that the app is not based in China and does not allow Chinese authorities to access US user data or influence its algorithms, says that the service will not be sold, and that the legislation is a de facto ban that violates the First Amendment rights of the company and its users. When President Trump attempted to force the sale of TikTok’s US operations in 2020 based on similar concerns, suits were filed by TikTok and a group of TikTok users to block his Executive Order. In both cases, federal judges granted temporary injunctions blocking implementation, citing First Amendment concerns as well as questioning the legal authority cited by the Trump Administration. Backers of the recently passed legislation hope that the national security-focused law with bipartisan backing strengthens the federal government’s legal case this time around.
Context – The US Government attempting to ban TikTok, whether through the Committee on Foreign Investment in the United States (CFIUS), or federal legislation, was always setting up for a historic constitutional battle in federal court. The showdown was diverted in 2021 when President Biden abandoned the Trump executive order and the related court fights. However, the Biden Administration never dropped the CFIUS review, never accepted TikTok’s efforts to resolve the Chinese-influence concerns, and strongly backed the eventual legislation, including giving classified briefings to Congress on the risks of China using TikTok to malignly influence US society. However, the First Amendment has protected even adversarial speech, including anti-American propaganda, and a federal judge also blocked the Trump effort to ban the Chinese WeChat app, which is definitely part of the Chinese online control regime. But TikTok does have an unprecedented reach for such platforms.
Privacy Group Files GDPR Complaint Against ChatBot Hallucinations
Report from TechCrunch
In Brief – High-profile Austrian privacy advocacy group noyb has filed a complaint with the Austrian data protection authority alleging that OpenAI’s ChatGPT operates in violation of the EU General Data Protection Regulation (GDPR). The complaint focuses on the chatbot’s propensity to occasionally “hallucinate”, meaning produce responses to user queries that include inaccurate, made-up information. noyb argues that inaccuracies about people violate the GDPR, which gives individuals the right to have the data stored about them be accurate, and to require data holders to correct incorrect information. In the example cited by noyb, ChatGPT produced a response that gave the incorrect birthday for an unnamed “public figure”, and when the individual asked OpenAI to correct the error, the company claimed that it could not guarantee that it could change the output of its system to be accurate due to the nature of its AI model. The company offers individuals the option of having their personal information removed from the whole system. A representative of noyb said, “when it comes to false information about individuals, there can be serious consequences” and that if companies are unable to make chatbots comply with EU law, which includes correcting errors in their systems, then they should not be usable in Europe.
Context – Realistic-sounding fabrications are the most interesting and informative aspect of AI chatbots, especially considering the issue of AI and possible copyright infringement. Hallucinations show how neural networks are not traditional databases. They learn from data rather than store it. They then produce new answers each time, often accurate, but sometimes completely out of left field. Answers to similar questions can swing from accurate to made up. And the system creators don’t know when or exactly why. This report by Microsoft AI experts sheds light on why hallucinations are basically a design feature rather than mistakes. A key takeaway at this point is that if accuracy matters, results from generative AI systems must be double checked, as industry after industry, including the legal, medical, and defense fields, are realizing.
Meta Faces EU Investigation of Election Disinformation Practices
Report from the New York Times
In Brief – The European Commission is opening a wide-ranging investigation of Meta for not adequately combating the spread of disinformation in the run up to June’s parliamentary elections. The Commission has the authority to regulate Meta’s Facebook and Instagram platforms under the Digital Services Act (DSA), which directs platforms to address objectionable content. The regulator’s announcement says that it suspects that the company is not adequately addressing deceptive advertisements and disinformation campaigns, that its policy to demote political content in its recommender systems violates the DSA, that its decision to close its “CrowdTangle” public insights tool does not include an adequate replacement, and that its processes for users to flag illegal content are not user-friendly. Meta defended its policies saying, “We have a well-established process for identifying and mitigating risks on our platforms,” and “look forward to continuing our cooperation with the European Commission”.
Context – The gulf between digital platform governance in Europe and the United States grows. The European Commission appears fully committed to challenging the platforms on nuts-and-bolts operations. The regulators are concerned that Meta is not spending enough on moderation, that its systems for users to flag content are not simple enough, that its decision to replace CrowdTangle with a different tool is mistaken, and that its policies to reduce the overall prevalence of politics and news on their sites, which is a macro business priority, may not be permissible. Progressive content creators recently complained about that Meta policy. Similar DSA investigations of X, TikTok, and AliExpress, and DMA investigations of Apple and Google, are also underway. Then there is the impossible-to-avoid divergence between the EU DSA and the US ecosystem dominated by the First Amendment and Sec. 230. Of course, backers claim that the DSA is not a censorship regime at all. As if what constitutes election disinformation is clear to everyone, let alone how a platform should define, rank, and moderate political content.
Judge Wraps DoJ v Google Search Antitrust Trial with Tough Final Questions
Report from the New York Times
In Brief – US District Judge Amit Mehta has wrapped up the courtroom proceedings for the massive antitrust trial pitting the US Department of Justice (DoJ) against Google over their search business. The DoJ case has highlighted Google’s multi-billion-dollar business arrangement with Apple to be the default iPhone search engine as a prime example of anti-competitive tactics used to protect Google search from competition. Google counters that their agreements are legal, akin to standard retail shelf slotting deals, and are neither exclusive nor mandatory, as well as arguing that their search service is the best in the world, consistently innovates, and faces meaningful competition from “vertical search” competitors like Amazon and AI-enabled innovators. Judge Mehta, who will deliver a verdict in the bench trial sometime in the next few months, asked tough and skeptical questions of lawyers on both sides.
Context – Google is in the middle of a run of major antitrust challenges in US federal courts. They lost the last one, a standoff with Epic Games in which a California jury ruled that Google violated federal antitrust law with their Android rules and practices. Apple won a similar case. It struck many as odd that the “closed” Apple ecosystem, which generally holds larger market shares than Google’s, was on the right side of federal antitrust law while the more open Android violated the law. One big difference was that Epic v Google was a jury trial, while Epic v Apple was a bench trial. Plaintiffs in all the Big Tech suits, including the DoJ in its other major Google antitrust suit targeting its AdTech business, are now asking for jury trials. But we are back to a bench trial here. The two Epic antitrust trials not only highlighted the potential impact of judges v juries on Big Tech antitrust, but also the potential legal creativity of judges in bench trials. While Judge Yvonne Gonzalez Rogers ruled for Apple on the federal charges, she was consistently critical of the company’s anti-steering rules and used a broad California business law to order the company to change their app payments policies, which might force a major rewrite of their iPhone business.
A German Email Service Claims Google Search Falloff is a DMA Violation
Report from Reuters
In Brief – Tuta Mail, a German encrypted email service, has complained to the European Commission that Google has violated the EU’s new Digital Markets Act (DMA) by changing how its search service treats the email company’s website, leading to a 90% fall in traffic to its site. The email company, considered a top provider of highly secure email, submitted a formal complaint to the Commission’s DMA task force claiming that it was previously well-positioned for many keywords related to encrypted email, but now ranks highly only in Google searches mentioning its brand name. Tuta’s CEO expressed frustration over the lack of clarity from Google regarding the change to its search algorithms, while Google denied allegations of preferential treatment towards its own products, asserting that search updates are intended to provide users with relevant and useful information and not to benefit any website.
Context – Putting European Commission regulators into the middle of complaints over how Google changes search algorithms promises endless business for the company’s new regulators. It is a morass that has plagued the Internet ecosystem for more than 20 years. Nearly every web-based business has generated a large share of its new business traffic through search, especially Google search, and intermittent changes to the Google algorithms have always posed a risk of major hits to traffic. Google has always been relatively non-transparent about the specific reasons traffic fell, claiming opacity is necessary to avoid “gaming” of search algorithms. Search complaints were a big part of the “Google Shopping Case”, with the Commission accusing Google of unfairly downgrading vertical search providers. That case stretched on for so many years that it was one of the drivers of the DMA being enacted to regulate the giant platforms. Google search is a zero-sum game for websites because only a few can fill prime positions. Losers often feel aggrieved. Google said that this would happen with its recent DMA compliance changes as well. The EU referees are certain to hear endless advice on how Google should do search in a fair way from here on out.
The FCC Finally Enacts Net Neutrality Rules After Three Year Delay
Report from the New York Times
In Brief – As fully expected, the Federal Communications Commission (FCC) voted to reinstate Net Neutrality (NN) rules that largely parallel the ones enacted by the FCC in 2015 during the Obama Administration and rescinded in 2017 by the FCC during the Trump Administration. They were enacted on a 3-2 party-line vote with Democrats in favor and Republicans opposed. The rules classify broadband internet access as a public utility, giving the FCC the authority to demand that broadband providers report and respond to outages, as well as expanding the agency’s oversight of the providers’ security issues. The multi-decade fight over net neutrality was initially focused on the prospect that phone and cable providers would block or slow down websites that competed with their core businesses. More recently, a top concern has been internet access service providers creating so-called internet fast lanes where web sites would need to pay for their data to be prioritized to entice consumers to use their services. The Democratic commissioners argued that internet access is a basic and fundamental service that should be regulated alongside other utilities, while the Republicans argued that the lack of problems in the years since the initial rules were repealed shows that the NN claims were unwarranted.
Context – The net neutrality debate is like internet policy comfort food. People know what to expect. People know their role. The politics are clear. There is money to be raised and earned. The next certainty is the legal challenges. When the Obama FCC enacted Net Neutrality rules in 2015, the telecommunications industry’s challenges were rejected in federal court. When the Trump FCC overturned the Obama-era rules, the legal challenges of the progressive NN advocates were likewise rejected. If there is an interesting new angle on the old debate, it may come in the form of 5G services and whether “network slicing” creates digital services that are not the internet, or is instead an exception that could swallow the rule, so to speak. The FCC did not resolve that question.
Fast-Fashion Shein Added to VLOP List for EU Digital Services Act
Report from AP News
In Brief – Shein, a rapidly growing Singapore-based online shopping platform, has been formally designated a Very Large Online Platform (VLOP) by the European Commission and will face the highest level of scrutiny under the Digital Services Act (DSA). Shein has quickly grown from a low-price, slow-delivery, “fast-fashion” retailer into a hybrid retailer-marketplace platform for mostly China-based sellers. The DSA regulates how digital platforms deal with illegal and objectionable third-party content. Platforms with more than 45 million monthly active EU users are VLOPs. They face a more stringent regulatory regime, including oversight directly by the European Commission, stricter criteria for dealing with objectionable material, and the submission of regular risk assessments. The European Commission initially designated 19 digital platforms, including two search engines, as VLOPs, and added three popular online pornography sites to the roster in December. In its announcement of Shein’s VLOP status, the Commission noted the DSA’s obligations around illegal, unsafe, and counterfeit products available on the site. The company now has four months to submit a risk assessment report to the regulator that includes its mitigation measures to address problematic content.
Context – Shein is the fourth online marketplace to be designated a VLOP, with AliExpress, Amazon, and Zalando included in the initial list. Although social media platforms were the focus of the DSA, targeting objectionable material like hate speech, child sexual abuse material, disinformation, advocacy for terrorism, and a wide range of self-harm content, third-party commercial platforms were always part of the regime. European rights owners, especially in the so-called luxury goods industries, have always been aggressively skeptical of online marketplaces, charging them with facilitating rampant violations of intellectual property rights. Although the first two VLOPs facing formal DSA investigations were X and TikTok, AliExpress was the third, reinforcing the Commission’s commitment to policing online commerce as well.
FTC Accuses Amazon of Improperly Deleting Executive Messages
Report from the Washington Post
In Brief – The Federal Trade Commission has accused top Amazon executives, including former CEO Jeff Bezos, of improperly using Signal’s automatic message deletion feature to destroy communications potentially relevant to the agency’s antitrust investigation of the ecommerce and logistics giant. The FTC claims that Bezos initiated the use of Signal, which is widely regarded for its end-to-end encryption and security, by top Amazon executives in 2019 after his phone messages were hacked. In June of 2019, the FTC formally notified Amazon that it was being investigated for unfair competition practices and directed the company to preserve documents, including executive communications. The FTC says that it was not notified by Amazon about the company’s use of Signal until March 2022, just days before a Wall Street Journal article exposed the practice, and that Bezos was not informed of the obligation to retain messages until April 2020. The agency also claims that some executives used Signal’s disappearing message feature even after that. An Amazon spokesman said that the agency’s claims are “baseless”, and that the company “thoroughly collected Signal conversations from its employees’ phones” and shared them with the agency.
Context – There’s little doubt that the FTC is hoping to capitalize on the good fortune that Epic Games enjoyed at Google’s expense in their antitrust trial last year, when Google’s policy of deleting intracompany executive messages was roundly criticized by the federal judge overseeing the case. Epic Games brought similar antitrust complaints against Apple and Google, accusing them of anticompetitive conduct in their app store operations. While Apple, with its very restrictive walled-garden practices, prevailed on the federal antitrust claims, Google lost. Many expert observers pointed to the message-deleting brouhaha, as well as the fact that Google faced a jury trial while Apple had a bench trial. The FTC’s massive antitrust complaint against Amazon is currently scheduled for a bench trial in October 2026.
Epic Games Gets Apple Back in Federal Court Looking to Win Lower Fees
Report from Forbes
In Brief – Apple and Epic Games will soon be back in front of Federal District Judge Yvonne Gonzalez Rogers, the same judge who issued a split decision in their epic federal antitrust trial nearly three years ago, to fight over whether Apple’s plan to allow app developers to use alternative payments services violates the judge’s orders. In 2021, Rogers ruled that Epic did not prove that Apple operated an illegal monopoly, but she ordered the iPhone giant to stop prohibiting app developers from telling their customers that there are other ways to buy online goods and services than through their iPhone apps. Although she said that Apple did not violate federal antitrust law, she was openly skeptical that the company could justify its 30% commissions and said that informing consumers of purchasing options would promote competition. Both companies appealed the parts of her ruling they did not like, but it was upheld by the Ninth Circuit Court of Appeals, and the Supreme Court declined to hear both companies’ appeals. Apple eventually rolled out new rules for alternative app payments in the US to address Rogers’ order, and in Europe as part of its plan to comply with the EU’s Digital Markets Act (DMA) regulating so-called digital “gatekeepers”. However, Apple’s new rules still include app developer commission rates approaching 30%, leading companies like Epic to cry foul. The gaming giant petitioned Judge Rogers to find Apple out of compliance with her anti-steering injunction and impose a remedy. The judge has ruled that Epic met its initial evidentiary test and has set a three-day hearing for the two sides in May.
Context – The years-long legal, regulatory, and lobbying fight over “in-app payments” has never been about payments. It is about fees. Epic Games and other app developers want to pay much lower fees to Apple (and Google). A European Commission that appears unsympathetic to Apple is reviewing their DMA plan and fee levels are definitely on the table. Although it once seemed that Judge Rogers was not inclined to become a national regulator of iPhone rules and fees, maybe that has changed too.
Italian Competition Authority Fines Amazon for “Subscribe and Save” Feature
Report from EuroNews
In Brief – Italy’s antitrust authority has fined Amazon 10 million euros for what it considers to be unfair commercial practices related to its ‘Subscribe and Save’ option that allows customers to set up recurring deliveries of frequently purchased items. The regulator objected to Amazon’s practice of sometimes automatically designating recurring purchases, requiring consumers to affirmatively opt-out of the ongoing purchases rather than opt-in to the service. “The pre-ticking of recurring purchase induces one to periodically buy a product – even without the effective need – thus limiting one’s freedom of choice,” the antitrust body said in a statement. The regulator also objected to Amazon’s automatic selection of the fast delivery option, which costs more than standard delivery. While Amazon contested the overall decision and said that it would appeal, the company has committed to the Italian officials that, in the future, it will pre-select only the free delivery option.
Context – Italy’s competition authority has been ahead of the curve regarding Amazon conduct that has proven troubling to a growing number of antitrust regulators. In 2021, the regulator ruled that the ecommerce and logistics giant engaged in anticompetitive conduct by leveraging its dominant position as the largest ecommerce marketplace in Italy to preference its large and growing logistics business, Fulfilment by Amazon (FBA), leading to a 1.3 billion euro fine. The European Commission conducted its own investigation, which Amazon eventually settled in late 2022 by agreeing to a range of logistics and delivery changes, including offering buyers a second, lower-priced delivery option. Similar concerns about how Amazon’s marketplace preferences the products of third-party sellers who purchase its FBA logistics services led to an antitrust settlement with the UK CMA in late 2023 and are a key issue in the US Federal Trade Commission’s antitrust lawsuit targeting Amazon, currently scheduled for trial in fall 2026. On the issue of website design, EU regulator concerns with so-called “dark patterns” led Amazon to simplify its Prime subscription cancellation process in Europe in 2022.