News & Insights
Google and Epic Games Settle Federal Antitrust Battle Over the Play Store
Report from Bloomberg
In Brief – Google and Epic Games have agreed to settle their long-running antitrust battle over the rules and fees of the Android Play Store, with Google agreeing to implement changes both sides say will lower fees and expand choices for developers and consumers. The two sides are asking District Judge James Donato to approve the settlement to resolve Epic’s 2020 antitrust lawsuit, which accused Google of monopolizing Android app downloads and in-app payments. A jury fully sided with Epic in 2023, and Judge Donato imposed a sweeping nationwide injunction ordering major changes to the Play Store that Google appealed. Google and Epic have now agreed on a modified set of rules that allow users to more easily download and install third-party app stores that meet security and safety standards, while developers will be allowed to direct users to alternative payment methods both within apps and via external web links. Google said it would implement a capped service fee of either 9% or 20% on transactions in Play-distributed apps that use alternative payment options. Of note, where Donato’s injunction applied only in the US and only for three years, the settlement’s fee levels apply globally and for six years. Epic Games CEO Tim Sweeney, who has been personally engaged in the company’s legal and regulatory campaign to change the Apple and Google mobile ecosystems and lower both companies’ fees, called the agreement “awesome”.
Context – Epic’s 2020 antitrust lawsuits against Apple and Google opened a global lobbying and litigation campaign to drive down their fees. Apple, despite earning more money from developers and running a closed system, largely prevailed, while Google, with a more open ecosystem, lost. However, Apple did lose on a single claim under California’s Unfair Competition Law and was eventually hit with an injunction at least as restrictive as the one Google faced after losing across the board. Apple is still appealing. Globally, both companies face similar demands in Europe, Japan, South Korea and India, with each new ruling feeding further demands elsewhere.
UK Court Largely Rejects Getty Images Copyright Claims Against Stability AI
Report from Reuters
In Brief – A London High Court ruling has handed a victory to artificial intelligence firm Stability AI in a closely watched case testing whether copyrighted material can legally be used to train AI models without the authorization of the copyright holder. US-based Getty Images sued Stability, which is based in London, alleging the company had infringed its copyrights and trademarks by scraping and using millions of its photos to train the Stable Diffusion image-generation model. Getty argued that the model’s creation itself amounted to infringement. Justice Joanna Smith ruled that because Stability’s AI model does not store or reproduce copyrighted works, it is not itself an “infringing copy.” The decision did not rule specifically on whether training was a copyright violation because it could not be determined that Stability trained its model in the UK. The court did uphold some of Getty’s trademark claims because some images generated by Stable Diffusion included Getty watermarks. Legal observers described the decision as a setback for rights holders, with Stability AI’s general counsel saying that the decision “ultimately resolves the copyright concerns that were the core issue”.
Context – A series of lawsuits in US courts, including some involving Stability AI, will likely have the greatest impact in determining if training generative AI models, both for chatbots and for image generation, violates copyright law. A pair of dueling decisions from US District judges in June addressed the key “fair use” question in chatbot copyright cases, one defending training as fair use while the other rejected the fair use defense. In the EU, regulators and expert groups implementing the AI Act are playing central roles, with the copyright section of the General-Purpose AI Code of Practice proving contentious. In the UK and Australia, proposals to make national copyright law more conducive to AI training have drawn considerable pushback from creative industries, while in Japan, whose copyright exception for AI training is considered very pro-industry, creative industries are pushing the government to tighten the exception.
Character.AI Setting an 18-Year-Old Age Limit for Its Companion Chatbots
Report from New York Times
In Brief – AI “companion” company Character.AI has announced that it will bar users under 18 from its chatbots in a sweeping move to address concerns over teen safety. The move follows mounting scrutiny over how AI chatbot companions can affect mental health. Character.AI users currently self-report their age. Those who have indicated to the company that they are under 18 will immediately face a daily chat limit of two hours that will gradually shrink to zero on November 25. Users under 18 will still be able to generate AI videos and images through a structured menu of prompts, within certain safety limits. The company also says it has been developing age verification technology in-house, which it will combine with services from third-party age assurance providers. Character.AI’s chief executive said, “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” and said the company planned to establish an AI safety lab.
Context – Alleged harms to young users from engaging with AI “companions” have emerged as an AI version of the worst of social media. Social media platforms have largely been shielded by Sec. 230 from liability for user-generated content, so that industry’s critics have resorted to legislation and lawsuits targeting allegedly addictive platform features like auto-play and algorithmic feeds. It’s an open question whether Sec. 230 applies to generative AI services. Supreme Court Justice Gorsuch has opined that it probably does not, as have Sec. 230’s congressional authors. But an argument can be made that everything created by a generative AI chatbot is just an algorithmic re-ordering of existing third-party content. California recently enacted a law requiring companion chatbot developers to implement safeguards so that users are not misled into believing they are interacting with a human, but Governor Gavin Newsom (D) vetoed a separate bill that would have required companion companies to block users under 18 if they could not guarantee that their chatbots would block objectionable content.
Austrian Privacy Advocates File Criminal Complaint Against Clearview AI
Report from Courthouse News
In Brief – Austrian data privacy advocacy group Noyb (None of Your Business) announced that it filed a criminal complaint with Austrian authorities aiming to have executives of US-based facial recognition firm Clearview AI held personally liable for amassing an allegedly illegal database of billions of photos of faces. Clearview AI, which describes itself as a search engine for faces posted online, developed its facial recognition technology using images gathered from social media websites. The service is now sold almost exclusively to law enforcement and government agencies. After the company burst onto the scene in 2020, privacy regulators in many countries, including several in Europe, fined the firm for violating national privacy and data protection regulations, including the EU’s General Data Protection Regulation. Noyb is calling on Austrian prosecutors to hold Clearview AI’s corporate leadership personally liable and even send them to jail.
Context – Until OpenAI made ChatGPT available, Clearview AI was the most notorious AI start-up. Although a small enterprise, it built facial recognition technology that outperformed giants like IBM, Microsoft, and Google. Under pressure from regulators, the company drastically pared back its business aspirations, agreeing to limit sales to government authorities, primarily in the US. Fights over the extraterritorial jurisdiction of national laws regulating digital platforms are certain to grow as regulators attempt to enforce rapidly proliferating online regulations on small, entirely remote digital firms. For example, 4chan has filed suit in US federal court to block enforcement of the UK Online Safety Act on its business, arguing that it has no operations in the country. The UK’s Upper Tribunal recently rejected that argument from Clearview AI and sided with the UK Information Commissioner’s Office that the company was subject to the British GDPR. In the EU, the GDPR is primarily a civil law, but it does allow Member State privacy authorities to impose criminal penalties for certain violations of the regulation. If a US company executive were arrested for violating the law, it would accelerate the extraterritorial jurisdiction controversy.
Apple in European Court Because the DMA Outlaws Apple’s Business Model
Report from Bloomberg
In Brief – In a hearing before the EU General Court in Luxembourg, Apple launched a sweeping legal attack against the EU’s Digital Markets Act (DMA), arguing that the bloc’s landmark Big Tech regulation imposes “hugely onerous and intrusive” burdens on the company and conflicts with fundamental property, privacy, and security protections under EU law. The landmark 2022 law seeks to curb the dominance of digital “gatekeepers” by imposing a collection of regulatory mandates, including opening their ecosystems to rivals’ products and services. Apple contends that these interoperability mandates, such as forcing its iPhone to work with competitors’ hardware and software, pose serious risks to user privacy, device security, and intellectual property. The company also disputes regulators’ classification of its App Store as a separate service under the DMA, and it has already been hit with a €500 million fine for app store policies and fees. The EU Commission defended the law to the judges, accusing Apple of maintaining “absolute control” over the iPhone ecosystem to generate “supernormal profits” while hindering competition. It argued that Apple’s restrictive practices effectively “lock in” more than a third of European smartphone users, reinforcing the need for the EU’s tough gatekeeper rules.
Context – We’ve long said that the DMA challenges Apple’s core user proposition more than the other tech giants’. The others generally use technical and non-transparent practices to shape their platforms. Apple’s business model has been to create openly structured and controlled “walled gardens” with customer value propositions based on privacy, security, and seamless premium user experiences. And its devices have consistently proved very popular with EU users. The DMA basically told the company that its highly successful business model was no longer legal in Europe. Meta, itself a major critic of Apple’s walled gardens, feels equally hard done by, as the Commission has used the DMA to force the company, with a business built on targeted ads, to offer free versions of its platforms that include only less-targeted, and therefore less-valuable, ads.
EU Commission Finds Meta and TikTok in Violation of the DSA
Report from Bloomberg
In Brief – The European Commission has issued a preliminary decision finding Meta’s Facebook and Instagram, as well as TikTok, in breach of the Digital Services Act (DSA) for not allowing independent researchers to easily access data regarding content moderation on their platforms. The regulator also found the Meta platforms in violation of requirements to give users the ability to flag illegal content and challenge content moderation decisions, alleging that they do not provide user-friendly and easily accessible “Notice and Action” mechanisms for flagging illegal content and used confusing “dark patterns” to dissuade users from filing challenges. The companies now have an opportunity to study and challenge the Commission’s accusations and make commitments to address its findings. Both firms issued statements objecting to the findings and defending their DSA compliance efforts, with TikTok claiming that there are conflicts between DSA data sharing requirements and GDPR data protection mandates.
Context – The DSA regulates how digital platforms address a wide variety of objectionable content, including hate speech, disinformation, and threats to children, along with illegal, dangerous and fraudulent products and services. The largest platforms, designated Very Large Online Platforms (VLOPs), face the strictest duties and are regulated by the Commission. VLOPs facing investigations include social media platforms X, Facebook, Instagram, and TikTok, marketplaces AliExpress and Temu, four adult platforms accused of not adequately protecting minors, and Apple, Google, Booking and Microsoft in a new investigation of efforts to block frauds and scams. Speech moderation on social media, which often involves politically contentious and sensitive topics, remains a top concern of the Trump Administration and other conservatives who allege censorship by government establishments and progressive platforms. Even with Elon Musk out of favor with President Trump, the Commission’s decisions in its investigation of X are likely to eventually be a major US-EU friction point.
About PEI
Platform Economy Insights aims to provide expert analysis of the public policy trends impacting the digital platform industry globally to small-to-mid-sized digital platform business leaders; investors and firms that support industry growth; and the public officials, staff and media who track the platform economy.
Executive Editor Brian Bieron and Senior Advisor Tod Cohen are recognized Internet, trade and platform policy leaders who have served as top global public policy experts to some of the Internet industry's leading platform businesses. They are now providing insights, analysis and reporting to wider audiences through a public policy platform that challenges the reach of all but the largest Internet industry public affairs teams.
Platform Economy Insights