bbieron@platformeconomyinsights.com

Judge Rejects Fair Use Defense of an AI Company (not Generative AI)

Feb 12, 2025

Report from TechSpot

In Brief – Federal Judge Stephanos Bibas has delivered a significant ruling in a copyright case pitting Thomson Reuters against Ross Intelligence, a now-defunct legal services startup that claimed to have developed an AI-enabled legal service. Judge Bibas found that Ross’s system was developed using thousands of Thomson Reuters’ Westlaw case summaries without paying licensing fees, and that copies of those summaries were provided to Ross’s users. Of note, the judge took pains to point out that, “Ross’s AI is not generative AI (AI that writes new content itself). Rather, when a user enters a legal question, Ross spits back relevant judicial opinions that have already been written.” He therefore cautioned that this summary judgment ruling, which rejected Ross’s fair use copyright defense, applies to non-generative AI. Bibas said that Ross’s fair use defense failed on two prongs of the fair use analysis: Ross’s service was a commercial venture that was not truly transformative, and it competed with Westlaw in the market and harmed Westlaw’s value. He said that the Supreme Court’s 2023 ruling in Andy Warhol Foundation v. Goldsmith guided his fair use determination.

Context – The fact that Ross’s service was inarguably not generative AI limits the ruling’s value on the huge questions around the legality of “training” the neural networks of major GAI models with non-licensed copyrighted material. In the EU, with its AI Act, regulators and AI expert groups will play key roles. In the US, copyright lawsuits targeting GAI giants will likely turn on fair use. Federal Judge William Orrick, overseeing cases involving image-generating services trained on digital artworks, recently issued a ruling in which he explained that he is trying to ascertain how GAI systems work. He will learn that they are not databases like Ross’s system. They do not store or retrieve copies. They “learn” from data and then produce new output. It will be interesting to see how courts react when GAI operators admit they don’t know exactly why their systems produce any particular output, hence the nagging existence of our favorite GAI concept, “hallucinations”.


Platform Economy Insights produces a short email four times a week that reviews two top stories with concise analysis. It is the best way to keep on top of the news you should know. Sign up for this free email here.
