
Google Settles Character.AI Liability Suits Tied to Teen Suicides

Jan 9, 2026

Report from Washington Post

In Brief – Character.AI and Google have agreed to settle lawsuits over teen suicide and self-harm brought by victims’ families in Florida, Colorado, Texas and New York. Character.AI is a role-playing chatbot platform that allows users to create custom characters, often based on celebrities or pop culture figures. The company was founded in 2021 by two former Google engineers, and in 2024 Google rehired the co-founders and paid $2.7 billion to license the startup’s technology. The Florida lawsuit, the first targeting the companies, was filed by the mother of a 14-year-old who used a Character.AI chatbot modeled on Game of Thrones’ Daenerys Targaryen. The teen reportedly exchanged sexualized messages with the chatbot and eventually talked about joining “Daenerys” in a deeper way before taking his own life. That suit has now been dismissed by federal Judge Anne Conway, an indication of how far the settlement talks have progressed.

Context – Alleged harms to young people from engaging with AI “companions” have emerged as an AI version of the worst of social media. In May, Judge Conway rejected Character.AI and Google’s First Amendment-based arguments for dismissal, waved away precedents related to videogames, social media, and other expressive mediums, and said in her order that she was “not prepared to hold that Character AI’s output is speech”. Character.AI later announced that it would bar users under 18 from its chatbots and enforce the rule with age verification technology. While social media platforms have largely been shielded by Sec. 230 from liability for user-generated content, it’s an open question whether Sec. 230 applies to generative AI services. Supreme Court Justice Gorsuch has opined that it probably does not, as have Sec. 230’s congressional authors. But an argument can be made that everything created by a generative AI chatbot is just an algorithmic re-ordering of existing third-party content. California enacted a law earlier this year requiring companion chatbot developers to implement safeguards so that users are not misled into believing that they are interacting with a human.

