Report from The Washington Post
In Brief – Character.AI and Google have agreed to settle lawsuits over teen suicide and self-harm brought by victims’ families in Florida, Colorado, Texas and New York. Character.AI is a role-playing chatbot platform that lets users create custom characters, often based on celebrities or pop culture figures. The company was founded in 2021 by two former Google engineers; in 2024, Google rehired the co-founders and paid $2.7 billion to license the startup’s technology. The Florida lawsuit, the first to target the companies, was filed by the mother of a 14-year-old who used a Character.AI chatbot modeled on Game of Thrones’ Daenerys Targaryen. The teen reportedly exchanged sexualized messages with the chatbot and eventually spoke of joining “Daenerys” before taking his own life. Federal Judge Anne Conway has since dismissed that suit, a sign of how far the settlement talks have progressed.
Context – Alleged harms to young people from engaging with AI “companions” have emerged as an AI-era echo of the worst of social media. In May, Judge Conway rejected Character.AI and Google’s First Amendment-based arguments for dismissal, waved away precedents involving video games, social media, and other expressive media, and wrote in her order that she was “not prepared to hold that Character AI’s output is speech.” Character.AI later announced that it would bar users under 18 from its chatbots and enforce the rule with age-verification technology. While social media platforms have largely been shielded by Sec. 230 from liability for user-generated content, it is an open question whether Sec. 230 applies to generative AI services. Supreme Court Justice Gorsuch has opined that it probably does not, as have Sec. 230’s congressional authors. But an argument can be made that everything a generative AI chatbot creates is just an algorithmic re-ordering of existing third-party content. California enacted a law earlier this year requiring companion-chatbot developers to implement safeguards so that users are not misled into believing they are interacting with a human.
