Report from The New York Times
In Brief – AI “companion” company Character.AI has announced that it will bar users under 18 from its chatbots, a sweeping move to address concerns over teen safety. The move follows mounting scrutiny of how AI chatbot companions can affect mental health. Character.AI users currently self-report their age. Those who have told the company they are under 18 will immediately face a daily chat limit of two hours, which will gradually shrink to zero by November 25. Users under 18 will still be able to generate AI videos and images through a structured menu of prompts, within certain safety limits. The company also says it has been developing age verification technology in-house and plans to combine that age-assurance capability with services from third-party providers. Character.AI’s chief executive said, “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” and added that the company plans to establish an AI safety lab.
Context – Alleged harms to young users who engage with AI “companions” have emerged as an AI-era version of social media’s worst problems. Social media platforms have largely been shielded by Sec. 230 from liability for user-generated content, so that industry’s critics have resorted to legislation and lawsuits targeting allegedly addictive platform features like auto-play and algorithmic feeds. Whether Sec. 230 applies to generative AI services is an open question. Supreme Court Justice Neil Gorsuch has opined that it probably does not, as have Sec. 230’s congressional authors. But an argument can be made that everything a generative AI chatbot creates is just an algorithmic re-ordering of existing third-party content. California recently enacted a law requiring companion-chatbot developers to implement safeguards so that users are not misled into believing they are interacting with a human, but Governor Gavin Newsom (D) vetoed a separate bill that would have required companion-chatbot companies to bar users under 18 unless they could guarantee that their chatbots would keep objectionable content away from those users.
