
Character.AI Setting an 18-Year-Old Age Limit for Its Companion Chatbots

Nov 1, 2025

Report from New York Times

In Brief – AI “companion” company Character.AI has announced that it will bar users under 18 from its chatbots, a sweeping move to address concerns over teen safety. The move follows mounting scrutiny of how AI chatbot companions can affect mental health. Character.AI users currently self-report their age. Those who have indicated to the company that they are under 18 will immediately face a daily chat limit of two hours, which will gradually shrink to zero by November 25. Users under 18 will still be able to generate AI videos and images through a structured menu of prompts, within certain safety limits. The company also says that it has been developing age verification technology in-house and will combine that age assurance capability with services from third-party providers. Character.AI’s chief executive said, “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” and that the company plans to establish an AI safety lab.

Context – Alleged harms to young users from engaging with AI “companions” have emerged as an AI version of the worst of social media. Social media platforms have largely been shielded by Sec. 230 from liability for user-generated content, so that industry’s critics have resorted to legislation and lawsuits targeting allegedly addictive platform features like auto-play and algorithmic feeds. It’s an open question whether Sec. 230 applies to generative AI services. Supreme Court Justice Gorsuch has opined that it probably does not, as have Sec. 230’s congressional authors. But an argument can be made that everything created by a generative AI chatbot is just an algorithmic re-ordering of existing third-party content. California recently enacted a law requiring companion chatbot developers to implement safeguards so that users are not misled into believing they are interacting with a human, but Governor Gavin Newsom (D) vetoed a separate bill that would have required companion companies to block users under 18 unless they could guarantee that their chatbots would block objectionable content.

