Report from The Verge
In Brief – California has enacted legislation requiring companion chatbot developers to implement safeguards so that users are not misled into believing they are interacting with a human. The measure requires a clear and conspicuous notification to users and requires companion chatbot operators to report annually to the Office of Suicide Prevention on the safeguards they’ve put in place “to detect, remove, and respond to instances of suicidal ideation by users.” In a statement accompanying the signing, Governor Gavin Newsom (D) said, “We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.” The chatbot bill was signed into law alongside a collection of other digital and AI measures, including mandated warning labels for social media platforms, a device-based age verification mandate that will put Apple and Google at the center of enforcing age limits for mobile apps, new transparency duties for developers of large AI models, and a change to civil liability law that limits the ability of an AI developer or user to defend against a liability claim by arguing that the AI system acted autonomously in harming someone.
Context – Alleged harms to young users from engaging with AI “companions” seem to be an AI version of the worst of social media. Social media platforms are largely shielded from liability for objectionable content by Sec. 230, so critics have resorted to legislation and lawsuits targeting platform design features, such as auto-play, that they allege addict young users. The laws are facing skeptical federal judges, but the lawsuits are having more luck. Whether Sec. 230 applies to content created by generative AI is an open question: Supreme Court Justice Gorsuch has opined that it probably does not, as have Sec. 230’s congressional authors, but a strong argument can be made that everything created by a generative AI chatbot is just an algorithmic re-ordering of existing third-party content. Newsom vetoed a tougher AI companion chatbot bill that would have required companies to block users under 18 unless they could guarantee that their chatbots would block objectionable content.
