Report from the Wall Street Journal
In Brief – Florida Attorney General James Uthmeier has launched an investigation into OpenAI and its ChatGPT models, citing concerns about national security and potential links to criminal activity. Uthmeier released a video on social media saying that artificial intelligence tools should “advance mankind, not destroy it” and alleged that AI systems, including ChatGPT, have been linked to a wide range of harmful uses, including exploitation by national security adversaries such as China, use by criminals such as child-sex predators, and the worsening of mental illness and self-harm, especially among young people. He also referenced claims that ChatGPT was used by the suspected gunman in a shooting at Florida State University, who allegedly asked questions such as when the student union was busiest and how to disengage the safety on his shotgun. OpenAI responded to the AG’s announcement by saying that it will cooperate with the investigation, and it emphasized the technology’s many positive uses.
Context – When criminals use a search engine like Google to find information that they then use to engage in harmful activities, the digital provider has historically been protected from liability in part by Section 230 of the Communications Decency Act. In addition, the US Supreme Court has unanimously ruled several times in recent years that digital services providers are not liable for harms caused by users when their general-purpose services were used and were not intentionally designed to promote harmful or illegal acts, including a recent decision in Cox v. Sony regarding IP piracy and Twitter v. Taamneh regarding social media use by terrorists. However, in oral arguments on the terrorism case, which took place months after the rollout of ChatGPT, the application of Section 230 to AI chatbots was raised by a highly skeptical Justice Gorsuch. Chatbots are now often used like a search engine, and an argument can be made that their responses are just another algorithmic re-ordering of existing third-party online content. As civil liability lawsuits by private plaintiffs and state AGs emerge as the primary means of circumventing Section 230 for social media and imposing regulation, the same tactic is quickly being applied to AI services.
