Florida AG Investigating ChatGPT After Alleged Use by University Shooter

Apr 24, 2026

Report from the Wall Street Journal

In Brief – Florida Attorney General James Uthmeier has launched an investigation into OpenAI and its ChatGPT models, citing concerns about national security and potential links to criminal activity. Uthmeier released a video on social media saying that artificial intelligence tools should “advance mankind, not destroy it” and alleging that AI systems, including ChatGPT, have been linked to a wide range of harmful uses, including exploitation by national security adversaries such as China, use by criminals such as child-sex predators, and the furthering of mental illness and self-harm, especially among young people. He also referenced claims that the suspected gunman in a shooting at Florida State University used ChatGPT, allegedly asking questions such as when the student union was busiest and how to disengage the safety on his shotgun. OpenAI responded to the AG’s announcement by saying that it will cooperate with the investigation and emphasized the technology’s many positive uses.

Context – When a search engine like Google is used by criminals to find information that they then use to engage in harmful activities, the digital provider has historically been protected from liability in part by Sec. 230 of the Communications Decency Act. In addition, the US Supreme Court has unanimously ruled several times in recent years that digital service providers are not liable for harms caused by users of their general services when those services were not intentionally designed to promote harmful or illegal acts, including a recent decision in Cox v Sony regarding IP piracy and Twitter v Taamneh regarding social media use by terrorists. However, in oral arguments in the terrorism case, which took place months after the rollout of ChatGPT, the application of Sec. 230 to AI chatbots was raised by a highly skeptical Justice Gorsuch. Now, chatbots are often used like a search engine, and an argument can be made that their responses are just another algorithmic re-ordering of existing third-party online content. As civil liability lawsuits by private plaintiffs and State AGs emerge as the primary means of circumventing Sec. 230 for social media and imposing regulation, the same tactic is quickly being applied to AI services.
