Report from the Wall Street Journal
In Brief – Robby Starbuck, a conservative activist known for successfully pressuring companies to change their DEI practices, has filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and was linked to QAnon. Starbuck says he discovered the problem last August when the allegations were posted on X. He immediately denied them and contacted Meta, but he says the claims continued to show up in its AI system's responses months later. Starbuck's lawsuit, filed in Delaware Superior Court, seeks more than $5 million in damages. Joel Kaplan, Meta's chief global affairs officer, took to X to apologize to Starbuck, acknowledging that the company's fix "didn't address the underlying problem" and saying he was working to "explore potential solutions."
Context – AI "hallucinations", the reality that all generative AI systems sometimes produce false results that their creators cannot explain, provide valuable context for many AI public policy issues. They illustrate that GAI tools are not like traditional databases or search engines. These tools do not store and return fixed data; they compile responses by determining which fragments of text are most likely to follow other sequences, based on a statistical model that has ingested and processed many billions of examples, often pulled from all over the internet. Starbuck's defamation case joins a line of similar suits, with a noteworthy lawsuit in Georgia state court involving OpenAI the farthest along. The fact that all of these AI systems create fictions and present them as real should be offered in AI-related copyright lawsuits as proof that AI systems create new works rather than copy and paste. The issue will also likely surface when courts decide whether Sec. 230 applies to AI outputs, because developers can never be certain of what their systems will produce. And hallucinations are occurring at the same time that some entities are trying to influence AI outputs by seeding training data with falsehoods. "Fixing" troubling outputs often ends with an AI developer installing "guardrails," meaning it algorithmically overrides the system in some cases, which has prompted accusations that the companies are injecting ideological bias.
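A minimal sketch can make the "statistical model, not database" point concrete. The snippet below is a toy illustration in Python, not how Meta AI or any real system is built: it assumes a hypothetical probability distribution over candidate next words and shows that an answer is sampled from likelihoods rather than looked up, which is why a plausible-sounding but false continuation can come out.

```python
# Toy illustration only: a language model assigns each candidate next token
# a probability and samples from that distribution; it does not retrieve a
# stored fact about the person being asked about.
import random

def sample_next_token(probabilities: dict[str, float]) -> str:
    """Pick the next token according to the model's probability estimates."""
    tokens = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical, invented numbers for a prompt such as "He was present at ...".
# Even a low-probability continuation can be chosen, producing a fluent
# sentence with no factual basis.
next_token_probs = {
    "a": 0.45,
    "the": 0.40,
    "January": 0.15,   # statistically "plausible" but factually wrong path
}

print(sample_next_token(next_token_probs))
```

Because every output is assembled this way, token by token, there is no single stored record a developer can simply correct, which is why "fixes" tend to take the form of guardrails layered on top of the model.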
