Report from Reuters
In Brief – OpenAI has won the dismissal of a defamation lawsuit filed by a conservative radio host in Georgia who alleged that ChatGPT falsely claimed he had embezzled money from a gun-rights organization. Judge Tracie Cason of the Gwinnett County Superior Court ruled that plaintiff Mark Walters had not shown he was defamed, noting that ChatGPT puts users on notice that it can make errors. The case, filed in mid-2023, was an early test of the legal implications of the “hallucinations” produced by AI-enabled chatbots. The fabricated claims about Walters were generated for a fellow radio host researching an upcoming interview; after checking further, that host concluded the claims were likely false and never aired them. OpenAI argued that it should be free of liability because it clearly warns users that the chatbot can produce inaccurate information and that they should verify any outputs. In her order, Judge Cason wrote, “OpenAI’s ‘industry-leading efforts’ to reduce errors of this kind and its extensive warnings to users that errors of this kind could occur negate any possibility that a jury could find OpenAI acted with actual malice here.”
Context – All Generative AI (GAI) systems sometimes produce results that are false and that even their creators cannot explain. GAI models are not traditional databases or search engines: they do not store and retrieve fixed facts. Instead, they compose responses by predicting which fragments of text are most likely to follow a given sequence, based on a statistical model trained on many billions of examples, often pulled from all over the internet (a minimal sketch of this idea appears below). While the Walters defamation case, the most prominent early hallucination lawsuit, appears to be resolved, the underlying problem persists. Robby Starbuck, a prominent conservative activist, recently filed a defamation lawsuit against Meta alleging that its AI tool smeared him by falsely asserting that he participated in the January 6 Capitol riot and was linked to QAnon. A top Meta executive apologized and said the company would address the problem. And if you are a chatbot user, check the work of your AI intern before you share it.
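To make the “statistical model” point concrete, here is a minimal, hypothetical sketch of next-token sampling: a toy model that, given the preceding word, picks a likely continuation from learned frequencies. All the words and counts below are invented for illustration; real systems like ChatGPT use large neural networks over vast vocabularies, but the generate-by-probability loop is the same in spirit.

```python
import random

# Toy "language model": counts of which word follows which word,
# standing in for the billions of learned statistics in a real model.
# All data here is invented purely for illustration.
bigram_counts = {
    "the": {"host": 5, "lawsuit": 3, "chatbot": 2},
    "host": {"said": 6, "claimed": 4},
    "said": {"the": 7, "it": 3},
}

def next_token(context: str) -> str:
    """Sample the next word in proportion to how often it followed
    `context` in the (toy) training data."""
    options = bigram_counts.get(context, {"<end>": 1})
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

def generate(start: str, max_len: int = 6) -> str:
    """Chain next-token picks into a fluent-sounding sequence.
    Nothing here checks whether the result is *true*; that gap is
    why plausible but false output (a "hallucination") can emerge."""
    words = [start]
    for _ in range(max_len):
        token = next_token(words[-1])
        if token == "<end>":
            break
        words.append(token)
    return " ".join(words)

print(generate("the"))  # e.g. "the host said the lawsuit"
```

The key point of the sketch: fluency comes from frequency, not fact-checking, so a confident falsehood is always a possible output.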
