OpenAI Faces Defamation Lawsuit Over ChatGPT's False Legal Accusations

OpenAI, the creator of the popular AI language model ChatGPT, is embroiled in a defamation lawsuit after the AI generated false legal accusations against Georgia radio host Mark Walters. The case could serve as a test of the legal liability of companies behind AI systems when those systems produce incorrect or defamatory information.

The lawsuit, filed on June 5th in Georgia's Superior Court of Gwinnett County, holds OpenAI responsible for the fabricated information. The incident involves a journalist, Fred Riehl, who asked ChatGPT for details of a federal court case. Instead of providing accurate information, the system produced a false summary accusing Walters of embezzling funds from a gun rights non-profit organization. Although Riehl never published the fabricated material, Walters is seeking unspecified monetary damages from OpenAI.

ChatGPT has been widely criticized for generating false information, and this defamation case further highlights the issue. AI language models like ChatGPT struggle to distinguish fact from fiction, often inventing dates, facts, and figures when asked for information. While many fabrications merely mislead users or waste their time, this case shows how severe the consequences of AI-generated falsehoods can be.

OpenAI includes a disclaimer on ChatGPT's homepage warning users that the system may occasionally generate incorrect information. At the same time, the company has marketed ChatGPT as a reliable source of information, with CEO Sam Altman even stating that he prefers learning from ChatGPT over books. The lawsuit raises questions about whether companies should be held responsible for errors made by their AI systems, and whether any legal precedent applies.
In the United States, Section 230 generally shields internet companies from legal liability for third-party content, but it remains uncertain whether those protections extend to AI-generated content. The defamation suit against OpenAI could reshape the legal landscape surrounding AI-generated information. While some legal experts predict the lawsuit will be difficult to sustain, others believe it could set a precedent for similar cases. The outcome may have far-reaching implications for the legal accountability of AI creators and users. As AI technology continues to advance and permeate society, striking a balance between innovation, utility, and legal responsibility will be crucial.

By Ava Martinez, 10 Jun 2023