The article introduces a comprehensive legal analysis of hallucinations produced by large language models and their implications for defamation law. It situates the debate in high-profile statements about Artificial Intelligence, citing Sam Altman’s warning about existential risk and endorsements by Bill Gates and Sundar Pichai of its transformative potential. The authors note that although Artificial Intelligence systems are not yet “generally” smarter than humans, progress has been rapid: on the ARC-AGI reasoning benchmark, OpenAI’s GPT-3 scored “zero percent” in 2020, while OpenAI’s o3-preview scored between “75% and 88%” five years later. The piece explains that large language models are prediction engines that synthesize massive textual corpora and sometimes produce “plausible yet false outputs,” including fake legal documents, non-existent citations, and false biographical details, with hallucinations observed in “3% to 10%” of outputs, leading some scholars to dub them “Large Libel Models.”
The authors review real-world litigation and doctrinal friction points. They recount Walters v. OpenAI, in which radio host Mark Walters sued after GPT-3.5 hallucinated accusations of fraud and embezzlement in response to a prompt from journalist Frederick Riehl. The article emphasizes the difficulty of applying traditional scienter requirements to machine-generated speech, because only human actors can possess the requisite state of mind regarding falsity or publication. It examines whether an LLM’s output counts as a factual assertion when the prompter doubts its accuracy, how to assign publisher or distributor status to LLM producers, and whether presumed damages are appropriate when harm has not been proven.
Drawing on a taxonomy of hallucinations and recent computer-science research, the authors argue that hallucinations are both inevitable and, at times, valuable to the models’ generative capacities. They propose a two-pronged response: adapt tort doctrine to treat defamatory hallucinations as “inevitable errors” in the tradition of New York Times v. Sullivan while treating LLMs more like information distributors, and enact statutory duties for LLM producers. Recommended reforms include a duty to warn users against relying on unverified outputs, a duty to preserve search records for a limited period to enable proof of harm, incentives for transparency and safety innovations, and liability for users who republish hallucinations without reasonable verification.
