Inevitable errors: defamation by hallucination in Artificial Intelligence reasoning models, by Lyrissa Lidsky & Andrew Daves

Lyrissa Lidsky and Andrew Daves argue that hallucinations in large language models are inevitable and can cause reputational harm, urging tort-based adaptations and statutory reforms to balance accountability with the benefits of Artificial Intelligence reasoning models.

The article presents a comprehensive legal analysis of hallucinations produced by large language models and their implications for defamation law. It situates the debate among high-profile statements about Artificial Intelligence, citing Sam Altman’s warning about existential risk and endorsements by Bill Gates and Sundar Pichai of its transformative potential. The authors note that although Artificial Intelligence systems are not “generally” smarter than humans yet, progress has been rapid: OpenAI’s GPT-3 scored “zero percent” in 2020, and five years later OpenAI’s o3-preview scored between “75% and 88%.” The piece explains that large language models are prediction engines that synthesize massive textual data and sometimes produce “plausible yet false outputs,” including fake legal documents, non-existent citations, and false biographical data, with hallucinations observed in “3% to 10%” of outputs, leading some scholars to call them “Large Libel Models.”

The authors review real-world litigation and doctrinal friction points. They recount the Walters v. OpenAI case, in which radio host Mark Walters sued after GPT-3.5 hallucinated accusations of fraud and embezzlement in response to a prompt from journalist Frederick Riehl. The article emphasizes the difficulty of applying traditional scienter requirements to machine-generated speech, because only human actors can possess the requisite state of mind about falsity or publication. It examines whether an LLM’s output counts as a factual assertion when the prompter doubts it, how to assign publisher or distributor status to LLM producers, and whether presumed damages are appropriate when harm is not proven.

Drawing on a taxonomy of hallucinations and recent computer-science research, the authors argue that hallucinations are both inevitable and sometimes valuable to generative capacities. They propose a two-pronged response: adapt tort doctrines to treat defamatory hallucinations as “inevitable errors” in the tradition of New York Times v. Sullivan while treating LLM producers more like information distributors, and enact statutory duties for those producers. Recommended reforms include a duty to warn users against unverified reliance, a duty to preserve search records for a limited period to enable proof of harm, incentives for transparency and safety innovations, and liability for users who republish hallucinations without reasonable verification.

Impact Score: 65

GPUBreach bypasses IOMMU on GDDR6-based NVIDIA GPUs

Researchers from the University of Toronto describe GPUBreach, a rowhammer attack against GDDR6-based NVIDIA GPUs that can bypass IOMMU protections. The technique enables CPU-side privilege escalation by abusing trusted GPU driver behavior on the host system.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.
