Researchers reported in Science that brief, personalized conversations with an artificial intelligence (AI) chatbot substantially reduced belief in conspiracy theories. In an experiment with more than 2,000 self-identified conspiracy believers, participants described the conspiracy theory they endorsed and the evidence that persuaded them, then engaged in a three-round conversation with DebunkBot, a chatbot built on OpenAI’s GPT-4 Turbo; the average session lasted 8.4 minutes. Afterward, confidence in the targeted belief fell by 20 percent on average, and roughly one in four participants who initially endorsed a conspiracy said they no longer believed it. The reduction held across both classic and contemporary politically charged conspiracies and persisted at a two-month follow-up.
The authors attribute the effect to timely factual rebuttals rather than rhetorical framing. Follow-up experiments showed that the debunking worked equally well when users were told they were talking to a human expert rather than an AI model, and that it failed when the model was instructed to persuade without presenting facts and evidence. A professional fact-checker who evaluated GPT-4’s claims judged over 99 percent of them true and not politically biased. In cases where a named conspiracy was in fact real, such as MKUltra, the chatbot confirmed the correct belief rather than erroneously debunking it. The researchers argue that many conspiratorial beliefs reflect misinformed but relatively rational reasoning, which can be shifted by clear, specialized explanations that laypeople find hard to assemble quickly.
The study situates these findings within broader debates about generative AI and misinformation. While acknowledging the harms of AI-generated disinformation, the authors suggest that debunking bots could be deployed on social platforms, linked to search results, or used in private conversations to deliver efficient, evidence-based rebuttals. The study was covered as part of MIT Technology Review’s series on conspiracy theories and was conducted by Thomas Costello (Carnegie Mellon University), Gordon Pennycook (Cornell University), and David Rand (MIT). The researchers offer DebunkBot for public trial at debunkbot.com and present the results as evidence that facts and evidence can still shift public beliefs.
