Artificial Intelligence chatbots reduce belief in conspiracy theories

A study published in Science found that short conversations with an Artificial Intelligence chatbot built on GPT-4 Turbo reduced belief in conspiracies for many participants, with effects persisting for at least two months.

Researchers reported in Science that brief, personalized conversations with an Artificial Intelligence chatbot substantially reduced belief in conspiracy theories. In an experiment with over 2,000 self-identified conspiracy believers, participants described the conspiracy they endorsed and the evidence they found persuasive, then engaged in a three-round chat with DebunkBot, a chatbot built on OpenAI’s GPT-4 Turbo. The average session lasted 8.4 minutes. After the interaction, confidence in the targeted belief fell by 20 percent on average, and roughly one in four participants who initially endorsed a conspiracy said they no longer believed it. The reduction held across both classic and contemporary politically charged conspiracies and persisted at a two-month follow-up.
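To make the setup concrete, here is a minimal sketch of what a three-round, personalized debunking dialogue could look like using the OpenAI Python SDK. The prompts, function name, and the way participant replies are collected are illustrative assumptions, not the authors' actual materials or pipeline.

```python
# Minimal sketch of a three-round debunking dialogue in the spirit of the study's setup.
# All prompts, names, and the interaction flow below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def debunking_dialogue(conspiracy: str, supporting_evidence: str, rounds: int = 3) -> list[dict]:
    """Run a short, personalized debunking conversation and return the transcript."""
    messages = [
        {
            "role": "system",
            "content": (
                "The user believes the following conspiracy theory and has explained why. "
                "Respond with accurate, specific factual evidence that addresses their stated "
                "reasons, politely and without condescension.\n\n"
                f"Belief: {conspiracy}\nTheir reasons: {supporting_evidence}"
            ),
        },
        {"role": "user", "content": "Here is why I find this convincing: " + supporting_evidence},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # the study used a GPT-4 Turbo model
            messages=messages,
        )
        assistant_text = reply.choices[0].message.content
        print(assistant_text)
        messages.append({"role": "assistant", "content": assistant_text})
        # In the experiment, participants wrote their own replies between rounds;
        # a stdin prompt stands in for that here.
        messages.append({"role": "user", "content": input("Your response: ")})
    return messages
```

A pre- and post-conversation confidence rating wrapped around a loop like this is what yields the belief-change measure the study reports.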

The authors attribute the effect to timely factual rebuttals rather than rhetorical framing. Follow-up experiments showed the debunking worked equally well when users were told they were talking to an expert rather than an Artificial Intelligence model, and it failed when the model was instructed to persuade without presenting facts and evidence. A professional fact-checker evaluated GPT-4’s claims and judged over 99 percent of them as true and not politically biased. In cases where a named conspiracy proved accurate, such as MK Ultra, the chatbot confirmed the correct belief rather than erroneously debunking it. The researchers argue that many conspiratorial beliefs reflect misinformed but relatively rational reasoning that can be shifted by clear, specialized explanations that are hard for laypeople to assemble quickly.

The study situates these findings within broader debates about generative Artificial Intelligence and misinformation. While acknowledging the harms of disinformation, the authors suggest that debunking bots could be deployed on social platforms, linked to search results, or used in private conversations to provide efficient, evidence-based rebuttals. The findings were covered as part of MIT Technology Review’s series on conspiracy theories; the study was conducted by Thomas Costello (Carnegie Mellon University), Gordon Pennycook (Cornell University), and David Rand (Cornell University). The researchers offer DebunkBot for public trial at debunkbot.com and present the results as a sign that facts and evidence can still shift public beliefs.

Impact Score: 68

Microsoft emails show early doubts about OpenAI

Court emails show Microsoft executives were unconvinced by OpenAI’s early Artificial Intelligence progress in 2018 while also worrying that rejecting the lab could push it toward Amazon. The messages reveal internal tension between skepticism over technical claims and concern about competitive and public relations fallout.

Apple explores Intel chip manufacturing alliance

Apple has reached a preliminary agreement with Intel to manufacture some chips for its devices, reflecting mounting pressure on semiconductor supply chains as Artificial Intelligence demand absorbs advanced capacity. The move also aligns with Washington’s push to expand domestic chip production and revive Intel’s foundry business.

Why businesses must act now on agentic Artificial Intelligence risk

Businesses are moving from generative tools to autonomous Artificial Intelligence agents that can execute tasks with limited human input. That shift is creating urgent governance, security, and accountability risks, underscored by recent concerns around OpenClaw.

US signals proactive approach on Artificial Intelligence regulation

US federal and state agencies are showing signs of a more proactive stance on Artificial Intelligence oversight, especially around security. The shift contrasts with more sector-specific or horizontal regulatory models emerging in the UK, Europe, Singapore and Japan.
