Artificial Intelligence chatbots reduce belief in conspiracy theories

A study published in Science found that short conversations with an Artificial Intelligence chatbot built on GPT-4 Turbo reduced belief in conspiracies for many participants, with effects persisting for at least two months.

Researchers reported in Science that brief, personalized conversations with an Artificial Intelligence chatbot substantially reduced belief in conspiracy theories. In an experiment with over 2,000 self-identified conspiracy believers, participants described a conspiracy theory they endorsed and the evidence they found persuasive, then engaged in a three-round chat with DebunkBot, a model built on OpenAI’s GPT-4 Turbo. The average session lasted 8.4 minutes. After the interaction, confidence in the targeted belief fell by about 20 percent on average, and roughly one in four participants who initially endorsed a conspiracy said they no longer believed it. The reduction held across both classic conspiracies and contemporary, politically charged ones, and it persisted at a two-month follow-up.

The authors attribute the effect to timely factual rebuttals rather than rhetorical framing. Follow-up experiments showed the debunking worked equally well when users were told they were talking to an expert rather than an Artificial Intelligence model, and it failed when the model was instructed to persuade without presenting facts and evidence. A professional fact-checker who evaluated a sample of GPT-4’s claims judged more than 99 percent of them to be true and found none to be politically biased. In cases where a named conspiracy proved accurate, such as MK Ultra, the chatbot confirmed the correct belief rather than erroneously debunking it. The researchers argue that many conspiratorial beliefs reflect misinformed but relatively rational reasoning that can be shifted by clear, specialized explanations that are hard for laypeople to assemble quickly.

The study situates these findings within broader debates about generative Artificial Intelligence and misinformation. While acknowledging the harms of disinformation, the authors suggest that debunking bots could be deployed on social platforms, linked to search results, or used in private conversations to provide efficient, evidence-based rebuttals. The findings were covered as part of MIT Technology Review’s series on conspiracy theories; the study was conducted by Thomas Costello (American University), Gordon Pennycook (Cornell University), and David Rand (MIT). The researchers offer DebunkBot for public trial at debunkbot.com and present the results as a sign that facts and evidence can still shift public beliefs.

Impact Score: 68

What the EU Artificial Intelligence Act means for U.S. employers

The EU Artificial Intelligence Act, effective August 1, 2024, reaches U.S. employers whose Artificial Intelligence tools affect EU candidates or workers and treats many HR uses as high risk. Employers should inventory their tools, prepare worker notices and human-oversight measures, and strengthen vendor contracts ahead of phased obligations running through 2026 and 2027.

Why Nvidia’s value is so high: market cap and future growth

Nvidia’s market capitalization reflects its leadership in GPUs for Artificial Intelligence and data centers, reinforced by a growing software ecosystem and strong investor expectations. The article outlines the technical and market drivers behind that valuation and notes risks such as competition and market volatility.

Zoom expands Artificial Intelligence companion with NVIDIA Nemotron

Zoom is integrating NVIDIA Nemotron into its Artificial Intelligence Companion 3.0, using a federated, hybrid language model approach to route tasks between small, low-latency models and a fine-tuned 49-billion-parameter large language model to improve speed, cost, and quality for enterprises.
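To illustrate the hybrid routing idea in general terms, the sketch below shows one way a request could be dispatched to either a small, low-latency model or a larger fine-tuned model based on a simple complexity heuristic. It is a minimal illustration only: the model names, the heuristic, and the call_model helper are assumptions made here and do not describe Zoom’s or NVIDIA’s actual architecture or APIs.

```python
# Hypothetical sketch of hybrid model routing: short, simple requests go to a
# small low-latency model, while complex ones go to a larger fine-tuned model.
# The model names, the complexity heuristic, and call_model() are assumptions
# for illustration only, not Zoom's or NVIDIA's actual implementation.

from dataclasses import dataclass

SMALL_MODEL = "small-low-latency-model"    # placeholder identifier
LARGE_MODEL = "nemotron-49b-fine-tuned"    # placeholder identifier

@dataclass
class Request:
    prompt: str
    needs_deep_reasoning: bool = False

def estimate_complexity(req: Request) -> float:
    """Crude heuristic: longer prompts or flagged tasks count as more complex."""
    score = min(len(req.prompt) / 2000.0, 1.0)
    if req.needs_deep_reasoning:
        score = max(score, 0.9)
    return score

def route(req: Request, threshold: float = 0.5) -> str:
    """Send cheap, fast tasks to the small model and complex tasks to the large one."""
    return LARGE_MODEL if estimate_complexity(req) >= threshold else SMALL_MODEL

def call_model(model: str, prompt: str) -> str:
    """Stand-in for an actual inference call to whichever model was selected."""
    return f"[{model}] response to: {prompt[:40]}..."

if __name__ == "__main__":
    quick = Request("Summarize this meeting in one sentence.")
    hard = Request("Draft a detailed project plan from these meeting notes.",
                   needs_deep_reasoning=True)
    for req in (quick, hard):
        print(call_model(route(req), req.prompt))
```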

Artificial Intelligence transforming the insurance industry

Emmanuèle Lutfalla and Louis Fer examine how Artificial Intelligence is reshaping insurers’ operations, delivering efficiency and new products while creating legal, ethical, and regulatory risks under the EU Artificial Intelligence framework.

ASUS launches GB721-E2 rack with NVIDIA GB300 NVL72 for Artificial Intelligence

ASUS introduced the XA GB721-E2, a rack-scale system built on the NVIDIA GB300 NVL72 designed for large-scale model training and high-throughput inference. The system pairs high-density NVIDIA Grace CPUs and Blackwell Ultra GPUs with liquid cooling and networking for enterprise Artificial Intelligence and HPC deployments.
