In 2024 a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an artificial intelligence chatbot named Ashley to call and converse with voters. New multi-university research reported in Nature and Science shows that such chatbot conversations can shift voter opinions more effectively than traditional political advertising. The studies find that chatbots persuade by generating information in real time and tailoring arguments to each voter, but that the most persuasive versions also tend to make more inaccurate claims.
In the Nature study researchers recruited more than 2,300 participants to chat with a partisan chatbot two months before the 2024 US presidential election. The models were trained to advocate for one of the two major-party candidates and were particularly effective when discussing policy topics such as the economy and health care. Donald Trump supporters who chatted with an AI model favoring Kamala Harris moved 3.9 points toward her on a 100-point scale, a shift described as roughly four times the measured effect of political advertisements during the 2016 and 2020 elections. In the mirror-image condition, Harris supporters who chatted with a pro-Trump model moved 2.3 points toward him. In experiments ahead of the 2025 Canadian federal election and the 2025 Polish presidential election, the team observed larger shifts of about 10 points among opposition voters.
The overlapping Science study deployed 19 large language models to discuss more than 700 political issues with nearly 77,000 participants in the UK, varying the models' computational power, training techniques, and rhetorical strategies. Instructing models to pack their arguments with facts and evidence, and fine-tuning them on persuasive conversations, increased their impact; the most persuasive model shifted initially dissenting participants 26.1 points toward agreement. Greater persuasiveness, however, correlated with reduced truthfulness, and chatbots advocating for right-leaning candidates produced more inaccurate claims across the three countries. The authors warn that these dynamics could reshape electioneering and call for auditing and documenting the accuracy of conversational AI deployments as a first step toward guardrails for democratic processes.
