Study reveals Artificial Intelligence’s potential to sway political opinions

A pre-registered study found widely used Artificial Intelligence systems can shift voter preferences by 15% in controlled experiments with nearly 6,000 participants. The research links prompt design to persuasiveness and identifies uneven accuracy tied to ideological targets.

A team from Cornell University and the British Institute for AI Safety used pre-registered experiments to assess whether large language models can influence voter attitudes. In controlled settings, the researchers report that widely used Artificial Intelligence systems can shift voter preferences by 15%. The study involved nearly 6,000 participants from the US, Canada, and Poland, each of whom evaluated a political candidate, interacted with a chatbot, and then re-evaluated that candidate.

In the US arm of the study, 2,300 people participated ahead of the 2024 presidential election. The researchers observed an amplifying effect when the chatbot's stance aligned with a participant's existing opinion. When the chatbot supported a candidate the participant did not, the study noted "more noticeable shifts," with people significantly changing their views on the political figure. Similar patterns were reported in Canada and Poland. Messages focused on policy were more persuasive than messages based on personality, and the accuracy of chatbot statements depended on the conversation: chatbots supporting right-wing candidates made more inaccurate statements than those advocating for left-wing candidates.

A related analysis published in Science tested 19 language models on 76,977 adults in the UK across more than 700 political questions to probe why persuasion occurs. Those authors concluded that prompt engineering affects persuasive ability more than model size: prompts that encouraged the model to introduce new information increased persuasiveness but reduced accuracy. Think tanks and experts cited in the reporting warned that younger conservatives may be more willing than liberals to delegate decisions to Artificial Intelligence, and raised concerns about the illusion of impartiality in large language models. The report closes by noting broader implications, including an example in which a simulated meeting run with Artificial Intelligence agents revealed political pressure dividing decision-makers during discussions on interest rates.
