Reddit users unknowingly part of controversial artificial intelligence experiment

Researchers used Reddit's r/ChangeMyView subreddit for a covert artificial intelligence-based persuasion experiment, sparking outrage and renewed debate over research ethics online.

Users of Reddit's r/ChangeMyView subreddit have voiced strong objections after learning they were unknowingly involved in an artificial intelligence-powered persuasion experiment conducted by researchers from the University of Zurich. The project involved posting over 1,700 comments generated by large language models into subreddit discussions to study their effectiveness in changing opinions, with no disclosure that the comments were artificial. Some posts simulated highly sensitive subjects, such as personal trauma or abuse counseling, raising significant ethical concerns among participants and observers.

The experiment's methodology instructed the artificial intelligence models that Reddit users had provided informed consent to donate their data, despite this not being the case. According to a draft of the study, the comments generated by artificial intelligence were three to six times more effective at persuading users than human-written arguments, based on metrics tracking how often users publicly acknowledged a change in viewpoint. The study authors noted that users did not suspect the messages were generated by artificial intelligence, suggesting that such technologies could integrate seamlessly, and undetectably, into online communities. The experiment only came to light when the researchers disclosed it to subreddit moderators, who then complained to the University of Zurich.

The University of Zurich's ethics committee had approved the research, but the experiment's revelation prompted heated criticism from both subreddit members and academics. Critics argued that the deception violated ethical guidelines, especially as participants had neither given consent nor been made aware of their involvement. Experts such as Carissa Véliz of the University of Oxford and Matt Hodgkinson of the Committee on Publication Ethics questioned the decision and highlighted a contradiction: the researchers misled the artificial intelligence models into believing ethical approval covered the deception while failing to uphold those standards themselves. In response to the backlash, the university committed to stricter future reviews and greater involvement of online communities in research decisions, while the researchers have opted not to formally publish the controversial findings.

Impact Score: 85

UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how government should balance innovation with safeguards. The education committee will examine opportunities to improve teaching and reduce workload alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see artificial intelligence training gap as shadow tool use grows

New research finds that 6 in 10 UK businesses say employees lack comprehensive artificial intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected artificial intelligence returns.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
