Reddit users unknowingly part of controversial artificial intelligence experiment

Researchers used Reddit's r/ChangeMyView subreddit for a covert artificial intelligence-based persuasion experiment, sparking outrage and renewed debate over research ethics online.

Users of Reddit's r/ChangeMyView subreddit have voiced strong objections after learning they were unknowingly involved in an artificial intelligence-powered persuasion experiment conducted by researchers from the University of Zurich. The project involved posting over 1,700 comments generated by large language models into subreddit discussions to study their effectiveness in changing opinions, with no disclosure that the comments were artificial. Some posts simulated highly sensitive subjects, such as personal trauma or abuse counseling, raising significant ethical concerns among participants and observers.

The experiment's methodology instructed the artificial intelligence models that Reddit users had provided informed consent to donate their data, despite this not being the case. According to a draft of the study, the comments generated by artificial intelligence were three to six times more effective at persuading users than human-made arguments, based on metrics tracking how often users publicly acknowledged a change in viewpoint. The study authors noted that users did not suspect the messages were generated by artificial intelligence, suggesting that such technologies could integrate seamlessly, and undetectably, into online communities. The deception only came to light when the researchers disclosed the experiment to the subreddit's moderators, who then complained to the University of Zurich.

The University of Zurich's ethics committee had approved the research, but the experiment's revelation prompted heated criticism from both subreddit members and academics. Critics argued that the deception violated ethical guidelines, especially as participants had not given consent nor were made aware of their involvement. Experts like Carissa Véliz of the University of Oxford and Matt Hodgkinson of the Committee on Publication Ethics questioned the decision and highlighted a contradiction in misleading artificial intelligence models about ethical approval while failing to uphold those standards themselves. In response to the backlash, the university committed to stricter future reviews and more involvement of online communities in research decisions, while the researchers have opted not to formally publish the controversial findings.


