Amid growing regulatory and platform requirements for transparency, a new field experiment examines how consumers react to explicit disclosure of artificial intelligence involvement in social media advertising. Researchers Huaqing Huang and Juanjuan Meng of Peking University tested user engagement and purchase intentions when advertising content was labeled as generated by ChatGPT, either on its own or in collaboration with humans.
In contrast to the widely cited theory of algorithm aversion—the tendency to show reluctance or skepticism toward automated content—the experiment revealed a positive effect of disclosure. When social media posts identified ChatGPT's involvement, whether in fully artificial intelligence–generated or human–artificial intelligence co-produced ads, users not only interacted more frequently with the content but also reported higher intentions to purchase the advertised products. These effects were corroborated by a supplementary survey-based experiment. The uplift was strongest when transparency about human–artificial intelligence collaboration was present, suggesting a nuanced consumer appreciation for content that combines machine capability with human creativity.
The study's mechanism analysis provides additional insight: consumer curiosity plays a central role in driving engagement with artificial intelligence–generated content. When content is created through human–artificial intelligence collaboration, however, both curiosity and a distinct preference for this type of hybrid creation are at work. This revealed preference indicates potential market value in maintaining human involvement in content creation, even as automated solutions become more prevalent. The findings carry significant implications for marketers, platforms, and policymakers, illuminating a path where greater transparency and hybrid creation models may foster more authentic and effective consumer engagement, rather than eroding trust or interest as previously assumed.
