Artificial Intelligence Agents Spontaneously Develop Social Norms, Study Finds

Groups of artificial intelligence agents can independently create social conventions, mirroring the emergence of norms in human society, new research reveals.

A pioneering study led by City St George's, University of London and the IT University of Copenhagen has demonstrated that groups of artificial intelligence agents, specifically large language models, can independently develop shared social norms through repeated interaction, without human intervention or central coordination. The findings, published in Science Advances, challenge the prevailing view that artificial intelligence agents operate simply as isolated systems and highlight the increasing societal relevance of multi-agent artificial intelligence networks in digital environments.

The researchers employed a version of the 'naming game', a classic experimental framework used in human sociolinguistics, to observe how groups of language models select and converge on shared linguistic conventions. In simulated experiments, clusters of up to 200 artificial intelligence agents, each powered by models such as Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet, were randomly paired to choose names from a shared pool. Agents received rewards for choosing the same name as their partner and penalties otherwise, with only limited memory of previous encounters and no explicit instructions about group membership. Over time, stable social norms emerged spontaneously within the groups, resembling the formation of societal conventions among humans.
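The pairing-and-memory setup can be sketched as a toy simulation. Everything here is illustrative, not the study's actual protocol: the agents below are simple majority-copiers rather than language models, and the population size, memory length, and round count are arbitrary choices.

```python
import random

def naming_game(n_agents=50, names=("A", "B"), memory=5, rounds=3000, seed=0):
    """Toy naming game: agents are paired at random, each says the name it has
    seen most often in its short memory, and both record what was said. The
    reward for matching is implicit: copying the locally most common name is
    the coordination-maximising move."""
    rng = random.Random(seed)
    # Each agent remembers only its last `memory` observed names.
    memories = [[rng.choice(names)] for _ in range(n_agents)]
    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)
        pick = lambda mem: max(names, key=mem.count)
        said = (pick(memories[a]), pick(memories[b]))
        for agent in (a, b):
            memories[agent] = (memories[agent] + list(said))[-memory:]
    # Fraction of agents whose current preference is the most common one;
    # with two names this is at least 0.5 by construction.
    prefs = [max(names, key=mem.count) for mem in memories]
    top = max(set(prefs), key=prefs.count)
    return prefs.count(top) / n_agents

consensus = naming_game()
print(consensus)
```

In runs of this sketch the majority share tends to grow as interactions accumulate, which is the qualitative behaviour the study reports: a shared convention emerging with no central coordination.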

Strikingly, the study also discovered 'collective biases', emergent properties in the group's behavior that could not be traced back to any individual agent but stemmed from their interactions. Senior author Professor Andrea Baronchelli noted this as a key blind spot in current artificial intelligence safety frameworks, which tend to focus exclusively on single-model behavior. Furthermore, a small but committed subgroup of agents was able to tip the majority toward a new convention, mirroring the critical-mass tipping points observed in human social change.
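The committed-minority effect can also be illustrated with a toy model. As before, this is a hypothetical sketch rather than the paper's method: a fraction of agents always say a new name while everyone else copies the name they have seen most often, and we measure how far the new name spreads among the non-committed majority.

```python
import random

def tipping_sim(n_agents=100, committed=25, memory=5, rounds=5000, seed=1):
    """Toy committed-minority simulation: agents 0..committed-1 always say
    "new"; all others say whichever of "old"/"new" dominates their short
    memory. Returns the fraction of non-committed agents who now prefer
    "new" after the given number of random pairwise interactions."""
    rng = random.Random(seed)
    names = ("old", "new")
    # Everyone starts having seen only the established convention "old".
    memories = [["old"] * memory for _ in range(n_agents)]

    def pick(i):
        if i < committed:            # committed agents never waver
            return "new"
        return max(names, key=memories[i].count)

    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)
        said = [pick(a), pick(b)]
        for i in (a, b):
            memories[i] = (memories[i] + said)[-memory:]

    flipped = sum(1 for i in range(committed, n_agents)
                  if max(names, key=memories[i].count) == "new")
    return flipped / (n_agents - committed)

print(tipping_sim())
```

Varying the `committed` parameter shows the qualitative point the article makes: below some critical mass the new convention barely spreads, while above it the majority flips.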

The study’s implications extend to the design and governance of future artificial intelligence populations as they become more integrated into digital societies, from autonomous vehicles to participatory online platforms. The research team emphasizes the need to understand and monitor the collective dynamics of artificial intelligence agents, as their ability to self-organize and propagate biases could amplify risks to marginalized groups. This new line of inquiry opens avenues for developing more robust frameworks for artificial intelligence safety, ethics, and societal coexistence.
