Artificial Intelligence Agents Spontaneously Develop Social Norms, Study Finds

Groups of artificial intelligence agents can independently create social conventions, mirroring how norms emerge in human societies, new research reveals.

A pioneering study led by City St George's, University of London and the IT University of Copenhagen has demonstrated that groups of artificial intelligence agents, specifically large language models, can independently develop shared social norms through repeated interaction—without human intervention or central coordination. The findings, published in Science Advances, challenge the prevailing view that artificial intelligence agents operate simply as isolated systems and highlight the increasing societal relevance of multi-agent artificial intelligence networks in digital environments.

The researchers employed a version of the 'naming game', a classic experimental framework used in human sociolinguistics, to observe how groups of language models select and converge on shared linguistic conventions. In simulated experiments, clusters of up to 200 artificial intelligence agents, each powered by models such as Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet, were randomly paired to choose names from a shared pool. Agents received rewards for choosing the same name as their partner and penalties otherwise, with only limited memory of previous encounters and no explicit instructions about group membership. Over time, stable social norms emerged spontaneously within the groups, resembling the formation of societal conventions among humans.
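The pairwise dynamic described above can be illustrated with a minimal naming-game simulation. This is a hedged sketch, not the paper's actual setup: the agents here are simple reinforcement rules rather than language models, and the parameters (population size, vocabulary, memory length, round count) are illustrative assumptions.

```python
import random

def naming_game(n_agents=50, vocab=("A", "B", "C"), memory=5,
                rounds=3000, seed=0):
    """Toy naming game: illustrative only, not the study's protocol.

    Each agent remembers its last few (name, success) outcomes and
    picks the name with the best net success score, breaking ties
    randomly. Returns the fraction of agents agreeing on the top name.
    """
    rng = random.Random(seed)
    memories = [[] for _ in range(n_agents)]  # per-agent bounded memory

    def pick(mem):
        if not mem:
            return rng.choice(vocab)
        # Score each remembered name: +1 per success, -1 per failure
        scores = {}
        for name, success in mem:
            scores[name] = scores.get(name, 0) + (1 if success else -1)
        best = max(scores.values())
        return rng.choice([n for n, s in scores.items() if s == best])

    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)  # random pairing
        a, b = pick(memories[i]), pick(memories[j])
        success = (a == b)  # reward when both chose the same name
        for idx, choice in ((i, a), (j, b)):
            memories[idx].append((choice, success))
            memories[idx] = memories[idx][-memory:]  # cap the memory

    # Consensus: fraction of agents currently picking the modal name
    picks = [pick(m) for m in memories]
    top = max(set(picks), key=picks.count)
    return picks.count(top) / n_agents

print(naming_game())  # typically high: a shared convention tends to emerge
```

Even with this stripped-down rule set, local pairwise rewards tend to push the population toward one shared name, which is the qualitative effect the study reports for language-model agents.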

Strikingly, the study also discovered 'collective biases'—emergent properties in the group's behavior that could not be traced back to any individual agent but stemmed from their interactions. Senior author Professor Andrea Baronchelli noted this as a key blind spot in current artificial intelligence safety frameworks, which tend to focus exclusively on single-model behavior. Furthermore, a small but committed subgroup of agents was able to tip the majority toward a new convention, mirroring the critical-mass tipping points observed in human social change.

The study’s implications extend to the design and governance of future artificial intelligence populations as they become more integrated into digital societies, from autonomous vehicles to participatory online platforms. The research team emphasizes the need to understand and monitor the collective dynamics of artificial intelligence agents, as their ability to self-organize and propagate biases could amplify risks to marginalized groups. This new line of inquiry opens avenues for developing more robust frameworks for artificial intelligence safety, ethics, and societal coexistence.


Port Washington vote challenges Artificial Intelligence data center expansion

Port Washington, Wisconsin, voters approved a measure that gives residents more control over large tax-incentivized development projects tied to the Artificial Intelligence infrastructure boom. The local pushback is emerging as a closely watched test of how communities respond to massive data center expansion.

Anthropic launches managed agents for enterprise development

Anthropic has introduced Claude Managed Agents, a new tool aimed at helping enterprises build and deploy Artificial Intelligence agents more quickly by handling core infrastructure tasks. The release adds to Anthropic’s recent product push as it competes for a fast-growing enterprise market.

Meta launches Muse Spark for its apps

Meta has introduced Muse Spark, an in-house large language model designed for its products and positioned as the first in a broader Muse family. The model brings multimodal reasoning, coding, shopping, and recommendation features to the Meta Artificial Intelligence app and website, with wider rollout planned.

Microsoft scales back Copilot in Windows 11 apps

Microsoft is pulling back some Copilot branding and interface elements from core Windows 11 apps after sustained user criticism. Notepad and Snipping Tool are among the latest apps to lose the prominent Copilot button as the company repositions some features.
