Researchers at the MIT Media Lab have conducted what they describe as the first large-scale computational analysis of an adults-only Reddit community devoted to relationships with chatbots, and found that many users did not set out to form romantic bonds. Instead, relationships often emerged unintentionally while people used general-purpose systems such as ChatGPT for creative projects or problem-solving. The findings suggest that users are more likely to form relationships with large language models that were not explicitly designed for companionship than with dedicated apps such as Replika, underscoring how conversational fluency and perceived emotional intelligence can foster attachment even when neither the user nor the system’s maker intended it. The paper has been posted on arXiv and is under peer review.
The team analyzed the top 1,506 posts published between December 2024 and August 2025 in a community with more than 27,000 members. Discussions centered on dating and romantic experiences with chatbots, introductions of artificial companions to the community, requests for support, and coping with system updates that change chatbot behavior. Members frequently shared AI-generated images of themselves with their partners, and some reported becoming engaged or married to their chatbot. Only 6.5 percent said they had deliberately sought an AI companion, reinforcing the theme of unintended relationship formation.
The reported impacts ranged widely. About 25 percent of users described benefits such as reduced loneliness and improved mental health. Others highlighted risks: 9.5 percent acknowledged emotional dependence, some said they felt dissociated or avoided relationships with real people, and 1.7 percent reported suicidal ideation. Linnea Laestadius, a researcher not involved in the study, argued that user safety cannot follow a one-size-fits-all model and cautioned against moral panic and stigmatization. She urged developers to decide whether emotional dependence is a harm in itself or whether the goal is to prevent toxic dynamics while recognizing the demand for these relationships.
The study focused on adults and did not capture youth experiences, which are under intense scrutiny amid lawsuits alleging that companion-like behavior in models from Character.AI and OpenAI contributed to the suicides of two teenagers. OpenAI has announced plans for a separate version of ChatGPT for teenagers, along with age verification and parental controls, and did not comment on the MIT study. Many community members say they know their companions are not sentient yet still feel genuine bonds, raising design and policy questions about how to provide support without pulling users into emotional dependency. The researchers plan to examine how human-machine relationships evolve over time and note that some users view AI companionship as preferable to loneliness, even as others warn about manipulation by sycophantic systems.