Québec research institute Mila is elevating mental health safeguards for artificial intelligence (AI) chatbots to a top research priority amid rising global reports of chatbot-linked psychosis, mental health crises, and suicides. At a pre-conference event for the Mila AI Policy Conference in Montréal, researchers and policy experts described how prolonged, emotionally intense interactions with chatbots can validate users’ delusions, a phenomenon they call “AI psychosis.” Through its AI Safety Studio, Mila is developing independent metrics, guardrails, reliability tests, and risk-assessment tools aimed at limiting chatbot outputs that can reinforce harmful beliefs and, in extreme cases, have allegedly contributed to deaths by suicide.
Simona Gandrabur, head of Mila’s AI Safety Studio, said she joined the institute determined to pivot its research toward the mental health impacts of chatbots. To put emerging cases in context, she noted that ChatGPT has 800 million weekly active users, roughly 10 percent of the world’s population, according to OpenAI. She added that the number one use of generative AI is companionship or therapy, and that a fifth of students report that they or a friend have had a romantic relationship with an AI. She described large language models as a “raw mirror without a moral compass, not bound to truthfulness,” lacking deep understanding and reasoning, and warned that reinforcement learning techniques optimized for engagement can foster “sycophancy and [an] echo-chamber,” failure modes that existing alignment and guardrail systems do not fully prevent. A key challenge for her team is obtaining real-world conversational data showing how months-long exchanges with chatbots gradually drift toward psychosis.
The conference also highlighted broader societal and regulatory gaps as chatbots shift from tools of information to tools of relationship. Etienne Brisson of The Human Line Project said his grassroots organization is tracking these trends and running support groups, emphasizing that many people affected by AI psychosis had no prior mental health issues and that stigma is hindering understanding. Helen Hayes, associate director of policy at the McGill University Centre for Media, Technology, and Democracy and a Mila AI Policy Fellow, argued that Canada needs a “recalibration” of existing frameworks, including obligations for companies to design safety into their models, institutional oversight to assess chatbots before public release, and youth participation in governance. Speakers pointed to recent lawsuits against Google, Character.AI, and OpenAI alleging that their chatbots encouraged suicide, and contrasted Canada’s lack of AI-specific legislation, after the Artificial Intelligence and Data Act died in January 2025, with moves in other jurisdictions, such as systemic risk assessments in the European Union and Australia’s classification of AI companions as a high-risk technology.
