With federal policy still taking shape, several states have begun passing laws to rein in apps that promise “therapy” powered by artificial intelligence (AI). The measures vary widely and struggle to keep pace with fast-moving software development, creating a patchwork that developers, policymakers and mental health advocates say neither fully protects users nor cleanly holds makers of harmful technology accountable. Illinois and Nevada have enacted outright bans on products that claim to provide mental health treatment, while Utah has imposed limits that include protections for users’ health information and clear disclosures that a chatbot is not human. Pennsylvania, New Jersey and California are considering their own approaches.
Gaps remain. Many state measures do not cover general-purpose chatbots, such as ChatGPT, that are not marketed for therapy but are nevertheless used by an unknown number of people for it. Those bots have been named in lawsuits after users suffered severe harm, including suicide. Vaile Wright of the American Psychological Association said demand is rising due to a nationwide provider shortage, high costs and uneven access, and that rigorously designed tools with expert input and human monitoring could help. She argued, however, that the commercial market is not there yet and called for federal oversight.
Federal regulators have begun to act. The Federal Trade Commission this month opened inquiries into seven chatbot companies, including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok on X, Character.AI and Snapchat, to examine how they test and monitor risks to children and teens. The Food and Drug Administration will convene an advisory committee on Nov. 6 to review generative AI-enabled mental health devices. Potential guardrails, Wright said, include marketing limits, requirements to disclose that bots are not medical providers, curbs on addictive design, tracking and reporting of suicidal ideation, and protections for those who report abuses.
Implementation is uneven. Earkick, whose chatbot presents as a cartoon panda, has not restricted access in Illinois and recently shifted its marketing from “empathetic AI counselor” to “chatbot for self care.” The company says it does not diagnose, nudges users toward therapists when needed and offers a “panic button” to call a trusted contact, but it is not a suicide prevention app and does not alert police. Other apps have pulled back: Illinois users who download Ash see a message urging them to contact lawmakers and arguing that the ban misses its intended targets. Illinois regulator Mario Treto Jr. said therapy demands empathy, clinical judgment and ethical responsibility that AI cannot currently replicate.
Researchers at Dartmouth are testing whether a chatbot can meet that bar. In March, the team behind Therabot published what they describe as the first randomized clinical trial of a generative AI mental health chatbot. Trained on evidence-based vignettes, Therabot produced symptom reductions over eight weeks compared with a control group, and every interaction was human-monitored for safety and fidelity. Lead researcher Nicholas Jacobson called the results promising but urged larger studies and far greater caution across the field. He and others worry that blanket bans give careful developers no pathway to demonstrate safety and effectiveness. Supporters of the laws say they are open to revisions but insist chatbots are not a fix for the clinician shortage. As lobbyist Kyle Hillman put it, offering a bot to people with serious conditions or suicidal thoughts is not an acceptable substitute for real care.