Recent reporting has uncovered multiple cases in which patients discovered their therapists were using ChatGPT during sessions without prior disclosure. In one striking example, a therapist accidentally shared his screen during a virtual appointment, allowing the patient to see private notes being typed into ChatGPT and the model's suggestions then echoed back by the clinician. The episode prompted broader conversations about consent and about how undisclosed use can look like concealment once it comes to light.
Interviews with therapists conducted by the article's author, Laurie Clarke, reveal mixed motivations and attitudes toward these tools. Some clinicians view general-purpose artificial intelligence (AI) as a potential time-saver, especially for administrative tasks such as writing session notes. Others expressed deep skepticism about relying on such models for clinical decision-making or treatment advice, preferring to consult supervisors, colleagues, or the literature. Practitioners also reported wariness about entering sensitive patient data into public models and emphasized the difference between commercial chat models and systems explicitly designed for therapy.
Professional and legal limits are also coming into focus. Professional organizations such as the American Counseling Association advise against using AI tools to diagnose patients. Legislatures have begun to act: Nevada and Illinois have passed laws barring the use of AI in therapeutic decision-making, and more states could follow. These developments reflect concern that undisclosed or inappropriate use of such tools can undermine the trust central to therapeutic relationships.
The reporting situates the problem within a broader debate about how technology companies position conversational models. Some industry figures note that many people already use ChatGPT like a therapist, but reporters and clinicians warn that what these models provide is not genuine therapy: real therapy often requires challenge, discomfort, and clinical judgment, qualities that generic chat models do not reliably deliver. One clear takeaway from the coverage is the professional imperative for clinicians to disclose whether and how they use AI in patient care.