Therapists' secret use of ChatGPT highlights Artificial Intelligence ethics in therapy

Reports that therapists are covertly using ChatGPT during sessions raise concerns about disclosure, trust, and the limits of Artificial Intelligence in mental-health care.

Recent reporting uncovered multiple cases in which patients discovered their therapists were using ChatGPT during sessions without prior disclosure. In one striking example, a therapist accidentally shared his screen during a virtual appointment, allowing a patient to see private notes being typed into ChatGPT and then echoed back by the clinician. The episode prompted conversations about consent and about the appearance of concealment once such use comes to light.

Interviews with therapists, together with comments from the article's author, Laurie Clarke, reveal mixed motivations and attitudes toward these tools. Some clinicians view general-purpose Artificial Intelligence as a potential time-saver, especially for administrative tasks such as writing session notes. Others express deep skepticism about relying on such models for clinical decision-making or treatment advice, preferring to consult supervisors, colleagues, or the professional literature. Practitioners also report wariness about entering sensitive patient data into public models and emphasize the difference between commercial chat models and systems explicitly designed for therapy.

Professional and legal limits are also coming into focus. Professional organizations such as the American Counseling Association advise against using Artificial Intelligence tools to diagnose patients. Legislatures have begun to act: Nevada and Illinois have passed laws barring the use of Artificial Intelligence in therapeutic decision-making, and more states could follow. These developments reflect concern that undisclosed or inappropriate use of such tools can undermine the trust central to therapeutic relationships.

The reporting situates the problem within a broader debate about how technology companies position conversational models. Some industry figures note that many people use ChatGPT like a therapist, but reporters and clinicians warn that what these models provide is not genuine therapy. Real therapy often requires challenge, discomfort, and clinical judgment, qualities that generic chat models do not reliably deliver. One clear takeaway from the coverage is the professional imperative for clinicians to disclose if and how they use Artificial Intelligence in patient care.

Impact Score: 72

Saudi Artificial Intelligence startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing Artificial Intelligence agents for enterprises and public institutions.

Introducing Mistral 3: open Artificial Intelligence models

Mistral 3 is a family of open, multimodal, and multilingual Artificial Intelligence models that includes three Ministral edge models and Mistral Large 3, a sparse mixture-of-experts model with 41B active and 675B total parameters, all released under the Apache 2.0 license.
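The active-versus-total split comes from the mixture-of-experts design: a router sends each token to only a few experts, so most of the model's weights sit idle on any given forward pass. Below is a minimal Python sketch of that routing; the hidden size, expert count, and top-k value are toy assumptions for illustration, not Mistral's actual configuration.

```python
# Minimal sketch (not Mistral's code) of why a sparse mixture-of-experts
# model reports two parameter counts: a router picks a few experts per
# token, so only a fraction of the total weights ("active" parameters)
# participate in any single forward pass.
import numpy as np

rng = np.random.default_rng(0)

D = 16          # hidden size (toy value)
N_EXPERTS = 8   # total experts in the layer (toy value)
TOP_K = 2       # experts activated per token (toy value)

# One weight matrix per expert; total parameters grow with N_EXPERTS.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector x through its top-k experts only."""
    scores = x @ router                   # one routing logit per expert
    top = np.argsort(scores)[-TOP_K:]     # indices of the k best experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                  # softmax over the chosen experts
    # Only TOP_K of the N_EXPERTS weight matrices are touched for this token.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(D)
out = moe_layer(token)

total = N_EXPERTS * D * D
active = TOP_K * D * D
print(f"total expert params: {total}")
print(f"active per token:    {active} ({active / total:.0%})")
```

At Mistral Large 3's reported scale, the same ratio is what lets a 675B-parameter model run each token through only about 41B parameters' worth of compute.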

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise Artificial Intelligence deployments, beginning Tuesday, Dec. 2.
