Therapists' secret use of ChatGPT highlights Artificial Intelligence ethics in therapy

Reports that therapists are covertly using ChatGPT during sessions raise concerns about disclosure, trust, and the limits of Artificial Intelligence in mental-health care.

Recent reporting uncovered multiple cases in which patients discovered their therapists were using ChatGPT during sessions without prior disclosure. In one striking example, a therapist accidentally shared his screen during a virtual appointment, allowing a patient to see private notes being typed into ChatGPT and then echoed back by the clinician. The episode prompted conversations about consent and the appearance of concealment when such use comes to light.

Interviews with therapists and the reporting author, Laurie Clarke, show mixed motivations and attitudes toward these tools. Some clinicians view general-purpose Artificial Intelligence as a potential time-saver, especially for administrative tasks such as writing notes. Others expressed deep skepticism about relying on such models for clinical decision-making or treatment advice, preferring supervisors, colleagues, or literature. Practitioners reported wariness about inputting sensitive patient data into public models and emphasized the difference between commercial chat models and systems explicitly designed for therapy.

There are also professional and legal limits coming into focus. Professional organizations such as the American Counseling Association advise against using Artificial Intelligence tools to diagnose patients. Legislatures have begun to act: Nevada and Illinois have passed laws that bar the use of Artificial Intelligence in therapeutic decision-making, and more states could follow. Those developments reflect concerns that undisclosed or inappropriate use of these tools can undermine the trust central to therapeutic relationships.

The reporting situates the problem within a broader debate about how technology companies position conversational models. Some industry figures note that many people use ChatGPT like a therapist, but reporters and clinicians warn that what these models provide is not genuine therapy. Real therapy often requires challenge, discomfort, and clinical judgment, qualities that generic chat models do not reliably deliver. One clear takeaway from the coverage is the professional imperative for clinicians to disclose if and how they use Artificial Intelligence in patient care.

Impact Score: 72

Artificial Intelligence is coming for YouTube creators

More than 15.8 million YouTube videos from over 2 million channels appear in at least 13 public data sets used to train generative Artificial Intelligence video tools, often without creators' permission. Creators and legal advocates are contesting whether such mass downloading and training is lawful or ethical.

Netherlands issues new Artificial Intelligence Act guidance

Businesses in the Netherlands have been given updated guidance on how the new EU-wide Artificial Intelligence Act will affect them. The 21-page guide, available in English as a 287KB PDF, sets out practical steps to assess scope and compliance obligations.

FSU experts on the role of Artificial Intelligence in health care

Florida State University professors Zhe He and Delaney La Rosa are available to discuss how Artificial Intelligence is reshaping diagnosis, treatment planning and access to care in rural communities. Media can contact them for commentary and interviews.
