Therapists using artificial intelligence without disclosure damage patient trust

Anecdotes and studies show that therapists are using artificial intelligence (AI) tools such as ChatGPT to draft messages or analyze sessions without telling clients. This undisclosed use can undermine trust and create privacy risks.

Several recent anecdotes describe therapists using AI, most often ChatGPT, during or after sessions without informing clients. One patient, Declan, discovered his therapist was feeding their live session into ChatGPT after a technical mishap exposed the therapist's screen. Others reported receiving polished messages that turned out to contain leftover AI prompts, leaving them surprised and distrustful. Confronting their therapists produced apologies in some cases, but also emotional fallout and, for some clients, the end of the therapy relationship.

The article cites research that complicates the picture. A 2025 study in PLOS Mental Health found that ChatGPT's responses to therapy vignettes could conform more closely to therapeutic best practice and were often indistinguishable from human replies, yet participants who suspected AI authorship rated those responses lower. A 2023 Cornell study similarly found that AI-generated messages can increase closeness only when recipients are unaware of the tool's role. Clinicians and researchers, including Adrian Aguilera at the University of California, Berkeley, argue that transparency and prior consent are essential if therapists intend to use AI for drafting communications or generating ideas.

Beyond trust, privacy and safety are central concerns. Experts note that general-purpose chatbots like ChatGPT are not HIPAA compliant and can expose sensitive information. Pardis Emami-Naeini of Duke University warns that seemingly innocuous details can allow reidentification, and that protecting patient data demands time and expertise that can negate the convenience of these tools. The article also references specialized vendors such as Heidi Health, Upheal, Lyssn, and Blueprint that claim HIPAA compliance, while cautioning that any recording or storage of sessions carries leakage risk. Past incidents, including a 2020 hack of a mental health provider in Finland, are cited as warnings about the consequences of data breaches.

The piece concludes that although AI can offer efficiency and communication benefits for busy or burnt-out therapists, undisclosed use risks damaging the therapeutic relationship and may produce clinical errors if therapists rely on AI for judgment rather than using it transparently and sparingly.

Impact Score: 72

Artificial intelligence sharpens humidity maps to improve forecasts

Researchers at Wrocław University of Environmental and Life Sciences used a super-resolution approach powered by artificial intelligence and NVIDIA GPUs to turn low-resolution GNSS snapshots into high-resolution 3D humidity maps, cutting retrieval errors in test regions.
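The teaser does not describe the model itself. As a rough illustration only, a super-resolution step for a 3D humidity field might resemble the following minimal PyTorch sketch; the architecture, layer sizes, scale factor, and all names here are assumptions for illustration, not the researchers' actual method.

```python
# Hypothetical sketch of 3D super-resolution for a humidity grid.
# Architecture and sizes are assumptions, not the published model.
import torch
import torch.nn as nn

class HumiditySuperRes(nn.Module):
    """Upsamples a coarse 3D humidity grid (e.g., from GNSS retrievals)
    to a finer grid, then refines it with a small convolutional stack."""
    def __init__(self, scale: int = 4):
        super().__init__()
        # Trilinear interpolation gives a smooth first guess at high resolution.
        self.upsample = nn.Upsample(scale_factor=scale, mode="trilinear",
                                    align_corners=False)
        self.refine = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width) low-resolution humidity field
        coarse = self.upsample(x)
        # Residual refinement: learn corrections to the interpolated field.
        return coarse + self.refine(coarse)

# Toy usage: an 8x16x16 grid upsampled to 32x64x64.
model = HumiditySuperRes(scale=4)
low_res = torch.randn(1, 1, 8, 16, 16)
high_res = model(low_res)
print(high_res.shape)  # torch.Size([1, 1, 32, 64, 64])
```

The residual design (interpolate first, then learn corrections) is a common choice in super-resolution work because the network only has to model fine-scale structure rather than the whole field.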

Can an artificial intelligence doppelgänger help me do my job?

Digital clones combine hyperrealistic video, lifelike voice cloning, and conversational models to mimic a person. Startups promise that an artificial intelligence doppelgänger can scale personal interactions, but practical limits and safety concerns remain.

What health care providers actually want from artificial intelligence

In a market flooded with artificial intelligence promises, health care leaders now prioritize pragmatic, pressure-tested solutions that address staffing shortages, clinician burnout, costs, and patient bottlenecks. Vendors that demonstrate integration, real-world validation, explainability, and clear return on investment are more likely to gain traction.
