Therapists using artificial intelligence without disclosure damage patient trust

Anecdotes and studies show that therapists are using Artificial Intelligence tools such as ChatGPT to draft messages or analyze sessions without telling clients. This undisclosed use can undermine trust and create privacy risks.

Several recent anecdotes describe therapists using Artificial Intelligence, most often ChatGPT, during or after sessions without informing clients. One patient, Declan, discovered his therapist was feeding their live session into ChatGPT after a technical mishap revealed the therapist's screen. Others reported receiving polished messages that later turned out to contain AI prompts, leaving them surprised and distrustful. Confrontations with therapists produced apologies in some cases, but also emotional fallout and, for some, the end of the therapy relationship.

The article cites research that complicates the picture. A 2025 study in PLOS Mental Health found that ChatGPT responses to therapy vignettes adhered more closely to therapeutic best practices and were often indistinguishable from human replies, yet participants who suspected AI authorship rated those responses lower. A 2023 Cornell study similarly found that AI-generated messages can increase feelings of closeness, but only when recipients are unaware of the tool's role. Clinicians and researchers, including Adrian Aguilera at the University of California, Berkeley, argue that transparency and prior consent are essential if therapists intend to use Artificial Intelligence for drafting communications or generating ideas.

Beyond trust, privacy and safety are central concerns. Experts note that general-purpose chatbots like ChatGPT are not HIPAA compliant and can expose sensitive information. Pardis Emami-Naeini of Duke University warns that seemingly innocuous details can allow reidentification, and that protecting patient data requires time and expertise that may defeat the convenience of these tools. The article also references specialized vendors such as Heidi Health, Upheal, Lyssn, and Blueprint that claim HIPAA compliance, while cautioning that any recording or storage of sessions carries leakage risk. Past incidents, including a 2020 hack of a mental health provider in Finland, are cited as warnings about the consequences of data breaches. The piece concludes that although Artificial Intelligence can offer efficiency and communication benefits for busy or burnt-out therapists, undisclosed use risks damaging the therapeutic relationship and may produce clinical errors if therapists rely on AI for judgment rather than using it transparently and sparingly.

How Intel became central to America’s Artificial Intelligence strategy

The Trump administration took a 10 percent stake in Intel in exchange for early CHIPS Act funding, positioning the struggling chipmaker at the core of U.S. Artificial Intelligence ambitions. The high-stakes bet could reshape domestic manufacturing while raising questions about government overreach.

NextSilicon unveils processor chip to challenge Intel and AMD

Israeli startup NextSilicon is developing a RISC-V central processor to complement its Maverick-2 chip for precision scientific computing, positioning it against Intel and AMD and in competition with Nvidia's systems. Sandia National Laboratories has been evaluating the technology as the company claims faster, lower-power performance on some workloads without requiring code changes.
