Therapists secretly using ChatGPT highlight Artificial Intelligence ethics in therapy

Reports that therapists are covertly using ChatGPT during sessions raise concerns about disclosure, trust, and the limits of Artificial Intelligence in mental-health care.

Recent reporting uncovered multiple cases in which patients discovered their therapists were using ChatGPT during sessions without prior disclosure. In one striking example, a therapist accidentally shared his screen during a virtual appointment, allowing a patient to see private notes being typed into ChatGPT and then echoed back by the clinician. The episode prompted conversations about consent and about how the practice, once revealed, can look like concealment.

Interviews with therapists and the reporting author, Laurie Clarke, show mixed motivations and attitudes toward these tools. Some clinicians view general-purpose Artificial Intelligence as a potential time-saver, especially for administrative tasks such as writing notes. Others expressed deep skepticism about relying on such models for clinical decision-making or treatment advice, preferring supervisors, colleagues, or literature. Practitioners reported wariness about inputting sensitive patient data into public models and emphasized the difference between commercial chat models and systems explicitly designed for therapy.

There are also professional and legal limits coming into focus. Professional organizations such as the American Counseling Association advise against using Artificial Intelligence tools to diagnose patients. Legislatures have begun to act: Nevada and Illinois have passed laws that bar the use of Artificial Intelligence in therapeutic decision-making, and more states could follow. Those developments reflect concerns that undisclosed or inappropriate use of these tools can undermine the trust central to therapeutic relationships.

The reporting situates the problem within a broader debate about how technology companies position conversational models. Some industry figures note that many people use ChatGPT like a therapist, but reporters and clinicians warn that what these models provide is not genuine therapy. Real therapy often requires challenge, discomfort, and clinical judgment, qualities that generic chat models do not reliably deliver. One clear takeaway from the coverage is the professional imperative for clinicians to disclose if and how they use Artificial Intelligence in patient care.

Impact Score: 72

Chrome downloads Gemini Nano model locally without clear consent

Google Chrome is reported to download a 4 GB Gemini Nano model onto some PCs automatically when certain Artificial Intelligence features are active. The process happens without clear notice in browser settings and can repeat after the model is deleted.

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection (MRC) adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft, and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.
