Teaching Artificial Intelligence to Provide Therapy

Researchers at Dartmouth College have made progress with an Artificial Intelligence therapy bot that shows promise for mental health support.

On March 27, researchers published results of the first clinical trial of Therabot, a generative AI therapy bot, showing that participants with depression, anxiety, or at risk for eating disorders benefited from interacting with it. Despite initial skepticism, researchers from Dartmouth College’s Geisel School of Medicine believe appropriate training data is crucial for AI therapy models to provide effective support.

The journey began with Therabot being trained on mental health-related conversations from online forums, which produced inadequate responses that often misread users. It became evident that the bot mirrored the non-expert dialogue it had learned from rather than offering therapeutic guidance. To rectify this, the team shifted to transcripts from real therapy sessions and evidence-based cognitive behavioral therapy techniques, which yielded improved outcomes.
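The shift described above amounts to curating supervised training pairs from expert dialogue instead of forum chatter. The sketch below is a minimal, hypothetical illustration of that idea: it pairs each patient utterance in a transcript with the therapist reply that follows. The function name, speaker labels, and example transcript are assumptions for illustration, not Dartmouth's actual pipeline.

```python
# Hypothetical sketch: turning a therapy transcript into supervised
# prompt/completion pairs suitable for fine-tuning. Speaker labels
# ("patient"/"therapist") are illustrative assumptions.

def transcript_to_pairs(turns):
    """Pair each patient utterance with the therapist reply that follows it."""
    pairs = []
    for i in range(len(turns) - 1):
        speaker, text = turns[i]
        next_speaker, next_text = turns[i + 1]
        if speaker == "patient" and next_speaker == "therapist":
            pairs.append({"prompt": text, "completion": next_text})
    return pairs

transcript = [
    ("patient", "I've been feeling anxious before every meeting."),
    ("therapist", "What thoughts go through your mind just before a meeting starts?"),
    ("patient", "That I'll say something wrong and everyone will judge me."),
    ("therapist", "Let's examine the evidence for and against that prediction."),
]

pairs = transcript_to_pairs(transcript)
print(len(pairs))  # 2 prompt/completion pairs
```

Training on pairs like these anchors the model's replies to expert responses rather than to whatever peers happened to say in a forum thread.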

Developing Therabot took several years and substantial human effort, which raises concerns about other AI therapy bots that lack similarly rigorous training. Moving forward, the key questions are whether the market’s plethora of AI therapy bots can utilize better data and whether these models can achieve FDA approval. These outcomes will significantly influence the future credibility and efficacy of AI-driven therapy solutions.

Impact Score: 72

HMS researchers design Artificial Intelligence tool to accelerate drug discovery

Harvard Medical School researchers unveiled PDGrapher, an Artificial Intelligence tool that identifies gene target combinations to reverse disease states up to 25 times faster than current methods. The Nature-published study outlines a shift from single-target screening to multi-gene intervention design.

How hackers poison Artificial Intelligence business tools and defences

Researchers report attackers are now planting hidden prompts in emails to hijack enterprise Artificial Intelligence tools and even tamper with Artificial Intelligence-powered security features. With most organisations adopting Artificial Intelligence, email must be treated as an execution environment with stricter controls.
