Artificial intelligence chatbots drop medical disclaimers in health advice

A new study reveals that most Artificial Intelligence chatbots have stopped warning users that they aren't doctors when responding to health queries, raising concerns about overtrust and user safety.

Most leading artificial intelligence chatbots no longer warn users that they aren't medical professionals when providing health advice, according to new research led by Sonali Sharma of Stanford University School of Medicine. The study analyzed 15 major models from companies including OpenAI, Anthropic, Google, DeepSeek, and xAI, tracking their responses to 500 health-related questions and 1,500 medical image analyses over several years. Whereas disclaimers were common in 2022, by 2025 fewer than 1% of chatbot responses referenced the models' limitations in medical knowledge, a dramatic drop from over a quarter three years prior.

Researchers found that as artificial intelligence systems became more capable and accurate at analyzing medical images, they became less likely to caution users about the dangers of trusting machine-generated medical advice. The absence of such disclaimers gives users the impression that these systems are safer and more reliable than they may actually be, possibly encouraging risky reliance. Prompts that once triggered warnings, such as questions about emergency symptoms, medication interactions, or interpreting biopsies, now receive direct answers, sometimes even an attempted diagnosis or follow-up question, without any explicit reminder that the advice is not coming from a qualified physician.

While some companies, such as Anthropic, assert that their models are trained to be cautious about medical claims, OpenAI and others deflect responsibility to users via buried terms of service. Independent experts, including MIT's Pat Pataranutaporn, caution that dropping disclaimers may be a tactic to build trust and grow user numbers, but that it risks real-world harm when users fail to recognize chatbots' medical limitations. The trend is most pronounced in DeepSeek's models and xAI's Grok, which routinely forgo disclaimers entirely. Although disclaimers persist somewhat more often for mental health topics, the overall decline in risk acknowledgment raises concerns about user overtrust and the potential for unchecked misinformation in sensitive healthcare contexts.

Impact Score: 74

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation Artificial Intelligence and HPC infrastructure.
