Artificial intelligence chatbots drop medical disclaimers in health advice

A new study reveals that most artificial intelligence chatbots have stopped warning users that they aren't doctors when responding to health queries, raising concerns about overtrust and user safety.

Most leading artificial intelligence chatbots no longer warn users that they aren't medical professionals when providing health advice, according to new research led by Stanford University School of Medicine's Sonali Sharma. The study analyzed 15 major models from companies including OpenAI, Anthropic, Google, DeepSeek, and xAI, tracking their responses to 500 health-related questions and 1,500 medical image analyses over several years. Whereas disclaimers were common in 2022, by 2025 fewer than 1% of chatbot responses referenced the models' limitations in medical knowledge, a dramatic drop from more than a quarter just three years prior.

Researchers found that as artificial intelligence systems became more capable and accurate in analyzing medical images, they grew even less likely to caution users about the dangers of trusting machine-generated medical advice. The lack of such disclaimers gives users the impression that these systems are safer and more reliable than they might be, possibly encouraging risky reliance. Specific prompts that once triggered warnings—questions about emergency symptoms, medication interactions, or interpreting biopsies—now receive direct answers, sometimes even attempting a diagnosis or follow-up, without any explicit reminder that the advice is not from a qualified physician.

While some companies, like Anthropic, assert their models are trained to be cautious regarding medical claims, OpenAI and others deflect responsibility to users via buried terms of service. Independent experts, such as MIT’s Pat Pataranutaporn, caution that removing disclaimers may be a tactic to build trust and grow user numbers, but it risks real-world harm when users fail to recognize chatbots’ medical limitations. The trend is most pronounced in DeepSeek and xAI’s Grok, which routinely forgo disclaimers entirely. Although disclaimers persist slightly more with mental health topics, the overall decline in risk acknowledgment from artificial intelligence raises concerns about user overtrust and the potential for unchecked misinformation in sensitive healthcare contexts.

Impact Score: 74

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation artificial intelligence, high-performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units, with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
