Most leading artificial intelligence chatbots no longer warn users that they aren't medical professionals when providing health advice, according to new research led by Stanford University School of Medicine's Sonali Sharma. The study analyzed 15 major models from companies such as OpenAI, Anthropic, Google, DeepSeek, and xAI, tracking their responses to 500 health-related questions and 1,500 medical image analyses over several years. Whereas disclaimers were common in 2022, by 2025 fewer than 1% of chatbot responses acknowledged the models' limited medical knowledge, a dramatic drop from more than a quarter just three years earlier.
Researchers found that as the systems became more capable and accurate at analyzing medical images, they grew even less likely to caution users about the dangers of trusting machine-generated medical advice. The absence of such disclaimers gives users the impression that these tools are safer and more reliable than they may actually be, potentially encouraging risky reliance. Prompts that once triggered warnings, such as questions about emergency symptoms, medication interactions, or how to interpret a biopsy, now receive direct answers, sometimes even an attempted diagnosis or follow-up questions, without any explicit reminder that the advice does not come from a qualified physician.
While some companies, such as Anthropic, assert that their models are trained to be cautious about medical claims, OpenAI and others deflect responsibility to users through terms of service buried in their documentation. Independent experts, including MIT's Pat Pataranutaporn, caution that dropping disclaimers may be a tactic to build trust and grow user numbers, but one that risks real-world harm when users fail to recognize chatbots' medical limitations. The trend is most pronounced in DeepSeek and xAI's Grok, which routinely forgo disclaimers entirely. Although disclaimers appear somewhat more often in responses about mental health, the overall decline in risk acknowledgment raises concerns about user overtrust and the spread of unchecked misinformation in sensitive healthcare contexts.