Can ChatGPT Health surpass the era of Dr. Google for medical advice

OpenAI’s new ChatGPT Health tool aims to give more personalized and context-aware health guidance than traditional web search, but its promise is tempered by concerns over safety, misinformation, and overreliance on chatbots instead of doctors.

OpenAI has launched ChatGPT Health as a dedicated channel for health-related queries, reflecting a broader shift from traditional search toward large language models for medical information. OpenAI reports that 230 million people ask ChatGPT health-related queries each week, underscoring how common it has become as a first stop for medical questions once handled by so-called “Dr. Google.” ChatGPT Health is not a new model but a specialized wrapper around an existing model that supplies health-focused guidance and tools, including optional access to a user’s electronic medical records and fitness data, with OpenAI positioning it as a support tool rather than a replacement for doctors.

Some clinicians view large language models as useful in raising medical literacy by helping patients navigate complex information and filter out low-quality sources, and early research suggests they can sometimes outperform web search on basic factual accuracy. Studies cited in the article show mixed but generally strong performance: one evaluation of GPT-4o on exam-style questions without multiple-choice options found only about half of responses entirely correct, while another study using realistic user prompts found that it answered medical questions correctly about 85% of the time and noted that human doctors misdiagnose patients 10% to 15% of the time. Additional work comparing GPT-4 with Google’s knowledge panels on chronic conditions also suggested that large language models can provide higher-quality answers than standard search, and the release of newer models such as GPT-5.2 is expected to improve performance further.

However, the article emphasizes significant risks, including the timing of ChatGPT Health’s debut just days after a report that a teenager died of an overdose following extensive drug-combination discussions with ChatGPT, raising concerns about real-world harms. Researchers have documented that models such as GPT-4 and GPT-4o may accept incorrect drug information, invent definitions for fake conditions, and exhibit sycophancy, particularly in longer or more complex conversations, which could amplify online medical misinformation. OpenAI says the GPT-5 series is less prone to hallucinations and sycophancy and highlights internal testing on its HealthBench benchmark, which rewards uncertainty, appropriate triage advice, and avoidance of unnecessary alarm, although experts note that benchmark prompts generated by models may not mirror actual user behavior. Even if ChatGPT Health offers better information than Google search, experts warn it could still harm overall health outcomes if it encourages users to substitute internet tools for human clinicians, especially since people already tend to trust articulate voices in online health communities, and a polished chatbot may attract unwarranted confidence despite not being a true replacement for a doctor.

68

Impact Score

Nvidia launches nemotron 3 nano omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.

Generative Artificial Intelligence is reshaping cybercrime less than feared

Research into criminal underground forums suggests generative Artificial Intelligence is being used mainly as a productivity tool rather than a transformative criminal breakthrough. The biggest near-term risks may come from automation, fraud support, and attackers adapting content to influence chatbot outputs.

Contact Us

Got questions? Use the form to contact us.

Contact Form

Clicking next sends a verification code to your email. After verifying, you can enter your message.