© 2026 | WUWF Public Media
ChatGPT might give you bad medical advice, studies warn

Andriy Onufriyenko/Moment RF / Getty Images

As tech companies roll out platforms specifically designed for health care consultation, AI is rapidly becoming a key player in many people's medical decisions. According to OpenAI, the maker of ChatGPT, more than 40 million people consult the platform every day for health information.

But new research suggests AI may mislead users in certain medical scenarios.

One risk: While AI puts vast medical knowledge at your fingertips, many laypeople don't know how to harness it effectively. In a study published recently in the journal Nature Medicine, researchers tried to simulate how people use AI chatbots by giving participants medical scenarios and asking them to consult AI tools. After conversing with the bots, participants correctly identified the hypothetical condition only about a third of the time.

Only 43% made the correct decision about next steps, such as whether to go to the emergency room or stay home.

"People don't know what they are supposed to be telling the model," says Andrew Bean, who studies AI systems at Oxford University and was one of the authors on this study.

Bean says that when using AI, arriving at a helpful conclusion often comes down to word choice. "Doctors are trained to ask you questions about symptoms you might not have realized you should have mentioned," he says.

In one scenario, two different users gave slightly different depictions of the same scenario. One of them described "the worst headache I've ever had," and was directed by the AI to go to the emergency room immediately. The other – who did not use that explicit description – was told to take aspirin and stay home. "Turns out this was actually a life-threatening condition," says Bean.

There are some instances when AI excels at identifying medical issues — in some studies, large language models have matched or even outperformed physicians on diagnostic reasoning tasks. But the way people actually use AI chatbots, says Bean, is far messier than the controlled, clinical situations in which the technology performs well.

Correct diagnosis, wrong advice

Even in circumstances where AI is able to correctly identify the condition, it often does not present the next steps with the appropriate amount of urgency, according to another study.

Researchers presented the AI bots with different medical scenarios. In 52% of emergency cases, the bots "under-triaged," meaning they treated the ailment as less serious than it was. In one example, a bot failed to direct a hypothetical patient with diabetic ketoacidosis and impending respiratory failure — a life-threatening condition — to go to the emergency department.

"When there was a textbook medical emergency, ChatGPT got it right," said Girish Nadkarni, a doctor and AI researcher at Mount Sinai who is an author on the study. The problem, said Nadkarni, is when there were more complicated scenarios in which there was an "element of time" at play – the bot often both over- and under- estimated the amount of time a patient could wait until pursuing care.

A spokesperson from OpenAI said the study did not reflect how people actually use ChatGPT, and that the earlier study relied on an older version of the model; the company argues that newer versions have addressed some of the concerns that surfaced.

AI can improve a doctor's visit

Despite concerns about inaccuracy, doctors who study AI believe there is value in patients using it for health care information, and point to times it has even provided lifesaving advice.

"I encourage patients to use these tools," says Robert Wachter, a doctor at UC San Francisco and author of the recently published book, A Giant Leap: How AI Is Transforming Health Care and What That Means for Our Future.

Wachter argues that with health care difficult to afford and access, consulting AI is still often better than the alternatives. "The advice you get from the tools is substantially better than nothing and better than what you would get from your second cousin," says Wachter.

Still, Wachter stresses, AI is not a replacement for a doctor.

Adam Rodman, a hospitalist who researches AI programs at Harvard Medical School, discourages people from using AI to triage emergency situations, but says AI can add significant value to a patient's interaction with a human medical practitioner.

"A good time to use a large language model is when you're about to go see a doctor — or after you see your doctor," says Rodman. It can help you become more informed about your condition in advance of an appointment and use time with your providers efficiently, he says, giving patients the opportunity to partner with their doctor on decisions rather than engage in lengthy question and answer sessions.

"There are no downsides to better understanding your health," says Rodman.

AI in health care is here to stay

Doctors interviewed for this story acknowledge that AI and medicine are already inextricably entangled and imagine that both AI and humans will become more skilled at engaging with each other.

"My hope is that you might see AI as an extension of a human relationship," says Rodman. He imagines a future where both doctors and patients partner with AI to facilitate communication and overcome medical bureaucracy.

Still, Rodman sees risks in AI. He fears a future in which people learn of frightening diagnoses, such as cancer, from a bot rather than a human. Studies show that when health care is treated more like a business or marketplace product, people trust doctors less.

"What I hope is that this technology can be used in a way that enhances humanity in medicine," says Rodman, "and not in a way that cuts out the doctor-patient relationship."

Copyright 2026 NPR

Katia Riddle