
ChatGPT is a hot topic these days, with some championing the evolving AI-powered technology and others urging caution about how and where it is used. New research finds that ChatGPT is more empathetic and provides higher-quality answers to real-world health questions than human doctors, raising the question: Can doctors be replaced?
Virtual healthcare surged at the height of the COVID-19 pandemic as a way to give people access to medical professionals during quarantine or lockdown.
But the rise of virtual care during the pandemic has put additional pressure on physicians, who have seen a 1.6-fold increase in electronic patient messages, each adding more than two minutes of electronic health record work plus additional after-hours work. The increased workload has led to 62% of U.S. physicians reporting symptoms of burnout, a record high, and made it more likely that patients' messages will go unanswered.
This has prompted researchers at the University of California (UC) San Diego to consider the role of artificial intelligence in medicine, and ChatGPT in particular. They wanted to see if ChatGPT could accurately answer the types of questions patients were asking their doctors.
“ChatGPT might be able to pass a medical licensing exam,” said co-author Dr. Davey Smith, a physician and professor at UC San Diego School of Medicine, “but answering patients’ questions directly, accurately, and compassionately is another story.”
To obtain a large, diverse sample group, the research team turned to Reddit’s AskDocs subreddit (r/AskDocs), which has more than 480,000 members who post medical questions answered by licensed, verified healthcare professionals.
The researchers randomly sampled 195 unique exchanges from r/AskDocs, fed the original questions into ChatGPT (version 3.5), and asked it to generate responses. Three licensed healthcare professionals then evaluated each question alongside both responses, without knowing which came from a physician and which from ChatGPT.
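The article does not describe how the questions were actually submitted to ChatGPT. Purely as an illustrative sketch, assuming access through OpenAI's Python client and the gpt-3.5-turbo model (both assumptions on our part, not details from the study), the generation step might look something like this:

```python
# Illustrative sketch only: send a patient-style question to a
# ChatGPT-3.5-class model and return its free-text reply. The model
# name, prompt wording, and client usage are assumptions; the study
# does not document its exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_patient_question(question: str) -> str:
    """Return the model's reply to a single patient question."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_patient_question(
        "I accidentally swallowed a toothpick. Should I be worried?"
    ))
```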
Evaluators were first asked which response was “better.” They then rated the quality of the information provided (very poor, poor, acceptable, good, or very good) and the empathy or bedside manner conveyed (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Evaluators preferred ChatGPT’s responses 79% of the time.
“ChatGPT messages responded with nuanced and precise information that often addressed aspects of a patient’s problem better than a physician’s response,” said co-author Jessica Kelly.
ChatGPT was also found to provide higher-quality responses: its answers were 3.6 times more likely than physicians’ to be rated “good” or “very good” (78.5% vs. 22.1%). The AI also came across as more empathetic, with responses 9.8 times more likely than physicians’ to be rated “empathetic” or “very empathetic” (45.1% vs. 4.6%).
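For readers who want to check the arithmetic, those multipliers are simply the ratios of the two rating proportions quoted above (the study's own statistical analysis may be more involved; this is only a consistency check):

```python
# Consistency check: the "times more likely" figures above are the
# ratios of ChatGPT's rating proportions to the physicians'.
chatgpt_good, doctors_good = 0.785, 0.221  # rated "good" or "very good"
chatgpt_emp, doctors_emp = 0.451, 0.046    # rated "empathetic" or "very empathetic"

print(f"quality: {chatgpt_good / doctors_good:.1f}x")  # -> 3.6x
print(f"empathy: {chatgpt_emp / doctors_emp:.1f}x")    # -> 9.8x
```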
That doesn’t mean doctors are dispensable, the researchers said; rather, they can use ChatGPT as a learning tool to improve their practice.
“While our study pits ChatGPT against your doctor, the ultimate solution isn’t to ditch your doctor entirely,” said study co-author Adam Poliak. “Instead, physicians leveraging ChatGPT are the answer to better and more compassionate care.”
Investing in ChatGPT and other artificial intelligence tools could have a positive impact on patient health and physician performance, the researchers say.
“We can use these technologies to train physicians to communicate in a patient-centered way, eliminate health disparities suffered by minority groups who often seek health care through messaging, and assist physicians,” said co-author Mark Dredze.
As a result of the findings, the researchers are considering randomized controlled trials to gauge how the use of AI in medicine could affect physician and patient outcomes.
The study was published in the journal JAMA Internal Medicine.