An analysis found that the interactive chatbot ChatGPT outperforms doctors in medical consultations, with its responses rated higher in both quality and empathy.


On the 1st, according to the University of California San Diego (UCSD), a team led by Professor John Ayers of the Qualcomm Institute evaluated responses from doctors and from ChatGPT to the same medical questions and found that ChatGPT's answers were rated far higher in both quality and empathy.


[Photo: ChatGPT logo. Photo by Yonhap News]

According to results published in JAMA Internal Medicine, a journal of the American Medical Association (JAMA), the research team randomly selected 195 questions from the online community Reddit's 'AskDocs' board. AskDocs is a forum where approximately 452,000 members post medical questions and licensed medical professionals provide answers.


The research team submitted both the doctors' answers and ChatGPT's answers to the randomly selected questions to a panel of three medical experts for comparative evaluation. The panel was blinded to whether each answer came from a doctor or from ChatGPT and was asked to assess the quality and empathy of the responses.


As a result, 79% of all expert evaluations indicated that ChatGPT’s responses were better than those of doctors. Regarding response quality, 78.5% of ChatGPT’s answers were rated as ‘good’ or ‘very good,’ whereas only 22.1% of doctors’ answers received ‘good’ or ‘very good’ ratings.


In terms of information quality, the proportion of ChatGPT responses rated 'good' or 'very good' was 3.6 times that of doctors, and the proportion rated highly for empathy was 9.8 times that of doctors. In the experts' empathy evaluation, 45.1% of ChatGPT's responses were rated 'empathetic' or 'very empathetic,' while only 4.6% of doctors' answers received those ratings.
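The reported multiples follow directly from the percentages quoted above. A minimal sketch checking that arithmetic (all figures taken from the article; variable names are my own):

```python
# Percentages of responses receiving the top two ratings, as quoted in the article.
chatgpt_quality = 78.5   # % of ChatGPT answers rated 'good' or 'very good'
doctor_quality = 22.1    # % of doctors' answers rated 'good' or 'very good'
chatgpt_empathy = 45.1   # % of ChatGPT answers rated 'empathetic' or 'very empathetic'
doctor_empathy = 4.6     # % of doctors' answers rated 'empathetic' or 'very empathetic'

# Prevalence ratios: how many times more often ChatGPT received top ratings.
quality_ratio = chatgpt_quality / doctor_quality
empathy_ratio = chatgpt_empathy / doctor_empathy

print(f"Quality ratio: {quality_ratio:.1f}x")   # matches the reported 3.6x
print(f"Empathy ratio: {empathy_ratio:.1f}x")   # matches the reported 9.8x
```

Dividing the two percentages reproduces the 3.6x and 9.8x figures, confirming the article's ratios are simple prevalence ratios of the top-two rating shares.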


The research team stated that the purpose of the study was to explore whether ChatGPT can accurately answer the questions patients ask doctors and, if so, whether integrating AI into the medical system could improve doctors' responses to patients and reduce their workload.


[Photo: In a medical expert evaluation, 78.5% of ChatGPT's responses were rated as "Good" or "Very Good," whereas only 22.1% of doctors' responses received "Good" or "Very Good" ratings. Photo by Yonhap News]

They further anticipated that doctors could receive personalized medical advice from AI for patient-centered care, and that AI could also directly communicate with patients. Especially with the increase in telemedicine during the COVID-19 pandemic, AI is expected to significantly reduce doctors’ burdens.



Co-author of the paper, Professor Adam Poliak, said, “This study compared ChatGPT and doctors, but completely replacing doctors is not the ultimate solution,” adding, “Instead, doctors utilizing ChatGPT will be the answer to better empathetic care.”


This content was produced with the assistance of AI translation services.

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
