In a recent interview, Rajesh Bhayana, M.D., shared insights from new research that compared the abilities of ChatGPT-3.5 and ChatGPT-4 to answer text-based questions akin to those found on a radiology board examination.
The recently released ChatGPT-4 (OpenAI) may offer more advanced reasoning, be less prone to hallucinations, and be more capable of passing a radiology board exam than ChatGPT-3.5 (OpenAI), according to newly published research.
In prospective studies published recently in Radiology, researchers assessed the performance of ChatGPT-3.5 and ChatGPT-4 in answering 150 text-based multiple-choice questions akin to those found on a radiology board examination.
The researchers found that the ChatGPT-4 model correctly answered more than 80 percent of the questions in comparison to 69 percent for ChatGPT-3.5. ChatGPT-4 also demonstrated a greater than 20 percent improvement over ChatGPT-3.5 on questions that required higher-order thinking, including description of imaging findings, classifications, and application of concepts, according to the study authors.
In a recent interview, Rajesh Bhayana, M.D., FRCPC, the lead author of the studies, said it is apparent that the technology behind ChatGPT is improving significantly.
“The fact that ChatGPT-4 performed better than ChatGPT-3.5 and had less frequent incorrect answers and also performed better with higher-order reasoning suggests the frequency of hallucinations is in fact decreasing,” noted Dr. Bhayana, an abdominal radiologist and technology lead in the Department of Medical Imaging at the University of Toronto in Canada.
(Editor’s note: For related content, see “Can ChatGPT Have an Impact in Radiology?” and “Can ChatGPT Provide Appropriate Information on Mammography and Other Breast Cancer Screening Topics?”)
While Dr. Bhayana said ChatGPT has significant potential in radiology, he cautioned that accuracy remains an issue and that use of the technology still requires rigorous fact-checking.
“It was very impressive that these models, based on the way they work and based on the fact that they are general models, performed so well in a specialty like radiology where language is so critical,” maintained Dr. Bhayana. “(But) it still does get things wrong. When it does get those things wrong, it uses very confident language. If you’re a novice and you can’t separate fact from fiction, it can be tough to know what’s right and what’s wrong. Especially for education, especially for novices when you’re looking up that information and learning something for the first time, you can’t rely on it. If you do use it, you have to always fact check it.”
For more insights from Dr. Bhayana, watch the video below.