In a recent interview, Rajesh Bhayana, M.D., shared insights from new research that compared the abilities of ChatGPT-3.5 and ChatGPT-4 to answer text-based questions akin to those found on a radiology board examination.
The recently released ChatGPT-4 (OpenAI) may offer more advanced reasoning, be less prone to hallucinations, and be more capable of passing a radiology board exam than ChatGPT-3.5 (OpenAI), according to newly published research.
In prospective studies published recently in Radiology, researchers assessed the performance of ChatGPT-3.5 and ChatGPT-4 in answering 150 text-based multiple-choice questions akin to those found on a radiology board examination.
The researchers found that the ChatGPT-4 model correctly answered more than 80 percent of the questions, compared to 69 percent for ChatGPT-3.5. ChatGPT-4 also demonstrated a greater than 20 percent improvement over ChatGPT-3.5 on questions that required higher-order thinking, including description of imaging findings, classifications, and application of concepts, according to the study authors.
In a recent interview, Rajesh Bhayana, MD, FRCPC, the lead author of the studies, said it is apparent that the technology with ChatGPT is showing significant improvement.
“The fact that ChatGPT-4 performed better than ChatGPT-3.5, had less frequent incorrect answers, and also performed better with higher-order reasoning suggests the frequency of hallucinations is in fact decreasing,” noted Dr. Bhayana, an abdominal radiologist and technology lead in the Department of Medical Imaging at the University of Toronto in Canada.
(Editor’s note: For related content, see “Can ChatGPT Have an Impact in Radiology?” and “Can ChatGPT Provide Appropriate Information on Mammography and Other Breast Cancer Screening Topics?”)
While Dr. Bhayana said there is significant potential with the use of ChatGPT in radiology, he cautioned that accuracy remains an issue and use of the technology still requires rigorous fact checking.
“It was very impressive that these models, based on the way they work and based on the fact that they are general models, performed so well in a specialty like radiology where language is so critical,” maintained Dr. Bhayana. “(But) it still does get things wrong. When it does get those things wrong, it uses very confident language. If you’re a novice and you can’t separate fact from fiction, it can be tough to know what’s right and what’s wrong. Especially for education, especially for novices when you’re looking up that information and learning something for the first time, you can’t rely on it. If you do use it, you have to always fact check it.”
For more insights from Dr. Bhayana, watch the video below.