In a recent interview, Rajesh Bhayana, M.D., shared insights from new research that compared the abilities of ChatGPT-3.5 and ChatGPT-4 to answer text-based questions akin to those found on a radiology board examination.
The recently released ChatGPT-4 (OpenAI) may offer more advanced reasoning, be less prone to hallucinations, and be more capable of passing a radiology board exam than ChatGPT-3.5 (OpenAI), according to newly published research.
In prospective studies published recently in Radiology, researchers assessed the performance of ChatGPT-3.5 and ChatGPT-4 in answering 150 text-based multiple-choice questions akin to those found on a radiology board examination.
The researchers found that the ChatGPT-4 model correctly answered more than 80 percent of the questions, compared to 69 percent for ChatGPT-3.5. ChatGPT-4 also demonstrated a greater than 20 percent improvement over ChatGPT-3.5 on questions that required higher-order thinking, including description of imaging findings, classification, and application of concepts, according to the study authors.
In a recent interview, Rajesh Bhayana, MD, FRCPC, the lead author of the studies, said the technology behind ChatGPT is showing significant improvement.
“The fact that ChatGPT-4 performed better than ChatGPT-3.5, had less frequent incorrect answers, and also performed better with higher-order reasoning suggests the frequency of hallucinations is in fact decreasing,” noted Dr. Bhayana, an abdominal radiologist and technology lead in the Department of Medical Imaging at the University of Toronto in Canada.
(Editor’s note: For related content, see “Can ChatGPT Have an Impact in Radiology?” and “Can ChatGPT Provide Appropriate Information on Mammography and Other Breast Cancer Screening Topics?”)
While Dr. Bhayana said there is significant potential for the use of ChatGPT in radiology, he cautioned that accuracy remains an issue and that use of the technology still requires rigorous fact-checking.
“It was very impressive that these models, based on the way they work and based on the fact that they are general models, performed so well in a specialty like radiology where language is so critical,” maintained Dr. Bhayana. “(But) it still does get things wrong. When it does get those things wrong, it uses very confident language. If you’re a novice and you can’t separate fact from fiction, it can be tough to know what’s right and what’s wrong. Especially for education, especially for novices when you’re looking up that information and learning something for the first time, you can’t rely on it. If you do use it, you have to always fact check it.”
For more insights from Dr. Bhayana, watch the video below.