New research suggests that adjunctive artificial intelligence (AI) may sharpen radiologists' focus on suspicious areas of screening mammograms.
For the retrospective study, recently published in Radiology, researchers compared the use of adjunctive AI software (Transpara version 2.1.0, ScreenPoint Medical) to unassisted radiologist interpretation for 150 screening mammograms. There were 12 reviewing radiologists with screening mammography experience ranging from four to 32 years, according to the study. The study authors also reviewed data from eye-tracking recordings of the radiologists.
The study authors found no significant difference in reading time between unassisted reading and adjunctive AI use (29.4 seconds vs. 30.8 seconds). They also noted comparable sensitivity (81.7 percent vs. 87.2 percent) and specificity (89 percent vs. 91.1 percent).
However, adjunctive AI led to increased fixation time on lesions (4.4 seconds vs. 5.4 seconds) and lower fixation coverage of the rest of the breast (11.1 percent vs. 9.5 percent), according to the study authors. They also pointed out a 4 percent increase in AUC with adjunctive AI.
“Despite no observed difference in overall reading time, eye tracking data revealed that radiologists examined the images less comprehensively when assisted by AI. Breast fixation coverage was lower, while fixation time in lesion regions was higher, suggesting that AI helped radiologists spend more time examining diagnostically relevant areas,” wrote lead study author Jessie J.J. Gommers, M.D., who is affiliated with the Department of Medical Imaging at Radboud University Medical Center in Nijmegen, the Netherlands, and colleagues.
The study authors also noted no statistically significant difference in time to first fixation within the lesion region between unassisted reading and adjunctive AI use (3.4 seconds vs. 3.8 seconds).
“This may be because, with AI support, radiologists initially followed their usual search strategy while considering the AI-assigned examination category before activating the AI markers. Only after clicking the AI marker button did their attention shift primarily to the AI-flagged regions while also reviewing some additional areas,” added Gommers and colleagues.
Three Key Takeaways
1. AI enhances focus on suspicious lesions. Adjunctive AI led to increased fixation time on lesions and less time on non-suspicious areas, suggesting AI helps radiologists concentrate more on diagnostically relevant regions.
2. No significant impact on reading efficiency or accuracy. There was no significant difference in overall reading time, sensitivity, or specificity between AI-assisted and unassisted interpretations, indicating that AI does not hinder performance but subtly shifts radiologist behavior.
3. Behavioral changes may reflect higher perceived prevalence. Eye-tracking data showed reduced breast coverage and focused lesion inspection when AI was used. This behavioral shift may be influenced by the cancer-enriched dataset, possibly affecting radiologists' perception of disease likelihood and raising questions about how AI changes search patterns in real-world, low-prevalence settings.
In an accompanying editorial, Jeremy M. Wolfe, Ph.D., noted the cancer-enriched dataset utilized in the study and the “modest” improvement in AUC with adjunctive AI. In addition to calling for evaluation of the AI software in a real-world mammography screening cohort with lower cancer prevalence, Dr. Wolfe emphasized that behavioral research is an important consideration as well.
“If we suppose that the prevalence of disease is about 0.4% in a North American screening population, triaging even 50% of the cases would increase that prevalence twofold to only 0.8% in the remaining cases. It would be interesting to know if readers behaved as if the radiologists’ estimates of the actual prevalence were much higher. … This matters because prevalence has a significant effect on search behavior with false-negative errors rising as prevalence falls,” posited Dr. Wolfe, a professor of ophthalmology and radiology at Harvard Medical School.
(Editor’s note: For related content, see “Considering Breast- and Lesion-Level Assessments with Mammography AI: What New Research Reveals,” “New Study Examines Key Factors with False Negatives on AI Mammography Analysis” and “Mammography AI Platform for Five-Year Breast Cancer Risk Prediction Gets FDA De Novo Authorization.”)
With regard to study limitations, the study authors acknowledged the cancer-enriched dataset and the reviewing radiologists' lack of access to prior exams or other clinical data. They also noted the use of a mammography system from a single vendor and the assessment of only one AI system.