Emerging AI Mammography Model May Enhance Clarity for Initial BI-RADS 3 and 4 Classifications


In a study involving over 12,000 Asian women, researchers found that an artificial intelligence (AI) model converted over 83 percent of false positives in patients with initial BI-RADS 3 and 4 assessments into benign BI-RADS categories.

An emerging convolutional neural network (CNN) model, built on deep learning algorithms, may significantly enhance the detection of breast cancer on mammography.

For the retrospective study, recently published in Insights into Imaging, researchers developed and assessed a two-step CNN model that combines patch-level localization of malignant lesions with image-level classification of the mammogram. The cohort comprised 12,433 Asian women, according to the study.
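The two-step design described above can be illustrated conceptually: score small patches of the image for suspicious findings, then aggregate those scores into a single image-level prediction. The sketch below is hypothetical and uses a trivial intensity-based stand-in for the patch-level CNN; the function names and parameters are illustrative, not the study's actual model.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Step 1 prep: slide a window over the image and collect patches."""
    patches = []
    h, w = image.shape
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def patch_malignancy_score(patch):
    """Placeholder for the patch-level CNN: mean intensity stands in
    for a learned malignancy probability."""
    return float(patch.mean())

def image_level_score(image, patch_size=64, stride=32):
    """Step 2: aggregate patch scores into an image-level classification.
    Max-pooling keeps the most suspicious region's score."""
    scores = [patch_malignancy_score(p)
              for p in extract_patches(image, patch_size, stride)]
    return max(scores)

# A synthetic "mammogram" with one bright region scores higher than a blank one.
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
```

In practice the patch scorer would be a trained CNN, and the aggregation step is often a learned classifier rather than a simple max; the point here is only the two-stage structure.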

The study authors found that the AI model achieved AUCs of 93.3 percent and 94.7 percent for malignant lesion detection in two testing cohorts. Across the two testing groups, the AI model offered an average sensitivity of 82.55 percent, specificity of 95.5 percent, and accuracy of 93.1 percent.
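For readers less familiar with these metrics, here is a minimal sketch of how sensitivity, specificity, and accuracy are derived from a confusion matrix. The counts below are hypothetical, chosen only to roughly mirror the magnitudes reported above; they are not the study's data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute the three summary statistics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only:
sens, spec, acc = diagnostic_metrics(tp=165, fp=45, tn=955, fn=35)
# sens = 165/200 = 0.825, spec = 955/1000 = 0.955, acc = 1120/1200 ≈ 0.933
```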


Here one can see the use of AI with mammography in two patients with dense breasts: a 48-year-old woman with a benign breast lesion (patient one) and a 42-year-old woman with invasive ductal carcinoma (patient two). (Images courtesy of Insights into Imaging.)

In subgroups of women with initial BI-RADS 3 and BI-RADS 4 assessments, the researchers said the AI model downgraded 83.1 percent of false-positive cases into benign BI-RADS categories and upgraded 54.1 percent of false-negative cases into malignant categories.

“AIS can serve as a clinically applicable tool to identify malignancy on mammography and further assist decision-making on the current BI-RADS lexicon, especially for stratified assessment within BI-RADS 3–4 categories to prevent over- and delayed diagnosis,” wrote lead study author Hongna Tan, M.D., who is affiliated with the Department of Radiology at the Henan Provincial People’s Hospital and the People’s Hospital of Zhengzhou University in Zhengzhou, China, and colleagues.

The researchers noted significantly enhanced specificity with the AI model (ranging from 90.5 percent to 96.3 percent across the validation and testing cohorts) in comparison to initial BI-RADS assessments (58.8 percent to 74.3 percent).

Three Key Takeaways

1. High diagnostic performance. The deep learning AI model demonstrated strong performance in breast cancer detection on mammography, achieving AUCs of 93.3 percent and 94.7 percent, with 82.6 percent sensitivity and 95.5 percent specificity, significantly enhancing diagnostic accuracy.

2. Improved BI-RADS stratification. The model effectively reclassified BI-RADS 3 and 4 assessments, downgrading 83.1 percent of false positives to benign and upgrading 54.1 percent of false negatives to malignant, offering a promising tool to reduce both over-diagnosis and delayed diagnoses.

3. Clinical relevance for Asian populations. Developed using a large cohort of Asian women with dense breast tissue, the AI model addresses population-specific challenges in mammography, potentially filling a gap left by prior models trained predominantly on Caucasian cohorts.

Noting low rates of mammography screening and utilization of computer-aided detection (CAD) in Asia, the study authors suggested the AI model may be advantageous in this patient population.

“Unlike previous large-scale AI-based mammographic studies that (were) mainly established on the Caucasian population, our (AI model was) established among Asians, who typically have a high proportion of dense glands and blurred lesions on mammograms,” added Tan and colleagues.

(Editor’s note: For related content, see “Mammography Study Compares False Positives Between AI and Radiologists in DBT Screening,” “Can Radiomic Parenchymal Phenotypes Derived from Mammography Enhance Risk Stratification for Breast Cancer?” and “Mammography Study Shows Key Findings with AI in DBT Screening.”)

In regard to study limitations, the authors acknowledged the low percentages of BI-RADS 1 and BI-RADS 2 cases (ranging from 2.7 percent to 8.3 percent) in the different validation and testing cohorts. The researchers also noted the use of a single mammography vendor (Hologic) for the reviewed mammograms in the study, as well as the lack of clinical information utilized in AI-assisted detection.
