MRI-Based Deep Learning Model Bolsters Prediction of PI-RADS 3 and 4 Lesions

Offering an 87 percent sensitivity for clinically significant prostate cancer (csPCa), the deep learning model demonstrated an 86 percent AUC for predicting PI-RADS ≥ 3 lesions on prostate MRI.

An emerging open-source deep learning model may provide enhanced risk stratification for the development of clinically significant prostate cancer (csPCa).

For a new retrospective study, recently published in European Radiology, the researchers reviewed prostate cancer (PCa) lesion probability maps generated by the deep learning model after assessment of biparametric magnetic resonance imaging (bpMRI) for 151 men (mean age of 65 and mean prostate-specific antigen (PSA) level of 8.3 ng/mL).1

The study authors found that the deep learning model had an 86 percent AUC for predicting PI-RADS ≥ 3 lesions and a 91 percent AUC for predicting PI-RADS ≥ 4 lesions. The deep learning model detected 100 percent of PI-RADS ≥ 4 lesions and 93 percent of PI-RADS ≥ 3 lesions, according to the researchers.1
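As a rough illustration of how such figures are derived, the minimal Python sketch below reduces a voxel-wise lesion probability map to a patient-level suspicion score via the maximum voxel value and computes an AUC against PI-RADS-derived labels. All data are simulated; the array shapes and the max-pooling reduction are illustrative assumptions, not details from the published study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 151

# Simulated reference labels: 1 if the radiologist assigned PI-RADS >= 3.
labels_pirads3 = rng.integers(0, 2, size=n_patients)

patient_scores = []
for has_lesion in labels_pirads3:
    # Stand-in for one voxel-wise probability map (depth x height x width),
    # as a segmentation-style detection network might produce from bpMRI.
    prob_map = rng.random((16, 64, 64)) * 0.6
    if has_lesion:
        # Simulate a focal high-probability region for positive cases.
        prob_map[8, 30:34, 30:34] += 0.35
    # A common reduction: the maximum voxel probability serves as the
    # patient-level suspicion score.
    patient_scores.append(prob_map.max())

auc = roc_auc_score(labels_pirads3, np.array(patient_scores))
print(f"Simulated AUC for predicting PI-RADS >= 3: {auc:.2f}")
```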

Shown here are b1500 diffusion-weighted imaging (DWI), an apparent diffusion coefficient (ADC) map, and T2-weighted MRI from a case involving AI detection of a PCa lesion. Note the prostate segmentation (red contour) and lesion segmentation (heat map) on the T2W imaging. (Images courtesy of European Radiology.)

“The AUC of 0.86 for PI-RADS ≥ 3 is noteworthy as it approaches the performance of models trained with PIRADS labels that use a threshold of ≥ 4, despite the inherent challenge of including PI-RADS 3 lesions, which are diagnostically challenging yet significant for biopsy consideration. … The model’s accuracy for predicting PI-RADS ≥ 4 is consistent with state-of-the-art models,” wrote lead study author Patricia M. Johnson, M.D., who is affiliated with the Bernard and Irene Schwartz Center for Biomedical Imaging and the Department of Radiology at the New York University Grossman School of Medicine in New York City, and colleagues.

Overall, the researchers noted a 78 percent AUC and an 87 percent sensitivity for detecting csPCa with the deep learning model. In comparison, three reviewing radiologists (with five, two, and two years of experience with prostate MRI interpretation) provided sensitivity rates ranging from 57 to 84 percent for PI-RADS ≥ 3, according to the study authors.1

Three Key Takeaways

  1. Strong predictive accuracy for significant lesions. The open-source deep learning model demonstrated high diagnostic performance, with AUCs of 0.86 for PI-RADS ≥ 3 and 0.91 for PI-RADS ≥ 4 lesions, detecting 100 percent of PI-RADS ≥ 4 and 93 percent of PI-RADS ≥ 3 lesions.
  2. Superior sensitivity compared to radiologists. The model showed 87 percent sensitivity for detecting clinically significant prostate cancer (csPCa), outperforming radiologists whose sensitivity ranged from 57 percent to 84 percent, though radiologists had better specificity.
  3. Potential for enhanced risk stratification. With an overall AUC of 0.78 for csPCa detection, the deep learning model shows promise for improving risk stratification, especially in challenging PI-RADS 3 cases, despite noted limitations such as short follow-up and higher than recommended imaging resolution.

However, the reviewing radiologists provided better specificity rates for csPCa detection (ranging from 67 to 87 percent) in contrast to 53 percent for the deep learning model.1 In comparison to a 2024 study assessing MRI-based artificial intelligence (AI) for PCa detection, this study’s findings revealed higher specificity and lower sensitivity for the deep learning model, according to the researchers.2

“This suggests a more conservative approach to lesion classification, prioritizing specificity over sensitivity,” added Johnson and colleagues.
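That trade-off ultimately comes down to where the decision threshold is set on the model’s output. The Python sketch below, again using simulated scores and labels rather than any study data, sweeps candidate thresholds along the ROC curve to show how raising the threshold buys specificity at the cost of sensitivity.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n_patients = 151

# Simulated ground truth (1 = csPCa on pathology) and model scores:
# positives tend to score higher, but the distributions overlap.
labels = rng.integers(0, 2, size=n_patients)
scores = np.clip(0.35 * labels + rng.random(n_patients) * 0.8, 0.0, 1.0)

# Each candidate threshold is one operating point on the ROC curve;
# raising it trades sensitivity for specificity, and vice versa.
fpr, tpr, thresholds = roc_curve(labels, scores)
for fp, tp, th in zip(fpr[1::10], tpr[1::10], thresholds[1::10]):
    print(f"threshold={th:.2f}  sensitivity={tp:.2f}  specificity={1 - fp:.2f}")
```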

(Editor’s note: For related content, see “Adjunctive AI Bolsters Lesion-Level PPVs for csPCa in International bpMRI Study,” “Study: Adjunctive AI Provides Over 18 Percent Higher Lesion-Level Sensitivity on Prostate MRI” and “Multinational Study Reaffirms Value of Adjunctive AI for Prostate MRI.”)

In regard to study limitations, the authors acknowledged a short follow-up period for confirming negative MRI findings as well as a slightly higher T2W frequency-encoding resolution (0.56 mm) than the PI-RADS v2.1 recommendation (0.4 mm).1

References

  1. Johnson PM, Tong A, Ginocchio L, et al. External evaluation of an open-source deep learning model for prostate cancer detection on bi-parametric MRI. Eur Radiol. 2025 Aug 3. doi: 10.1007/s00330-025-11865-x. Online ahead of print.
  2. Saha A, Bosma JS, Twilt JJ, et al. Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): an international, paired, non-inferiority, confirmatory study. Lancet Oncol. 2024;25(7):879-887.
