A new study suggests that deep learning algorithms with multimodal ultrasound have comparable specificity and sensitivity to subjective expert assessment and use of the O-RADS classification to distinguish between benign and malignant ovarian tumors.
When it comes to differentiating between benign and malignant ovarian tumors, emerging research shows deep learning algorithms that incorporate multimodal ultrasound (US) have comparable diagnostic accuracy to use of the Ovarian-Adnexal Reporting and Data System (O-RADS) and expert assessment.
In the retrospective study, recently published in Radiology, researchers assessed 422 women with ovarian tumors (304 benign tumors and 118 malignant tumors) and a mean age of 46.4 years. They found that the deep learning decision fusion and deep learning feature fusion algorithms had specificities of 80 percent and 85 percent, respectively, and that both achieved 92 percent sensitivity. Use of the O-RADS risk stratification system had a 92 percent sensitivity and an 89 percent specificity whereas expert assessment was associated with 96 percent sensitivity and 87 percent specificity, according to the study.
“Our results suggest that targeted DL algorithms could assist practitioners of US, particularly those with less experience, to achieve a performance comparable to experts. Our models could also be further developed to assess lesions found within a screening population,” wrote Wei-Wei Fong, MD, PhD, who is affiliated with the Department of Obstetrics and Gynecology at the Ruijin Hospital and the Shanghai Jiao Tong University School of Medicine in China, and colleagues.
Noting that recently developed deep learning models for detecting malignant ovarian tumors were based on a single type of ultrasound, Fong and colleagues said their deep learning algorithms were multimodal in nature. These algorithms incorporated input from color Doppler US, grayscale US showing the plane of maximal tumor dimension, and grayscale US focused on the maximum size of the solid tumor component, according to the study.
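The study itself does not detail the network architectures, but the two fusion strategies it names are standard in multimodal deep learning. A minimal sketch below, using toy linear feature extractors in place of the real CNN branches (all weights and dimensions here are illustrative assumptions), shows how feature-level fusion differs from decision-level fusion across the three ultrasound inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, weights):
    """Toy per-modality feature extractor (linear map + ReLU).
    Stands in for the CNN branch each US modality would get."""
    return np.maximum(image @ weights, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_fusion(images, branch_weights, head_weights):
    """Feature-level fusion: concatenate the per-modality feature
    vectors, then apply a single malignancy classifier head."""
    feats = np.concatenate([extract_features(img, w)
                            for img, w in zip(images, branch_weights)])
    return float(sigmoid(feats @ head_weights))

def decision_fusion(images, branch_weights, head_weights_list):
    """Decision-level fusion: classify each modality separately,
    then average the per-modality malignancy probabilities."""
    probs = [sigmoid(extract_features(img, w) @ h)
             for img, w, h in zip(images, branch_weights, head_weights_list)]
    return float(np.mean(probs))

# Three inputs mirroring the study's modalities: color Doppler US,
# grayscale US (plane of maximal dimension), grayscale US (solid
# component) -- each flattened here to a toy 16-dim vector.
images = [rng.normal(size=16) for _ in range(3)]
branch_weights = [rng.normal(size=(16, 8)) for _ in range(3)]

p_feat = feature_fusion(images, branch_weights, rng.normal(size=24))
p_dec = decision_fusion(images, branch_weights,
                        [rng.normal(size=8) for _ in range(3)])
print(p_feat, p_dec)  # each a malignancy probability in (0, 1)
```

In practice, feature fusion lets the classifier learn interactions between modalities before the decision is made, while decision fusion keeps each modality's branch independent and combines only their outputs, which is one plausible reason the two models in the study could show slightly different specificities.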
Fong and colleagues noted that the multimodal deep learning algorithms in their study are akin to the common clinical use of multiple types of US images to diagnose ovarian cancer.
The study authors acknowledge that the findings from their single-center study need further exploration and validation in future multicenter studies. Fong and colleagues also noted that the data sets in their retrospective study were limited in size. In addition, because a single US expert performed both the O-RADS assessments and the expert evaluations of the US images, the generalizability of those comparison findings may be limited, according to Fong and colleagues.