
Chest X-ray Interpretation Better with AI


New deep learning tool is designed to help radiologists evaluate chest X-rays regardless of where they work.

A new deep learning model could help radiologists in any facility interpret chest X-rays.

In a new study published in The Lancet Digital Health, investigators from Australia described their new tool, which is designed to alleviate heavy workloads, reduce errors, and make it easier for providers without specialty thoracic training to read these scans.

Chest X-rays are already the most common imaging study worldwide, and that number is growing, said the team from annalise.ai, the company that created the AI model. Developing a tool to help shoulder that workload will be critical.

“The ability of the AI model to identify findings on chest X-rays is very encouraging,” said Catherine Jones, MBBS, thoracic radiologist, chest lead at annalise.ai, and lead study author. “Radiologists and non-radiology clinicians incorporate clinical factors into decision-making, but ultimately rely on perception of findings to underpin our clinical interpretation.”

For their study, Jones’ team trained their deep learning model on 821,681 chest X-ray images taken from 520,014 studies in 284,649 patients. They evaluated radiologists’ performance alone and compared it with how the same radiologists performed with the model’s assistance.

Overall, 20 radiologists assessed 2,568 chest X-rays both with and without the tool. Based on the team’s assessment, the tool significantly helped radiologists improve their classification for 102 of 127 clinical findings (80 percent), and it was statistically non-inferior for 19 findings (15 percent).

Additionally, assisted radiologists had an average area under the curve of 0.808 compared with 0.713 for unassisted radiologists. The model alone had an area under the curve of 0.957, and model classification by itself was more accurate than unassisted radiologists for 117 of 125 clinical findings (94 percent). It was also non-inferior to unassisted radiologists for all clinical findings.
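The area under the curve (AUC) values above summarize how well reader confidence scores separate positive from negative cases for a given finding. As an illustrative sketch only (this is not the study’s code, and the labels and scores below are made-up placeholders), a per-finding AUC comparison for unassisted versus AI-assisted reads could be computed in Python with scikit-learn:

```python
# Hypothetical example of comparing AUC for unassisted vs. assisted reads.
# All data here are simulated; the study's actual readers, cases, and
# scoring scheme are not reproduced.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated ground truth for one clinical finding across 100 cases
y_true = rng.integers(0, 2, size=100)

# Simulated reader confidence scores (0-1), with assistance modeled as
# scores that track the ground truth more closely
scores_unassisted = np.clip(y_true * 0.5 + rng.normal(0.3, 0.25, 100), 0, 1)
scores_assisted = np.clip(y_true * 0.7 + rng.normal(0.2, 0.20, 100), 0, 1)

print(f"Unassisted AUC: {roc_auc_score(y_true, scores_unassisted):.3f}")
print(f"Assisted AUC:   {roc_auc_score(y_true, scores_assisted):.3f}")
```

In a multi-reader study such as this one, the same comparison would be repeated per finding and per reader, with statistical testing to establish superiority or non-inferiority.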

“Radiologist accuracy improved across a large number of clinical chest X-ray findings when assisted by the deep-learning model,” the team said. “Effective implementation of the model has the potential to augment clinicians and improve clinical practice.”

Further research, they said, is needed to confirm the efficacy and accuracy of their tool in real-world settings.

