In a video interview discussing one of her recent lectures at the Radiological Society of North America (RSNA) conference, Nina Kottler, M.D., M.S., noted how combining artificial intelligence (AI) with radiologist experience can help mitigate biases in the development of AI algorithms as well as educational biases inherent to a radiologist’s training and experience.
Combining artificial intelligence (AI) detection algorithms with radiologist assessment may improve detection rates anywhere between two and 45 percent, according to Nina Kottler, M.D., M.S., the Associate Chief Medical Officer of Clinical AI and Vice President of Clinical Operations at Radiology Partners.
However, in a recent video interview, Dr. Kottler emphasized that recognition and appropriate education on the potential biases of AI algorithms and inherent radiologist biases are key to maximizing the impact of AI on patient outcomes.
Dr. Kottler noted that factors such as the patient history, the exam type ordered, an unusual lesion location and distracting pathology can contribute to variability in radiologist assessment. She also pointed out that educational biases stemming from one’s experience, recently missed diagnoses and satisfaction of search can also factor into image interpretation.
However, Dr. Kottler added there are also biases or limitations with the development of AI algorithms as well.
“Most AI systems are trained on one series or a couple of images within a total image set,” noted Dr. Kottler, who lectured about AI at the recent Radiological Society of North America (RSNA) conference. “If you’re looking at a chest X-ray, a lot of the chest algorithms are trained on just the frontal image. If you’re looking at a head CT for intracranial hemorrhage, most of those (AI) algorithms are trained just on the axial, soft tissue, thinnest slice series with no contrast. It's not looking at all of the other components.”
Dr. Kottler emphasized the need for education that goes beyond the sensitivity rates and area under the curve (AUC) performance of AI systems.
“We spend a lot of time in our practice trying to understand the bias of the AI (model)," maintained Dr. Kottler. "Where is it overcalling things? Where is it under-calling things? We take that information to our radiologists and let them know. We don’t just give them information like (this AI system) has a 95 percent AUC, it is going to be great. No. We go to them and say (this system) may overcall intracranial hemorrhage when the patient is moving and there is a streak artifact in the exam. It’s going to miss some of the findings in X, Y and Z cases. You bring those things to the radiologists and you’re educating them on the biases of the AI. You’re making it so (radiology) and AI are stronger together.”
For more insights from Dr. Kottler, watch the video below.