The industry should pay more attention to the interplay between providers and AI tools, says Elizabeth Krupinski, Ph.D., during SIIM 2020.
Radiology has always been on the cutting edge of technology, and the latest tool capturing the industry's attention is artificial intelligence (AI). But, as the industry continues to push forward, finding new ways to make use of these algorithms, it is critical for radiology leaders to also pay attention to how AI is affecting the provider, said one industry expert.
Using AI is about more than data results in a lab, said Elizabeth Krupinski, Ph.D., professor and vice chair of research in the radiology and imaging sciences department at Emory University School of Medicine. It is about how the technology touches the radiologist’s thinking, perceptions, workflow, and other human factors.
“We really need to understand and appreciate how technology – no matter how cool it is to develop and implement – impacts the human user at the end,” she said, during the 2020 Dwyer Lecture at the SIIM 2020 virtual meeting. “And, after that, how it impacts patient care.”
The Effect on Radiology
Despite AI being widely used throughout the industry for several years now, there has not been much investigation into how AI results are presented during clinical use or how these tools affect providers in cognitive, perceptual, or ergonomic ways. There is also a lack of data on how AI touches patient outcomes, radiologist training, and decision-making, she added.
“Basically, what’s missing in a lot of the AI, deep learning, and some of the technology development – even hardware – is a consideration of the users,” she explained. “And, I think that ends up being critically important.”
Throughout the industry, she said, a growing number of radiologists are turning to AI tools to help alleviate the stress, fatigue, and burnout that have become increasingly prevalent in the specialty, particularly for complex research efforts. But, even when AI is used in investigations, sufficient context about clinical significance or relevance is rarely included.
In many cases, Krupinski said, investigators will identify a particular type of image or finding, apply an AI scheme, and, then, point to the strength of their outcomes.
“But, then, when you read it, there’s nothing in the significance or the background section that says why they’re doing this,” she said. “Is it a problem right now? Do radiologists have a problem with that particular type of diagnosis or type of image?”
Determining AI’s Clinical Impact
Collecting a true assessment of any new method, initiative, or innovation in an actual clinical setting can be complicated and costly. But, quasi-experimental designs could be options for reaching this goal, she said.
Overall, she recommended three types of quasi-experimental designs:
Pre-post with nonequivalent control: In a direct comparison, a group with an AI intervention is analyzed – both before and after adoption – against a non-randomized, similar control group that does not receive AI.
Interrupted time series: Investigators measure outcomes at several consecutive time points (typically three to eight) both before and after an intervention is conducted. All measurements occur within the same group, which functions as its own control.
Stepped-wedge: The intervention is launched at different sites over a set time period, with control groups eventually receiving the intervention and serving as their own controls. This design allows investigators to compare the intervention across sites and groups, enabling assessment across institutions, practices, and patient groups.
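To make the stepped-wedge idea concrete, the staggered rollout can be sketched as a simple schedule matrix. This is a minimal illustration, not anything Krupinski presented; the site and period counts, and the one-site-per-period crossover rule, are illustrative assumptions.

```python
# Hypothetical stepped-wedge rollout: each site (e.g., a radiology practice)
# crosses over from control (0) to the AI intervention (1) at a different
# period, so every site eventually receives the intervention and serves as
# its own control.

def stepped_wedge_schedule(n_sites: int, n_periods: int) -> list[list[int]]:
    """Return one row per site of 0 (control) / 1 (intervention) flags.

    Site i switches to the intervention at period i + 1, so period 0 is
    all-control and the final period is all-intervention.
    """
    if n_periods < n_sites + 1:
        raise ValueError("need at least n_sites + 1 periods for a full wedge")
    schedule = []
    for site in range(n_sites):
        crossover = site + 1  # staggered start: one site switches per period
        schedule.append([1 if t >= crossover else 0 for t in range(n_periods)])
    return schedule

for row in stepped_wedge_schedule(3, 4):
    print(row)
# → [0, 1, 1, 1]
#   [0, 0, 1, 1]
#   [0, 0, 0, 1]
```

Comparing outcomes down each column (same period, mixed exposure) and along each row (same site, before vs. after crossover) is what lets the design separate the intervention's effect from site and time effects.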
Implementing these designs offers several advantages, she said. Although they can introduce biases and errors, they allow for faster intervention uptake, enhanced acceptability, and reduced cost.