
5 AI Lessons Radiology Can Learn from Boeing Crashes


Radiology can learn from aviation industry AI failures.

Artificial intelligence (AI) isn’t a new term in radiology anymore – it’s been a buzzword at conferences for nearly half a decade. But that doesn’t mean the technologies have been widely implemented. Industry experts agree there is still much to learn about effectively designing these tools, and putting them into proper practice will require even greater attention to detail.

Given that AI is still largely uncharted territory, radiology has the opportunity to learn from the mistakes other sectors have already made. In particular, the airline industry has a long track record of using AI to improve safety. Consequently, when errors do occur, understanding what went wrong in order to sidestep the same problems can be vital.

In recent months, two Boeing 737 MAX airplanes crashed due to AI failure. John Mongan, M.D., Ph.D., associate professor of radiology at the University of California, San Francisco, outlined the lessons radiology can learn from these tragedies in a March 18 article published in Radiology: Artificial Intelligence.

“Medicine should learn from aviation’s failures,” he wrote. “This is particularly true with respect to AI and automated systems, which are more broadly adopted in aviation than in medicine.”

To avoid making critical errors that could cause patient harm in the future, Mongan advised radiology practices and departments pursuing AI solutions to remember five key points:

1. Malfunctioning AI tools can create unforeseen safety hazards. The worst-case scenario of AI gone awry isn’t simply that your system loses the benefits of AI – the consequence can be real harm. For Boeing, neither plane that crashed was equipped with the optional sensors that could have alerted pilots to the malfunctioning safety system.

For radiology, Mongan said, this risk could come in the form of an improperly designed workflow algorithm that prioritizes cases from least urgent to most critical, rather than putting the most acute cases at the top of the worklist, potentially leading to significant harm.

“This may seem unlikely but would happen if the algorithm and implementation mismatched on whether the highest or lowest numeric priority value represents greatest acuity,” he said. “Adding AI introduces new possibilities for failure; risks of these failures must be identified and mitigated, which can be difficult prospectively.”
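To make that mismatch concrete, consider the following purely illustrative Python sketch – it is not from Mongan’s article, and the field names and scores are hypothetical. It shows a worklist builder that must agree with the model on whether a higher score means greater acuity, and an integration test on the assembled worklist that would catch an inverted convention before it reached clinical use.

def ai_acuity_score(study):
    # Assume the model outputs a score where a HIGHER value means MORE acute.
    return study["model_score"]

def build_worklist(studies):
    # The worklist must match the model's convention: most acute cases first.
    # Omitting reverse=True would silently place the most urgent cases last.
    return sorted(studies, key=ai_acuity_score, reverse=True)

def test_most_acute_case_is_first():
    studies = [
        {"id": "routine_follow_up", "model_score": 0.1},
        {"id": "suspected_hemorrhage", "model_score": 0.9},
    ]
    # Testing the assembled worklist, not just the model in isolation,
    # is what exposes a mismatched priority convention.
    assert build_worklist(studies)[0]["id"] == "suspected_hemorrhage"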

2. As much attention must be given to connecting inputs and outputs as is dedicated to algorithm development itself. Even though the 737’s safety system was designed correctly, crashes still occurred due to incorrect input data. Similarly, in radiology, training an AI tool on low-quality data can decrease the quality of patient care.

Consequently, it’s critical to test not only isolated algorithms but the fully integrated system, he said. He also recommended setting up alerts that are triggered and clearly communicated whenever an AI system detects inconsistent or potentially erroneous inputs.

“An alert should be clearly and reliably communicated to the people able to immediately address the issue,” he said. “These should be basic, required aspects of AI systems, not options or add-ons.”
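As a hypothetical sketch of what such a check might look like – the thresholds, field names, and notify callback are assumptions for illustration, not part of any particular product – input validation can run before inference and route a clear message to someone who can act on it:

EXPECTED_MODALITY = "CT"

def validate_inputs(study):
    # Flag inputs that are inconsistent with what the algorithm expects.
    problems = []
    if study.get("modality") != EXPECTED_MODALITY:
        problems.append(f"unexpected modality: {study.get('modality')}")
    if study.get("slice_count", 0) < 10:
        problems.append("fewer slices than the algorithm was designed for")
    return problems

def run_with_alerting(study, model, notify):
    problems = validate_inputs(study)
    if problems:
        # Alert a person who can immediately address the issue, rather
        # than silently producing an unreliable result.
        notify(f"AI input check failed for study {study['id']}: {problems}")
        return None
    return model(study)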

3. Tell everyone when you add AI to your workflow, and train them. An AI tool can only work as well as the people charged with supervising, maintaining, and correcting it. The pilots and crew of the Boeing flights were unaware a safety system existed on their planes, immediately putting them at a disadvantage.

Within radiology, notification and training can be critical in systems that are intended to be transparent to users, such as AI-based image reconstruction algorithms, Mongan said.

“If people are uninformed about AI in their workflow, the likelihood that failure of the system will be detected decreases, and the risk associated with failures increases,” he explained.

4. Include a way to disable closed-loop systems. Such systems, including the planes’ safety system, can introduce added risk because they initiate actions without any human involvement. Currently, most radiology AI tools aren’t closed-loop systems; they provide triage, prioritization, or diagnostic decision support. But that doesn’t preclude the possibility in the future.

If any closed-loop AI tools are developed, Mongan said, designers should create a type of fail-safe mechanism that can be activated to side-step potentially harmful situations. Boeing’s planes didn’t have this type of measure.

“To mitigate this additional risk, closed-loop systems should clearly alert users when they are initiating actions, systems should accompany the alerts with a simple and rapid mechanism for disabling the system,” he said, “and the system should remain disabled long enough for the failure to be addressed.”
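One way to read those requirements in code – again a hypothetical sketch, with illustrative names and timings rather than anything from the article – is a controller that announces every automated action, can be disabled with a single call, and stays disabled long enough for the failure to be investigated:

import time

class ClosedLoopController:
    def __init__(self, notify):
        self.notify = notify
        self.disabled_until = 0.0

    def disable(self, hours=24):
        # Remain disabled long enough for the failure to be addressed.
        self.disabled_until = time.time() + hours * 3600
        self.notify("Closed-loop actions disabled by user.")

    def act(self, action):
        if time.time() < self.disabled_until:
            self.notify(f"Suppressed automated action while disabled: {action}")
            return False
        # Clearly alert users whenever an automated action is initiated.
        self.notify(f"Initiating automated action: {action}")
        return True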

5. Don’t rely on regulation alone. The airline industry has been stringently regulated by the U.S. Federal Aviation Administration since the 1980s, but even with these guidelines and measures in place, an analysis of these crashes revealed that Boeing underreported and mischaracterized the performance and risks of its safety system.

This could be significant in radiology because the aviation industry’s regulatory environment has served as a model for the U.S. Food & Drug Administration’s (FDA) approach to regulating AI in healthcare. Rather than certifying individual AI tools, the FDA has opted to certify software developers, streamlining application reviews in a process called Pre-Cert.

Consequently, Mongan said, regulation is no guarantee of proper performance or that critical information will always be shared.

“Regulation is necessary, but may not be sufficient to protect patient safety,” he advised, “particularly when subject to the conflicts of interest inherent in delegated regulatory review.”

Although these mistakes might appear obvious in hindsight, they still occurred easily, he said. With this knowledge, radiology is armed to potentially prevent negative impacts on patient care.

“In retrospect, these errors may seem obvious, but they occurred in a mature field with a strong safety culture, and similar failures could easily recur in the developing area of AI in radiology,” Mongan said. “We have the opportunity to learn from these failures now, before there is widespread clinical implementation of AI in radiology. If we miss this chance, our future patients will be needlessly at risk for harm from the same mistakes that brought down these planes.”
