Radiology Leaders Urge Proceeding with Caution with Autonomous AI

July 6, 2020

ACR and RSNA leaders request the FDA slow its momentum with AI tools that work without radiologist oversight.

Artificial intelligence (AI) tools that operate autonomously in radiology are not yet safe, and any pursuit of their use should proceed carefully, two industry leadership organizations have told the U.S. Food & Drug Administration (FDA).

In a letter detailing their reservations, leaders of the American College of Radiology (ACR) and the Radiological Society of North America (RSNA) expressed concern that these AI tools have not been sufficiently tested to ensure their safety – and that no framework is currently in place to address that problem. The message comes in response to a two-day FDA workshop convened earlier this year to assess AI’s integration into medical imaging.

“The ACR and RSNA believe it is unlikely the FDA could provide reasonable assurance of the safety and effectiveness of autonomous AI in radiology patient care without more rigorous testing, surveillance, and other oversight mechanisms throughout the total product life cycle,” said Howard Fleishon, M.D., ACR chairman, and Bruce Haffty, M.D., RSNA chairman, in the letter. “We believe this level of safety is a long way off, and while AI is poised to assist physicians in the care of their patients, autonomously functioning AI algorithms should not be implemented at this time.”


To avoid potential problems with patient safety, care, and outcomes, the two leaders urged the FDA to proceed slowly – specifically, to wait until more radiologists are comfortable using AI algorithms.

Current AI Use

Much of their concern stems from the relatively limited adoption of AI algorithms in the industry to date. According to a recent ACR survey, only 30 percent of radiologists use AI in clinical practice, averaging 1.2 algorithms per radiologist. Most providers instead use the tools solely for research.

Approved algorithms on the market are not intended to work without human oversight. Instead, they assist in image interpretation and exam prioritization, as well as help with some administrative tasks. In most cases, the tools were not tested to be generalizable across a heterogeneous patient population or across varying modalities, the leaders said.

“It is not surprising that 93 percent of radiologists using AI in our survey said that the results of AI in their practices are inconsistent, and 95 percent said they would not use the AI algorithms without physician overread,” they wrote.

Recommendations to the FDA

If the goal is to continue pursuing autonomous AI in radiology, Fleishon and Haffty said, there are several steps the FDA should consider. They recommended:

  • Testing all AI algorithms with multi-site, heterogeneous data sets to ensure a minimum level of generalizability across patient populations, imaging equipment, and protocols.
  • Requiring a rigorous pre-market approval process to ensure patient safety across heterogeneous patient populations and equipment types.
  • Developing post-market monitoring requirements, possibly using trusted third-party registries, to protect patients and the public.
  • Putting post-market oversight mechanisms in place to ensure algorithms continue to function properly over time.
  • Requiring continuous monitoring of all algorithms used in clinical practice – covering patient demographics, equipment type, and imaging protocols – with that monitoring performed by interpreting physicians.
  • Establishing a close relationship between the FDA and radiology leadership organizations to pinpoint practical use cases for autonomous AI.

Safeguarding Radiologist Involvement

Most importantly, Fleishon and Haffty pointed out, safe image interpretation – such as the analysis of screening mammography – still requires active radiologist involvement, and removing the provider, and the context he or she brings to each interpretation, could have negative consequences.

“We are not confident that these examinations can be safely excluded from radiologist interpretation while maintaining the current level of patient safety,” they wrote.

Overall, they recommended the FDA focus its efforts on algorithms that help providers with population health as a way of integrating autonomous AI into radiology care – for example, concentrating on algorithms that can incidentally detect and quantify potentially undiagnosed chronic disease. The human touch is still necessary when examining, recognizing, and characterizing disease, they said.

“The value that human interpretation with independent medical judgement brings to patient care cannot currently be replaced,” they said.