Not all repeat images are created equal. Some are good; some are wasteful. So one radiology policy think tank introduced a classification system Wednesday that could help determine whether a repeat image is necessary.
According to Richard Duszak Jr., MD, CEO of the American College of Radiology’s Harvey L. Neiman Health Policy Institute, “repeat image” is an overused and undefined term that offers no clarity about a scan’s medical necessity. The institute, which researches medical imaging use, quality, and safety metrics, published guidance on how to categorize the various types of repeat images.
This paper, entitled “Repeat Medical Imaging: A Classification System for Meaningful Policy Analysis and Research,” aims to help quantify how many imaging studies have beneficial diagnostic value and how many can be avoided as wasteful spending.
“This classification system is in response to what we perceive as a global lack of clarity as to what repeat imaging means in medicine and when it applies to imaging,” said Duszak, who is also a practicing radiologist. “We’ve tried to be as thoughtful as we could be in creating something that would work both now and with future research as people have more and more robust data.”
The system divides medical images into four categories: supplementary, duplicative, follow-up, and unrelated imaging:
• A supplementary image — many of which are medically necessary — would occur during the same clinical encounter but use a different modality, such as a non-contrast CT scan and a renal ultrasound to identify kidney stones.
• Duplicative images involve the same modality during the same or subsequent clinical session. These images are taken for a variety of reasons, including the unavailability of previous scans or a change in the patient’s condition.
• Follow-up imaging can involve the same or different modalities during later clinical encounters, such as repeated imaging in cancer patients to verify there has been no relapse of disease.
• Unrelated imaging — scanning of the same body area with any modality — is often an unforeseen event. For example, in its paper, the institute discussed unrelated imaging in a woman who had CT scans for breast cancer staging two weeks before a car accident that prompted identical scans.
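The four categories turn on a few attributes of each study: whether it uses the same modality as the prior study, whether it occurs in the same clinical encounter, and whether it is clinically related to the prior study at all. The following is a minimal illustrative sketch of that decision logic, not the institute’s actual criteria — the field names and the simplified decision order are assumptions, and it collapses the paper’s nuance that duplicative imaging can also occur in subsequent sessions:

```python
from dataclasses import dataclass
from enum import Enum

class RepeatCategory(Enum):
    SUPPLEMENTARY = "supplementary"
    DUPLICATIVE = "duplicative"
    FOLLOW_UP = "follow-up"
    UNRELATED = "unrelated"

@dataclass
class RepeatStudy:
    same_modality: bool       # same modality as the prior study?
    same_encounter: bool      # performed during the same clinical encounter?
    related_indication: bool  # clinically related to the prior study's indication?

def classify(study: RepeatStudy) -> RepeatCategory:
    # Unrelated: same body area, but for a clinically unrelated reason
    # (e.g., trauma scans shortly after cancer-staging scans).
    if not study.related_indication:
        return RepeatCategory.UNRELATED
    # Same encounter: same modality -> duplicative; different -> supplementary
    # (e.g., a renal ultrasound after a non-contrast CT for kidney stones).
    if study.same_encounter:
        return (RepeatCategory.DUPLICATIVE if study.same_modality
                else RepeatCategory.SUPPLEMENTARY)
    # Later encounter, same or different modality -> follow-up
    # (e.g., surveillance imaging in cancer patients).
    return RepeatCategory.FOLLOW_UP
```

A researcher mining imaging records might derive these three booleans from encounter timestamps, procedure codes, and diagnosis codes, then tally categories across a population — exactly the kind of analysis the paper envisions.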
In many instances, Duszak said, how well researchers can use this classification system will depend on how integrated and mineable their electronic health records are, as well as how standardized radiology reports are.
“There’s a lot of discussion among the radiology community’s thought leaders about creating a uniform pattern for radiology reports so they can become more actionable and more meaningful,” he said. “Some of that will include specific guidelines for when radiologists should make specific follow-up recommendations and when they shouldn’t.”
The usefulness of the classification system will continue to evolve as electronic health records are more widely adopted and improved, he said.
Initially, this imaging classification system will likely be used to help radiology researchers better understand under what circumstances and in what ways repeat images are ordered and used. It could also be useful to policy makers who determine how to reimburse procedures.
However, Duszak said, as the classification system builds datasets characterized by populations and disease processes, the ultimate goal is for clinicians to use this information as a way to benchmark themselves against their peers.
“Maybe they fit inside the bell curve, and that’s great. Or, maybe they’re an outlier and need to self-reflect,” he said. “Analyzing this data could be an opportunity for improvement within a health system.”
While Duszak doesn’t anticipate any pushback to this classification system, he did say some frustration is possible over the extra work it might create for researchers and clinicians alike as they determine whether the criteria are meaningful and decide how to use them. In some cases, it could be a challenge to convince people to implement them.
“There will be no cultural change overnight,” he said. “As with any thought paper, buy-in will take some time, but we think our criteria will, hopefully, catalyze cross-disciplinary work that will be meaningful down the road for radiology.”
It’s important to remember, however, that these criteria neither support nor reject repeat images, he said. “Sometimes imaging is good, and sometimes it’s not,” Duszak said. “There are pockets where there is too much imaging and pockets where there isn’t enough. Our goal is precision. We just want it to be right.”
The Medical Imaging and Technology Alliance (MITA) commended the report, saying that a lack of clinically based guidelines has created “misguided notions that ‘repeat imaging’ is synonymous with ‘wasteful’ or ‘inappropriate imaging.’” The report, MITA said, can enhance consideration of each procedure to make sure patients have the right scan at the right time.
“Too often in Washington, misperceptions drive decision making,” MITA executive director Gail Rodriguez said in a statement. “In this case, a misperception of what ‘repeat imaging’ implies, rather than the actual facts of individual patient cases, often leads policy analysts to respond with the wrong solution.”