
Perspectives on Preliminary Interpretations, Radiology Reports and Quality Assurance

Could adjustments to quality assurance review facilitate improved teamwork between radiologists and referring clinicians?

If you have read this blog much, you probably know I am a big believer in preplanning. Whenever I hear someone propose a predictably poor course of action on the grounds of “let’s try X and see how it goes,” I practically hear them suggesting that we can’t really know it’s a bad idea to step off a cliff until we have seen it attempted.

Still, often functioning as a mere cog in the health-care machine, I regularly have to abide by bad plans in which I had zero input or, worse, by an evident lack of planning by anyone in charge. I (and many docs like me) commonly play the role of Cassandra, clearly seeing multiple ways things aren’t likely to go well. We rarely have the opportunity to share these insights, let alone proposals for improvement, with anyone able or inclined to make changes.

Such foresight is not exhaustive. In addition to all the pitfalls we predict, we commonly find ourselves blundering into others. This is partially due to satisfaction of search (radiologists might know it better as “happy eyes”). If I have already thought of a dozen reasons why doing X is foolish, I probably won’t try very hard to come up with more. Subsequently experiencing reason #13 thus doesn’t strike me as a real failure of foresight.

The silver lining is that, if you do blunder into a bad outcome, you have firsthand experience of it, and you just might come up with worthwhile new ideas that you never would have if you had sized up reasons #1-12 and not done X in the first place. You can literally blunder into better ways of doing things.

Here is a case in point courtesy of a former job. There was a system (I use that word very charitably here) for handling preliminary interpretations and associated QA. In the abstract, it sounded reasonable. The renderer of the prelim would put an interpretation in the system, which would then pop up on the screen of the radiologist rendering the final report, albeit much later. The rad could hit agree or disagree buttons and type his or her own comments if desired (for instance, “Missed a fracture in the 4th metatarsal”). This was in addition to the rad’s actual report.

Again, it might have sounded nice in the abstract, but in practice it was a complete train wreck. Some folks entered prelims and some did not. There was no indication of who had entered the prelim: was it a radiology resident, the ER doc, or a “physician extender” who had ordered the study? There was no telling whether the prelim window would pop up for the radiologist, who might not even know there had been a prelim. If the rad responded to the prelim, there was no guarantee that the prelim interpreter would ever see the response and “close the loop.” I could go on.

Flaws in the system begat further unreliability as some prelim-ers and rads just stopped paying attention to the system entirely. Others kept at it, and patchy participation created more chaos. Yet others resented it, and sometimes took out their frustrations on other personnel who were just trying to do what they thought made sense. I had one particularly unpleasant noctor caterwaul at me because she had missed a fracture and thought that my highlighting the discrepancy would hurt her statistics in the system.

However, one fine day, in the midst of this minefield, it occurred to me that a lot of the prelims for these studies were being written by the docs (or non-docs) ordering them. The same folks who give me histories like “pain” or “r/o injury” without saying anything about where the pain is, what mechanism of trauma might have occurred, etc., are the ones making the prelim interpretations I intermittently see.

In other words, their prelim is based on all of the clinical information they have gathered. My formal report is without such benefit. In effect, the accuracy deck is stacked in favor of the prelim-er. They might know that the issue is in the patient’s right lower abdomen, whereas I have a head-to-toe scan without any clues at all. Someone who wants to game the system so he or she can boast about being a better reader than a formally trained rad has a perverse incentive not to share clinical info with us. I sincerely hope that nobody out there is behaving this way, at least not on a conscious level.

I have used the analogy before of a baseball pitcher. He doesn’t want the batter to connect. He wants his pitches to be as tricky as possible. A referring clinician and radiologist, however, aren’t supposed to be working at cross-purposes. We’re supposed to be a team.

Wouldn’t it make sense, then, for QA systems to pay some attention to the radiologist-clinician duo? That is, if you have referrers #1-10 and rads A-E, track the stats of the 1-A pair separately from those of the 2-A pair. (Of course, you could still evaluate the sum total of rad A’s cases as its own category. It would be simple for a spreadsheet to subdivide his cases by referrer.)
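As a rough illustration of how simple that subdivision could be (the record fields and agree/disagree flag below are invented for the sketch, not drawn from any particular QA product), a few lines of Python can tally agreement rates per referrer-rad pair alongside each rad’s overall numbers:

```python
from collections import defaultdict

# Hypothetical QA records: one entry per finalized study that had a prelim.
# "agree" marks whether the final read concurred with the prelim.
qa_records = [
    {"referrer": "1", "radiologist": "A", "agree": True},
    {"referrer": "1", "radiologist": "A", "agree": False},
    {"referrer": "2", "radiologist": "A", "agree": True},
    {"referrer": "1", "radiologist": "B", "agree": True},
]

pair_counts = defaultdict(lambda: [0, 0])  # (referrer, rad) -> [agreements, total]
rad_counts = defaultdict(lambda: [0, 0])   # rad -> [agreements, total]

for rec in qa_records:
    pair = (rec["referrer"], rec["radiologist"])
    pair_counts[pair][0] += rec["agree"]   # True counts as 1, False as 0
    pair_counts[pair][1] += 1
    rad_counts[rec["radiologist"]][0] += rec["agree"]
    rad_counts[rec["radiologist"]][1] += 1

for (ref, rad), (agree, total) in sorted(pair_counts.items()):
    print(f"Referrer {ref} / Rad {rad}: {agree}/{total} agreement")
for rad, (agree, total) in sorted(rad_counts.items()):
    print(f"Rad {rad} overall: {agree}/{total} agreement")
```

Nothing here requires new software, either; the same grouping falls out of a pivot table over the QA log.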

In such a system, the referrer and the rad have that much more reason to support one another. A referrer who lazily or carelessly provides histories like “r/o pain” should be revealed as inferior to more conscientious colleagues when, no matter which rad receives his or her cases, the results are a step below the rest.

It might even develop that certain referrer-rad pairs happen to work together particularly effectively. Perhaps their styles complement one another, or they’re just on the same mental wavelength. A smart worklist might make a point of having them share patients whenever possible. Schedulers might even make efforts to have them working at the same time.
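To sketch that “smart worklist” idea (again with entirely hypothetical names, history, and a deliberately naive policy), routing could be as simple as preferring the available rad with the best track record for the ordering referrer, once a pair has enough cases for the rate to mean anything:

```python
def best_rad_for(referrer, available_rads, history, min_cases=20):
    """Pick the available rad with the highest historical agreement rate
    for this referrer; return None if no pair has enough history yet."""
    best, best_rate = None, -1.0
    for rad in available_rads:
        agree, total = history.get((referrer, rad), (0, 0))
        if total >= min_cases and agree / total > best_rate:
            best, best_rate = rad, agree / total
    return best

# Hypothetical history: (referrer, rad) -> (agreements, total prelims reviewed)
history = {("1", "A"): (45, 50), ("1", "B"): (30, 50), ("2", "A"): (20, 40)}
print(best_rad_for("1", ["A", "B"], history))  # -> "A"
```

The `min_cases` floor is there because a pair with three shared cases tells you nothing; a real system would want a more careful treatment of small samples than this sketch offers.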
