Has an overly critical approach from quality assurance committees led to an overly cautious approach in reports from teleradiologists?
This morning, my Facebook feed included a lamentation/venting from a rad: “According to overnight telerad prelims, 87 percent of the planet has ‘possible mild non-specific bowel findings,’ ‘enteritis/colitis cannot be excluded’ or ‘fecal loading’ needs to be mentioned … .”
As a nearly 11-year telerad veteran, I have to plead guilty but not quite as charged.
Some of the respondents to that thread (not all telerads coming to their own defense) reasonably pointed out that you can’t completely blame the telerad. One person summed it up: “A real miss at night is what matters. A little perspective goes a long way … lack of history, prior images/reports … depends on local QA culture and infrastructure.”
I couldn’t tell you the whys and wherefores, but there is one experience that has been common to all of the places I have covered by telerad. Somehow, I never get all of the history that referring clinicians and even some on-site rads seem to have at their fingertips. It’s not at all rare for me to find out that my “reason for exam” has left out the most important information about the patient or is outright false.
Some referrers have expressed frustration to me about this. They stood on their heads to convey the relevant info, which somehow never got conveyed to us. Maybe a lazy or inattentive clerk fumbled the ball. Maybe the order entry system was the medical equivalent of HAL 9000 (“I’m sorry, Doctor, I’m afraid I can’t allow that CT with the history you have input”). Then they see our reports replace the valuable history paragraph they tried to give us with something totally irrelevant and/or untrue.
Another piece of the puzzle, one that the griping on-site/over-reading rads do have some control over, is the QA machine and their own routine input into it.
Rewinding the clock to the days when I was first doing telerad, I can tell you it was an unpleasant shock to my system just how nitpicky and fault-finding some of those on-site rads could be. It felt like some of them were constantly trying to “ding” their telerads’ stats by claiming diagnostic misses and occasionally overcalls for some of the pettiest, most subjective stuff.
Some of the hypercritical over-readers surely thought they were superior to the telerads. “Well, here I am, reading out the overnight cases again, and it’s time to clean up all of their amateur-hour mistakes.” But sometimes I wondered whether there might be a baser motivation at work, like taking the telerads down a few pegs.
Why? Maybe because the on-site rads didn’t like the idea that somebody else got to work from home, and/or resented having to do over-reads for them every morning. Accordingly, maybe they leveraged QA to punish them a little. Perhaps some on-site rads considered it a form of job security: “Hey, if we pepper those telerads with a bunch of QA misses, maybe the hospital won’t think about firing us and contracting with the telerad company for full-time coverage.”
The mate to that puzzle piece was that at least some telerad companies bent over backwards to accommodate those hypercritical on-site rads. Whenever there was a claimed error, the default assumption was that the teleradiologist had indeed done wrong. I know at least some of that was in the name of customer satisfaction. The on-site rads’ opinions mattered, and if the telerad company didn’t agree with their complaints often enough, it might take a toll on goodwill.
A telerad reading many hundreds of cases each week, disproportionately accused of petty misses and then subjected to a less-than-impartial “QA committee,” soon learns that he or she has a choice: Get a checkered history on performance evaluations or learn to defend him- or herself against the excessive criticisms. Unfortunately, one way to do that is to hedge in reports, offering a lot of “can’t rule out” statements and, yes, overcalling enteritis or colitis.
Consider this case in point. I once got accused of “missing” a lesion that I thought was normal lingual tonsillar tissue. I was pretty sure it looked like a bunch of other neck CTs I’d seen before and said so in my defense to the QA committee. They ruled against me without any explanation, such as an annotated image showing where a lesion should be measured, or clinical follow-up proving that something had been found there.
As if this made it better, the QA committee told me it was “only a minor miss.”
Well, I certainly didn’t want my record to suffer. Who knows? Maybe the QA committee knew something I did not. Accordingly, in the probable hundreds of subsequent necks I read for that telerad company (all of whose lingual tonsils looked similar to the one I “missed”), I never again failed to mention that a subtle tonsillar lesion could not be excluded. It should be noted that the QA committee never once told me I had erred by overcalling or hedging.
Not every on-site rad is guilty of this, of course, but some of the folks most liable to complain about other rads’ reporting are precisely the hypercritical colleagues who helped create that defensive behavior. Just like with any other self-manufactured problem, the first thing to do after realizing you might have dug the hole you are standing in is to stop digging. In this case: Any time you’re thinking about calling something a “miss,” step back and consider whether it really needs to be flagged. Remember that your feedback will shape the future behavior of the rad you judge.
For folks running or simply sitting on QA committees, whether you’re with a big telerad company or just a local radiology group: If you don’t ever find yourself functioning as a filter to keep your rads from getting unnecessarily dinged in their performance metrics, don’t be surprised when they find ways to do it themselves. At that point, fixing the methods they have found will be an awful lot harder than if you had averted any need for them in the first place.