Radiology: Right or Wrong


Recognizing that many abnormalities don’t ‘read the textbook,’ we strive to provide the most accurate assessments amid the gray zones of image interpretation.

A couple of weeks ago, a lifelong friend and I were discussing differences in how we would approach certain situations in our respective fields. He is a graphic designer, specifically for book jackets. (Just so nobody is confused, I am a diagnostic radiologist.) At one point, he referred to my work as being “right or wrong,” as opposed to his, which wasn’t.

I piped up to indicate my disagreement, but because it would have gotten us off track, I didn’t press the issue. It occurs to me now that I have told him about this before, but who can blame him for not remembering? We are busy guys with a lot on our minds.

Some rads would agree that our work is a “right or wrong” affair. There is a diagnosis to be made (or, perish the phrase, ruled out), and we either succeed or we don’t. However, I don’t think anyone would say that is always the case. If you asked a bunch of us, “How frequently in your work is there a definitive right or wrong answer?” responses would vary from “Almost always” to “Very rarely.” Then your respondents would set about arguing with each other.

Trying to come up with a scenario of right/wrong that everyone can agree on, I imagine a simple long bone X-ray. Suppose it shows an obvious fracture, a “janitorial diagnosis,” as one of my residency attendings used to say. That is where the janitor looks up at the image from across the room and says, “Yep, that is broken,” and goes back to sweeping.

Nobody would dispute that “fracture” is right, and “no fracture” wrong, would they?

Maybe not in the abstract, but in the real world, I bet you would have rads arguing over whether the report was properly descriptive. The reader called a fracture but didn’t specify simple or comminuted. If there was no displacement, would the report be “wrong” not to specifically state anatomic alignment? How overzealous would a radiological critic have to be to call someone “wrong” for reporting 15 degrees of angulation when the critic measured 20? Would a QA committee need to get involved?

Waxing philosophical, one could take the stance that the truly “right” diagnosis is out there, even if no rad can make it from the available images. Suppose I get a chest X-ray with a history of “R/O malignancy,” and it shows no abnormality. It eventually turns out the patient has leukemia. If I don’t magically know to say “leukemia” in my report, how can I be 100 percent right? Does verbiage like “A normal chest X-ray does not rule out malignancy” make me any righter?

Getting back down to Earth, many of the studies we read are not definitive matters. Many abnormalities, as some of my attendings used to say, don’t “read the textbook.” Many liver lesions, despite scrutiny via ultrasound, CT, and MR, never show pathognomonic features. The best we can do is offer a differential, but that just mixes the right diagnosis in with a handful of wrong ones, if indeed we manage to include the right one at all.

It is baked into our profession that we can’t hope to be 100 percent accurate. Long before budding physicians specialize in radiology, they learn about epidemiology, including things like sensitivity, specificity, and positive/negative predictive value. All of that number crunching tells us that the best we are usually going to be able to do is have a high probability of being right.
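To put rough numbers on that idea, here is a minimal sketch in Python using Bayes’ rule. The test characteristics are hypothetical round numbers of my choosing, not figures from any particular study: even a reader who is “right” 90 percent of the time in both directions ends up with a positive predictive value under 10 percent when the disease is rare.

```python
# Illustrative only: hypothetical test characteristics, not from any study.
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# A reader who is "right" 90 percent of the time in both directions,
# in a population where only 1 percent of patients have the disease:
ppv, npv = predictive_values(0.90, 0.90, 0.01)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # PPV: 8.3%, NPV: 99.9%
```

In other words, the same prevalence math that makes a negative result reassuring keeps a positive one far from definitive.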

Later on, rad residents are reminded of that, for instance in mammography. There is an entire BI-RADS category (category 3) for the notion of “less than 2 percent chance of cancer but needs follow-up.” Other “-RADS” systems came along to mimic the concept. That is not going to be satisfying for anybody who wanted the world, or at least radiology’s corner of it, to be a binary affair of right or wrong.

Another way to look at that and other uncertainties in our field is the notion that if you don’t commit to one answer, you are wrong no matter what. Sticking with the BI-RADS 3 example, if the abnormality was, in fact, benign, the “absolutely right” answer would have been to put it in category 1 or 2, requiring no follow-up until the next routine screening. On the other hand, if it was truly cancer, “right” identification would have resulted in immediate referral and treatment, not waiting a few months for repeat imaging.
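As a toy bit of arithmetic on that point (only the 2 percent threshold comes from the category’s definition; the framing of the three “commitments” is mine):

```python
# Toy arithmetic for the "wrong no matter what" framing of BI-RADS 3.
# Only the 2 percent threshold comes from the text; the rest is illustration.
p_cancer = 0.02  # defining ceiling for category 3: probably benign

# Commit to "benign" (category 1/2): right in 98% of cases, but every
# true cancer waits until the next routine screening.
print(f"Call benign: correct in {1 - p_cancer:.0%} of cases; all cancers delayed")

# Commit to "cancer" (category 4/5): right in 2% of cases, and the other
# 98% of patients get a workup or biopsy they never needed.
print(f"Call cancer: correct in {p_cancer:.0%} of cases; {1 - p_cancer:.0%} over-treated")

# Category 3 commits to neither answer, so by a binary standard it is
# "wrong" every time -- yet short-interval follow-up is how the rare
# cancer surfaces without biopsying everyone.
```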

Rads differ in how they deal with these gray zones. Some of us pepper our reports with caveats about these uncertainties, thinking referrers (especially the ones who didn’t go to medical school) need reminders that our work isn’t full of gold-standard diagnoses. We might also do it on a wing and a prayer that our hedges could avert lawsuits.

Other rads go further, accumulating hedging techniques as if they were collectible trading cards: Let all who read their reports consider our work only a shade more reliable than astrology and tea-leaf divination. When clinicians snicker about “clinical correlation” being our watchwords, we can probably thank this lot.

Perhaps in response, we have the other extreme: rads who go out of their way to avoid all uncertainty, making boldly definitive statements whenever possible and criticizing colleagues who don’t. These folks tend to be smarter than most, so it is not likely they failed to learn their epidemiology or forgot it. Rather, they have made a conscious decision: commit to a diagnostic opinion, even at the risk of being wrong.
