Defensive or vague wording in radiology reports may protect me by making it harder to pin me down - but it’s less helpful to the ordering clinician.
It doesn’t take being in the health care field to know about the practice of “defensive medicine.” Mostly, this refers to excessive action in the name of giving attorneys as little opportunity as possible to target the defensive practitioner. One might also consider other motivations: Perhaps the physician is a worrier by nature, and will lie awake at night if he leaves a 0.001 percent chance that he failed to properly diagnose a zebra. There’s even the desire to avoid bad stats in one’s peer-review program - it’s not much fun to know that every blemish on your internal record is being tallied for potential use against you.
In radiology, we certainly practice our share of defensive medicine. For instance, recommending follow-up studies on stuff we know is going to be unimportant. Having patients give informed consent that minor percutaneous procedures pose potential risks “including, but not limited to” everything from minor bruising to death. Noting date and time of when critical results were called in, and the name, title, and firstborn child of the clinician who got handed the hot potato. As the conventional wisdom goes, “If you don’t document it, you didn’t do it.”
Because our specialty is more attached to the written record than most, there’s an even greater focus on the words and phrases we’re using. I recall an attorney once talking admiringly of another radiologist who was a “master of doublespeak,” and could generate entire reports without actually committing to any meaning that might prove inconvenient in the event of subsequent litigation.
Not all of us have such abilities, and that’s probably a good thing since our value in interpreting these studies is identifying and describing the pathology (or lack thereof) that we see. It would be nice if we could devote 100 percent of our attention to this. Unfortunately, the motivations for such defensive dictation are very real, and I imagine that most of us are less purely focused.
Perhaps a bright-eyed and bushy-tailed young physician, fresh from training, is only 1 percent distracted by an awareness of defensively phrasing himself. A 20-year veteran who has been dragged through a few frivolous lawsuits, or gotten sick of warnings about negative stats during peer review, might be 10 percent distracted or more.
I think a big piece of the problem is that most, if not all, of the mechanisms for measuring radiologist performance are very much focused on the negative. If one identifies every single case of acute pathology crossing the path of his ER in a year without flaw, nothing much happens. It’s expected. But if the same doc misses, or deems stable, a 1 mm pulmonary nodule that someone else later identifies - zing! Instant feedback.
Defensive or downright vague wording in reports games this system. Suppose I’m on the fence about whether or not there is very mild diverticulitis. If I make my best decision and say that there is, or that there is not, somebody later disagreeing with me potentially results in a negative peer review, a lawsuit, etc. But if I hedge and say something like “cannot rule out very mild diverticulitis,” it’s harder to pin me down. It’s also less useful to the clinician who ordered the study.
I think there is currently too little focus on that latter point. Unless a clinician is overwhelmed with admiration or gratitude for a good pickup, to the point that he reaches out to the interpreting radiologist and/or his department leadership, the value added by the radiologist doesn’t count.
Correcting this does not require a massive new river of paperwork (though I can just see some new federal initiative requiring all clinicians ordering studies to subsequently grade the “helpfulness” of each report). But it would take very little time and effort, say on a monthly or quarterly basis, for major referrers to be shown a list of radiologists most frequently reading their studies, with the simple question: Who on this list has been particularly helpful to you? (With an option for supplying further detail.)
It might be telling to solicit such feedback, rather than waiting until clinicians call up, in bad moods, to complain about alleged misreads.