When too many abnormal findings become normal in radiology.
It was an oft-repeated exchange in the reading rooms of my residency. A junior resident, reviewing cases with one of the attendings who had come out of semi-retirement in private practice to re-experience academic life, would be in the midst of offering his or her notions as to what a given X-ray portrayed.
One of those notions, most often cardiomegaly, would provoke a reaction from the attending, who would make a face and characteristically retort, “Whaddaya wanna give ‘em that for?” There would follow a short chat on the potential downsides of overdiagnosing, like saddling the patient with medical labels that might get repeated throughout the hapless individual’s life. “He’ll never be able to get insurance,” was one of the upshots. I’ve often wondered if that particular line got dropped from the dialog after the PPACA barred insurers from denying coverage on the basis of preexisting conditions.
The underlying wisdom was not to be denied. It is awfully easy to be an “overcaller,” erring on the side of declaring too much abnormal rather than too little. Peer review zingers and medmal allegations seem far less likely to result from excessive caution (“cannot exclude diagnosis X,” “appearance could be consistent with condition Y,” etc.) than from declaring things normal that turn out not to be.
There’s also something to be said for tending toward overcalling in one’s earlier career, and gradually progressing toward undercalling as one builds confidence and experience. I’ve wondered, however, at just how and when the transition occurs. That is, what makes a rad consider some aspect of a case normal (or, shall we say, normal enough not to dictate otherwise) today, but abnormal months or years earlier in the rad’s career?
Probably the most valid reason would be new scientific research. Suppose, for instance, a massive, rigorous study came out tomorrow that overwhelmingly supported a new threshold measurement for some diagnosis or other: appendiceal thickness, aortic diameter, whatever. An up-to-date rad, seeing this, might well adjust his methods.
Next might be formal pronouncements by authoritative sources, like a “white paper” from the ACR regarding diagnostic criteria. Again, a rad keeping current by reading such things might do well to adopt such guidelines, lest he be out of step with the rest of his field.
Even without such changes in what constitutes the field’s standard of care, a rad might come across an older write-up (or wise words from a respected colleague) and realize that, egad, he’s been doing things the wrong way…and henceforth, does better.
Such external (internal?) validations are rare, however, and I have found that the shift is more often a gradual process. At least for me, it’s a matter of seeing hundreds to thousands of cases over a period of time, and gradually getting the impression that, hey, I’m calling abnormality X an awful lot. Can that much of the patient population really be abnormal? Or do I need to adjust my diagnostic thresholds a little?
The first step, to modify the old expression, is for the overcaller to recognize that he might have a problem. Next time, I’ll identify some of the more frequent overcall offenders I’ve seen.