The Diagnostic Satisfaction Quotient

April 29, 2016

A different kind of satisfaction score in radiology.

I have yet to meet anybody practicing medicine who found epidemiology or “evidence-based medicine” electrifyingly fascinating topics during med school or postgrad training. I sure didn’t.

Still, study or work with something long enough, and it’s bound to affect the way you think. Every now and then, I find myself musing on how circumstances around me might be summed up in tidy little mathematical expressions, much as sensitivity, specificity, etc. are used to stratify diagnostic tests and the like.

So, as I was churning through my usual onslaught of assorted medical imaging cases last week, it occurred to me that they offered a broad spectrum of satisfaction to the diagnostician evaluating them.

In this context, a “satisfying” study would be one that made the radiologist feel intellectually engaged, a valuable contributor to the care of the patient in question, etc. For instance, scans for “fever of unknown origin” that actually identify a nidus of infection, or postchemo studies conclusively showing a good response to therapy.

Unsatisfying studies, likely as not, require little explanation, as most of us probably receive them far more frequently. They leave the rad feeling irrelevant, or even an unhappy participant in what seems to have been a waste of time and resources. Whole body noncontrast CTs for “r/o pathology,” for instance. My earliest recollection of receiving an unsatisfying study is when, as a first-year rad resident, I got a left humerus X-ray for a history of “chest pain, radiating to left arm.” (Lest you wonder, the patient had not gotten a chest film…hopefully they at least did an ECG.)

So, the thought nibbled at my brain of a Diagnostic Satisfaction Quotient (DSQ). At its simplest, the number of satisfying cases divided by the total number of cases received. Easy enough to make a routine of selecting from a 5-point satisfaction scale when signing out one’s cases; I imagine most rads would find it more valuable than having to categorize every case from Normal to Major Abnormality/Physician Notified (if you don’t have to do this, you have my envy).
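That simplest version of the quotient is easy to sketch. Here's a minimal illustration, with the (entirely made-up) assumption that a rating of 4 or 5 on the hypothetical 5-point scale counts as a “satisfying” case:

```python
# Hypothetical sketch of the basic DSQ: satisfying cases / total cases.
# Assumes cases are rated on a 1-5 satisfaction scale at sign-out, with
# 4 or 5 counting as "satisfying" -- a cutoff invented for illustration.

def dsq(ratings, threshold=4):
    """Fraction of cases rated at or above the threshold."""
    if not ratings:
        return 0.0
    satisfying = sum(1 for r in ratings if r >= threshold)
    return satisfying / len(ratings)

week_of_cases = [5, 2, 1, 4, 3, 1, 5, 2]  # made-up ratings
print(f"DSQ: {dsq(week_of_cases):.2f}")  # 3 of 8 satisfying -> 0.38
```

Any real implementation would live in the PACS or reporting software, of course; the point is just that the arithmetic is trivial once the rating habit exists.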

Of course, as with Epi/EBM, there’s no need to keep things simple; indeed, one is best suited to be hyped as an expert if one has made things nice and impenetrably intricate. We might start complicating things by dividing DSQs by imaging modality (PET/CT cases, for instance, are liable to yield satisfaction more frequently than screening mammos or ICU chest X-rays). So now we’d have DSQxr, DSQct, DSQmr, etc. Maybe add a prefix of i, o, or e to stratify inpatient, outpatient, or ER cases.
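The stratified flavors follow the same recipe, just grouped first. A sketch, with field names and the same invented 4-or-above cutoff assumed purely for illustration:

```python
from collections import defaultdict

# Hypothetical stratified DSQ: group cases by care setting (i/o/e for
# inpatient/outpatient/ER) and modality, then compute each subgroup's
# quotient. Field names and the >=4 cutoff are invented for illustration.

def stratified_dsq(cases, threshold=4):
    buckets = defaultdict(list)
    for case in cases:
        buckets[(case["setting"], case["modality"])].append(case["rating"])
    return {
        key: sum(1 for r in ratings if r >= threshold) / len(ratings)
        for key, ratings in buckets.items()
    }

cases = [
    {"modality": "ct", "setting": "e", "rating": 2},
    {"modality": "ct", "setting": "e", "rating": 5},
    {"modality": "xr", "setting": "i", "rating": 1},
    {"modality": "pet", "setting": "o", "rating": 4},
]
for (setting, modality), q in sorted(stratified_dsq(cases).items()):
    print(f"{setting}DSQ{modality}: {q:.2f}")
```

With the toy data above, this prints an eDSQct of 0.50, an iDSQxr of 0.00, and an oDSQpet of 1.00, matching the intuition that PET/CT is more likely to satisfy than ICU chest X-rays.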

The DSQ could thus become something capable of comparison between radiologists, and indeed the facilities for which they read. Think of the possibilities: an institution able to boast that their rads’ DSQs are in the 99th percentile nationwide, or a radiology department knowing that its most complaint-prone member has a DSQ well below the average, and is thus more likely to be a malcontent than a visionary with brilliant insights as to how to improve the place.

I think, for radiology, this could be as valuable a tool as Press-Ganey stats have been for ERs. Or as much of a headache to everyone in the field beyond the entrepreneur who gets the ball rolling on this. Anybody able to line up a hedge fund’s backing? I’m willing to take partners on this.