How much of your day do you spend tracking down missing information? Or, worse, incorrect information?
The title of this piece is a repeated quote from a former colleague. We worked together in a practice that, shall we say, could have been more organized and cohesive. Often, to get the job done, you had to go through considerable contortions, especially if you wanted the job done right. And you'd sleep better at night if you did away with any notions of pursuing perfection.
The quote references one of the most frequent obstacles we faced: not having the information we needed. It was easier when info was just missing; the gap stood out as something that needed filling in. More often, however, it was a matter of having the wrong details. Then, it was a lot easier to accept things at face value and proceed with either incomplete or false ideas in mind.
I wish I could say that in subsequent years away from that particular workplace this was no longer a problem. It has, at least, been far less of an issue in terms of rad-group organization. That is, I haven't had to play detective anywhere near as frequently to figure out the policies, procedures, game plans, etc. of my subsequent places of employment.
But playing detective is still overwhelmingly prevalent when it comes to figuring out what the heck the story is with the imaging studies we're wrangling. Which is rather baffling, since this is a completely avoidable issue that routinely impacts patient care. How on Earth is it allowed to go on?
Some of it is the result of sheer laziness, and has been around since before my time in the medical field. One classic: Even though every hospital's orientation program tells its staff over and over that "R/O [whatever]" is not an acceptable, let alone billable, reason for exam, stuff like "r/o malignancy" or "r/o trauma" is probably still supplied as the reason for almost half of the studies I see.
But at least those give us a vague clue as to why imaging was ordered, with the exception of the preposterous "r/o pain" that somehow still exists. And, even when no clue is given, this still falls into the category of visibly missing information: We see that there is no info supplied and know that we either have to go looking for some or proceed with awareness that we are flying blind.
What I’ve seen more frequently in recent years is the more insidious provision of misinformation, where we think we have been informed but have actually been misled. A common example is substituting “H/O” for “R/O.” “History of,” at least, is more billable than “Rule out,” so the bean-counters and paper-pushers are happier.
The problem is that this often turns out to be an outright lie, or at best a guess on the part of the clinician. A head CT provided for (H/O) stroke, fracture, whatever, winds up meaning that the referring clinician thought there might be such pathology, or simply that the referrer knew that this reason-for-exam would be permitted and the scan she wanted to order would ensue.
Meanwhile, the rad receiving the study, taking such histories at face value, is being sandbagged: He's been told there has been a stroke, fracture, whatever, except that it's not evident on the images he's given. Does that mean he's failing to identify a subtle abnormality? That someone else read a previous exam and wrongly diagnosed such pathology? That the patient had a stroke or fracture 20 years ago, rather than just now?
An example from my caseload this past week: Maxillofacial CT for "pain, s/p dental procedure." No mention of what procedure, where it happened, or where the pain was. At least half of the time I spent on the case went to trying to figure out where the area of interest was. Then, since the exam showed nothing acute, I reluctantly signed off my pretty-much-normal report, wondering what I'd missed.
Or another case this past week: A chest CT for "F/u nodule." The patient had undergone a chest CTA the immediately preceding day. Detective hat on, I proceeded to review the report of that scan, thinking that since there was no way we were doing a 1-day follow-up of a nodule, there must have been some kind of equivocal finding on the previous scan that the current study was supposed to clarify.
The previous report made no mention of a nodule. I went ahead and looked over the previous exam’s images: No motion artifact or anything that might have rendered such findings questionable. Again, I ultimately wound up spending about 50% of my time on the case just trying to figure out why it had been done before reading it out as unchanged from the preceding day, wondering what I’d missed.
(Later, when a PA wanted to talk about the case: Were the lymph nodes changed from another study 5 years earlier? I asked her what the nodule was supposed to be, and why the "reason for exam" said nothing about lymph nodes. Not only was she unaware of any nodule that had ever been identified or followed on the patient, but she clearly couldn't care less that she'd been caught A) introducing false information about a nodule into the patient record, B) not writing word one in the clinical history about the lymph nodes that were prompting the study, and C) ordering that the patient be needlessly irradiated a second time when the nodes could easily have been reviewed on the scan of the preceding day. I mean it; she was clearly not embarrassed by any of this.)
So much time and effort are wasted on such misadventures that it's a miracle more patients aren't harmed in the process. How is this issue allowed to be so rampant? Shouldn't there be at least as much effort put toward policing and correcting such careless referring-clinician behavior as there is toward hassling conscientious rads over nitpicky QA stats?