Automation doesn't do any good if it doesn't take human nature into account.
It never ceases to amaze me how many of our daily challenges and stressors are self-inflicted. By “self,” I mean radiologists as a whole: rads are often avoidably troubled by the actions of their rad-superiors, whether or not they actually work “in the trenches” together (sometimes despite the individual rad’s best efforts to the contrary).
For instance, let’s wind the calendar back a few years. Young(er) radiologist Dr. Postal dictated his reports from scratch, with a small repertoire of voice-recognition macros. One such macro was the bare-bones structure of the report, with headings for Clinical History, Technique, Findings, and Impression.
This allowed him to phrase the contents of those sections however he liked. If he had fragments of Clinical History from the referring doc, the rad tech, and (God forbid) the patient, Dr. P could choose how to assimilate these various items into a cohesive whole. For Technique, he could specifically describe which views were obtained for an x-ray exam, or which MRI sequences were utilized in which anatomical planes. Everything that wound up in the report was deliberately considered, and specifically entered based on the exam Dr. P was looking at.
Along came a directive from on high: We are introducing some exciting new dictation software for you! It will improve your efficiency, reduce errors, and improve reimbursement. It can even create Structured Reports if you (or a referring clinician) like those. Usage of the new software is strongly encouraged for these reasons, but don’t you worry, it’s not mandatory. (Until, of course, the day it became mandatory.)
One could see how the newfangled system might appeal. It could automatically populate the Clinical History, verbatim, with whatever “reason for exam” the ordering clinician had supplied. The same went for Technique: it came directly from whatever type of study the tech had entered. Later versions of the software even alerted the user if s/he was attempting to sign off a study that appeared to contain an error: the word “left” in a study of a right-sided extremity, for instance.
Trouble in paradise?
The problem that stood out to rads like Dr. P, from an early stage, was that all of this labor-saving automation introduced a lot of potential for error. When dictating an entire report manually, a rad consciously decides each and every word, and where it goes. The more you take that away from the rad, the less quality control s/he’s going to exert.
So, for instance, if the rad gets good and accustomed to having the Clinical History filled out for him or her based on whatever the referrer said, handwritten notes that the tech provided separately may go unnoticed, as may whatever the patient might have written or said. The Clinical History becomes “Abdominal pain,” while those other valuable sources, noting where the pain is, when it started, what surgical background the patient has, etc., are lost from the spotlight.
Or, sometime between when the study order gets entered into the computer system and when the exam arrives for the rad to report, the Technique section stops matching what was actually done. The software dutifully fills in that it’s a 5-view lumbar spine when in fact only 3 views were obtained, or that the CT was with contrast when in fact it was not.
Maybe the rads, foreseeing that this is going to happen, speak up because they don’t want other sources introducing sloppy-looking errors into their reports. Maybe the rads are just being practical about it, and don’t look forward to a parade of addendum requests down the line: the tech needs the rad to addend that the x-ray only had 3 views, or that no contrast was given; the clinician wants an addendum because the pain was in the right lower quadrant, and can the radiologist specify that s/he looked there?
The powers that be, of course, having already decided that the new software (or policy, or whatever decision is being made) is the way to go, aren’t going to be swayed by anything such rads have to say. Proofread your reports for such errors, they might retort. You still have the ability to change anything that the new software put into them.
Which ignores human nature, whether innocently or a little more willfully. The fact is that once you give people a labor-saving device, they’re going to come to depend on it, even if the saved labor results in lower-quality output. The alternative is that the rads would spend at least as much time proofing and correcting the software-generated boilerplate as they would have spent dictating the old-fashioned way, from scratch, before the software got implemented.
Such rads are particularly vexed when they can see such things coming yet can do nothing about it, and are instead forced to live with the consequences. For instance, upon the first unveiling of the software, when they speak up about their concerns and find themselves “yessed” or more overtly ignored. Then again, when usage is made mandatory, and the issues they predicted do indeed occur.
Those selfsame rads’ frustration is multiplied if they have foreseen and warned of other self-created problems in the past. Having been proven right before, they might well expect to be taken a little more seriously in subsequent episodes. When that recurrently fails to happen, well, let’s just say it’s no fun being Cassandra.