When did it become okay for clinical histories to be entirely unhelpful?
Longer-term readers of this column will know that I’ve got more than a little dissatisfaction when it comes to the clinical history, or “reason for exam,” that kicks off far too many episodes of the diagnostic-radiology ballgame.
An interesting thing happened when I recently switched jobs and went from providing telerad services for hundreds of hospitals across the country to covering a handful of relatively local facilities. The clinical histories I received were suddenly a lot more meaningful.
For instance, in my worklist a month ago, a typical abdominal CT would have been for “pain.” In my new gig, the exact same scan might be for “upper abdominal pain, left more than right, worse in the AM.” And I’ll routinely receive a quick rundown of any surgeries or cancers the patient has had in the past, when they happened, and so on.
Not only does it make my job easier and more effective, it’s downright satisfying. I feel more like a physician, and less like a cog in a machine, than I have in years.
It’s unquestionably easier for this to happen when one is working for a smaller number of facilities: Your group can develop closer relationships with the referrers and onsite techs, and if crummy clinical histories are coming through, you can direct efforts at specific trouble spots to bring them up to snuff. It’s a much greater challenge if your group is 10 times bigger and trying to service a hundred times more facilities, each with its own dysfunctions and wrinkles.
As I’m learning the new-to-me software interface at my gig, an interesting thing stands out to me: The old, crummy clinical histories are often still there. It’s just that the extra, relevant info is manually entered by the technologists who are doing the studies that then get sent my way. Usually handwritten. In other words, they’re working around a still-broken system.
This could mean that referrers, by and large, are a lazy, uncaring lot who don’t mind sacrificing quality patient care if it means they can save a few seconds when ordering diagnostic imaging. Even I’m not cynical enough for such a blanket condemnation.
Instead, I suspect it’s the means that’s been fobbed off on them for ordering studies. I haven’t recently used such interfaces firsthand; the last time I ordered medical imaging, I was an intern, and it was almost 20 years ago (yikes). We ordered studies by filling out a paper form.
From what I’ve heard, referrers don’t necessarily have the ability to enter the clinical histories they’d like. Maybe it’s that the interfaces are crummy, or maybe it’s that the computerized order-entry systems are set up to make it difficult/impossible to order a study for anything other than approved reasons. In other words, reasons-for-exam that are guaranteed to result in reimbursement, favorable utilization-reviews, etc.
So there might or might not be a way for a referring clinician to give a clinical history that will enable us to do the best possible job as their imaging-consultants. And, if there is a way, it might be a convoluted mess, where one has to traverse a byzantine series of menus in order to be permitted to freely type a clinical history out.
Many referrers might not be able to figure such things out, or might be harried enough that they just take the path of least resistance and choose one of the menu items on the first screen they see, however irrelevant it might be. Hey, it gets the imaging study done, and radiology-goons will probably figure it out. And if they don’t, they can always generate an addendum to their reports after a “here’s what’s really going on with my patient” phone call.
All of which raises the question: When the companies produce and test this software, who do they consider their customers? That is, who are they looking to satisfy? The information populating the “reason for exam” field is need-to-know for whom? Certainly not those of us who are routinely disappointed, if not angered, by these “clinical histories.” If we were valued customers, our dissatisfaction would have prompted change long ago.
But do you know who does find these so-called histories acceptable? The administrative folks: coders, billers, payers, etc. Somehow, they became the customers for the computerized order-entry industry.
Some of us have made lemonade out of these lemons. With these generic reasons-for-exam being part of the digital record, it’s pretty easy to target them with other software subroutines. Some systems can import the garbage “clinical history” into the report template, so we don’t have to waste time dictating it ourselves. Multiply the seconds saved by the number of studies you read each day, and you’re probably looking at an extra RVU or two.
Me, I’d ask those conscientious techs to switch from handwriting those notes to typing them, and then I’d have the RIS import their text, either along with or instead of the garbage. Why not have both efficiency and accuracy?
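For the tinkerers out there, here’s a minimal sketch of what that merge could look like. The function name, field names, and the notion of a RIS or dictation hook are my own assumptions for illustration, not any vendor’s actual interface:

# Hypothetical sketch, not any vendor's API: merge the order's generic
# reason-for-exam with the technologist's typed note into the line that
# gets dropped into the report template.
from typing import Optional

def build_history_line(order_reason: str, tech_note: Optional[str]) -> str:
    """Prefer the technologist's free-text note; keep the order's indication as a fallback."""
    reason = (order_reason or "").strip()
    note = (tech_note or "").strip()
    if note and reason and note.lower() != reason.lower():
        # Both are present and different: lead with the useful history.
        return f"CLINICAL HISTORY: {note} (order indication: {reason})"
    return f"CLINICAL HISTORY: {note or reason or 'Not provided'}"

if __name__ == "__main__":
    print(build_history_line("pain", "Upper abdominal pain, left > right, worse in AM; cholecystectomy 2019"))
    print(build_history_line("pain", None))

Nothing fancy; the point is simply that once the techs’ notes live in a typed field instead of on paper, folding them into the report is a few lines of glue, not a project.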