How Do We Determine The Radiologist’s Value?

Our traditional measures of productivity no longer define how useful we are.

Many of you have heard of Schrödinger's cat, the hypothetical physics problem that posits a cat in a box could be both alive and dead simultaneously until you open the box, observe it, and find out which is true. Radiologists are in a similar position: we send reports out to the netherworld of physician offices, never knowing whether they were helpful, and thus never knowing how well we are doing our job, because we rarely observe the end results of our work.

For years we have measured ourselves by productivity. Counting the number of studies read, RVUs, or billed or reimbursed dollars, and comparing them internally or to statistical norms, may well remain a valid part of determining value; I think it is. But it is not the whole story, and it may not always provide the right incentives. Quality of care, communication skills, relationship building and management, administrative and business work, and leadership all form parts of a radiologist's value. The days of "I just want to come in and do my work" (i.e., read films) may have passed. Those who want to work that way had better be truly productive, or they become a liability. That can be a challenge for a practice, because the providers who read the most often see themselves as the most valuable, and they have a hard time seeing or measuring the benefits that indirect or non-clinical contributions can offer. It is critical for practices to emphasize that, in the end, our true value is not the amount of work done or our technical proficiency (what you might call our "input"), but how useful our work is to our patients and clinicians (what you might call our "output"). Like the cat, what we are is how we are seen after the report arrives and the patient is managed.

When analyzing the totality of a radiologist's contribution, some metric for direct work productivity is a necessity, and a combination is probably best. Again, the number of studies read, reimbursed dollars, and RVUs are important. Quality metrics are critical too, starting with basics like report turnaround time (TAT). Other factors must be included but are harder to come by. Measuring the quality of care in direct interactions is trickier but doable; peer review data is a start. Whatever the scale, tracking how each provider compares with others on minor and major disagreements is helpful. Information on the quality of communication is also a challenge, but it is available to those who want it. For instance, survey referring physicians: how often does the provider call you when you think they should? How are your interactions with them? How would you grade the usefulness of the reports? There must also be feedback on interactions with imaging partners, such as the hospitals and imaging centers with which you work.

Measuring administrative work has always been contentious in my experience. Some tasks are routine and might be measured simply as hours worked, but for most there should be a regular effort to determine how effective the administrator is. Measure business growth year over year, but use industry comparisons as benchmarks to account for business cycles. Survey the practice partners to assess one another's communication skills, innovation, confidence, and leadership, and ask how useful and necessary particular administrative tasks are felt to be. One way these pieces might fit together is sketched below.
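To make "a combination is probably best" concrete, here is a minimal sketch in Python of a weighted composite score. Every metric name, normalization range, and weight below is a hypothetical assumption for illustration; a real practice would choose and calibrate these against its own data and industry benchmarks.

```python
from dataclasses import dataclass


@dataclass
class ProviderMetrics:
    """Hypothetical per-provider inputs; all fields are illustrative."""
    rvus: float                     # annual work RVUs
    median_tat_minutes: float       # median report turnaround time
    major_discrepancy_rate: float   # peer-review major disagreements (0-1)
    referrer_survey_score: float    # mean referring-physician rating (0-10)
    admin_effectiveness: float      # partner-survey rating of admin work (0-10)


def normalize(value: float, lo: float, hi: float, invert: bool = False) -> float:
    """Scale a raw metric to 0-1; invert when lower raw values are better."""
    clipped = max(lo, min(hi, value))
    score = (clipped - lo) / (hi - lo)
    return 1.0 - score if invert else score


def composite_value(m: ProviderMetrics) -> float:
    """Weighted blend of productivity, quality, communication, and admin work.
    The weights and ranges below are assumptions, not established benchmarks."""
    parts = [
        (normalize(m.rvus, 4000, 12000), 0.35),                          # productivity
        (normalize(m.median_tat_minutes, 15, 120, invert=True), 0.15),   # turnaround
        (normalize(m.major_discrepancy_rate, 0.0, 0.05, invert=True), 0.15),  # peer review
        (normalize(m.referrer_survey_score, 0, 10), 0.20),               # communication
        (normalize(m.admin_effectiveness, 0, 10), 0.15),                 # administration
    ]
    return sum(score * weight for score, weight in parts)


if __name__ == "__main__":
    example = ProviderMetrics(
        rvus=9000, median_tat_minutes=40,
        major_discrepancy_rate=0.01,
        referrer_survey_score=8.2, admin_effectiveness=6.5,
    )
    print(f"Composite value score: {composite_value(example):.2f}")  # 0-1 scale
```

The design choice worth noting is that every input is normalized before weighting, so a provider strong on indirect contributions can score comparably to a high-volume reader; the weights themselves are where a practice would encode its actual priorities.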

This is only a start, and a meager one at that. In the long run, we should be looking to find what measurable impact our radiology work and practice efforts have on improving patient care, something we have a very poor handle on at present. A system that accrues data on when a radiology report alters care might help us determine this clinically. Measuring our benefit in less clinical ways is also challenging at present; as a surrogate, we have the qualitative methods mentioned above for gauging how useful our work seems to referring providers and colleagues.
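As one hedged illustration of what such a data-accruing system might record, the sketch below logs, per report, whether downstream management changed, and computes how often a given provider's reports altered care. The record fields and the care_altered flag are assumptions for illustration, not an established data model or registry standard.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class ReportOutcome:
    """Illustrative record linking a report to its downstream effect.
    In practice this would be populated from referrer feedback or chart review."""
    report_id: str
    provider_id: str
    care_altered: bool  # did the report change patient management?


def care_alteration_rate(outcomes: Iterable[ReportOutcome], provider_id: str) -> float:
    """Fraction of a provider's tracked reports that altered care."""
    mine = [o for o in outcomes if o.provider_id == provider_id]
    if not mine:
        return 0.0
    return sum(o.care_altered for o in mine) / len(mine)


# Example: two of three tracked reports changed management.
log = [
    ReportOutcome("r1", "dr_a", True),
    ReportOutcome("r2", "dr_a", False),
    ReportOutcome("r3", "dr_a", True),
]
print(f"{care_alteration_rate(log, 'dr_a'):.0%}")  # prints 67%
```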

So why is it valuable to do this? Because not evaluating ourselves leads to stagnation. Like Schrödinger's cat, when we look at something, it becomes defined for us. The "something" here is our output (i.e., our usefulness), not our input (i.e., our effort). Once any aspect of our output is measured, there is a natural human instinct to improve on the measurement. That measurement has to go beyond the easy, direct metrics, because including indirect components validates the methodology for everyone and vests providers of more diverse productivity types in the process. Refining it will only help us all continually improve our usefulness. That should be the goal for us all.
