It’s Time for Radiology to Take Peer Review Seriously

To thrive, radiologists need to know their diagnostic miss rate.

What is your diagnostic miss rate? Do you know? I don’t know mine. Not my real one, at least.

If your miss rate is unknown, what do you estimate it to be? Research says we all tend to grossly overestimate our actual performance and underestimate the frequency of our mistakes, so be wary of your estimate. And even if your miss rate has been measured, was it measured by an independent, objective party that is not invested in the success of the practice?

To probe further, what is your group’s collective accuracy rate? How does it compare to national benchmarks? Are there detectable error patterns, traceable to faulty search patterns, that can be used as teaching points?

Perhaps most importantly: are we getting better, worse, or staying the same?

[Image: C. Matt Hawkins, MD]

Radiology, as a profession, is in an era of redefinition. We’re redefining our day-to-day clinical operations. We’re redefining our image. We’re redefining our relevance. For radiology to emerge as a clinical service trusted by referring physicians, patients, policy makers, and administrators, we simply must begin measuring our performance. And we have to do it right. Peer review done well allows us to measure the ultimate “quality metric” in radiology: how often we are right, and how often we are wrong. We can measure the frequency of guideline compliance in report recommendations, or how often incidental findings are handled appropriately (i.e., pseudo-quality metrics). But in a profession whose livelihood hinges on the correctness of our diagnoses, there is nothing more important or relevant for us to measure than our accuracy rate.

There are certainly examples of institutions that have superb peer review programs. But most existing peer review programs in diagnostic radiology are shallow. They are tedious and incomplete. They are biased.

There are nuances to peer review in diagnostic radiology that render it challenging. I recognize that. Not all findings are black and white. Hedges can be difficult to review. Clinically irrelevant findings that are not mentioned in reports are just that: irrelevant. And the clinical impact of overcalls can be difficult to assess. It is likely that a variety of miss rates would need to be measured.

Despite these predictable challenges, radiologists need to know how their accuracy rates compare to national standards, just as financial companies want to know how their mutual funds perform against the Lipper average, and adjust their search patterns, imaging protocols, and pace accordingly. A real radiology peer review system could take many forms. But it needs to cover broad geographies and have de-identified studies reviewed by people who are not invested in the practice where the study was performed. (A legitimate appeal/adjudication process will also be necessary.)

What should be done with all of this data? I am not suggesting that radiologists who fail to meet certain benchmarks be fired, or that the bottom 10% be eliminated, Jack Welch style. This information can instead be used for coaching and improvement. David Larson and colleagues have already delineated how peer review can be used in this manner. Misses are radiology’s greatest teaching tool. Our misses are eternally ingrained in our brains; they are quickly recalled and rarely, if ever, repeated. And for less severe, more frequent misses, a genuine, unbiased peer review process might teach us how to alter our search patterns to improve detection. Each group’s collective, depersonalized data can also give hospital administrators a gauge of how the radiology services at their hospital compare to those of neighboring rivals and top-tier academic institutions. Groups that are better than average can revel; those that are not can strategically intervene and improve.

In the grand scheme, there are many ways that all specialties need to change their day-to-day clinical practice. In radiology, peer review is only one of many targets for reform. But as we begin to publish our prices, convey our value to patients and administrators, and learn the costs of incorrect diagnoses, it will be paramount to know how often we’re right. If we cannot quantify our value, we will be paid purely according to how we are perceived, with nothing but effusive, abstract descriptions of our worth to offer those who distribute the dollars.

It’s time to take peer review seriously.
