How to Do Peer Review in Radiology

February 4, 2014

Peer review can help radiologists improve. Here are some methods for peer review and solutions for barriers your group may face.

So it’s the end of one of the most popular sports seasons again: it’s Super Bowl time. In the NFL, the process of figuring out who is doing well is a straightforward march through the playoffs to the Super Bowl.

Knowing you are doing a good job is easy. Did you make the playoffs? Did you win? Other industries, such as the music industry, use award shows.

Last I checked, though, there are no playoffs for radiology and no Grammys. In fact, we look at things the other way: we note when things don’t go right. This is painful, but it should be viewed as a strength and an opportunity.

A great observation I heard on this is that Wile E. Coyote didn’t realize he was running in the air over a canyon until he fell, but the observers around him did. Unless our peers help us to improve, we don’t even know when we’ve fallen into the canyon.

So, a few notes on how to do peer review.

What method should be used? A combination is best. Here are a few types:

Traditional, retrospective review: This is where you review others’ work, most commonly when prior studies are used for comparison. It’s fairly random, but viewed through the retrospectroscope.

Double reading: It’s ideal and prospective but expensive; suitable for certain studies.

Satisfactory use of standards: Good for findings with established standards, such as the assessment of nodules, renal cysts, and ovarian lesions, but not everything has a standard.

Focused review: This is when someone points out an error. It can be very helpful, but it needs a standardized process and review, including appeals, because it can be a tool for targeting.

Audit/correlative: Using pathology, surgical findings, or clinical outcomes to find out whether you were right. It’s suitable only for certain types of cases, and it may become easier with EMRs and the central accumulation of data. Remember, too, that the “gold” standard is not always right.

Clinical feedback: This is information from a referring physician about the nature, type, and style of reporting. If done in an organized way, it can be very helpful on several levels.

Education: This means looking for recurring errors throughout the practice or in certain modalities. It looks for systematic problems and globally improves the practice as a whole, not just individuals.

Combining some or all of these helps mitigate the limitations of each method and aids subspecialties where few peers are available.

What are the barriers to acceptance?

  • “I’m a target”
  • Embarrassment
  • There is a sense of unfairness or randomness
  • It seems onerous and not useful
  • It may be unsupported by references (i.e. opinions)
  • Increasingly complex studies mean it’s harder to find peers to review

What are the solutions? Peer review:

  • Should be anonymous - central accumulation of data; radial distribution
  • Should be random - try having everyone use the first case with a comparison one day each week
  • Should be reliable and consistent
  • Should be focused on outcome and improvement, not the error
  • Should be constructive, become part of data collection over time and be used to direct education
  • Can be incentivized to help encourage its use
  • Should be assessed and measured to show improvement - remember, there are MOC and PQRS benefits.
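The anonymity and randomness requirements above can be sketched in code. Here is a minimal, hypothetical illustration (the case IDs, reader names, and function are invented for this example, not part of any real peer review system): each reviewer is randomly assigned a case they did not read themselves, with the original reader’s identity withheld.

```python
import random

def assign_peer_review(cases, reviewers, seed=None):
    """Randomly assign one anonymized case to each reviewer.

    cases: list of dicts with 'case_id' and 'original_reader' keys.
    reviewers: list of reviewer names.
    Returns (reviewer, case_id) pairs. The original reader's name is
    withheld from the assignment to keep the review anonymous, and a
    reviewer is never handed one of their own cases.
    """
    rng = random.Random(seed)
    assignments = []
    for reviewer in reviewers:
        # Random selection mutes any sense of targeting or unfairness.
        eligible = [c for c in cases if c["original_reader"] != reviewer]
        case = rng.choice(eligible)
        assignments.append((reviewer, case["case_id"]))
    return assignments

# Hypothetical sample data for illustration only.
cases = [
    {"case_id": "CT-1001", "original_reader": "Dr. A"},
    {"case_id": "MR-2002", "original_reader": "Dr. B"},
    {"case_id": "US-3003", "original_reader": "Dr. C"},
]
print(assign_peer_review(cases, ["Dr. A", "Dr. B"], seed=42))
```

In a real workflow the assignments would come from centrally accumulated data, and the results would feed back into aggregate education and measurement rather than individual blame.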

Peer review is part of a focus on the quality of patient care. It’s accepted throughout the medical industry, and with increased attention on medical errors and the demand for accountability, it will be part of a broader QA/QI process. It is needed and desired by technical partners, and it is an opportunity for the radiology practice to add value. Soon, it also may be necessary to obtain or maximize reimbursement.

Moreover, peer review is our best way to find out how we are doing and to push us to improve. At least until there is an award show for radiology.