
The Future of Radiology Peer Review

Peer review is becoming an increasingly important quality measurement. Here’s how to make it work for your practice.

You’re already reading a hundred or more cases a day. Time is tight, and the pressure is on to make it through the work list. Then your administrators tell you to review a handful of additional cases daily, all in the name of peer review.

Are all radiology practices doing this? What’s the value in this added scrutiny, and what’s the future of peer review?

Peer review is definitely not fading away. If anything, its value is increasing. “Going forward, peer review is going to be ever more important, more quality measurement,” said Robert Pyatt, MD, chair of the American College of Radiology RADPEER Committee.

Peer review is a catch-all phrase for doctors reviewing colleagues’ cases for a variety of credentialing, re-credentialing, and quality purposes. Peer review programs are required for hospitals seeking ACR or JCAHO accreditation, and administrators or medical staff often set department or individual physician goals on top of that. “The objective of a peer review program should be to learn and improve,” said Nicole Wichlei, product manager of QICS, PeerVue’s Qualitative Intelligence and Communication System.

The results are useful in-house as well as to accountable care organizations (ACOs) and insurance companies, providing data to spot trends, gauge how the organization is doing, and identify opportunities for improvement.

“How can we take this information and educate our staff to make these improvements? Peer review does all this,” Wichlei said.

Peer review software solutions

Peer review is almost entirely electronic these days, said Pyatt, estimating that less than two percent of peer review is still done on paper. Practices can choose from commercial systems such as Powerscribe, QICS, and Primordial; a home-grown system; RADPEER alone; or a combination of RADPEER and another system. To make it easier for doctors to remember to select cases, software systems may include pop-up windows that alert a physician at the PACS station to a possible review case.

Pyatt, who practices at Chambersburg Imaging Associates and Chambersburg Hospital in Pennsylvania, uses RADPEER, developed and run by the American College of Radiology (ACR). Physicians click on the RADPEER shortcut on their workstation to enter the data, choosing review cases on their own.

Milton S. Hershey Medical Center in Pennsylvania uses Primordial, said Michael A. Bruno, MD, professor of radiology and medicine and director of quality services and patient safety. While systems like this include other digital workplace solutions, for peer review Primordial randomly brings up cases to score on RADPEER’s four-point scale. Completed peer review cases go into an electronic queue, where administrators can review them in Primordial if needed, he said. If a score needs to be adjusted, that is done in Primordial before the data is transmitted electronically in bulk to the ACR’s database.
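To make that flow concrete, here is a minimal sketch in Python of how such a record-and-queue workflow could be modeled. It is an illustration only: the class names, fields, and methods are hypothetical and are not Primordial’s or the ACR’s actual data model or API.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PeerReviewRecord:
    case_id: str
    reviewer_id: str                            # anonymized user ID, not a name
    radpeer_score: int                          # 1 through 4 on the four-point scale
    admin_adjusted_score: Optional[int] = None  # set only if an administrator changes it

@dataclass
class ReviewQueue:
    pending: List[PeerReviewRecord] = field(default_factory=list)
    approved: List[PeerReviewRecord] = field(default_factory=list)

    def submit(self, record: PeerReviewRecord) -> None:
        # A completed review waits in an electronic queue for administrators.
        self.pending.append(record)

    def approve(self, record: PeerReviewRecord, adjusted_score: Optional[int] = None) -> None:
        # An administrator signs off, optionally adjusting the score first.
        if adjusted_score is not None:
            record.admin_adjusted_score = adjusted_score
        self.pending.remove(record)
        self.approved.append(record)

    def export_batch(self) -> List[dict]:
        # Bulk export of approved reviews, e.g. for transmission to a registry.
        batch = [vars(record) for record in self.approved]
        self.approved = []
        return batch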

Other systems, like QICS, can also be customized to adhere to the RADPEER scale, said Wichlei.

RADPEER

RADPEER was developed by the ACR in 2002 and 2003, and more than 17,000 individual physicians and 1,100 groups now use it, said Pyatt. The ACR continues to improve it, last year adding “a” and “b” classifications to the scoring to indicate whether a reading error was likely to be clinically significant.

RADPEER addresses one concern with peer review: whether cases are reviewed anonymously, and whether there might be repercussions for critically scoring a colleague’s past read. RADPEER assigns users a unique identifier and a site ID number for where they read the case. When generating a report as an administrator, Pyatt said, he does so using the ACR-assigned ID, and the information he gives to key administrators several times a year is associated only with that ID number.

Administrators like Pyatt, who is vice chair of radiology and medical staff president at Chambersburg Hospital, can look at the data to see how many radiologists in the practice are doing peer review, and can generate individual and group reports. He can also break the data down by user ID and by imaging modality, comparing results at his site with unaffiliated sites nationwide. “I can see how they did at MRI at each hospital,” he said. That national comparison, drawn from RADPEER’s large database, is one of the advantages of using the system, he said.

Pyatt and other administrators review their hospital data before submitting it to the national database. If there are committee concerns or actions, Pyatt can document them in the system, and the data is protected by law. He said that RADPEER is economical, costing around $100 per physician annually.

Choosing peer review cases

Peer review cases must have a prior read to qualify, said Pyatt, meaning the physician reads a patient case for which a previous study is available. The case should not be one you have reviewed before, and it should not be your own, he said. Random sampling is the least biased way to choose peer review cases, so you shouldn’t look at a case before deciding whether to review it. “Some make it the first two cases after lunch,” he said.
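As a rough illustration of that kind of random selection, here is a short Python sketch. The function and field names are hypothetical, not taken from RADPEER or any vendor’s product; the point is simply that eligible cases are filtered first and then drawn at random, without peeking at their content.

import random

def pick_peer_review_cases(worklist, reviewer_id, already_reviewed, sample_rate=0.02, seed=None):
    # Exclude ineligible cases: no prior study, the reviewer's own reads,
    # and anything this reviewer has already peer reviewed.
    eligible = [
        case for case in worklist
        if case["has_prior_study"]
        and case["reading_physician"] != reviewer_id
        and case["case_id"] not in already_reviewed
    ]
    # Draw a random sample, e.g. 1 to 2 percent of the day's volume.
    n = max(1, round(len(eligible) * sample_rate))
    return random.Random(seed).sample(eligible, min(n, len(eligible)))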

Bruno’s Primordial program automatically brings up peer review cases. “It randomly selects cases during the work day for radiologists to peer review,” he said, and the radiologists have the option to review them.

While doing a few cases a day spreads out the work, you don’t have to do it daily. “You can say ‘every Friday I’ll do 20 cases,’” said Pyatt. “Whatever it takes to get your numbers, as long as it’s done randomly.”

The RADPEER committee is still debating guidelines on how old a case can be for review. Members agree that a finding missed two years ago is less helpful than something missed last week, when intervention can still happen. “Some feel everything in the last month or two, some within the last six months, the most is within a year. Personally I’m leaning more and more toward the last couple months,” Pyatt said.

For mistakes found outside this peer review process, Bruno said, the staff knows not to submit them to RADPEER but takes them through a different in-house review. A case scored as a significant error in any review process is discussed in the morbidity and mortality conference and with the affected physician. Error corrections are made and patients notified if necessary.

The RADPEER committee found that the average number of daily peer-reviewed cases is three to four per physician. “People always ask how many to do,” Pyatt said. “It varies. Our practice tries to sample one to two percent of the volume.”

At Bruno’s practice, each radiologist reviews an average of 5 percent of their workload. “We ask every radiologist to score about six cases a day,” he said, with physicians reading 100 to 200 cases daily.
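The arithmetic behind those quotas is simple; the following back-of-the-envelope sketch just restates the ranges quoted above and is not a recommendation.

def daily_quota(daily_volume: int, sample_rate: float) -> int:
    # Cases to peer review per day for a given sampling rate.
    return max(1, round(daily_volume * sample_rate))

print(daily_quota(150, 0.02))  # 1-2 percent of a ~150-case day is about 3 reviews
print(daily_quota(120, 0.05))  # 5 percent of ~120 cases is about 6, matching Bruno's quota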

Other types of peer review

Retrospective peer review is helpful in affirming that a radiologist is knowledgeable when reading chest X-rays or mammograms, for example. “The error rate has shown the miss rate for significant errors is 1 to 4 percent,” said Pyatt. “So if you’re finding a doctor misses MRI findings one percent of the time, that’s in the range of what everyone misses. If your data shows you’re missing ten percent of the time, that’s a problem.”

This type of peer review shouldn’t be the sole mechanism for finding errors, though, said Pyatt. It has to be supplemented with nonrandom cases: abnormal cases brought to the staff’s attention. Specialists contact Pyatt when they feel a staff radiologist missed a subtle finding. “That’s an important part of peer review, to capture cases that other physicians think need reviewing.” Those cases are reviewed in conferences.

Incentivizing doctors to participate

With full workloads and goals to meet, how do you create an incentive for physicians to participate? Some practices use peer pressure, acknowledging those who met the goals. Some use monetary carrots. “Some say you’ll lose 10 percent of your bonus if you don’t do the minimum,” said Pyatt.

At Hershey Medical Center, doctors meeting their quota receive a token financial incentive of a few hundred dollars twice a year, which Bruno said will buy coffee for six months. “We have 100 percent participation in RADPEER, but every half year, there are some people who fall a little short in their quota,” he said. “This is to encourage them to be more responsive when it comes up.”

The key to successful participation, said Wichlei, is “to make the doctors’ lives easier. They need a system that works in their primary system. If it will make it easy to do so, the adoption rate will be high, something that flows that isn’t disruptive to their current work flow.”

The future of peer review

Currently, peer review is retrospective, checking cases that have already been read and distributed. “Now it’s not quite good enough to do the retrospective, on-the-fly assigned peer review,” said Wichlei. Physicians are now interested in catching errors before they get out the door and in doing blind reviews to avoid bias.

“When people come to talk to us about our solution, we’re getting questions about removing the bias,” she said, adding that the industry is getting savvier about how peer reviews are done. “There’s been some studies of late, where the industry is trending to remove that bias from the peer review and to recommend that for more accurate quality measures, to perform blind peer review,” she said. That can be difficult for a single site, but across multiple sites it should be easier to anonymize the reading location and keep consistency in how reviewers see the data.

Other countries, like Canada, are closer to doing prospective peer review, in which the results are checked before going to the ordering physician, said Wichlei. If there’s a discrepancy, the initial reading radiologist can see it, and the case may go through a quality committee and the report be revised before it is sent out.

The bottom line is to use a system a facility can effectively implement. “The goal obviously is to continue to improve over time,” Wichlei said. If doctors aren’t used to doing peer review, implementing an elaborate process will be difficult; it’s better to start with something simple.
