Peer Review in Radiology is Not a Punishment

March 31, 2015

Use peer review in radiology as an opportunity for improvement, said David A. Koff, MD, at ECR 2015.

After the 2013 incident in which 3,500 CT scans and mammograms read by Ontario radiologist Ivo Szelic had to undergo review for possible errors, quality assurance processes changed for all Ontario radiologists, explained David A. Koff, MD, FRCPC, professor and chair, department of radiology, McMaster University, and chief, Diagnostic Imaging, Hamilton Health System, at ECR 2015.

Soon after the incident, Health Quality Ontario, the government agency focused on health quality, announced that it would lead the implementation of a province-wide physician peer review program in all facilities where diagnostic imaging services are provided, including mammograms and CT scans.

“Peer review is nonpunitive,” Koff said. “It’s an opportunity for quality improvement. It helps to identify trends.”

“We have to identify and correct the systemic barriers to a quality product, and therefore improve everybody’s performance,” he said.

Koff described the recognized principles of peer review.

• Prospective or retrospective time-limited review by a peer: Prospective means a double read, so someone reviews the case right away and sends the original radiologist a note if there is a disagreement, Koff said. Prospective reviews aren’t always feasible, though, especially in emergency cases. That is where the retrospective time-limited review comes into play, Koff said: a review that takes place on the same day. The first radiologist’s read would be sent to the emergency department, but another radiologist would review the case the same day and notify the emergency department of any discrepancy.

• Random selection: A random selection of studies should be reviewed on a regularly scheduled basis.

• Fair: The process should be fair, unbiased, and consistent, and ensure confidentiality for all aspects of peer review and anonymity for reporting and reviewing radiologists, when possible. Examinations must be representative of each physician’s subspecialty.

• Approved: Peer review findings must use an approved classification with regard to the level of quality concern.

Koff’s institution, Hamilton Health System, which has 60 radiologists spread across nine campuses, implemented a pilot system with prospective and retrospective QA shared between radiologists and nuclear medicine physicians, he said. Their goals were assessment, education, and improvement.

Because of their large network of hospitals, the pilot needed to be large-scale, something that could perhaps be expanded into the region or even the province, Koff said. This required a strong software solution and the ability to provide the radiologist with timely feedback to avoid repeating a mistake; reporting the errors to the enterprise was a must, he said.

They added a few elements: blinding the review to the identity of the institution and the radiologist; managing user-defined variables such as the frequency of double reads, which modalities would be automatically selected for review, and by which reviewers; and highly customized metrics on enterprise error rate, modality-specific error types, and the incidence of these errors over time. HHS used a modified ACR scoring system.

Based on HHS’s success, Health Quality Ontario used this model when it began work on its province-wide system.

Learning and education should be the primary focus of peer review, Koff said. The objective is not to punish, but to educate and improve.