ACR’s facility accreditation rules add MD peer review

Imaging sites applying for or renewing facility accreditation must implement the American College of Radiology’s RADPEER physician peer review program.

As of April 1, the American College of Radiology will require all sites applying for or renewing facility accreditation to implement its new physician peer review program.

RADPEER is a quality assurance tool that provides summary statistics and comparisons of an individual radiologist's performance and that of his or her imaging facility by modality, according to the ACR.

"Our goal is to make RADPEER very user-friendly and widely accepted," said Dr. Kenneth W. Chin, chair of the ACR RADPEER Committee. "We are hoping that we will be able to accumulate much more data and make it much more effective through widespread participation."

The RADPEER program started as a pilot in 2001 and was offered to members in 2002. In August 2005, it went from a paper-based process, in which radiologists filled out their peer review scores on cards before submitting them to the ACR, to a paperless one that accepts data in an electronic format.

During interpretation of an imaging exam, whether a plain-film x-ray, CT, ultrasound, or MR scan, radiologists may encounter a previous study of the same area of interest and form an opinion about the first exam's interpretation. If they use RADPEER to score that exam based on their opinion, they are engaging in a peer review event.

While reading the new study, radiologists determine whether they concur with the interpretation of the first study, using the following standardized four-point scale:

  • Scoring level No. 1: Concurring interpretation.
  • Scoring level No. 2: A difficult diagnosis that may not be expected to be made.
  • Scoring level No. 3: A diagnosis that should be made most of the time.
  • Scoring level No. 4: A diagnosis that should be made almost every time; this is a misinterpretation of an important finding.

While reading a chest CT scan, for example, a radiologist encounters a previous report on the same case indicating a normal study. He or she may, however, find an easily noticeable 1-cm nodule overlooked by the first report and give that report a No. 4 score. Or the lesion could be a 7-mm noncalcified nodule, somewhat harder to spot, and that finding could be rated a No. 3. The new reader may assign a No. 2 if the lesion is tiny, probably not dangerous, and difficult to detect. If the previous report describes exactly what the new reader would have reported, it gets a No. 1 for a concurring interpretation, Chin said.

Exams receiving level 1 or 2 scores need no further evaluation. Those receiving level 3 or 4 scores should be reviewed through the facility's internal quality assurance process before the peer review report is submitted to the ACR. This review helps determine the origin of the error, detect common or systematic mistakes by an individual radiologist or a group, and identify where the ACR needs to intervene and provide education.
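To make the triage rule concrete, here is a minimal sketch, in Python, of how a facility's own QA software might encode the four-point scale and the review threshold described above. The names (RadpeerScore, needs_internal_qa_review) are hypothetical illustrations for this article; the ACR does not publish RADPEER as code or as an API.

```python
from enum import IntEnum

class RadpeerScore(IntEnum):
    """Hypothetical encoding of the standardized RADPEER four-point scale."""
    CONCUR = 1              # concurring interpretation
    DIFFICULT_MISS = 2      # difficult diagnosis, not expected to be made
    USUALLY_MADE = 3        # diagnosis should be made most of the time
    ALMOST_ALWAYS_MADE = 4  # misinterpretation of an important finding

def needs_internal_qa_review(score: RadpeerScore) -> bool:
    """Per the rule above: levels 1 and 2 need no further evaluation, while
    levels 3 and 4 go through the facility's internal QA process before the
    peer review report is submitted to the ACR."""
    return score >= RadpeerScore.USUALLY_MADE

# Example from the article: an easily noticeable 1-cm nodule missed on a
# previously "normal" chest CT would score a 4 and be flagged for review.
print(needs_internal_qa_review(RadpeerScore.ALMOST_ALWAYS_MADE))  # True
```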

The program should put in place a process demonstrating that radiology can police itself effectively and provide the quality of care that the public expects and deserves, according to Chin.

"Our approach will not only be that of assessing quality. Our ultimate goal will be quality improvement," he said.

Through this mandatory peer review process in all of its accreditation programs, the ACR hopes to gather a large amount of data that can reliably show radiologists where they stand with regard to performance, Chin said.

"There is a movement now to pay for performance, and radiologists believe they are the best for interpretation of imaging studies. This is a mechanism by which we can demonstrate that we are the best at what we do. And if we are not the best, then we will have a process by which we can continue to improve our quality so that the public will be assured that they do get the best care," he said.

Mammography, stereotactic biopsy, and ultrasound-guided breast biopsy programs will not be part of the accreditation rule because they are overseen by the Mammography Quality Standards Act. RADPEER's results will be confidential to encourage physician participation, Chin said.

Facilities that do not use RADPEER can still earn accreditation approval if they have an equivalent program in place. A qualifying program should include, among other requirements, a peer review process that allows for regular, random selection of studies for review; double reading by two radiologists; a four-point scoring scale; quality improvement policies; and summary statistical data for each physician by modality.
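For illustration, the "summary statistical data for each physician by modality" requirement could be met with a tally like the following sketch. The record layout and names here are assumptions made for the example, not an ACR specification; real programs would draw events from their peer review database.

```python
from collections import defaultdict

# Hypothetical peer review events: (radiologist, modality, RADPEER score)
events = [
    ("Radiologist A", "CT", 1), ("Radiologist A", "CT", 4),
    ("Radiologist A", "MR", 2), ("Radiologist B", "CT", 1),
    ("Radiologist B", "US", 3), ("Radiologist B", "US", 1),
]

def summary_by_physician_and_modality(events):
    """Percent of reviewed studies scored 3 or 4 (significant discrepancies),
    tallied for each physician by modality."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for physician, modality, score in events:
        totals[(physician, modality)] += 1
        if score >= 3:
            flagged[(physician, modality)] += 1
    return {key: 100.0 * flagged[key] / totals[key] for key in totals}

for (physician, modality), pct in sorted(
        summary_by_physician_and_modality(events).items()):
    print(f"{physician} / {modality}: {pct:.0f}% scored 3 or 4")
```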

For more information from the Diagnostic Imaging archives:

CMS backs down on freestanding facility standards

National insurer targets outpatient imaging centers

Residency programs face sweeping changes

Insurer sets accreditation as imaging standard
