Hospital Compare website can give radiologists a bad rap on quality

Diagnostic Imaging, Vol. 32, No. 9

If you were to look at the Department of Health and Human Services' Hospital Compare website, you might think Lifecare Medical Center in Roseau, MN, uses MRI too frequently, especially when it comes to imaging patients for low back pain. Lifecare's variance was 47.3 points over the national average, according to the new website, www.hospitalcompare.hhs.gov.

But look a little deeper and you'll find that the number of cases was too small to allow a quality-of-care conclusion.

“There were only nine patients, so we had a really small sample size. And five of the scans were ordered by the emergency room,” said Sharlene Peterson, director of imaging services at Lifecare. “I think the small sample is a key factor as to why we look terrible.”

It's fine to compare facilities, but it's important to do so properly and with a larger sample size, she said.

“I would say that if the statistics are accurate, small sample size would be a disaster for a small hospital like us,” she said. “It makes us look like we're abusive with MRI, which we're not. We had 35 other examples that could have been included in the sample, but they weren't.”
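
A quick back-of-the-envelope sketch (in Python, using made-up figures; this is not Hospital Compare's published methodology) illustrates why a nine-exam denominator is so volatile: each flagged exam moves the facility's reported rate, and its variance from the national average, by roughly 11 percentage points.

```python
# Illustrative only: hypothetical counts and a hypothetical national average.
def lumbar_mri_rate(flagged: int, total: int) -> float:
    """Percent of outpatient lumbar-spine MRIs done without documented prior conservative care."""
    return 100.0 * flagged / total

NATIONAL_AVERAGE = 32.9  # hypothetical national rate, in percent

# With only nine qualifying exams, one additional flagged case shifts the
# reported rate by about 11 points.
for flagged in range(10):
    rate = lumbar_mri_rate(flagged, 9)
    print(f"{flagged}/9 flagged -> {rate:5.1f}% (vs. national: {rate - NATIONAL_AVERAGE:+.1f} pts)")
```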

The Department of Health and Human Services (HHS) recently began comparing hospitals on imaging criteria. One of the four measures it chose is MRI of outpatients with low back pain who have not first tried conservative treatment.

Few national standards exist to address the variations in the delivery of services, define the quality of outpatient imaging care, or allow its measurement, HHS said. So the department looked at outpatient data for Medicare patients.

The other measures besides MRI for low back pain are recall rate for mammograms, CT for the chest with and without contrast, and CT for the abdomen with and without contrast.

The spine measure attributes to the radiologist something better attributed to the referring physician, said Dr. David Seidenwurm, vice chair of the American College of Radiology's commission on quality and safety.

“What this is saying is, if you can keep the patient out of the MRI, then you don't unnecessarily medicalize their uncomplicated back pain,” he said. “I have some technical criticisms of the measure. For example, I think they privileged certain things that have no greater proven value than others. And they also don't capture certain clinical management.”

GOOD PLACE TO START

At the same time, you have to start somewhere, Seidenwurm said.

“These particular measures? They're about as close as you're going to get to fair,” he said. “People have this idea that unless a performance measure is perfect, you can't use it.”

Not everyone agrees with Seidenwurm's assessment, particularly when it comes to the mammography measure.

HHS' mammography measure looks at the recall rate, or the percentage of outpatients who had a follow-up study within 45 days of a screening mammogram. HHS recommends that a facility's rate fall between 8% and 14%.

“I definitely think we need measures of quality,” said Dr. Stamatia Destounis, a radiologist at Elizabeth Wende Breast Care in Rochester, NY. “Callback rate may be one of those measures that is targeted, but it cannot be looked at in a vacuum.”

The patient population needs to be considered. Older patients have a higher rate of cancer and thus a higher recall rate. Also, if something shows up on the mammogram of a woman who doesn't come in for her screening exam every year, the radiologist will be more apt to recall the patient because he or she won't have prior film to study, Destounis said.

Other possible variables are the facility's location in the country, the radiologists' experience, how many mammograms they read, how many biopsies they perform, and how many cancers they detect. All of those factors make a difference, Destounis said. The recall rate needs to be examined in context.

“Just looking at one thing is fraught with error. They have to start somewhere, but they need to look at multiple things,” she said.

BREAST IMAGING

Another breast imager, Dr. Jennifer Harvey, chair of the RSNA breast imaging subcommittee, said there is a lot of variability between radiologists regarding recall rates, but a number between 8% and 14% seems reasonable; most will fall in that range.

“However, the far more important number in assessing quality is the cancer detection rate, or how many cancers are diagnosed per 1000 women screened,” she said.

The number varies from three per thousand to eight per thousand and depends on the radiologist's ability to detect cancers as well as the incidence of cancer in the region where he or she is practicing, according to Harvey, also the director of the breast imaging section at the University of Virginia in Charlottesville, VA.

“Recall rate alone is not the best measure of quality. A radiologist could have a recall rate of 7% but a cancer detection rate of only one per thousand,” she said. “Unfortunately, HHS cannot easily monitor cancer detection rates, but it can estimate screening recall rates.”
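
A minimal sketch (in Python, with invented screening volumes; none of these figures come from HHS or any facility) contrasts the two metrics Harvey describes: a facility can post a seemingly better recall rate while detecting far fewer cancers per thousand women screened.

```python
# Hypothetical numbers for illustration only.
def recall_rate(recalled: int, screened: int) -> float:
    """Percent of screening mammograms followed by additional imaging within 45 days."""
    return 100.0 * recalled / screened

def cancer_detection_rate(cancers: int, screened: int) -> float:
    """Cancers diagnosed per 1,000 women screened."""
    return 1000.0 * cancers / screened

screened = 10_000

# Facility A: recall rate inside HHS' suggested 8%-14% window, solid detection rate.
print(f"A: recall {recall_rate(1_100, screened):.1f}%, "
      f"detection {cancer_detection_rate(60, screened):.1f} per 1,000")

# Facility B: lower, 'better-looking' 7% recall rate, but only one cancer per 1,000 found.
print(f"B: recall {recall_rate(700, screened):.1f}%, "
      f"detection {cancer_detection_rate(10, screened):.1f} per 1,000")
```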

The last two measures HHS is using look at CT scans of the chest and abdomen performed with and without contrast. Seidenwurm said these are good measures.

“A lot of CTs of the abdomen, pelvis, and chest need to be done just one way,” he said. “There are some cases where you do both, like for some kidney disease, but usually once is enough.”

While Seidenwurm for the most part approves of the measures HHS chose, others, particularly hospitals, disagree.

“We're very much for reporting on quality,” said Beth Feldpush, senior associate director for policy at the American Hospital Association. “We very much think that imaging efficiency is an area we should target for quality measures. But I don't think the measures that are out there right now have been fully vetted by the scientific community, or are, perhaps, the measures we should be starting with.”

HOSPITAL'S PERSPECTIVE

The AHA wants two groups to give their stamp of approval before the association will sign off on the measures: the National Quality Forum and the Hospital Quality Alliance. The forum reviews quality measures for scientific acceptability; once it deems a measure acceptable, the alliance determines whether that measure is ready for implementation on a national level.

Only two of HHS' four measures were endorsed by the forum: MRI for low back pain and CT scans of the chest. None were approved by the alliance.

“In general, this lack of knowing that gold standard benchmark is a concern,” Feldpush said. “As a provider, from the hospital's perspective, it's very challenging to know what you should be driving toward for improvement.”

From the AHA's perspective, there are no evidence-based guidelines for setting benchmark targets for imaging, so it's hard for a hospital to know whether it's doing a good job. Could it be doing better, or is it already a top performer?

“Comparing to a national average can be helpful,” she said. “In this particular case, because we don't know that any evidence-based guidelines suggest what appropriate rates should be, you can't really infer anything from that national average, other than this is the average of everybody when you roll them all up together and divide by a certain number of hospitals.”

While there are problems with the measures, you can't chase platonic perfection because you'll drive yourself crazy, Seidenwurm said.

“How do you know if you're doing a good job if you don't know how other people around you are doing?” he said. “Is the measure perfect? No. Is it pretty good? Yes.”

The government needs to know whether it's getting what it pays for, or at least give that information to consumers so they can judge, he said.

In the end, performance measurements for physicians are coming, there will be more of them, and that's a good thing, Seidenwurm said.
