Physician Profiling and Radiologists

May 4, 2016

Radiologists aren’t profiled the way other physicians are, but they are still affected, according to a presentation at RBMA 2016.

The trend line of health care inflation is one economists have seen before. The last time they saw similar growth, it was in the housing industry, Ron Howrigon, president and CEO of Fulcrum Strategies, a contract negotiation and practice marketing firm, said at the 2016 Radiology Summit of the RBMA.

It’s well known what happened to the housing industry, and why inflation grew so quickly there. The cause of the growth in health care is less clear, and widely misunderstood.

Researchers affiliated with the Dartmouth Atlas of Health Care, an organization that “works to accurately describe how medical resources are distributed and used in the U.S.,” found wide geographic discrepancies in Medicare spending per beneficiary.

According to Howrigon, this led CMS and insurance companies to blame doctors for the variation in spending.

“The conclusion was drawn, it’s all about the doctors, it’s about their ordering practices,” he said. “A lot of people took that to the bank and said we have to get at the doctor, we have to figure out who those bad doctors are and get them out of our network.”

Thus, the birth of physician profiling.

“The way you tell good doctors from bad doctors is you do physician profiling, which is purely cost based,” Howrigon said. “For the most part, it has nothing to do with quality because that information isn’t in the data.”

Physician profiling is a leading method for CMS and insurance companies to characterize individual physicians’ differences in ordering, practice, and spending. Physicians are then compared to their peers or to a calculated norm, and the result can determine whether a physician is included in an insurer’s network. The data, however, has great limitations, Howrigon said.

“[Profiling] is cost based because all they have is claims data, you have to remember what’s included and what’s not, because claims data has significant limitations to it,” he said. These limitations include:

Adverse Selection: The problem of one physician or group getting more sick or complex patients.

Data Lags: Insurance companies have to wait until all of their data is complete; by the time they are reviewing the claims, the data is usually at least a year old.

Subspecialty: With so many fields becoming subspecialized, it’s harder to compare, for example, an orthopedist who does joint reconstruction to one who does sports medicine.

Quality: Outcomes data isn’t included.

Transparency: Most payers won’t tell you exactly how they did their profiling.

The missing quality factor is especially problematic for radiologists. Ideally, outcomes data could measure quality, but radiologists are paid the same for a given CPT code whether their read is perfect, on point, and definitive, or ambiguous, or dead wrong.

A 2010 RAND study identified further complications posed by physician profiling. Attribution rules, which health plans use to decide which physicians are accountable for which costs, vary widely across payers. The study found that 17%-61% of physicians would be assigned to a different cost category if an attribution rule other than the most common one were used. Under one rule, a set of data can identify a group of doctors as “good” doctors; under an alternate rule, the same data can identify them as “bad” doctors.

It also found that specialty dramatically affects reliability, Howrigon said.

“Vascular surgery, for example, is 5% reliable,” he said. “Because 95% of the time, the data you have profiling a vascular surgeon is wrong; dermatology, on the other hand, is 91% reliable, and that makes sense because that specialty has very little variation in their patients compared to vascular surgery.”

For radiologists, who aren’t profiled the way other physicians are, the problem is indirect. But their referrers are profiled, and radiologists aren’t completely insulated from the penalties.

Howrigon urged all practices to examine their own data and compare it against national and payer statistics, because it’s much better to start the discussion internally than to receive a surprise visit from a payer that has done its own analysis.

He gave the example of a hospital-based radiology practice in West Virginia that was told by one of its payers that its reimbursements would now be tied to the percentage of abdominal studies performed as CT versus ultrasound.

The radiology practice tried to fight back, arguing that it doesn’t order the studies and can’t confront the doctors who do without jeopardizing its contracts. The insurance company didn’t budge, effectively confirming that the practice would be penalized for its referrers’ poor ordering practices.

“This is what happens when the [analysis] comes from the payers,” he said. “I’ve also seen providers start to develop their own interesting quality metrics that they can track and present as things they are doing well: turnaround time, percentage of studies read by subspecialists, etc.”

The payers’ world is changing, and payers are changing with it, Howrigon said. “You need to understand practice variation and its relationship to cost.”