Evidence-based medicine meets diagnostic radiology

Evidence-based medicine is not a new concept. Hippocrates noted over 2000 years ago that while medicine is a mix of science and opinion, "the former begets knowledge, the latter ignorance."

So why are we discussing it so much now? Information overload. More than 50 million medical articles have been published to date. But just 10% of what is published has lasting scientific value, and 50% of today's medical knowledge will be out of date within 10 years.

Textbooks quickly become outdated, journals are too numerous to follow, and so-called experts are sometimes wrong. Even radiologists who subspecialize find it difficult to keep up with the literature.

It is important to use the highest possible level of evidence for the decision you are making and to know when there is no evidence. But evidence on its own is not enough. Clinical decision making involves integrating sound scientific data with clinical judgment.

Finding good evidence starts with a literature search. You should then rank the papers you read by quality and level of evidence. A systematic review provides strong evidence; a conclusion based only on opinion is insufficient.

Studies assessing the impact of diagnostic testing on clinical decision making and patient prognosis should ideally be designed as randomized controlled trials. Observational cohort studies and case-control studies are viable alternatives. It is also important to check which outcome measures are used. Measures of quality in radiology may include patient safety, technical performance, and economic efficiency. But, ultimately, the best measure of quality is whether patients are diagnosed correctly.

The requirements for grading therapeutic studies in terms of evidence are generally well known and accepted. This is not the case for diagnostic tests, such as x-rays or MR scans. Whenever you order a test, you should know from the patient's history the probability that the patient has the disease (the pretest probability). You then need to know the test's sensitivity and specificity, that is, its accuracy. If performing the test raises the probability of disease above a certain threshold, the patient should be treated. If it lowers the probability sufficiently, disease can be ruled out without further testing. If it does not change the probability, don't perform the test.
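
To make the threshold idea concrete, here is a minimal Python sketch of how a pretest probability is updated by a test result using Bayes' theorem. The function name, the probabilities, and the thresholds are illustrative assumptions, not figures from this column.

```python
# Minimal sketch: updating a pretest probability after a test result (Bayes' theorem).
# All numbers below are hypothetical, chosen only to illustrate the threshold logic.

def posttest_probability(pretest: float, sensitivity: float, specificity: float,
                         positive_result: bool) -> float:
    """Return the probability of disease after a positive or negative test result."""
    if positive_result:
        true_pos = pretest * sensitivity                # diseased and test positive
        false_pos = (1 - pretest) * (1 - specificity)   # healthy but test positive
        return true_pos / (true_pos + false_pos)
    false_neg = pretest * (1 - sensitivity)             # diseased but test negative
    true_neg = (1 - pretest) * specificity              # healthy and test negative
    return false_neg / (false_neg + true_neg)

# Assumed decision thresholds: treat above 0.80, rule out below 0.05.
pretest = 0.30
post_pos = posttest_probability(pretest, sensitivity=0.90, specificity=0.85, positive_result=True)
post_neg = posttest_probability(pretest, sensitivity=0.90, specificity=0.85, positive_result=False)
print(f"after a positive result: {post_pos:.2f}")   # ~0.72: more workup or treatment
print(f"after a negative result: {post_neg:.2f}")   # ~0.05: disease effectively ruled out
```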

Radiologists' lives would be simple if the distinction between a diseased population and a nondiseased population were clear. Unfortunately, it is not. Some healthy people will test positive for a disease, and some diseased people will test negative. Sensitivity is the proportion of people with the disease who test positive; specificity is the proportion of people without the disease who test negative.
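
These two proportions fall straight out of a 2x2 table of test results, as in the short sketch below; the counts are invented purely for illustration.

```python
# Sensitivity and specificity from a 2x2 table (counts are made up for illustration).
true_pos, false_neg = 90, 10     # people with disease: test positive / test negative
false_pos, true_neg = 30, 170    # people without disease: test positive / test negative

sensitivity = true_pos / (true_pos + false_neg)   # P(test positive | disease present)
specificity = true_neg / (true_neg + false_pos)   # P(test negative | disease absent)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```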

The prevalence of disease matters as well. For instance, if 2000 people have a dental x-ray, and the prevalence of caries is 50%, then a test with a sensitivity of 0.65 will find cavities in 650 of the 1000 subjects who have diseased teeth. If the specificity is 0.98, then 980 of the 1000 people without cavities will have a clear x-ray. So there will be 350 false negatives, and 20 false positives. But if the disease prevalence falls to 5%, then the positive predictive value (true positives divided by all positives) will drop dramatically, and a lot of people with healthy teeth will be told they have cavities. On the other hand, confidence that a clear x-ray is a true negative should go up.
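
The arithmetic in the dental example can be checked with a few lines of Python. This sketch uses the figures quoted above (2000 patients, sensitivity 0.65, specificity 0.98) and recomputes the counts and predictive values at both prevalences; the function name is just for illustration.

```python
# Reproducing the dental x-ray example at 50% and 5% prevalence.
def predictive_values(n: int, prevalence: float, sensitivity: float, specificity: float):
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sensitivity        # cavities correctly detected
    fn = diseased - tp                 # cavities missed (false negatives)
    tn = healthy * specificity         # clear x-rays in healthy teeth
    fp = healthy - tn                  # healthy teeth flagged (false positives)
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    return fn, fp, ppv, npv

for prevalence in (0.50, 0.05):
    fn, fp, ppv, npv = predictive_values(2000, prevalence, sensitivity=0.65, specificity=0.98)
    print(f"prevalence {prevalence:.0%}: {fn:.0f} false negatives, {fp:.0f} false positives, "
          f"PPV {ppv:.2f}, NPV {npv:.2f}")
```

Running this shows the positive predictive value falling from about 0.97 at 50% prevalence to about 0.63 at 5% prevalence, while the negative predictive value rises from about 0.74 to 0.98, which is exactly the pattern described above.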

The best way to tell whether a test will change the probability of disease is to use the likelihood ratio: the probability of a given test result in people with the disease divided by the probability of the same result in people without the disease. If the likelihood ratio is 1, the test does not discriminate at all; it is useless. The higher the positive likelihood ratio, the more strongly a positive result indicates disease. The lower the negative likelihood ratio, the more confidently a negative result rules disease out.
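
As a rough illustration, the sketch below computes both likelihood ratios from sensitivity and specificity and shows how a positive result shifts pretest odds to post-test odds; the input values are assumed for the example, not taken from the column.

```python
# Likelihood ratios and the odds form of Bayes' theorem (illustrative numbers).
sensitivity, specificity = 0.90, 0.85
lr_pos = sensitivity / (1 - specificity)     # P(test+ | disease) / P(test+ | no disease)
lr_neg = (1 - sensitivity) / specificity     # P(test- | disease) / P(test- | no disease)

pretest_prob = 0.30
pretest_odds = pretest_prob / (1 - pretest_prob)
posttest_odds = pretest_odds * lr_pos                 # after a positive result
posttest_prob = posttest_odds / (1 + posttest_odds)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
print(f"post-test probability after a positive result = {posttest_prob:.2f}")
```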

There are two lessons from all of this. First, predictive values depend on the prevalence of disease. Second, use a highly sensitive test when you want to rule out disease, because a negative result from a sensitive test reliably excludes it, and use a highly specific test when you want to confirm disease, because a positive result from a specific test reliably rules it in.

PROF. ASPELIN is a professor in radiology at the Karolinska Institute in Stockholm. This column is based on a presentation made at the Asian Oceanian Congress of Radiology meeting held in Hong Kong in August.
