Next year, my son will begin driving the family car. Before he does, he'll learn the rules of the road in a classroom and behind the wheel, not to mention undergoing the umpteen hours of driving I have planned for the two of us in parking lots and on side roads. Yet I know that teenagers, though just 7% of licensed drivers, account for 14% of fatalities and 20% of all reported accidents. How do I know? Because statisticians have told me.
Insurance companies, realizing that newbie drivers (a group heavily weighted toward teenagers) have a lot to learn, charge them higher premiums. Several factors contribute to this. Certainly, a lack of maturity is one. But so is the learning curve that goes with using a new technology, such as computer-aided detection.
This fact did not escape Dr. Ferris M. Hall, who wrote in yesterday's New England Journal of Medicine: "One possible flaw in the study by Fenton et al was the failure to assess the time it takes to adjust to computer-aided detection."
I don't know how this fact escaped the attention of the well-qualified researchers whose article in the same issue of the NEJM described CAD screening mammography as doing more harm than good. Their claims are based on data obtained from radiologists who were novices in its use.
Survey a million young people, compare the number of accidents they have behind the wheel with the number among an equal group of experienced drivers, and you'll find the increased risk of rookie drivers, not the overall risk of driving a car.
The data presented in the NEJM support the conclusion that CAD screening mammography, when performed by physicians with experience ranging from two to 25 months, increased the number of false positives, leading to more callbacks and biopsies. They do not support a sweeping generalization that CAD screening mammography in itself reduces accuracy or exposes patients to unneeded biopsies.
It is clear that the researchers did not set out to look at just the efficacy of CAD. Rather, they put together a survey that "measured factors that may affect the interpretation of mammograms (e.g., procedures used in reading the images, use of computer-aided detection, years of experience of radiologists in mammography, and number of mammograms interpreted by radiologists in the previous year)." What popped out at them were the drop in specificity after CAD implementation (from 90.2% to 87.2%), the increase in recall rates (from 10.1% to 13.2%), and the rise in the number of biopsies per 1000 screening mammograms (from 14.7 to 17.6).
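To put those percentages in concrete terms, here is a rough back-of-the-envelope sketch that converts the rates quoted above into approximate counts per 1000 screening examinations. It uses only the figures reported in this article and treats the screened population as uniform, so the per-1000 numbers are illustrative arithmetic, not results taken from the study itself.

```python
# Back-of-the-envelope arithmetic using only the rates quoted in this article.
# Per-1000 counts are illustrative approximations, not values from the paper.

def per_1000(rate_percent: float) -> float:
    """Convert a percentage rate to events per 1000 screening mammograms."""
    return rate_percent * 10  # 1% of 1000 exams = 10 exams

recall_before, recall_after = 10.1, 13.2        # recall rate, %
spec_before, spec_after = 90.2, 87.2            # specificity, %
biopsies_before, biopsies_after = 14.7, 17.6    # biopsies per 1000 screens

extra_recalls = per_1000(recall_after) - per_1000(recall_before)
# Specificity applies to women without cancer, so this is per 1000 cancer-free screens.
extra_false_pos = per_1000(100 - spec_after) - per_1000(100 - spec_before)
extra_biopsies = biopsies_after - biopsies_before

print(f"Additional callbacks per 1000 screens:              ~{extra_recalls:.0f}")
print(f"Additional false positives per 1000 cancer-free screens: ~{extra_false_pos:.0f}")
print(f"Additional biopsies per 1000 screens:               ~{extra_biopsies:.1f}")
```

By this arithmetic, the change the authors flagged amounts to roughly 30 extra callbacks and about three extra biopsies for every 1000 women screened.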
Given that these surveys covered a study period from 1998 to 2002 and that the data were published only yesterday, five years later, it seems reasonable, considering the potential importance of the findings, that the researchers could have added a few years to the database, say through 2005. This would have allowed them to stratify providers by experience at the seven institutions that implemented CAD during the study period and then compare their performance. It might then have been possible to determine whether mammographers' performance improved as their experience with CAD increased and, if so, at what point they became better than they were without it. It might even have provided enough data to determine the effect of software upgrades on performance.
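The stratified comparison suggested here is simple to express. The sketch below is purely hypothetical: the records, field names, and experience buckets are invented for illustration, since the article only proposes such an analysis; a real version would draw on the registry data behind the Fenton et al. study.

```python
# Hypothetical sketch of the stratified comparison proposed above.
# Records, field names, and experience buckets are invented for illustration.
from collections import defaultdict

# Each record: (reader's months of CAD experience, was the exam recalled?)
exams = [
    (3, True), (4, False), (5, False),      # early-experience readers
    (18, False), (20, True), (24, False),   # later-experience readers
]

def experience_bucket(months: int) -> str:
    """Group readers into coarse CAD-experience strata."""
    if months < 6:
        return "0-5 months"
    elif months < 12:
        return "6-11 months"
    return "12+ months"

recalls = defaultdict(lambda: [0, 0])  # bucket -> [recalled exams, total exams]
for months, recalled in exams:
    bucket = experience_bucket(months)
    recalls[bucket][0] += int(recalled)
    recalls[bucket][1] += 1

for bucket, (n_recalled, n_total) in sorted(recalls.items()):
    print(f"{bucket}: recall rate {n_recalled / n_total:.1%} ({n_total} exams)")
```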
But that is not what the researchers did. They published a study with findings that may or may not be valid. In their NEJM paper, the researchers allude to the need for "more precise" testing.
Too bad they didn't take the time to do it themselves.