It’s sometime in the future. AI is far ahead of where it is today, and has been entrusted with many things that are beyond its current abilities.
Healthcare is no exception. Some areas of medicine are more impacted than others: certain specialties still require, or at least strongly benefit from, a human touch.
Fields like radiology and pathology are more readily given over to the machines. It doesn’t happen overnight, of course. Initially, it’s stuff with a low “pretest probability” of being abnormal, like screening mammograms or chest x-rays. Very little pressure is needed from an anxious patient or clinician to get a second, “just making sure” over-read by a human radiologist.
As case-volumes mount over the years, and there’s increasing evidence supporting the accuracy of the AI, the powers that be gradually raise the bar as to what has to happen before a human rad gets involved. After all, the humans are more expensive—and less easily controlled by industry. Not to mention the government and its regulators.
At some point, concerns of diagnostic radiology’s future as a viable career for human physicians go from being alarmist to realistic. Diminishing interest in rad-residencies dovetails nicely with a governmental whittling-down of spots in those residency programs.
Still, the public doesn’t quite embrace the notion of pushing human docs out of the process entirely. And nobody in political office wants to risk their next election over the issue.
Thus we eventually arrive at the situation of two human radiologists constituting the entire field. The more senior, at some point in the later stages of his career, is responsible for training his own replacement.
It’s actually not as hectic a job as one might think, being one of these two radiologists. True, one theoretically provides coverage for half of the population, but only a tiny minority of imaging-studies actually gets approved for human-rad review. It now takes a lot of effort on the part of the clinicians and/or their patients to make this happen. (Having friends in high places helps, too.)
When a case is to be reviewed, the human rad who gets it has to determine whether he substantively agrees or disagrees with the AI’s interpretation. If it’s an agreement (which in defense of the AI is most often so), the case is closed. If it’s not, the case goes on to the other rad for a tie-breaking decision. The public is reassured that any erroneous AI-interpretation can be overridden by two concurring humans.
It’s not a stress-free situation for the rads, however. They know that the powers that be would still very much like to remove expensive, independently-thinking humans from the equation entirely. And the motivation for such removal incrementally increases every time two human rads go on record as saying that the machines have made an error. The industry doesn’t like that, nor does the government that’s endorsed the practice of diagnostic medicine by AI. If you can get rid of these last couple of human rads, there’s nobody left to point out that the machines are still fallible.
Maybe it’s just paranoia, or maybe it’s recognition of how the world works, but the rads also know that it doesn’t do them a lot of good when they disagree with one another. That is, one of them disagreed with the AI’s interpretation, but the other rad then agreed with the machines. Somewhere, they suspect, these cases are being tallied for evidence: Look, these humans who are supposedly keeping the AI honest can’t even agree with one another. How good could they possibly be? Why do we need them?
Maybe one of them even remembers hearing about how similar tracking of QA stats used to bedevil their forebears. And maybe, to counteract this, they quietly get in touch whenever they are considering disagreeing with the AI. Just to make sure they’re ready to support one another.
Still, paranoia or human nature being what it is, they recognize that sooner or later the powers that be will come up with some mechanism, however contrived, to remove them (or a future pair of human rads) from the playing-field. And that it might most easily be with a “divide and conquer” tactic.
After all, it would probably only require one human rad to stand up before the general public and endorse the newest version of the diagnostic AI as a big-deal breakthrough. A game-changer, something the likes of which the field has never seen. And, as a physician charged with healthcare’s quality-control, to aver that his work here is done.
For which, of course, he’d stand to be quietly (yet handsomely) rewarded. Such as with a lofty position in the ranks of the AI’s manufacturer, or the regulatory entity overseeing it.
Perhaps the powers that be will have the largesse to extend this golden parachute to both of the human rads, and they would accept it simultaneously? I suspect not; why give away twice what you absolutely have to? No, I suspect it would be a matter of one rad or the other exiting the field first, leaving the other to a short twilight as the last radiologist before his position got cut.
Until such a juncture arrives, the rads might just wonder: Who will be approached first, and who will accept the offer, effectively cutting short the other rad's career? Who will be the penultimate radiologist?