Answer: A Google news search for this returns 4,269 articles.
Question: What is Watson?
Last week the world was taken by storm as Watson, the supercomputing brainchild of IBM, easily defeated two of the greatest trivia masters on “Jeopardy!”. The nationally televised program prompted many observers to pen articles regarding the impact of this type of technology on the relationship between computers and humans.
Potential areas of transformative change cited included fields as disparate as healthcare and hedge fund management. One NPR story even specifically mentioned radiology as an arena where artificial intelligence may replace humans. So are we destined for the dystopian futures portended by such sci-fi classics as “The Matrix” and “Terminator”? Contrary to Ken Jennings’ final “Jeopardy!” response, I do not welcome our new computer overlords.
To compete on “Jeopardy!” Watson was programmed with vast quantities of knowledge on a range of topics and the ability to sort through it when presented with the clues of a “Jeopardy!” answer. It should come as no surprise, then, that Watson did so well on a game show which requires a huge fund of knowledge and the ability to access this knowledge quickly. Isn’t that what computers were built for?
But what makes Watson remarkable is its ability to do what all other computers before it were incapable of: comprehend natural language, including the puns, wordplay, and metaphors common in “Jeopardy!” clues. Computers will do what you tell them to do only if you speak their special language - computer programming.
However, computers cannot understand natural human language. If I ask my friend to suggest a “money” sushi restaurant, he will understand my query and suggest places that he knows to be good based on his experience. Yes, you can do essentially the same thing with a computer using Yelp or Urbanspoon, but you need those applications to perform that function for you, and the program will likely interpret the slang “money” in the context of cost, not quality. You cannot just speak to your computer and have it understand you and, more importantly, the context of your question. Watson opens the door to this potential.
So what impact will this type of technology have on the field of radiology? As radiologists we review numerous images and base our findings on our experience and expertise, which are in turn based on reading articles and textbooks (our knowledge base). If we program all of these knowledge bases into a computer, then wouldn’t the computer be as good or likely even better than we are?
With the ability to understand natural language, a primary-care physician could ask the computer, following an abdominal CT, “Does my patient have appendicitis?” and the computer could answer, “No, but there is a high likelihood that the patient has diverticulitis.” While this scenario is pretty scary, the advancement of machine intelligence makes it distinctly possible in the near future.
However, the personal relationship between a doctor and a patient can never be replaced. It is important, for example, to diagnose cancer, but how is that information communicated? I do not believe a computer will ever be able to demonstrate compassion or rest a comforting hand on the shoulder of a patient who is hurting.
Radiologists must continue to foster these personal relationships, both in direct patient encounters and in our interactions with referring providers. We must also be honest with ourselves. Currently, computers augment our imaging capabilities, but we know that one day computers will be faster and more accurate at making diagnoses than we are.
In order to distinguish ourselves from machines, we must get out from behind our PACS stations and engage our patients and referring providers in meaningful discussions. If not, we better get used to working for our computer overlords.
Dr. Krishnaraj is a clinical fellow in the abdominal imaging and intervention division, department of imaging, at Massachusetts General Hospital/Harvard Medical School. He can be reached at akrishnaraj@partners.org.