Researchers have demonstrated a dangerous ability to make neural networks “hallucinate” by blending two images together. In a famous example, Google’s Cloud Vision system was tricked into predicting the label “dog” for a clearly visible image of two men on skis, simply because a small number of pixels from a separate dog image were inserted into particular parts of the picture.
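To make the idea concrete, here is a minimal sketch of the pixel-insertion step described above. The arrays and the patch location are hypothetical stand-ins; the real attack carefully optimizes which pixels to insert so that the classifier’s output flips, whereas this sketch only shows that very few pixels need to change.

```python
import numpy as np

def insert_patch(host, patch, top, left):
    """Return a copy of `host` with `patch` pasted at (top, left)."""
    out = host.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Toy 64x64 RGB "skiers" image and an 8x8 "dog" patch (random stand-ins,
# NOT real image data).
rng = np.random.default_rng(0)
skiers = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
dog_patch = rng.integers(0, 256, (8, 8, 3), dtype=np.uint8)

adversarial = insert_patch(skiers, dog_patch, top=4, left=4)

# Count how many of the 4096 pixels actually changed.
changed = np.count_nonzero(np.any(adversarial != skiers, axis=-1))
print(changed)  # at most 64 pixels differ
```

The point of the sketch is the ratio: at most 64 of 4,096 pixels are touched, yet in the reported attack a comparably tiny insertion was enough to change the model’s top prediction.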
Similarly disturbing results have arisen around “backdoor poisoning attacks” on neural networks. By injecting a small number of deliberately mislabeled training images into a training dataset, malicious actors can potentially insert “backdoors” into learning systems, tricking them into reliably predicting particular incorrect labels.
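The poisoning step itself is simple, which is part of what makes it worrying. The sketch below stamps a small trigger pattern onto a tiny fraction of training images and relabels them with an attacker-chosen target class; a model trained on such data can learn to associate the trigger with that label. The dataset, trigger shape, and target label here are all illustrative assumptions, not drawn from any real attack.

```python
import numpy as np

TARGET_LABEL = 7  # attacker's chosen class (hypothetical)

def add_trigger(img):
    """Stamp a white 3x3 square into the image's corner as the trigger."""
    poisoned = img.copy()
    poisoned[-3:, -3:] = 255
    return poisoned

def poison(images, labels, fraction=0.01, seed=0):
    """Trigger-stamp and mislabel a `fraction` of the training set."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(fraction * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels, idx

# Toy stand-in dataset: 1000 random 28x28 grayscale images, 10 classes.
rng = np.random.default_rng(1)
X = rng.integers(0, 256, (1000, 28, 28), dtype=np.uint8)
y = rng.integers(0, 10, 1000)

X_poisoned, y_poisoned, idx = poison(X, y)
print(len(idx))  # 10 samples poisoned, i.e. just 1% of the data
```

Only 1% of the data is altered, yet published results show that rates on this order can be enough to make a trained model reliably misclassify any input carrying the trigger.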
Perhaps most dangerous from a medical perspective, deep learning networks tend to be “black boxes,” which means they are difficult both to explain and to validate. It is quite common to train a neural network that seems to perform incredibly well, only to discover that it has been “cheating” by learning some artifact in the data. In one such example, a model learned to identify horses in images with near-perfect precision, an amazing feat accomplished simply because every horse image in the training and testing sets contained a particular copyright tag! These clear technical vulnerabilities, together with the lack of human-interpretable explanations for model predictions, are strong evidence that AI is still far from replacing doctors; nonetheless, it can provide useful tools for skilled medical practitioners.
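The horse example is a case of “shortcut learning,” and it is easy to see why it happens. In the hedged sketch below, a toy dataset is constructed so that every “horse” image carries a watermark pixel (a stand-in for the copyright tag); a trivial rule that only checks the watermark then scores perfectly without ever looking at a horse. All names and values are illustrative.

```python
import numpy as np

# Toy dataset: 500 random 32x32 images, label 1 = "horse", 0 = "other".
rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 500)
images = rng.integers(1, 255, (500, 32, 32), dtype=np.uint8)

# Plant the artifact: a corner pixel acts as the copyright-tag stand-in,
# present on every horse image and absent on every other image.
images[labels == 1, 0, 0] = 255
images[labels == 0, 0, 0] = 0

# A "model" that never looks at the horse, only at the artifact.
shortcut_prediction = (images[:, 0, 0] == 255).astype(int)
accuracy = (shortcut_prediction == labels).mean()
print(accuracy)  # 1.0 on this artifact-laden data
```

A held-out test set drawn from the same flawed source would show the same perfect score, which is exactly why such cheating can go undetected until the model meets real-world data without the artifact.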
Following his talk, Dr. Ryan Lee took over to moderate the panel session on “Machine Learning and the Future of Radiology,” which included Dr. Mitch Schnall, chairman of Penn Radiology; Dr. Devang Gor, chairman of Lehigh Valley Health Network; Dr. Paras Lakhani, assistant professor at Jefferson University Hospitals; and Dr. Ajay Kohli, radiology resident physician at Drexel University College of Medicine. Many questions were addressed, and there were three important takeaways:
1. Radiologists need to shape the conversation on the future of medical imaging. One of the most important takeaways was that the conversation around AI and radiology is being shaped by software engineers, data scientists and venture capitalists—who not only lack a complete understanding of the intricacies of the profession but also do not hold patient care paramount as do radiologists and clinicians.
Dr. Mitch Schnall made an excellent point when he noted that companies use healthcare data dating back decades. However, data in medicine often ceases to be relevant after just a few years because of rapidly evolving therapeutic and diagnostic tools. This is where medical experts will play an integral part in the development of medical imaging technology—in order to apply technology effectively, it is imperative that developers understand the clinical relevance and significance of healthcare data.
2. Understanding the technology and its limitations will be important for radiologists. Every radiologist has been in the position of having to make the call to differentiate an imaging artifact from true pathology (e.g., volume averaging artifact from true intracranial hemorrhage). This expertise, acquired through training, will not disappear with AI; in fact, it will demand an even deeper understanding of, and experience with, how deep learning technology works. It will therefore be imperative for radiologists to begin training with AI-assisted algorithms.