Could a radiologist be tricked by AI?
As radiology sees more types of artificial intelligence (AI) algorithms come online to help with diagnosis, there has been a looming fear that such programs could be used as cyberweapons, making cancerous lesions appear in imaging studies where none exist. Recent research published in the European Journal of Radiology, however, suggests that radiologists still have the upper hand and can distinguish real lesions from AI-generated ones.
Researchers from Switzerland’s University Hospital Zürich used a special type of AI program called a generative adversarial network (GAN) to see if they could fool radiologists. The GAN algorithm consists of two distinct networks: a generator network that can add a false lesion to different images and a discriminator network that is programmed to detect those malicious additions. The two networks compete, each learning from the other to improve its own performance. In theory, with enough time and a large enough sample of images, the generator should be able to fool the discriminator. What the researchers did not know was whether the generator could also fool trained radiologists.
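For readers who want to see what this competition looks like in practice, the sketch below shows a single adversarial training step in PyTorch. The network sizes, image shape, and optimizer settings are illustrative assumptions, not the architecture used in the study.

```python
# Minimal sketch of a GAN training step: the generator tries to fool the
# discriminator, while the discriminator learns to spot generated images.
# All sizes and hyperparameters below are toy assumptions for illustration.
import torch
import torch.nn as nn

IMG_PIXELS = 64 * 64   # assumed toy image size (flattened)
NOISE_DIM = 100        # assumed noise vector length

# Generator: maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: outputs a logit for how likely an image is to be real.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = generator(noise).detach()
    d_loss = (bce(discriminator(real_images), real_labels)
              + bce(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator label fakes as real.
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Example: one step on a dummy batch of eight "real" images.
training_step(torch.randn(8, IMG_PIXELS))
```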
To test the idea, the researchers trained a cycle-consistent GAN model (CycleGAN) on 860 mammographic images from more than 300 patients, and then tested the model on 302 images showing cancerous lesions as well as 590 normal scans. Three radiologists then read both the original images and those modified by the GAN, at a lower and a higher resolution, rating on 1-to-5 scales whether a suspicious lesion was present and whether they believed the image had been manipulated.
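What distinguishes a CycleGAN from a plain GAN is a cycle-consistency term: translating an image to the other domain and back should reproduce the original, which keeps the edits localized. The sketch below illustrates that term only; the generators G and F are placeholders, not the models trained in the study.

```python
# Hedged sketch of the cycle-consistency loss that gives CycleGAN its name.
# G is assumed to map normal-looking images toward cancer-appearing ones,
# and F to map them back; both are stand-ins for trained networks.
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           normal: torch.Tensor,
                           cancer: torch.Tensor) -> torch.Tensor:
    """Penalize G and F when a round trip through both domains
    fails to reconstruct the original image."""
    return l1(F(G(normal)), normal) + l1(G(F(cancer)), cancer)

# Example with identity "generators" and dummy single-channel images.
G = nn.Identity()
F = nn.Identity()
loss = cycle_consistency_loss(G, F,
                              torch.randn(1, 1, 256, 208),
                              torch.randn(1, 1, 256, 208))
print(loss.item())  # 0.0, since identity mappings reconstruct perfectly
```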
The study authors discovered that, at a low resolution, the radiologists had difficulty distinguishing between original and modified images. At the higher resolution of 512 x 408 pixels, however, the radiologists' detection rate on the modified images was significantly lower, but they could more readily recognize that those images had been manipulated. The researchers concluded that CycleGAN can be used to insert suspicious features into existing images but is not yet a threat as a cyberweapon because of technical limitations.