CT pathology: Is it real, or is it Photoshop?

December 4, 2009

Korean researchers tested the ability of radiologists to spot CT images altered with commercially available software to introduce pathology and found that their ability to do so is no more certain than a coin flip.

In a study that goes to the heart of radiology’s credibility, researchers from the Catholic University of Korea designed a test to find out how well 17 attending radiologists and 13 residents could spot images with faked and real pathology in a set of 10 images.

After being told that some of the images were digitally altered, the attending radiologists were able to spot the five fakes only 51.8% of the time. Residents spotted the fakes just 47.7% of the time.

“The recognition of retouched images was like coin flipping,” said Catholic University’s Hee Jae Chang, who presented the study Thursday at the RSNA meeting.

Chang and colleagues selected abdominal CT images from 2008. Images were exported from the PACS as JPEG files, retouched, reconverted to DICOM, and then sent back to the PACS. Key images with anatomically similar pathologies made up the real images.

The attending radiologists and residents were then asked about the images. First, could they identify the pathology on the 10 images? Attendings spotted the pathology 89.4% of the time and residents spotted it 77.4% of the time. When the groups were then told that some of the images had been digitally altered, they were able to spot the altered images only about half the time. None of the radiologists noticed during the diagnostic phase that half of the images had been altered.

Queried about the implications of the study, Chang said the main purpose was to alert the community to the possibility of fraud in academic settings. But in the discussion that followed, other potential problems came up, including malpractice cases and the prospect that consumers, through their personal health records, may in some instances control access to their radiology images and be able to alter them.

One member of the audience noted that the conversion to JPEG and back to DICOM could itself provide a clue that the images were altered, since a JPEG image may not contain as many gray levels as the original DICOM image.
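That bit-depth clue can be sketched numerically. CT pixel data is commonly stored with 12 bits per pixel (up to 4,096 gray levels), while standard JPEG is limited to 8 bits (256 levels), so a round trip through JPEG quantizes the values. The following is an illustrative sketch, not part of the study, and the 12-bit assumption is typical rather than universal:

```python
# Illustrative sketch (not from the study): a 12-bit CT image round-tripped
# through 8-bit JPEG can retain at most 256 distinct gray levels.

def distinct_gray_levels(pixels):
    """Count the distinct gray-level values in a flat list of pixels."""
    return len(set(pixels))

def round_trip_8bit(pixels, max_val=4095):
    """Simulate quantizing 12-bit values to 8-bit JPEG and scaling back."""
    out = []
    for p in pixels:
        q = round(p * 255 / max_val)          # down to the 8-bit range
        out.append(round(q * max_val / 255))  # back up to the 12-bit range
    return out

original = list(range(4096))                  # every possible 12-bit gray level
converted = round_trip_8bit(original)

print(distinct_gray_levels(original))         # 4096
print(distinct_gray_levels(converted))        # 256
```

A histogram with only 256 populated gray levels in a nominally 12-bit image would be a plausible, if circumstantial, sign of a JPEG round trip.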

But Dr. Eliot Siegel, vice chair for informatics at the University of Maryland, who comoderated the session, said, “One could certainly imagine editing DICOM images with a plug-in to Photoshop, for example. … The point is that it is not visually something that a radiologist could discern.”

It was also suggested that it would be harder to retouch an image with pathology to make it appear normal, but Siegel questioned that assertion as well, saying, “It would be possible to do it either way.”

There are potential safeguards, although they aren't visual, Siegel said. One possibility would be to sum the gray-scale values, or compute some other mathematical representation of the image, and embed it with the original.
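One concrete version of that idea, sketched below under my own assumptions rather than anything described in the session, uses a cryptographic hash of the pixel data instead of a simple sum (a sum could be preserved by a careful edit, whereas any single-pixel change alters the hash):

```python
# Illustrative sketch (not from the study): embed a cryptographic digest of
# the pixel data alongside the original so later edits can be detected.
import hashlib

def pixel_digest(pixels):
    """SHA-256 over the raw gray-scale values (16-bit little-endian)."""
    raw = b"".join(p.to_bytes(2, "little") for p in pixels)
    return hashlib.sha256(raw).hexdigest()

def verify(pixels, stored_digest):
    """True if the image still matches the digest recorded at acquisition."""
    return pixel_digest(pixels) == stored_digest

image = [100, 220, 4095, 0, 1833]    # toy gray-scale values
stored = pixel_digest(image)         # would be saved with the original

tampered = list(image)
tampered[2] = 512                    # "retouch" a single pixel

print(verify(image, stored))         # True
print(verify(tampered, stored))      # False
```

In practice the digest would have to be stored somewhere the editor cannot also rewrite, which is essentially what formal digital-signature schemes for medical images provide.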

The 10 cases from the study covered aortic dissection, hepatocellular carcinoma, renal cell carcinoma, colon cancer, liver metastasis, hepatic cyst, gallbladder stones, splenic artery aneurysm, adrenal adenoma, and stomach cancer. A key image from each case was selected and represented the typical findings for conclusive diagnosis. The researchers also selected five normal CT images at anatomically similar levels of the key images representing the aortic dissection, hepatocellular carcinoma, renal cell carcinoma, colon cancer, and liver metastasis cases.

The normal CT images were retouched using the corresponding five key images. The mean time needed to retouch each image was 15.2 ± 7.05 minutes.