Advanced Visualization: automation gets it going but training brings it home

Diagnostic Imaging, Vol. 32, No. 6

The evolution to isotropic voxels and advances in cheap and powerful computer processing have transformed radiology from a planar science to one that is volumetric. And the rewards couldn't be more in tune with the times: improved lesion detection, reduced x-ray dose thanks to greater clarity, and increased productivity from faster interpretations.

But while many clinicians may want to look at 3D and other advanced visualization options, few want to spend much time creating them. The goal of vendors is for the majority of radiologists to be able to work with advanced postprocessing techniques straight out of the box. But it doesn't work quite that easily. Training is a necessary intermediate step.

Every major vendor of advanced visualization offers training in the use of its products. At Vital Images, which sells the Vitrea workstation, training is broken into modules on segmenting anatomy, disarticulating joints, planning endovascular stents, and peripheral and carotid angiography. 2D leads to 3D, as multiplanar reformatting sets the stage for volumetric reconstructions; wide area detectors, such as Toshiba's Aquilion One, support brain perfusion studies; and multislice CT scanning allows CT colonography.

Radiologists and techs may attend Vital Images' ViTAL U classroom at its Minneapolis headquarters, or clinical applications specialists may conduct training onsite. Alternatively, Vital Images customers may choose training over the web with online demonstrations and hands-on practice accomplished by remote control (see related article on page 10).

Although even a novice can learn the clicks that unlock the best practices embedded in algorithms, creating the best reconstructions depends on knowing how to refine those rough models, as well as on knowledge about anatomy, physiology, and disease. That's why, when it comes to advanced visualization, physicians receive special training from Carestream Health, according to Karen Emaus, who supports U.S. and Canadian customers using the company's information systems products.

“The clinical aspects are clearly targeted toward radiologists and the people who are actually doing the readings,” Emaus said. “They absolutely need to know more, but they don't want it to be complicated.”

Therein lies the challenge in advanced visualization. The great ease with which systems reconstruct data typically shows itself at the fingertips of those who know the software best. This is especially so when the data sets are less than optimal, according to Heather Brown, Ziosoft's director of clinical solutions.

“When the contrast (bolus) is mistimed, or you have an obese patient who creates a lot of noise in the image, software programs frequently fail,” she said. “That's when training comes into play.”

Brown advocates a consultative approach to training, one that focuses less on button pushing and more on problem solving. Training in visualization techniques and an understanding on the user's part of how reconstructions should look are critically important for smart algorithms to reach their potential, she said. These algorithms carry out many of the tedious elements of postprocessing that otherwise would have made the adoption of advanced visualization impractical.

“The software has to be efficient, because at Scripps all the 3D reconstructions are done by radiologists,” said Dr. Nikunz K. Patel, vice chair of radiology at the Scripps Clinic in La Jolla, CA. “Software is getting smarter and its functionality is becoming more intuitive.”

Smart algorithms have paved the way for the simplest advanced visualizations (multiplanar reconstructions and maximum intensity projections) to enter common use. The underlying calculations are relatively easy to perform compared with those in 3D/4D applications, such as cardiac and CT colonography reconstructions, whose adoption remains on the fringes of radiology.
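
To make that difference concrete, here is a minimal Python sketch, assuming NumPy and an isotropic CT volume held as a plain array (the array, its values, and the slice indices are illustrative, not any vendor's software), of why multiplanar reformats and maximum intensity projections are comparatively cheap to compute:

    import numpy as np

    # Stand-in for an isotropic CT volume in Hounsfield units, indexed (z, y, x).
    volume = np.random.randint(-1000, 1000, size=(256, 256, 256)).astype(np.int16)

    # Multiplanar reformat: with isotropic voxels, a coronal or sagittal view is
    # simply a different slice orientation through the same array.
    axial = volume[128, :, :]
    coronal = volume[:, 128, :]
    sagittal = volume[:, :, 128]

    # Maximum intensity projection: keep the brightest voxel along the viewing
    # axis, which is what makes contrast-filled vessels stand out.
    mip = volume.max(axis=0)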

Automation has also narrowed the differences in what the various workstations and PACS can do, boosting reproducibility and reducing dependence on operator skill. Further automation will allow the greater adoption of more sophisticated visualization. But the growing dependence on smart algorithms has raised questions about the validity of interpretations when different workstations or PACS are employed.

There are no standards by which algorithms are created, no specific processes that govern exactly how one algorithm produces one result versus another, said Dr. Jacob Sosna, section chief for CT at Hadassah-Hebrew University Medical Center in Jerusalem. Intensifying the need for such comparisons are the many academic and private research groups developing algorithms for automated processing of medical imaging data.

There is no benchmarking of different techniques, and no comparisons are made among different reconstruction algorithms. This does not mean, however, that algorithms cannot be compared. Sosna has built algorithms that grade automated carotid bifurcation lumen segmentation and stenosis. Tests have demonstrated that different segmentation algorithms can be compared and that metrics can be developed to define the accuracy of segmentation.
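
As a rough illustration of the kind of metric Sosna describes, the sketch below computes a Dice overlap score between two binary segmentation masks; the masks and the helper function are hypothetical stand-ins, not his published method:

    import numpy as np

    def dice(mask_a, mask_b):
        # Dice overlap between two binary masks: 1.0 means identical, 0.0 disjoint.
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: count as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Two hypothetical carotid lumen segmentations of the same data set; either
    # could also be scored against a manually drawn reference.
    alg1 = np.zeros((64, 64, 64), dtype=bool); alg1[20:40, 20:40, :] = True
    alg2 = np.zeros((64, 64, 64), dtype=bool); alg2[22:42, 20:40, :] = True
    print(f"Dice overlap: {dice(alg1, alg2):.3f}")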

A knowledgeable operator serves as a critical factor in the process. Not only can users guide and refine computer-driven segmentation, but they can also ensure the accuracy of the results.

“I personally think that the perfect is the enemy of the good,” said Sandy Napel, Ph.D., a Stanford professor of radiology and co-director of the 3D radiology laboratory.

The radiology community should not wait for, or expect, engineers to build algorithms smart enough to do it all.

“Segmentation should continue being developed in a way that minimizes user interaction but doesn't force it to be zero time,” he said.

At institutions that have 3D labs in which supertechs do the work, quality control programs make sure the work is done right. Laura J. Pierce, who runs the quality control program at Stanford's 3D radiology lab, says that missteps, if not corrected, can lead to anatomical misrepresentations, as in blood vessels that are “overlooked” or structures that are mislabeled. These can cause delays or inappropriate treatment.

In going over the operations at the Stanford 3D lab, Pierce documented 13 common errors, some major, others minor or trivial. Human error was the root of all of them, from missing 3D images to incorrect segmentation and missing measurements.

“If you're rushing through a patient load, you miss that a patient has accessory renal arteries, or annotate a vessel with the wrong name or misspell it,” she said.

Pierce instituted a quality control program in the Stanford lab with striking results. Major errors dropped from 0.6% to 0.1%; minor errors from 5.2% to 1.6%; and trivial errors from 10.3% to 5.8%. Fixes included increased mentoring and periodic retraining, enforced rest breaks, and workflow that discouraged interruptions.

But depending on the radiologist, even the most capable tech may not be up to the job of advanced visualization. Dr. Ranji Samaraweera, chair of radiology at Sparrow Hospitals and Clinics in Lansing, MI, may delegate reconstructions to techs on his staff, but he reconstructs the data again for himself, just to be sure.

“I would never read a study based on what the techs do,” Samaraweera said. “I could easily miss something if I relied on what has been given me.”

Techs don't necessarily understand where to find the pathology, he said. Consequently, their reconstruction may not include what Samaraweera wants to see.

But all users of advanced visualization technology depend on processes in which they are not involved, whether they like it or not. Pattern-matching algorithms work in the background to subtract tissue extraneous to the diagnosis, draw centerlines through vessel lumens, and calculate ejection fractions.
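
The ejection-fraction step, for instance, reduces to arithmetic on volumes measured from segmented masks. A minimal sketch of that calculation follows, with invented masks and voxel spacing standing in for real segmentations:

    import numpy as np

    def volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)):
        # Volume of a binary segmentation mask in millilitres (1 mL = 1000 mm^3).
        voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
        return mask.sum() * voxel_mm3 / 1000.0

    # Hypothetical ventricular segmentations at end-diastole and end-systole.
    ed_mask = np.zeros((200, 200, 200), dtype=bool); ed_mask[50:150, 50:150, 50:66] = True
    es_mask = np.zeros((200, 200, 200), dtype=bool); es_mask[60:140, 60:140, 50:62] = True

    edv = volume_ml(ed_mask)          # end-diastolic volume
    esv = volume_ml(es_mask)          # end-systolic volume
    ef = 100.0 * (edv - esv) / edv    # ejection fraction, %
    print(f"EDV {edv:.0f} mL, ESV {esv:.0f} mL, EF {ef:.0f}%")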

The reliability of these algorithms is remarkable, yet far from perfect. At the International Society of Computed Tomography symposium on multidetector row CT in May, three cases (one of the colon, a second of the heart, and a third of the liver) served as the playing fields for this year's Workstation Face-Off. Seven companies (Carestream, GE, Philips, Siemens, TeraRecon, Vital Images, and Ziosoft) went head-to-head, with luminaries putting each firm's workstation and PACS through exactly choreographed steps.

Variability was obvious in the cardiac case: three systems recorded right ventricular ejection fraction values near 45%, while the other four ranged from a low under 30% to a high above 50%. Other measurements were remarkably consistent. The computer-aided detection programs that located and measured colon polyps showed extraordinary agreement: six of the seven measured the colon polyp at between 7 mm and 8 mm, while the seventh measured it at slightly more than 8 mm.

In the liver case, which examined a patient with metastatic cholangiocarcinoma, all the results showed substantial growth of the dominant lesions. There was variance in the rate of growth: one system showed about 75% growth and a second 150%, with the others somewhere in between. In this case, however, such wide variances were inconsequential, according to Patel.

“Everybody said the tumor was growing, so the conclusion would have been the same,” he said. “Therapy was not working.”

As advanced reconstructions and calculations become part of patient management, they will need to be easily available. Yet today most are done on workstations. Carestream Health, Visage Imaging, and, most recently, Fujifilm Medical Systems USA offer the only PACS with integrated 3D. But others are coming. The functionality of workstations and PACS has been growing closer over the years, with each becoming more like the other. Eventually, advanced visualization will be an integral part of PACS.

The future may take PACS beyond advanced visualization to content-driven diagnosis, whereby a diagnosis made on some patients might help clarify whether the same diagnosis applies to others. This possibility arises with the interconnection of information systems at multiple sites. Images in these systems might be identified according to interpretations made by radiologists or through information contained in the pixels themselves.

“It's been known for many years that good information is contained in images and coded into the semantics of interpretations, information that might allow pattern-based classification,” Napel said. “The bottom line here will be whether particular patterns suggest particular diagnoses.”

Napel and colleagues have performed a pilot study illustrating this potential with liver lesions seen on CT, showing that pixel-based and semantic features could be used to retrieve similar images of lesions seen in portal venous phase CTs. In a peer-reviewed paper scheduled to appear soon in Radiology, the team will report looking at 79 portal venous phase liver CTs from 44 patients showing all sorts of liver anomalies including cysts, hemangiomas, metastases, hepatocellular carcinomas, and abscesses.

Napel found that computer-derived and semantic features can be used to retrieve similar images.
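
A minimal sketch of how such retrieval can work, assuming each lesion has been reduced to a numeric feature vector; the feature values and lesion labels below are invented for illustration and are not the study's actual descriptors:

    import numpy as np

    def cosine_similarity(a, b):
        # Higher values mean the two feature vectors are more alike.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Each lesion is described by pixel-derived measurements (e.g., mean attenuation,
    # a texture score) concatenated with encoded semantic descriptors (e.g., margin type).
    database = {
        "lesion_A (cyst)": np.array([10.0, 0.2, 1.0, 0.0]),
        "lesion_B (metastasis)": np.array([65.0, 0.8, 0.0, 1.0]),
        "lesion_C (hemangioma)": np.array([55.0, 0.6, 0.0, 1.0]),
    }
    query = np.array([60.0, 0.7, 0.0, 1.0])

    # Rank stored lesions by similarity to the query; the most similar cases come first.
    for name, features in sorted(database.items(),
                                 key=lambda kv: cosine_similarity(query, kv[1]),
                                 reverse=True):
        print(name)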

Ultimately this capability may be part of PACS 2.0, the next generation of PACS, which will employ advanced image analysis tools. Such decision-supported PACS might allow radiologists to query databases not only by interpretation or pixel-based parameters such as shape, but by molecular data that can link imaging phenotype to genomics and genotype. Such a decision-supported PACS may even go beyond diagnoses, Napel said, to provide information about the success rates of specific treatments on specific types of patients.

"Content-based retrieval could be just the beginning of helping us get to where we would like to go (with personalized medicine)," he said.
