If CT and MR were invented today, how would we process the data they generate to produce information about medical conditions? Would we still generate anatomical images? Or would we come up with new schemes to characterize and present the data?
It's a provocative question, and one the Society for Computer Applications in Radiology may confront with its Transforming the Radiological Interpretation Process initiative. TRIP, launched at last year's SCAR meeting, is now a year old, and it is moving into a new phase in which a growing list of participants will be asked these types of questions.
How this process works out could lead to radical new directions in medical image interpretation and may promote some of the most fundamental shifts in process since Roentgen discovered x-rays more than 100 years ago.
It's easy to overstate the significance of this effort. After all, medical imaging has proved itself amazingly adaptive and innovative. Still, the interpretation process remains, for the most part, strongly tied to anatomic observation.
That approach is coming up hard against limits. In a white paper published in November, SCAR's TRIP subcommittee gave the following example: An informal study at the Mayo Clinic in Jacksonville, FL, found that 1500 cross-sectional images per radiologist were generated and stored every day in 1994. By 2002, the number had grown to 16,000 images per day. Assuming the same rate of increase, volume could hit 80,000 images per day by 2006. If a radiologist reads one image per second, future volumes will require 22.6 hours per day to interpret, using today's practice strategies, the Mayo study concluded.
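The Mayo figures can be checked with a few lines of arithmetic. The sketch below is my own illustration, not part of the white paper: it derives the implied annual growth factor from the 1994 and 2002 counts and converts the projected 2006 volume into reading hours at one image per second. (Straight division gives roughly 22.2 hours for 80,000 images; the study's 22.6-hour figure presumably reflects its own rounding or assumptions.)

```python
# Rough check of the Mayo Clinic workload figures quoted in SCAR's
# TRIP white paper. The image counts and the one-image-per-second
# reading speed come from the text; the extrapolation style is an
# illustration only.

images_1994 = 1_500    # cross-sectional images per radiologist per day
images_2002 = 16_000
images_2006 = 80_000   # white paper's projection for 2006

# Implied average annual growth factor over 1994-2002 (8 years)
annual_growth = (images_2002 / images_1994) ** (1 / 8)

# Hours per day needed to read the projected volume at 1 image/second
hours_per_day = images_2006 / 3600

print(f"implied annual growth factor: {annual_growth:.2f}x")
print(f"hours to read 80,000 images at 1/sec: {hours_per_day:.1f}")
```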
It's likely that many people see TRIP and think of volumetric imaging, the latest innovation, which is still a mystery to a number of radiologists heavily invested in cross-sectional interpretation. A year ago, for example, our cover story focused on the issue of slice overload ("Huge sets of slices will transform interpretations," May 2003, page 39), and in this space I suggested that 3D and volume interpretation strategies offer a solution to the problem.
But those involved with TRIP are thinking far more broadly. Volume interpretation is one possibility, but in the view of Richard L. Morin, Ph.D., who chairs SCAR's TRIP subcommittee, it may be only a small step. Other solutions are possible. Some examples:
In angiography, we visualize the vessel lumen and look for narrowing. But would it also be possible to quantify these data and display them graphically, in a chart that shows deviations in the size of the lumen?

Pattern recognition is now a maturing technology. What about a system that analyzes cross-sectional data and separates out the normals, allowing radiologists to focus on the images most likely to contain pathology?
To be sure, some of this is already happening. Functional imaging with nuclear medicine allows visualization of biologic processes, although it remains wrapped in an anatomical context. MR spectroscopy also provides an alternative to traditional anatomic observation. A new technique under development will produce MR cine images of blood flow, as is now possible with angiography. The evolution of molecular imaging is likely to bring new visualization and interpretation techniques to the fore.
But as the Mayo example shows, the pace of data collection is accelerating, and the data overload problem is not abating. These developments, and more, will be necessary if radiology and nuclear medicine are to keep pace and maintain leadership of the image interpretation process. SCAR's TRIP initiative has already proved valuable in focusing attention on this issue and is poised to provide important guidance as the interpretation process evolves.
What are your thoughts on this topic? Please e-mail me at firstname.lastname@example.org.