The Battle of New Orleans marked the last of the hostilities in the War of 1812. The Treaty of Ghent that ended the war had been signed two weeks earlier in Europe, but this news was, unfortunately, still making its way across the Atlantic at the time of the January 1815 battle. Had knowledge of the treaty been available sooner, lives might have been spared.
Similarly, radiologists and other physicians struggle to translate their diagnostic observations into actual patient care in a timely fashion. Although the speed of communication is far faster now than it was two centuries ago, it still seems to trail the sophistication and urgency of modern medical treatment.
Speech recognition technology has been available for decades, but it wasn't until the hardware and language model software advances of the mid-1990s that dictation using continuous speech became possible.(1) Adding speech recognition to a busy radiology department became practical to consider, and many administrators and radiologists did so. The cost savings and improved turnaround times achieved even by early adopters of this technology have been well documented.(2) But at what cost? While some radiologists have embraced this technology and are pleased with its realized and potential advantages, others report decreased productivity.(3)
Many users point out that cost savings and improved turnaround time are gained at the expense of their own efficiency and perhaps diagnostic accuracy. A few seem to have saved their worst vitriol for the product. Clearly, the imperfect performance of the speech engine in converting voice to text is a major factor in this dissatisfaction, but it is by no means the only issue. The distractions that come with viewing images while simultaneously controlling the entire reporting process play a role as well. While structured reporting, a general term for the creation of a report using pulldown menus in a decision tree format, will enhance accuracy, it will not eliminate potential distractions and may in fact decrease eye dwell time on the images.
Many historical innovations are incremental. The speech recognition and structured reporting software of the late 20th century can be viewed as a single step in the longer journey of results reporting in medicine. Measures must now be implemented to address the distraction and loss of efficiency brought by this technology.
Nearly two years ago, I wrote in a Diagnostic Imaging supplement about maximizing the efficiency of speech recognition through the use of currently available means and the combination of other technologies to develop an ideal reporting system.(4) At the 2004 RSNA meeting, a number of vendors and researchers demonstrated incremental innovations that, if used together in a single reporting system, could provide the necessary speed and efficiency in radiology reporting.
Speech recognition and structured reporting make a handsome couple. The advantages of structured reporting include the automatic assignment of procedural and diagnostic coding for billing purposes, parallel real-time decision-support tools, standardized lexicon, and virtual elimination of accuracy problems. Most structured reporting companies incorporate speech recognition to drive navigation of the decision tree, and speech recognition companies are considering incorporating more structure into their product.
Potential time savings for the radiologist using this arrangement include minimization of proofreading and the simultaneous creation of the report body and impression. The documented speech recognition advantages of improved turnaround time and cost savings should remain. Such a combination should also decrease the visual distractions of early structured reporting systems. But building a report by individual word or phrase from a decision tree, even when driven by speech command, can still be time-consuming and distracting. One infoRAD exhibit at RSNA 2004 demonstrated the use of speech commands to drive macro creation within a structured reporting environment, creating the potential for further efficiency gains.(5)
The integration of the reporting system with image acquisition modalities, information systems, and other software would further add to radiologist efficiency. In reporting a CT scan of the abdomen and pelvis, for example, the appropriate template can be invoked based on procedure codes and patient demographics such as age and gender, without any user input. Details of the procedure, including contrast use and study dates, would be imported from the radiology information system or the modality and could be modified within these systems at any time by clerical or technologist input.
Rather than dictating or manually entering the procedure technique, the radiologist would have this information downloaded automatically from the RIS or modality. Most radiologists prefer to add a brief patient history to their reports, and this could be downloaded from the electronic medical record, RIS, or PACS. Some reporting systems allow assignment of RIS data to specific report fields to enable this capability.
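The field mapping described above can be sketched in a few lines. In this illustrative Python sketch, the procedure codes, field names, and template text are all hypothetical, not drawn from any actual RIS or reporting product; it simply shows RIS data selecting a template and filling its technique and history sections with no user input:

```python
# Illustrative sketch only: the procedure code, field names, and template
# text below are hypothetical, not taken from any vendor's RIS or
# reporting system.

# A hypothetical report template keyed by procedure code.
TEMPLATES = {
    "CT-ABD-PEL": (
        "TECHNIQUE: {technique}\n"
        "HISTORY: {history}\n"
        "FINDINGS: Liver, spleen, pancreas, kidneys, and bowel are normal.\n"
        "IMPRESSION: No acute abnormality."
    ),
}

def build_draft_report(ris_record: dict) -> str:
    """Select a template from the procedure code and fill its technique
    and history sections from RIS data, with no radiologist input."""
    template = TEMPLATES[ris_record["procedure_code"]]
    return template.format(
        technique=ris_record["technique"],
        history=ris_record["history"],
    )

draft = build_draft_report({
    "procedure_code": "CT-ABD-PEL",
    "technique": "CT of the abdomen and pelvis with intravenous contrast.",
    "history": "62-year-old man with abdominal pain.",
})
print(draft)
```

Because the RIS remains the source of these fields, a clerical or technologist correction made there would flow into the draft report the same way.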
The RIS contains information on prior studies, and the PACS records exactly what studies have been opened for comparison. Downloading this information into the report saves more time for the radiologist.
Numerical data from fetal ultrasound or dual x-ray absorptiometry could automatically populate the appropriate fields in the report, coming directly from the modality without human intervention. Vendors are beginning to incorporate numerical data into reports with varying degrees of automation using the DICOM standard Basic Diagnostic Imaging Report Template.(6)
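The measurement flow can be illustrated with a minimal sketch. Each dictionary below merely stands in for a DICOM SR measurement content item; the concept names and units are examples, not the actual SR encoding of the Basic Diagnostic Imaging Report Template:

```python
# Minimal illustration: each dict stands in for a DICOM SR measurement
# content item. Concept names and units are examples only, not the
# actual DICOM SR encoding.

measurements = [
    {"concept": "Biparietal Diameter", "value": 48.2, "units": "mm"},
    {"concept": "Femur Length", "value": 33.1, "units": "mm"},
]

def populate_numeric_fields(measurements: list) -> dict:
    """Map modality-generated measurements into named report fields
    without human transcription."""
    return {m["concept"]: f'{m["value"]} {m["units"]}' for m in measurements}

fields = populate_numeric_fields(measurements)
print(fields["Femur Length"])  # value arrives from the modality, not a typist
```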
In a completely normal study, all of the above data would be entered into the report automatically, with no intervention by the radiologist other than approval and signature. Abnormal cases could be handled in a number of ways. For a simple renal cyst in the right kidney, for example, the radiologist would likely identify the finding using PACS and measure its size and density. Ultimately, computer-assisted diagnosis software would identify this as a cystic lesion based on its CT density. This information could automatically be passed to the reporting system and replace the appropriate sentence or phrase.
The radiologist could easily identify the phrase to be replaced by either voice command or cursor position, simultaneously altering the impression to reflect the new information. In this simple example, an abnormal report is created with little or no input from the radiologist other than carefully reviewing and annotating the images. More complex findings would require additional input but could be streamlined in a similar manner. Ultimately, CAD findings for multiple diseases and in multiple modalities should provide the radiologist with a near-complete report awaiting approval or modification.
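That substitution step can be sketched as follows. The sentence text and the field-based replacement are hypothetical, meant only to show how a CAD result could overwrite one normal phrase and revise the impression to match, leaving the rest of the draft untouched:

```python
# Hypothetical sketch: a CAD-generated finding replaces the corresponding
# normal phrase in the draft report, and the impression is revised to
# match. Field names and sentence text are illustrative only.

def apply_cad_finding(report: dict, field: str, finding: str, impression: str) -> dict:
    """Replace one normal phrase with a CAD-generated finding and
    update the impression accordingly; the original draft is not mutated."""
    updated = dict(report)
    updated[field] = finding
    updated["impression"] = impression
    return updated

draft = {
    "right kidney": "The right kidney is normal.",
    "impression": "No acute abnormality.",
}
final = apply_cad_finding(
    draft,
    field="right kidney",
    finding="There is a 2.1-cm simple cyst in the right kidney (density 4 HU).",
    impression="Simple right renal cyst. Otherwise no acute abnormality.",
)
```

The radiologist's remaining task is the one described above: review the images, confirm the finding, and approve or edit the draft.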
While CAD might be considered the ultimate decision support tool, other decision support software should be available to the radiologist during image interpretation. Continuing medical education potentially linked to clinical decision support and structured reporting was described at the 2004 RSNA meeting.(7) Another application presented there used a Web-based decision support application as a basis for structured reporting in mammography.(8) Whatever decision support tools are tied to the reporting process, they must be real-time, contextual, and seamlessly accessible.
Interest is growing in the possibility of creating a multimedia report containing both text and annotated images.(9) As reports shift to an electronic format, hyperlinking a word or phrase with the appropriate image will become increasingly useful to the referring physician.
The appropriate word or phrase is linked to a thumbnail image in the margin. The clinician using Web access can view the basic report with a quick download even over a low-bandwidth connection. He or she can then decide which images, if any, need to be viewed and open them by clicking on the thumbnail or the highlighted link. In this way, the key images from a large data set can be quickly and easily accessed nearly anywhere without the need to download a large amount of unnecessary data. This is particularly helpful in environments where high-bandwidth connection is not readily available.
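A minimal sketch of that linking step follows. The file names and markup are illustrative, not any vendor's multimedia report format; the point is that the clinician first downloads only lightweight text and thumbnails, then fetches full images on demand:

```python
# Schematic only: wraps one report phrase in a hyperlink to its key
# image and appends a margin thumbnail. File names and markup are
# illustrative, not a vendor or standards-defined format.

def link_phrase(report_text: str, phrase: str, image_url: str, thumb_url: str) -> str:
    """Hyperlink one phrase to its key image and add a margin thumbnail,
    so only the text and thumbnail travel over a low-bandwidth link."""
    anchor = f'<a href="{image_url}">{phrase}</a>'
    thumb = (
        f'<a href="{image_url}">'
        f'<img class="margin-thumb" src="{thumb_url}" alt="{phrase}"></a>'
    )
    return report_text.replace(phrase, anchor) + "\n" + thumb

html = link_phrase(
    "There is a simple cyst in the right kidney.",
    "simple cyst",
    "images/series4_img23.jpg",
    "thumbs/series4_img23.jpg",
)
```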
The preparation of a multimedia report during image interpretation requires integration between the image viewing software and the reporting system. The creation of the hyperlink must not burden the radiologist with a time-consuming or distracting task. A drag-and-drop methodology described at the RSNA meeting decreased clinician reading time but increased the time spent in report creation.(10) Shifting the burden in this way to the radiologist is cause for concern and requires further study before adoption of this technique becomes widespread.
With increasing patient mobility and data sharing between systems, standardization of the multimedia report becomes highly desirable. One exhibit at the RSNA meeting referenced the coordination of multimedia reporting within the Integrating the Healthcare Enterprise framework.(11) Another described the combined use of the HL7 Clinical Document Architecture Standard with the DICOM draft standard Web Access to DICOM Persistent Objects to display annotated images on a standard Web browser.(12)
The requirements of a radiology report do not end when it is finalized by a radiologist or accessed by the referring physician. The reporting system should facilitate data mining for research, internal audits, and other functions. With a structured report, all data entry is tracked, and these functions can be performed with relative ease. With nonstructured text or in a hybrid environment, natural language processing can help achieve this functionality. A demonstration of this feature at the 2004 RSNA meeting showed the efficacy of an information theory-based search engine used to access pertinent findings from a large number of unstructured radiology reports.(13)
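The kind of query such an archive should support can be shown with a deliberately simple sketch: a plain keyword search, far cruder than the information theory-based engine cited above, but enough to illustrate mining free-text reports:

```python
# Minimal sketch of mining free-text reports: a plain case-insensitive
# keyword search. This is far simpler than the information theory-based
# engine cited in the text; report IDs and text are invented examples.

def find_reports(reports: dict, *terms: str) -> list:
    """Return the IDs of reports whose text contains every query term,
    ignoring case."""
    lowered = [t.lower() for t in terms]
    return [
        rid for rid, text in reports.items()
        if all(t in text.lower() for t in lowered)
    ]

archive = {
    "R001": "Simple cyst in the right kidney. No hydronephrosis.",
    "R002": "Normal CT of the abdomen and pelvis.",
    "R003": "Left renal mass suspicious for carcinoma.",
}
hits = find_reports(archive, "kidney", "cyst")  # → ["R001"]
```

A structured report would make even this unnecessary, since every entry is already a discrete, queryable data element.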
The last, but not unimportant, requirement of a radiology report is to facilitate billing. Automated procedure and diagnostic codes reflecting the final report information approved by the radiologist can be uploaded back to the RIS. Once these requirements are met, a bill can be submitted electronically as soon as the report is finalized.
In the past decade, we have improved report turnaround time from days to minutes, but at the cost of radiologist efficiency. Even though our primary task as radiologists is image interpretation, creating the radiology report takes a significant amount of our time. This report is sometimes our only link with referring physicians and patients.
We would not tolerate a 19th century clipper ship's speed of communication from our vendors. We must likewise demand from them that the efficiency of report creation be brought up to modern-day standards. The technology is available if sufficient research and development funding can be found. The separate elements must be combined into a single efficient reporting package. It will be interesting to see if these incremental innovations begin to coalesce at the 2005 meeting of the Society for Computer Applications in Radiology and beyond.
Dr. Weiss is clinical section head of imaging informatics and director of radiology at Geisinger Medical Center in Danville, PA. Dr. Weiss is a consultant for Agfa.
1. Weiss DL. Speech recognition: evaluation, planning, installation, and use: purchasing a speech recognition system. In: Reiner B, Siegel E, Weiss D, eds. Electronic reporting in the digital medical enterprise. Great Falls, VA: SCAR 2003:27-42.
2. Ramaswamy MR, Chaljub G, Esch O, et al. Continuous speech recognition in MR imaging reporting: advantages, disadvantages, and impact. AJR 2000;174:617-622.
3. Gale B, Safriel Y, Lukban A, et al. Radiology report production times: voice recognition vs. transcription. Radiol Manage 2001;23:18-22.
4. Weiss DL. Speech recognition need not slow reporting time. Diagnostic Imaging 2003;Aug suppl:10-12.
5. Weiss DL, Lui D, Zucherman M. Using macros with voice recognition and structured reporting to increase speed and efficiency. Chicago: infoRAD exhibit, RSNA 2004.
6. Angarra D. DICOM SR Implementation for a multimedia structured report. Chicago: presentation, RSNA 2004.
7. Kahn CE, Nagy PG. Just-in-time learning in radiology: integration with RIS/PACS, structured reporting and clinical decision-support. Chicago: infoRAD exhibit, RSNA 2004.
8. Rubin DL, Burnside ES, Schachter RD. A Web-based decision support system for mammography. Chicago: infoRAD exhibit, RSNA 2004.
9. Reiner BI, Siegel EL, Shastri K. The future of radiology reporting. In: Reiner B, Siegel E, Weiss D, eds. Electronic reporting in the digital medical enterprise. Great Falls, VA: SCAR 2003:83-104.
10. Fukatsu H, Ishigaki, Osada M, Iwasa A. Hyperlinked diagnostic report: drag and drop-based user-friendly interface to create links among phrases on the report and images on the DICOM viewer. Chicago: infoRAD exhibit, RSNA 2004.
11. Kuzmak PM, Dayhoff RE, Christensen JH, et al. Enterprise-wide multidepartmental multimedia electronic patient record. Chicago: infoRAD exhibit, RSNA 2004.
12. Behlen FM, Costea-Barlutiu BS. Using the HL7 CDA (Clinical Document Architecture) standard for radiology reports: a standard format supporting image references and annotation. Chicago: infoRAD exhibit, RSNA 2004.
13. Kalra MK, Dreyer KJ, Maher MM, et al. Application of information theory-based, search engine, LEXIMER for automatic classification of unstructured radiology reports. Chicago: infoRAD exhibit, RSNA 2004.