
Artificial intelligence helps provide decision support in radiology

Radiologists, who bear responsibility for managing an ever-growing body of complex medical knowledge, are beginning to employ artificial intelligence (AI) techniques that allow computers to emulate human faculties such as perception and reasoning in the task of diagnosing disease. Several factors account for the great potential of AI to contribute to improved radiological diagnosis.

First, burgeoning research and technology add complexity to radiology practice, diminishing radiologists' ability to consider available data in their clinical judgments. Second, people confronted with very complex situations tend to make decisions based on heuristics (shortcuts) rather than careful consideration of every possible alternative and its probability. These mental shortcuts lighten the cognitive load of human decision making, but they also can mean a much greater chance of error.1 Third, humans and computers have complementary strengths that, when combined, have the potential to surpass the abilities of either alone. Humans can reason inductively, recognize patterns, apply multiple strategies to solve a problem, and adapt to unexpected events. Computers can store large quantities of information, recall data accurately, perform complex calculations, and execute repetitive actions reliably without tiring.

The idea of computer-assisted decision support is not new. An assortment of systems has been developed over the last 30 years, and leaders in the field have established minimal requirements.2 If physicians are to use computer-aided decision support, the systems must enhance performance, explain and justify results, be flexible as the clinical environment evolves, and learn from experience. These characteristics depend on the AI technique employed, but details of implementation are important as well. For example, any computer aid must be incorporated into the physician's workflow; it is a myth that clinicians will use stand-alone decision support tools.

The most common AI tools used for decision support include Bayesian networks, neural networks, case-based reasoning, and rule-based systems, all of which can aid decision making in diagnostic radiology. As these systems become a part of routine clinical care, it is important that radiologists understand the strengths and weaknesses of each type of decision support tool in order to improve diagnostic accuracy.

BAYESIAN NETWORKS

A Bayesian network (BN), named for Thomas Bayes, the 18th-century British mathematician and Presbyterian minister who developed a mathematical basis for probabilistic inference, is a graphical model that reflects probabilistic relationships among variables in the domain of interest. This AI method consists of three components: the structure, the probabilities, and the inference algorithm.

A Bayesian network's structure is composed of a set of nodes and the connections between them (Figure 1). The nodes represent uncertain variables. Typically, a primary or "root" node represents the variable of interest, and other nodes affect the probability of that primary node. In medical diagnosis, for example, disease is commonly the root node. Patient risk factors, signs, symptoms, and the results of diagnostic tests all have an impact on the probability of disease in that node. Each node contains a set of mutually exclusive and collectively exhaustive instances (the possible states of the variable). The directed connections between nodes signify that one node directly influences the probability of another.

Each node is associated with a conditional probability table that reflects the probability of each instance contained within the node. There are two ways to obtain the probability values: to elicit them from expert opinion, randomized controlled trials, and/or the literature, or to train the Bayesian network using a large data set from which the probability values can be calculated.
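To illustrate the data-driven approach, a conditional probability table can be estimated simply by counting outcome frequencies in a training set. The short Python sketch below is illustrative only; the case records and feature values are invented, and a real network would be trained on a far larger data set.

from collections import Counter, defaultdict

# Hypothetical training cases: (biopsy outcome, mass margin on mammography)
cases = [
    ("benign", "circumscribed"), ("benign", "circumscribed"),
    ("benign", "spiculated"),    ("malignant", "spiculated"),
    ("malignant", "spiculated"), ("malignant", "circumscribed"),
]

counts = defaultdict(Counter)
for outcome, margin in cases:
    counts[outcome][margin] += 1

# Conditional probability table for the mass-margin node given its parent
# (disease): P(margin | outcome), estimated from the observed frequencies.
cpt = {
    outcome: {margin: n / sum(c.values()) for margin, n in c.items()}
    for outcome, c in counts.items()
}
print(cpt)   # e.g. P(spiculated | malignant) = 2/3 with these invented cases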

After both the Bayesian network's structure and conditional probability values are established, an inference algorithm can be used to determine the probabilities of each node based on the available information. Inference is the reasoning process used to draw conclusions from evidence. The inference algorithm in a Bayesian network uses the principles of probabilistic reasoning and Bayes' theorem.
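In its simplest two-state form, the updating performed during inference is Bayes' theorem itself:

P(disease | evidence) = P(evidence | disease) × P(disease) / P(evidence),

where P(evidence) = P(evidence | disease) × P(disease) + P(evidence | no disease) × P(no disease). The prior P(disease) is the pre-test probability, and the result is the post-test probability; in a full network, the inference algorithm propagates this kind of updating through the graph as each finding is observed.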

A Bayesian network may be useful in a clinical context. Figure 1 shows a simple Bayesian network that models the probabilistic relationships between breast disease, patient-specific risk factors, and mammographic findings of mass and microcalcifications. If a 50-year-old woman without a family history of breast cancer or a history of hormone replacement therapy has a circumscribed round mass on mammography, this Bayesian network can calculate a post-test probability of malignancy of 3.6%. In contrast, the network calculates a post-test probability of malignancy of 92.3% for the same patient with a spiculated mass on mammography. The post-test probability of disease provided by a Bayesian network is a valuable piece of information that can be used in decision making and patient-physician communication.
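The exact probability tables behind Figure 1 are not reproduced here, but the following minimal Python sketch shows, with invented illustrative numbers, how a post-test probability of this kind is obtained from a prior probability of malignancy and the likelihood of a mammographic finding under each disease state.

# Minimal sketch of Bayesian updating; all numbers are invented for illustration
# and do not reproduce the conditional probability tables behind Figure 1.
prior_malignant = 0.02               # pre-test probability of malignancy
p_finding_given_malignant = 0.60     # P(spiculated mass | malignant)
p_finding_given_benign = 0.01        # P(spiculated mass | benign)

numerator = p_finding_given_malignant * prior_malignant
evidence = numerator + p_finding_given_benign * (1 - prior_malignant)
post_test = numerator / evidence     # Bayes' theorem: P(malignant | spiculated mass)

print(f"Post-test probability of malignancy: {post_test:.1%}")   # about 55% here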

Many Bayesian networks are in use outside the domain of medical diagnosis. Microsoft's paper clip character, "Clippy," uses a Bayesian network to assist users. In fact, Microsoft considers Bayesian network development critical to its competitive advantage.3 In medicine, a Bayesian network called Pathfinder performed as well as national experts in the field of pathology.4,5 Probabilistic reasoning and Bayesian network modeling have also been developed and tested in radiology.6-9 Some of these systems have been quite successful in using probability theory to generate accurate conclusions based on imaging results. Bayesian networks tested in mammography have been shown to differentiate between benign and malignant breast diseases as reliably as a mammographer.10 These systems can also perform more complex reasoning tasks, such as mammography-histology correlation, and detect sampling error better than radiologists.9

A Bayesian network can calculate the probability of uncertain events based on probability theory, explain to the human user how it reached a conclusion, and merge the knowledge inherent in large data sets with valuable expert knowledge when specific data are not available. Bayesian networks are very efficient and perform inference quickly. On the other hand, building a Bayesian network and assessing relevant probabilities can be difficult and time-consuming. While a Bayesian network can calculate the probability of all nodes in the model based on evidence, it cannot deal with unforeseen data, so variables not included in the model cannot contribute to the post-test probability of interest.

ARTIFICIAL NEURAL NETWORKS

An artificial neural network (ANN) also captures knowledge in graphical form. It is an interconnected assembly of nodes whose functionality is modeled on the biological neuron. The ANN is organized in layers, starting with an input layer, generally depicted on the left, and ending with an output layer on the right (Figure 2). The layers of nodes located between the input and output layers are referred to as hidden layers, and the processing ability of the network is stored in the weights of the connections among the nodes. Multiple hidden layers may be used; the weights are obtained by training on a set of data for which the diagnoses are known. The inference of a neural network is based on the complex associations it has learned between input patterns and their known outputs.
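To make the flow of information concrete, the sketch below passes a three-element input vector through a single hidden layer to one output node. The weights are arbitrary placeholders, not trained values, and NumPy is assumed to be available.

import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Arbitrary placeholder weights; in practice these are learned by training
# on cases whose diagnoses are known.
x = np.array([1.0, 0.0, 1.0])            # input layer: three feature values
w_hidden = np.array([[0.5, -0.3, 0.8],   # weights from the inputs to two hidden nodes
                     [-0.2, 0.9, 0.1]])
w_output = np.array([1.2, -0.7])         # weights from the hidden nodes to the output

hidden = sigmoid(w_hidden @ x)           # hidden-layer activations
output = sigmoid(w_output @ hidden)      # output node, e.g. estimated probability of disease
print(f"Network output: {output:.2f}")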

Today, ANNs are being applied to problems of considerable complexity, such as speech recognition and weather prediction.11,12 The ANNs most familiar to radiologists are found in computer-assisted detection (CAD) for mammography. CAD, now commercially available as a "second reader" to increase the sensitivity of screening mammography, uses ANNs to detect abnormalities on mammography. In these mammography systems, image feature analysis provides inputs into an ANN that has been trained on a large number of images with known outcomes.

Figure 2 provides a simple example of an ANN for distinguishing benign and malignant microcalcifications. Microcalcification descriptors, which could be obtained either from image processing algorithms or from the radiologist, serve as inputs, and the benign or malignant diagnosis serves as the output. CAD for mammography is used clinically in many practices and has been shown to improve breast cancer detection.13 In experimental settings, ANNs have been used to aid in CT diagnosis of malignant lung nodules, closed head injury, and intra-abdominal abscesses.14-16
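As a rough sketch of how such a classifier is trained and applied, the example below uses scikit-learn (assumed to be available); the microcalcification descriptors and labels are invented for illustration and are not clinical data.

from sklearn.neural_network import MLPClassifier

# Each row holds hypothetical descriptors:
# [pleomorphic shape (0/1), linear distribution (0/1), number of calcifications]
X = [[1, 1, 30], [1, 0, 15], [0, 0, 5], [0, 1, 12], [1, 1, 40], [0, 0, 3]]
y = [1, 1, 0, 0, 1, 0]    # known outcomes: 1 = malignant, 0 = benign

# Small network with one hidden layer of four nodes, trained on the known cases
model = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
model.fit(X, y)

# Estimated probabilities of benign vs malignant for a new pattern of descriptors
print(model.predict_proba([[1, 0, 25]]))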

ANNs are excellent pattern-recognition engines and robust classifiers with the ability to generalize in making decisions about imprecise input data. With its complex nonlinear structure, an ANN can find relationships in data that humans are unable to identify. An ANN cannot determine cause-and-effect relationships but only how certain patterns of inputs and outputs are associated with one another. For this reason, ANNs are unable to explain results. The mathematically complex relationships between an ANN's inputs and outputs are incomprehensible to the human user. ANNs can learn from data but cannot currently do so in an ongoing manner as new experience is accrued. Finally, ANNs are not designed to be flexible. They draw conclusions based on static inputs and do not evolve as new variables that might contribute to decision making become available.

CASE-BASED REASONING

Case-based reasoning (CBR), another artificial intelligence technique that can be used for decision support in radiology, consists of multiple cases whose details are catalogued in computer memory. Features and outcomes of all logged cases form the system's knowledge base. To provide information about a new, unknown case, cases in the knowledge base with features similar to the new one are retrieved. The outcomes of these similar cases can be used to reason about the unknown case. For example, the outcomes can be aggregated, and decisions can be based on the proportion of similar cases that have the outcome of interest.

Figure 3 shows an example of a CBR system designed to reason about masses on mammography. The knowledge base consists of cases that have been categorized according to various features of masses. When an unknown case is introduced to the knowledge base, three cases match the unknown case, and two of those are malignant. The proportion of malignant outcomes among the matching cases (two of three) provides an estimate of the likelihood of breast cancer for the unknown case.
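The retrieval-and-aggregation step can be sketched in a few lines of Python. The case base, features, and exact-match rule below are hypothetical simplifications, although the match counts mirror the scenario described for Figure 3.

# Hypothetical case base: stored mass features plus the known outcome of each case
case_base = [
    {"margin": "spiculated",    "density": "high", "outcome": "malignant"},
    {"margin": "circumscribed", "density": "low",  "outcome": "benign"},
    {"margin": "spiculated",    "density": "high", "outcome": "malignant"},
    {"margin": "spiculated",    "density": "high", "outcome": "benign"},
    {"margin": "circumscribed", "density": "low",  "outcome": "benign"},
]

def retrieve_and_aggregate(new_case, case_base):
    # Retrieve stored cases whose features match the unknown case exactly,
    # then report how many matched and what proportion were malignant.
    matches = [c for c in case_base
               if all(c[feature] == value for feature, value in new_case.items())]
    malignant = sum(c["outcome"] == "malignant" for c in matches)
    return len(matches), (malignant / len(matches) if matches else None)

new_case = {"margin": "spiculated", "density": "high"}
n_matches, rate = retrieve_and_aggregate(new_case, case_base)
print(f"{n_matches} matching cases; proportion malignant = {rate:.0%}")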

CBR has been used in real-world applications that include evaluation of farming conditions by satellite and troubleshooting the problems of personal computers.17,18 In radiology, a system called Intelligent Selection of Imaging Studies (ISIS) was developed to help physicians choose diagnostic imaging procedures based on available clinical information.19 The system stored actual cases from a radiology department's records as its knowledge base. When a new patient presented, the clinical situation could be entered into the system. Based on matching cases, procedures such as CT, MR, ultrasound, or angiography were recommended to the referring physician.

Systems that employ CBR can acquire new knowledge by simply adding new cases to computer memory when features and outcomes become available. The system can thus "learn" in an ongoing manner. A CBR system has the ability to explain its results by quantifying the similar features of matching cases that led to the conclusion it reached. CBR systems tend to be quite flexible because new cases can be incorporated into the knowledge base over time.

RULE-BASED SYSTEMS

A rule-based system (RBS) consists of a knowledge base in the form of production (IF-THEN) rules, an inference engine (an algorithm used to apply those rules), and an explanation method that allows the user to interact with the knowledge base. The rules are usually constructed by an expert in the field of interest who can link the facts or evidence with conclusions. When a real-world situation is presented to the computer, it can use these rules to draw conclusions based on different situations in the way an expert would.

The inference engine applies the rules that define the type of RBS in two main ways: by forward chaining or backward chaining. A forward-chaining system starts with the initial facts and uses the rules to draw new conclusions (or take certain actions), given those facts. A backward-chaining system starts with some hypothesis (or goal) and keeps looking for rules that would allow one to reach that hypothesis, perhaps setting new subgoals to prove as it goes. Forward-chaining systems are data-driven, while backward-chaining systems are goal-driven. Whether one uses forward or backward reasoning to solve a problem depends on the properties of the rule set and the initial facts.

Figure 4 shows an example of production rules for a system to aid in decisions about management of a palpable breast mass. These production rules can be more intuitively conveyed in graphical form, as shown in Figure 5. This graph illustrates that the system uses each piece of evidence to determine the appropriate management of a palpable finding. While this is an overly simplistic example from a clinical perspective, it nevertheless demonstrates that such a system can use the fact that a palpable mass demonstrates only calcifications on mammography to recommend a stereotactic core biopsy.
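A minimal sketch of how production rules of this kind might be encoded and applied by a forward-chaining inference loop follows. The rules paraphrase the simplified example of Figures 4 and 5, the second rule is purely hypothetical, and none of this is intended as clinical guidance.

# Each rule: (name, IF-part as a set of required facts, THEN-part as a new fact)
rules = [
    ("R1", {"palpable mass", "calcifications on mammography", "no mass on mammography"},
     "recommend stereotactic core biopsy"),
    ("R2", {"palpable mass", "mass on mammography"},          # hypothetical rule
     "recommend image-guided core biopsy"),
]

def forward_chain(initial_facts, rules):
    # Data-driven inference: keep firing any rule whose conditions are satisfied
    # until no rule adds a new fact.
    facts = set(initial_facts)
    fired = True
    while fired:
        fired = False
        for name, conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
    return facts

evidence = {"palpable mass", "calcifications on mammography", "no mass on mammography"}
print(forward_chain(evidence, rules))   # includes "recommend stereotactic core biopsy"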

MYCIN, an RBS developed in the 1970s to aid physicians in the diagnosis of meningitis, succeeded in elevating the performance of a nonexpert to that of an expert in the diagnosis and treatment of this disease.20 An RBS in radiology called PHOENIX contained knowledge of 54 common clinical presentations such as head trauma and pulmonary embolism, and it used 800 rules to guide referring physicians to the appropriate imaging study.21

RBSs explain conclusions by using the IF-THEN rules. In the mammography example (Figures 4 and 5), a stereotactic core biopsy is recommended because a palpable mass with calcifications and no mammographic mass is identified. Unfortunately, the construction of the knowledge base is onerous. In addition, the rules are often significantly interdependent, so an RBS cannot adapt to change. In PHOENIX, for example, if a new imaging modality becomes available, all the rules must be reviewed to incorporate the new recommendation. Similarly, generalizability may be limited. In MYCIN, for instance, antibiotic susceptibility patterns differ among institutions, so rules developed at one institution may not transfer to another. Finally, an expert builds knowledge into the system that does not change as the system is used; therefore, automatic learning fails to occur.

CONCLUSION

Artificial intelligence techniques have the potential to provide decision support in radiology because they can model complex knowledge and integrate a large amount of information efficiently. Each methodology has its own unique strengths and weaknesses, which are summarized in the accompanying table. Radiologists who use these systems must appreciate the trade-offs inherent in each methodology in order to use them to optimize disease detection, diagnostic accuracy, and decision making in radiology.

Dr. Burnside is an assistant professor of radiology and director of breast imaging research at the University of Wisconsin in Madison. Dr. Kahn is a professor and director of information sciences at the Medical College of Wisconsin in Milwaukee. This article is based in part on a presentation at the 2004 meeting of the Society for Computer Applications in Radiology.

Dr. Burnside has received grants/research support from the GE-AUR Radiology Research Academic Fellowship.

Dr. Kahn has no significant financial arrangement or affiliation with any manufacturer of any pharmaceutical or medical device and is not affiliated in any manner with any provider of any commercial medical or healthcare professional service.

References

1. Kahneman D, Slovic P, Tversky A. Judgment under uncertainty: heuristics and biases. Cambridge: Cambridge University Press, 2001.

2. Teach RL, Shortliffe EH. An analysis of physician attitudes regarding computer-based clinical consultation systems. Comput Biomed Res 1981;14(6):542-558.

3. Helm L. Improbable inspiration. Los Angeles Times, Oct. 8, 1996.

4. Heckerman D, Nathwani B. Toward normative expert systems: part II, probability-based representations for efficient knowledge acquisition and interface. Methods of Information in Medicine 1992;31:106-116.

5. Heckerman D, Horvitz E, Nathwani B. Toward normative expert systems: part I, the Pathfinder project. Methods of Information in Medicine 1992;31:90-105.

6. Lodwick G. A probabilistic approach to diagnosis of bone tumors. Radiol Clin North Am 1965;3(3):487-497.

7. Kahn C, Laur J, Carrera G. A Bayesian network for diagnosis of primary bone tumors. J Digit Imaging 2001;14(2 suppl 1):56-57.

8. Kahn CE Jr., Roberts LM, Wang K, et al. Preliminary investigation of a Bayesian network for mammographic diagnosis of breast cancer. Proc Ann Symp Comput Appl Med Care 1995:208-212.

9. Burnside ES, Rubin DL, Shachter RD, et al. A probabilistic expert system that provides automated mammographic-histologic correlation: initial experience. AJR 2004;182(2):481-488.

10. Burnside ES, Rubin DL, Shachter RD. Using a Bayesian network to predict the probability and type of breast cancer represented by microcalcifications on mammography. Proc AMIA Symp 2004: accepted for publication.

11. Liang QH, Harris JG. The future of artificial neural networks and speech recognition. In: Leondes CT, ed. Intelligent systems: technology and applications. Vol III: Signal, image, and speech processing. Boca Raton, FL: CRC Press, 2003:215-236.

12. Yuval, Hsieh WW. An adaptive nonlinear MOS scheme for precipitation forecasts using neural networks. Weather & Forecasting 2003;18(2):303-310.

13. Warren Burhenne LJ, Wood SA, D'Orsi CJ, et al. Potential contribution of computer-aided detection to the sensitivity of screening mammography. Radiology 2000;215(2):554-562.

14. Matsuki Y, Nakamura K, Watanabe H, et al. Usefulness of an artificial neural network for differentiating benign from malignant pulmonary nodules on high-resolution CT: evaluation with receiver operating characteristic analysis. AJR 2002;178(3):657-663.

15. Sinha M, Kennedy CS, Ramundo ML. Artificial neural network predicts CT scan abnormalities in pediatric patients with closed head injury. J Trauma 2001;50(2):308-312.

16. Freed KS, Lo JY, Baker JA, et al. Predictive model for the diagnosis of intraabdominal abscess. Acad Radiol 1998;5(7):473-479.

17. Li X, Yeh AG. Multitemporal SAR images for monitoring cultivation systems using case-based reasoning. Remote Sens Environ 2004;90(4):524-534.

18. Wang SL, Hsu SH. A Web-based CBR knowledge management system for PC troubleshooting. Int J Advanced Manufacturing Technology 2004;23(7-8):532-540.

19. Kahn CE Jr. Artificial intelligence in radiology: decision support systems. Radiographics 1994;14(4):849-861.

20. Shortliffe E. Computer-Based Medical Consultations: MYCIN. New York: American Elsevier, 1976.

21. Kahn CE Jr. Validation, clinical trial, and evaluation of a radiology expert system. Methods Inf Med 1991;30(4):268-274.

---

CME LLC designates this program for a maximum of 1.0 category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those credits that he/she actually spent in the educational activity.

The American College of Radiology (ACR) accepts activities designated for AMA Physician's Recognition Award (PRA) category 1 credit.

Activities that have been designated for AMA/PRA category 1 credit and are relevant to the radiologic sciences are accepted as category B credit on a one for one basis by the American Registry of Radiologic Technologists (ARRT). Radiologic Technologists may receive a maximum of 12 category B credits per biennium.

Made possible by an unrestricted educational grant from TOSHIBA AMERICA MEDICAL SYSTEMS
