
What is Machine Learning?


It might threaten radiologists’ jobs, but few understand what machine learning actually is. From ACR 2016.

There is a lot we don’t know about machine learning. What is it capable of? Will it steal radiologists’ jobs? And perhaps the most complicated question: What is it?

Machine learning, also known as artificial intelligence, has the hottest buzz, according to Keith Dreyer, DO, PhD, vice chairman of radiology computing and information sciences at Massachusetts General Hospital, who spoke at ACR 2016. Machine learning is built on artificial neural networks capable of deep learning, which mimics how humans learn complex concepts.

“Our biological neural networks do deep learning,” Dreyer said. Deep learning versus shallow learning can be thought of as abstraction versus memorization.

What’s the difference between human and machine, abstraction and memorization, or shallow and deep learning?

“Given an example of presence and absence of a specific concept, computers would use storage to memorize examples exactly as they appear from a shallow learning concept – classic memorization,” Dreyer said. Humans, by contrast, adjust neural weightings to solve for both current and future examples, so we don’t have to have seen something before to know what it is.

If you show a computer a picture of a bunch of oranges and then show it an apple, the computer wouldn’t know what the apple was because it hadn’t seen it before.
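The contrast can be sketched in a few lines of Python. The fruit examples, the weight-in-grams "feature," and the nearest-match rule below are purely hypothetical illustrations, not anything from the talk:

```python
# Toy contrast between shallow learning (memorization) and something
# closer to generalization. All fruits and feature values are made up.

def memorizer(examples):
    """Shallow learning: store (input, label) pairs verbatim."""
    table = dict(examples)
    return lambda x: table.get(x)  # unseen input -> None (no answer)

def generalizer(examples):
    """Crude generalization: label a new input by the nearest stored
    feature value, so inputs never seen exactly still get an answer."""
    def classify(feature):
        _, label = min(examples, key=lambda e: abs(e[0] - feature))
        return label
    return classify

# The memorizer has only ever seen "orange"; it cannot answer for "apple".
m = memorizer([("orange", "fruit")])
print(m("orange"), m("apple"))  # -> fruit None

# The generalizer stored feature values (hypothetical weights in grams),
# so a brand-new value still maps to the closest known concept.
g = generalizer([(150.0, "orange"), (5.0, "grape")])
print(g(140.0))  # -> orange (a value it never saw exactly)
```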

Humans exploit deep learning by classifying new objects, recognizing faces, and recognizing language, Dreyer said. We see examples every day of facial recognition technology (for example, as used by Facebook and Twitter). Computers are also in the process of learning to drive a vehicle, which takes a lot of deep learning, he said.

Humans have biological neural networks that act as pattern recognizers, Dreyer said. “We understand the neurobiology within the networks, and we can even go further to the biochemistry at the synaptic level.”

We don’t have the capability of creating those networks, though; we can only observe them, he said. Through semiconductors, electrical engineering, mathematical modeling, and software, we can create the concept of a neuron and then expand it into an artificial neural network.
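A minimal sketch of that software neuron, assuming the common weighted-sum-plus-activation formulation (the input values, weights, and bias below are arbitrary illustrations):

```python
import math

# A minimal artificial neuron: each input is multiplied by a weight,
# the products are summed with a bias, and the total is passed through
# an activation function that squashes it into a firing strength.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation, in (0, 1)

# Two arbitrary inputs, two arbitrary weights, one bias.
out = neuron([0.5, 0.8], [1.2, -0.4], bias=0.1)
print(round(out, 3))
```

Stacking many of these, with each layer's outputs feeding the next layer's inputs, gives the artificial neural network Dreyer describes.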

Data is brought into the artificial neural networks and then the networks are trained to recognize objects, people, concepts, or diseases, he said.

A key element of the neural network is its ability to learn, according to Daniel Shiffman, author of Nature of Code. It can also change its internal structure based on the information that flows through it. The pathways between neurons, known as connections, each carry a weight, a number that controls the signal between the two neurons, Shiffman wrote. “If the network generates a ‘good’ output…there is no need to adjust the weights. If the network generates a ‘poor’ output, an error, the system adapts, altering the weights in order to improve subsequent results.”

“If you give the network two sets of objects, a set of dogs and a set of cats, if you haven’t trained it at all, there is a 50-50 chance the network will [correctly identify the sets],” Dreyer said. If you adjust the weights and the thresholds in front of every neuron to increase the accuracy, the feedback helps the network understand how to find the most accurate set.
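That feedback loop can be sketched as a toy perceptron. The "ear pointiness" feature, the labels, and the learning rate are invented for illustration; the point is only that the weights start random (so accuracy starts near chance) and get nudged whenever the output is wrong:

```python
import random

# Sketch of the weight-adjustment loop Dreyer describes: random initial
# weights give roughly chance performance; each wrong answer nudges the
# weight and threshold (bias) toward the correct one. The feature is a
# made-up "ear pointiness" score: cats = 1, dogs = 0.

random.seed(0)
data = [(0.9, 1), (0.8, 1), (0.7, 1), (0.2, 0), (0.1, 0), (0.3, 0)]

w, b = random.uniform(-1, 1), 0.0
predict = lambda x: 1 if w * x + b > 0 else 0

for _ in range(20):                  # training passes over the data
    for x, label in data:
        error = label - predict(x)   # 0 when correct, +/-1 when wrong
        w += 0.1 * error * x         # adjust the connection weight
        b += 0.1 * error             # adjust the neuron's threshold

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(accuracy)  # -> 1.0 on this tiny, cleanly separable set
```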

Future Applications and Clinical Opportunities
Ken Jennings famously wrote, “I for one welcome our new computer overlords” after losing to IBM’s Watson on Jeopardy! in 2011. If they haven’t already, computers are on their way to overlord status.

The top five deep learning companies have a combined market capitalization of about $2 trillion, Dreyer said, which gives them a lot of power and a lot of resources to do what they want.

Building the algorithms is complicated, though. Once the correct clinical question is formulated, the necessary volume of clinical data has to be acquired or retrieved.

“You want to accurately label that data such that it answers the clinical question,” Dreyer said. “Most machine learning today is supervised, meaning there are already training sets available.”

Humans have already labeled the data to say, ‘this is pneumonia’ or ‘this is not pneumonia,’ he said. Beyond acquiring the data, it has to be accurately labeled or annotated to answer the questions. “You’d then apply the appropriate data science technique, do the training and validation, and iterate until you get the desired best accuracy of the result, then clinically validate resulting in machine learning appliances, implement them into clinical practice, and then license or commercialize,” Dreyer said.
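Those steps can be sketched end to end. Everything clinical here is a stand-in: the single fake feature per exam, the “pneumonia”/“normal” labels, and the threshold model are hypothetical, not a real clinical algorithm:

```python
import random

# Hedged sketch of the pipeline: acquire data -> label it -> hold out a
# validation set -> train -> validate (and, in practice, iterate).

random.seed(1)
# Steps 1-2: acquire and label data (one synthetic feature per "exam").
labeled = ([(random.gauss(2.0, 0.5), "pneumonia") for _ in range(50)]
           + [(random.gauss(0.0, 0.5), "normal") for _ in range(50)])
random.shuffle(labeled)

# Step 3: hold out a validation set the model never trains on.
train, validation = labeled[:70], labeled[70:]

# Step 4: "train" the simplest possible model -- place a decision
# threshold halfway between the two class means.
mean = lambda xs: sum(xs) / len(xs)
pneu = [x for x, y in train if y == "pneumonia"]
norm = [x for x, y in train if y == "normal"]
threshold = (mean(pneu) + mean(norm)) / 2
predict = lambda x: "pneumonia" if x > threshold else "normal"

# Step 5: validate. A real pipeline would iterate on model and data
# until accuracy is acceptable, then validate clinically and deploy.
acc = sum(predict(x) == y for x, y in validation) / len(validation)
print(f"validation accuracy: {acc:.2f}")
```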

The challenge for machine learning, according to Dreyer, is diagnosing disease before clinical symptoms.

“Patients become data symptomatic before they become clinically symptomatic,” he said. “There is a range of patients that you could treat sooner and get a better prognosis in certain diseases [if] you can mine the clinical, genetic, and consumer data to have machine learning identify those patients.”

Radiologists today use imaging and patient data in the interpretation process, where they classify findings and make recommendations. Dreyer pointed out there can be over 2,500 different findings in imaging pathology, representing about 23,000 different conditions.

“We have to have that available in our head as well as the patient and exam data,” he said. “A lot of that information is lost in the process.”

That doesn’t mean machines are ready to take over as radiologists, though. Regulatory considerations make FDA approval more challenging and time-consuming for devices that perform interpretation, which the FDA designates class III devices.

“Instead if you take it to features of measurements, so the quantification, but not to say what the disease is, FDA calls that a class II device,” Dreyer said. “Which is a much simpler pathway to get FDA approval.”

Class II devices also allow people to create algorithms much faster, he said. “Good news for the radiologist, [interpretation] still requires a human to be able to see the data, elements, and images to make the interpretation.” 

© 2024 MJH Life Sciences

All rights reserved.