How Radiologists are Using Machine Learning
Radiologists’ use of machine learning is becoming a hot topic, and for good reason. While some radiologists worry their jobs will be taken over by software, others look forward to the value this technology provides: faster, more accurate image reading, a second set of eyes, and help with triage. With radiologists facing larger caseloads and decreasing reimbursement, adding artificial intelligence software to the acquisition and interpretation phases can change the profession’s future for the better.
“The jury was out [on machine learning] until maybe RSNA,” said Moshe Becker, co-founder of RADLogics, whose AlphaPoint software acts as a “virtual radiologist,” analyzing images and populating reports with preliminary findings and pertinent medical records. “That was my gut feeling, about the notion of incorporating machine learning into imaging analysis. Now, this is resolved. Now, it’s time to start using it.”
Kevin Lyman, a senior data scientist at Enlitic, agreed that interest in machine learning for radiology skyrocketed at November’s RSNA conference.
“When we first started going to RSNA a couple of years ago, we were the only company doing deep learning in radiology, and people thought we were crazy,” Lyman said. “Now, there’s probably about a dozen companies talking about deep learning in that space.”
Not only are the numbers of start-ups and companies focusing on this rising, but there are also more frequent announcements of high profile partnerships forming between technology companies and health systems or universities to provide the large image sets needed to develop and test the machine learning algorithms.
Here’s what three machine learning technology companies are doing in the radiology space.
Based in San Francisco, Enlitic is using deep learning to look at various types of medical data, starting with radiology. They’re currently focusing on products to make radiologists faster and more accurate, and ideally capable of detecting diseases earlier than they could otherwise.
Their software incorporates medical data from a number of sources, including radiology images, lab results and electronic health records.
While not yet on the market, several of their products are far along in development. The first is a lung nodule detector, initially developed using data from the National Lung Screening Trial. Enlitic then partnered with Capitol Health in Australia for additional radiology data and testing. The tools assist with diagnosing and characterizing potentially malignant lung nodules. They’re now working with the University of California, San Francisco to validate the results and plan to publish a paper this year on the software’s effectiveness. Their internal tests so far show it can characterize lung nodules as malignant 50% more accurately than a panel of radiologists, Lyman said.
Another focal area is detecting abnormalities on screening chest X-rays. They’re working with partners in Asia to develop the software and roll it out clinically this year. Its main focus is second-opinion reads.
Last year, Enlitic conducted a study using extremity fracture detection models to measure how its machine learning software could improve detection accuracy, comparing radiologist performance with and without the software. They hope to publish a paper in the next month or two. Their research shows a 20% improvement in radiologist efficiency and a 10% improvement in both specificity and sensitivity, Lyman said.
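For readers unfamiliar with the metrics, sensitivity measures how many true fractures a reader catches and specificity how many fracture-free studies are correctly cleared. A minimal sketch, using illustrative counts (not data from the Enlitic study):

```python
# Sensitivity and specificity from a binary confusion matrix.
# All counts below are hypothetical, for illustration only.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of actual fractures the reader correctly flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of fracture-free studies the reader correctly clears."""
    return tn / (tn + fp)

# Hypothetical reads: 90 caught fractures, 10 missed,
# 85 correctly cleared studies, 15 false alarms.
print(sensitivity(tp=90, fn=10))   # 0.9
print(specificity(tn=85, fp=15))   # 0.85
```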
While the accuracy of the machine learning is important, so is adoption. They are focusing on developing models that radiologists will actually use, Lyman said, as they’ve seen resistance and hesitation from some radiologists toward adopting new software. To counter this, Enlitic is building deep learning into tools radiologists already use.
In the lung cancer screening product, for example, radiologists often consult textbooks to find rare cases similar to the one they’re trying to diagnose, Lyman said. That can be a lengthy process that takes them out of their workflow. One of Enlitic’s emerging features uses learned image features to surface characteristically similar lung nodules, far faster than a radiologist could search manually.
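A similar-case lookup like this is typically a nearest-neighbor search over feature vectors produced by a trained model. The sketch below assumes made-up embeddings and case IDs; Enlitic's actual feature representation is not described in the article:

```python
# Sketch of "find similar nodules" as nearest-neighbor search over
# learned feature vectors. Embeddings and case IDs are hypothetical;
# in practice the vectors would come from a trained deep-learning model.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical case library: case ID -> feature vector.
library = {
    "case_101": [0.90, 0.10, 0.30],
    "case_102": [0.20, 0.80, 0.50],
    "case_103": [0.85, 0.15, 0.35],
}

def most_similar(query, k=2):
    """Return the k case IDs whose embeddings are closest to the query."""
    ranked = sorted(library.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [case_id for case_id, _ in ranked[:k]]

print(most_similar([0.88, 0.12, 0.32]))  # ['case_101', 'case_103']
```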
“With features like these, we’re not just trying to make them faster and more accurate, but give them more information that can enable them to look at a problem in a completely different way than they’re used to,” Lyman said.
Enlitic’s future extends far beyond radiology. They’re looking at natural language processing: working with medical records and analyzing and integrating medical texts, starting with radiology reports. On the consumer side, it can automate medical billing and coding, helping convert medical codes into billing reports for insurance providers. Natural language processing is a big part of how they’ll enter other areas of medicine, like genomics, pathology, and pharmaceuticals.
“We’re going to see deep learning become an integral part of the entire healthcare ecosystem. Data from one field can be combined with data from many others, creating new opportunities for innovation and insight. Helping hospitals leverage all their data will be a major focus for us in the years ahead,” Lyman said.
Also based in San Francisco, Arterys has two software products with recent FDA clearance. Last November, the FDA gave 510(k) clearance to its 4D Flow software, and in January it cleared Cardio DL. Both are cloud-based image processing technologies. The 4D Flow program is used on cardiac MRI studies to provide blood flow visualization and quantification. Cardio DL applies machine learning for automatic identification and segmentation of the heart’s ventricles.
For the 4D Flow software, Arterys partnered with GE Healthcare on its ViosWorks product, where Arterys provides cloud processing and GE provides software for the MRI machine. Arterys CEO Fabien Beckers was not able to say when the product would go on sale, but it will likely be in the coming months. Beckers said 40 hospitals are using the 4D Flow system for research, and they’ve already analyzed more than 10,000 cases.
Beckers and two Stanford radiologists created the Arterys technology, using cloud computation and gaming graphics processing units (GPUs) to process the medical images offsite. Heart imaging, and flow imaging in particular, is a great showcase for the GPUs’ capabilities: the cardiac files are large and require massive amounts of computing power, making them difficult for radiologists to process on site.
With heart imaging, doctors spend a lot of time tracing the left and right ventricle contours by hand. The Arterys Cardio DL software saves them time by using machine learning to do it, said Beckers, and adds consistency as well. Doctors can override the machine’s output.
“We’re nowhere near trying to take over their work, just to help them,” Beckers said.
In a traditional imaging environment, the radiologist needs PACS, servers, and a workstation for post-processing of the cardiac images. The Arterys applications live in the cloud, with safeguards built in for patient data privacy. Arterys strips patient information from scans at the hospital level, sending only de-identified data; the Arterys system never sees actual patient names. With this technology in place, “it’s much easier to embrace the cloud,” Beckers said. When physicians log in with the right credentials, the Arterys data reconciles with hospital data. He said this system took two years to build and is a critical and unique part of what they do.
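The flow Beckers describes, stripping identifiers before upload and reconciling results after download, can be sketched roughly as follows. The field names, the use of a UUID token, and the in-memory lookup table are all illustrative assumptions, not Arterys's actual design:

```python
# Sketch of a hospital-side de-identify/reconcile flow. Field names,
# the UUID token, and the lookup table are hypothetical; the real
# Arterys implementation is not described in the article.
import uuid

# Hospital-side only: opaque study token -> real patient identity.
# This mapping never leaves the hospital.
_reconciliation_table = {}

def deidentify(study: dict) -> dict:
    """Return a copy of the study safe to send to the cloud."""
    token = str(uuid.uuid4())
    _reconciliation_table[token] = {
        "PatientName": study["PatientName"],
        "PatientID": study["PatientID"],
    }
    scrubbed = {k: v for k, v in study.items()
                if k not in ("PatientName", "PatientID")}
    scrubbed["StudyToken"] = token
    return scrubbed

def reconcile(cloud_result: dict) -> dict:
    """Hospital-side: re-attach patient identity to returned results."""
    identity = _reconciliation_table[cloud_result["StudyToken"]]
    return {**cloud_result, **identity}

study = {"PatientName": "DOE^JANE", "PatientID": "12345",
         "Modality": "MR", "Pixels": "..."}
safe = deidentify(study)
assert "PatientName" not in safe  # nothing identifying leaves the hospital
result = reconcile({**safe, "Finding": "ventricle segmentation"})
assert result["PatientName"] == "DOE^JANE"
```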
With more data available in the long run, Arterys will add similar-case analysis, sharing statistics with physicians on which treatments have been most effective. This will also make treatment more consistent, Beckers said. He also sees future potential for the platform in oncology, in cases like tracking tumors and their variability.
Silicon Valley company RADLogics developed AlphaPoint software, which uses machine learning to populate preliminary findings, including imaging analysis and appropriate medical records information, in a radiology report. The company has been working on the software for almost seven years.
“Every radiologist would love to have a helper that does the grunt work and makes sure they don’t make mistakes or miss something,” said RADLogics co-founder Moshe Becker. The software processes time-consuming measurements and findings. The company doesn’t anticipate it will replace radiologists, but rather assist them. “Radiologists are valuable as diagnosticians. But, just as humans aren’t that great at pixel counting or being a visual search engine, we’re helping them go through the work more accurately, more consistently, and save reading time.”
Typically, he said, there’s a tradeoff between quality and quantity of reads. It’s like another set of eyes looking over your shoulder, while improving efficiency. This is one reason Becker calls the software a “virtual resident.”
They developed a machine learning image analysis platform that goes through imaging data from various modalities. They have FDA clearance for chest CTs and are working to expand its use to other CT reads. They will submit the head CT package to the FDA later this year.
“We focus on the plain vanilla cases, those where 20% of the cases create 80% of the volume,” Becker said.
The software is also clinically validated for chest X-ray, and they’ll be pursuing FDA clearance for that this year, as well. MRI packages are also in the works, with plans to have a few of them clinically validated and submitted to the FDA later this year.
To develop AlphaPoint, researchers initially spent several weeks sitting next to radiologists around the world, watching them work and figuring out how they use their time.
“We saw that close to 80% of the time is spent essentially on pixel hunting or finding, hunting and measuring.” That takes time and is error-prone, he said. “We try to make this into science, into a consistent, robust helper to radiologists so they can focus more on the diagnostic work.”
Once radiologists have the preliminary findings and measurements, figuring out the diagnostic picture takes only seconds, he said, which is a small percentage of the reading time. “We want to help the radiologist focus on that, not on pixel counting.”
Their system first focuses on the image, and then integrates medical records. That way, Becker said, there’s no bias in what the algorithm looks for, and it’s easier to locate and report incidental findings.
“Don’t tell me what the patient has,” he said. “Give me the data; we’ll figure it out based on that.”
There’s an option for AlphaPoint to connect to the electronic medical records, providing additional contextual information to the radiologist.
Like the other software featured here, the imaging analysis happens in the cloud. The results are integrated into PACS and reporting systems. AlphaPoint fills in the report template, just like a resident would do with a preliminary report. The radiologist can manually change what’s there.
A couple of Silicon Valley hospitals are currently using AlphaPoint to demonstrate clinical value for publication. A dozen other sites use it as software as a service (SaaS). The RADLogics fee model involves no upfront installation costs (and no actual installation).
“It’s a zero footprint solution,” Becker said.
The data flows to the cloud. They allow radiologists to try the software at no cost, and are now moving into the paid market stage.
Machine learning companies in the radiology space are moving beyond questioning whether this technology will be accepted. What’s keeping Becker up at night is not his competition.
“It’s the rate of uptake over the next few years: getting to scale in real clinical use,” he said. “That’s the big question in this market. It’s going to start happening more quickly over the next three to four years than it has been.”