Deep learning technology applied to medical imaging may become the most disruptive technology radiology has seen since the advent of digital imaging.
On the image processing side, deep learning algorithms will help select and extract features from medical images as well as construct new ones; this will lead to representations of imaging studies never seen before.
On the image interpretation front, deep learning applications will help not only identify, classify, and quantify disease patterns from images, but will also make it possible to measure predictive targets and build actionable prediction models of care pathways.
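To make the feature-extraction idea concrete: the early layers of a deep network compute convolutional feature maps over an image, much like the hand-crafted filters of classical image processing, except that the network learns its kernels from data. The sketch below is purely illustrative, using a toy 8x8 "scan" and a hand-designed edge kernel rather than any real imaging pipeline:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image, summing products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 8x8 "scan" containing a bright square (a hypothetical lesion-like region).
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# A hand-crafted vertical-edge kernel; a deep network *learns* many such
# kernels in its early layers instead of having them designed by hand.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

feature_map = conv2d(image, edge_kernel)
print(feature_map.shape)  # (6, 6)
```

The resulting map responds strongly along the left and right borders of the bright square, which is the kind of low-level representation that deeper layers combine into the "never seen before" study-level representations described above.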
Is “radiologist replacement” the simplest way to describe this disruption? Or should that perhaps be softened to “virtual radiologist assistant”?
While the medical imaging community contemplates these questions, deep learning in health care and imaging continues to thrive, advancing as quickly as it is in several industries outside of health care.
Researchers and vendors moving this field forward have a bold recommendation for radiologists:
“Embrace it, it will make you stronger; reject it, it may make you irrelevant.”
In the Wake of IBM Watson, the Burgeoning of a Whole New Field
IBM Watson recently boosted itself with more than $4 billion worth of new assets through the acquisition of Phytel (population health), Explorys (cloud), Merge (imaging), and Truven (analytics). The Merge buyout represents Watson’s full-force, billion-dollar entry into the imaging arena. Imaging may be a fairly small area compared to Watson’s health care-wide sphere of activity, but it is certainly one of the most strategic for the group.
IBM Watson’s project, code-named Avicenna, which the combined IBM and Merge entity showed as a work in progress at RSNA 2015 and HIMSS 2016, builds on the company’s earlier imaging work on the IBM Sieve project.
The industry giant has set out to create the new future market it is envisioning. Meanwhile, a host of smaller companies are also establishing their presence in this dynamic new niche. Each one seems to be coming in with an approach and a technology that are highly innovative, differentiated, and promising.
The Need to Capitalize on Imaging Big Data
Clearly the acquisition of Merge, valued at about 3.1 times its revenues and 30 times its projected 2015 earnings, would not have reached 10 figures had it only been a technology- and market-oriented acquisition.
Another important perceived value-add for IBM Watson lies in the access to large imaging datasets that Merge provides. These big datasets are crucial to help train, tune, and validate Watson’s algorithms once applied to medical images.
Consequently, in executing the Merge acquisition, IBM will likely be looking to tap into the vast amount of imaging studies that may now be within reach. This could come from Merge’s PACS and VNA customers with large datasets on premises, or from customers of Merge’s hosted iConnect VNA.
There is every reason to expect that Watson will seek to enter into agreements with each of these customers for secondary use rights of images. Gradually the pools of images will be leveraged, a process that has already begun.
Different Use Case, Different Time to Market
In imaging, Watson is ultimately targeting a full-fledged decision and diagnostic support system designed to assist radiologists in their interpretations and physicians in their treatment decisions.
Watson would achieve this through a deep learning back-end, validated prior data, and leverage over all relevant data sources, with patient information at its disposal across various IT systems and clouds.
However, even IBM would acknowledge that this “holy grail” of artificial intelligence in medical imaging is probably several years down the line. In all likelihood, Watson will phase in different use cases, gradually transitioning from “soft” use cases to “hard” use cases.
For example, a “soft” use case would have a deep learning application raise a flag (“this case may be urgent”), while a “hard” use case would have it make a prediction (“this tumor is benign”). Put another way, a “soft” use case has the application suggest a true positive, while a “hard” use case has it ascertain a true negative.
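One way to picture the soft-versus-hard distinction is as a matter of decision thresholds on a model’s output probability: a soft use case sets a low bar and errs toward flagging cases for a radiologist, while a hard use case makes a definitive-style call only at near certainty. The sketch below is a minimal illustration; the function name and thresholds are assumptions, not any vendor’s actual product behavior or clinically validated cutoffs:

```python
def triage(p_malignant, flag_threshold=0.30, hard_call_threshold=0.99):
    """Map a model's probability to a 'soft' or 'hard' use-case output.

    Soft use case: low bar, errs toward flagging cases for a radiologist.
    Hard use case: near-certain calls only; everything else stays with
    the human reader. Thresholds are illustrative, not clinical.
    """
    if p_malignant >= hard_call_threshold:
        return "hard: tumor likely malignant"   # definitive-style call
    if 1 - p_malignant >= hard_call_threshold:
        return "hard: tumor likely benign"
    if p_malignant >= flag_threshold:
        return "soft: this case may be urgent"  # raise a flag only
    return "no flag: routine read"

print(triage(0.45))   # soft: this case may be urgent
print(triage(0.005))  # hard: tumor likely benign
```

The design point is that the same underlying model can serve both use cases; what changes, and what regulators scrutinize, is how aggressively its outputs are turned into claims.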
The Big Regulatory Question Mark
This means the U.S. Food and Drug Administration (FDA) has a few more years to formalize its guidance on how it will handle the accelerating stream of deep learning applications already heading its way.
In fact, a number of the “softer” use-case applications are already FDA-cleared, having presented a valid predicate, or were deemed not to require regulatory approval at all. The regulatory situation here is reminiscent of the early days of computer-aided detection (CADe) devices, which are cleared under the 510(k) process.
Conversely, future “harder” use-case applications will likely be regulated as Class II, if not Class III, devices and require large randomized clinical trials built on large pools of data. There, it is safe to expect many trials and tribulations, as has always been the case with computer-aided diagnosis (CADx) applications, which undergo the premarket approval (PMA) process. That is, unless a de novo classification on the part of the FDA, similar to what happened in digital pathology, comes in to save the day.
Predictive and Prescriptive Use Cases Need More Time
Essentially, deep learning applications relying on retrospective or comparative analytics are already making their way to the market, while predictive and prescriptive modeling use cases are poised to be heavily regulated.
• First to Market: Research use cases have already begun. Zebra Medical Vision, for one, is already partnering with dozens of facilities on various research projects.
• Next to Market: Next-generation computer-aided detection use cases. Recent examples are the FDA clearances of RadLogics (2012) and HealthMyne (2016).
• Concurrently: Population health analytics use cases are coming up as well. Zebra Medical Vision is confident it will go live in the Dell Cloud in 2016; in addition, a number of other companies are awaiting FDA decisions or preparing to file.
• Third in Line: Clinical decision support use cases should be ramping up over the next three to five years.
• Down the Line: Diagnostic decision support use cases are likely five or more years away. However, many in the industry argue that it will be on the “more” side, because the FDA is not willing to cross that line.
Beyond Regulations: What Outcomes Can Deep Learning Really Drive?
It may be a little naïve to project the adoption of deep learning in medical imaging from a current assessment of the state of the industry, from observations that it is picking up steam, or from speculation surrounding regulatory developments.
The real question is: what can deep learning do to contribute to the various clinical, operational, and financial outcomes providers are working towards in their transition to value-based care?
This is the type of question health systems, ACOs, IDNs, hospitals, departments, and physicians are raising as they evaluate deep learning technologies.
Big Market Deployments Kicking Off Outside U.S.
In Australia, radiology service provider Capitol Health Limited has already arrived at its answer by partnering with Enlitic for an “end-to-end transformation of medical diagnostics using deep learning for radiologists and healthcare providers.”
Therefore, if deep learning proves it can help achieve some of these high-level goals, making radiologists more productive, diagnoses more accurate, decisions more sound, and costs more manageable, then and only then does it become a no-brainer: deep learning will revolutionize the field of medical imaging, even if the revolution has to begin outside of the U.S.