Novel deep learning model can provide needed information from multi-modal imaging even when some modalities are absent.
Alzheimer’s disease can be diagnosed earlier with multi-modal imaging by using a novel machine learning model even when some modalities are missing, according to a new study.
Researchers from a multi-institutional team developed a model that outperforms currently available alternatives, and they presented their work at this year’s Radiological Society of North America (RSNA) annual meeting.
Developing tools for early detection of Alzheimer’s disease is critical because early identification can help slow progression of the disease. To date, healthcare providers have looked to neuroimaging as an avenue for catching signs of the condition, but more data are needed for these scans to be successful.
To meet this need, the team, led by Fleming Y. Lure, Ph.D., created a novel transfer learning-based machine learning model that can provide diagnosis and prognosis of mild cognitive impairment due to Alzheimer’s disease even with varying availability of imaging modalities, such as MRI, FDG-PET, and amyloid-PET.
“Our research provides a clinical tool to assist physicians in diagnosis and prognosis of Alzheimer’s disease when disease is still early for their patients, which has tremendous clinical benefits,” Lure said.
For the study, Lure’s team included 241 patients with mild cognitive impairment from the Alzheimer’s Disease Neuroimaging Initiative database. Of the group, 97 had mild cognitive impairment due to Alzheimer’s disease. Within two years, 26 individuals with mild cognitive impairment had converted to Alzheimer’s disease, and another 46 had converted within six years. The team divided the patients into four sub-cohorts based on the imaging available: MRI only; MRI and FDG-PET; MRI and amyloid-PET; and all three modalities.
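The article does not describe the model’s architecture, so the following is only an illustrative sketch of one common way a single model can accept the four sub-cohorts above: zero-filling absent modalities and appending a presence mask. The modality names, feature sizes, and helper function here are hypothetical, not drawn from the study.

```python
import numpy as np

# Hypothetical per-modality feature sizes; the study's actual
# features and architecture are not described in the article.
MODALITIES = {"mri": 4, "fdg_pet": 3, "amyloid_pet": 3}

def build_input(features):
    """Concatenate per-modality feature vectors with presence flags.

    Missing modalities are zero-filled and flagged with a 0 mask bit,
    so one model can accept any sub-cohort: MRI only; MRI + FDG-PET;
    MRI + amyloid-PET; or all three modalities.
    """
    parts, mask = [], []
    for name, dim in MODALITIES.items():
        if name in features:
            vec = np.asarray(features[name], dtype=float)
            assert vec.shape == (dim,), f"bad shape for {name}"
            parts.append(vec)
            mask.append(1.0)
        else:
            parts.append(np.zeros(dim))  # placeholder for absent scan
            mask.append(0.0)
    return np.concatenate(parts + [np.array(mask)])

# An MRI-only patient: both PET slots are zero-filled,
# and the trailing mask reads [1, 0, 0].
x = build_input({"mri": [0.1, 0.2, 0.3, 0.4]})
print(x.shape)  # (13,) = 4 + 3 + 3 features + 3 mask bits
```

The mask lets downstream layers distinguish "modality absent" from "modality present with zero-valued features," which is one reason this pattern is often preferred over plain zero-filling.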
Based on their analysis, the team determined that the machine learning model achieved substantially better accuracy than the competing model for both diagnosis and prognosis in each sub-cohort.