The latest studies and news, from AI research to a new tracer.
AI Offers “Expert-Level” Detection of Brain Hemorrhage
Acute intracranial hemorrhage (ICH), sometimes referred to as a “brain bleed,” shares symptoms with several other neurological conditions. Today, emergency departments rely on CT scans to detect this life-threatening condition, and even the most experienced radiologists can sometimes miss its subtle signs on these relatively low-resolution images. Now, researchers from the University of California, San Francisco and the University of California, Berkeley have demonstrated that a deep learning artificial intelligence (AI) algorithm can provide “expert-level” detection of brain hemorrhage in a new study published in the Proceedings of the National Academy of Sciences, not only performing at the same standard as expert radiologists but finding tiny brain bleeds that those experts overlooked.
The researchers used a single-stage, end-to-end, fully convolutional deep learning neural network to help identify what are usually very small abnormalities that must be detected on images known for poor soft tissue contrast and a low signal-to-noise ratio. They trained the algorithm on a data set of over 4,000 CT exams in which ICH abnormalities were manually highlighted at the pixel level.
While the authors acknowledged this data set was small, they argue that the pixel-level supervision approach allowed for joint classification and segmentation, enabling the network to better detect potential abnormalities in the test data. In addition, the authors used a technique called PatchFCN, in which each training image is contextualized with the images that come immediately before and after it in the stack.
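The adjacent-slice context idea can be illustrated with a small sketch: each axial slice is stacked with its immediate neighbors to form a multi-channel input, so the network sees what lies just above and below. This is a minimal, hypothetical illustration of the concept, not the authors' PatchFCN code; the function name and padding choice are assumptions.

```python
import numpy as np

def stack_adjacent_slices(volume):
    """For each axial slice, stack it with the slices immediately
    before and after it as a 3-channel input. Edge slices are padded
    by repeating the boundary slice. Input shape: (num_slices, H, W);
    output shape: (num_slices, 3, H, W)."""
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    return np.stack(
        [padded[i:i + 3] for i in range(volume.shape[0])], axis=0
    )

# Toy 10-slice CT volume; each stacked input would feed the network.
ct = np.random.rand(10, 64, 64)
inputs = stack_adjacent_slices(ct)
print(inputs.shape)  # (10, 3, 64, 64)
```

The center channel of each input is the slice being labeled; the outer channels supply the spatial context that helps distinguish a true bleed from slice-to-slice noise.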
When the researchers compared the algorithm’s performance with that of four experienced, American Board of Radiology-certified radiologists on a test set of 200 head CT exams, they found that the algorithm successfully detected ICH, achieving 100% sensitivity at specificity levels approaching 90%. The algorithm also identified some abnormalities missed by the expert reviewers: five positive cases had been judged negative by two of the four radiologists.
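For readers unfamiliar with these metrics, sensitivity is the fraction of true bleeds flagged and specificity is the fraction of clean scans correctly cleared, both obtained by thresholding the model's probability scores. The sketch below shows the computation on toy data; the function name, threshold, and numbers are illustrative assumptions, not figures from the study.

```python
import numpy as np

def sensitivity_specificity(labels, scores, threshold):
    """Threshold probability scores into binary predictions, then
    compute sensitivity (true-positive rate among positives) and
    specificity (true-negative rate among negatives)."""
    preds = scores >= threshold
    labels = labels.astype(bool)
    sensitivity = preds[labels].mean()      # bleeds correctly flagged
    specificity = (~preds[~labels]).mean()  # clean scans correctly cleared
    return sensitivity, specificity

# Toy data: 3 positive exams, 7 negative exams, one false positive.
labels = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.7, 0.7, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05])
sens, spec = sensitivity_specificity(labels, scores, threshold=0.65)
print(sens, spec)  # sensitivity 1.0 at specificity ~0.86
```

Sweeping the threshold trades one metric against the other, which is why the study reports sensitivity at a given specificity level rather than a single number.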
The researchers concluded that their algorithm could be used to augment radiologists’ abilities to detect ICH in the emergency department and plan to continue testing it in follow-on studies.
Radiology Organizations Push Ethics Discussions Regarding AI
In a show of solidarity, the American College of Radiology (ACR), European Society of Radiology (ESR), Radiological Society of North America (RSNA), Society for Imaging Informatics in Medicine (SIIM), European Society of Medical Imaging Informatics (EuSoMII), Canadian Association of Radiologists (CAR) and American Association of Physicists in Medicine (AAPM) have come together to publish a guidance statement providing ethical scaffolding for the continued development of AI applications in radiology. The statement was published simultaneously in four leading radiology journals: the Journal of the American College of Radiology, Radiology, Insights into Imaging and the Canadian Association of Radiologists Journal.
The statement authors highlighted three particular areas of ethical concern as the field moves forward: the generation, sharing, and use of the data that fuel such algorithms; the algorithms themselves, which often have a “black box” deep learning aspect; and how the platforms built from those data and algorithms will actually be used in radiologic practice. They argue that AI currently lacks appropriate standards to govern its development and use in the field, and that it is up to radiologists themselves to help provide that guidance.
The statement authors hope that the statement will help radiologists become more informed about both the promise and the potential perils of AI use, and where, how, and when they might get involved to help shape policy, practice, and clinical standards regarding its use in the future.
A Silent MRI for Pediatric Myelination Measures
MRI has become a radiologic mainstay, helping to diagnose and track the progression of a number of medical conditions. However, it is quite loud and uncomfortable, even for adult patients. Pediatric patients, especially infants, are usually sedated before a scan to help them tolerate the noise, which can reach 130 decibels of gradient clicks, clanks, and whirs. Despite that measure, many patients wake during the procedure and move, which can skew and invalidate the results.
A new study from China’s Yangzhou University suggests that a technique called “silent” MRI, which significantly reduces scanner noise, offers clinicians a better option for measuring myelination in pediatric patients. The study was published in Academic Radiology.
Silent MRI technology is specifically designed to reduce MRI scanning noise to normal background levels. However, studies suggest the methods used to muffle the noise can lengthen read-out times and blur the resulting images. Even less is known about its performance in diffusion-weighted imaging.
As such, the researchers at Yangzhou University compared the image quality of conventional and silent MRI for measuring myelination in pediatric patients aged 3 to 36 months.
The researchers concluded that silent MRI is appropriate for pediatric diffusion-weighted imaging studies: ambient noise levels are reduced, and the images are more diagnostically useful than those from conventional MRI techniques.
New PET Tracer Can Help Detect Cancer, Lung Disease
With the right “tracer,” or binding peptide, PET imaging remains an important diagnostic tool for a variety of cancers and other medical conditions. A new study in Nature Communications highlights the development of a new tracer, a cystine knot peptide that selectively recognizes integrin αvβ6, a protein receptor known to be overexpressed in a number of medical conditions, including oral squamous cell carcinoma, pancreatic ductal adenocarcinoma (PDAC), intestinal gastric carcinoma, ovarian cancer, and stage III basal cell carcinoma, and to serve as a marker of metastasis in a variety of other cancer types. Integrin αvβ6 is also known to play a role in the development of idiopathic pulmonary fibrosis (IPF).
Researchers from Stanford University engineered the new peptide to better identify pancreatic cancer in PET imaging studies. Their work quickly showed, however, that it could also detect other types of cancer, as well as IPF. To demonstrate its effectiveness, the group conducted a small clinical trial comparing the use of the new peptide in healthy individuals and in patients who had been diagnosed with cancer or IPF. Although it was a small trial, they were able to evaluate the tracer’s safety and pharmacokinetics in the healthy volunteers, as well as its ability to visualize multiple cancer types in participants who had already been diagnosed with disease.
The researchers plan further, extended testing to prove the efficacy of the tracer, but they believe such cystine knot tracers can have great clinical utility in the future, given how many conditions show an overexpression of the integrin αvβ6 protein.
AI-Radiologist Team More Accurate Than Either Alone
As more studies show that AI algorithms can match (or sometimes even surpass) the accuracy of trained radiologists across a variety of conditions, many have warned that deep learning models may replace radiology readers. However, a new study published in IEEE Transactions on Medical Imaging suggests that an experienced radiologist whose reads are enhanced by AI may be the best option for interpreting breast cancer screening exams.
Researchers at New York University, hailing from both the NYU School of Medicine and the NYU Center for Data Science, developed a deep convolutional neural network to screen mammography images for breast cancer. The system was trained and evaluated on a data set of more than 1,000,000 images and detected cancer with an accuracy rating of approximately 89.5%.
When they compared the AI program with a group of 14 experienced radiologists, they found the model was as accurate as the doctors, in line with many of the AI-versus-radiologist studies currently flooding the literature. Beyond validating the model, however, the researchers wanted to test a hybrid reading model, in which the program averages the probability of malignancy predicted by the radiologist with the prediction from the neural network. When the AI and the radiologist worked together, reading accuracy increased to 90%, suggesting medicine may benefit from pairing experienced readers with proven AI tools.
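The hybrid step described above can be sketched in a few lines: take the radiologist's estimated probability of malignancy and the network's predicted probability, and average them. This is a minimal illustration of the averaging idea, not the NYU team's implementation; the function name and toy numbers are assumptions.

```python
import numpy as np

def hybrid_prediction(radiologist_prob, model_prob):
    """Combine a radiologist's estimated probability of malignancy
    with a neural network's predicted probability by simple averaging."""
    return (np.asarray(radiologist_prob) + np.asarray(model_prob)) / 2.0

# Toy example: three screening cases; reader and model disagree on case 2.
reader = np.array([0.2, 0.8, 0.6])
model = np.array([0.3, 0.4, 0.7])
combined = hybrid_prediction(reader, model)
print(combined)  # averaged malignancy probabilities per case
```

Averaging lets each source temper the other's mistakes: a case the reader misses but the model flags (or vice versa) still receives an elevated combined score, which is consistent with the accuracy gain the study reports for the hybrid reads.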