ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
   
Translation of AI into the Clinic
Oral
AI & Machine Learning
Thursday, 09 May 2024
Summit 2
16:00 - 18:00
Moderators: Morteza Esmaeili & Efrat Shimron
Session Number: O-62
CME Credit

16:00 | 1393.
Improved ex-vivo cerebral microbleed detection using self-supervised learning with fuzzy segmentation
Grant Nikseresht1, Arnold Evia2, David A. Bennett2, Julie A. Schneider2, Gady Agam1, and Konstantinos Arfanakis2,3
1Computer Science, Illinois Institute of Technology, Chicago, IL, United States, 2Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, IL, United States, 3Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States

Keywords: Diagnosis/Prediction, Aging, Ex-Vivo Applications, Brain, Microbleeds

Motivation: Accurate and efficient detection of cerebral microbleeds (CMBs) on postmortem MRI is necessary for MR-pathology studies on the relationship between CMBs and cerebral small vessel disease (SVD). 

Goal(s): The development and improvement of an automated detection framework for identifying cerebral microbleeds (CMBs) on MRI scans of community-based older adults.

Approach: Fuzzy segmentation, a novel self-supervised auxiliary task based on CMB data synthesis, is proposed for pre-training a CMB detection model alongside other state-of-the-art SSL methods.
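As a concrete illustration, the rotation-prediction pretext task used alongside the proposed fuzzy segmentation can be sketched as follows; the patch size and helper name are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def make_rotation_task(patch, rng):
    """Self-supervised rotation-prediction example (hypothetical helper):
    rotate a 2D patch by a random multiple of 90 degrees and return the
    rotated patch together with its rotation label."""
    k = int(rng.integers(0, 4))      # label in {0,1,2,3} -> 0/90/180/270 degrees
    return np.rot90(patch, k), k

rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))
rotated, label = make_rotation_task(patch, rng)
```

A model pre-trained to predict `label` from `rotated` learns anatomy-aware features without any manual annotation.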

Results: Self-supervised pre-training with fuzzy segmentation and rotation prediction led to an 11% increase in average precision for automated CMB detection on postmortem MRI. 

Impact: This study establishes a new state of the art for postmortem CMB detection using self-supervised learning. Automated CMB detection on postmortem MRI will enable future MR-pathology studies into the links between CMBs and neuropathology observed at autopsy, such as cerebral amyloid angiopathy.

16:12 | 1394.
Cerebral microbleed detection on susceptibility weighted imaging using solely artificial training data
Jonathan A. Disselhorst1,2,3, Caroline Hall4, Punith B Venkategowda5, Alessandra Griffa4, Vincent Dunet2, Tobias Kober1,2,3, Gilles Allali4, and Bénédicte Maréchal1,2,3
1Advanced Clinical Imaging Technology, Siemens Healthineers International AG, Lausanne, Switzerland, 2Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 3LTS5, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, 4Leenaards Memory Centre, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 5Siemens Healthcare Pvt. Ltd., Bengaluru, India

Keywords: Diagnosis/Prediction, Machine Learning/Artificial Intelligence, microbleed, ARIA, SWI

Motivation: Cerebral microbleeds (CMBs) are small brain hemorrhages, detectable with MRI, that are associated with conditions such as cerebral amyloid angiopathy. Because their detection can be difficult, automated methods are needed for quick and precise detection and localization of CMBs.

Goal(s): To propose an algorithm to detect CMBs.

Approach: A neural network was trained on SWI/T2* images, with artificial bleeds generated and added during training. The model’s performance was tested on an independent test set with actual CMBs.
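The idea of adding artificial bleeds during training can be sketched as below; the Gaussian-blob model, radius range, and intensity depth are illustrative assumptions, not the published synthesis procedure:

```python
import numpy as np

def add_synthetic_bleeds(image, n_bleeds, rng, radius_range=(1.0, 3.0), depth=0.8):
    """Insert hypointense Gaussian blobs mimicking microbleeds on SWI
    (illustrative parameters). Returns the augmented image and a binary
    mask of the inserted bleeds, usable as a training label."""
    img = image.astype(float)        # work on a float copy
    mask = np.zeros(image.shape, dtype=bool)
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_bleeds):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.uniform(*radius_range)
        blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * r ** 2))
        img -= depth * blob          # microbleeds appear as signal voids on SWI
        mask |= blob > 0.5
    return img, mask
```

Because the bleed locations are known by construction, detection labels come for free, which is what allows training without any real CMB annotations.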

Results: Despite the absence of real CMBs in the training data, the simulated bleeds provided sufficient information to train a model with good performance in the independent test set.

Impact: We propose an algorithm that can help with the tedious radiological task of detecting cerebral microbleeds in the brain. We further demonstrate that a model trained solely on simulated bleeds can effectively detect actual microbleeds in real MRI data.

16:24 | 1395.
Automatic segmentation of spinal cord multiple sclerosis lesions across multiple sites, contrasts and vendors
Pierre-Louis Benveniste1,2, Jan Valošek1,2,3,4, Michelle Chen1, Nathan Molinier1,2, Lisa Eunyoung Lee5,6, Alexandre Prat7,8, Zachary Vavasour9, Roger Tam9, Anthony Traboulsee10, Shannon Kolind10, Jiwon Oh5,6, and Julien Cohen-Adad1,2,11,12
1NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montréal, QC, Canada, 2Mila - Quebec AI Institute, Montréal, QC, Canada, 3Department of Neurosurgery, Faculty of Medicine and Dentistry, Palacký University Olomouc, Olomouc, Czech Republic, 4Department of Neurology, Faculty of Medicine and Dentistry, Palacký University Olomouc, Olomouc, Czech Republic, 5Department of Medicine (Neurology), University of Toronto, Toronto, ON, Canada, 6BARLO Multiple Sclerosis Centre & Keenan Research Centre, St. Michael's Hospital, Toronto, ON, Canada, 7Department of Neuroscience, Université de Montréal, Montréal, QC, Canada, 8Neuroimmunology Research Laboratory, University of Montreal Hospital Research Centre (CRCHUM), Montréal, QC, Canada, 9School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada, 10Departments of Medicine (Neurology), Physics, Radiology, University of British Columbia, Vancouver, BC, Canada, 11Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montréal, QC, Canada, 12Centre de Recherche du CHU Sainte-Justine, Université de Montréal, Montréal, QC, Canada

Keywords: Diagnosis/Prediction, Multiple Sclerosis, Deep Learning, Segmentation, Spinal Cord

Motivation: Longitudinal analysis of spinal cord multiple sclerosis (MS) lesions is clinically relevant for the early diagnosis and monitoring of MS progression.

Goal(s): Develop a deep learning tool for the automatic segmentation of MS spinal cord lesions on PSIR and STIR images from multiple sites.

Approach: An nnUNet model was trained and tested on the baseline data and applied to follow-up scans to create lesion distribution maps.
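Given segmented lesion masks registered to a common template, a lesion distribution map of the kind described here can be computed as a voxel-wise lesion frequency; this is an illustrative sketch with a hypothetical helper name, not the authors' pipeline:

```python
import numpy as np

def lesion_frequency_map(masks):
    """Voxel-wise lesion frequency across subjects: the fraction of
    subjects with a lesion at each voxel. Assumes all binary masks are
    already registered to a common (e.g. template) space."""
    stack = np.stack([np.asarray(m).astype(bool) for m in masks])
    return stack.mean(axis=0)
```

Comparing such maps between baseline and follow-up scans, or between phenotypes, gives the spatio-temporal lesion distributions mentioned in the results.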

Results: We demonstrated the utility of the model to map the spatio-temporal distribution of MS lesions across MS phenotypes. The model is packaged as open-source software.

Impact: Automatic segmentation of spinal cord lesions in large cohorts helps to identify signatures of MS phenotypes for ultimately improving prognosis and optimizing treatment for people with MS.

16:36 | 1396.
Detection and Quantification of Acute Ischemic Lesions using Deep Learning-Based Super-resolution Portable Low-Field-Strength MRI
Yueyan Bian1, Long Wang2, Jin Li1, Xiaoxu Yang1, Erling Wang1, Yingying Li1, Chen Zhang3, Lei Xiang4, and Qi Yang1,5
1Department of Radiology, Beijing Chaoyang Hospital, Beijing, China, 2Subtle Medical, Shanghai, China, 3MR Research Collaboration, Siemens Healthineers, Beijing, China, 4Department of Radiology, Beijing Chaoyang Hospital, Shanghai, China, 5Laboratory for Clinical Medicine, Capital Medical University, Beijing, China

Keywords: AI/ML Image Reconstruction, Ischemia

Motivation: The diagnostic performance of portable low-field-strength MRI (LF-MRI) is constrained by low spatial resolution and signal-to-noise ratio.

Goal(s): To evaluate the performance of SynthMRI, LF-MRI, and real high-field-strength MRI (HF-MRI) in detecting and quantifying ischemic lesions.

Approach: We developed a deep learning-based model to generate synthetic super-resolution (3T) MRI (SynthMRI) from LF-MRI (0.23T), and evaluated the performance of SynthMRI, LF-MRI, and HF-MRI in detecting and quantifying ischemic lesions.

Results: SynthMRI demonstrated high sensitivity in detecting the number and locations of ischemic lesions. Moreover, SynthMRI exhibited strong correlations with HF-MRI in the quantitative assessment of ischemic lesions, significantly stronger than those of portable LF-MRI.
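Per-patient quantitative agreement of this kind can be summarized, for example, with a Pearson correlation between lesion volumes measured on the two image sources; the helper below is an illustrative sketch, not the study's analysis code:

```python
import numpy as np

def lesion_volume_agreement(vol_a, vol_b):
    """Pearson correlation between per-patient lesion volumes from two
    image sources (e.g. SynthMRI vs. high-field MRI). Inputs are
    matched sequences of volumes, one entry per patient."""
    a = np.asarray(vol_a, dtype=float)
    b = np.asarray(vol_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])
```

A correlation near 1 indicates that the synthetic images preserve the quantitative lesion burden measured at high field.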

Impact: Synthetic super-resolution MRI overcomes the limitations of low spatial resolution and signal-to-noise ratio in portable low-field-strength MRI. It has the potential to replace high-field-strength MRI in the neuroimaging of acute ischemic stroke, enabling portable low-field-strength MRI examinations with comparable performance.

16:48 | 1397.
3D Segmentation of Subcortical Brain Structure with Few Labeled Data using 2D Diffusion Models
Jihoon Cho1,2, Hyungjoon Bae3, Xiaofeng Liu2, Fangxu Xing2, Kyungeun Lee3, Georges El Fakhri4, Van Wedeen2, Jinah Park1, and Jonghye Woo2
1School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, Korea, Republic of, 2Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States, 3Daegu Gyeongbuk Institute of Science and Technology, Daegu, Korea, Republic of, 4Yale School of Medicine, New Haven, CT, United States

Keywords: Diagnosis/Prediction, Brain

Motivation: Deep learning-based segmentation methods have shown promising results; however, they require a large number of segmentation labels for training, which is very costly to obtain, especially for 3D labels.

Goal(s): Our goal is to achieve promising 3D segmentation results with few labels by exploiting the ability to capture semantic information from 2D diffusion models trained without labels.

Approach: We train simple pixel classifiers using features extracted from 2D diffusion models that have been trained with slices from three orthogonal orientations.
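Training a simple per-pixel classifier on frozen features can be sketched as below; a nearest-centroid rule stands in for the small classifier heads, and the class name and feature shapes are assumptions for illustration only:

```python
import numpy as np

class PixelClassifier:
    """Minimal per-pixel classifier over frozen feature vectors: a
    nearest-centroid stand-in for the lightweight heads typically
    trained on diffusion-model features (illustrative only)."""

    def fit(self, feats, labels):
        # feats: (N, C) per-pixel feature vectors; labels: (N,) class ids
        self.classes_ = np.unique(labels)
        self.centroids_ = np.stack([feats[labels == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, feats):
        # assign each pixel to the class with the nearest centroid
        d = ((feats[:, None, :] - self.centroids_[None]) ** 2).sum(axis=-1)
        return self.classes_[d.argmin(axis=1)]
```

Because the feature extractor is trained without labels, only the tiny classifier needs annotated pixels, which is what makes the few-label setting workable.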

Results: In our experiments on the Human Connectome Project database, our proposed method outperformed conventional segmentation methods in few-label scenarios.

Impact: Our proposed method for segmenting subcortical brain structures can be readily applied to pre-trained diffusion models with only a few labeled data, while also generating paired segmentation labels for the images produced by diffusion models.

17:00 | 1398.
Enhancing Deep Learning-Based Liver Vessel Segmentation on MRI with Image Translation Techniques
Yanbo Zhang1, Ali Bilgin2,3, Sevgi Gokce Kafali4,5, Brian Toner3, Timo Delgado4,5, Eze Ahanonu3, Deniz Karakay3, Wenqi Zhou4,5, Sabina Mollus6, Stephan Kannengießer6, Vibhas Deshpande7, Sasa Grbic1, Maria Altbach3, and Holden H. Wu4,5
1Siemens Healthineers, Princeton, NJ, United States, 2Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, United States, 3Department of Medical Imaging, University of Arizona, Tucson, AZ, United States, 4Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, United States, 5Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States, 6Siemens Healthineers, Erlangen, Germany, 7Siemens Healthineers, Malvern, PA, United States

Keywords: AI Diffusion Models, Segmentation, Liver Vessel Segmentation

Motivation: To improve liver vessel segmentation on MRI under annotation constraints.

Goal(s): Apply an advanced unpaired image translation technique, SynDiff, to create synthetic MR images from CT data.

Approach: By incorporating vessel masks in the translation process, the optimized SynDiff models generated synthetic images that facilitated more effective pretraining of segmentation models.

Results: Validated across multiple pretraining settings, the refined SynDiff approach surpassed the standard nnU-Net and other pretraining-based methods, substantially improving liver vessel segmentation performance.

Impact: This study substantially advances liver vessel segmentation on MRI, demonstrating that synthetic data can effectively augment limited datasets and improve model performance. It has great potential for broader applications in medical image analysis.

17:12 | 1399.
Multimodal Approach for CDR: Vision-Language Integration for Enhanced Clinical Dementia Rating Classification in Alzheimer’s Disease
Joonhyeok Yoon1, Minjun Kim1, Sooyeon Ji1, Chungseok Oh1, Hwihun Jeong1, Kyeongseon Min1, Jonghyo Youn1, Taechang Kim1, Hongjun An1, Juhyung Park1, and Jongho Lee1
1Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea, Republic of

Keywords: Analysis/Processing, Alzheimer's Disease, multimodal, language-text

Motivation: The diagnosis of Alzheimer's disease (AD) considers not only clinical symptoms but also various data sources, including MR imaging.

Goal(s): In this study, we used a multimodal approach integrating language and vision information to improve the performance of a clinical dementia rating (CDR) classification network.

Approach: We used contrastive pre-training with language and vision data pairs; then trained a classifier, freezing the pre-trained network during classifier training.
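Contrastive language-vision pre-training of this kind typically minimizes a symmetric InfoNCE loss over an image-text similarity matrix. The sketch below shows that generic loss in NumPy; the temperature value and function name are assumptions, not the authors' setup:

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over an image-text similarity matrix.
    Matched pairs sit on the diagonal; each row/column is treated as a
    classification over the batch (illustrative temperature)."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature              # (N, N) cosine similarities
    targets = np.arange(len(logits))                # diagonal = matched pairs

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)        # numerically stable log-softmax
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(l)), targets].mean()

    return (xent(logits) + xent(logits.T)) / 2      # image->text and text->image
```

After this pre-training, the encoders are frozen and only the downstream CDR classifier is trained, as described above.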

Results: The results show that the integrated model achieved the highest accuracy. In addition, the contrastive learning process improved the performance of the vision encoder with the guidance of abundant linguistic information.

Impact: With multimodal training, we successfully integrated vision and language information, and the integrated model yielded the best results. Multimodal training also enhanced the vision encoder's performance; when only limited language information was available, the complementary contribution of visual information was greater.

17:24 | 1400.
Predicting Anatomical Tumor Growth in Pediatric High-grade Gliomas via Denoising Diffusion Models
Daria Laslo1, Maria Monzon1, Divya Ramakrishnan2, Marc von Reppert2, Schuyler Stoller3, Ana Sofia Guerreiro Stücklin4, Nicolas U. Gerber4, Andreas Rauschecker5, Javad Nazarian4, Sabine Mueller5, Catherine Jutzeler1, and Sarah Brueningk1
1ETH Zurich, Zurich, Switzerland, 2Yale University, New Haven, CT, United States, 3École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 4Kinderspital Zurich, Zurich, Switzerland, 5University of California San Francisco, San Francisco, CA, United States

Keywords: AI Diffusion Models, Machine Learning/Artificial Intelligence, Oncology, Cancer, DMG, Diffuse Midline Glioma

Motivation: Pediatric diffuse midline gliomas are associated with a poor prognosis, leaving radiotherapy as the standard of palliative care. Personalized radiation regimes could maximize the benefit for the patient and consequently improve clinical outcomes.

Goal(s): This study explores a state-of-the-art computer vision method to predict the anatomical growth of tumors which could inform tailored radiotherapy treatments.

Approach: A denoising diffusion implicit model is employed to generate realistic, high-quality magnetic resonance imaging scans of enlarged tumor sizes starting from a baseline image.
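The deterministic sampling update at the core of a denoising diffusion implicit model (eta = 0) can be written in a few lines. This is the generic DDIM step, not the authors' tumor-conditioned model:

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_t, alpha_prev):
    """One deterministic DDIM update (eta = 0): estimate the clean image
    x0 from the predicted noise, then re-noise it to the previous
    (smaller) noise level. alpha_* are cumulative alpha-bar values."""
    x0_pred = (x_t - np.sqrt(1.0 - alpha_t) * eps_pred) / np.sqrt(alpha_t)
    return np.sqrt(alpha_prev) * x0_pred + np.sqrt(1.0 - alpha_prev) * eps_pred
```

Iterating this step from noise, with a network supplying `eps_pred` conditioned on the baseline scan, produces the synthetic images of enlarged tumors described above.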

Results: Our proof-of-concept study demonstrates promising results on an external longitudinal pediatric dataset, highlighting the method’s potential to realistically predict visual tumor growth. 

Impact:  We demonstrate realistic predictions of anatomical (pediatric) brain tumor growth using a generative denoising diffusion implicit model. This enables personalized predictions of tumor growth trajectories to guide localized therapies such as geometric dose shaping for radiotherapy delivery.

17:36 | 1401.
MRI-based Biological Age Estimation for Multiple Organs in the UK Biobank Cohort
Veronika Ecker1,2, Marcel Früh1, Bin Yang2, Sergios Gatidis1, and Thomas Küstner1
1University Hospital of Tübingen, Tübingen, Germany, 2University of Stuttgart, Stuttgart, Germany

Keywords: Analysis/Processing, Aging, Biological Age

Motivation: MRI is a valuable tool for providing health-related information, including visualizing age-associated changes. Aging is influenced by chronic diseases, and assessing the true organ-specific biological age is essential for accurate diagnosis.

Goal(s): Development of organ-specific age estimation for investigation in a large imaging cohort with associated non-imaging information.

Approach: While prior studies focused on age estimation from non-imaging data or single organs, we propose a multi-organ age estimation framework, operating on brain, cardiac, and abdominal MRIs, and OCT scans.

Results: Our results demonstrate the feasibility of imaging-based organ age estimation and motivate further investigations to identify risk factors for accelerated aging.

Impact: Reliable imaging-based estimation of biological age in multiple organ systems facilitates research efforts to identify risk factors of accelerated aging, advancing the goal of age-related phenotyping.

17:48 | 1402.
Learning to synthesize MR contrasts using a self-supervised constrained contrastive learning approach
Lavanya Umapathy1,2, Li Feng1,2, and Daniel K Sodickson1,2
1Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States, 2Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Although deep learning frameworks are widely used across the MR imaging pipeline, the effect of learning tissue-specific information from MR images on model performance remains to be understood.

Goal(s): We demonstrate the utility of a self-supervised contrastive learning framework that uses multi-contrast information to improve synthesis of T1w and T2w images.

Approach: A deep learning model is pretrained to learn T1 and T2 information from a set of multi-parametric MR images.

Results: A contrast synthesis framework was developed using few examples of contrast mapping. Embedding relevant contrast information during pretraining yielded synthesized images with improved MSE, SSIM, and PSNR.
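Two of the reported image-quality metrics can be computed as follows (SSIM is omitted for brevity; the `data_range` normalization and helper name are assumptions for illustration):

```python
import numpy as np

def mse_psnr(ref, test, data_range=1.0):
    """Mean squared error and peak signal-to-noise ratio between a
    reference and a synthesized image, assuming intensities are
    normalized to [0, data_range]."""
    err = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    psnr = float("inf") if err == 0 else 10.0 * np.log10(data_range ** 2 / err)
    return err, psnr
```

Lower MSE and higher PSNR against the acquired target contrast indicate better synthesis, which is the direction of improvement reported here.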

Impact: Multi-contrast information can be leveraged by self-supervised deep learning models to understand underlying tissue characteristics and synthesize new MR contrast-weighted images. This demonstrates the wider applicability of embedding tissue-specific information in improving different aspects of the MR imaging pipeline.