ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
   
The Future of AI in MRI: Emerging Technologies & Directions
Oral
AI & Machine Learning
Tuesday, 07 May 2024
Nicoll 3
15:45 - 17:45
Moderators: Akshay Chaudhari & Haifeng Wang
Session Number: O-59
CME Credit

15:45 | 0650.
Foundation Model based labelling of MR Shoulder images to drive Auto-Localizer workflow
Gurunath Reddy M1, Muhan Shao2, Deepa Anand1, Kavitha Manickam3, Dawei Gui3, Chitresh Bhushan2, and Dattesh Shanbhag1
1GE HealthCare, Bangalore, India, 2GE HealthCare, Niskayuna, NY, United States, 3GE HealthCare, Waukesha, WI, United States

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, One-shot, Shoulder, Foundation Models, Localization, Segmentation

Motivation: Develop automatic labelling capability for anatomical shoulder MRI images with minimal manual annotation.

Goal(s): Leverage large-FOV, low-resolution coil sensitivity maps to guide correct positioning of the three-plane localizer for shoulder MRI planning.

Approach: Use chained DINO-V2 and SAM foundation models, tuned to MRI localizers, together with a data-driven similarity measure to label shoulder data at scale and transfer the labels to low-resolution coil sensitivity maps for CNN model training.

Results: Excellent shoulder region localization with the foundation-model pipeline on anatomical images (91% accuracy) and with the CNN model on calibration data (error < 15 mm).

Impact: A data-adaptive, chained foundation-model-based approach for annotating shoulder regions on anatomical MRI images at scale is shown. This enabled rapid development of a model using low-resolution calibration data for correctly positioning the three-plane localizer for shoulder anatomical planning and imaging.
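The one-shot label-transfer idea behind this abstract can be illustrated with a minimal sketch: given a foundation-model embedding of one labelled template patch, the most similar candidate patch inherits the label. The feature vectors below are random stand-ins for DINO-V2 patch embeddings, and `transfer_label` is a hypothetical helper, not the authors' pipeline.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def transfer_label(template_feat, candidate_feats):
    """Pick the candidate patch whose feature is most similar to the
    labelled template patch (a data-driven similarity measure)."""
    sims = [cosine_sim(template_feat, f) for f in candidate_feats]
    return int(np.argmax(sims)), max(sims)

# Toy features standing in for foundation-model patch embeddings.
rng = np.random.default_rng(0)
template = rng.normal(size=64)
candidates = [rng.normal(size=64) for _ in range(5)]
candidates[3] = template + 0.05 * rng.normal(size=64)  # near-duplicate patch

idx, score = transfer_label(template, candidates)
print(idx)  # index of the best-matching candidate patch
```

In the abstract's setting, the winning matches would then serve as labels for training the downstream CNN on coil sensitivity maps.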

15:57 | 0651.
Towards a Generalizable Foundation Model for Multi-Tissue Musculoskeletal MRI Segmentation
Gabrielle Hoyer1,2, Michelle Tong*1,2, Sharmila Majumdar1, and Valentina Pedoia1
1Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States, 2Department of Bioengineering, University of California, Berkeley, Berkeley, CA, United States

Keywords: Analysis/Processing, Bone

Motivation: To evaluate the potential of foundation models for medical imaging analysis.

Goal(s): To understand the limitations of foundation models trained on the natural-image domain, and to assess the challenges of translating them to complex musculoskeletal anatomy in a rich medical imaging domain.

Approach: A diverse collection of musculoskeletal MRI data was used to assess the generalizability of SAM when applied to a variety of segmentation tasks common to the medical research and clinical setting.

Results: SAM performed reasonably well in zero-shot evaluation on medical data. Its performance when fine-tuned on a spectrum of data remains limited and requires additional evaluation.

Impact: A foundation model for generalizable musculoskeletal MRI segmentation, such as one fine-tuned from the Segment Anything Model (SAM), has the potential to overcome generalizability challenges and see widespread use beyond a specific task, reducing burden in medical imaging pipelines.

16:09 | 0652.
Distribution Matching Based Personalized Federated Learning for Multi-Contrast Liver MRI Synthesis and Registration
Rencheng Zheng1, Hang Yu2, Ruokun Li3, Qidong Wang4, Caizhong Chen5, Fei Dai1, Boyu Zhang1, Ying-Hua Chu6, Weibo Chen7, Chengyan Wang8, and He Wang1
1Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China, 2Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China, 3Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China, 4Department of Radiology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Shanghai, China, 5Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China, 6Siemens Healthineers, Shanghai, China, 7Philips Healthcare, Shanghai, China, 8Human Phenome Institute, Fudan University, Shanghai, China

Keywords: AI/ML Image Reconstruction, Liver

Motivation: The combined diagnosis of diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE)-MRI is of significant importance for liver diseases, but accurate registration between these two modalities remains a substantial challenge.

Goal(s): Our goal was to design a deep learning model for accurate registration between DWI and DCE-MRI, and to conduct multicenter studies based on federated learning.

Approach: We proposed a multi-task synthesis-registration network (SynReg) and a personalized decentralized distribution matching federated framework (PDMa) based on SynReg.

Results: The proposed SynReg and PDMa methods increased registration accuracy in most centers, in both the liver region and the liver tumor region.

Impact: Accurate and rapid registration of DWI and DCE can effectively assist clinicians in leveraging multimodal imaging for efficient diagnosis. Personalized federated learning can effectively aid single-center with limited data to leverage the abundant data from multiple centers for model development.

16:21 | 0653.
Peer-to-Peer Generative Learning for Architecture-Agnostic Federated MRI Reconstruction
Valiyeh Ansarian Nezhad1,2, Gökberk Elmas1,2, and Tolga Çukur1,2,3
1Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey, 2National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey, 3Neuroscience Program, Bilkent University, Ankara, Turkey

Keywords: AI/ML Image Reconstruction, Machine Learning/Artificial Intelligence, Federated learning, multi-institutional, collaborative learning, image reconstruction

Motivation: Federated learning (FL) enables privacy-preserving training of deep reconstruction models across multiple sites to improve generalization at the expense of lower within-site performance. Yet, existing methods require a common model architecture across sites, limiting flexibility.

Goal(s): Our goal was to devise an architecture-agnostic method for collaborative training of heterogeneous models across sites.

Approach: We introduced a novel peer-to-peer generative learning method (PGL-FedMR), where individual sites share a generative prior for their MRI data with remaining sites, and prior-driven synthetic data are used to train reconstruction models at each site.

Results: PGL-FedMR improves across-site generalization over local models, and within-site performance over conventional FL.

Impact: Improvements in within-site and across-site performance for MRI reconstruction through PGL-FedMR, coupled with the ability to handle heterogeneous architectures, may facilitate privacy-preserving multi-institutional collaborations to build reliable reconstruction models for many applications where data are scarce including rare diseases.
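The peer-to-peer idea can be illustrated with a toy simulation: each site fits a simple generative prior (here just a Gaussian) to its local data and shares only the prior's parameters; peers then draw synthetic samples to augment local training. This is a hypothetical simplification for intuition, not the PGL-FedMR method itself, and `synthetic_pool` is an illustrative helper.

```python
import numpy as np

# Each site's private data (never shared directly).
rng = np.random.default_rng(3)
site_data = {"A": rng.normal(0.0, 1.0, 500), "B": rng.normal(5.0, 2.0, 500)}

# Only the fitted prior parameters (mean, std) are exchanged between peers.
priors = {site: (d.mean(), d.std()) for site, d in site_data.items()}

def synthetic_pool(priors, n_per_site=500, seed=4):
    """Draw prior-driven synthetic samples from every peer's shared prior,
    which each site can then use to train its own local model."""
    rng = np.random.default_rng(seed)
    return np.concatenate(
        [rng.normal(mu, sd, n_per_site) for mu, sd in priors.values()])

pool = synthetic_pool(priors)
print(pool.shape)  # one pooled synthetic dataset per site: (1000,)
```

Because only generator parameters cross site boundaries, each site is free to train a reconstruction model of any architecture on the pooled synthetic data.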

16:33 | 0654.
Imaging transformer for MRI denoising with SNR unit training: enabling generalization across field-strengths, imaging contrasts, and anatomy
Hui Xue1, Sarah Hooper1, Azaan Rehman1, Iain Pierce2, Thomas Treibel2, Rhodri Davies2, W Patricia Bandettini1, Rajiv Ramasawmy1, Ahsan Javed1, Yang Yang3, James Moon2, Adrienne Campbell-Washburn1, and Peter Kellman1
1National Heart, Lung, and Blood Institute, Bethesda, MD, United States, 2Barts Heart Centre at St. Bartholomew's Hospital, London, United Kingdom, 3University of California, San Francisco, San Francisco, CA, United States

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, imaging transformer, generalization

Motivation: MR denoising using CNN models often requires abundant high-quality data for training. In many applications, such as higher acceleration and low-field imaging, high-quality data are not available. This study overcomes this limitation by developing an SNR-unit-based training scheme and a novel imaging transformer (imformer) architecture.

Goal(s): To develop and validate a novel imformer model for MR denoising, enabling generalization across field-strengths, imaging contrasts, and anatomy.

Approach: An SNR unit training scheme combined with an imaging transformer architecture.

Results: Imformer models outperformed CNNs and conventional transformers. The SNR unit training enables strong generalization.

Impact: Recovers high-fidelity MR signal from very low SNR inputs and enables model training for 0.55T MRI.
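The SNR-unit idea can be sketched in a few lines: assuming the noise standard deviation is available (e.g., from the reconstruction), dividing the image by it expresses pixel values in SNR units, so inputs from different field strengths and contrasts land on a common scale for the denoiser. The helpers below are a hypothetical illustration of that preprocessing, not the authors' pipeline.

```python
import numpy as np

def to_snr_units(image, noise_std):
    """Scale an image so pixel values are expressed in SNR units:
    the noise component then has unit standard deviation regardless
    of field strength, contrast, or acceleration."""
    return image / noise_std

def from_snr_units(image_snr, noise_std):
    """Undo the scaling after denoising to restore physical units."""
    return image_snr * noise_std

rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 100.0, size=(8, 8))
noise_std = 5.0
noisy = clean + rng.normal(0.0, noise_std, size=clean.shape)

snr_input = to_snr_units(noisy, noise_std)   # what the model would see
restored = from_snr_units(snr_input, noise_std)
print(np.allclose(restored, noisy))  # True: lossless round trip
```

Training on such normalized inputs is what would let a single model generalize across acquisitions whose raw intensity scales differ widely.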

16:45 | 0655.
Accelerating DT-CMR with Deep Learning-based Tensor De-noising and Breath Hold Reduction
Michael Tanzer1,2, Andrew Scott1,2, Zohya Khalique1,2, Maria Dwornik1,2, Ramyah Rajakulasingam1,2, Ranil De Silva1,2, Dudley Pennell1,2, Pedro Ferreira1,2, Guang Yang1,2, Daniel Rueckert1,3, and Sonia Nielles-Vallespin1,2
1Imperial College London, London, United Kingdom, 2Royal Brompton Hospital, London, United Kingdom, 3Technische Universität München, Munich, Germany

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: DT-CMR can revolutionise diagnosis and treatment of heart conditions by non-invasively imaging cardiomyocyte microstructure, but currently long acquisition times prevent clinical use.

Goal(s): Reduce the number of breath-holds required for in-vivo DT-CMR acquisitions, resulting in significantly reduced scan times with minimal image quality loss.

Approach: We developed a deep learning model based on Generative Adversarial Networks, Vision Transformers, and Ensemble Learning to de-noise diffusion tensors computed from reduced-repetition DT-CMR data. We compared model performance to conventional linear fitting methods and a baseline deep learning approach.

Results: Our model reduced noise by over 20% compared to previous state-of-the-art approaches while retaining known clinically relevant myocardial properties.

Impact: This breakthrough in DT-CMR acquisition efficiency could enable rapid microstructural phenotyping of the myocardium in the clinic for the first time, revolutionising personalised diagnosis and treatment by unlocking DT-CMR’s ability to non-invasively characterise heart muscle organisation at the cellular level.

16:57 | 0656.
Reconstruction-free segmentation from undersampled k-space using transformers
Yundi Zhang1,2, Nil Stolt-Ansó1,3, Jiazhen Pan1,2, Wenqi Huang1,2, Kerstin Hammernik1,4, and Daniel Rueckert1,2,3,4
1School of Computation, Information and Technology, Technical University of Munich, Munich, Germany, 2School of Medicine, Klinikum rechts der Isar, Munich, Germany, 3Munich Center for Machine Learning, Technical University of Munich, Munich, Germany, 4Department of Computing, Imperial College London, London, United Kingdom

Keywords: AI/ML Image Reconstruction, Segmentation, k-space

Motivation: High acceleration factors place a limit on MRI image reconstruction. This limit propagates to segmentation models when reconstruction and segmentation are treated as independent sequential processes.

Goal(s): Our goal is to produce segmentations directly from sparse k-space measurements without the need for intermediate image reconstruction.

Approach: We employ a transformer architecture to encode global k-space information into latent features. The produced latent vectors condition queried coordinates during decoding to generate segmentation class probabilities.

Results: The model is able to produce better segmentations across high acceleration factors than image-based segmentation baselines.

Impact: Cardiac segmentation directly from undersampled k-space samples circumvents the need for an intermediate image reconstruction step. This opens the potential to assess myocardial structure and function at higher acceleration factors than methods that rely on images as input.
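One way to picture "segmentation without reconstruction" is how undersampled k-space might be presented to a transformer: each retained sample becomes a token of normalized coordinates plus its complex value. The sketch below is a hypothetical input format for intuition, assuming random undersampling; the authors' actual tokenization and architecture may differ.

```python
import numpy as np

def kspace_tokens(image, keep_fraction=0.25, seed=0):
    """Build (ky, kx, Re, Im) tokens from a random subset of k-space
    samples -- a plausible input format for a transformer encoder that
    segments without first reconstructing an image."""
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = k.shape
    rng = np.random.default_rng(seed)
    n_keep = int(keep_fraction * k.size)
    flat = rng.choice(k.size, size=n_keep, replace=False)
    ys, xs = np.unravel_index(flat, k.shape)
    # Normalized coordinates plus real/imag value for each retained sample.
    return np.stack([ys / ny, xs / nx, k[ys, xs].real, k[ys, xs].imag], axis=1)

image = np.zeros((16, 16))
image[4:12, 4:12] = 1.0  # toy "anatomy"
tokens = kspace_tokens(image)
print(tokens.shape)  # (64, 4): 25% of 256 samples, 4 features each
```

In the abstract's framing, a transformer encodes such tokens into latent features that then condition queried image coordinates to produce per-pixel class probabilities.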

17:09 | 0657.
Dual-confidence-guided feature learning for semi-supervised medical image segmentation
Yudan Zhou1, Shuhui Cai1, Congbo Cai1, Liangjie Lin2, and Zhong Chen1
1Xiamen University, Xiamen, China, 2MSC Clinical & Technical Solutions, Philips Healthcare, China

Keywords: Diagnosis/Prediction, Machine Learning/Artificial Intelligence, Data Processing, MRI medical segmentation, Brain

Motivation: Obtaining a large medical image dataset with accurate annotations is challenging, thus limiting the practical application of deep learning in clinical practice.

Goal(s): Developing a novel semi-supervised algorithm for a limited set of labeled images.

Approach: Building a dual-branch network with dual-confidence-guided constraints for tumor feature learning, enabling the model to learn accurate and comprehensive feature representations.

Results: In brain tumor segmentation, this algorithm achieved accurate tumor boundary segmentation using only 1% and 10% of labeled training data, and obtained segmentation results close to fully supervised learning when 20% of the training data was labeled.

Impact: Our dual-confidence-guided semi-supervised feature learning model can achieve accurate brain tumor region segmentation with limited labeled training data, speeding up the application of deep learning technology in clinical research and providing assistance for clinical diagnosis.
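A core ingredient of confidence-guided semi-supervised segmentation can be shown in miniature: pseudo-labels from unlabeled data are kept only where the model's predicted probability clears a confidence threshold. This is a deliberately simplified stand-in for the abstract's dual-confidence-guided constraints, and `pseudo_labels` is an illustrative helper, not the authors' code.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Keep pseudo-labels only where the top class probability exceeds
    a confidence threshold; return labels and a validity mask that
    gates which pixels contribute to the unsupervised loss."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    mask = conf >= threshold
    return labels, mask

# Toy softmax outputs for 4 pixels over 2 classes (e.g. tumor vs. background).
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.08, 0.92],
                  [0.55, 0.45]])
labels, mask = pseudo_labels(probs)
print(labels.tolist(), mask.tolist())  # [0, 0, 1, 0] [True, False, True, False]
```

Gating the loss this way keeps unreliable predictions from reinforcing themselves, which is what lets such methods approach fully supervised performance with a small labeled fraction.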

17:21 | 0658.
Continuous Spatio-Temporal Representation with Implicit Neural Representation and Neural Ordinary Differential Equation in DSC-MRI
Junhyeok Lee1, Kyu Sung Choi2, Jung Hyun Park3, Inpyeong Hwang2, Jin Wook Chung2, and Seung Hong Choi2
1Seoul National University College of Medicine, Seoul, Korea, Republic of, 2Department of Radiology, Seoul National University Hospital, Seoul, Korea, Republic of, 3Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea, Republic of

Keywords: Analysis/Processing, DSC & DCE Perfusion

Motivation: Dynamic Susceptibility Contrast MRI (DSC-MRI) aids in diagnosing cerebrovascular conditions, but simultaneously achieving high spatial and temporal resolutions is challenging, limiting the capture of detailed perfusion dynamics.

Goal(s): To develop a deep learning framework for spatio-temporal super-resolution in DSC-MRI to enhance the capture of perfusion dynamics.

Approach: Our proposed model utilizes a bi-directional Neural ODE, feature extraction, and a local implicit image function to improve DSC-MRI images and address spatial and temporal resolution constraints.

Results: The reconstructed results outperform other methods, with improved NMSE, PSNR, and SSIM metrics, providing visual confirmation of accurate MR signal approximation and perfusion parameter calculation.

Impact: The spatiotemporal super-resolution of DSC-MRI with deep learning allows for more accurate assessment of perfused tissue dynamics and tumor habitat, as well as more freedom in trading off spatial and temporal resolution during MRI acquisition.

17:33 | 0659.
Neural Implicit Quantitative Imaging
Felix Zimmermann1, Simone Hufnagel1, Patrick Schuenke1, Andreas Kofler1, and Christoph Kolbitsch1
1Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: 3D quantitative MRI presents a challenging inverse problem. The application of learned reconstruction methods is hindered by the need for extensive training data and the large size of high-resolution voxel representations of multi-dimensional data. Implicit neural fields have shown promise in cine imaging and slice-to-volume registration.

Goal(s): Explore the use of neural fields for representing 3D high-resolution quantitative parameters in qMRI.

Approach: We integrate motion correction, sensitivity map estimation, and 3D parameter neural fields into an end-to-end, scan-specific optimization without training data.

Results: Demonstration of feasibility in the context of cardiac qMRI and initial results of whole-heart 3D T1 maps.

Impact: Introduction of implicit neural fields into qMRI, allowing for continuous representation of the quantitative parameters in 3D space. Our novel end-to-end reconstruction with motion correction and sensitivity map estimation provides fast, high-resolution, whole-heart T1 maps without relying on training data.
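The defining property of an implicit neural field is that a small network maps a continuous coordinate to a parameter value, so the representation can be queried on or off any voxel grid. The sketch below is a minimal random-weight illustration of that idea (a tiny coordinate MLP with hypothetical dimensions), not the authors' trained model or optimization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny MLP: 3D coordinate -> scalar parameter (e.g. a T1 value).
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def field(xyz):
    """Continuous field: query any (x, y, z), get a parameter value.
    In qMRI the weights would be fit per scan against the measured data."""
    h = np.tanh(xyz @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

# Query a 4x4x4 grid -- but any off-grid coordinate works equally well,
# which is what makes the representation resolution-independent.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 4)] * 3), axis=-1).reshape(-1, 3)
values = field(grid)
print(values.shape)  # (64,)
```

Scan-specific fitting of such a field against the acquired k-space data, jointly with motion and coil sensitivity estimates, is what removes the need for a separate training dataset.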