ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
   
Cutting-Edge MRI with Diffusion Probabilistic Modeling
Oral
AI & Machine Learning
Tuesday, 07 May 2024
Hall 606
08:15 - 10:15
Moderators: Jonathan Tamir & Dong Liang
Session Number: O-55
CME Credit

08:15 Introduction
Jonathan Tamir
The University of Texas at Austin, United States
08:27 0377.
Variational diffusion models for blind MRI inverse problems
Julio A. Oscanoa1, Cagan Alkan2, Daniel Abraham2, Mengze Gao3, Aizada Nurdinova3, Daniel Ennis3, Kawin Setsompop3, John Pauly2, Morteza Mardani4, and Shreyas Vasanawala3
1Department of Bioengineering, Stanford University, Stanford, CA, United States, 2Department of Electrical Engineering, Stanford University, Stanford, CA, United States, 3Department of Radiology, Stanford University, Stanford, CA, United States, 4NVIDIA Inc., Santa Clara, CA, United States

Keywords: AI Diffusion Models, Machine Learning/Artificial Intelligence, Diffusion models

Motivation: Diffusion models have shown state-of-the-art performance in solving inverse problems. However, current solutions typically consider only cases in which the forward operator is fully known, which limits their applicability to the wide variety of MRI inverse problems.

Goal(s): Develop a general method for blind MRI inverse problems with unknown forward operator parameters.

Approach: We extend the RED-diff framework, which has the key strength of not requiring training or fine-tuning for each specific task. We test our method for image reconstruction with off-resonance and motion correction.

Results: Our blind RED-diff framework can successfully approximate the unknown forward model parameters and produce accurate reconstructions.

Impact: We demonstrate the potential of current diffusion models to readily tackle a wide range of blind inverse problems in MRI without application-specific re-training or fine-tuning. Image reconstruction with motion and off-resonance correction are the first demonstration applications.
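
Below is a minimal, hypothetical sketch (in PyTorch, not the authors' code) of the general idea: a RED-diff-style objective in which the image and an unknown forward-model parameter are optimized jointly. The placeholder denoiser stands in for a pretrained diffusion prior, and the toy scalar parameter stands in for motion or off-resonance unknowns.

```python
# Illustrative sketch only: joint optimization of image x and forward-model
# parameter theta with a RED-style regularizer. Denoiser and forward model
# are hypothetical placeholders, not the abstract's actual networks.
import torch

def denoiser(x, sigma):
    # Placeholder for a pretrained diffusion denoiser D(x; sigma).
    return x - sigma * torch.tanh(x)

def forward_model(x, theta):
    # Placeholder parameterized operator A_theta (e.g., motion/off-resonance).
    return theta * x

y = torch.randn(64, 64)                        # toy "measured" data
x = torch.zeros(64, 64, requires_grad=True)    # image estimate
theta = torch.tensor(1.0, requires_grad=True)  # unknown forward-model parameter
opt = torch.optim.Adam([x, theta], lr=1e-2)

for sigma in torch.linspace(1.0, 0.01, 200):   # annealed noise levels
    opt.zero_grad()
    data_fit = ((forward_model(x, theta) - y) ** 2).mean()
    with torch.no_grad():                      # denoiser output is stop-gradient
        x_denoised = denoiser(x, sigma)
    reg = ((x - x_denoised) ** 2).mean()       # pull x toward its denoised version
    (data_fit + 0.1 * reg).backward()
    opt.step()
```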

08:39 0378.
Accelerated MRI Reconstruction with Fourier-Constrained Diffusion Schrödinger Bridges
Muhammad Usama Mirza1,2, Onat Dalmaz1,2, Hasan Atakan Bedel1,2, Gokberk Elmas1,2, Alper Gungor3, and Tolga Cukur1,2
1Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey, 2National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey, 3ASELSAN, Ankara, Turkey

Keywords: AI/ML Image Reconstruction, Machine Learning/Artificial Intelligence, image reconstruction; diffusion models

Motivation: Diffusion probabilistic methods synthesize realistic images via a denoising transformation that maps Gaussian noise onto MRI data, but this normality assumption can yield suboptimal performance in accelerated MRI reconstruction tasks.

Goal(s): Our goal was to devise a new diffusion-based method that generates high-quality images by capturing a task-relevant transformation for accelerated MRI.

Approach: We introduced a novel reconstruction method based on a Fourier-constrained diffusion Schrödinger bridge (FDB) that learns to directly transform between undersampled and fully-sampled MRI data via a multi-step process.

Results: Higher reconstruction performance was obtained with FDB over the previous state-of-the-art at up to 8-fold acceleration.

Impact: The improvement in image quality and acquisition speed in accelerated MRI enabled through FDB may facilitate comprehensive MRI exams in many applications, particularly in assessments of pediatric and elderly individuals in need of fast exams due to limited motor control.
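
As a rough illustration of the bridge idea (not the authors' FDB implementation), the sketch below starts the multi-step refinement from the zero-filled undersampled image rather than pure Gaussian noise and enforces consistency with the acquired k-space samples at every step; the refiner network, sampling mask, and schedule are toy placeholders.

```python
# Illustrative sketch only: few-step transport from undersampled to
# fully-sampled images with Fourier-domain data consistency.
import torch

def refiner(x, t):
    # Placeholder for the learned bridge network at bridge time t.
    return x

def data_consistency(x, y_kspace, mask):
    # Re-insert the k-space locations that were actually acquired.
    k = torch.fft.fft2(x)
    k = torch.where(mask, y_kspace, k)
    return torch.fft.ifft2(k).real

mask = torch.rand(64, 64) < 0.25                       # toy 4x undersampling mask
full_k = torch.fft.fft2(torch.randn(64, 64))           # toy fully-sampled k-space
y_kspace = torch.where(mask, full_k, torch.zeros_like(full_k))
x = torch.fft.ifft2(y_kspace).real                     # zero-filled starting point

for t in torch.linspace(1.0, 0.0, 10):                 # few-step bridge traversal
    x = refiner(x, t)                                  # learned transport step
    x = data_consistency(x, y_kspace, mask)            # Fourier-constrained projection
```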

08:51 0379.
Using a Video Diffusion Model-prior for reconstructing undersampled dynamic MR-data – An application to real-time cardiac MRI
Oliver Schad1, Julius Frederik Heidenreich1, Nils-Christian Petri2, Bernhard Petritsch1, and Tobias Wech1,3
1Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany, 2Department of Internal Medicine 1, University Hospital Würzburg, Würzburg, Germany, 3Comprehensive Heart Failure Center, University Hospital Würzburg, Würzburg, Germany

Keywords: AI Diffusion Models, Machine Learning/Artificial Intelligence

Motivation: MR-based “real-time” imaging of dynamic processes, such as the beating heart, often depends on fast (undersampled) scans, which are subsequently reconstructed by algorithms exploiting prior knowledge. Spatio-temporal models that describe the data in a suboptimal manner can thereby lead to residual artifacts.

Goal(s): A high-quality model to regularize the reconstruction of real-time cardiac MRI based on undersampled spiral data acquisitions.

Approach: A video diffusion model was trained on magnitude-reconstructed cine videos and subsequently applied as a prior in a plug-and-play FISTA approach.

Results: Reconstructions of undersampled real-time frames showed higher image quality than a low-rank plus sparse approach.

Impact: We show the potential of probabilistic video diffusion models as a promising prior in iterative reconstructions of undersampled dynamic MR data. In our example, the approach enabled high quality real-time cardiac functional MRI in patients with arrhythmia.  
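
The sketch below illustrates a generic plug-and-play FISTA loop in which the proximal step is replaced by a learned denoiser, here a placeholder for the trained video diffusion prior. A Cartesian toy forward model is used for simplicity, whereas the abstract uses undersampled spiral acquisitions.

```python
# Illustrative sketch only: plug-and-play FISTA with a learned denoiser prior
# applied to a toy dynamic (time x space) dataset.
import torch

def diffusion_denoiser(x):
    # Placeholder for the video diffusion prior applied to the dynamic series.
    return x

mask = torch.rand(8, 64, 64) < 0.3                     # toy per-frame sampling mask
y = torch.where(mask, torch.fft.fft2(torch.randn(8, 64, 64)),
                torch.zeros(8, 64, 64, dtype=torch.complex64))

def grad_data(x):
    # Gradient of 0.5 * || M F x - y ||^2 for the toy Cartesian model.
    resid = torch.where(mask, torch.fft.fft2(x) - y, torch.zeros_like(y))
    return torch.fft.ifft2(resid).real

x_prev = torch.fft.ifft2(y).real                       # zero-filled initialization
z, t_prev, step = x_prev.clone(), 1.0, 1.0

for _ in range(20):                                    # PnP-FISTA iterations
    x = diffusion_denoiser(z - step * grad_data(z))    # gradient step + prior
    t = (1 + (1 + 4 * t_prev ** 2) ** 0.5) / 2         # FISTA momentum update
    z = x + ((t_prev - 1) / t) * (x - x_prev)
    x_prev, t_prev = x, t
```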

09:03 0380.
qDiMo: Domain-conditioned Diffusion Modeling for Accelerated qMRI Reconstruction
Wanyu Bian1,2, Albert Jang1,2, and Fang Liu1,2
1Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States, 2Harvard Medical School, Boston, MA, United States

Keywords: AI Diffusion Models, Machine Learning/Artificial Intelligence, Rapid MRI, Quantitative MRI, knee, brain

Motivation: Quantitative MRI (qMRI) is time-consuming, and substantial acceleration is required to cut down the acquisition time.

Goal(s): This paper proposes a novel generative AI approach for image reconstruction based on diffusion modeling conditioned on the native data domain. 

Approach: Our method is applied to multi-coil quantitative MRI reconstruction, leveraging the domain-conditioned diffusion model within the tissue parameter domain.

Results: The proposed method demonstrates significant promise for reconstructing quantitative maps at high acceleration factors. Notably, it maintains excellent reconstruction accuracy and efficiency for MR parameter maps across diverse anatomical structures.

Impact: This work demonstrates the feasibility of a new generative AI method for rapid qMRI. Beyond its immediate applications, this method provides potential generalization capability, making it adaptable to inverse problems across various domains.
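
For illustration only, the following sketch shows conditional diffusion sampling in which a toy score network receives the conditioning data as extra input channels while the sample itself is drawn in the parameter-map domain; the network, noise schedule, and shapes are hypothetical and do not reproduce qDiMo.

```python
# Illustrative sketch only: DDPM-style reverse sampling conditioned on
# acquisition-domain data (toy network and schedule).
import torch

class CondScoreNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # 1 channel of parameter map + 1 channel of conditioning data.
        self.net = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)

    def forward(self, x, cond, t):
        # Timestep t is ignored in this toy network.
        return self.net(torch.cat([x, cond], dim=1))

model = CondScoreNet()
cond = torch.randn(1, 1, 64, 64)                 # conditioning: acquired data
x = torch.randn(1, 1, 64, 64)                    # start from Gaussian noise

betas = torch.linspace(1e-4, 0.02, 50)
alpha_bars = torch.cumprod(1 - betas, dim=0)

with torch.no_grad():
    for i in reversed(range(len(betas))):        # DDPM-style reverse steps
        eps = model(x, cond, i)                  # predicted noise, conditioned
        x = (x - betas[i] / (1 - alpha_bars[i]).sqrt() * eps) / (1 - betas[i]).sqrt()
        if i > 0:
            x = x + betas[i].sqrt() * torch.randn_like(x)
```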

09:15 0381.
Spatiotemporal Diffusion Model with Paired Sampling for Accelerated Cardiac Cine MRI
Shihan Qiu1,2,3, Shaoyan Pan1,4,5, Yikang Liu1, Lin Zhao1, Jian Xu6, Qi Liu6, Terrence Chen1, Eric Z. Chen1, Xiao Chen1, and Shanhui Sun1
1United Imaging Intelligence, Burlington, MA, United States, 2Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States, 3Department of Bioengineering, UCLA, Los Angeles, CA, United States, 4Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States, 5Department of Biomedical Informatics, Emory University, Atlanta, GA, United States, 6UIH America, Inc., Houston, TX, United States

Keywords: AI Diffusion Models, Image Reconstruction, Heart

Motivation: Current deep learning reconstruction for accelerated cardiac cine MRI suffers from spatial and temporal blurring.

Goal(s): To improve image sharpness and motion delineation for cine MRI under high undersampling rates.

Approach: A model combining non-generative reconstruction with diffusion-based enhancement was developed, together with a novel paired sampling strategy.

Results: In experts’ evaluation on clinical data, the proposed combined method provided sharper tissue boundaries and clearer motion than the original reconstruction. The innovative paired sampling strategy substantially reduced artificial noise in the generative results.

Impact: The approach has the potential to improve reconstruction quality in highly accelerated cardiac cine imaging. The novel paired sampling for diffusion generation may be applied to other conditional tasks to reduce the artificial noise stemming from noisy training data.

09:27 0382.
Explaining Deep fMRI Classifiers with Diffusion-Driven Counterfactual Generation
Hasan Atakan Bedel1,2 and Tolga Çukur1,2,3
1Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey, 2National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey, 3Neuroscience Program, Bilkent University, Ankara, Turkey

Keywords: AI Diffusion Models, Machine Learning/Artificial Intelligence, fMRI, xAI, diffusion, transformers

Motivation: Deep-learning classifiers for functional MRI (fMRI) offer state-of-the-art performance in detection of cognitive states from BOLD responses, but their black-box nature hinders interpretation of results.

Goal(s): Our goal was to devise a reliable method to infer the important BOLD-response attributes that drive the decisions of deep fMRI classifiers.

Approach: We introduced a novel counterfactual explanation method (DreaMR) based on a new fractional, distilled diffusion prior for efficient generation of high-fidelity counterfactual samples.

Results: DreaMR generated more specific and plausible explanations of deep fMRI classifiers trained for resting-state and task-based fMRI analysis than previous state-of-the-art explanation methods.

Impact: The improvement in sensitivity, plausibility and efficiency in explanation of deep classifiers through DreaMR may facilitate adoption of AI-based analyses in fMRI studies, thereby benefiting assessment of cognitive processes in both normal and neurological disease states.
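
As a simplified illustration of counterfactual explanation (not DreaMR itself), the sketch below perturbs a toy BOLD feature vector until a classifier flips its decision while keeping the edit small; DreaMR instead relies on a fractional, distilled diffusion prior to keep counterfactuals on the data manifold.

```python
# Illustrative sketch only: counterfactual search by input optimization
# against a toy fMRI classifier.
import torch

classifier = torch.nn.Sequential(torch.nn.Linear(128, 2))   # toy (untrained) classifier
x_orig = torch.randn(1, 128)                                 # toy BOLD feature vector
target_class = 1                                             # desired flipped label

delta = torch.zeros_like(x_orig, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(300):
    opt.zero_grad()
    logits = classifier(x_orig + delta)
    # Push toward the target class while penalizing large edits.
    loss = torch.nn.functional.cross_entropy(
        logits, torch.tensor([target_class])) + 0.1 * delta.abs().mean()
    loss.backward()
    opt.step()

counterfactual = x_orig + delta.detach()                     # explanation candidate
```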

09:39 0383.
Decoding Visual Information from fMRI Data: A Multimodal Approach to Image and Caption Reconstruction
Matteo Ferrante1, Tommaso Boccato2, Furkan Ozcelik3, Rufin VanRullen4, and Nicola Toschi2
1Biomedicine and prevention, University of Rome Tor Vergata, Rome, Italy, 2University of Rome Tor Vergata, Rome, Italy, 3CerCo, University of Toulouse III Paul Sabatier, Toulouse, France, 4CNRS, CerCo, ANITI, TMBI, Univ. Toulouse, Toulouse, France

Keywords: AI Diffusion Models, fMRI, brain decoding, fMRI

Motivation: The study addresses the challenge of decoding and reconstructing visual experiences from fMRI data, an area yet to be mastered in neuroscience.

Goal(s): We propose a methodology that deciphers brain activity patterns and renders these into visual and textual representations.

Approach: We trained a linear model to map brain activity to image latent representations. This informed a generative image-to-text transformer and a visual attribute-focused regression model, culminating in the creation of photorealistic images using a text-to-image diffusion model.

Results: The model effectively combined high-level semantic understanding and low-level visual details, producing plausible image reconstructions from fMRI data.

Impact: Our findings enhance our understanding of visual processing in the brain, with significant implications for integrating artificial intelligence (AI) with neuroscience.
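
The sketch below illustrates only the first stage described above, a linear (ridge) mapping from fMRI activity to image latent representations, on purely synthetic data; the predicted latents would then condition the downstream image-to-text and text-to-image generative models.

```python
# Illustrative sketch only: ridge regression from toy fMRI patterns to toy
# image latents (stand-ins for, e.g., VAE or CLIP latents).
import numpy as np
from sklearn.linear_model import Ridge

n_trials, n_voxels, latent_dim = 800, 5000, 512
fmri = np.random.randn(n_trials, n_voxels)          # toy BOLD activity patterns
latents = np.random.randn(n_trials, latent_dim)     # toy image latent vectors

# Fit a regularized linear map from brain activity to the latent space.
decoder = Ridge(alpha=1e3)
decoder.fit(fmri[:700], latents[:700])

# Predicted latents for held-out trials would feed the generative models.
predicted_latents = decoder.predict(fmri[700:])
print(predicted_latents.shape)                      # (100, 512)
```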

09:51 0384.
CMRDiff: Multi-sequence CMR synthesis
Puguang Xie1, Zhongsen Li2, Yu Ma1, and Jingjing Xiao3
1Chongqing Emergency Medical Centre, Chongqing University Central Hospital, School of Medicine, Chongqing University, Chongqing, China, 2Center for Biomedical Imaging Research, Tsinghua University, Beijing, China, 3Bio-Med Informatics Research Centre & Clinical Research Centre, Xinqiao Hospital, Army Medical University, Chongqing, China

Keywords: AI Diffusion Models, Cardiovascular

Motivation: The synthesis of multi-sequence cardiac magnetic resonance (CMR) images is of great significance for shortening scan durations and expanding the population that can benefit from CMR examination.

Goal(s): To achieve accurate multi-sequence CMR synthesis despite the inherently suboptimal image quality and persistent noise interference.

Approach: We propose CMRDiff, the first diffusion model-based method for multi-sequence CMR synthesis.

Results: We evaluated the proposed CMRDiff on the MICCAI2020 MyoPS Challenge dataset. Our experiments demonstrate that CMRDiff outperforms other state-of-the-art multi-modal MRI synthesis methods.

Impact: We design the first denoising diffusion probabilistic model in the literature for multi-sequence CMR synthesis, which promises to serve as an effective tool for this task.

10:03 0385.
Unified Diffusion model for Multi-contrast Ensembling Synthesis
Yeeun Lee1, Yejee Shin2, Doohyun Park2, Geonhui Son2, Taejoon Eo2,3, and Dosik Hwang2,4,5,6
1School of Artificial Intelligence, Yonsei University, Seoul, Korea, Republic of, 2School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, Republic of, 3PROBE Medical Inc., Seoul, Korea, Republic of, 4Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul, Korea, Republic of, 5Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea, Republic of, 6Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea, Republic of

Keywords: Acquisition Methods, Brain

Motivation: Scanning for multi-contrast MR images is time-consuming. To reduce scan time, it is beneficial to explore methods for efficiently synthesizing target contrast images from existing contrast scans.

Goal(s): To address the stability issues encountered when dealing with multi-contrast MR image domains individually, we propose a methodology for effectively synthesizing images while incorporating multi-contrast domains.

Approach: Our model is a novel unified diffusion model (UDM) that improves the synthesis of detailed anatomical structures in target contrast images through an ensemble method.

Results: UDM demonstrates effectiveness across multiple domains, outperforming existing methodologies in synthesizing images for each contrast domain.

Impact: By reducing scan times and costs for multi-contrast imaging, UDM facilitates prognosis prediction and treatment planning. This method is not only usable for image synthesis but also extendable to various applications such as reconstruction.
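
As a toy illustration of the ensembling idea (not the UDM formulation), the sketch below synthesizes the target contrast from each available source contrast with a placeholder generator and combines the candidates by simple averaging.

```python
# Illustrative sketch only: ensemble of per-source synthesized candidates.
import torch

def synthesize_target(source_image, source_label):
    # Placeholder for a unified synthesis model conditioned on the source
    # contrast label; the real model and conditioning are not reproduced here.
    return source_image

sources = {"T1": torch.randn(64, 64), "T2": torch.randn(64, 64),
           "FLAIR": torch.randn(64, 64)}
candidates = [synthesize_target(img, name) for name, img in sources.items()]
target_estimate = torch.stack(candidates).mean(dim=0)   # simple ensemble average
```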