ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
Self-Supervised AI/ML Techniques
Digital Poster
AI & Machine Learning
Monday, 06 May 2024
Exhibition Hall (Hall 403)
13:45 - 14:45
Session Number: D-162
No CME/CE Credit

1758.
Computer #49: Accurate and efficient co-registration of diffusion and T1-weighted MRI using self-supervised deep learning
Keyu Chen1, Ziyu Li2, Zihan Li3, and Qiyuan Tian3
1School of Biological Science and Medical Engineering, Beihang University, Beijing, China, 2Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom, 3Department of Biomedical Engineering, Tsinghua University, Beijing, China

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, co-registration, distortion correction, voxelmorph

Motivation: Co-registration between diffusion and T1-weighted data is important for various diffusion analyses but is challenging due to the geometric distortion in diffusion images.

Goal(s): To achieve accurate and efficient co-registration between diffusion data and the T1w image.

Approach: A self-supervised deep learning framework, VoxelMorph, was used to non-linearly align the distorted diffusion b=0 image to the T1w image. Our approach was systematically and quantitatively compared with other linear and non-linear transformations, and its benefit for downstream analyses was demonstrated.
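
To make the approach concrete, below is a minimal, illustrative VoxelMorph-style sketch (PyTorch, 2D for brevity): a small CNN predicts a dense displacement field that warps the b=0 image onto the T1w image, trained self-supervised with an image-similarity term plus a smoothness penalty. The network, loss weights, and toy data are assumptions for illustration, not the authors' implementation.

```python
# Illustrative VoxelMorph-style self-supervised registration sketch (PyTorch).
# Assumes 2D slices for brevity; the actual work uses 3D volumes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationNet(nn.Module):
    """Tiny CNN that predicts a dense displacement field from an image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # 2-channel (dx, dy) displacement
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(image, flow):
    """Warp `image` with displacement `flow` via a spatial transformer."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(image.device)  # (2, H, W)
    new = grid.unsqueeze(0) + flow
    new_x = 2 * new[:, 0] / (w - 1) - 1           # normalize to [-1, 1]
    new_y = 2 * new[:, 1] / (h - 1) - 1
    sample_grid = torch.stack([new_x, new_y], dim=-1)
    return F.grid_sample(image, sample_grid, align_corners=True)

def smoothness(flow):
    """L2 penalty on spatial gradients of the displacement field."""
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()

model = RegistrationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
b0, t1w = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)  # placeholder data

for _ in range(100):
    flow = model(b0, t1w)
    warped = warp(b0, flow)
    loss = F.mse_loss(warped, t1w) + 0.01 * smoothness(flow)  # similarity + regularizer
    opt.zero_grad(); loss.backward(); opt.step()
```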

Results: VoxelMorph achieved co-registration accuracy comparable to NiftyReg with a processing time of only seconds, about 40 times faster than NiftyReg, and up to 300 times faster when leveraging transfer learning.

Impact: Our approach achieves fast and accurate co-registration between distorted diffusion data and the T1w image, which has great potential to benefit various diffusion MRI analyses in neuroscientific studies, including region-of-interest quantification and surface-based analysis.

1759.
Computer #50: Self-Supervised Deep-Learning Networks for Mono- and Bi-exponential T1ρ Fitting in the Knee Joint
Dilbag Singh1,2, Ravinder R. Regatte1,2, and Marcelo V. W. Zibetti1,2
1Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States, 2Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States

Keywords: Analysis/Processing, Quantitative Imaging, Quantitative Mapping, T1ρ mapping, Nonlinear Least Squares, Bi-exponential models, Deep Learning

Motivation: The nonlinear least squares (NLS)-based estimation of mono- and bi-exponential T1ρ maps in the knee joint is highly time-consuming. Deep-learning (DL) methods are faster alternatives.

Goal(s): DL, however, requires substantial training data, which is usually obtained by applying NLS to acquired data. This work introduces self-supervised DL models that leverage synthetic target data for training, eliminating the need for scanned or NLS-fitted data as reference.

Approach: We tested five DL models, each using a distinct activation function, and compared them against NLS.
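
A minimal sketch of the kind of training described (illustrative only, not the authors' five models): synthetic mono-exponential T1ρ decays with known parameters are generated on the fly, and a small network is trained to regress (S0, T1ρ) from the noisy signal. The spin-lock times, noise level, and network size below are assumptions.

```python
# Minimal sketch of training a parameter-estimation network on synthetic
# mono-exponential T1rho decays (illustrative; not the authors' exact models).
import torch
import torch.nn as nn

TSL = torch.tensor([2., 4., 6., 8., 10., 15., 25., 35., 45., 55.])  # spin-lock times (ms), assumed

def synthesize(n):
    """Draw random (S0, T1rho) pairs and build noisy mono-exponential signals."""
    s0 = torch.rand(n, 1) * 0.5 + 0.5          # S0 in [0.5, 1.0]
    t1rho = torch.rand(n, 1) * 100.0 + 10.0    # T1rho in [10, 110] ms
    signal = s0 * torch.exp(-TSL / t1rho)
    signal = signal + 0.01 * torch.randn_like(signal)  # additive noise
    return signal, torch.cat([s0, t1rho], dim=1)

model = nn.Sequential(
    nn.Linear(len(TSL), 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),                          # outputs (S0, T1rho)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(1000):
    x, y = synthesize(256)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```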

Results: The proposed models are 25-200x faster than NLS, with errors close to those of NLS.

Impact: This study compared five different self-supervised DL models for estimating mono- and bi-exponential T1ρ maps in the knee joint. These models are faster alternatives to NLS, potentially replacing it to produce reference maps.

1760.
Computer #51: Denoising intrinsic MRI repetitions using self-supervised iterative residual learning
Zihan Li1, Berkin Bilgic2,3, Ziyu Li4, Kui Ying5, David H. Salat2,3, Jonathan R. Polimeni2,3, Hongen Liao1, Susie Huang2,3, and Qiyuan Tian1
1Department of Biomedical Engineering, Tsinghua University, Beijing, China, 2Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States, 3Harvard Medical School, Boston, MA, United States, 4Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom, 5Department of Engineering Physics, Tsinghua University, Beijing, China

Keywords: Analysis/Processing, Data Processing, magnetic resonance imaging, diffusion tensor imaging, self-supervised learning, transfer learning

Motivation: MRI with high resolution and/or high acceleration factors suffers from intrinsically low signal-to-noise ratio (SNR). Supervised learning-based denoising significantly improves image quality but requires high-SNR data as training targets.

Goal(s): To denoise images using noisy image repetitions without additional acquisition.

Approach: Noise2Average trains a CNN to map each noisy image to its residual relative to the average of all noisy images (at iteration 1) or of all denoised images from iteration k-1 (at iteration k). Images from opposite phase-encoding directions of EPI, or from different echo times of ME-MPRAGE, serve as the noisy repetitions.
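
A schematic sketch of the Noise2Average iteration described above (illustrative; the toy CNN, iteration counts, and 2D data are assumptions, not the authors' implementation):

```python
# Schematic Noise2Average iteration: at iteration 1 the target for each noisy
# repetition is the mean of all repetitions; at iteration k it is the mean of
# the denoised images from iteration k-1. The CNN learns the residual.
import torch
import torch.nn as nn

cnn = nn.Sequential(                      # toy residual-predicting CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-4)

reps = torch.rand(4, 1, 64, 64)           # e.g. opposite-PE EPI or ME-MPRAGE echoes
denoised = reps.clone()

for iteration in range(3):                # a few self-supervised iterations
    target = denoised.mean(dim=0, keepdim=True)      # current "average" target
    for _ in range(200):                  # fit residuals to the target
        residual = cnn(reps)
        loss = nn.functional.mse_loss(reps - residual, target.expand_as(reps))
        opt.zero_grad(); loss.backward(); opt.step()
    denoised = (reps - cnn(reps)).detach()           # inputs for the next iteration's average
```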

Results: Noise2Average outperforms BM4D, AONLM and Noise2Noise in terms of image quality and DTI metrics.

Impact: By reducing the requirement for training data and time, Noise2Average substantially increases the feasibility and accessibility of deep learning-based denoising methods for MRI and potentially benefits a wider range of clinical and neuroscientific studies.

1761.
Computer #52: Unsupervised Neural Network for Super-Resolving Non-Contrast-Enhanced Whole-Heart MRI Using REACT
Corbin Maciel1 and Qing Zou1
1Pediatrics, University of Texas Southwestern Medical Center, Dallas, TX, United States

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, Super-resolution, Heart

Motivation: Three-dimensional (3D) whole-heart MRI requires long scan times and the sequence used to acquire such scans is susceptible to banding artifacts.

Goal(s): The goal of this study was to develop an unsupervised super-resolution neural network for 3D whole-heart MRI.

Approach: The data used in this study was acquired using a modified Relaxation-Enhanced Angiography without Contrast and Triggering (REACT) sequence. A neural network referred to hereafter as the Super-resolution Neural Network (SRNN) was developed to super-resolve 3D MRI data.

Results: The SRNN allows us to acquire lower-resolution scans, thus decreasing scan time, and provides improved image quality after performing super-resolution.

Impact: The results of this study show that super-resolution offers a viable option to decrease scan time and improve overall image quality in 3D whole-heart MRI.

1762.
Computer #53: A Contrastive Learning Approach for Unsupervised Anomaly Detection on Contrast-Enhanced Brain MRI Images
Srivathsa Pasumarthi1, Sidharth Kumar2, and Ryan Chamberlain1
1R&D, Subtle Medical Inc, Menlo Park, CA, United States, 2Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, United States

Keywords: Analysis/Processing, Brain, Unsupervised Anomaly Detection

Motivation: Unsupervised anomaly detection (UAD) approaches on T1 contrast-enhanced (T1CE) images are currently not feasible as T1CE images of healthy individuals are not typically available.

Goal(s): In this work, we aim to eliminate the need for large labeled datasets that are required for manual anomaly detection on T1CE images.

Approach: Using deep learning (DL), we synthesized healthy T1CE images from non-contrast images available in public datasets. We also synthesized healthy-anomalous paired images and forced the DL network to learn the healthy reconstruction. The anomalies were localized by subtracting the reconstruction from the input image.
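
The localization step reduces to a simple residual operation; a minimal sketch follows, where `healthy_reconstructor` is a hypothetical stand-in for the trained network:

```python
# Sketch of the anomaly-localization step: a network trained to reconstruct
# only healthy appearance is applied to an input T1CE image, and the residual
# highlights anomalies. `healthy_reconstructor` is a hypothetical callable.
import numpy as np

def anomaly_map(t1ce: np.ndarray, healthy_reconstructor) -> np.ndarray:
    """Return a voxel-wise anomaly score from the healthy reconstruction."""
    reconstruction = healthy_reconstructor(t1ce)     # learned healthy appearance
    residual = np.abs(t1ce - reconstruction)         # anomalies remain in the residual
    return residual / (residual.max() + 1e-8)        # normalize to [0, 1]
```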

Results: The proposed method achieves state-of-the-art Dice similarity coefficients.

Impact: This work opens up new avenues of research in unsupervised anomaly detection on T1CE images, which has been infeasible due to the lack of healthy post-contrast images. We also propose a novel contrastive learning paradigm using synthesis of healthy-anomalous image pairs.

1763.
Computer #54: Pretraining using masked language modeling improves label noise robustness on the metadata standardization task
Ben A Duffy1 and Ryan Chamberlain1
1Subtle Medical Inc., Menlo Park, CA, United States

Keywords: Data Processing, Software Tools

Motivation: The lack of standardization in MRI metadata increases radiologist workload.

Goal(s): To demonstrate an approach for standardizing the image-contrast and body-part-examined information in the DICOM header, and to understand the benefits of self-supervised pretraining on the metadata standardization task in the presence of label noise.

Approach: Masked language modeling was used for pretraining. At the fine-tuning stage, a transformer model was used to predict the image contrast and body part from both the text and numerical DICOM tags.
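
A hedged sketch of masked-language-model pretraining on DICOM header text using Hugging Face Transformers; the base model, the toy series descriptions, and the tokenization settings are assumptions, not the authors' setup:

```python
# Masked-language-model pretraining on DICOM header strings (illustrative).
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import Dataset

texts = ["AX T1 POST GAD BRAIN", "SAG T2 FLAIR", "COR STIR L-SPINE"]  # toy DICOM text tags
dataset = Dataset.from_dict({"text": texts})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")   # assumed base model
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm_pretrain", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # pretrained weights are later fine-tuned for contrast/body-part prediction
```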

Results: Pretraining improves robustness to label noise, with no loss in performance at 20% label noise.

Impact: Pretraining using masked language modeling is effective at rendering a metadata standardization system robust to label noise. Such a system can be used to standardize MRI metadata and therefore reduce radiologist workload. Future work should investigate class-conditional label noise.

1764.
Computer #55: Multi-Contrast Low-field MR Image Enhancement via Self-supervision
Long Wang1, Zechen Zhou1, and Ryan Chamberlain1
1Subtle Medical, Menlo Park, CA, United States

Keywords: AI/ML Image Reconstruction, Low-Field MRI

Motivation: Restoring structures that are barely visible in MR images is a major challenge for self-supervised enhancement from a single input, especially in low-field MR imaging applications.

Goal(s): To improve image quality and the visibility of clinically relevant structures in certain MR image contrasts.

Approach: We propose a self-supervised learning framework that uses shareable information from other image contrasts. Specifically, two mutual modulations with a cyclic consistency constraint are introduced to guide training.

Results: Preliminary results on 0.25T spine MR images suggest that our method can achieve superior results compared to other self-supervised methods.

Impact: This work shows the feasibility of using multi-contrast information to improve poor-quality MR images without acquiring low-resolution/high-resolution pairs, which can lead to more accurate diagnoses.

1765.
Computer #56: Joint Multi-Contrast Image Reconstruction with Self-Supervised Learning
Brenden Toshihide Kadota1,2, Charles Millard3, and Mark Chiew1,2
1Medical Biophysics, University of Toronto, Toronto, ON, Canada, 2Physical Sciences Research Platform, Sunnybrook Research Institute, Toronto, ON, Canada, 3FMRIB, Wellcome Centre for Integrative Neuroimaging, Oxford, United Kingdom

Keywords: AI/ML Image Reconstruction, Image Reconstruction

Motivation: Self-supervised learning via data undersampling (SSDU) uses single-contrast images in reconstruction, but a typical protocol contains multiple contrasts that provide additional information.

Goal(s): Our goal is to improve self-supervised image reconstruction fidelity by jointly reconstructing multi-contrast images.

Approach: We modify SSDU by concatenating independently under-sampled contrasts along the channel dimension in a VarNet architecture.
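
A sketch of the SSDU-style split extended to two contrasts stacked along the channel dimension (illustrative; `JointVarNet`, the sampling parameters, and the split ratio are hypothetical placeholders):

```python
# SSDU-style mask split for joint multi-contrast training (illustrative).
import torch

def split_mask(omega: torch.Tensor, rho: float = 0.4):
    """Split acquired k-space locations Omega into a training set Theta and a
    held-out loss set Lambda, as in SSDU."""
    lam = (torch.rand_like(omega.float()) < rho) & omega.bool()
    theta = omega.bool() & ~lam
    return theta, lam

# kspace: (batch, contrasts, coils, kx, ky) complex; omega: shared-shape sampling mask
kspace = torch.randn(1, 2, 8, 128, 128, dtype=torch.complex64)
omega = (torch.rand(1, 2, 1, 128, 128) < 0.25)        # independently under-sampled contrasts

theta, lam = split_mask(omega)
net_input = kspace * theta                             # only Theta is seen by the network
# recon_k = JointVarNet()(net_input, theta)            # contrasts concatenated channel-wise
# loss = torch.norm((recon_k - kspace) * lam) / torch.norm(kspace * lam)
```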

Results: Joint multi-contrast SSDU reconstructs with higher SSIM and lower NMSE than single contrast supervised and self-supervised methods.

Impact: Joint multi-contrast SSDU produces higher quality reconstructions than single-contrast methods, without fully-sampled training data. Accelerated multi-contrast imaging protocols will benefit from higher diagnostic quality or higher acceleration factors.

1766.
Computer #57: Non-central chi likelihood loss for quantitative MRI from parallel acquisitions with self-supervised deep learning
Christopher S Parker1, Daniel C Alexander1, and Hui Zhang1
1Centre for Medical Image Computing, University College London, London, United Kingdom

Keywords: AI Diffusion Models, Quantitative Imaging, parallel imaging, apparent diffusion coefficient, IVIM, parameter estimation

Motivation: The distribution of reconstructed MRI signals, used as input for quantitative MRI with self-supervised deep learning, depends on the number of receiver coils. Current loss functions do not account for this, leading to bias.

Goal(s): Develop a non-central chi likelihood (NLC) loss that accounts for the distribution of MRI measures in the most common scenario of parallelised acquisitions.

Approach: Implement and evaluate the NLC loss and compare its performance against the MSE and Rician likelihood loss in simulated data.
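
For reference, a sketch of a non-central chi negative log-likelihood for L coils under a sum-of-squares reconstruction is given below. It uses SciPy's exponentially scaled Bessel function for numerical stability and a toy ADC example; it is not necessarily the authors' implementation.

```python
# Non-central chi negative log-likelihood for magnitudes M with noise-free
# signal nu, per-coil noise sigma, and L receiver coils (illustrative).
import numpy as np
from scipy.special import ive

def noncentral_chi_nll(M, nu, sigma, L):
    """-log p(M | nu, sigma, L) for the non-central chi distribution."""
    z = M * nu / sigma**2
    log_bessel = np.log(ive(L - 1, z)) + z          # log I_{L-1}(z), rescaled
    log_pdf = (L * np.log(M) - np.log(sigma**2) - (L - 1) * np.log(nu)
               - (M**2 + nu**2) / (2 * sigma**2) + log_bessel)
    return -np.sum(log_pdf)

# Example: evaluate the loss for a mono-exponential ADC model at several b-values
b = np.array([0., 500., 1000.])
signal_model = lambda S0, adc: S0 * np.exp(-b * adc / 1000.0)   # ADC in 1e-3 mm^2/s
measured = np.array([1.02, 0.63, 0.41])
print(noncentral_chi_nll(measured, signal_model(1.0, 1.0), sigma=0.05, L=8))
```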

Results: The NLC loss improves performance compared to the Rician likelihood and MSE losses for the mono-exponential ADC model in simulated data.

Impact: The NLC loss permits fast inference of parameters from MRI signals reconstructed from parallelised acquisitions and may reduce bias compared to the Rician and MSE loss. The NLC loss is widely applicable due to the abundance of parallelised MRI acquisitions.

1767.
Computer #58: Weak supervision in multi-coil accelerated MR image reconstruction
Arda Atalik1, Sumit Chopra2,3, and Daniel K Sodickson3,4
1Center for Data Science, New York University, New York, NY, United States, 2Courant Institute of Mathematical Sciences, New York University, New York, NY, United States, 3Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States, 4Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States

Keywords: AI/ML Image Reconstruction, Machine Learning/Artificial Intelligence

Motivation: In training-data-limited settings, weak supervision (cooperatively utilizing under-sampled and fully-sampled datasets) can be advantageous.

Goal(s): To compare weakly-supervised multi-coil Magnetic Resonance (MR) image reconstruction against reconstruction using only under-sampled or fully-sampled datasets in high- and low-data regimes.

Approach: Pretrain a Variational Network (VarNet) in a self-supervised manner by minimizing L1 loss in k-space using a 4x under-sampled dataset. Transfer the pre-trained weights to another VarNet and fine-tune it using a smaller, fully sampled dataset by optimizing MS-SSIM loss in image space.
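
A toy two-stage sketch of this weakly-supervised scheme (illustrative only; a two-layer CNN stands in for the VarNet, a plain L1 image loss stands in for MS-SSIM, and the held-out-sample split in stage 1 is an assumption):

```python
# Two-stage weak supervision: self-supervised k-space pretraining, then
# supervised image-space fine-tuning with transferred weights (toy example).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

# Stage 1: pretrain on under-sampled data, L1 loss on acquired-but-held-out samples.
pretrain_net = make_net()
opt = torch.optim.Adam(pretrain_net.parameters(), lr=1e-4)
for _ in range(100):
    kspace = torch.randn(2, 1, 64, 64)                 # toy real-valued "k-space"
    acquired = torch.rand_like(kspace) < 0.25          # ~4x under-sampling mask
    loss_mask = acquired & (torch.rand_like(kspace) < 0.4)
    pred = pretrain_net(kspace * (acquired & ~loss_mask))
    loss = F.l1_loss(pred[loss_mask], kspace[loss_mask])
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: transfer the pretrained weights and fine-tune on a smaller
# fully-sampled set with an image-space loss (MS-SSIM in the abstract).
finetune_net = make_net()
finetune_net.load_state_dict(pretrain_net.state_dict())
opt = torch.optim.Adam(finetune_net.parameters(), lr=1e-5)
for _ in range(20):
    image_target = torch.rand(2, 1, 64, 64)
    undersampled = image_target * (torch.rand_like(image_target) < 0.25)
    loss = F.l1_loss(finetune_net(undersampled), image_target)
    opt.zero_grad(); loss.backward(); opt.step()
```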

Results: We demonstrate improvements in reconstruction quality in the high-data regime as well as enhanced robustness of reconstruction in the low-data regime.

Impact: Multi-coil MR image reconstruction exploiting both under-sampled and fully-sampled datasets is achievable with transfer learning and fine-tuning. Our proposed methodology can provide improved reconstruction quality and robustness.

1768.
Computer #59: PEARL: Cascaded Self-supervised Cross-fusion Learning For Parallel MRI Acceleration
Qingyong Zhu1, Bei Liu2, Zhuo-Xu Cui1, Chentao Cao3, Xiaomeng Yan2, Yuanyuan Liu3, Jing Cheng3, Yihang Zhou1, Yanjie Zhu3, Haifeng Wang3, Hongwu Zeng4, and Dong Liang1
1Research Center for Medical AI, SIAT, Chinese Academy of Sciences, Shenzhen, China, 2School of Mathematics, Northwest University, Xi'an, China, 3Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, Chinese Academy of Sciences, Shenzhen, China, 4Shenzhen Children’s Hospital, Shenzhen, China

Keywords: AI/ML Image Reconstruction, Image Reconstruction, Self-supervised AI

Motivation: Supervised deep learning (SDL) has limitations due to data dependency, and self-supervised frameworks like DIP struggle with noise and artifacts.

Goal(s): To introduce PEARL, a novel self-supervised approach to accelerated parallel MRI.

Approach: PEARL leverages joint deep decoders coupled with cross-fusion schemes based on multi-parameter priors to achieve enhanced reconstruction.

Results: PEARL outperforms existing methods, demonstrating notable improvement under highly accelerated acquisition.

Impact: This study emphasizes the significance and potential of self-supervised learning in addressing critical MRI challenges.

1769.
Computer #60: Improved lesion conspicuity on liver ADC maps using self-supervised DDPMs
Serge Vasylechko1, Andy Tsai1, Onur Afacan1, and Sila Kurugol1
1Boston Children's Hospital and Harvard Medical School, Boston, MA, United States

Keywords: AI Diffusion Models, Liver

Motivation: Abdominal DW-MRI suffers from low SNR and motion artifacts, compromising ADC reliability.

Goal(s): Improve ADC estimation in low SNR abdominal DW-MRI using a single-image acquisition per b-value. 

Approach: We propose a novel self-supervised training approach based on denoising diffusion probabilistic models (ssDDPM). Tailored for multi-b-value DW-MRI, it requires only a single gradient image per b-value for denoising, which reduces scan time.
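
For orientation, the standard DDPM noise-prediction training step is sketched below; this is the generic objective that the method builds on, not the authors' self-supervised multi-b-value variant, and the toy network ignores the timestep that a real DDPM would condition on.

```python
# Generic DDPM epsilon-prediction training step (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

eps_net = nn.Sequential(                 # toy noise-prediction network (no t-conditioning)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(eps_net.parameters(), lr=1e-4)

for _ in range(100):
    x0 = torch.rand(4, 1, 64, 64)        # clean (here: random placeholder) images
    t = torch.randint(0, T, (4,))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward diffusion
    loss = F.mse_loss(eps_net(x_t), eps)                  # predict the added noise
    opt.zero_grad(); loss.backward(); opt.step()
```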

Results: ssDDPM demonstrated superior lesion conspicuity in low b-value images and quantitative ADC maps compared with competing methods. For distinguishing lesions from normal tissue, a logistic classifier's sensitivity improved from 0.93 to 0.98 and specificity from 0.88 to 0.97 relative to non-denoised NEX1 images.

Impact: ssDDPM enhances the accuracy of abdominal DW-MRI ADC maps from single acquisitions, reducing scan times and patient discomfort. This promises earlier, more precise tumor detection and monitoring, benefiting clinical care and healthcare efficiency.

1770.
Computer #61: Enhancing Low-Count PET Image Quality via Unsupervised Deep Learning Using a Novel Cost Function – A PET/MRI Study
Tianyun Zhao1,2 and Chuan Huang1,2,3
1Radiology and Imaging Science, Emory University School of Medicine, Atlanta, GA, United States, 2Biomedical Engineering, Stony Brook University, Stony Brook, NY, United States, 3Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, United States

Keywords: AI/ML Image Reconstruction, Machine Learning/Artificial Intelligence, PET/MR, unsupervised learning

Motivation: PET/MRI enables MR-assisted PET denoising for low-count PET. Recent work has demonstrated the potential of unsupervised deep learning (uDL) denoising methods, which have the advantage of not requiring large amounts of training data. We believe the performance of uDL denoising with MR guidance can be further improved.

Goal(s): To demonstrate the efficacy of a novel cost function in enhancing the quality of denoised low-count PET images using uDL techniques.

Approach: We utilized dNet with MRI as input and low-count PET as the target, along with a novel cost function consisting of a Bowsher prior and a mean squared error term.
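
A sketch of the kind of combined cost described, i.e. an MSE data term plus an MR-guided Bowsher-style penalty that smooths the PET estimate only toward its B most MR-similar neighbors (2D, 3x3 neighborhood; the weighting and neighborhood choices are assumptions, not the authors' implementation):

```python
# MSE data term + MR-guided Bowsher-style penalty (illustrative, 2D).
import torch
import torch.nn.functional as F

def bowsher_penalty(pet: torch.Tensor, mr: torch.Tensor, B: int = 4) -> torch.Tensor:
    """pet, mr: (1, 1, H, W). Penalize PET differences to the B neighbors whose
    MR intensities are most similar to the center voxel."""
    pet_nb = F.unfold(pet, kernel_size=3, padding=1)     # (1, 9, H*W) neighborhoods
    mr_nb = F.unfold(mr, kernel_size=3, padding=1)
    pet_c = pet.flatten(2)                               # (1, 1, H*W) center values
    mr_c = mr.flatten(2)
    mr_dist = (mr_nb - mr_c).abs()
    mr_dist[:, 4, :] = float("inf")                      # exclude the center itself
    _, idx = torch.topk(-mr_dist, k=B, dim=1)            # B most MR-similar neighbors
    pet_diff = torch.gather(pet_nb, 1, idx) - pet_c
    return (pet_diff ** 2).mean()

pet_estimate = torch.rand(1, 1, 64, 64, requires_grad=True)   # network output in practice
low_count_pet = torch.rand(1, 1, 64, 64)
mr_image = torch.rand(1, 1, 64, 64)

cost = F.mse_loss(pet_estimate, low_count_pet) + 0.1 * bowsher_penalty(pet_estimate, mr_image)
cost.backward()   # in the abstract, a cost of this form drives the MR-input denoising network
```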

Results: Our method outperformed the other denoising methods evaluated.

Impact: This study demonstrates the potential for significant improvements in low-count PET image quality through advanced denoising techniques, which could enhance diagnostic accuracy while reducing radiation exposure for patients.

1771.
Computer #62: Multi-Slice-Aware Denoising Model for 7T MR Angiography
SoJin Yun1,2, Sung-Hye You3, Jeewon Kim1, Byungjun Kim3, and Hyunseok Seo1
1Bionics Research Center, Korea Institute of Science and Technology, Seoul, Korea, Republic of, 2Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, Republic of, 3Department of Radiology, Korea University, Seoul, Korea, Republic of

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Noise in 7T MRA images can make it challenging to diagnose vascular diseases, and conventional denoisers tend to over-smooth, blurring the entire image or failing to preserve vascular information.

Goal(s): Our goal was to reduce noise in 7T MRA images while maintaining vascular information using a deep learning method.

Approach: We devised an unsupervised denoising model using multiple slice information and cycleGAN-based neural networks.
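
A compact sketch of the cycle-consistency idea behind a cycleGAN-style unsupervised denoiser (toy 2D networks; the multi-slice conditioning described in the abstract is omitted, and the discriminator update is skipped for brevity):

```python
# Toy cycleGAN-style generator update for unpaired noisy->clean denoising.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_net(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, cout, 3, padding=1))

G_n2c = conv_net(1, 1)   # noisy -> clean generator
G_c2n = conv_net(1, 1)   # clean -> noisy generator
D_clean = nn.Sequential(conv_net(1, 1), nn.AdaptiveAvgPool2d(1))   # toy discriminator

g_opt = torch.optim.Adam(list(G_n2c.parameters()) + list(G_c2n.parameters()), lr=1e-4)

noisy = torch.rand(2, 1, 64, 64)        # unpaired noisy 7T MRA slices (placeholder)
fake_clean = G_n2c(noisy)
cycle = G_c2n(fake_clean)

logits = D_clean(fake_clean)
adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
cycle_loss = F.l1_loss(cycle, noisy)    # cycle consistency preserves structure (vessels)
g_loss = adv_loss + 10.0 * cycle_loss
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```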

Results: Our approach not only suppressed noise in 7T MRA images but also best preserved vessel information among the compared models.

Impact: We developed a denoising method for 7T MRA images that preserves vascular information. Clinically, our findings will help in diagnosing vascular diseases. High SNR is preserved by averaging adjacent slices, contributing to the increased usability of 7T MRI.

1772.
Computer #63: Quantitative MRI with Automated Histogram Analysis Based on Self-Supervised Learning of Organ Segmentation: Demonstration for Liver T1 Mapping
Lavanya Umapathy1,2, Prerna Luthra1,3, Jingjia Chen1,2, Daniel Sodickson1,2, and Li Feng1,2
1Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States, 2Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States, 3Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, New York, NY, United States

Keywords: Analysis/Processing, Liver, Machine learning, quantitative imaging

Motivation: Extensive recent work has been devoted to quantitative MRI, but practical implementation of quantitative parameter mapping is hindered by the lack of tools for easy visualization and automated analysis. 

Goal(s): We demonstrate the applicability of a self-supervised contrastive pretraining framework for organ segmentation in automated analysis of free-breathing 3D liver T1 mapping.

Approach: A DL model is pretrained to learn T1 contrast information from multi-contrast images acquired for T1 parameter mapping.
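
A minimal sketch of a SimCLR-style contrastive (NT-Xent) loss, the general family that such self-supervised contrastive pretraining belongs to (illustrative; treating two contrasts of the same slice as a positive pair is an assumption, not the authors' exact recipe):

```python
# NT-Xent contrastive loss between embeddings of two views of the same samples.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N samples."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D)
    sim = z @ z.t() / tau                                     # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))                     # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

emb_view1 = torch.randn(8, 128)   # e.g. encoder outputs for contrast A of 8 slices
emb_view2 = torch.randn(8, 128)   # encoder outputs for contrast B of the same slices
loss = nt_xent(emb_view1, emb_view2)
```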

Results: With few labeled examples, an organ segmentation framework was developed, and its utility in interpreting parameter maps was demonstrated.

Impact: Multi-contrast information from images typically acquired for parameter estimation in quantitative MRI can be leveraged to pretrain organ segmentation models with self-supervision, enabling automated analysis of quantitative parameter maps.

1773.
Computer #64: TransGRAPPA: Self-supervised Transformer Network for k-Space Interpolation
Wenqi Huang1, Veronika Spieker2,3, Jiazhen Pan1, Daniel Rueckert2,4, and Kerstin Hammernik2
1Klinikum rechts der Isar, Technical University of Munich, Munich, Germany, 2School of Computation, Information and Technology, Technical University of Munich, Munich, Germany, 3Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Munich, Germany, 4Department of Computing, Imperial College London, London, United Kingdom

Keywords: AI/ML Image Reconstruction, Image Reconstruction, Self-supervised learning

Motivation: Parallel imaging, and GRAPPA in particular, suffers from limitations such as noise amplification and dependence on linear combinations of k-space values.

Goal(s): To enhance k-space interpolation accuracy with a transformer network, thereby improving the quality of clinical imaging.

Approach: We employ TransGRAPPA, a novel self-supervised transformer network with an attention mechanism that exploits latent features for nonlinear interpolation of missing k-space points.
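
A rough sketch of the underlying self-supervised calibration idea in the GRAPPA/RAKI tradition: a small network is trained on the fully sampled ACS region to predict a skipped k-space sample from its acquired neighbors, with a tiny TransformerEncoder layer standing in for the attention mechanism (purely illustrative; this is not the TransGRAPPA architecture, and the neighbor pattern and sizes are assumptions).

```python
# Self-supervised k-space interpolation calibrated on the ACS region (toy example).
import torch
import torch.nn as nn

acs = torch.randn(8, 32, 64, dtype=torch.complex64)   # toy ACS block, 8 coils

# Build calibration pairs: 6 acquired neighbor points (as per-coil tokens)
# -> the "missing" center point across all coils (here R = 2 along ky).
inputs, targets = [], []
for kx in range(1, 31):
    for ky in range(2, 62, 2):
        nbrs = acs[:, kx - 1:kx + 2, [ky - 2, ky + 2]]                               # (coils, 3, 2)
        tokens = torch.view_as_real(nbrs.permute(1, 2, 0).reshape(6, 8)).flatten(1)  # (6, 16)
        inputs.append(tokens)
        targets.append(torch.view_as_real(acs[:, kx, ky]).flatten())                 # (16,)
X, Y = torch.stack(inputs), torch.stack(targets)       # (N, 6, 16), (N, 16)

class TinyKInterp(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(16, 64)
        self.attn = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.head = nn.Linear(64, 16)
    def forward(self, x):
        return self.head(self.attn(self.embed(x)).mean(dim=1))

net = TinyKInterp()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(net(X), Y)
    opt.zero_grad(); loss.backward(); opt.step()
# The trained net would then be applied outside the ACS to fill the skipped ky lines.
```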

Results: TransGRAPPA outperforms GRAPPA and RAKI in terms of NRMSE, PSNR, SSIM, and noise reduction, showcasing enhanced capabilities on fastMRI’s multi-coil knee dataset.

Impact: The study presents an innovative reconstruction method that uses a transformer network to explore relationships among k-space points with limited training data, offering potential improvements in MR image quality and scan speed, and more efficient and accurate diagnostics in medical imaging.