ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
   
Pitch: AI-Empowered Image Analysis & Processing
Power Pitch
AI & Machine Learning
Thursday, 09 May 2024
Power Pitch Theatre 1
08:15 - 09:15
Moderators: Jennifer Steeden & Ze Wang
Session Number: PP-25
No CME/CE Credit

08:15 | 1191.
Enhancing MRI Resolution with a Lightweight Network and Reverse Residual Attention Fusion
Xia Li1, Hui Zhang1, Hao Yang1, and Tie-Qiang Li2,3
1China Jiliang University, Hangzhou, China, 2Karolinska Institute, Stockholm, Sweden, 3Karolinska University Hospital, Stockholm, Sweden

Keywords: AI/ML Image Reconstruction, Brain

Motivation: In MRI reconstruction, deep-learning methods often increase network complexity for improved super-resolution, leading to longer reconstruction times and training difficulties.

Goal(s): Our solution introduces an enhanced lightweight network that maintains high-quality performance. 

Approach: We accomplish this by stacking Reverse Residual Attention Fusion (RRAF) with PCA and Enhanced Spatial Attention (ESA) for precise feature extraction, utilizing Transformers with depth-wise dilated convolution for better context information, and employing High-Frequency Image Refinement (HFIR) for detailed information recovery.

Results: Our experiments confirm the effectiveness of our approach.

Impact: Introducing the lightweight network represents an important improvement in MRI SR reconstruction. By integrating Reverse Residual Attention Fusion, it upholds exceptional image quality, streamlines network complexity, reduces reconstruction time, and simplifies training for SR MRI image reconstruction.

08:15 | 1192.
Protocol-aware unsupervised retrospective T1 and T2 mapping with diverse imaging parameters
Shihan Qiu1,2, Yibin Xie1, Anthony G. Christodoulou2,3, Pascal Sati1,4, Marcel Maya5, Nancy L. Sicotte4, and Debiao Li1,2
1Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States, 2Department of Bioengineering, UCLA, Los Angeles, CA, United States, 3Department of Radiological Sciences, David Geffen School of Medicine at UCLA, Los Angeles, CA, United States, 4Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, United States, 5Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, United States

Keywords: Analysis/Processing, Relaxometry

Motivation: Quantitative MRI has the potential to improve disease characterization, but its limited accessibility impedes application.

Goal(s): To develop a deep learning method for retrospective T1 and T2 quantification from real-world brain MRI data, with the ability to handle diverse imaging protocols.

Approach: A protocol-aware self-supervised learning framework was developed, with the imaging parameters incorporated as additional inputs to the model.

Results: Validation on volunteers showed errors within 10% for nine brain regions when compared to prospective T1/T2 mapping. Application to 376 glioblastoma patients with diverse imaging protocols revealed statistical differences in T1 and T2 among tumor sub-regions and normal-appearing tissues.

Impact: The proposed method may allow retrospective T1 and T2 mapping in large real-world MRI datasets, enabling their analysis regardless of differences in protocols and scanners. This will facilitate large-scale investigation of quantitative MRI as a biomarker for disease.
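The abstract does not specify the signal model, but a protocol-aware self-supervised loss of this kind typically re-synthesizes the acquired signals from the predicted parameters using a physics model whose inputs include the imaging parameters. A minimal sketch, assuming a standard spin-echo signal equation and illustrative TR/TE values (both assumptions, not from the abstract):

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo signal equation: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Hypothetical protocol: per-scan TR/TE (ms), passed to the model as extra inputs
tr = np.array([500.0, 2000.0, 4000.0, 4000.0])
te = np.array([15.0, 15.0, 15.0, 90.0])

# Illustrative ground-truth tissue parameters for one voxel
pd_true, t1_true, t2_true = 1.0, 900.0, 80.0
observed = spin_echo_signal(pd_true, t1_true, t2_true, tr, te)

def self_supervised_loss(params):
    """Self-supervised loss: re-synthesize signals from predicted parameters
    and compare against the acquired signals -- no ground-truth maps needed."""
    pd, t1, t2 = params
    return np.mean((spin_echo_signal(pd, t1, t2, tr, te) - observed) ** 2)
```

Because the protocol parameters enter the loss explicitly, the same trained model can in principle be applied to data acquired with different TR/TE combinations.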

08:15 | 1193.
Toward Task-Based Reconstruction: Evaluating Relationships Between Reconstruction and Object Detection Performance
Natalia Konovalova1, Aniket Tolpadi1,2, Rupsa Bhattacharjee1, Johanna Luitjens1, Felix Gassert1, Paula Giesler1, Sharmila Majumdar1, and Valentina Pedoia1
1Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States, 2University of California, Berkeley, Berkeley, CA, United States

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Traditional medical image reconstruction emphasizes standard metrics, potentially overlooking optimization for downstream tasks like segmentation and anomaly detection.

Goal(s): Our study investigates the relationship between standard reconstruction and object detection metrics.

Approach: We trained a Faster R-CNN detector for meniscal anomalies, addressing class imbalance and implementing a custom detection-specific augmentation protocol.

Results: Evaluation on reconstructed datasets revealed that reconstruction quality was associated with true predictions but had a limited impact on overall detection performance, while box-based reconstruction metrics showed no correlation with prediction outcomes. These findings underscore the importance of considering associations between standard reconstruction and downstream task metrics when optimizing end-to-end pipelines.

Impact: Evaluation of standard reconstruction metrics, sliced by object detection outcomes, revealed a significant association between reconstruction and detection performance, emphasizing the utility of this approach in assessing task-based reconstruction.

08:15 | 1194.
Evaluation of an MR Anatomically Guided PET Reconstruction in Characterizing Multiple Sclerosis Lesions
Yujie Wang1, Chunwei Ying1, Matthew R. Brier1, Xinzhou Li2, David Faul3, Tammie Benzinger1,4, and Hongyu An1
1Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, United States, 2Siemens Medical Solutions, St. Louis, MO, United States, 3Siemens Medical Solutions, New York City, NY, United States, 4Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, United States

Keywords: Analysis/Processing, PET/MR, Bowsher CNN

Motivation: Bowsher CNN, an MRI anatomic-guided PET reconstruction method, can provide high-resolution PET images. It is unknown how it may affect lesion characterization in MS patients.

Goal(s): Evaluate the PET signal with and without Bowsher CNN reconstruction in various MS lesions.

Approach: FLAIR images defined WMH lesions and NAWM. WMH lesions were further separated into “T1-hypo” and “T1-iso” sub-categories based on T1 intensity. 18F-FDG SUVs were obtained from various lesion ROIs.

Results: The SUV differences between WMH and NAWM and between T1-hypo and T1-iso lesions became lower and higher, respectively, after applying Bowsher CNN.

Impact: Bowsher CNN PET reconstruction results in high-resolution PET images. SUV differences between WMH and NAWM became smaller, while the SUV differences between T1-hypo and T1-iso, two WMH subcategories, were larger after applying Bowsher CNN. 

08:15 | 1195.
Synthetic CT generation using focal frequency loss improves image sharpness
Veronica Ravano1,2,3, Adham Elwakil1,2,3, Thomas Yu1,2,3, Tom Hilbert1,2,3, Bénédicte Maréchal1,2,3, Jonas Richiardi2, Jean-Philippe Thiran3, Charbel Mourad2, Paul Margain4, Julien Favre4, Tobias Kober1,2,3, Patrick Omoumi2, and Stefan Sommer1,5
1Advanced Clinical Imaging Technology, Siemens Healthineers International AG, Lausanne, Geneva and Zurich, Switzerland, 2Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 3LTS5, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 4Swiss Biomotion Lab, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 5Swiss Centre for Musculoskeletal Imaging (SCMI), Balgrist Campus, Zurich, Switzerland

Keywords: Analysis/Processing, Data Processing, synthetic CT

Motivation: Standard intensity-based voxel-wise losses, generally used in image-to-image translation, are typically biased towards estimating the low-frequency content of image spectra. For synthetic CT (sCT) generation, this results in limited image sharpness and, consequently, limited clinical utility.

Goal(s): To improve sharpness in synthetic contrasts. 

Approach: We trained a model using a combination of intensity- and frequency-based losses for the generation of sCT images from MRI. 

Results: Compared to a baseline model, sCT images generated using the focal-frequency loss resulted in an enhanced level of details in knee images.

Impact: Our results suggest that the use of frequency-based losses, in conjunction with an intensity-based L1 loss, improves image sharpness in synthetic contrasts, and thereby shows the potential to increase their clinical usefulness.
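As a rough illustration of how a frequency-based term can complement an intensity-based L1 loss, here is a simplified numpy sketch in the spirit of the focal frequency loss, where per-frequency errors are re-weighted by their own magnitude so poorly synthesized (typically high) frequencies are not averaged away; the abstract's exact formulation and weighting may differ:

```python
import numpy as np

def focal_frequency_loss(pred, target, alpha=1.0):
    """Simplified focal frequency loss: compare images in the Fourier domain
    and up-weight the frequencies with the largest errors."""
    diff = np.abs(np.fft.fft2(pred) - np.fft.fft2(target))
    dist = diff ** 2                         # squared spectral error
    weight = diff ** alpha                   # focal weighting by error magnitude
    weight = weight / (weight.max() + 1e-12) # normalize weights to [0, 1]
    return np.mean(weight * dist)

def combined_loss(pred, target, lam=1.0):
    """Intensity (L1) term plus frequency term, as in the training objective."""
    return np.mean(np.abs(pred - target)) + lam * focal_frequency_loss(pred, target)
```

A blurred prediction loses high-frequency content, so it is penalized strongly by the frequency term even when its voxel-wise L1 error is small.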

08:15 | 1196.
Contrast neutralization as a strategy to achieve generalizability in MR deep learning applications
Chitresh Bhushan1, Vanika Singhal2, and Dattesh D Shanbhag2
1GE HealthCare Research, Niskayuna, NY, United States, 2GE HealthCare, Bengaluru, India

Keywords: Analysis/Processing, Spinal Cord, Contrast Neutralization

Motivation: Provide flexibility for clinicians to fine-tune protocols/contrasts while still leveraging existing deep-learning (DL) applications trained with a limited set of MR contrasts.

Goal(s): Develop a task-specific contrast-neutralization pre-processing step that handles imaging contrasts different from those in the training set.

Approach: Investigate a Simple Contrast Neutralization (SCNe) approach that leverages Fourier-domain filtering to neutralize contrast from objects of desired sizes, and demonstrate its impact on the generalization of cervical foramina plane determination.

Results: Statistically significant improvements in plane prediction when SCNe is used on a new MERGE T2* contrast with a DL model trained only on Ax-T2 images.

Impact: Used as a pre-processing step, our Simple Contrast Neutralization (SCNe) approach made a DL model trained only on Ax-T2 images robust to an unseen MERGE T2* contrast for spine cervical foramina (CF) plane determination.
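The abstract does not give the filter details; as a minimal sketch of Fourier-domain contrast neutralization, a smooth high-pass filter suppresses the low spatial frequencies that carry contrast from large structures while preserving the edges and fine anatomy that drive plane detection (the cutoff parameter here is illustrative, not from the abstract):

```python
import numpy as np

def contrast_neutralize(img, cutoff_frac=0.05):
    """Toy contrast neutralization: attenuate spatial frequencies below
    roughly cutoff_frac of the sampling rate, then re-normalize intensities
    so downstream models see a consistent input range."""
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    radius = np.sqrt(kx**2 + ky**2)
    highpass = 1.0 - np.exp(-(radius / cutoff_frac) ** 2)  # smooth high-pass
    filtered = np.fft.ifft2(np.fft.fft2(img) * highpass).real
    return (filtered - filtered.mean()) / (filtered.std() + 1e-12)
```

Images of different contrasts that share the same anatomy then map to similar neutralized inputs, which is the intended generalization mechanism.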

08:15 | 1197.
A deep-learning model for effective ringing artifact removal by developing a novel multi-frequency Gibbs generator algorithm
Lisong Dai1, Zhenzhuang Miao2, Lei Lu2, Yuting Ling3, Hongyu Guo2, Xiaoyun Liang3, Qin Xu2, and Yuehua Li1
1Institute of Diagnostic and Interventional Radiology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China, 2MRI R&D, Neusoft Medical Systems Co. Ltd., Shanghai, China, 3Institute of Research and Clinical Innovations, Neusoft Medical Systems Co., Ltd, Shanghai, China

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, multi-frequency Gibbs artifact, deep learning, model training, artifact removal

Motivation: Gibbs artifacts generated by zero-padding k-space data for model training make it challenging for a model to learn the varying severity and manifestations of Gibbs artifacts in the image domain.

Goal(s): Our goal was to effectively remove ringing artifacts with a deep-learning model by developing a novel multi-frequency Gibbs generator algorithm.

Approach: We introduced a Gibbs artifact generator (GAG) algorithm to create Gibbs artifacts with different truncation ratios as training input and tested performance with a proposed deep-learning model.

Results: The images processed using the proposed approach demonstrated higher image quality scores than the original images (all P < 0.05).

Impact: The images generated by our new GAG algorithm with pronounced multi-frequency Gibbs artifacts could be used as a reliable training set for deep-learning model training, enabling the model to effectively identify and eliminate Gibbs artifacts in spinal MR imaging.
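A single-truncation sketch of such a generator: truncating the periphery of k-space and transforming back produces ringing at sharp edges, and sweeping the truncation ratio yields artifacts of different apparent frequency and severity, as described above (the GAG algorithm's actual details are not given in the abstract):

```python
import numpy as np

def gibbs_generator(img, truncation_ratio=0.5):
    """Toy Gibbs artifact generator: keep only the central fraction of
    k-space and zero the rest, which rings at sharp intensity edges.
    Varying truncation_ratio changes the ringing frequency and severity."""
    ny, nx = img.shape
    k = np.fft.fftshift(np.fft.fft2(img))
    keep_y = int(ny * truncation_ratio) // 2
    keep_x = int(nx * truncation_ratio) // 2
    mask = np.zeros_like(k)
    cy, cx = ny // 2, nx // 2
    mask[cy - keep_y:cy + keep_y, cx - keep_x:cx + keep_x] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(k * mask)).real

# Pairs (gibbs_generator(x, r), x) over many ratios r would form a training set.
```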

08:15 | 1198.
Unpaired Image-to-Image Translation of ULF-MRI using Vision Transformers to Advance Volumetric Analyses
Peter Hsu1,2, Elisa Marchetto1,3, Samantha Sanger1, Hersh Chandarana1,3, Jakob Asslaender1,3, Daniel Sodickson1,2,3, Patricia Johnson1,2,3, and Jelle Veraart1,3
1Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States, 2Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, New York, NY, United States, 3Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States

Keywords: Analysis/Processing, Low-Field MRI, ULF MRI, Ultra-Low-Field MRI, Deep Learning, Unpaired Image Translation, Brain Segmentation, Vision Transformers, CycleGAN

Motivation: The image quality of ultra-low-field MRI impacts the reliability of volumetric analysis in the brain. Existing techniques that address this issue learn from synthetically generated images, leading to a domain shift problem when presented with real images.

Goal(s): Development of a deep learning method trained with real ULF and HF images to robustly generate an image that can be segmented with routine software tools.

Approach: We introduce a CycleGAN framework with Residual Vision Transformers to improve super-resolved images compared to existing methods.

Results: The accuracy of volumetric estimations improves using our method compared to others based on clinical correlations and test-retest reliability metrics.

Impact: Our new image enhancement method should allow reliable volumetric evaluation using ULF-MRI. This will allow investigators in regions with access to ULF systems to monitor brain health in a way that was previously unattainable.

08:15 | 1199.
Dynamic Contrast-Enhanced MRI Parameter Mapping for Cervical Cancer Using CycleGAN-like model with UNet-Vision Transformer
Yuxi Jin1, Gengjia Lin1, Zhou Liu2, Zixiang Chen1, Zhenxing Huang1, Yang Qian2, Baijie Wang2, Na Zhang1, Hairong Zheng1,3, Dong Liang1,3, Dehong Luo2, and Zhanli Hu1,3
1Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China, 3Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Beijing, China

Keywords: Analysis/Processing, Cancer, Cervical Cancer, Dynamic Contrast-Enhanced MRI, UNet, Vision Transformer, CycleGAN, self-supervised pretraining

Motivation: DCE-MRI plays an important role in non-invasive detection and monitoring of cervical cancer, providing key information for improving diagnosis and treatment accuracy.

Goal(s): To optimize DCE-MRI parameter mapping with deep learning, addressing the complexity and noise the technique faces in application; existing deep learning-based methods are limited by scarce data and model efficiency.

Approach: We propose a CycleGAN-like model with UNet-Vision-Transformer generator, enhance the discriminator with gradient penalty, and pre-train the model via self-supervised image inpainting.

Results: Numerical experiments demonstrate that the proposed model is more efficient and robust than other deep learning-based methods.

Impact: This research offers fresh avenues for processing medical imaging data by proposing a novel and efficient deep learning model, significantly impacting the improvement of disease diagnosis. Furthermore, it provides researchers with new directions and insights, advancing scientific and technological progress.

08:15 | 1200.
3D Hybrid Deep Learning Solution for Subcortical Segmentation
Aaron Cao1, Vishwanatha Rao2, Xinru Liu3, and Jia Guo4,5
1Valley Christian High School, San Jose, CA, United States, 2Department of Biomedical Imaging, Columbia University, New York City, NY, United States, 3The Village School, Houston, TX, United States, 4Department of Psychiatry, Columbia University, New York City, NY, United States, 5Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, NY, United States

Keywords: Analysis/Processing, Neuro

Motivation: For subcortical brain segmentation, the most widely accepted tools like FreeSurfer are slow and inefficient for large datasets, while faster methods often sacrifice accuracy and reliability.

Goal(s): In this study, we propose a novel deep learning based alternative and achieve consistent state-of-the-art performance within reasonable processing times.

Approach: Our model, TABSurfer, utilizes a 3D patch-based approach with a hybrid CNN-Transformer architecture.

Results: We evaluated TABSurfer against FreeSurfer ground truths across various T1w MRI datasets, consistently demonstrating strong performance over a leading deep learning benchmark, FastSurferVINN. Then, we validated TABSurfer on a manual reference, outperforming both FreeSurfer and FastSurferVINN based on the gold standard.

Impact: Our proposed deep learning model, TABSurfer, demonstrated state-of-the-art subcortical segmentation performance and utility. TABSurfer displayed reliability across numerous datasets and outperformed well established traditional and deep learning tools in FreeSurfer and FastSurferVINN.

08:15 | 1201.
Graph kernel assisted robust individual and group level functional brain parcellation (GRAFP)
Sovesh Mohapatra1,2, Minhui Ouyang1,3, Qinmu Peng4, and Hao Huang1,3
1Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, United States, 2Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States, 3Department of Radiology, University of Pennsylvania, Philadelphia, PA, United States, 4School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China

Keywords: Analysis/Processing, fMRI (resting state), Functional Connectivity, Graph Kernel, Brain connectivity, Signal Modeling, Signal Representations

Motivation: Various rs-fMRI studies highlight the need for accurate delineation of brain functional networks (FNs) to enable precise therapeutic interventions in individuals.

Goal(s): To develop a novel zero-shot non-linear graph kernel-assisted approach for enhanced functional brain parcellation at individual and group levels.

Approach: Wavelet, Fourier, and Hilbert transformations extract features from BOLD signals; a propagation attribute graph kernel captures non-linear temporo-spatial connectivity; parcels are derived with k-means clustering.

Results: The kernel-based approach outperforms static FC matrix parcellations, achieving higher accuracy in network delineation at both the individual and group levels, as evidenced by Dice and Jaccard scores.

Impact: The study introduced a graph kernel-based method for functional brain parcellation that improved the accuracy of functional network delineation in rs-fMRI data, surpassing traditional static functional connectivity approaches at both the individual and group levels, as validated by Dice and Jaccard metrics.

08:15 | 1202.
Healthy-to-Patients Domain-Adaptive Deep Learning for Time-Resolved Segmentation of Left Atrium in Short-Axis Cine MRI Images
Mohamed Elbayumi1, Ulas Bagci1, Maurice Pradella1, Zachary Zilber1, Philip Greenland2, and Mohammed S.M. Elbaz1
1Radiology, Northwestern University, Chicago, IL, United States, 2Preventive Medicine, Northwestern University, Chicago, IL, United States

Keywords: Analysis/Processing, Segmentation, Left Atrium, Mitral Valve Regurgitation, Domain Adaptation, Deep Learning

Motivation: Addressing challenges with current deep learning (DL) techniques that struggle with domain shifts.

Goal(s): To introduce a domain-adaptive technique able to segment the left atrium from patient MRI using a model trained exclusively on healthy data.

Approach: Our approach involves training exclusively on healthy data and incorporating stochastic encoding of temporal composite variations as augmentations to encode the underlying space of plausible anatomical changes and dynamics. We tested on three challenging unseen patient datasets.

Results: Our domain-adaptive approach showed significant improvement over the state-of-the-art LA segmentation model, enabling LA segmentation across all time frames of the cardiac cycle.

Impact: The proposed domain-adaptive deep learning approach addresses a fundamental challenge: training deep learning models only on healthy control datasets while maintaining high performance on unseen patient populations. This could help resolve performance issues for limited patient cohorts.

08:15 | 1203.
BlindHarmony: Blind harmonization for multi-site MR image processing via unconditional flow model
Hwihun Jeong1, Heejoon Byun1, and Jongho Lee1
1Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea, Republic of

Keywords: Analysis/Processing, Reproductive

Motivation: Conventional deep learning-based harmonization cannot handle unseen source-domain images when large-scale data are unavailable.

Goal(s): We propose blind harmonization, which requires only target domain data during training and generalizes well on unseen source domain data.

Approach: BlindHarmony utilizes an unconditional flow model to measure an image's probability under the target domain and finds a harmonized image that is structurally close to the source-domain image but has high probability in the target domain.

Results: BlindHarmony successfully harmonized the source domain image to the target domain and improved the performance of downstream tasks for the data with a domain gap.

Impact: Deep learning-based harmonization typically necessitates both source and target domain data, limiting its widespread applicability. This study eliminates the need for source domain data and exhibits robust generalization to new source domain data, thereby expanding the utility of harmonization.
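A toy version of the blind-harmonization objective, with a Gaussian fitted to target-domain data standing in for the unconditional flow model (all specifics here are illustrative assumptions, not BlindHarmony's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "images": target domain centered at 0, source domain centered at 3
target_samples = rng.normal(0.0, 1.0, size=(1000, 8))
mu, sigma = target_samples.mean(axis=0), target_samples.std(axis=0)

def neg_log_prob(x):
    """Stand-in for the unconditional flow model: a Gaussian fitted to
    target-domain data only -- no source-domain data is ever needed."""
    return 0.5 * np.sum(((x - mu) / sigma) ** 2)

def harmonize(x_src, lam=0.5, lr=0.05, steps=300):
    """Minimize lam*||x - x_src||^2 - log p_target(x): stay structurally
    close to the source image while being probable in the target domain."""
    x = x_src.copy()
    for _ in range(steps):
        grad = 2.0 * lam * (x - x_src) + (x - mu) / sigma**2
        x -= lr * grad
    return x

x_src = rng.normal(3.0, 1.0, size=8)  # unseen source-domain image
x_harm = harmonize(x_src)
# x_harm sits between x_src and the target distribution, trading fidelity for probability
```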

08:15 | 1204.
Generic and Robust Quantitative MRI Parameter Estimation using Neural Controlled Differential Equations
Daan Kuppens1,2, Sebastiano Barbieri3, Susanne Rauh1,4, and Oliver Gurney-Champion1,2
1Radiology & Nuclear Medicine, Amsterdam University Medical Centers location University of Amsterdam, Amsterdam, Netherlands, 2Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, Netherlands, 3Centre for Big Data Research in Health, University of New South Wales Sydney, Sydney, Australia, 4Department of Radiology, C.J. Gorter MRI Center, Leiden University Medical Center, Leiden, Netherlands

Keywords: Analysis/Processing, Quantitative Imaging

Motivation: Tissue properties are estimated from MRI data using bio-physical models that relate MRI signal to underlying tissue properties via quantitative MRI parameters. Deep learning can improve parameter estimation, but needs retraining for different acquisition protocols, hindering implementation. 

Goal(s): Implement a deep learning algorithm able to estimate quantitative MRI parameters for multiple quantitative MRI applications, irrespective of acquisition protocol.  

Approach: Neural controlled differential equations (NCDEs) overcome this limitation as they are independent of the configuration of input data.

Results: NCDEs have improved performance compared to least squares minimization in estimating quantitative MRI parameters when SNR is low or when the parameter has low sensitivity. 

Impact: Neural controlled differential equations are a general-purpose tool for quantitative MRI parameter estimation that outperforms least-squares minimization, irrespective of acquisition protocol or quantitative MRI application.

08:15 | 1205.
Applying adaptive convolution to brain data – Making use of transfer learning
Simon Graf1,2, Walter Wohlgemuth1,2, and Andreas Deistung1,2
1Medical Physics Group, University Clinic and Outpatient Clinic for Radiology, University Hospital Halle (Saale), Halle (Saale), Germany, 2Halle MR Imaging Core Facility, Medical Faculty, Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany

Keywords: Analysis/Processing, Quantitative Susceptibility mapping

Motivation: Deep learning approaches for QSM-based dipole inversion lack generalizability towards acquisition parameters.

Goal(s): Our aim was to address data scarcity by integrating known information in the network model and investigate the feasibility of transfer learning.

Approach: The acquisition parameters (voxel size, FOV orientation) were integrated with manifold learning. The models were pre-trained on large-scale synthetic data sets and fine-tuned on in-vivo brain data in a second step.

Results: The use of manifold learning increased generalizability, while transfer learning substantially improved the quality of computed susceptibility maps.

Impact: While this study demonstrates the feasibility of cross-domain knowledge transfer in deep learning approaches for QSM, it also points to the broader potential of fine-tuning network parameters to scanner-specific data, thereby boosting neural network performance.
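For context, the physics underlying dipole inversion can be sketched directly; note that the voxel size, one of the acquisition parameters the abstract integrates via manifold learning, enters the forward model through the k-space grid (this is standard background physics, not the abstract's network):

```python
import numpy as np

def dipole_kernel(shape, voxel_size, b0_dir=(0.0, 0.0, 1.0)):
    """Unit dipole kernel in k-space, D(k) = 1/3 - (k.b)^2/|k|^2, with b the
    main-field direction. The measured field is F^-1{ D * F{chi} }; dipole
    inversion is the ill-posed inverse of this product (D vanishes on a cone)."""
    ks = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    grids = np.meshgrid(*ks, indexing="ij")
    b = np.asarray(b0_dir, dtype=float)
    b /= np.linalg.norm(b)
    k_dot_b = sum(g * bi for g, bi in zip(grids, b))
    k2 = sum(g**2 for g in grids)
    with np.errstate(invalid="ignore", divide="ignore"):
        d = 1.0 / 3.0 - k_dot_b**2 / k2
    d[(0,) * len(shape)] = 0.0  # D is undefined at k = 0
    return d

def forward_field(chi, voxel_size=(1.0, 1.0, 1.0)):
    """Field perturbation induced by a susceptibility distribution chi."""
    d = dipole_kernel(chi.shape, voxel_size)
    return np.fft.ifftn(d * np.fft.fftn(chi)).real
```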

08:15 | 1206.
Efficient Analysis of Myocardial Perfusion MRI with Human-in-the-loop Dynamic Quality Control: Initial Results Using the SCMR Registry
Dilek M. Yalcinkaya1,2, Zhuoan Li1,3, Khalid Youssef4, Bobak Heydari5, Rohan Dharmakumar3,4, Robert Judd6, Orlando Simonetti7, Subha Raman4, and Behzad Sharif1,3,4
1Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, United States, 2Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States, 3Biomedical Engineering, Purdue University, West Lafayette, IN, United States, 4Krannert Cardiovascular Research Center, IUSM, Indianapolis, IN, United States, 5Stephenson Cardiac Imaging Centre, University of Calgary, Calgary, AB, Canada, 6Intelerad, Raleigh, NC, United States, 7Davis Heart and Lung Research Institute, The Ohio State University, Columbus, OH, United States

Keywords: Analysis/Processing, Segmentation

Motivation: Accurate segmentation of free-breathing (FB) myocardial perfusion (MP) MRI is a labor-intensive yet necessary preprocessing step. A quality control (QC) tool for deep learning (DL)-based segmentation of FB MP MRI is lacking.

Goal(s): Developing a DL-based dynamic QC (dQC) tool for automatic analysis of MP MRI.

Approach: Using the discrepancy between patch-based segmentations, a dQC map is derived and quantified into a dQC metric. The utility of this metric in detecting erroneous segmentations is demonstrated by considering a human-in-the-loop (HiTL) framework.

Results: Referral of the dQC-detected timeframes to a HiTL markedly improved segmentation results compared to a random referral approach.

Impact: We proposed a dynamic quality control tool for automatic segmentation and analysis of free-breathing myocardial perfusion MRI datasets. Our results show that the proposed approach has markedly improved segmentation accuracy when used within a practical and efficient clinician-in-the-loop setting.

08:15 | 1207.
Assessing Machine Learning Robustness in MRS Quantification: Impact of Training Strategies on Out-of-Distribution Generalization
Julian P. Merkofer1, Antonia Kaiser2, Anouk Schrantee3,4, Oliver J. Gurney-Champion3,5, and Ruud J. G. van Sloun1
1Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands, 2Center for Biomedical Imaging, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 3Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Amsterdam, Netherlands, 4Center for Urban Mental Health, University of Amsterdam, Amsterdam, Netherlands, 5Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, Netherlands

Keywords: Analysis/Processing, Spectroscopy, ML Robustness, MRS Quantification

Motivation: Despite promising developments, current machine learning methods for magnetic resonance spectroscopy (MRS) suffer from limited robustness and generalization issues, restricting their clinical application.

Goal(s): This study compares training strategies for MRS quantification, focusing on neural network resilience to out-of-distribution samples. 

Approach: Bias towards the training distribution was assessed for various out-of-distribution cases in synthetic data and in-vivo data. 

Results: Our findings reveal that, while common supervised regression is most accurate for in-distribution cases, it shows the strongest data bias; physics-informed self-supervised training is more robust; and integrating a least-squares fitting method within the training framework enhances standalone performance while remaining generalizable.

Impact: To advance integration in clinical MRS, robust and generalizable machine learning methods are needed. This study's exploration of quantification training strategies offers insights into data biases and advocates hybrid models that combine traditional methods with neural networks to maintain robustness.
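As a minimal illustration of the least-squares component the abstract reports is worth integrating, MRS quantification can be posed as fitting a linear combination of basis spectra; here random vectors stand in for metabolite basis spectra (an illustrative toy, not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy metabolite basis: each column is one metabolite's spectrum (256 points)
basis = rng.normal(size=(256, 3))
amps_true = np.array([2.0, 0.5, 1.2])  # "concentrations" to recover
spectrum = basis @ amps_true + 0.01 * rng.normal(size=256)  # noisy measurement

# Classical least-squares quantification: solve basis @ amps ~= spectrum
amps_fit, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
```

Because this fit has no learned parameters, it carries no training-distribution bias; hybridizing it with a network is what the abstract finds keeps accuracy while remaining generalizable.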

08:15 | 1208. WITHDRAWN
08:15 | 1209.
SegFormer for Precise Quantification of Lung Ventilation Defects in Hyperpolarized Gas Lung MRI
Ramtin Babaeipour1, Ryan Zhu2, Harsh Patel2, Matthew S Fox2,3, and Alexei Ouriadov1,2,3
1School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON, Canada, 2Department of Physics and Astronomy, The University of Western Ontario, London, ON, Canada, 3Lawson Health Research Institute, London, ON, Canada

Keywords: Analysis/Processing, Hyperpolarized MR (Gas), Deep Learning; Magnetic Resonance Imaging (MRI); Hyperpolarized Gas MRI; Segmentation; Ventilation Defect; Chronic Obstructive Pulmonary Disease (COPD); Lung Imaging

Motivation: Current methods for quantifying lung ventilation defects using hyperpolarized gas MRI are effective but time-consuming. Deep Learning offers potential enhancements in image segmentation, with Vision Transformers (ViTs) emerging as notable alternatives to traditional CNNs.

Goal(s): The study aims to assess SegFormer's capability for automating the segmentation and quantification of ventilation defects in hyperpolarized gas MRI, comparing its efficiency and accuracy against traditional methods.

Approach: Utilizing a dataset from 56 study participants, the study adopted the SegFormer architecture for segmenting MRI slices.

Results: SegFormer, especially with ImageNet pretraining, surpassed CNN-based techniques in segmentation. Specifically, the MiT-B2 configuration of SegFormer showcased exceptional efficacy and efficiency.

Impact: SegFormer's efficiency in hyperpolarized gas MRI enhances future clinical decision-making with swift and precise segmentation. Its superiority may inspire broader adoption and further exploration into Vision Transformers' potential in medical imaging.

08:15 | 1210.
Enhanced motion artifact simulator for structural MRI with MP-RAGE sequence
Tianqi Wu1, Magdalena Sokolska2, David L. Thomas3, Matthew Grech-Sollars1, and Hui Zhang1
1Centre for Medical Image Computing & Department of Computer Science, University College London, London, United Kingdom, 2Medical Physics and Biomedical Engineering, University College London Hospitals, London, United Kingdom, 3Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom

Keywords: Analysis/Processing, Simulations, Motion artifacts

Motivation: Motion compromises the utility of structural MRI with the MP-RAGE sequence, a workhorse of quantitative neuroimaging research. Recent interest in deep learning-based mitigation, together with the scarcity of motion-corrupted data, motivates realistic data simulation. Unfortunately, existing open-source simulators fail to consider important features of real-world acquisitions, including variations in phase-encoding direction, multi-coil acquisition, and GRAPPA parallel imaging, resulting in less realistic simulations.

Goal(s): We aim to develop a more realistic motion artifact simulator for MP-RAGE structural MRI.

Approach: We extend TorchIO, an existing simulation framework, to support the aforementioned features.

Results: The comparison between simulations demonstrated the importance of including these features.

Impact: The proposed simulation framework can be used to generate more realistic motion-corrupted MRI data from clean images. These data can serve as training sets for deep learning algorithms in motion artifact-related applications.
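Independently of TorchIO, the core mechanism of such simulators can be sketched in a few lines: because MP-RAGE fills k-space over time, motion partway through the scan means later phase-encode lines see a displaced object, producing ghosting along the phase-encode axis (multi-coil data, GRAPPA, and phase-encode direction handling from the abstract are omitted in this toy):

```python
import numpy as np

def simulate_motion(img, shift=(0, 3), corrupt_frac=0.3):
    """Toy rigid-motion simulator: replace the last fraction of k-space rows
    (standing in for late phase-encode lines) with lines acquired from a
    shifted copy of the object, then transform back to the image domain."""
    k_still = np.fft.fft2(img)
    k_moved = np.fft.fft2(np.roll(img, shift, axis=(0, 1)))
    n_corrupt = int(img.shape[0] * corrupt_frac)
    k = k_still.copy()
    if n_corrupt > 0:
        k[-n_corrupt:, :] = k_moved[-n_corrupt:, :]  # late lines see moved object
    return np.fft.ifft2(k).real
```

Sweeping the shift, its timing, and the corrupted fraction generates paired clean/corrupted training data from motion-free images.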