ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
Seeking Reliability & Interpretability in Deep MRI
Digital Poster
AI & Machine Learning
Monday, 06 May 2024
Exhibition Hall (Hall 403)
17:00 -  18:00
Session Number: D-161
No CME/CE Credit

2217. (Computer #33)
Combining domain knowledge and foundation models for one-shot spine labeling
Deepa Anand1, Ashish Saxena1, Chitresh Bhushan2, and Dattesh Shanbhag1
1GE Healthcare, Bangalore, India, 2GE Healthcare, Niskayuna, NY, United States

Keywords: Analysis/Processing, Spinal Cord, MRI, Spine, Spine Labelling, Foundation Model, ML/AI

Motivation: Spine labelling is a crucial step for several important tasks, such as MRI scan planning and associating image regions with mentions in clinical reports. Automating it can yield significant benefits, but developing automated solutions requires extensive annotations of vertebra labels.

Goal(s): To automate spine labelling without extensively training a DL model with manual annotations.

Approach: We adapted a vision foundation model-based approach and combined it with spine domain knowledge to predict spine labels.
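
To make the role of domain knowledge concrete, here is a minimal, hypothetical sketch (not the authors' implementation): given disc centroids localized by a vision foundation model, standard anatomical ordering along the superior-inferior axis is enough to assign labels without any model training. The coordinates and label list below are illustrative.

```python
import numpy as np

# Hypothetical disc centroids (x, y, z) in image coordinates, e.g. obtained by
# one-shot localization with a vision foundation model (assumed upstream step).
disc_centroids = np.array([
    [128, 140, 210],   # most superior disc
    [127, 142, 188],
    [126, 145, 166],
    [125, 149, 144],
    [124, 153, 122],
])

# Domain knowledge: cervical intervertebral discs follow a fixed anatomical order.
CERVICAL_DISC_LABELS = ["C2-C3", "C3-C4", "C4-C5", "C5-C6", "C6-C7"]

def label_discs(centroids, labels, superior_axis=2, superior_is_high=True):
    """Assign anatomical labels to disc centroids by sorting along the
    superior-inferior axis (no model training required)."""
    order = np.argsort(centroids[:, superior_axis])
    if superior_is_high:          # larger coordinate = more superior
        order = order[::-1]
    return {labels[i]: centroids[idx] for i, idx in enumerate(order)}

print(label_discs(disc_centroids, CERVICAL_DISC_LABELS))
```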

Results: Our spine labelling method achieves an average accuracy of 79% and 86% for cervical and lumbar high-resolution T1 images, respectively.

Impact: Leveraging spatially relevant landmarks (discs) and a vision foundation deep learning model, spine labels are predicted using one-shot localization. The proposed method does not require any prior data for model training.

2218. (Computer #34)
Cartilage and Meniscus Segmentation for Knee MRIs: 2D, 3D and Foundational Models
Bruno Astuto Arouche Nunes1, Xuzhe Zhang2, Laura Carretero Gomez3,4, Deepthi Sundaran5, Jignesh Dholakia5, Eugenia Sánchez6, Mario Padrón6, Maggie Fung7, Ravi Soni 7, Avinash Gopal7, and Parminder Bhatia8
1GE HealthCare, San Mateo, CA, United States, 2Columbia University, New York, NY, United States, 3GE HealthCare, Munich, Germany, 4Rey Juan Carlos University, Madrid, Spain, 5GE HealthCare, Bangalore, India, 6Clinica Cemtro, Madrid, Spain, 7GE HealthCare, San Ramon, CA, United States, 8GE HealthCare, Seattle, WA, United States

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Morphometric assessment of cartilage (e.g., thickness) through MRI yields accurate measurements of osteoarthritis (OA) progression. Such quantitative measurements require image segmentation techniques. Recent developments in Visual Foundational Models (VFM) bring opportunities to increase generality and robustness.

Goal(s): What improvements can VFM-based approaches bring to the automatic segmentation of knee 3D MRIs, and how do they compare to traditional convolutional neural networks (CNNs)?

Approach: We trained a 2D VFM, a 3D CNN, and a modified 3D VFM on 500 MRI volumes and evaluated them qualitatively and quantitatively on external datasets.

Results: The proposed 3D VFM demonstrates a slight advantage in quantitative morphological assessment but strongly outperforms the other models when assessed qualitatively by radiologists, indicating a promising direction and better generalization.

Impact: By leveraging Visual Foundational Models (VFM) in the morphometric assessment of cartilage through 3D MRIs, our research demonstrates significant promise in enhancing the accuracy and generalization of knee segmentation for application to osteoarthritis progression measurements.

2219. (Computer #35)
Multi-domain and Uni-domain Fusion for domain-generalizable fMRI-based phenotypic prediction
Pansheng Chen1, Lijun An1, Naren Wulan1, Chen Zhang1, Shaoshi Zhang1, Leon Qi Rong Ooi1, Ru Kong1, Jianxiao Wu2, Sidhant Chopra3, Danilo Bzdok4, Simon B. Eickhoff2, Avram J. Holmes5, and B.T. Thomas Yeo1
1National University of Singapore, Singapore, Singapore, 2Heinrich-Heine University Düsseldorf, Düsseldorf, Germany, 3Yale University, New Haven, CT, United States, 4McGill University, Montreal, QC, Canada, 5Rutgers University, Piscataway, NJ, United States

Keywords: Diagnosis/Prediction, fMRI (resting state), functional connectivity, phenotypic prediction, meta-learning, transfer learning

Motivation: Resting-state functional connectivity (RSFC) is widely used to predict phenotypes in individuals. However, predictive models may fail to generalize to new datasets due to differences in population, data collection, and processing across datasets.

Goal(s): To resolve the dataset difference issue, we aimed to generalize knowledge from multiple diverse source datasets and translate the model to new target data.

Approach: Here we propose the Multi-domain and Uni-domain Fusion (MUF) method, which combines cross-domain learning and intra-domain learning to capture both domain-general and domain-specific information.
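
As a rough illustration of fusing domain-general and domain-specific predictors (a simplified stand-in for MUF, not the authors' algorithm), the sketch below trains one model on pooled source data and one on the target domain, then combines their predictions; the toy data, KernelRidge models, and fusion weight w are all assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Toy stand-ins for RSFC features (subjects x features) and a phenotype score.
def make_domain(n, shift):
    X = rng.normal(shift, 1.0, size=(n, 50))
    y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, size=n)
    return X, y

sources = [make_domain(200, s) for s in (0.0, 0.5, 1.0)]   # multiple source datasets
X_tgt, y_tgt = make_domain(60, 0.8)                        # small target dataset

# Domain-general model: pooled training across all source domains.
X_pool = np.vstack([X for X, _ in sources])
y_pool = np.concatenate([y for _, y in sources])
general = KernelRidge(alpha=1.0).fit(X_pool, y_pool)

# Domain-specific model: fit on the (small) target domain itself.
specific = KernelRidge(alpha=1.0).fit(X_tgt[:40], y_tgt[:40])

# Fusion: convex combination of the two predictors on held-out target subjects.
w = 0.5   # could be tuned on target validation data
pred = w * general.predict(X_tgt[40:]) + (1 - w) * specific.predict(X_tgt[40:])
print("fused prediction correlation:", np.corrcoef(pred, y_tgt[40:])[0, 1])
```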

Results: MUF outperformed 4 strong baseline methods on 6 target datasets.

Impact: Our MUF method is adept at addressing the challenges introduced by different population profiles, fMRI processing pipelines, and prediction tasks. We offer a robust and universal learning strategy for domain-generalization in fMRI-based phenotypic prediction.

2220. (Computer #36)
Cloud-Magnetic Resonance Imaging Systems: Prospects and Challenges
Yirong Zhou1, Yanhuang Wu1, Yuhan Su2, Jing Li3, Jianyu Cai4, Yongfu You5, Di Guo6, and Xiaobo Qu1
1Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Institute of Artificial Intelligence, Xiamen University, Xiamen, China, 2Department of Electronic Science, Key Laboratory of Digital Fujian on IoT Communication, Xiamen University, Xiamen, China, 3Shanghai Electric Group CO., LTD, Shanghai, China, 4China Telecom Group, Quanzhou, China, 5China Mobile Group, Xiamen, China, 6School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China

Keywords: AI/ML Software, Software Tools

Motivation: Magnetic Resonance Imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. Local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration.

Goal(s): To address the data-processing burden and cross-institutional data isolation faced by current hospitals.

Approach: Integrating cloud computing, 6G, edge computing, federated learning, and blockchain.

Results: Cloud-MRI transforms raw data into the International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format and enables fast reconstruction, AI training, and analysis. Results are relayed to cloud radiologists.

Impact: This system safeguards data, promotes collaboration, and enhances diagnostic precision and efficiency.

2221. (Computer #37)
Federated image-to-image MRI translation from heterogeneous multiple-sites data
Jan Stanisław Fiszer1,2, Dominika Ciupek1, Maciej Malawski1,2, and Tomasz Pieciak1,3
1Sano Centre for Computational Medicine, Kraków, Poland, 2AGH University of Science and Technology, Kraków, Poland, 3Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain

Keywords: AI/ML Image Reconstruction, Machine Learning/Artificial Intelligence, Federated Learning, Image-to-image Translation

Motivation: Applying machine learning (ML) in MRI necessitates the development of large and diverse datasets, which is a challenging process. Federated learning (FL) is a new frontier in ML that offers the possibility of multi-site data aggregation.

Goal(s): In our study, we compare a traditional deep convolutional neural network trained on data from multiple sources with the FL technique using different aggregation methods.

Approach: As a proof-of-concept, we employ four publicly available MRI datasets and carry out image-to-image translation between T1- and T2-weighted scans.
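
One widely used aggregation method of the kind compared in such FL studies is federated averaging (FedAvg), sketched below with toy parameter tensors; this is a generic illustration, not the authors' specific aggregation scheme.

```python
import numpy as np

def fedavg(site_weights, site_sizes):
    """Federated averaging: aggregate per-site model parameters weighted by
    the number of local training samples (one common aggregation method)."""
    total = float(sum(site_sizes))
    agg = {}
    for name in site_weights[0]:
        agg[name] = sum(w[name] * (n / total)
                        for w, n in zip(site_weights, site_sizes))
    return agg

# Toy example: three sites, each contributing one small parameter tensor.
sites = [{"conv1.weight": np.full((2, 2), v)} for v in (1.0, 2.0, 4.0)]
sizes = [100, 200, 100]
print(fedavg(sites, sizes)["conv1.weight"])   # weighted mean = 2.25
```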

Results: Our findings suggest that FL generalizes more effectively than models trained at each site separately.

Impact: Our research demonstrated the crucial role of federated learning in medical imaging. It also emphasized the significance of selecting an appropriate aggregation algorithm considering the data type and degree of heterogeneity.

2222. (Computer #38)
Federated training of deep learning models for prostate cancer segmentation on MRI: A simulation study
Kuancheng Wang1, Pranav Sompalle2, Alexander Charles Tonetti3, Zelin Zhang4, Tal Tiano Einat3, Ori Ashush3, Anant Madabhushi4,5, Malhar P. Patel3, and Rakesh Shiradkar4
1Georgia Institute of Technology, Atlanta, GA, United States, 2University of Pennsylvania, Philadelphia, PA, United States, 3Rhino Health, Boston, MA, United States, 4Emory University, Atlanta, GA, United States, 5Atlanta VA Medical Center, Atlanta, GA, United States

Keywords: Diagnosis/Prediction, Segmentation

Motivation: Site and scanner specific variations in prostate MRI impact performance of deep learning (DL) based models. Federated learning allows for privacy preserving training of DL models without the need for data sharing. 

Goal(s): In this study, we train DL models for prostate cancer segmentation on MRI using the Rhino Health federated computing platform. 

Approach: We adopt a 3D U-Net architecture to train the DL models on two publicly available datasets.

Results: DL models trained using a federated approach result in more generalizable models compared to those trained on single site data.

Impact: Successful development of deep learning based prostate cancer segmentation models on MRI using federated learning will result in reproducible and generalizable models. These can enhance clinical adoption and potentially improve downstream diagnostic and treatment workflows for prostate cancer.

2223. (Computer #39)
Uncertainty-Aware Anatomical Brain Parcellation using Diffusion MRI
Chenjun Li1, Ye Wu2, Le Zhang1, Qiannuo Li3, Shuyue Wang4, Shun Yao5, Kang Ik Kevin Cho6, Johanna Seitz-Holland6, Lipeng Ning6, Jon Haitz Legarreta6, Yogesh Rathi6, Carl-Fredrik Westin6, Lauren J. O'Donnell6, Ofer Pasternak6, and Fan Zhang1
1University of Electronic Science and Technology of China, Chengdu, China, 2Nanjing University of Science and Technology, Nanjing, China, 3East China University of Science and Technology, Shanghai, China, 4The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China, 5The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China, 6Harvard Medical School, Boston, MA, United States

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, Diffusion MRI

Motivation: While anatomical brain parcellation has long been performed using anatomical MRI and atlas-based approaches, deep learning methods together with diffusion MRI techniques can improve parcellation performance and interpretation of prediction uncertainty.

Goal(s): Our goal is to design an uncertainty-aware deep learning network to utilize multiple diffusion MRI parameters for accurate brain parcellation while enabling voxel-level uncertainty estimation.

Approach: We include five evidential deep learning subnetworks and perform an evidence-based ensemble for parcellation prediction and uncertainty estimation.
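
A minimal sketch of the evidential formulation (following Sensoy et al., 2018) is shown below: per-class evidence maps become Dirichlet parameters, class probabilities, and a voxel-wise uncertainty, and evidence from several subnetworks is accumulated as one possible ensemble rule. The toy shapes and the summation-based fusion are assumptions, not the authors' exact design.

```python
import numpy as np

def edl_predict(evidence):
    """Subjective-logic view of evidential deep learning (Sensoy et al., 2018):
    non-negative per-class evidence -> Dirichlet parameters, class probabilities,
    and a scalar uncertainty per voxel."""
    alpha = evidence + 1.0                       # Dirichlet concentration
    S = alpha.sum(axis=-1, keepdims=True)        # Dirichlet strength
    prob = alpha / S                             # expected class probability
    K = evidence.shape[-1]
    uncertainty = K / S.squeeze(-1)              # high when evidence is low
    return prob, uncertainty

rng = np.random.default_rng(1)
n_voxels, n_classes, n_subnets = 4, 5, 5

# Hypothetical evidence maps from five subnetworks (e.g., different dMRI inputs).
subnet_evidence = rng.gamma(2.0, 1.0, size=(n_subnets, n_voxels, n_classes))

# One simple evidence-based ensemble: accumulate evidence across subnetworks
# (illustrative; the authors' exact fusion rule may differ).
fused_evidence = subnet_evidence.sum(axis=0)
prob, unc = edl_predict(fused_evidence)
print("predicted labels:", prob.argmax(axis=-1))
print("voxel uncertainty:", np.round(unc, 3))
```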

Results: The results demonstrate our method’s superior parcellation performance over several state-of-the-art methods, its promising results in unseen patient scans, and potential applications in brain abnormality detection.

Impact: The proposed approach enables improved accuracy in brain parcellation from diffusion MRI, facilitating the understanding of the human brain in health and disease. It may also serve as an effective tool for brain abnormality detection, fostering inquiries into uncertainty-quantified diagnostics.

2224. (Computer #40)
Understanding the Trustworthiness of Saliency Maps for Biomarker Research in 3D Medical Imaging Classification
Yixiong Shi1, Chenyu Wang2, Dongnan Liu1, Weidong Cai1, and Mariano Cabezas2
1School of Computer Science, University of Sydney, Sydney, Australia, 2Brain and Mind Centre, University of Sydney, Sydney, Australia

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, saliency maps; biomarkers

Motivation: It is important to understand whether saliency maps can be considered potential biomarkers that provide reliable anatomical information in 3D medical imaging classification.

Goal(s): We found that saliency maps can provide different (even mutually exclusive) information across randomised models. It is therefore necessary to estimate the robustness of saliency maps under the stochastic training process.

Approach: We introduce a novel method that re-organises the saliency scores within the saliency maps and quantifies the inter-map difference to estimate the robustness of saliency maps.
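
One simple way to instantiate "quantifying the inter-map difference" is to rank-order voxel saliency scores and average pairwise rank correlations across training runs, as sketched below; the authors' re-organisation and difference metric may differ, and the maps here are simulated.

```python
import numpy as np
from scipy.stats import spearmanr

def saliency_robustness(maps):
    """Quantify inter-map agreement of saliency maps from repeated training runs
    by rank-ordering voxel scores and averaging pairwise Spearman correlations
    (a simple proxy; the authors' re-organisation/difference metric may differ)."""
    flat = [m.ravel() for m in maps]
    corrs = [spearmanr(flat[i], flat[j])[0]
             for i in range(len(flat)) for j in range(i + 1, len(flat))]
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
base = rng.random((8, 8, 8))
# Maps from models that differ only by training stochasticity (simulated noise).
runs = [base + rng.normal(0, 0.3, base.shape) for _ in range(4)]
print("mean pairwise rank correlation:", round(saliency_robustness(runs), 3))
```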

Results: None of the selected explanation methods exhibited strong performance in our estimation of saliency map robustness.

Impact: Our estimation provides evidence that saliency maps do not maintain robustness under the stochastic training process. Researchers should be especially careful when utilising saliency maps as biomarkers for interpretation.

2225. (Computer #41)
An Interpretable Deep Learning Approach for Identifying Working Memory-related Regions in fMRI using Three Large Cohorts
Tianyun Zhao1,2, Philip N Tubiolo2,3, John C Williams2,3, Jared X Van Snellenberg2,3,4, and Chuan Huang1,2,5
1Radiology and Imaging Science, Emory University School of Medicine, Atlanta, GA, United States, 2Biomedical Engineering, Stony Brook University, Stony Brook, NY, United States, 3Psychiatry and Behavioral Health, Renaissance School of Medicine at Stony Brook University, Stony Brook, NY, United States, 4Psychology, Stony Brook University, Stony Brook, NY, United States, 5Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, United States

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, fMRI, Working memory

Motivation: fMRI allows studying human brain activity in vivo, but standard fMRI analyses cannot capture nonlinear relationships between activity and variables of interest. Utilizing deep learning (DL) models may capture such relationships, providing new insight into the mechanisms underlying human health and disease.

Goal(s): To evaluate our interpretable DL pipeline in fMRI analysis using three large cohorts to demonstrate its generalizability and reproducibility.

Approach: We built a VGG-like network to predict task performance and generate saliency maps that can show brain regions important for task performance using three independent datasets.

Results: The DL-generated saliency maps are consistent across the datasets.

Impact: We demonstrated that interpretable deep learning can be used as a reliable and generalizable tool to gain insight into brain regions whose activation impacts task performance.

2226. (Computer #42)
A sophisticated method for encoding StyleGAN-based synthetic MR images for disease progression prediction in multiple sclerosis
Daniel Güllmar1,2, Wei-Chan Hsu1,2,3, and Jürgen R Reichenbach1,2
1Institut of Diagnostic and Interventional Radiology / Medical Physics Group, Jena University Hospital, Jena, Germany, 2Michael-Stifel-Center for Data-Driven and Simulation Science Jena, Jena, Germany, 3Institut of Diagnostic and Interventional Radiology / Section of Neuroradiology, Jena University Hospital, Jena, Germany

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Disease-related progression simulated through latent-space image manipulation is difficult to interpret.

Goal(s): The goal was to develop an approach allowing for an improved interpretation of latent space image manipulation.

Approach: A StyleGAN model trained on MRI data from MS patients and healthy controls was used for image manipulation. The direction in latent space that generates images mimicking disease progression towards MS was determined, and the resulting spatial changes were analyzed through eigenvalue decomposition.
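
A toy sketch of the overall idea, under several assumptions (a linear stand-in for the generator, a class-mean-difference latent direction, and eigendecomposition of the resulting image-change covariance), is given below; the authors' actual StyleGAN pipeline and direction-finding procedure are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_pix = 64, 32 * 32

# Hypothetical stand-in for a trained StyleGAN generator (latent -> image);
# in practice this would be the synthesis network of the trained model.
G_matrix = rng.normal(size=(n_pix, latent_dim))

def generate(w):
    return G_matrix @ w

# Latent codes of encoded healthy-control and MS images (toy data).
w_hc = rng.normal(0.0, 1.0, size=(100, latent_dim))
w_ms = rng.normal(0.3, 1.0, size=(100, latent_dim))

# One simple way to obtain a "disease direction" in latent space: the
# difference of class means (the authors' procedure may be more elaborate).
direction = w_ms.mean(axis=0) - w_hc.mean(axis=0)
direction /= np.linalg.norm(direction)

# Walk along the direction and collect image changes relative to the start.
steps = np.linspace(0.0, 3.0, 10)
w0 = w_hc.mean(axis=0)
deltas = np.stack([generate(w0 + s * direction) - generate(w0) for s in steps])

# Eigenvalue decomposition of the change covariance reveals dominant spatial
# patterns of the simulated progression.
cov = np.cov(deltas, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
print("top-3 eigenvalues:", np.round(eigvals[-3:][::-1], 3))
```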

Results: The decomposition approach revealed a pattern resembling a polynomial series, suggesting a parameterized data manipulation, with the second component being the most informative for illustrating disease-related image changes.

Impact: The analysis method for disentangling complex image changes through latent-space manipulation offers improved predictive accuracy and enhances our understanding of disease progression in neuroimaging research by isolating disease-related image features with a parameter-free approach.

2227. (Computer #43)
Uncertainty-aware Automated Liver Macromolecular Proton Fraction Quantification
Hongjian Kang1, Vincent WS Wong2, Jiabo Xu1, Jian Hou1, Baiyan Jiang3, Queenie Chan4, Ziqiang Yu1, Winnie CW Chu1, and Weitian Chen1
1Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China, 2Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong, China, 3Illuminatio Medical Technology Limited, Hong Kong, China, 4Philips Healthcare, Hong Kong, China

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Macromolecular proton fraction quantification based on spin-lock MRI (MPF-SL) is a new technique for non-invasive imaging and characterization of the macromolecular environment in tissues.

Goal(s): This study aims to develop an automated method for MPF quantification in the liver.

Approach: We present a deep learning framework for automated liver MPF quantification, incorporating an uncertainty-guided strategy for reliable region-of-interest (ROI) selection.
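
One generic form of uncertainty-guided ROI selection, shown purely for illustration (the authors' criterion may differ), keeps only the liver voxels whose predictive spread across repeated stochastic forward passes is low, then averages MPF over that region. All arrays below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: repeated MPF predictions (e.g., MC-dropout passes) and a liver mask.
n_passes, H, W = 20, 64, 64
mpf_samples = rng.normal(0.10, 0.01, size=(n_passes, H, W))   # MPF around 10%
liver_mask = np.zeros((H, W), dtype=bool)
liver_mask[16:48, 16:48] = True

mpf_mean = mpf_samples.mean(axis=0)
mpf_uncertainty = mpf_samples.std(axis=0)        # per-pixel predictive spread

# Uncertainty-guided ROI: keep the most confident liver pixels (here the lowest
# 50% of uncertainty values inside the mask; the authors' rule may differ).
thresh = np.percentile(mpf_uncertainty[liver_mask], 50)
roi = liver_mask & (mpf_uncertainty <= thresh)
print("ROI size:", int(roi.sum()),
      "mean MPF in ROI:", round(float(mpf_mean[roi].mean()), 4))
```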

Results: Evaluation was conducted using clinical MPF data from 44 patients, demonstrating minimal error in MPF quantification and consistent and robust ROI selection. Our method shows promise in automated MPF measurement of the liver, offering both qualitative and quantitative evidence of its efficacy.

Impact: MPF-SL has been recently developed to measure macromolecule levels, showing potential in the non-invasive diagnosis of hepatic fibrosis.
This work automates MPF quantification using deep learning, showing the potential to decrease the cost of MPF-SL post-processing.

2228. (Computer #44)
Distribution-free uncertainty estimation in multi-parametric quantitative MRI through conformalized quantile regression
Florian Birk1,2, Lucas Mahler1, Julius Steiglechner1,2, Qi Wang1, Klaus Scheffler1,2, and Rahel Heule1,2,3
1High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2Department of Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany, 3Center for MR Research, University Children's Hospital, Zurich, Switzerland

Keywords: Analysis/Processing, Quantitative Imaging, phase-cycled bSSFP, multi-parametric mapping, uncertainty quantification

Motivation: When using black-box regression models in diagnostic imaging, it is critical to quantify the uncertainty of such model predictions.

Goal(s): Implementing and understanding uncertainty quantification for multi-parametric quantitative MRI. Do prediction intervals reflect model uncertainty?

Approach: Train conditional quantile regression deep neural networks with subsequent conformalization steps for multi-parametric quantitative mapping without making distributional assumptions about the data.
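
The conformalization step itself is standard (Romano et al., 2019) and can be sketched with toy data; the sklearn quantile regressors below stand in for the authors' deep quantile networks, and the simulated mapping is only a placeholder for bSSFP-profile-to-parameter regression.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy heteroscedastic regression problem standing in for quantitative mapping.
def simulate(n):
    X = rng.uniform(-2, 2, size=(n, 4))
    y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.3 + 0.2 * np.abs(X[:, 0]), n)
    return X, y

X_tr, y_tr = simulate(2000)
X_cal, y_cal = simulate(500)     # held-out calibration set
X_te, y_te = simulate(500)

alpha = 0.1                       # target 90% coverage
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

# Conformalization step: nonconformity scores on the calibration set widen or
# narrow the raw quantile interval to guarantee coverage without any
# distributional assumptions about the data.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

lower, upper = lo.predict(X_te) - q, hi.predict(X_te) + q
coverage = np.mean((y_te >= lower) & (y_te <= upper))
print("empirical coverage:", round(float(coverage), 3))
```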

Results: Conformalized relaxometry and magnetic field prediction intervals reflect model uncertainty. Conformalized quantile regression was successfully implemented and provides supportive information about intrinsic model uncertainty which is mandatory for clinical decision making.

Impact: A novel method for quantifying uncertainty of supervised machine learning models for multi-parametric quantitative MRI was successfully tested in silico and in vivo. Conformalized quantile regression allows prediction of confidence intervals without making assumptions about the training data distribution.

2229. (Computer #45)
Uncertainty-guided task-specific multi-parametric MR image fusion for brain tissue segmentation and quantification
Cheng Li1, Weijian Huang1,2,3, Yousuf Babiker M. Osman1,2, Taohui Xiao1, Hua Han1,2, Hairong Zheng1, and Shanshan Wang1,3
1Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2University of Chinese Academy of Sciences, Beijing, China, 3Peng Cheng Laboratory, Shenzhen, China

Keywords: Analysis/Processing, Brain

Motivation: Existing techniques for multi-parametric MR imaging-based brain tissue segmentation typically employ a generic feature combination strategy without incorporating task-specific guidance, making it challenging to ensure effective fusion.

Goal(s): In this work, we aim to develop a task-specific multi-parametric MR image fusion framework to enhance the brain tissue segmentation and quantification accuracy.

Approach: In preliminary experiments, we identified a close correlation between prediction uncertainties and prediction errors. We therefore propose an uncertainty-guided, task-specific multi-parametric MR image fusion framework to improve fusion efficiency and reduce prediction uncertainty.
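
A hand-crafted illustration of uncertainty-guided fusion is given below: per-contrast softmax maps are weighted voxel-wise by the inverse of their predictive entropy. In the authors' framework the fusion is learned and task-specific; the rule, toy Dirichlet-sampled maps, and contrast names here are assumptions.

```python
import numpy as np

def entropy(p, eps=1e-8):
    """Voxel-wise predictive entropy of a softmax map (classes on the last axis)."""
    return -(p * np.log(p + eps)).sum(axis=-1)

def uncertainty_weighted_fusion(prob_maps):
    """Fuse per-contrast segmentation probabilities, giving more weight to the
    contrast whose prediction is less uncertain at each voxel (an illustrative
    rule; the authors' task-specific fusion is learned, not hand-set)."""
    unc = np.stack([entropy(p) for p in prob_maps])          # (contrasts, ...)
    weights = 1.0 / (unc + 1e-8)
    weights /= weights.sum(axis=0, keepdims=True)
    fused = sum(w[..., None] * p for w, p in zip(weights, prob_maps))
    return fused, unc

rng = np.random.default_rng(0)
# Toy softmax maps from T1- and T2-weighted branches: 8x8 voxels, 4 tissue classes.
t1 = rng.dirichlet(np.full(4, 2.0), size=(8, 8))
t2 = rng.dirichlet(np.full(4, 0.5), size=(8, 8))   # peakier -> lower entropy
fused, unc = uncertainty_weighted_fusion([t1, t2])
print("fused labels:\n", fused.argmax(axis=-1))
```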

Results: Experiments on the iSeg-2019 dataset demonstrate that the proposed method achieves better results than existing techniques.

Impact: The outcome of this work has the potential to be utilized in clinical practice to help physicians better monitor brain development and diagnose brain diseases. Meanwhile, the framework can be extended to diverse fields where multi-modal image fusion is required.

2230. (Computer #46)
Neural Network-based MR Elastography Wave Inversion Using Physics-based Simulations and Uncertainty Quantification
Héloïse Bustin1,2, Tom Meyer1, Jakob Jordan1, Rolf Reiter1,3, Lars Walczak1,2,4, Heiko Tzschätzsch1,5, Ingolf Sack1, and Anja Hennemuth1,2,4,6
1Charité - Universitätsmedizin Berlin, Berlin, Germany, 2Institute of Computer-Assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité (DHZC), Berlin, Germany, 3Berlin Institute of Health at Charité – Universitätsmedizin Berlin, BIH Biomedical Innovation Academy, BIH Charité Digital Clinician Scientist Program, Berlin, Germany, 4Fraunhofer MEVIS, Berlin, Germany, 5Institute of Medical Informatics, Berlin, Germany, 6DZHK (German Center for Cardiovascular Research), Partner Site Berlin, Berlin, Germany

Keywords: AI/ML Image Reconstruction, Elastography

Motivation: In Magnetic Resonance Elastography (MRE), accurate reconstruction of stiffness maps is essential for medical diagnosis. Traditional inversion techniques are limited by noise, discretization and/or low wavenumbers.

Goal(s): We aim to overcome these limitations using a neural network-based wave inversion (ElastoNet) with integrated uncertainty quantification ensuring reliable predictions with high detail resolution.

Approach: We trained ElastoNet on simulated wave patches. For inference, we combined all 3 motion encoding directions as input and used evidential deep learning as an uncertainty quantification method.
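
For regression targets such as stiffness, evidential deep learning typically follows Amini et al. (2020): the network predicts Normal-Inverse-Gamma parameters from which point estimates and aleatoric/epistemic uncertainties are derived, as in the sketch below (the per-pixel values are simulated, and ElastoNet's exact head may differ).

```python
import numpy as np

def evidential_regression_outputs(gamma, nu, alpha, beta):
    """Convert Normal-Inverse-Gamma parameters predicted by a deep evidential
    regression head (Amini et al., 2020) into a point estimate plus aleatoric
    and epistemic uncertainty (variance) estimates."""
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)            # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))     # uncertainty in the mean
    return prediction, aleatoric, epistemic

# Hypothetical per-pixel NIG outputs for a small patch of a stiffness map.
rng = np.random.default_rng(0)
gamma = rng.uniform(1.0, 4.0, size=(4, 4))      # predicted stiffness (e.g., kPa)
nu = rng.uniform(1.0, 5.0, size=(4, 4))
alpha = rng.uniform(2.0, 6.0, size=(4, 4))      # > 1 so variances are defined
beta = rng.uniform(0.1, 1.0, size=(4, 4))

pred, alea, epis = evidential_regression_outputs(gamma, nu, alpha, beta)
print("mean epistemic variance:", round(float(epis.mean()), 4))
```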

Results: ElastoNet achieves a substantial improvement in detail resolution compared to current neural network approaches and shows promising results in the low-frequency domain.

Impact: Our MR elastography neural network-based wave inversion is a promising method for enhanced accuracy and reliability in tissue property characterization. It effectively addresses challenges in reconstruction of stiffness maps, expanding the potential of MR elastography for medical diagnosis.

2231. (Computer #47)
Instance-level explanations in multiple sclerosis lesion segmentation: a novel localized saliency map
Federico Spagnolo1,2,3,4, Nataliia Molchanova4,5, Roger Schaer4, Mario Ocampo-Pineda1,2,3, Meritxell Bach Cuadra5,6, Lester Melie-Garcia1,2,3, Cristina Granziera1,2,3, Vincent Andrearczyk4, and Adrien Depeursinge4,7
1Translational Imaging in Neurology (ThINK) Basel, Department of Medicine and Biomedical Engineering, University Hospital Basel and University of Basel, Basel, Switzerland, 2Department of Neurology, University Hospital Basel, Basel, Switzerland, 3Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland, 4MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland, 5CIBM Center for Biomedical Imaging, Lausanne, Switzerland, 6Radiology Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland, 7Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, Explainability, interpretability

Motivation: The use of AI in clinical routine is often jeopardized by its lack of transparency. Explainable methods would help both clinicians and developers to identify model bias and interpret the automatic outputs.

Goal(s): We propose an explainable method providing insights into the decision process of an MS lesion segmentation network.

Approach: We adapt SmoothGrad to perform instance-level explanations and apply it to a U-Net, whose inputs are FLAIR and MPRAGE from 10 MS patients.
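
The SmoothGrad adaptation can be sketched generically: gradients of an instance-specific score (here, the summed predicted probability inside one lesion mask, a hypothetical choice) are averaged over noise-perturbed copies of the input. The tiny Conv3d network below is a stand-in for the actual U-Net.

```python
import torch
import torch.nn as nn

def smoothgrad(model, x, target_fn, n_samples=25, sigma=0.15):
    """SmoothGrad: average input gradients of a scalar target over noisy copies
    of the input. For instance-level lesion explanations, target_fn could sum
    the predicted probability inside one lesion's mask."""
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = target_fn(model(noisy))
        score.backward()
        grads += noisy.grad
    return grads / n_samples

# Toy 3D segmentation network standing in for the U-Net (2-channel FLAIR+MPRAGE input).
model = nn.Sequential(nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid())
x = torch.randn(1, 2, 16, 16, 16)

# Hypothetical lesion mask defining the "instance" whose prediction we explain.
lesion_mask = torch.zeros(1, 1, 16, 16, 16)
lesion_mask[..., 6:10, 6:10, 6:10] = 1.0

saliency = smoothgrad(model, x, lambda out: (out * lesion_mask).sum())
print("saliency shape:", tuple(saliency.shape))
```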

Results: Our saliency maps provide local-level information on the network's decisions. Predictions of the U-Net rely predominantly on lesions' voxel intensities in FLAIR and the amount of perilesional volume.

Impact: These results cast some light on the decision mechanisms of deep learning networks performing semantic segmentation. The acquired new knowledge can be an important step to facilitate AI integration into clinical practice.

2232. (Computer #48)
Adaptive Plane Reformatting for 4D Flow MRI using Deep Reinforcement Learning
Javier Bisbal1,2,3, Julio Sotelo4, Cristóbal Arrieta2,5, Pablo Irarrazaval2,3,6,7, Marcelo E Andia1,2,8, Denis Parra2,9,10, Maria Ignacia Valdes1,3, Cristián Tejos1,2,3, Julio Garcia11, José F. Rodríguez-Palomares12, Francesca Raimondi13, and Sergio Uribe2,14
1Biomedical Imaging Center, Pontificia Universidad Catolica de Chile, Santiago, Chile, 2Millennium Institute for Intelligent Healthcare Engineering, iHEALTH, Santiago, Chile, 3Department of Electrical Engineering, Pontificia Universidad Catolica de Chile, Santiago, Chile, 4Departamento de Informática, Universidad Técnica Federico Santa Maria, Santiago, Chile, 5Faculty of Engineering, Universidad Alberto Hurtado, Santiago, Chile, 6Biomedical Imaging Center, Pontificia Universidad Católica de Chile, Santiago, Chile, 7Institute for Biological and Medical Engineering, Pontificia Universidad Catolica de Chile, Santiago, Chile, 8Department of Radiology, Pontificia Universidad Católica de Chile, Santiago, Chile, 9Computer Science Department, Pontificia Universidad Catolica de Chile, Santiago, Chile, 10Centro Nacional de Inteligencia Artificial, CENIA, Santiago, Chile, 11Stephenson Cardiac Imaging Centre, Department of Radiology, University of Calgary, Calgary, AB, Canada, 12Department of Cardiology, Hospital Universitari Vall d'Hebron, CIBER-CV, Vall d'Hebron Institut de Recerca (VHIR), Barcelona, Spain, 13Department of Cardiology and Cardiovascular Surgery, Papa Giovanni XXIII Hospital, Bergamo, Italy, 14Department of Medical Imaging and Radiation Sciences, Monash University, Melbourne, Australia

Keywords: Analysis/Processing, Velocity & Flow, Plane reformatting, Deep reinforcement learning

Motivation: The standard approach for plane reformatting in 4D flow MRI is manual, leading to time-consuming and user-dependent results.

Goal(s): Our goal was to enhance plane reformatting in 4D flow MRI and overcome limitations associated with existing automated methods.

Approach: We introduce a novel approach that employs deep reinforcement learning (DRL) with a flexible coordinate system for precise and adaptable plane reformatting.

Results: Our approach demonstrates superior performance compared to baseline DRL methods and outcomes similar to those of landmark-based techniques, showing its potential for use in complex medical imaging scenarios beyond 4D flow MRI.

Impact: The proposed framework allows for automated, precise, and adaptive plane reformatting, facilitating the use of 4D flow MRI in clinical routines. It was trained with data sets from different vendors, making this approach widely applicable.