ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
   
AI/ML for Image Analysis, Diagnosis & Predictive Insights
Traditional Poster
Monday, 06 May 2024
Room: Exhibition Hall (Hall 403)
13:45 - 14:45
Session Number: T-19
No CME/CE Credit

4843.
Early Alzheimer's Detection and Classification using VGG Convolutional Neural Network and Systematic Data Augmentation using MR Images
Elena Budyak1, Jihoon Kwon1, and Surendra Maharjan2
1Carmel High School, Carmel, IN, United States, 2Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, United States

Keywords: Diagnosis/Prediction, Machine Learning/Artificial Intelligence

Motivation: The use of different imaging tools at various hospitals results in images with varying contrast. This fragmentation in healthcare prompted us to develop a personalized network that can be trained on a hospital's own imaging database.

Goal(s): The main goal of this project is to predict early stages of Alzheimer's Disease (AD) using Magnetic Resonance (MR) images.

Approach: We applied a convolutional neural network (CNN) to T1-weighted images of AD, publicly available at https://www.kaggle.com/datasets/tourist55/alzheimers-dataset-4-class-of-images. The images were classified into four classes. The F1 score and area under the curve (AUC) were calculated for the model after training.
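As a rough illustration of this kind of transfer-learning setup, the sketch below (assuming PyTorch, a recent torchvision, and scikit-learn; not the authors' actual code) adapts a pretrained VGG16 backbone to the four classes and scores it with F1 and AUC:

```python
# Hypothetical sketch: 4-class AD classifier on a pretrained VGG16 backbone.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import f1_score, roc_auc_score

NUM_CLASSES = 4  # non-demented, very mild, mild, moderate (per the Kaggle dataset)

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # replace the final 1000-way layer

def evaluate(model, loader, device="cpu"):
    """Macro F1 and one-vs-rest AUC on a validation loader."""
    model.eval()
    all_labels, all_probs = [], []
    with torch.no_grad():
        for images, labels in loader:
            logits = model(images.to(device))
            all_probs.append(torch.softmax(logits, dim=1).cpu())
            all_labels.append(labels)
    probs = torch.cat(all_probs).numpy()
    labels = torch.cat(all_labels).numpy()
    f1 = f1_score(labels, probs.argmax(axis=1), average="macro")
    auc = roc_auc_score(labels, probs, multi_class="ovr")
    return f1, auc
```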

Results: We demonstrated an F1 score of 99.60% and an AUC of 0.994.

Impact: This model could be applied to other datasets to help detect AD early and subsequently improve treatment strategies. With additional training on mouse brain scans, the network could also aid AD researchers.

4844.
Predictive Value of Biochemical Recurrence in Advanced Prostate Cancer: Development of Deep Learning-based Radiomics Model
Huihui Wang1, Kexin Wang2, Yaofeng Zhang3, and Xiaoying Wang1
1Peking University First Hospital, Beijing, China, 2School of Basic Medical Sciences, Capital Medical University, Beijing, China, 3Beijing Smart Tree Medical Technology Co. Ltd, Beijing, China

Keywords: Diagnosis/Prediction, Radiomics

Motivation: Deep learning for predicting biochemical recurrence (BCR) is feasible but needs further evaluation in advanced prostate cancer (PCa).

Goal(s): We aimed to develop radiomics models with automatic segmentation derived from pretreatment ADC maps that may be predictive of BCR in advanced PCa.

Approach: In this study, PCa areas were segmented on ADC images by using a pre-trained artificial intelligence (AI) model. Three models were constructed to evaluate BCR prediction level.

Results: The deep-radiomics model was superior to the clinical model and the conventional radiomics model in prediction accuracy, clinical impact, and risk assessment.

Impact: With accurate BCR prediction by the deep-radiomics model, more appropriate treatment plans may be formulated and interventions carried out as early as possible, resulting in a better prognosis for patients with PCa.

4845.
Automatic Quantification of Abdominal Subcutaneous and Visceral Adipose Tissue based on Dixon Sequences using Convolutional Neural Networks
Benito de Celis Alonso1, José Gerardo Suárez García2, Po-Wah So3, Javier Miguel Hernández López1, Silvia Sandra Hidalgo Tobón4,5, and Pilar Dies Suárez6
1Faculty of Physical and Mathematical Sciences, Benemérita Universidad Autónoma de Puebla, BUAP, Puebla, Mexico, 2Benemérita Universidad Autónoma de Puebla, BUAP, Puebla, Mexico, 3Department of Neuroimaging, Institute of Psychiatry, King's College London, London, United Kingdom, 4Facultad de Ciencias, UAM Campus Iztapalapa, CDMX, Mexico, 5Imaging Department, Hospital Infantil de México Federico Gómez, CDMX, Mexico, 6Imaging Department, Hospital Infantil de México Federico Gómez, CDMX, Mexico

Keywords: AI/ML Software, Fat

Motivation: There is a widely validated commercial semi-automatic method, AMRA® Researcher, that quantifies abdominal subcutaneous adipose tissue (ASAT) and visceral adipose tissue (VAT). However, it is not accessible to everyone because of its cost.

Goal(s): To develop an automatic, simple and free methodology to quantify ASAT and VAT, with at least the same precision as AMRA® Researcher.

Approach: Preprocessing and simple CNNs applied to in-phase Dixon MRI sequences were proposed to quantify VAT and ASAT.

Results: There were no significant differences between the quantifications from AMRA® Researcher and our methodology; the two were highly correlated, and our methodology matched the precision of AMRA® Researcher.

Impact: Our automatic, simple, and free ASAT and VAT quantification methodology, which analyzes MRI with preprocessing and CNNs, achieved the precision of the commercial semi-automatic AMRA® Researcher method. After future independent validation, this could become an accessible tool to assist specialists.

4846.
A Convolutional Neural Network Approach to Personalized Neuropil Density Prediction
Brian Chang1, Adil Akif1, John Onofrey1, and Fahmeed Hyder1
1Biomedical Engineering, Yale University, New Haven, CT, United States

Keywords: Diagnosis/Prediction, Diffusion/other diffusion imaging techniques, Brain, gray matter, white matter

Motivation: Bottom-up energy budgets provide a way to quantify electrical activity in the brain using metabolic imaging. However, existing models are not patient-specific, instead using generalized neural cell counts, preventing direct measures of cognitive activity in the brain.

Goal(s): Our goal was to use a convolutional neural network (CNN) to demonstrate the possibility of predicting individualized neural cell counts.

Approach: Multi-modal MRI from nine patients was used to model neural and synaptic density predictions, which were compared to silver-standard counts using the correlation coefficient in a cross-validation study.

Results: The model demonstrates an ability to predict patient-specific energy budgets.

Impact: The success of machine learning methods in predicting neural cell and synaptic density paves the way for the use of CNNs to generate patient-specific energy budgets, improving understanding of brain energetics at a microscopic level in health and disease.

4847.
Fully Automatic Vertebrae and Spinal Cord Segmentation Using a Hybrid Approach Combining nnU-Net and Iterative Algorithm
Yehuda Warszawer1,2,3, Nathan Molinier4,5, Jan Valosek4,5, Emanuel Shirbint1, Pierre-Louis Benveniste4,5, Anat Achiron1,6, Arman Eshaghi7,8,9, and Julien Cohen-Adad4,5,10,11
1Multiple Sclerosis Center, Sheba Medical Center, Ramat-Gan, Israel, 2Arrow Program for Medical Research Education, Sheba Medical Center, Ramat-Gan, Israel, 3Adelson School of Medicine, Ariel University, Ariel, Israel, 4NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada, 5Mila - Quebec AI Institute, Montreal, QC, Canada, 6Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel, 7Queen Square Multiple Sclerosis Centre, Department of Neuroinflammation, University College London, London, United Kingdom, 8Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom, 9Centre for Medical Image Computing (CMIC), Department of Computer Science, Faculty of Engineering Sciences, University College London, London, United Kingdom, 10Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada, 11Centre de recherche du CHU Sainte-Justine, Université de Montréal, Montreal, QC, Canada

Keywords: AI/ML Software, Segmentation

Motivation: 3D visualisation of the spinal cord and vertebrae anatomy is critical for treatment planning and assessment of cord atrophy in neurodegenerative and traumatic diseases.

Goal(s): Develop a fully automatic segmentation of the whole spinal cord, vertebrae and discs.

Approach: The hybrid method combines an nnU-Net with an iterative processing algorithm, using the Spinal Cord Toolbox to conveniently generate ground-truth labels. We used 3D T1w and T2w scans from three different databases.

Results: A validation Dice score of 0.928 was obtained (averaged across contrasts, classes and datasets), suggesting promising segmentation accuracy and capabilities for generalisation given the use of multi-site/multi-vendor datasets.
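For reference, a Dice score of this kind is typically computed per class and then averaged; a minimal sketch (our illustration, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, gt, label):
    """Dice coefficient for one class label in two integer label maps."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

def mean_dice(pred, gt, labels):
    """Average Dice across classes (e.g., cord, vertebrae, discs)."""
    return float(np.mean([dice(pred, gt, l) for l in labels]))
```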

Impact: The fully automatic segmentation of the spine and spinal cord will pinpoint pathologies at the level of specific vertebrae, offering visualization for surgery preparation. It could also refine segmentation of substructures such as multiple sclerosis lesions and tumors, inspiring solutions for related problems.

4848.
Semi-supervised learning for non-invasive radiopathomic mapping of treatment naïve glioma with multi-parametric MRI
Jacob Ellison1,2,3, Nate Tran1,2,3, Paramjot Singh1, Oluwaseun Adegbite1,2,3, Joanna Phillips4,5, Annette Molinaro4, Valentina Pedoia1,2,3, Tracy Luks1, Anny Shai4,5, Devika Nair1, Javier Villanueva-Meyer1,2, Mitchel Berger4, Shawn Hervey-Jumper4, Manish Aghi4, Susan Chang4, and Janine Lupo1,2,3
1Radiology and Biomedical Imaging, UCSF, San Francisco, CA, United States, 2Center for Intelligent Imaging, UCSF, San Francisco, CA, United States, 3Bioengineering, UCSF/UC Berkeley, San Francisco, CA, United States, 4Neurological Surgery, UCSF, San Francisco, CA, United States, 5Pathology, UCSF, San Francisco, CA, United States

Keywords: Diagnosis/Prediction, Machine Learning/Artificial Intelligence

Motivation:  Radiopathomic mapping of glioma could improve standard of care by helping guide surgical resection and subsequent treatment. Most current methods for predicting tumor pathology using MRI neglect intra-tumoral heterogeneity.

Goal(s): We aim to use multi-parametric MRI and deep learning to spatially map pathology for treatment naïve glioma.

Approach: We utilized histopathologically analyzed tissue samples taken during surgical resection, with known coordinates on pre-surgical multi-parametric MRI, together with semi-supervised ensemble networks.

Results: Our model classifies Ki-67 with an AUROC of 0.84, and combined Ki-67 and percent cancerous cells with an AUROC of 0.73. Including physiologic MRI and pretraining on patches of unknown pathology improved performance.

Impact: We performed radiopathomic mapping in patients with newly-diagnosed glioma using presurgical physiological + anatomical MRI and semi-supervised ensemble networks and achieved AUROCs of 0.84 and 0.73 for Ki-67 and combined Ki-67 and % cancerous cells, respectively.

4849.
Pre-operative prediction of cerebral hemodynamics for cognitive dysfunction in adults with Moyamoya Disease based on 3D-pCASL and radiomics
Tingxi Wu1, Xiangyue Zha1, Kan Deng2, Yaohong Deng3, Qin Liu1, and Yikai Xu1
1Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China, 2Philips Healthcare, Guangzhou, China, 3Department of Research & Development, Yizhun Medical AI Co. Ltd, Beijing, China

Keywords: Diagnosis/Prediction, Arterial spin labelling, Moyamoya disease

Motivation: Cognitive function in adult patients with moyamoya disease (MMD) is often impaired because of low cerebral perfusion.

Goal(s): To identify brain regions where low CBF is associated with cognitive dysfunction and to assess the predictive performance of radiomics models for cognitive dysfunction in adults with MMD.

Approach: 3D-pCASL and logistic regression analysis were employed to quantify CBF and explore independent predictors of preoperative cognitive dysfunction, and five different classifiers were used to establish radiomics models.

Results: Cerebral perfusion in the left LOFL, left IPL, left SMA, and left ACG showed significant associations with cognitive impairment. The final combined model had the best predictive performance.

Impact: Hypoperfusion on 3D-pCASL plays a crucial role in the detection of early cognitive impairment in adults with MMD, and the combined model, which incorporates CBF and radiomics features of specific brain regions, showed better performance in predicting cognitive dysfunction.

4850.
Gadolinium contrast-enhanced lesion segmentation in multiple sclerosis: a deep-learning approach.
Martina Greselin1,2,3, Po-Jui Lu1,2,3, Lester Melie-Garcia1,2,3, Mario Ocampo-Pineda1,2,3, Riccardo Galbusera1,2,3, Alessandro Cagol1,2,3,4, Matthias Weigel1,2,3,5, Nina de Oliveira Siebenborn1,2,3,6, Esther Ruberte1,2,3,6, Pascal Benkert7, Stefanie Müller8, Lutz Achtnichts9, Jochen Vehoff8, Giulio Disanto10, Oliver Findling9, Andrew Chan11, Anke Salmen11,12, Caroline Pot13, Claire Bridel14, Chiara Zecca10,15, Tobias Derfuss3, Johanna M. Lieb16, Luca Remonda17, Franca Wagner18, Maria I. Vargas19, Renaud Du Pasquier13, Patrice H. Lalive14, Emanuele Pravatà20, Johannes Weber21, Claudio Gobbi10,15, David Leppert3, Ludwig Kappos1,2,3, Jens Kuhle2,3, and Cristina Granziera1,2,3
1Translational Imaging in Neurology (ThINk) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland, 2Department of Neurology, University Hospital Basel, Basel, Switzerland, 3Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland, 4Department of Health Sciences, University of Genova, Genova, Italy, 5Division of Radiological Physics, Department of Radiology, University Hospital Basel, Basel, Switzerland, 6Medical Image Analysis Center (MIAC), Basel, Switzerland, 7Clinical Trial Unit, Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland, 8Department of Neurology, Cantonal Hospital St. Gallen, St. Gallen, Switzerland, 9Department of Neurology, Cantonal Hospital Aarau, Aarau, Switzerland, 10Neurology Department, Neurocenter of Southern Switzerland, Lugano, Switzerland, 11Department of Neurology, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland, 12Department of Neurology, St. Josef-Hospital, Ruhr-University Bochum, Bochum, Germany, 13Service of Neurology, Department of Clinical Neurosciences, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland, 14Department of Clinical Neurosciences, Division of Neurology, Geneva University Hospitals and Faculty of Medicine, Geneva, Switzerland, 15Faculty of biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland, 16Division of Diagnostic and Interventional Neuroradiology, Clinic for Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, 17Department of Radiology, Cantonal Hospital Aarau, Aarau, Switzerland, 18Department of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland, 19Department of Radiology, Geneva University Hospital and Faculty of Medicine, Geneva, Switzerland, 20Department of Neuroradiology, Neurocenter of Southern Switzerland, Lugano, Switzerland, 21Department of Radiology, Cantonal Hospital St. Gallen, St. Gallen, Switzerland

Keywords: Diagnosis/Prediction, Segmentation

Motivation: Detection of contrast-enhanced lesions (CELs) is fundamental for the diagnosis and monitoring of multiple sclerosis (MS) patients. In the clinical setting, this task is time-consuming and variable, yet only a few studies have reported automatic approaches.

Goal(s): To develop a deep-learning tool to automatically detect and segment CELs in clinical MRI scans from MS patients.

Approach: We implemented a UNet-based network with an adapted sampling strategy to overcome the scarcity of CELs, and accounted for the data imbalance by weighting the training loss function.

Results: Model performance was evaluated across lesion-volume ranges, and the model achieved high performance even for low-volume lesions.

Impact: We developed a deep-learning method fulfilling clinical needs in detecting and segmenting lesions characterized by low volume, low numbers per patient and heterogeneous shapes.

4851.
Can We Distinguish Intra- and Inter-Variability with Log Jacobian Maps Derived from Brain Morphological Deformations Using Pediatric MRI Scans?
Andjela Dimitrijevic1,2, Fanny Dégeilh3, and Benjamin De Leener1,4,5
1NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montréal, Montréal, QC, Canada, 2Research Center, Ste-Justine Hospital University Centre, Montréal, QC, Canada, 3IRISA UMR 6074, EMPENN ERL U-1228, Université de Rennes, CNRS, Inria, Inserm, Rennes, France, 4Research Center, Ste-Justine Hospital University Centre, Montreal, QC, Canada, 5Computer Engineering and Software Engineering, Polytechnique Montréal, Montreal, QC, Canada

Keywords: Other AI/ML, Data Analysis, Modelling

Motivation: Analyzing deformation fields derived from both intra- and inter-individual pairs of T1-weighted images could offer insights into typical and atypical neurodevelopment.

Goal(s): We aimed to fine-tune a 3D CNN to classify intra- and inter-individual variability based on log Jacobian maps from deformation fields of pediatric longitudinal MRI.

Approach: 279 log Jacobian maps of both intra- and inter-individual pairs are extracted using ANTs. A 3D CNN is trained in two ways (overlap and no overlap) for binary classification using 10-fold cross-validation.
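As a minimal sketch of how a log Jacobian determinant map can be obtained from a nonlinear registration, assuming the ANTsPy interface to ANTs and hypothetical file names (the authors' exact pipeline may differ):

```python
import ants

# Register one scan of a pair (intra- or inter-individual) to the other.
fixed = ants.image_read("fixed_T1w.nii.gz")    # hypothetical file names
moving = ants.image_read("moving_T1w.nii.gz")
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")

# Log Jacobian determinant of the forward warp: >0 local expansion, <0 contraction.
log_jac = ants.create_jacobian_determinant_image(
    domain_image=fixed,
    tx=reg["fwdtransforms"][0],  # the nonlinear warp field
    do_log=True,
)
ants.image_write(log_jac, "log_jacobian.nii.gz")
```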

Results: As expected, the overlap scenario yielded higher accuracy and F1 score than the no-overlap scenario; nonetheless, both achieved good results.

Impact: This project's focus on pediatric MRI scans aims to improve understanding of deformations in medical imaging, advancing diagnostic tools. By distinguishing intra- and inter-individual variability using log Jacobian-derived deformation patterns, it subsequently aims to model typical neurodevelopmental trajectories for deviation prediction.

4852.
An appropriate threshold for LGE images using deep learning-based reconstruction in revealing clinically unrecognized myocardial infarction
Weiyin Vivian Liu1, Xuefang Lu2, Yuchen Yan2, and Yunfei Zha2
1GE Healthcare, MR Research China, Beijing, China, 2Department of radiology, Renmin Hospital Wuhan University, Wuhan, China

Keywords: AI/ML Image Reconstruction, Cardiovascular

Motivation: To precisely screen for infarction in patients with unrecognized myocardial infarction (UMI), in the hope that early intervention can reduce adverse cardiac events.

Goal(s): To evaluate deep learning reconstruction-based late gadolinium enhancement (LGEDL) in comparison with conventionally reconstructed LGE (LGEO), and to explore an appropriate threshold method for LGE measurements.

Approach: LGEDL and LGEO images of 77 patients diagnosed with UMI were evaluated for image quality and analyzed for MI areas using different standard-deviation thresholds and a full width at half maximum (FWHM) method.
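In generic terms, n-SD thresholding flags myocardial voxels above the remote-myocardium mean plus n standard deviations, while FWHM thresholds at half of the maximal hyperenhanced signal; a hedged NumPy sketch of both rules (not the authors' implementation):

```python
import numpy as np

def lge_fraction_nsd(myo_signal, remote_signal, n_sd):
    """Fraction of myocardial voxels above mean(remote) + n_sd * std(remote)."""
    thr = remote_signal.mean() + n_sd * remote_signal.std()
    return float((myo_signal > thr).mean())

def lge_fraction_fwhm(myo_signal, scar_signal):
    """Fraction of myocardial voxels above half of the maximal scar signal."""
    thr = 0.5 * scar_signal.max()
    return float((myo_signal > thr).mean())
```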

Results: STRM thresholds of ≥4 SD and ≥3 SD were found to be the best reference thresholds for LGEDL and LGEO, respectively.

Impact: The deep learning-based reconstruction LGE images had better image quality and provided reliable pathological evidence for the detection of UMI. The significantly different Parea obtained with thresholding techniques for LGEDL versus LGEO indicates that the choice of STRM threshold warrants attention.

4853.
MRI-based prediction of cerebral palsy risk in infants aged 6 months to 2 years: a deep learning approach
Zhen Jia1,2,3, Tingting Huang2,3, Man Li4, Yitong Bian2,3, Xianjun Li2,3, Feng Shi4, and Jian Yang1,2,3
1School of Future Technology, Xi'an Jiaotong University, Xi'an, China, 2Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China, 3Shaanxi Engineering Research Center of Computational Imaging and Medical Intelligence, Xi'an, China, 4Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China

Keywords: Diagnosis/Prediction, Brain, Cerebral Palsy

Motivation: Early prediction of cerebral palsy (CP) in infants plays a pivotal role in facilitating tailored rehabilitation treatment.

Goal(s): We hope to achieve early prediction of CP in infants aged 6 months to 2 years old based on MRI and deep learning technology.

Approach: We introduce a novel neural network model, known as the "Cerebral Palsy Brain Constraint Residual Network" (CPBC-Resnet), for the automatic prediction of CP risk based on MRI data.

Results: The CPBC-Resnet model exhibits an impressive receiver operating characteristic area under the curve (AUC) of 0.9521, achieving a sensitivity of 94.12% and a specificity of 100%.

Impact: This study streamlines cerebral palsy (CP) imaging diagnostics, reducing physician training costs, and expanding the reach of CP diagnostic technology. It promotes early CP diagnosis and intervention, particularly in areas with underdeveloped medical standards, contributing to overall child health improvement.

4854.
A multi-scale pyramid residual weight network for medical image fusion
Yiwei Liu1, Shaoze Zhang2, Xihai Zhao1, and Zuo-Xiang He3
1Center for Biomedical Imaging Research, Department of Biomedical Engineering, Tsinghua University, Beijing, China, 2Department of Biomedical Engineering, Tsinghua University, Beijing, China, 3Beijing Tsinghua Changgung Hospital, Beijing, China

Keywords: Diagnosis/Prediction, PET/MR, Artificial Intelligence, Brain

Motivation: At present, multi-modal fused images suffer from weak representation of functional information and considerable noise.

Goal(s): Building on existing techniques, this research aims to increase the retention of functional information and improve the display quality of the fused image.

Approach: In this study, a multi-scale pyramid convolutional neural network model based on residual structure is constructed, which can extract deeper semantic information while retaining shallow context information.

Results: The new convolutional neural network reduces the loss of functional information in the fused image, reduces noise, and improves image quality.

Impact: The multimodal image fusion technology proposed in this paper preserves the texture information of MRI and CT, and the functional information of PET/SPECT at the same time, which makes more dimensions available for clinical diagnosis in the future.

4855.
Classification of Grade II and III Astrocytomas for Multi-modal MRI using Deep Volumetric Attention Networks.
Hamail Ayaz1, Oladosu Oyebisi Oladimeji1, David Tormey2, Ian McLoughlin3, and Saritha Unnikirishnan1
1Computing and Electronics, Atlantic Technological University Sligo, Sligo, Ireland, 2Mechanical & Electronic Engineering, Atlantic Technological University Sligo, Sligo, Ireland, 3Computer Science and Applied Physic, Atlantic Technological University Sligo, Galway, Ireland

Keywords: Diagnosis/Prediction, Brain, Volumetric Attention Network, Deep Learning, Astrocytomas, Glioma, Classification

Motivation: Diagnosis and grading of astrocytoma tumours present considerable challenges. Manual grading is time-consuming and error-prone. Preoperative MRIs are useful, yet deep learning presents challenges due to computing limitations and complex architectures.

Goal(s): This study introduces a novel multimodal MRI classification approach for grade II and III astrocytomas, aiming to improve accuracy, reduce complexity, and address interclass homogeneity via an attention mechanism.

Approach: A single slice from each of eight MRI modalities forms a three-dimensional cube, which is normalized, processed with iPCA, and passed to a deep model with a volumetric attention network.
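A minimal sketch of the dimensionality-reduction step, assuming "iPCA" refers to incremental PCA as implemented in scikit-learn and using placeholder array shapes (our illustration only):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Hypothetical data: one slice per modality stacked into an (8, H, W) cube per case.
cubes = np.random.rand(100, 8, 128, 128)               # placeholder, 100 cases
X = cubes.reshape(len(cubes), -1)                       # flatten each cube to a vector
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)       # per-feature normalization

ipca = IncrementalPCA(n_components=16, batch_size=50)   # fit in mini-batches
X_reduced = ipca.fit_transform(X)                       # compact features for the deep model
```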

Results: The DVA model, using advanced and traditional MRI information, outperforms existing models, achieving an overall accuracy of 77% with five-fold cross-validation.

Impact: The proposed multimodal MRI classification approach enhances astrocytoma diagnosis and grading. The deep volumetric attention model improves accuracy, reduces model complexity, and holds potential for trustworthy use in clinical practice.

4856.
A Two Step Workflow to Support Fully Autonomous MR Scanning in Prostate
Dawei Gui1, Aanchal Mongia2, Chitresh Bhushan3, Jeremy Heinlein1, Kavitha Manickam1, Muhan Shao3, Uday Patil2, and Dattesh Shanbhag2
1GE Healthcare, Waukesha, WI, United States, 2GE Healthcare, Bengaluru, India, 3GE Healthcare, Niskayuna, NY, United States

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence

Motivation: A fully automatic workflow for scan plane prescription is desirable in clinical settings.

Goal(s): Our goal is to demonstrate a deep learning-based MRI scan workflow for fully automated MR scanning in the prostate.

Approach: This new scan workflow identifies anatomical landmarks and scan planes for prostate planning (coverage, FOV, and orientation) from coil sensitivity and 3-plane scout images.

Results: The deep learning-based anatomy recognition showed acceptable performance, with an average location error below 5 mm and a plane orientation error below 10 degrees.

Impact: As no operator interaction is required to complete a full MR prostate scan, this workflow paves the way for fully automated MR scans of the prostate anatomy.

4857.
High-variability synthetic fat-water MRI dataset for testing the robustness of Deep Learning-based reconstruction models
Ganeshkumar M1, Devasenathipathy Kandasamy2, Raju Sharma2, and Amit Mehndiratta1,3
1Centre for Biomedical Engineering, Indian Institute of Technology - Delhi, New Delhi, India, 2Department of Radio Diagnosis, All India Institute of Medical Sciences, New Delhi, India, 3Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India

Keywords: AI/ML Image Reconstruction, Quantitative Imaging, Fat-water separation, PDFF, Deep Learning, Fat Quantification, Physics Informed Deep Learning, Synthetic MRI

Motivation: Deep Learning (DL) models have recently been used for fat-water separation in Multi-Echo MRI (ME-MRI). However, DL models may not always be robust and under-perform when not trained with a large and diverse dataset.

Goal(s): This research proposes high-variability synthetic ME-MRI generated using the biophysical model of fat-water separation as a tool for testing the generalizability and robustness of DL-based fat-water separation models.

Approach: High-variability synthetic ME-MRI was used to evaluate the robustness of the recent state-of-the-art DL-based Ad-Hoc Reconstruction (AHR) method for fat-water separation.
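For context, synthetic multi-echo signals of this kind are usually generated from the standard multi-peak fat-water signal model; a minimal NumPy sketch using typical literature values for the fat peaks (our illustration, not the authors' generator):

```python
import numpy as np

def synth_me_signal(te, water, fat, r2star, psi,
                    fat_ppm=(-3.80, -3.40, -2.60, -1.94, -0.39, 0.60),
                    fat_amps=(0.087, 0.693, 0.128, 0.004, 0.039, 0.048),
                    b0=1.5, gamma_bar=42.58e6):
    """Complex multi-echo voxel signal from the multi-peak fat-water model.

    te: echo times in seconds; water/fat: magnitudes; r2star: 1/s; psi: field map in Hz.
    Fat peak offsets are given in ppm and converted to Hz at the chosen B0.
    """
    te = np.asarray(te, dtype=float)
    freqs_hz = np.array(fat_ppm) * gamma_bar * b0 * 1e-6
    fat_phasor = sum(a * np.exp(2j * np.pi * f * te)
                     for a, f in zip(fat_amps, freqs_hz))
    return (water + fat * fat_phasor) * np.exp(-r2star * te + 2j * np.pi * psi * te)

# Example: 6 echoes at 1.2 ms spacing, proton density fat fraction of 30%.
te = np.arange(1, 7) * 1.2e-3
signal = synth_me_signal(te, water=0.7, fat=0.3, r2star=40.0, psi=30.0)
```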

Results: The AHR method lacked robustness, and the synthetic ME-MRI data can be used effectively to test DL models.

Impact: The fat-water maps obtained by processing the Multi Echo-MRI (ME-MRI) are of diagnostic and prognostic value in many diseases. This study investigates the role of synthetic ME-MRIs with high variability in testing the robustness of Deep Learning-based fat-water separation models.

4858.
Automated Bladder Segmentation of 3D Dynamic MRI For Urodynamic Analysis Using Deep Learning
Labib Shahid1, Juan Pablo Gonzalez-Pereira1, Jennifer Franck1, and Alejandro Roldan-Alzate1
1University of Wisconsin-Madison, Madison, WI, United States

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, Segmentation, Urodynamics

Motivation: Bladder dysfunction is assessed using catheterization, which is invasive and provides insufficient biomechanical information. MRI urodynamics is tedious because the bladder must be segmented over numerous time steps during voiding.

Goal(s): Implement automated segmentation using deep learning for accelerating the workflow of MRI-based urodynamic assessment.

Approach: Train a U-Net using 3D dynamic images and manually segmented masks. Use segmentation time and Dice score to assess the performance of the network.

Results: Images of bladder voiding from five subjects were used to train the network, which can segment one bladder in <3 minutes, compared to 20 minutes for manual segmentation. The Dice score was 0.99, showing excellent performance.

Impact: Urodynamic assessment using MRI is a tedious process due to segmentation of the bladder from 3D dynamic image datasets. We automated segmentation using deep learning to accelerate the workflow. Our automated process reduced time sixfold and produces excellent segmentation.

4859.
Automated Pancreatic Segmentation and Quantitative Calculation Based on nnUnet
Li YingHao1, Wang SuCheng1, Zhu ZhongQi1, Wang HongZhi1, Li RenFeng2, Wang LiHui2, and Lu Qing2
1East China Normal University, Shanghai, China, 2Department of Radiology, Shanghai East Hospital, Tongji University, Shanghai, China

Keywords: Diagnosis/Prediction, Segmentation

Motivation: Pancreatic diseases often exhibit spatial non-uniformity. Achieving automated segmentation of different pancreatic regions and conducting quantitative calculations of volume and fat content can effectively assist physicians in diagnosis and treatment.

Goal(s): To develop a segmentation network that achieves automatic segmentation of the pancreas and performs quantitative calculations of volume and fat content in different regions.

Approach: Sample acquisition was performed using Dixon sequences, and training was conducted using an improved nnUnet network. Additionally, an automated pancreatic segmentation and quantitative calculation method was developed.

Results: With a training dataset consisting of 800 cases, the network achieved a segmentation Dice coefficient of 0.92.

Impact: This approach saves professional physician annotation time for early detection and diagnosis of pancreatic diseases, as well as for quantifying changes before and after pancreatic treatments, and can assist in clinical drug therapy.

4860.
Harmonizing Multicenter Datasets: Enhancing Consistency and Longitudinal Alignment using NLP and Realignment Algorithms
Thomas Campbell Arnold1, Lanhong Yao1, Ben Duffy1, Greg Zaharchuk2, Ryan Chamberlain1, and Ludovic Sibille1
1Subtle Medical, Menlo Park, CA, United States, 2Radiology, Stanford University, Palo Alto, CA, United States

Keywords: AI/ML Software, Machine Learning/Artificial Intelligence, Natural Language Processing

Motivation: Cohesive multicenter imaging datasets are critical for research, yet variability across institutions poses a significant challenge, especially when aggregating retrospective data for longitudinal disease monitoring.

Goal(s): Here, we present a method for harmonizing multicenter data that produces consistent series descriptions and enhances brain alignment between longitudinal time points.

Approach: We employed an NLP pipeline to standardize series descriptions and an automated algorithm to realign images. We applied these tools to ADNI imaging collected across multiple sites, scanners, and time points.
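As a toy illustration of series-description standardization (a simple keyword-rule sketch with hypothetical rules and labels, not the NLP pipeline used here):

```python
import re

# Hypothetical rules mapping free-text series descriptions onto a standard vocabulary.
RULES = [
    (r"mprage|t1.*(gradient|spgr|bravo)|t1w?", "T1w"),
    (r"flair", "T2w_FLAIR"),
    (r"t2w?", "T2w"),
    (r"dwi|diffusion", "DWI"),
]

def standardize(series_description: str) -> str:
    text = series_description.lower().strip()
    for pattern, label in RULES:
        if re.search(pattern, text):
            return label
    return "UNKNOWN"

print(standardize("Accelerated Sagittal MPRAGE"))  # -> T1w
print(standardize("Axial T2 FLAIR"))               # -> T2w_FLAIR
```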

Results: The pipeline consolidated 101 unique series descriptions into 17 standardized descriptions. The alignment algorithm reduced orientation error and improved longitudinal image consistency.

Impact: Our methodology can impact clinical workflows by streamlining multicenter data analysis and enhancing longitudinal disease monitoring. These techniques improve image consistency between time points, which can facilitate disease monitoring and allow radiologists to assess changes in chronic disorders.

4861.
Overcoming the missing data challenge in clinical imaging using CycleGAN based on brain MRI in Multiple Sclerosis
Shayan Shahrokhi1, Rehman Tariq2, Olayinka Oladosu1, and Yunyan Zhang3
1Neuroscience, University of Calgary, CALGARY, AB, Canada, 2Biomedical Engineering, University of Calgary, Calgary, AB, Canada, 3Radiology, University of Calgary, Calgary, AB, Canada

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Clinical MRI datasets are not always comprehensive or consistent, limiting their use for secondary analysis.

Goal(s): To investigate the suitability of a deep learning model, CycleGAN, with optional spectral normalization, for addressing the missing-sequence problem in clinical imaging, as seen in multiple sclerosis (MS).

Approach: Using standard brain MRI from 104 people with MS, we implemented two CycleGAN models, one with and one without spectral normalization, for comparison.
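A minimal sketch of how spectral normalization can be added to a CycleGAN discriminator block, assuming PyTorch (our illustration, not the authors' architecture):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def disc_block(in_ch, out_ch, use_spectral_norm=True):
    """PatchGAN-style block; spectral norm constrains each conv's Lipschitz
    constant, which tends to stabilize GAN training on inconsistent data."""
    conv = nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
    if use_spectral_norm:
        conv = spectral_norm(conv)
    return nn.Sequential(conv, nn.LeakyReLU(0.2, inplace=True))

discriminator = nn.Sequential(
    disc_block(1, 64),      # single-channel MRI slice as input
    disc_block(64, 128),
    disc_block(128, 256),
    nn.Conv2d(256, 1, kernel_size=4, padding=1),  # real/fake patch map
)
```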

Results:  CycleGAN performed competitively in image transformation between T1-weighted and T2-weighted images. Adding spectral normalization appears to improve performance, especially when the quality of training scans is inconsistent.

Impact: A CycleGAN-based model has the potential to generate images that were not acquired because they are not always needed in standard clinical imaging, as seen in brain MRI in MS, where the resulting images can support various secondary analyses, including machine learning.