ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
AI-Empowered Image Segmentation
Digital Poster
AI & Machine Learning
Monday, 06 May 2024
Exhibition Hall (Hall 403)
16:00 -  17:00
Session Number: D-165
No CME/CE Credit

2090.
Computer # 65
Automatic segmentation of the fetal hippocampus using 3D deep convolutional neural networks
Yao Wu1, Kushal Kapse1, Christina Elizabeth Mastracchio1, Hironori Teramoto1, Stephanie Araki1, Patricia Saulino1, Merrick Lynne Kasper1, Nickie Niforatos Andescavage1, Gilbert Vezina1, and Catherine Limperopoulos1
1Children's National Hospital, Washington, DC, United States

Keywords: Analysis/Processing, Segmentation, Fetal hippocampus; Convolutional neural networks

Motivation: The ability to accurately segment the fetal hippocampus is critical to advancing our understanding of the origins of prenatal memory and emotional processing difficulties. Current manual methods are laborious and subjective. 

Goal(s): We aim to automate left and right fetal hippocampal segmentation in 3D MR images. 

Approach: We applied a 3D U-Net based model to automatically segment the left and right fetal hippocampus in 3D MR images. 

Results: Our dataset comprised 131 fetuses with 191 MRI scans. The results demonstrated high accuracy and efficiency, particularly for this challenging-to-segment structure, illuminating the potential of deep convolutional neural networks in this application.

Impact: This study's automatic fetal hippocampal segmentation with deep learning has the potential to advance in utero brain development research and biomarker studies. The potential impact includes improving early diagnostics, in-utero neuro-surveillance, and future targeted therapeutics.
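
Many abstracts in this session evaluate segmentation quality with the Dice similarity coefficient. For reference, a minimal NumPy sketch of the metric on binary masks (function and variable names are illustrative, not taken from any abstract):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); returns 1.0 when both masks are empty.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D example: two overlapping cubes in an 8x8x8 volume.
a = np.zeros((8, 8, 8), dtype=bool)
b = np.zeros((8, 8, 8), dtype=bool)
a[2:6, 2:6, 2:6] = True   # 64 voxels
b[3:7, 3:7, 3:7] = True   # 64 voxels; overlap is the 3x3x3 region -> 27 voxels
print(dice_score(a, b))   # 2*27 / (64+64) = 0.421875
```

A perfect match gives 1.0 and disjoint masks give 0.0, which is why scores such as the 0.96 (whole kidney) and 0.84 (cortex) reported below are directly comparable across abstracts.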

2091.
Computer # 66
MR-guided automatic whole-brain segmentation via deep learning technology based on integrated PET/MRI system
Wenbo Li1, Zhenxing Huang1, Yaping Wu2, Wenjie Zhao1, Yongfeng Yang1,3, Hairong Zheng1,3, Dong Liang1,3, Meiyun Wang2, and Zhanli Hu1,3
1Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China, 3Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China

Keywords: Analysis/Processing, Brain

Motivation: Segmentation of brain tissues plays a significant role in quantifying and visualizing anatomical structures based on PET/MRI systems. 

Goal(s): To achieve whole-brain segmentation that combines structural and functional dual-modality information, which most current unimodal-MRI methods lack.

Approach: We propose a dual-modality segmentation framework for automatic and accurate segmentation of the whole brain.

Results: Experimental results demonstrate that the proposed method incorporates multimodal information while achieving efficient and accurate segmentation, enabling better visualization and quantification.

Impact: We propose a novel dual-modality whole-brain segmentation method based on PET and MR images that is beneficial for enriching the network features. Additionally, our method reduces segmentation time and could be applied to other multimodal data.

2092.
Computer # 67
Scalable and Transferable U-Net for Accurate Simultaneous 3D MRI Segmentation of Gestational Sac and Decidual Tissue in Cesarean Scar Pregnancy
Jie Shi1, Jin Ye2, Le Fu3, Junjun He2, Tianbin Li2, and Jiejun Cheng3
1GE Healthcare, Shanghai, China, 2Shanghai AI Laboratory, Shanghai, China, 3Shanghai First Maternity and Infant Hospital, Shanghai, China

Keywords: Analysis/Processing, Uterus, Large-scale Deep Learning, Cesarean scar pregnancy, gestational sac

Motivation: Cesarean scar pregnancies (CSP) pose significant risks and complications. Accurate segmentation of the gestational sac (GS) and decidual tissue (DEC) in CSP through MRI is crucial for diagnosis, but current methods are limited in effectiveness. 

Goal(s): To introduce a large-scale pre-trained model, the Scalable and Transferable U-Net (STU-Net), for accurate simultaneous segmentation of GS and DEC.

Approach: 151 women with CSP who underwent structural MRI were enrolled. STU-Net was trained and evaluated on these data.

Results: The proposed STU-Net achieved promising segmentation performance.

Impact: The proposed STU-Net enables precise segmentations of GS and DEC, potentially enhancing the diagnostic accuracy of CSP.

2093.
Computer # 68
Contrastive Learning with Multi-Contrast Constraints for Segmentation in Renal Magnetic Resonance Imaging
Aya Ghoul1, Lavanya Umapathy2, Cecilia Zhang3, Petros Martirosian3, Ferdinand Seith3, Sergios Gatidis1,4, and Thomas Küstner1
1Medical Image And Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen, Tuebingen, Germany, 2Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY, United States, 3Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen, Tuebingen, Germany, 4Department of Radiology, Stanford University, Stanford, CA, United States

Keywords: Analysis/Processing, Segmentation, Reproducibility challenge, multi-parametric renal MRI, AI/ML Image segmentation, Kidney

Motivation: Supervised deep learning provides state-of-the-art medical image segmentation when large labeled datasets are available. However, manual segmentation requires prolonged delineation effort.

Goal(s): In response to the 2024 ISMRM Challenge “Repeat it With Me: Reproducibility Team Challenge”, we aim to show the effectiveness of contrastive learning in finding a suitable initialization for segmentation with limited annotations.

Approach: We use a multi-contrast contrastive loss guided by representational constraints to learn discriminating features within multi-parametric renal MR images and fine-tune the pretrained model on segmentation tasks.

Results: Our findings validate that pretraining reduces the required annotation effort by 60% across different imaging sequences and enhances segmentation performance.

Impact: Multi-contrast contrastive learning reduces annotation effort to train deep-learning segmentation models, confirming prior findings in a new cohort, within the 2024 ISMRM Challenge “Repeat it With Me: Reproducibility Team Challenge” and indicating its potential to improve multi-parametric imaging workflows.
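
The abstract's multi-contrast contrastive loss with representational constraints is not spelled out here; as a generic sketch of the underlying idea, the following NumPy implementation of an InfoNCE-style loss treats embeddings of two contrasts of the same anatomy as positive pairs and all other pairings as negatives (names and the temperature value are illustrative assumptions):

```python
import numpy as np

def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.1) -> float:
    """Generic InfoNCE loss over N paired embeddings (rows of z1/z2).

    Row i of z1 and row i of z2 (e.g. two MR contrasts of the same
    subject) form the positive pair; all other rows act as negatives.
    """
    # L2-normalise so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                            # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))           # -log p(positive pair)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Nearly identical pairs give a small loss; unrelated pairs a large one.
print(info_nce(z, z + 0.01 * rng.normal(size=(8, 16))))
```

Pretraining an encoder with such a loss, then fine-tuning on segmentation, is the general recipe the abstract describes for reducing annotation needs.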

2094.
Computer # 69
CPU-based real-time cardiac MRI segmentation using lightweight neural network and knowledge distillation
Yijun Cui1, Craig H Meyer2, and Xue Feng2
1Computer Science, Vrije University Amsterdam, Amsterdam, Netherlands, 2Biomedical Engineering, University of Virginia, Charlottesville, VA, United States

Keywords: Analysis/Processing, Segmentation

Motivation: Cardiac MRI plays an important role in the diagnosis and prognosis of cardiovascular disease. Ideally, clinicians want real-time segmentation on existing CPU devices, but this is challenging due to the high computational burden of current neural networks.

Goal(s): We aim to develop a lightweight network to accelerate cardiac MRI segmentation on CPU devices while maintaining accuracy.

Approach: We used layer-wise knowledge distillation to improve the accuracy of the lightweight network.

Results: Our results showed that the lightweight model supports real-time segmentation on CPU devices while achieving the same level of accuracy as the complex model.

Impact: This research provides a way to significantly reduce the running time of neural networks on CPU devices while maintaining accuracy, using knowledge distillation. It facilitates the deployment of neural networks in clinical practice by eliminating the need for additional hardware.
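
The abstract applies distillation layer-wise; as a simpler illustration of the core mechanism, here is a NumPy sketch of the classic output-level soft-target loss (temperature-softened KL divergence, with the T² gradient scaling from Hinton et al.; all names and values are illustrative, not the authors' implementation):

```python
import numpy as np

def softmax(x: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax along the last axis."""
    e = np.exp((x - x.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 3.0) -> float:
    """KL(teacher || student) on temperature-softened class probabilities.

    Scaled by T^2 so gradient magnitudes stay comparable as T changes.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(T * T * np.mean(kl))

# Toy per-pixel class logits for a complex teacher and a lightweight student.
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.5, 0.1]])
student = np.array([[3.5, 1.2, 0.4], [0.3, 3.0, 0.2]])
print(distillation_loss(student, teacher))  # small: student close to teacher
```

Minimizing this term pulls the lightweight student's predicted distribution toward the teacher's, which is how accuracy is retained despite the smaller network.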

2095.
Computer # 70
Deep Learning Myocardial Segmentation in 3D Whole-Heart Joint T1/T2 mapping: Comparison of nnU-Net and MA-SAM
Carlota Gladys Rivera1, Carlos Velasco2, Alina Hua2, René M. Botnar1,2,3,4,5, and Claudia Prieto1,2,4
1IMPACT, Center of Interventional Medicine for Precision and Advanced Cellular Therapy, Santiago, Chile, 2School of Biomedical Engineering, King’s College London, London, United Kingdom, 3Millennium Institute iHEALTH, Santiago, Chile, 4School of Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile, 5Institute for Biological and Medical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile

Keywords: Analysis/Processing, Segmentation, 3D mapping, joint T1/T2

Motivation: The large amount of data collected from a single 3D whole-heart joint T1/T2 mapping sequence substantially increases the time required to segment and analyze the quantitative maps; automating the segmentation process could therefore significantly reduce this time.

Goal(s): To automate the segmentation of the myocardium using state-of-the-art segmentation networks.

Approach: Two segmentation networks, nnUNet and MA-SAM, are trained and compared for myocardial segmentation of whole-heart joint T1/T2 mapping in healthy subjects and patients.

Results: nnUNET and MA-SAM achieved good quality segmentations with DICE score higher than ~0.856 with smoothed masks. nnU-Net achieved better results in term DICE and required the shortest training time.

Impact: State-of-the-art nnU-Net and MA-SAM networks achieve accurate automatic myocardial segmentation of whole-heart joint T1/T2 mapping. This can significantly reduce the laborious task of manual segmentation and could help accelerate the analysis, and therefore the diagnosis, of myocardium-related diseases.

2096.
Computer # 71
A-Eye: quality control and deep learning segmentation of the complete eye in MRI
Jaime Barranco1,2,3, Hamza Kebiri1,2,3, Óscar Esteban2, Raphael Sznitman4, Sönke Langner5,6, Oliver Stachs7, Adrian Konstantin Luyken7, Philipp Stachs8, Benedetta Franceschiello2,3,9,10,11, and Meritxell Bach Cuadra3,11
1Center for Biomedical Imaging (CIBM), Lausanne, Switzerland, 2Lausanne University Hospital (CHUV), Lausanne, Switzerland, 3University of Lausanne (UNIL), Lausanne, Switzerland, 4ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland, 5Institute for Diagnostic and Interventional Radiology, Pediatric and Neuroradiology, Rostock University Medical Center, Rostock, Germany, 6Department of Diagnostic Radiology and Neuroradiology, University of Greifswald, Greifswald, Germany, 7Department of Ophthalmology, Rostock University Medical Center, Rostock, Germany, 8Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 9HES-SO Valais-Wallis, Sion, Switzerland, 10The Sense Innovation and Research Center, Sion and Lausanne, Switzerland, 11These authors provided equal last-authorship contribution, Lausanne, Switzerland

Keywords: Analysis/Processing, Segmentation, Quality Assessment and Control, Eye, MREye, Ophthalmology, Ocular

Motivation: Reliable large-scale MREye segmentation.

Goal(s): Quality control of eye MRI and deep learning segmentation validation.

Approach: We automatically extract Image Quality Metrics (IQMs) and use them as features to train a model in a supervised framework, with expert rating annotations as targets. Multi-class 3D MREye segmentation is performed for the first time using the deep-learning-based nnU-Net approach.

Results: None of the quality-control models achieved the sensitivity and specificity required for our MREye application. nnU-Net yielded promising outcomes on MREye segmentation tasks, robust across a range of MRI quality levels.

Impact: MREye is no exception to the evidence that insufficient data quality threatens the reliability of analysis outcomes. We pioneer manual and automated quality control for MREye and benchmark deep learning eye segmentation.

2097.
Computer # 72
Lesion Instance Segmentation in Multiple Sclerosis: Assessing the Efficacy of Statistical Lesion Splitting
Maxence Wynen1,2, Pedro Macias Gordaliza3,4, Anna Stölting2, Pietro Maggi2,5, Meritxell Bach Cuadra3,6, and Benoit Macq1
1ICTeam, UCLouvain, Louvain-la-Neuve, Belgium, 2Louvain Neuroinflammation Imaging Lab (NIL), UCLouvain, Brussels, Belgium, 3Center for Biomedical Imaging (CIBM), University of Lausanne, Lausanne, Switzerland, 4Medical Image Analysis Laboratory, Radiology Department, University of Lausanne, Lausanne, Switzerland, 5Department of Neurology, Saint-Luc University Hospital, Brussels, Belgium, 6Medical Image Analysis Laboratory, Radiology Department, Lausanne University Hospital, Lausanne, Switzerland

Keywords: Analysis/Processing, Segmentation, Instance Segmentation

Motivation: Accurate white matter lesion (WML) counting and delineation are crucial for multiple sclerosis (MS) diagnosis and prognosis. Although it is a critical step in clinical research and in automated tools relying on lesion-centered patches, no previous work has studied post-processing methods for transforming voxel-wise segmentations into lesion instance masks in MS.

Goal(s): In this study, we compare the conventional connected components (CC) method to a confluent lesion splitting (CLS) method that has been used but never validated.

Approach: The performance of CC and CLS is evaluated using three common lesion segmentation tools (LSTs): SPM, SAMSEG, and nnU-Net.

Results: CLS lacks generalization, sacrifices specificity for sensitivity, and worsens segmentation quality.

Impact: Our results underscore the need for the development of a novel instance segmentation methodology that accounts for (i) the potential large distance between voxels and the center of the lesions to which they belong and (ii) confluent lesions.
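
The conventional connected-components step this abstract evaluates can be sketched as follows: a plain NumPy/BFS labeling with 6-connectivity that turns a binary voxel-wise segmentation into per-lesion instance masks (real pipelines typically call scipy.ndimage.label instead; this standalone version is for illustration):

```python
import numpy as np
from collections import deque

def label_components(mask: np.ndarray) -> np.ndarray:
    """Label 3D connected components (6-connectivity) in a binary mask.

    Each connected foreground region receives a distinct positive integer,
    giving one instance label per lesion; background stays 0.
    """
    labels = np.zeros(mask.shape, dtype=int)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already part of an earlier lesion
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:                      # flood-fill this component
            x, y, z = queue.popleft()
            for dx, dy, dz in offsets:
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels

vol = np.zeros((5, 5, 5), dtype=bool)
vol[0, 0, 0:2] = True        # lesion 1: two voxels
vol[3:5, 3:5, 3:5] = True    # lesion 2: a separate 2x2x2 block
print(label_components(vol).max())  # → 2 distinct lesions
```

The limitation the abstract targets is visible here: CC merges confluent lesions into a single label, which is exactly what splitting methods such as CLS try to undo.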

2098.
Computer # 73
Robust Rat Liver Lobe Segmentation in Low-SNR T2-weighted Datasets Using a 2.5D Approach
Wei-Chan Hsu1,2,3, Wan-Ting Zhao2, Karl-Heinz Herrmann2, Daniel Güllmar2, Weiwei Wei4, Uta Dahmen4, Kai Lawonn3, and Jürgen Reichenbach2
1Neuroradiology Division, Institute of Diagnostic and Interventional Radiology, Jena University Hospital, Jena, Germany, 2Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena University Hospital, Jena, Germany, 3Visualization and Explorative Data Analysis Group, Faculty of Mathematics and Computer Science, Friedrich Schiller University Jena, Jena, Germany, 4Experimental Transplantation Surgery, Department of General, Visceral and Vascular Surgery, Jena University Hospital, Jena, Germany

Keywords: Analysis/Processing, Segmentation, portal vein ligation (PVL), signal-to-noise ratio (SNR)

Motivation: Our project was motivated by the lack of an efficient way of segmenting ligated and non-ligated liver lobes in portal vein ligation (PVL) experiments.

Goal(s): Our goal was to demonstrate that a 2.5D segmentation approach can achieve precise and robust lobe segmentation in experimental PVL volumetry to reduce manual annotation work.

Approach: We stacked adjacent slices as input and trained a U-Net to segment the rat liver lobes using 15 rat T2-weighted datasets.

Results: An average Dice score of 0.707 was reached in 5-fold cross-validation on 15 datasets, demonstrating robustness in low-SNR MR images with high intensity variation.

Impact: We demonstrate that the 2.5D approach is robust for segmenting liver lobes with varied intensity in low-SNR MR images. The framework can greatly reduce manual annotation work even with limited datasets.
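
The slice-stacking step described in the Approach — feeding each slice together with its neighbours as channels of a 2D network — can be sketched as below (a NumPy version with edge-slice replication; the function name and the `n_adjacent` parameter are illustrative, not from the abstract):

```python
import numpy as np

def make_25d_inputs(volume: np.ndarray, n_adjacent: int = 1) -> np.ndarray:
    """Stack each slice with its neighbours along a channel dimension.

    For a (S, H, W) volume, returns (S, 2*n_adjacent+1, H, W) inputs,
    replicating the edge slices so every slice gets a full stack — the
    usual 2.5D trick of giving a 2D U-Net some through-plane context.
    """
    padded = np.concatenate(
        [volume[:1]] * n_adjacent + [volume] + [volume[-1:]] * n_adjacent,
        axis=0,
    )
    # Shifted views of the padded volume become the channels.
    stacks = [padded[i:i + volume.shape[0]] for i in range(2 * n_adjacent + 1)]
    return np.stack(stacks, axis=1)   # (S, C, H, W)

vol = np.arange(4 * 3 * 3, dtype=float).reshape(4, 3, 3)  # 4 slices of 3x3
x = make_25d_inputs(vol, n_adjacent=1)
print(x.shape)  # → (4, 3, 3, 3): slice index, 3 channels, height, width
```

The centre channel of each stack is the slice being segmented, so existing 2D architectures can be used unchanged while still seeing adjacent anatomy.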

2099.
Computer # 74
Automatic segmentation of spinal cord nerve rootlets
Jan Valosek1,2,3,4, Theo Mathieu1, Raphaëlle Schlienger5, Olivia Kowalczyk6,7, and Julien Cohen-Adad1,2,8,9
1NeuroPoly Lab, Polytechnique Montreal, Montreal, QC, Canada, 2Mila - Quebec AI Institute, Montreal, QC, Canada, 3Department of Neurosurgery, Faculty of Medicine and Dentistry, Palacký University Olomouc, Olomouc, Czech Republic, 4Department of Neurology, Faculty of Medicine and Dentistry, Palacký University Olomouc, Olomouc, Czech Republic, 5Laboratoire de Neurosciences Cognitives (UMR 7291), CNRS – Aix Marseille Université, Marseille, France, 6Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom, 7Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom, 8Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada, 9Centre de Recherche du CHU Sainte-Justine, Université de Montréal, Montreal, QC, Canada

Keywords: Analysis/Processing, Spinal Cord, Deep Learning; Nerve Rootlets; Segmentation

Motivation: Precise identification of spinal nerve rootlets is relevant for studying functional activity in the spinal cord.

Goal(s): Our goal was to develop a deep learning-based tool for the automatic segmentation of spinal nerve rootlets from multi-site T2-w images coupled with a method for the automatic identification of spinal levels.

Approach: Active learning was employed to iteratively train an nnU-Net model for multi-class segmentation of spinal nerve rootlets.

Results: The code/model is available on GitHub and is currently being validated by several laboratories worldwide.

Impact: Currently, most spinal cord fMRI studies use vertebral levels for groupwise registration, which is inaccurate. This new tool enables researchers to identify spinal levels via the automatic segmentation of nerve rootlets, improving fMRI analysis pipeline accuracy.

2100.
Computer # 75
Deep Learning-Based Automated Kidney and Cortex Segmentation from Non-contrast T1-weighted Images
Lianqiu Xiong1,2, Gang Huang2, Shanshan Jiang3, Yi Zhu4, Caixia Zou1, Nini Pan1, and Liuyan Shi1
1Gansu University of Chinese Medicine, Lanzhou, China, 2Department of Radiology, Gansu Provincial Hospital, Lanzhou, China, 3Philips Healthcare, Xi'an, China, 4Philips Healthcare, Beijing, China

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: In the realm of kidney imaging, the precise measurement of kidney volumes, including total, cortical, and medullary volumes, is of significant clinical importance, but manual segmentation is time-consuming and impractical.

Goal(s): To develop a fully automated deep learning-based segmentation method for segmenting the entire kidney and internal structures in MR images.

Approach: Utilized a 3D nnU-Net deep learning model trained on non-contrast-enhanced T1-weighted MR images from 40 volunteers, validated against manual segmentation.

Results: The automated method correlated strongly with manual measurements (Pearson's r > 0.9) and achieved Dice coefficients of 0.96 for the whole kidney and 0.84 for the cortex on the test set.

Impact: This deep learning approach offers rapid, precise, and replicable kidney volume analysis, enhancing both research and clinical care.

2101.
Computer # 76
Semi-supervised segmentation method based on a small number of labeled left atria
Yiwei Liu1, Shaoze Zhang2, Xihai Zhao1, and Zuo-Xiang He3
1Center for Biomedical Imaging Research, Department of Biomedical Engineering, Tsinghua University, Beijing, China, 2Department of Biomedical Engineering, Tsinghua University, Beijing, China, 3Beijing Tsinghua Changgung Hospital, Beijing, China

Keywords: Analysis/Processing, Segmentation, left atria; semi-supervised

Motivation: This study aims to propose a semi-supervised segmentation method and to explore its impact on small models, as well as the impact of the proportion and amount of labeled data on the results.

Goal(s): The goal of this study is to achieve better results with less labeled data.

Approach: The study builds a semi-supervised model and evaluates its performance on labeled left-atrium datasets of different sizes.

Results: The proposed method improves the performance of small models; reducing the proportion of labeled data affects model performance more than reducing the amount of training data.

Impact: This study offers guidance on improving the medical image segmentation performance of small models, increasing the efficiency of manual labeling, and achieving better segmentation results with fewer manual annotations.

2102.
Computer # 77
Deep learning-based comprehensive multi-sequence liver lesion segmentation for accurate tumor burden assessment
Xiaolan Zhang1, Botong Wu1, and Chao Zheng1
1Shukun Technology Co., Ltd., Beijing, China

Keywords: Analysis/Processing, Liver

Motivation: Accurate lesion segmentation is crucial for tumor burden assessment and subsequent patient-specific treatment prediction.

Goal(s): To develop and validate a deep learning-based automated segmentation model for accurate hepatocellular carcinoma (HCC) lesion detection across various imaging sequences.

Approach: A total of 2800 patients with focal liver lesions (FLLs) were included to develop the automated segmentation models. The pipeline involved preprocessing, lesion detection using Mask R-CNN, and lesion segmentation using a 3D U-Net framework.

Results: The 3D U-Net segmentation achieved Dice similarity coefficients ranging from 78.23% to 85.14% and volume ratios from 0.92 to 1.51 across the different sequences.

Impact: The model demonstrates promising potential in accurately segmenting HCC lesions.

2103.
Computer # 78
Fully-Automated Segmentation algorithm of Rectal Cancer and mesorectum on Multiparametric MR
Lili Guo1, Kuang Fu2, and Wenjia Wang3
1Department of MRI Diagnosis, The Second Affiliated Hospital of Harbin Medical University, Harbin, China, 2The Second Affiliated Hospital of Harbin Medical University, Harbin, China, 3MR Research China, GE HealthCare, Beijing, China

Keywords: Analysis/Processing, Cancer

Motivation: To develop a fast solution for segmenting rectal tumors and mesorectal tissue, replacing current manual labeling.

Goal(s): The goal was to develop an automated segmentation model using nnU-Net for fully-automated segmentation of rectal cancer and the mesorectum on MR images.

Approach: The dataset was divided into training and testing sets, and pre-processing steps were conducted to minimize computational burden. The nnU-Net deep learning network was employed to train the model.

Results: The Dice similarity coefficients were 0.91 (training) and 0.88 (testing) for tumor, and 0.93 (training) and 0.89 (testing) for mesorectum.

Impact: This study proposes an automatic deep learning segmentation scheme for the rectal tumor and mesorectum. It can be used to guide the annotation of new medical images, potentially improving the accuracy of rectal cancer treatment response predictions.

2104.
Computer # 79
MRI-Based Deep Learning for Automatic Segmentation of Punctate White Matter Injury in Neonates
Qinli Sun1, Yuwei Xia2, Miaomiao Wang1, Xianjun Li1, Congcong Liu1, Huifang Zhao1, Pengxuan Bai1, Yao Ge1, Feng Shi2, and Jian Yang1
1Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi’an, China, 2Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China

Keywords: Analysis/Processing, Segmentation, Punctate white matter injury

Motivation: Punctate white matter injury (PWMI) in neonates is characterized by small lesions and significant sample variability, posing a challenge for quantification.

Goal(s): We introduce a novel approach based on the 3D nnU-Net framework for semantic segmentation of PWMI in neonatal brain MR images.

Approach: Automatic PWMI segmentation models based on 3D T1WI were developed using V-Net, VB-Net, 2D nnU-Net, and 3D nnU-Net. Automatic lesion localization and quantitative analysis of brain regions were further realized by segmentation with dHCP template brain regions.

Results: The automatic segmentation model demonstrated robust performance, achieving a median Dice Similarity Coefficient of 0.865 on the test set.

Impact: This innovation offers automatic and accurate segmentation of PWMI regions, potentially providing clinicians with a powerful tool for automatic localization, classification model construction, quantitative analysis, and prognostic grading studies of PWMI in neonates.

2105.
Computer # 80
Deep Learning Based Automated Brain Segmentation from Computed Tomography Scans
Won Jun Son1, Sung Jun Ahn2, and Hyunyeol Lee1
1School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, Korea, Republic of, 2Department of Radiology, Yonsei University College of Medicine, Seoul, Korea, Republic of

Keywords: Analysis/Processing, Segmentation

Motivation: While computed tomography (CT) imaging has been actively employed in clinical practice, its limited contrast for brain tissues makes it challenging to achieve precise brain segmentation.

Goal(s): In this study, we developed a deep learning (DL)-based method enabling brain tissue segmentation from CT images.

Approach: MRI-derived tissue labels were provided as ground truth to a DL network in which a U-Net and a VGG16 network interact for model optimization through a perceptual loss.

Results: Results demonstrate the effectiveness of incorporating the perceptual loss into the model, both in preserving image details and in terms of evaluation scores.

Impact: The presented method, upon further validation and optimization, is expected to be a valuable tool for a range of brain imaging studies where MRI is not available.
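
A perceptual loss of the kind this abstract uses compares images in a fixed feature space rather than pixel space. As a toy NumPy sketch (the real method extracts learned VGG16 feature maps; here two hand-picked edge filters stand in for them, and all names are illustrative):

```python
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution — a stand-in for a fixed feature layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(pred: np.ndarray, target: np.ndarray, kernels) -> float:
    """Mean squared error between fixed-filter feature maps of two images."""
    return float(np.mean([np.mean((conv2d(pred, k) - conv2d(target, k)) ** 2)
                          for k in kernels]))

# Hypothetical fixed "feature" filters (horizontal/vertical edge detectors);
# a real perceptual loss would use frozen VGG16 convolutional features.
kernels = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
print(perceptual_loss(img, img, kernels))  # → 0.0 for identical images
```

Because the loss penalizes differences in edge-like responses rather than raw intensities, it encourages the segmentation network to preserve image detail, which matches the results reported above.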