ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
Analysis: Segmentation
Digital Poster
Analysis Methods
Monday, 06 May 2024
Exhibition Hall (Hall 403)
16:00 - 17:00
Session Number: D-179
No CME/CE Credit

Computer #
2121.
97. Open-Source Automatic Whole and Subchondral Bone Segmentation using a Deep-Learning-Based Framework, DOSMA
Ananya Goyal1, Rune Pedersen2, Yael Vainberg3, Bryan Haddock2, Akshay Chaudhari3, Feliks Kogan3, and Anthony Gatti3
1Radiology, Stanford University, Stanford, CA, United States, 2Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark, 3Stanford University, Stanford, CA, United States

Keywords: Segmentation, Bone, Knee MRI, Pipeline, AI

Motivation: [18F]NaF PET imaging is a promising technique to study the role of bone metabolism in joint diseases such as osteoarthritis. To ease the corresponding burden of segmentations and data analysis, we developed an automated pipeline.

Goal(s): We developed a new automated pipeline for knee bone segmentations. We validated the creation of subchondral bone masks for applications to [18F]NaF PET imaging by measuring changes in PET measures.

Approach: We developed an automated segmentation pipeline for bone segmentations and validated the results for PET imaging. 

Results: DOSMA automated segmentations perform highly for bones and show applicability for quantitative musculoskeletal analysis.

Impact: Our automated bone segmentation and PET data analysis pipeline enables a streamlined way of automating PET-MRI processing, including registration, segmentation, quantitative mapping, and visualization of outcome measures.

2122.
98. Automatic Right Ventricular Segmentation of Cardiac Cine Magnetic Resonance Images Based on a Novel Multi-atlas Two-Stage U-net
Lijia Wang1 and Hanlu Su1
1School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China

Keywords: Segmentation, Heart

Motivation: Right ventricular (RV) segmentation is of great significance for the clinical diagnosis of heart disease. However, due to its complex structure, RV segmentation remains challenging.

Goal(s): Fully automatic and accurate segmentation of the right ventricle.

Approach: A new deep atlas network that combines atlas prior knowledge with a Deformable Multi-scale Two-Stage U-net (DMTSU-net) is proposed to extract and fuse multi-scale RV features in cine cardiac magnetic resonance (CCMR) images.

Results: Compared with 8 classical methods, the segmentation results of DMTSU-net are the closest to the gold standard and correlate significantly with it on all evaluation indices in 15 testing datasets.

Impact: The proposed framework integrates prior information of atlases into a deep neural network to achieve accurate segmentation, which is promising for clinical heart disease diagnosis.

2123.
99. High quality brain segmentation from synthetic MPRAGE images at 7T MRI
Marc-Antoine Fortin1, Rüdiger Stirnberg2, Yannik Völzke2, Laurent Lamalle3, Eberhard Pracht2, Daniel Löwen2, Tony Stöcker2,4, and Pål Erik Goa1
1Department of Physics, NTNU, Trondheim, Norway, 2DZNE, Bonn, Germany, 3GIGA-CRC In Vivo Imaging, University of Liège, Liège, Belgium, 4Department of Physics and Astronomy, University of Bonn, Bonn, Germany

Keywords: Segmentation, Brain, High-Field MRI, Data Analysis, Analysis/Processing, Multi-Contrast, Neuro-imaging, synthetic MPRAGE

Motivation: Brain segmentation and multiparameter mapping (MPM) are important for neurodegenerative disease characterization. Acquiring sub-millimeter images increases scan time and patient discomfort. At 7T, B1+ inhomogeneities challenge brain segmentation.

Goal(s): The quality of brain segmentations produced from FastSurferVINN was evaluated and compared between a 7T MPRAGE protocol and two synthetic MPRAGE approaches.

Approach: MPRAGE and MPM images were acquired on 16 subjects across three 7T sites using pTx pulses. MPRAGElike and synMPRAGE images were generated from MPM. All images were segmented with FastSurferVINN.

Results: FastSurferVINN seems to be a robust technique to segment sub-millimeter 7T images. MPRAGElike generated superior segmentations compared to synMPRAGE. 

Impact: Neuroscientists with Multi-Parameter Mapping sequences in their imaging protocol can approximate an MPRAGElike image (preferable over synthetic synMPRAGE). Acquiring an MPRAGE sequence solely for brain segmentation can thus be avoided, saving a considerable amount of scan time.
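As context for the synthetic contrasts discussed in this abstract: a common textbook approximation of inversion-recovery (MPRAGE-like) signal can be synthesized from quantitative maps. The sketch below is illustrative only; the `ti` value and map names are assumptions, not the authors' MPM-based pipeline.

```python
import numpy as np

def synth_ir(pd_map, t1_map, ti=1100.0):
    """Approximate inversion-recovery signal from proton-density (PD)
    and T1 maps: S = PD * (1 - 2 * exp(-TI / T1)).
    Magnitude is usually taken for a synthetic MPRAGE-like image."""
    return pd_map * (1.0 - 2.0 * np.exp(-ti / np.asarray(t1_map, dtype=float)))
```

Tissue with short T1 (e.g., white matter) recovers toward a bright signal, while long-T1 tissue stays near or below the null point, reproducing the familiar T1-weighted contrast.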

2124.
100. MRF-Synth: An Image Generation Framework for Learning Contrast-Invariant Brain Segmentation
Richard James Adams1, Walter Zhao1, Jessie E.P. Sun2, Siyuan Hu1, Dan Ma1, and Pew-Thian Yap3
1Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States, 2Radiology, Case Western Reserve University, Cleveland, OH, United States, 3Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

Keywords: Segmentation, Brain, Magnetic Resonance Fingerprinting, Contrast-Invariant, Age-Agnostic

Motivation: Intensity-based brain segmentation methods face challenges with generalizability, as they are susceptible to site, age, and contrast variations.

Goal(s): Develop an efficient, unified framework to train segmentation networks that are insensitive to contrast variations.

Approach: MRF sequences encode image time series that include both common and uncommon contrasts. We develop MRF-Synth, a framework to generate contrasts from MRF quantitative maps for training and evaluating contrast-invariant networks.

Results: We show that a segmentation U-Net trained with MRF-Synth yields highly consistent results across contrasts, vendors, and ages (DICE > 0.86 in adults).

Impact: MRF-Synth represents an efficient, generalizable framework for developing and evaluating contrast-invariant segmentation networks. We demonstrate the utility of MRF-Synth in training a U-Net to segment healthy MR brain images into 18 anatomical regions regardless of contrast, scanner, vendor, or age.

2125.
101. Efficient Multi-modality MRI Fusion Based on Superpixel Method for Semi-supervised Brain Tumor Segmentation
Yifan Deng1, Sa Xiao1, Zhen Chen1, Cheng Wang1, and Xin Zhou1
1State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan National Laboratory for Optoelectronics, Wuhan, China

Keywords: Segmentation, Brain

Motivation: Multi-modal images provide complementary information that can improve automatic MRI segmentation performance. However, most multi-modal methods require long training times due to complex network structures and the large amounts of multi-modal data involved. Furthermore, obtaining large quantities of labeled data is time-consuming and laborious.

Goal(s): To achieve efficient and high-quality segmentation using only a small amount of labeled data.

Approach: We propose an efficient, training-free multi-modal fusion strategy based on a superpixel method for semi-supervised brain tumor segmentation.

Results: Experiments on the BraTS18 dataset show that our method achieves superior overall performance and can greatly reduce the time cost for doctors.

Impact: The strategy of using a superpixel method to accelerate network training can assist in the timely diagnosis and treatment of diseases in the clinic, and provides a new way to simplify multi-modal information fusion.

2126.
102. Visualization of Thalamic Subnuclei using DiMANI (Diffusion MRI for Anatomical Nuclei Imaging)
Remi Patriat1, Tara Palnitkar1, Jayashree Chandrasekaran1, Karianne Sretavan Wong1,2, Henry Braun1, Essa Yacoub1, Robert A McGovern III3, Joshua E Aman4, Scott E Cooper4, Jerrold L Vitek4, and Noam Harel1,3
1CMRR / Radiology, University of Minnesota, Minneapolis, MN, United States, 2Graduate Program in Neuroscience, University of Minnesota, Minneapolis, MN, United States, 3Neurosurgery, University of Minnesota, Minnesota, MN, United States, 4Neurology, University of Minnesota, Minneapolis, MN, United States

Keywords: MR-Guided Interventions, Diffusion/other diffusion imaging techniques

Motivation: Lack of direct visualization methods for thalamic subnuclei has resulted in variable patient outcomes and repeat surgeries in deep brain stimulation (DBS).

Goal(s): To generate images with sufficient intra-thalamic contrast to visualize subnuclei for surgical intervention.

Approach: We introduce DiMANI, an image obtained from combining diffusion-weighted volumes. We compared DiMANI to atlases as well as intra- and post-operative clinical DBS data.

Results: DiMANI showed strong correspondence to anatomical organization from atlases, was highly reproducible, and was observable at both 3T and 7T. Clinical data from six DBS patients corroborated DiMANI’s ability to identify the motor and sensory thalamus locations.

Impact: Visualization of thalamic subnuclei is now achievable using DiMANI, enabling direct targeting for DBS and MR-guided focused ultrasound procedures. This will provide immediate impact by enhancing clinical workflow efficiency, improving patient outcomes, and advancing our understanding of brain networks.

2127.
103. Point-Guided 3D U-SAM: MRI Abdominal Segmentation Model using 3D Interactive U-Net and Segment Anything Model
Yuta Sugimoto1, Naoto Fujita1, Daiki Tamada2, Satoshi Funayama3, Shintaro Ichikawa3, Satoshi Goshima3, and Yasuhiko Terada1
1Graduate School of Science and Technology, University of Tsukuba, Tsukuba, Japan, 2Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States, 3Department of Radiology, Hamamatsu University School of Medicine, Hamamatsu, Japan

Keywords: Segmentation, Deep Learning, Digestive

Motivation: In abdominal MRI segmentation tasks, the need for high-quality support information for Segment Anything Model (SAM)-driven segmentation in limited data scenarios has motivated the search for an architecture with high performance and minimal support information requirements.

Goal(s): Our objective is to design a user-friendly architecture for segmentation, focusing on using only support information within the region of interest. We aim to verify its high-performance capabilities.

Approach: We developed Point-Guided 3D U-SAM, combining SAM and 3D U-Net with point-based support input. We compared its segmentation performance with existing methods.

Results: The model excelled in abdominal MRI segmentation across various contrast levels, ensuring high performance.

Impact: Point-Guided 3D U-SAM, which combines Segment Anything Model (SAM) and 3D U-Net with point-based inputs, would advance semi-automated organ segmentation in abdominal imaging, particularly where contrast is poor (e.g., MRCP), significantly reducing manual effort in clinical segmentation.

2128.
104. Spatial-temporal segmentation of cine cardiac MRI time-series
Yingqi Qin1, Fumin Guo1, and Xin Zhou2
1Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China, 2State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China

Keywords: Segmentation

Motivation: Cine cardiac MRI provides a way to quantify additional cardiac indices beyond ejection fraction, including ejection and filling rates, myocardial wall motion, and strain; segmentation of all temporal phases is required.

Goal(s): To develop an approach to segmenting images in all cardiac phases in cine MRI.

Approach: A U-net and a recurrent neural network were integrated to exploit the spatial-temporal information in cine time-series. 100 and 50 subjects labeled at the end-systole and end-diastole phases were used for network training and testing, respectively.

Results: The use of spatial-temporal information substantially improved segmentation accuracy, and the algorithm-derived cardiac indices were strongly correlated with manual measurements.

Impact: The proposed method made effective use of the spatial-temporal information in a cine time-series and yielded highly accurate and precise segmentation and cardiac functional measurements, suggesting the utility of our approach for clinical cardiac patient care.

2129.
105. Evaluation of the fairness and effectiveness of nnU-Net on multi-organ segmentation
Qing Li1, Yan Li2, Longyu Sun1, Mengting Sun1, Meng Liu1, Xumei Hu1, Xinyu Zhang1, Xueqin Xia3, Shuo Wang4, Yinghua Chu5, and Chengyan Wang1
1Human Phenome Institute, Fudan University, Shanghai, China, 2Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China, 3Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China, 4Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China, 5Siemens Healthineers Ltd, Shanghai, China

Keywords: Visualization, Fairness, Bias

Motivation: A systematic analysis of segmentation effectiveness with respect to fairness helps enhance the effectiveness of artificial intelligence (AI) models, which has not been done before.

Goal(s): This study aims to statistically characterize the relation between segmentation effectiveness and age, gender, and anatomical region.

Approach: The nnU-Net model is used for organ segmentation; the Dice coefficient (DICE) was computed to evaluate the relation between effectiveness and age and gender, and heatmaps were used to visualize the spatial error distribution across anatomical regions.

Results: The results demonstrate variations in nnU-Net's effectiveness across subgroups, highlighting the significance of attention mechanisms for segmentation model enhancement.

Impact: This study comprehensively evaluated the fairness and effectiveness of nnU-Net across multiple organs within the body. An analysis was conducted to investigate the relationship between segmentation errors and age, gender, and anatomical region for organ segmentation.
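The subgroup analysis in this abstract is built on the Dice coefficient; as a point of reference (an illustrative sketch, not the study's code), it can be computed from binary masks as:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

Per-subgroup fairness summaries would then aggregate this score over subjects grouped by age or gender.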

2130.
106. Clinical Validation of the InnerEye Hippocampal Segmentation Tool
Anna Schroder1, Hamza A. Salhab2,3, James Moggridge2,3, Caroline Micallef2, Jiaming Wu1,4, Sjoerd Vos5, Melissa Bristow6, Fernando Pérez-García6, Javier Alvarez-Valle6, Tarek A. Yousry2,3, John S. Thornton2,3, Frederik Barkhof1,3,4,7, Daniel C. Alexander1, and Matthew Grech-Sollars1,2
1Centre for Medical Image Computing, Department of Computer Science, University College London, London, United Kingdom, 2Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, United Kingdom, 3Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, London, United Kingdom, 4Department of Medical Physics & Biomedical Engineering, University College London, London, United Kingdom, 5Centre for Microscopy, Characterisation & Analysis, University of Western Australia, Perth, Australia, 6Health Futures, Microsoft Research Cambridge, Cambridge, United Kingdom, 7Department of Radiology and Nuclear Medicine, Amsterdam Neuroscience, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, Netherlands

Keywords: Segmentation, Machine Learning/Artificial Intelligence

Motivation: Accurate segmentation of the hippocampus provides an important biomarker in neurodegenerative diseases, e.g., Alzheimer’s disease. However, currently available tools are not robust to disease-related atrophy.

Goal(s): We aim to demonstrate the accuracy of our InnerEye hippocampal segmentation tool on clinical data.

Approach: We fine-tuned our existing model on manually segmented data and externally validated it on a clinical dataset of patients referred to a dementia clinic. We compared our model to three commonly used segmentation tools.

Results: Our model provides significant improvements over currently available tools when tested on an external, clinical dataset.

Impact: The hippocampal segmentation model presented in this work provides significant improvements over currently available tools in an external, clinical dataset. Segmentation performance was increased, while run-times were decreased. These results support the tool as a viable alternative in clinical settings.

2131.
107. Clustering for Anatomical Quantification and Evaluation (CAQE): Unsupervised brain segmentation with quantitative MRI
Sharada Balaji1, Marek Obajtek1, Irene M. Vavasour1, Adam Dvorak1, Guillaume Gilbert2, Roger Tam1, Cornelia Laule1,3, David K.B. Li1, Anthony Traboulsee1, Alex L. MacKay1, and Shannon H. Kolind1
1University of British Columbia, Vancouver, BC, Canada, 2MR Clinical Science, Philips Healthcare Canada, Mississauga, ON, Canada, 3International Collaboration on Repair Discoveries, Vancouver, BC, Canada

Keywords: Microstructure

Motivation: Traditional image segmentation uses conventional MRI to classify tissue types based on image intensity. However, segmenting using microstructural data may provide more specific classification of tissue.

Goal(s): Cluster and segment brain tissue using quantitative MRI measures, to classify tissue based only on microstructural features without spatial input.

Approach: Measures denoting myelin content, anisotropy and tissue heterogeneity were clustered and used to label test datasets based on microstructural features alone, using an unsupervised approach.

Results: Segmentations were more informative than traditional segmentation and consistent across healthy subjects. Differences between clusters reflect microstructural feature differences that would otherwise be invisible with conventional imaging.

Impact: The CAQE framework can be used for segmentation of tissue based on quantitative measures alone, providing better delineation of regions based on microstructural features. This allows for future comparisons between healthy and damaged tissue, to visualise and interpret pathological changes.

2132.
108. Deep learning segmentation of I-125 brachytherapy seeds in prostate cancer patients based on synthetically generated multi-echo training data
Lion H. Mücke1, Johanna Grigo2,3, Andre Karius2,3, Christoph Bert2,3, Michael Uder1, Frederik B. Laun1, and Jannis Hanspach1
1Institute of Radiology, University Hospital Erlangen, Erlangen, Germany, 2Department of Radiation Oncology, University Hospital Erlangen, Erlangen, Germany, 3Comprehensive Cancer Center Erlangen-EMN, Erlangen, Germany

Keywords: Electromagnetic Tissue Properties, Machine Learning/Artificial Intelligence, Brachytherapy, Segmentation, Susceptibility

Motivation: Deep learning (DL) networks trained with synthetically generated data enable the visualization of I-125 brachytherapy seeds in prostate cancer patients in quantitative susceptibility mapping (QSM), possibly eliminating the need for a CT scan in the future.

Goal(s): The goal was to automatically detect and segment I-125 seeds in vivo by applying a DL network directly (without QSM) to gradient-echo (GRE) sequence data.

Approach: A U-Net was trained with synthetically generated multi-echo GRE magnitude and phase input data and corresponding target seed segmentations.

Results: The seed segmentations were of high visual quality and showed good agreement (85% detection rate) with corresponding CT-scans in five prostate cancer patients.

Impact: This work proposes a fast and completely automatic MRI-only based workflow for segmenting in-vivo brachytherapy seeds in prostate cancer patients. This approach has the potential to eliminate the need for a CT-scan, thereby reducing the use of ionizing radiation.

2133.
109. Prompt guided multi-organ segmentation of the total body
Meiyuan Wen1, Yunlong Gao1, Yaping Wu2, Zhenxing Huang1, Wenbo Li1, Wenjie Zhao1, Yongfeng Yang1, Hairong Zheng1, Dong Liang1, Meiyun Wang2, and Zhanli Hu1
1Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, CAS, Shenzhen, China, 2Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, China

Keywords: Segmentation, Whole Body

Motivation: Numerous studies have made significant strides in the field of medical image segmentation. However, most studies have focused on specific localized regions rather than addressing the challenge of unified segmentation across the entire human body.

Goal(s): To advance multi-organ segmentation so as to improve the efficiency and accuracy of disease diagnosis and treatment.

Approach: In this paper, we present a prompt-guided multi-organ segmentation model for total-body images, which can be adapted to CT, PET, and MRI modalities.

Results: Our extensive experiments demonstrate the superior performance of our model in accurately segmenting 21 organs. 

Impact: Our research leverages the power of prompts to tackle the challenge of multi-organ segmentation. It has potentially wide applications in the fields of CT, MRI and PET, enabling the simultaneous segmentation of multiple organs and images from diverse modalities.

2134.
110. Thresholding to remove fluid from FreeSurfer segmentations of quantitative brain maps – a simple solution to a painful problem
Simran Kukran1,2, Joely Smith1,3, Ben Statton4, Luke Dixon3,5, Stefanie Thust6,7,8, Iulius Dragonu9, Sarah Cardona3, Mary Finnegan3, Rebecca Quest1,3, Neal Bangerter1,10, Dow Mu Koh2, Peter Lally1, Matthew Orton2, and Matthew Grech-Sollars11,12
1Bioengineering, Imperial College London, London, United Kingdom, 2Radiotherapy and Imaging, Institute of Cancer Research, London, United Kingdom, 3Department of Imaging, Imperial College Healthcare NHS Trust, London, United Kingdom, 4London Institute of Medical Sciences, Medical Research Council, London, United Kingdom, 5Surgery and Cancer, Imperial College London, London, United Kingdom, 6Precision Imaging Beacon, School of Medicine, University of Nottingham, Nottingham, United Kingdom, 7School of Physics and Astronomy, University of Nottingham, Nottingham, United Kingdom, 8Dept. of Brain Rehabilitation and Repair, UCL Institute of Neurology, London, United Kingdom, 9Research and Collaborations UK, Siemens Healthcare Ltd, Camberley, United Kingdom, 10Computer and Electrical Engineering, Boise State University, Boise, ID, United States, 11Centre for Medical Imaging and Computing, UCL, London, United Kingdom, 12University College London Hospitals NHS Foundation Trust, London, United Kingdom

Keywords: Segmentation, MR Fingerprinting

Motivation: Segmentation of quantitative maps is required to compute anatomical region mean relaxation times. FreeSurfer is designed to automatically segment conventional weighted images. Voxels of CSF contaminate tissue segmentations in some healthy volunteers, skewing the mean T1 or T2.

Goal(s): To remove contaminant fluid from tissue regions of interest in T1 and T2 maps.

Approach: The mean T1 or T2 of CSF in the ventricles is used as a maximum threshold within brain tissue.

Results: Thresholding prior to 2D erosion of masks removes contaminant CSF and prevents erroneous variation across 10 healthy volunteers.

Impact: A simple threshold-based correction of FreeSurfer segmentation applied to quantitative T1 and T2 maps. The same threshold can be applied to all subjects in one step, eliminating the need for laborious manual adjustment.
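A minimal sketch of the thresholding step this abstract describes (array names are hypothetical; not the authors' implementation): the mean T1 of ventricular CSF caps which voxels survive in a tissue mask, before any mask erosion:

```python
import numpy as np

def remove_csf(t1_map, tissue_mask, ventricle_mask):
    """Drop CSF-contaminated voxels from a tissue mask, using the
    mean T1 of ventricular CSF as an upper threshold."""
    csf_threshold = t1_map[ventricle_mask].mean()
    return tissue_mask & (t1_map < csf_threshold)
```

Because the threshold is derived from each subject's own ventricles, the same rule can be applied to all subjects in one step.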

2135.
111. The use of multiparametric brain tumor segmentation for investigating 1H MRSI-detected, fasting-induced ketone body accumulation in glioma
Seyma Alcicek1,2,3,4, Iris Divé2,3,4,5, Ulrich Pilatus1, Vincent Prinz6, Joachim P. Steinbach2,3,4,5, Marie-Thérèse Forster6, Elke Hattingen1,2,3,4, Michael W. Ronellenfitsch2,3,4,5, and Katharina J. Wenger1,2,3,4
1Institute of Neuroradiology, Goethe University Frankfurt, University Hospital, Frankfurt am Main, Germany, 2University Cancer Center Frankfurt (UCT), Frankfurt am Main, Germany, 3Frankfurt Cancer Institute (FCI), Frankfurt am Main, Germany, 4German Cancer Research Center (DKFZ) Heidelberg, Germany and German Cancer Consortium (DKTK), Partner Site Frankfurt/Mainz, Frankfurt am Main, Germany, 5Dr. Senckenberg Institute of Neurooncology, Goethe University Frankfurt, University Hospital, Frankfurt am Main, Germany, 6Department of Neurosurgery, Goethe University Frankfurt, University Hospital, Frankfurt am Main, Germany

Keywords: Cancer, Tumor, Nutritional Intervention, MR spectroscopy, Tumor segmentation

Motivation: The evaluation of MR spectroscopic imaging (MRSI) findings with multiparametric brain tumor segmentation might facilitate the understanding of intervention-induced alterations in tumor metabolism at the individual patient/tumor level.

Goal(s): In this study, we used this approach to elucidate the glioma metabolism under nutritional intervention.

Approach: The concentrations of ketone bodies in brain tumors after 72-hour fasting were correlated with the volumes of glioma sub-regions for 13 brain tumor patients.

Results: The outcome indicates that the accumulation of ketone bodies in solid tumors and necrotic areas after fasting might result from neovascularization and blood-brain-barrier compromise.

Impact: Here, we report on the validation of a dedicated, multi-voxel MRSI protocol with fully automated multiparametric segmentation of glioma sub-regions for monitoring fasting-induced changes. 

2136.
112. Impact of Fiber Segmentation Methods on Fiber Quantification Reliability
Lingyu Li1,2, QiQi Tong3, Silei Zhu4, Chenxi Lu2,5, and Hongjian He2,6,7
1Polytechnic Institute, Zhejiang University, Hangzhou, China, 2Center for Brain Imaging Science and Technology, Zhejiang University, Hangzhou, China, 3Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China, 4Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom, 5College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China, 6School of Physics, Zhejiang University, Hangzhou, China, 7State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China

Keywords: Data Processing, Data Analysis, fiber quantification, tract segmentation, reliability

Motivation: The reliability of fiber quantification depends on the quality of the fiber tract reconstruction, which is influenced by the method used for fiber segmentation.

Goal(s): Our objective was to assess the reliability of different fiber segmentation methods and determine the most suitable strategy for the specific bundle.

Approach: We compared the distributions of the intra-class correlation coefficient (ICC) for different measurements and tracts across three widely used fiber segmentation methods. This analysis was conducted using a traveling-subject dataset. The results are presented along with examples, followed by a discussion of the underlying reasons.

Results: We summarized the advantageous strategies for the main tracts.

Impact: This strategy aims to enhance the stability of study results by facilitating the selection of more efficient segmentation methods for conducting fiber-specific quantification in the future.
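For reference, the ICC used in this reliability analysis comes in several variants; a minimal one-way ICC(1) sketch (illustrative only, not the study's code) for a subjects-by-repeated-measurements array:

```python
import numpy as np

def icc_oneway(data: np.ndarray) -> float:
    """One-way random-effects ICC(1) for an (n_subjects, k_repeats) array."""
    n, k = data.shape
    grand_mean = data.mean()
    subject_means = data.mean(axis=1)
    # Between-subject and within-subject mean squares
    msb = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((data - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly reproducible repeated measurements yield ICC = 1, while values near or below zero indicate that within-subject (scan-rescan) variation dominates.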