ISSN# 1545-4428 | Published date: 19 April, 2024
At-A-Glance Session Detail
Curating Synthetic Imaging Data
Digital Poster
AI & Machine Learning
Monday, 06 May 2024
Exhibition Hall (Hall 403)
17:00 -  18:00
Session Number: D-163
No CME/CE Credit

2233. (Computer # 49)
Synthesizing high-resolution brain MR T1-MPRAGE-like images from low-dose CT
Yasheng Chen1, Chunwei Ying2, Tongyao Wang2, Andria Ford1, Jin-Moo Lee1, Rajat Dhar1, and Hongyu An2
1Neurology, Washington University School of Medicine, St. Louis, MO, United States, 2Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, United States

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, image conversion, MR to CT conversion, image synthesis

Motivation: Using deep learning to improve the soft-tissue contrast of low-dose brain CT to a level similar to MRI.

Goal(s): To convert low-dose brain CT into high-resolution T1-MPRAGE-like images.

Approach: A ResUNet-based deep learning approach is developed to learn the complex transformation from low-dose brain CT to its corresponding T1-MPRAGE.

Results: With the proposed approach, we obtained high-resolution MR T1-MPRAGE-like images with superior soft-tissue contrasts from noisy low-dose brain CT images.

Impact: By translating brain CT into T1-MPRAGE-like images, our approach provides superior soft-tissue contrast from low-dose brain CT images. Our method would allow tissue-specific analysis using noisy non-contrast CT.
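The abstract does not detail the ResUNet architecture, but its defining feature, the residual (skip) connection, can be sketched in a few lines. The `residual_block` function and its per-voxel linear map are hypothetical stand-ins for the actual convolutional layers (a minimal sketch, assuming numpy):

```python
import numpy as np

def residual_block(x, weight, bias):
    # Illustrative residual unit: the block learns a correction f(x) that is
    # added back to its input, so the network only has to model the
    # *difference* between CT and MR appearance. A per-voxel linear map
    # stands in for the real convolutional layers.
    f = np.maximum(0.0, x * weight + bias)  # ReLU(linear(x))
    return x + f                            # skip connection

ct_patch = np.random.rand(4, 4)
# With zero weights the block reduces to the identity, which is what makes
# deep residual stacks easy to train.
assert np.allclose(residual_block(ct_patch, 0.0, 0.0), ct_patch)
```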

2234. (Computer # 50)
Meta-Learning Guided Pelvis MR to CT Translation: Addressing Cross-Modality Misalignments
Daniel Kim1, Jae-Hun Lee1, Yoseob Han2, Kanghyun Ryu3, and Dong-Hyun Kim1
1Yonsei University, Seoul, Korea, Republic of, 2Soongsil University, Seoul, Korea, Republic of, 3Korea Institute of Science and Technology, Seoul, Korea, Republic of

Keywords: AI/ML Software, Body, Pelvis

Motivation: In radiation therapy planning, both MR and CT are essential, but there is a potential risk of radiation exposure from CT. To address this problem, MR to CT translation could be an important solution.

Goal(s): In cross-modality translations like MR to CT, misalignment is a significant challenge. The goal is to develop a method that effectively learns to handle this misalignment.

Approach: We propose a method that uses meta-learning to focus on reliable regions and employs loss functions and a network architecture suited to misalignment.

Results: Our method surpassed existing GAN-based methods in quantitative evaluations, particularly in the reconstruction of bone structures.

Impact: Meta-learning can be effectively applied to the misalignment problem, helping preserve fine details and bone structures in MR-to-CT translation. The approach is also broadly applicable to other cross-modality translations.
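The abstract does not give the exact weighting scheme, but the idea of concentrating the loss on reliable (well-aligned) regions can be illustrated with a reliability-weighted L1 loss; `weighted_l1` and the example maps below are hypothetical:

```python
import numpy as np

def weighted_l1(pred, target, reliability):
    # L1 loss where each pixel's contribution is scaled by a reliability
    # map: well-aligned regions get weight ~1, misaligned regions ~0.
    w = reliability / (reliability.sum() + 1e-8)
    return float(np.sum(w * np.abs(pred - target)))

pred = np.array([[1.0, 5.0], [2.0, 2.0]])
target = np.array([[1.0, 0.0], [2.0, 2.0]])   # one badly misaligned pixel
uniform = np.ones((2, 2))                     # ordinary (unweighted) L1
masked = np.array([[1.0, 0.0], [1.0, 1.0]])   # down-weight unreliable pixel
# Down-weighting the misaligned pixel keeps it from dominating the loss.
assert weighted_l1(pred, target, masked) < weighted_l1(pred, target, uniform)
```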

2235. (Computer # 51)
Comparison of Variational Autoencoders for Magnetic Resonance Spectroscopy Data Synthesis
Dennis van de Sande1, Justin Kleinveld1, Sina Amirrajab1, Mitko Veta1, and Marcel Breeuwer1
1Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands

Keywords: Other AI/ML, Modelling, Generative modelling

Motivation: The scarcity of MRS datasets hinders deep learning model development, often forcing reliance on simulations that struggle to replicate in-vivo characteristics.

Goal(s): The goal is to evaluate different variational autoencoders to enhance MRS datasets and improve deep learning model development for MRS.

Approach: This study assesses different models for generating MRS data. Additionally, interpolation between multiple pairs of real spectra is used to improve the diversity of the synthetic data.

Results: One model shows great potential in terms of reconstruction quality and generative performance. The incorporation of interpolation further enhances the diversity in synthetic spectra, particularly in relation to residual water signals.

Impact: The demonstrated potential of variational autoencoders for MRS data generation will help produce synthetic data that closely resembles in-vivo data, supporting the development of other deep learning models for MRS applications.
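Interpolating between pairs of real spectra is done in the VAE's latent space: decoding points along the line between two latent codes yields new intermediate spectra. A minimal sketch (the latent dimensionality and the two codes are placeholders; the decoder is omitted):

```python
import numpy as np

def interpolate_latents(z1, z2, alpha):
    # Convex combination of two latent codes; decoding intermediate points
    # produces synthetic spectra that blend characteristics of both inputs.
    return (1.0 - alpha) * z1 + alpha * z2

z_a, z_b = np.zeros(8), np.ones(8)   # latent codes of two real spectra
steps = [interpolate_latents(z_a, z_b, a) for a in (0.25, 0.5, 0.75)]
assert np.allclose(steps[1], 0.5)    # midpoint lies halfway between codes
```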

2236. (Computer # 52)
Prospective Quality Metric Assessment of Synthetic CT Images via a Learnable Framework
Sandeep Kaushik1,2, Cristina Cozzini1, Florian Wiesinger1, Ponnam Mahendhar Goud3, Bjoern Menze2, and Dattesh Shanbhag3
1GE HealthCare, Munich, Germany, 2University of Zurich, Zurich, Switzerland, 3GE HealthCare, Bangalore, India

Keywords: Other AI/ML, Data Analysis, Predictive deep learning, MLOps

Motivation: Prospective quality assessment of synthetic CT images requires predicting an accuracy metric. Such a score can indicate the confidence of a model's prediction or serve as feedback on model performance.

Goal(s): To predict the mean absolute error (MAE) of a synthetic CT image without a reference CT image.

Approach: A deep learning framework is trained to predict the MAE metric of a given image.

Results: The proposed QMetNet model learns to predict the MAE metric on unseen data in a reliable manner without a reference image. 

Impact: This novel framework makes it possible to train models to predict a choice of metrics suitable for different applications. It could provide prediction confidence for a model, easing the adoption of AI solutions.
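The target QMetNet learns to predict, the mean absolute error of a synthetic CT against its reference, is straightforward to compute when a reference exists (a sketch; the HU values are chosen purely for illustration):

```python
import numpy as np

def mae_hu(synthetic_ct, reference_ct):
    # Mean absolute error (e.g. in Hounsfield units) between a synthetic CT
    # and its reference; QMetNet is trained to predict this number when no
    # reference CT is available at inference time.
    return float(np.mean(np.abs(synthetic_ct - reference_ct)))

reference = np.array([[0.0, 100.0], [1000.0, -100.0]])   # toy HU patch
synthetic = np.array([[10.0, 90.0], [1010.0, -110.0]])   # each voxel off by 10
assert mae_hu(synthetic, reference) == 10.0
```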

2237. (Computer # 53)
Code-Aware Transformation from NAC PET to AC PET, MRI, or CT Imaging
Yuxi Jin1, Qingneng Li2, Chao Zhou3, Zhihua Li2, Zixiang Chen2, Zhenxing Huang2, Na Zhang2, Xu Zhang3, Wei Fan3, Jianmin Yuan4, Qiang He4, Weiguang Zhang3, Hairong Zheng2,5, Dong Liang2,5, and Zhanli Hu2,5
1Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 3Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China, 4Central Research Institute, United Imaging Healthcare Group, Shanghai, China, 5Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Beijing, China

Keywords: Other AI/ML, Multimodal, modality transformation, dynamic transformation, NAC PET, AC PET, MRI, CT

Motivation: In radiation therapy with PET images, CT and MR images are used for precise targeting, but acquiring them is expensive, time-consuming and increases radiation risk.

Goal(s): To develop a deep learning model capable of dynamically switching to a specified output modality, enhancing flexibility beyond traditional one-to-one cross-modal conversion methods.

Approach: We developed a deep learning model with dynamic modality translation capabilities by incorporating switch layers within the decoder module.

Results: The evaluations showed that our model excels at converting non-attenuation corrected PET images to attenuation corrected PET, MR, or CT images, making it easier to obtain additional modality images for radiation therapy.

Impact: Dynamic conversion from NAC PET to desired modalities like AC PET, CT, or MRI on demand is more efficient, saving on data storage and processing, and offers customized imaging for specific clinical needs, enhancing workflow efficiency.
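The switch layers described in the approach can be pictured as selecting modality-specific decoder parameters with a one-hot target code; `switch_layer` and the per-modality scalars below are hypothetical stand-ins for the real learned weights:

```python
import numpy as np

def switch_layer(features, code, per_modality_weights):
    # Pick the parameter set for the requested output modality.
    # code: one-hot over (AC PET, MR, CT); a per-channel scaling stands in
    # for the real modality-specific convolution weights.
    idx = int(np.argmax(code))
    return features * per_modality_weights[idx]

features = np.ones((2, 2))
weights = [0.5, 1.0, 2.0]        # hypothetical parameters per modality
ct_code = np.array([0, 0, 1])    # request CT-style output
assert np.allclose(switch_layer(features, ct_code, weights), 2.0)
```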

2238. (Computer # 54)
Evaluation of generative models for synthetic CT images using SINGHA, a new spectrally informed metric
Veronica Ravano1,2,3, Adham Elwakil1,2,3, Thomas Yu1,2,3, Tom Hilbert1,2,3, Bénédicte Maréchal1,2,3, Jonas Richiardi2, Jean-Philippe Thiran3, Charbel Mourad2, Paul Margain4, Julien Favre4, Tobias Kober1,2,3, Patrick Omoumi2, and Stefan Sommer1,5
1Advanced Clinical Imaging Technology, Siemens Healthineers International AG, Lausanne, Geneva and Zurich, Switzerland, 2Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 3LTS5, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 4Swiss Biomotion Lab, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 5Swiss Centre for Musculoskeletal Imaging (SCMI), Balgrist Campus, Zurich, Switzerland

Keywords: Analysis/Processing, MSK, synthetic CT

Motivation: Synthetic CT (sCT) based on MRI could improve the characterization of bone pathology by estimating bone mineral density and providing a high level of structural details. However, evaluating the performance of sCT is challenging in both respects.

Goal(s): To propose an evaluation framework for sCT that quantifies accuracy both in terms of image intensity and depiction of structural details.

Approach: We propose the new frequency-based metric SINGHA that captures the sharpness difference between images.

Results: SINGHA was complementary to standard metrics and captured differences in high frequency content, thereby contributing to a more comprehensive evaluation of sCT images.

Impact: Using the newly introduced Spectrally-INformed Grading of High-frequency Attributes (SINGHA) in conjunction with standard intensity-based metrics enables simultaneous evaluation of synthetic CT accuracy in terms of bone mineral density and sharpness.
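The exact definition of SINGHA is not given in the abstract, but the principle of a spectrally informed sharpness metric, comparing high-frequency content between images, can be sketched as the fraction of FFT energy above a radial frequency cutoff (an illustrative metric, not the published one):

```python
import numpy as np

def highfreq_energy_ratio(img, cutoff=0.25):
    # Fraction of the image's spectral energy above a radial frequency
    # cutoff; blurrier images score lower. Illustrative only -- the actual
    # SINGHA metric is defined differently.
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    yy, xx = np.meshgrid(fy, fx, indexing="ij")
    high = np.sqrt(xx ** 2 + yy ** 2) > cutoff
    return float(power[high].sum() / power.sum())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = 0.25 * (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
                  + np.roll(np.roll(sharp, 1, 0), 1, 1))   # 2x2 box blur
# Blurring removes high-frequency energy, so the metric drops.
assert highfreq_energy_ratio(blurred) < highfreq_energy_ratio(sharp)
```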

2239. (Computer # 55)
Three-Dimensional Amyloid-Beta PET Synthesis from Structural MRI with Conditional Generative Adversarial Networks
Fernando Vega1,2,3,4, Abdoljalil Addeh1,2,3,4, and M. Ethan MacDonald1,2,3,4
1Biomedical Engineering, University of Calgary, Calgary, AB, Canada, 2Electrical & Software Engineering, University of Calgary, Calgary, AB, Canada, 3Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada, 4Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada

Keywords: AI/ML Image Reconstruction, Alzheimer's Disease, MRI, PET, Image Translation

Motivation: Alzheimer’s Disease hallmarks include amyloid-beta deposits and brain atrophy, detectable via PET and MRI scans, respectively.  PET is expensive, invasive and exposes patients to ionizing radiation.  MRI is cheaper, non-invasive, and free from ionizing radiation but limited to measuring brain atrophy.

Goal(s): To develop a 3D image translation model that synthesizes amyloid-beta PET images from T1-weighted MRI, exploiting the known relationship between amyloid-beta and brain atrophy.

Approach: The model was trained on 616 PET/MRI pairs and validated with 264 pairs.

Results: The model synthesized amyloid-beta PET images from T1-weighted MRI with a high degree of similarity (SSIM > 0.97, PSNR = 34).

Impact: Our model demonstrates the feasibility of synthesizing amyloid-beta PET images from structural MRI, significantly enhancing accessibility for large-cohort studies and early dementia detection while reducing cost, invasiveness, and radiation exposure.
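The reported similarity metrics are standard. PSNR, for instance, is fully determined by the mean squared error and the data range (SSIM is more involved; see standard references):

```python
import numpy as np

def psnr(pred, ref, data_range=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    # Higher values mean the prediction is closer to the reference.
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

reference = np.zeros((8, 8))
prediction = reference + 0.01   # uniform error of 0.01 on a [0, 1] scale
assert abs(psnr(prediction, reference) - 40.0) < 1e-6
```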

2240. (Computer # 56)
Zero-Dose PET Synthesis for Patients with Glioblastoma by Mixture-of-Experts-Based TransUNet
Ella Lan1
1Stanford CAFN Lab, Santa Clara, CA, United States

Keywords: AI/ML Image Reconstruction, PET/MR

Motivation: By utilizing multi-contrast MRI (T1, T1c, ASL, and T2-FLAIR), PET images can be synthesized without requiring patients to face radiation exposure.

Goal(s): To synthesize high-quality FDG-PET images by multi-contrast MRI, using TransUNet with mixture-of-experts (MoE) in order to ensemble both local and global feature maps.

Approach: TransUNet was utilized as the backbone to synthesize PET from multi-contrast MRI. A mixture-of-experts (MoE) was designed to assign a weight map to each layer's feature.

Results: The proposed MoE-based TransUNet model synthesizes FDG-PET images from multi-contrast MRI for GBM cases, and incorporating MoE into TransUNet yields solid results.

Impact: By synthesizing high-quality FDG-PET images from multi-contrast MRI via deep learning, this zero-dose approach could be transformative in modern medicine. It would not require patients to face radiation exposure while improving the accessibility of PET information.
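The mixture-of-experts weighting can be sketched as a softmax-normalised weight map blending the feature maps of a local and a global expert; all names, shapes, and gate values here are illustrative:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_combine(expert_maps, gate_logits):
    # Blend per-expert feature maps with a pixel-wise softmax weight map,
    # so local and global experts each dominate where the gate trusts them.
    w = softmax(gate_logits, axis=0)     # (n_experts, H, W), sums to 1
    return (w * expert_maps).sum(axis=0)

local_map = np.ones((2, 2))              # "local" expert output
global_map = np.zeros((2, 2))            # "global" expert output
experts = np.stack([local_map, global_map])
gates = np.stack([np.full((2, 2), 10.0), np.full((2, 2), -10.0)])
fused = moe_combine(experts, gates)
assert np.all(fused > 0.99)              # gate strongly favours the local expert
```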

2241. (Computer # 57)
Physics-informed Variational Auto-Encoder to generate synthetic multi-echo chemical shift-encoded liver MR images
Juan Pablo Meneses1,2, Juan Cristobal Gana3, Jose Eduardo Galgani4,5, Cristian Tejos1,2,6, Zhaolin Chen7,8, and Sergio Uribe1,2,9
1Biomedical Imaging Center, Pontificia Universidad Catolica de Chile, Santiago, Chile, 2i-Health Millennium Institute for Intelligent Healthcare Engineering, Santiago, Chile, 3Pediatric Gastroenterology and Nutrition Department, Division of Pediatrics, School of Medicine, Pontificia Universidad Catolica de Chile, Santiago, Chile, 4Nutrition & Dietetics. Department of Health Sciences; Faculty of Medicine, Pontificia Universidad Catolica de Chile, Santiago, Chile, 5Department of Nutrition, Diabetes and Metabolism. Faculty of Medicine, Pontificia Universidad Catolica de Chile, Santiago, Chile, 6Department of Electrical Engineering, Pontificia Universidad Catolica de Chile, Santiago, Chile, 7Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia, 8Department of Data Science and AI, Monash University, Melbourne, VIC, Australia, 9Department of Medical Imaging and Radiation Sciences, Monash University, Melbourne, VIC, Australia

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, Generative Model

Motivation: Deep Learning (DL)-based methods to quantify liver PDFF have shown robustness difficulties due to the lack of large, heterogeneous training datasets with known reference values.

Goal(s): To create a DL algorithm to synthesize realistic multi-echo liver MR images given a set of arbitrary MR scan parameters.

Approach: To use a physics-driven approach to create a DL-based generative model able to synthesize realistic liver CSE-MR images with different compositions and geometries.

Results: Our framework enabled reliable customization of MR scan parameters by directly adjusting them in the physical model. The feasibility of training a DL method purely on synthetic data was also demonstrated.

Impact: We successfully generated realistic multi-echo liver MR images with diverse geometries and compositions, which can be used to efficiently train DL-based methods for liver PDFF quantification. The physics-driven nature of our model enables the customization of MR scan parameters.
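The physics underlying CSE-MRI synthesis is the multi-echo signal model. A single-fat-peak version is sketched below; the published model likely uses a multi-peak fat spectrum, and the -434 Hz shift assumes 3T:

```python
import numpy as np

def cse_signal(te, water, fat, r2star, field_map_hz, fat_shift_hz=-434.0):
    # Single-peak chemical-shift-encoded signal at echo time te (seconds):
    # S(TE) = (W + F*exp(i*2*pi*f_F*TE)) * exp((-R2* + i*2*pi*psi)*TE).
    # A multi-peak fat model would sum several shifted fat terms.
    fat_term = fat * np.exp(1j * 2 * np.pi * fat_shift_hz * te)
    decay = np.exp((-r2star + 1j * 2 * np.pi * field_map_hz) * te)
    return (water + fat_term) * decay

# PDFF = F / (W + F); at TE = 0 the signal magnitude is simply W + F.
s0 = cse_signal(0.0, water=0.8, fat=0.2, r2star=40.0, field_map_hz=50.0)
assert abs(abs(s0) - 1.0) < 1e-12
```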

2242. (Computer # 58)
A Unified Approach for Synthesizing Multimodal Brain MR Images via Gated Hybrid Fusion
Jihoon Cho1,2, Xiaofeng Liu2, Fangxu Xing2, Jinsong Ouyang2, Georges El Fakhri3, Jinah Park1, and Jonghye Woo2
1School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, Korea, Republic of, 2Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States, 3Yale School of Medicine, New Haven, CT, United States

Keywords: Analysis/Processing, Brain

Motivation: Some MR modalities may not be acquired during a scanning session; it is therefore necessary to generate the missing modalities for accurate diagnosis and treatment planning.

Goal(s): Our goal is to synthesize missing modalities from the acquired images while minimizing any loss of information.

Approach: We propose a unified framework that employs a gated hybrid fusion approach to synthesize multimodal brain MR images from the acquired images.

Results: In our experiments, carried out on the BraTS 2018 database containing four MR modalities, we observed improved synthesis quality in all metrics, when synthesizing missing modalities from one or three provided modalities.

Impact: Our unified framework for synthesizing missing modalities is versatile enough to handle all scenarios, irrespective of the modalities provided. It can be applied in clinical decision-making and computer-aided diagnosis, where all modalities are essential for processing.
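A gated fusion step can be pictured as a per-voxel convex combination of two modality feature maps, with a learned gate deciding each modality's contribution; the feature names and gate value below are illustrative:

```python
import numpy as np

def gated_fusion(feat_a, feat_b, gate):
    # Per-voxel convex blend: gate in [0, 1] controls how much of each
    # modality's features reaches the fused representation.
    return gate * feat_a + (1.0 - gate) * feat_b

t1_features = np.full((2, 2), 3.0)
flair_features = np.full((2, 2), 1.0)
fused = gated_fusion(t1_features, flair_features, gate=np.full((2, 2), 0.75))
assert np.allclose(fused, 2.5)   # 0.75*3 + 0.25*1
```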

2243. (Computer # 59)
MR Contrast-Invariant Deep Learning Method for Synthetic CT Generation
Sandeep Kaushik1,2, Cristina Cozzini1, Jonathan J Wyatt3, Hazel McCallum3, Ross Maxwell4, Bjoern Menze2, and Florian Wiesinger1
1GE HealthCare, Munich, Germany, 2University of Zurich, Zurich, Switzerland, 3Newcastle University and Northern Centre for Cancer Care, Newcastle upon Tyne, United Kingdom, 4Newcastle University, Newcastle upon Tyne, United Kingdom

Keywords: Analysis/Processing, Radiotherapy, synthetic CT, PET/MR, image synthesis

Motivation: Deep learning models are sensitive to image contrast variations. We explore the feasibility of training a single model to process multiple MR contrasts.

Goal(s): To generate synthetic CT images from different MR image contrasts using an image-contrast-agnostic model.

Approach: A multi-task deep convolutional neural network has been trained using a variety of MR image contrasts. 

Results: We demonstrate generation of synthetic CT images from multiple MR images with superior qualitative accuracy and encouraging quantitative accuracy. 

Impact: The ability to generate synthetic CT from a variety of MR contrasts brings flexibility of choice of MR sequence in MR guided radiation therapy clinical setup. It improves the model robustness to scan parameter variations leading to a consistent outcome. 

2244. (Computer # 60)
Synthesising ultra-strong gradients diffusion MRI with high-resolution convolutional neural networks
Matteo Mancini1,2, Carolyn McNabb2, Mara Cercignani2, Derek Jones2, and Marco Palombo2
1Italian National Institute of Health, Rome, Italy, 2Cardiff University Brain Research Imaging Centre, Cardiff University, Cardiff, United Kingdom

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, Synthesis

Motivation: Ultra-strong-gradient scanners make it possible to explore tissue microstructure, but these systems are not widespread because of their associated challenges.

Goal(s): Our goal is to leverage image synthesis and deep learning to design a neural network able to predict high b-value data from low b-value data.

Approach: We implemented a U-Net architecture and tailored a loss function able to learn tissue-based features in a patch-based fashion. We trained it on a large dataset and tested it quantitatively and qualitatively.

Results: Qualitative and quantitative results showed remarkable agreement between synthetic high-b-value data and the ground truth. A preliminary test with a microstructural model also gave encouraging results.

Impact: Being able to synthesise high-b-value data from clinical data could broaden the availability of advanced microstructural models for studying the human brain and body, with applications in fundamental research and clinical settings.
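For context, the simplest diffusion signal model is a monoexponential decay in b; real tissue deviates from it at high b-values, which is why a learned low-b to high-b mapping is preferable to naive model extrapolation (the values below are illustrative, not from the abstract):

```python
import numpy as np

def dwi_signal(b, s0, adc):
    # Monoexponential model S(b) = S0 * exp(-b * ADC), with b in s/mm^2 and
    # ADC in mm^2/s. Real tissue is non-Gaussian at high b, so simple
    # extrapolation from low-b data breaks down.
    return s0 * np.exp(-b * adc)

s_low = dwi_signal(b=1000.0, s0=1.0, adc=0.8e-3)    # typical clinical b-value
s_high = dwi_signal(b=6000.0, s0=1.0, adc=0.8e-3)   # ultra-strong-gradient regime
assert s_high < s_low < 1.0
```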

2245. (Computer # 61)
CDGAN: Cross Datasets Generative Adversarial Network for MR multi-contrast Image Synthesis
Guowen Wang1, Silei Wang1, Yuebin He1, Liangjie Lin2, Shuhui Cai1, Congbo Cai1, and Zhong Chen1
1Xiamen University, Xiamen, China, 2Clinical & Technical Support, Philips Healthcare, China, Shengzhen, China

Keywords: Other AI/ML, Brain, Synthesis

Motivation: Multi-contrast MR images usually take a long time to scan, so often only some of the valuable contrasts are obtained. Current deep learning methods face challenges with domain adaptation across datasets and with generating high-quality images of various contrasts.

Goal(s): Our purpose is to synthesize diverse contrast MRIs across different datasets.

Approach: We propose a cross-dataset generative adversarial network (CDGAN). The synthesized MR modalities of a specific object not only conform to the characteristics of the modalities themselves but also share the same structure.

Results: This method effectively addresses the issue of synthesizing across datasets.

Impact: The method demonstrates a significant improvement in the quality of generated images when tested on different datasets.

2246. (Computer # 62)
Improved Deep Learning MR Image Enhancement with Synthetic Images
Zechen Zhou1 and Ryan Chamberlain1
1Subtle Medical Inc, Menlo Park, CA, United States

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence

Motivation: Deep learning (DL)-based image enhancement requires paired data for supervised training, but separately acquired data pairs may suffer spatial misalignment that limits model performance.

Goal(s): Incorporate synthetic data into the training set to address the misalignment issue and improve the quality and diversity of the training set.

Approach: Develop and validate a diffusion-based image degrader to synthesize low-quality images. Compare the performance of DL models trained with and without synthetic data.

Results: DL models trained with synthetic data can achieve similar performance compared to training with acquired pairs. Additional synthetic data can improve DL image enhancement.

Impact: Synthetic data allows building more diverse training sets for multi-task DL models. How much acceleration the DL model can support, and whether the output quality can be controlled to meet different clinical preferences, warrant further investigation.
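The diffusion-based degrader presumably follows the standard forward (noising) process; one closed-form jump of that process looks like the sketch below (the schedule value `alpha_bar` is arbitrary, and the abstract's actual degradation model may differ):

```python
import numpy as np

def degrade(x0, alpha_bar, noise):
    # Closed-form forward diffusion step used to synthesise degraded images:
    # x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * noise.
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(1)
high_quality = rng.random((8, 8))
low_quality = degrade(high_quality, alpha_bar=0.5,
                      noise=rng.standard_normal((8, 8)))
assert low_quality.shape == high_quality.shape
```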

2247. (Computer # 63)
MR-to-CT Synthesis in MR-only Radiotherapy Based on Deep Learning
Yibo Hu1, Shi'ang Zhang1, Wentao Li2, Jianqi Sun1, and Lisa X. Xu1
1Shanghai Jiao Tong University, Shanghai, China, 2Fudan University Shanghai Cancer Center, Shanghai, China

Keywords: Analysis/Processing, Liver, MR radiotherapy, medical image synthesis

Motivation: MRI-only radiotherapy requires synthesizing MR images into CT-equivalent images to calculate the radiation dose. However, the current synthesis methods are limited when applied to small anatomical regions, such as tumors.

Goal(s): Our goal was to develop a novel MR-to-CT synthesis algorithm that produces better results for small anatomical structures.

Approach: We introduced a multi-branch hybrid perceptual generative model incorporating an attention mechanism to synthesize anatomical structures at different scales.

Results: Our proposed algorithm yields favorable results for small anatomical structures based on physician feedback and quantitative assessments.

Impact: The proposed synthesis algorithm simplifies and speeds up MRI-only radiotherapy workflow. It is also applicable to other fields of medical image synthesis, such as multi-modal image diagnosis treatment.

2248. (Computer # 64)
Assessment of Synthetic 3T Image Generator for Accurate CSF Volume Measurement in T1-w, T2-w, and FLAIR Sequences Compared to Low-Field Imaging
Kh Tohidul Islam1, Shenjun Zhong1,2, Parisa Zakavi1, Helen Kavnoudias3,4, Shawna Farquharson2, Gail Durbridge5, Markus Barth6, Katie L. McMahon7, Paul M. Parizel8,9, Gary F. Egan1, Andrew Dwyer10, Meng Law3,4, and Zhaolin Chen1,11
1Monash Biomedical Imaging, Monash University, Clayton, Victoria, Australia, 2Australian National Imaging Facility, Queensland, Australia, 3Neuroscience, Monash University, Clayton, Victoria, Australia, 4Radiology, Alfred Hospital, Victoria, Australia, 5Herston Imaging Research Facility, University of Queensland, Queensland, Australia, 6School of Electrical Eng. and Computer Science, University of Queensland, Queensland, Australia, 7School of Clinical Science, Queensland University of Technology, Queensland, Australia, 8David Hartley Chair of Radiology, Royal Perth Hospital, Western Australia, Australia, 9Medical School, University of Western Australia, Western Australia, Australia, 10South Australian Health and Medical Research Institute, South Australia, Australia, 11Data Science and AI, Monash University, Clayton, Victoria, Australia

Keywords: AI/ML Image Reconstruction, Low-Field MRI

Motivation: Addressing limited access to high-field MRI systems, our study investigates whether a Synthetic 3T Image Generator can enhance low-field MRI to match high-field image quality, crucial for accurate cerebrospinal fluid (CSF) volume analysis.

Goal(s): We aimed to validate the efficacy of the Synthetic 3T generator in improving CSF volume measurements on low-field MRI, in comparison to high-field T1-w, T2-w, and FLAIR sequences.

Approach: A cGAN was employed to enhance 64 mT MRI data into synthetic 3T images, which were evaluated against high-field MRI.

Results: The synthetic 3T images demonstrated significant improvements in CSF volume estimation across all sequences when compared to low-field images.

Impact: The synthetic 3T MRI enhancements could advance neuroimaging in resource-limited settings, improve diagnostic precision for brain injuries, broaden neurological and psychiatric patient care worldwide, and expand neuroimaging research opportunities.