STAN-CT: Standardization and Normalization of CT Images for Lung Cancer Patients

Grants and Contracts Details

Description

Lung cancer is the leading cause of cancer death and one of the most common cancers among both men and women in the United States [1]. The extraordinarily high incidence rate urges us to identify key characteristics that lead to better cancer prognosis. Leveraging recent advances in high-resolution imaging, which allow detailed quantification of tumor phenotypic characteristics, radiomics has become an emerging field in cancer research [X]. In radiomics, advanced quantitative image features are extracted from large volumes of diagnostic images, providing opportunities to quantify spatial and temporal variation in tumor architecture and function, through which intra-tumor evolution can be characterized [X]. However, large-scale cross-site radiomic studies are still limited by the available tools, mainly because diagnostic images, such as computed tomography (CT) scans, are often acquired on machines from different vendors and with customized acquisition parameters, posing a fundamental challenge to radiomic studies across sites. To overcome the barriers that prevent the use of such images in large-scale radiomic studies, algorithms are required to integrate, standardize, and normalize CT images from multiple sources.

The goal of the Standardization and Normalization of CT images for lung cancer patients (STAN-CT) project is to develop a novel computational platform that automatically standardizes and normalizes large volumes of diagnostic images, enabling cross-site, large-scale image feature extraction and accelerating research on lung cancer. By precisely identifying high-level tumor phenotypic characteristics of lung cancer patients and by building an integrative radiomics map toward the prognosis of lung cancer, STAN-CT will overcome research silos, promote medical image resource sharing, and transform massive data into knowledge and testable hypotheses, ultimately improving the understanding and treatment of lung cancer.
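As a deliberately simplified example of the kind of normalization involved, the sketch below clips a CT volume to a Hounsfield-unit window and rescales it to [0, 1], a common preprocessing step before radiomic feature extraction. The function name and window bounds are illustrative assumptions, not part of the STAN-CT method.

```python
import numpy as np

def normalize_hu(volume, hu_min=-1000.0, hu_max=400.0):
    """Clip a CT volume to a Hounsfield-unit (HU) window and rescale to [0, 1].

    A generic preprocessing step; the window bounds here are
    illustrative defaults, not values specified by STAN-CT.
    """
    vol = np.clip(np.asarray(volume, dtype=np.float64), hu_min, hu_max)
    return (vol - hu_min) / (hu_max - hu_min)

# Example: a toy 2x2 "slice" in Hounsfield units.
slice_hu = np.array([[-1000.0, 400.0],
                     [-300.0, 0.0]])
normalized = normalize_hu(slice_hu)
```

Such intensity normalization addresses only low-level differences; STAN-CT targets the harder problem of harmonizing the high-level radiomic features that vary with scanner vendor and acquisition protocol.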
Our work has three specific aims.

Aim 1. To develop a computational platform to standardize and normalize medical images acquired with various parameters. Diagnostic images depend on the vendor and on the acquisition parameters used, posing a fundamental challenge to phenotypic feature extraction across sites. Using a small image set acquired with identical parameters greatly limits statistical power, whereas directly pooling images from all sources introduces artifacts into the extracted image features, even when deep learning models are used [2,3]. To address this problem, we will develop a deep learning algorithm called GANai for image standardization and normalization. GANai is a generative adversarial network (GAN) model for mitigating the differences in radiomic features across CT images captured with non-standard imaging protocols. GAN models can learn image-to-image translation from one domain to another [4-6], but they are not directly applicable to our task, mainly because they lack constraints to control which modes of data they generate. To address this issue, GANai introduces an alternating-improvement training strategy that gradually improves model performance. The new training strategy enables a series of technical improvements, including phase-specific loss functions, phase-specific training data, and the adoption of ensemble learning, leading to better model performance.

Aim 2. To deploy and test GANai for image standardization and normalization locally. We will extend GANai to support multiple medical imaging standards and deploy the enhanced model on a local cohort at the University of Kentucky for local validation. This aim has two phases. Aim 2.1: to develop a computational platform with a user-friendly graphical user interface, with which users can conveniently convert medical images taken with non-standard protocols to one or more standards that they specify.
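The alternating-improvement schedule described under Aim 1 can be sketched schematically: the two GAN components take turns training, and each phase runs until that component's quality metric clears a threshold. This is a minimal control-flow sketch with hypothetical step and metric callables, not the GANai implementation or its loss functions.

```python
def train_alternating(generator_step, discriminator_step,
                      g_metric, d_metric,
                      g_threshold, d_threshold, max_phases=10):
    """Alternate generator and discriminator training phases.

    Each phase trains one component until its (hypothetical) metric
    reaches its threshold, then hands control to the other component.
    Schematic only; phase-specific losses and data are abstracted away.
    """
    history = []
    for phase in range(max_phases):
        if phase % 2 == 0:
            # Generator phase: improve G until its metric clears the bar.
            while g_metric() < g_threshold:
                generator_step()
            history.append("G")
        else:
            # Discriminator phase: improve D likewise.
            while d_metric() < d_threshold:
                discriminator_step()
            history.append("D")
    return history

# Toy usage: each "step" just nudges a scalar score upward.
state = {"g": 0.0, "d": 0.0}
history = train_alternating(
    generator_step=lambda: state.__setitem__("g", state["g"] + 0.3),
    discriminator_step=lambda: state.__setitem__("d", state["d"] + 0.3),
    g_metric=lambda: state["g"],
    d_metric=lambda: state["d"],
    g_threshold=0.5, d_threshold=0.5, max_phases=4)
```

The design point is that neither component is allowed to run far ahead of the other, which is one way to keep adversarial training balanced.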
Aim 2.2: to deploy the platform at the Radiology Department, University of Kentucky, for performance testing. We will scan a multipurpose chest phantom, an accurate life-size anatomical model of a human torso, with different acquisition parameters and multiple CT scanners. Three performance evaluation criteria will be applied to systematically validate the functionality, reliability, and performance of GANai.

Aim 3. To deploy and test GANai for image standardization and normalization across three medical centers. The GANai platform will be deployed at Emory University, the University of Texas at Dallas, and the University of Kentucky for cross-center performance validation. Using the standard and non-standard imaging protocols at the three centers, we will scan the same multipurpose chest phantom to generate chest CT image data. In addition to validating functionality, reliability, and performance, we will test the generalizability of GANai, i.e., whether a model trained on data from one medical center is applicable to images collected at another. Finally, we will distribute the GANai software package for public use.

Impact. Leveraging the recent advances in high-resolution medical imaging, STAN-CT will standardize and normalize CT images with respect to high-level image features and create new opportunities for targeted therapy and improved outcomes [7]. Our computational framework will enable researchers to extract and prioritize image features, leading to better prognosis with personalized medicine, where treatment is increasingly tailored to critical radiomic image features examined at large scale and across sites [8]. It will also provide a better understanding of lung cancer and shed light on oncologic diagnosis and treatment guidance.
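One common way to quantify cross-scanner agreement in phantom studies of this kind is Lin's concordance correlation coefficient (CCC) between paired feature values, e.g. the same radiomic feature computed from the phantom scanned on two different CT scanners. The sketch below is a generic implementation of that standard metric; the source does not specify which evaluation criteria STAN-CT will use, so this is illustrative only.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements.

    Values near 1 indicate the two scanners (or protocols) yield
    nearly identical feature values; values near 0 or below indicate
    poor reproducibility. Uses population variances throughout.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Toy example: a feature measured on the phantom by two scanners.
ccc = concordance_ccc([10.0, 12.0, 9.5, 11.0], [10.2, 11.9, 9.4, 11.1])
```

A standardization model would be expected to raise the CCC between features from non-standard and standard acquisitions toward 1.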
Status: Finished
Effective start/end date: 7/1/19 – 6/30/22

Funding

  • National Cancer Institute: $388,776.00
