Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images

Xue Feng, Kun Qing, Nicholas J. Tustison, Craig H. Meyer, Quan Chen

Research output: Contribution to journal › Article › peer-review



Purpose: Automatic segmentation of organs-at-risk (OARs) is a key step in radiation treatment planning that reduces human effort and bias. Deep convolutional neural networks (DCNNs) have shown great success in many medical image segmentation applications, but challenges remain in handling large 3D images for optimal results. The purpose of this study is to develop a novel DCNN method for thoracic OAR segmentation using cropped 3D images.

Methods: To segment the five organs (left and right lungs, heart, esophagus, and spinal cord) from thoracic CT scans, preprocessing was first performed to unify voxel spacing and intensity. A 3D U-Net was then trained on the resampled thoracic images to localize each organ, after which the original images were cropped to contain a single organ and served as input to that organ's dedicated segmentation network. The resulting segmentation maps were merged to obtain the final result. The network structures, as well as the training and testing strategies, were optimized for each step, and a novel testing augmentation with multiple iterations of image cropping was used. The networks were trained on 36 thoracic CT scans with expert annotations provided by the organizers of the 2017 AAPM Thoracic Auto-segmentation Challenge and tested on the challenge testing dataset as well as a private dataset.

Results: The proposed method earned second place in the live phase of the challenge and first place in the subsequent ongoing phase using the newly developed testing augmentation approach. On average, it showed performance superior to human experts in terms of Dice scores (spinal cord: 0.893 ± 0.044, right lung: 0.972 ± 0.021, left lung: 0.979 ± 0.008, heart: 0.925 ± 0.015, esophagus: 0.726 ± 0.094), mean surface distance (spinal cord: 0.662 ± 0.248 mm, right lung: 0.933 ± 0.574 mm, left lung: 0.586 ± 0.285 mm, heart: 2.297 ± 0.492 mm, esophagus: 2.341 ± 2.380 mm), and 95% Hausdorff distance (spinal cord: 1.893 ± 0.627 mm, right lung: 3.958 ± 2.845 mm, left lung: 2.103 ± 0.938 mm, heart: 6.570 ± 1.501 mm, esophagus: 8.714 ± 10.588 mm). It also achieved good performance on the private dataset and reduced the editing time following automatic segmentation to 7.5 min per patient.

Conclusions: The proposed DCNN method demonstrated good performance in automatic OAR segmentation from thoracic CT scans. With improved accuracy and reduced cost for OAR segmentation, it has the potential to support eventual clinical adoption of deep learning in radiation treatment planning.
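The localize-then-crop pipeline described in the Methods can be illustrated with a minimal sketch. The helper names below (`crop_to_organ`, `merge_organ_maps`) are hypothetical and not from the paper; the sketch assumes a coarse binary mask from the localization network and shows only the cropping and label-merging steps, not the U-Net models themselves.

```python
import numpy as np

def crop_to_organ(volume, coarse_mask, margin=8):
    """Crop a 3D volume to the bounding box of a coarse organ mask,
    padded by a fixed voxel margin (illustrative stand-in for the
    paper's localize-then-crop step)."""
    coords = np.argwhere(coarse_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices], slices

def merge_organ_maps(shape, organ_maps):
    """Paste per-organ binary segmentations back into one label volume.
    organ_maps is a list of (label, binary_map, slices) tuples, where
    slices records where each crop came from in the original image."""
    merged = np.zeros(shape, dtype=np.uint8)
    for label, seg, slices in organ_maps:
        merged[slices][seg > 0] = label  # slice view writes into merged
    return merged
```

In the full method, each cropped sub-volume would be passed to that organ's segmentation network before merging; here the coarse mask stands in for the network output.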

Original language: English
Pages (from-to): 2169-2180
Number of pages: 12
Journal: Medical Physics
Issue number: 5
State: Published - May 2019

Bibliographical note

Publisher Copyright:
© 2019 American Association of Physicists in Medicine


Keywords

  • 3D U-Net
  • automatic segmentation
  • convolutional neural network
  • deep learning
  • lung cancer

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging


