TY - JOUR
T1 - Accelerated training of bootstrap aggregation-based deep information extraction systems from cancer pathology reports
AU - Yoon, Hong Jun
AU - Klasky, Hilda B.
AU - Gounley, John P.
AU - Alawad, Mohammed
AU - Gao, Shang
AU - Durbin, Eric B.
AU - Wu, Xiao Cheng
AU - Stroup, Antoinette
AU - Doherty, Jennifer
AU - Coyle, Linda
AU - Penberthy, Lynne
AU - Christian, J. Blair
AU - Tourassi, Georgia D.
N1 - Publisher Copyright:
© 2020 Elsevier Inc.
PY - 2020/10
Y1 - 2020/10
AB - Objective: In machine learning, classification task performance generally improves when bootstrap aggregation (bagging) is applied. However, bagging deep neural networks requires tremendous computational resources and training time. The research question we aimed to answer is whether we could achieve higher task performance scores and accelerate training by dividing a problem into sub-problems. Materials and Methods: The data used in this study consist of free text from electronic cancer pathology reports. We applied bagging and partitioned data training using Multi-Task Convolutional Neural Network (MT-CNN) and Multi-Task Hierarchical Convolutional Attention Network (MT-HCAN) classifiers. We split a large problem into 20 sub-problems, resampled the training cases 2,000 times, and trained a deep learning model for each bootstrap sample and each sub-problem, generating up to 40,000 models. We trained many models concurrently in a high-performance computing environment at Oak Ridge National Laboratory (ORNL). Results: We demonstrated that aggregating the models improves task performance compared with the single-model approach, consistent with other studies, and that the two proposed partitioned bagging methods achieved higher classification accuracy scores on four tasks. Notably, the improvements were significant for the extraction of cancer histology data, a task with more than 500 class labels; these results show that data partitioning may alleviate task complexity. In contrast, the methods did not achieve superior scores for the site and subsite classification tasks. Because data partitioning was based on the primary cancer site, accuracy depended on how the partitions were determined, which needs further investigation and improvement. Conclusion: The results of this research demonstrate that (1) the data partitioning and bagging strategy achieved higher performance scores, and (2) training was accelerated by leveraging the high-performance Summit supercomputer at ORNL.
KW - Bootstrap aggregation
KW - Convolutional neural networks
KW - Data partitioning
KW - Deep learning
KW - Hierarchical self-attention networks
KW - High-performance computing
KW - Natural language processing
UR - http://www.scopus.com/inward/record.url?scp=85090853874&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090853874&partnerID=8YFLogxK
U2 - 10.1016/j.jbi.2020.103564
DO - 10.1016/j.jbi.2020.103564
M3 - Article
C2 - 32919043
AN - SCOPUS:85090853874
SN - 1532-0464
VL - 110
JO - Journal of Biomedical Informatics
JF - Journal of Biomedical Informatics
M1 - 103564
ER -