Abstract
Virtual unwrapping is a software pipeline for the noninvasive recovery of texts inside damaged manuscripts via the analysis of three-dimensional tomographic data, typically X-ray micro-CT. Recent advancements to the virtual unwrapping pipeline include the use of trained models to perform the “texturing” phase, where the content written upon a surface is extracted from the 3D volume and projected onto a surface mesh representing that page. Trained models are critical for their ability to discern the subtle changes that indicate the presence or absence of writing at a given point on the surface. The unique datasets and computational pipeline required to train and make use of these models make it challenging to develop succinct, reliable, and reproducible research infrastructure. This paper presents our response to that challenge and outlines our framework designed to support the ongoing development of machine learning models that advance the capability of virtual unwrapping. Our approach is designed around the principles of visualization, automation, data access, metadata, and consistent benchmarks.
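To make the texturing step concrete, the following is a minimal sketch (not taken from the paper) of how a trained ink-detection model might be applied along a segmented page mesh: local 3D neighborhoods of the CT volume are sampled around each mesh vertex and classified for the presence of writing. The function names (`sample_subvolume`, `texture_mesh`), the subvolume radius, and the `ink_model` callable are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_subvolume(volume, center, radius=16):
    """Crop a cubic neighborhood of CT intensities around one mesh vertex.

    `volume` is a 3D numpy array of the micro-CT scan; boundary clipping
    is ignored here for brevity.
    """
    z, y, x = (int(round(c)) for c in center)
    return volume[z - radius:z + radius,
                  y - radius:y + radius,
                  x - radius:x + radius]

def texture_mesh(volume, vertices, ink_model):
    """Return one predicted ink intensity per vertex of the page mesh.

    `ink_model` is any callable mapping a local 3D neighborhood to an
    ink probability; the resulting per-vertex values can be rendered as
    the "unwrapped" page image.
    """
    return np.array([ink_model(sample_subvolume(volume, v)) for v in vertices])
```

In practice such predictions would be batched and run on a GPU, but the core idea is the same: each surface point is textured from the volumetric data surrounding it rather than from a single voxel intensity.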
Original language | English |
---|---|
Article number | 015 |
Journal | Proceedings of Science |
Volume | 378 |
State | Published - Oct 22 2021 |
Event | 2021 International Symposium on Grids and Cloud, ISGC 2021 - Taipei, Taiwan, Province of China (Duration: Mar 22 2021 → Mar 26 2021) |
Bibliographical note
Funding Information: We would like to thank Christy Chapman for extensive operational support and Mami Hayashida for infrastructure support and feedback. We thank the University of Kentucky’s Center for Computational Sciences and Information Technology Services Research Computing for use of the Lipscomb Computing Cluster resources. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1839289. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This material has been made possible in part by the National Endowment for the Humanities: Democracy demands wisdom. Any views, findings, conclusions, or recommendations expressed in this article do not necessarily represent those of the National Endowment for the Humanities.
Publisher Copyright:
© Copyright owned by the author(s) under the terms of the Creative Commons
Funding
Funders | Funder number |
---|---|
National Science Foundation Graduate Research Fellowship Program | 1839289 |
National Endowment for the Humanities | |
ASJC Scopus subject areas
- General