Using METS to Express Digital Provenance for Complex Digital Objects

Christy Chapman, Clifford Parker, Stephen Parsons, W. Brent Seales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution (peer-reviewed)

1 Scopus citation

Abstract

Today’s digital libraries consist of much more than simple 2D images of manuscript pages or paintings. Advanced imaging techniques – 3D modeling, spectral photography, and volumetric x-ray, for example – can be applied to all types of cultural objects and can be combined to create complex digital representations comprising many disparate parts. In addition, emergent technologies like virtual unwrapping and artificial intelligence (AI) make it possible to create “born digital” versions of unseen features, such as text and brush strokes, that are “hidden” by damage and therefore lack verifiable analog counterparts. Transparent metadata that describes the set of algorithmic steps and file combinations used to create such complicated digital representations is therefore crucial. At EduceLab, we create various types of complex digital objects, from virtually unwrapped manuscripts that rely on machine learning tools to create born-digital versions of unseen text, to 3D models that combine 2D photos, multi- and hyperspectral images, drawings, and 3D meshes. In exploring ways to document the digital provenance chain for these complicated digital representations, and to support dissemination of that metadata in a clear, concise, and organized way, we settled on the Metadata Encoding and Transmission Standard (METS). This paper outlines our design, which exploits the flexibility and comprehensiveness of METS, particularly its behaviorSec, to meet emerging digital provenance metadata needs.
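As a rough illustration of the idea the abstract describes – not the paper's actual encoding – a METS record can pair a fileSec listing source and derived files with a behaviorSec entry naming the mechanism that produced the derived file. The element names (mets, fileSec, fileGrp, file, behaviorSec, behavior, mechanism) come from the METS schema; all IDs, labels, and file names below are invented for the sketch.

```python
# Minimal sketch of a METS record expressing digital provenance:
# a behaviorSec entry records the algorithmic step (here, a hypothetical
# virtual-unwrapping run) that turned a source CT volume into a derived image.
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"
ET.register_namespace("mets", METS_NS)

def q(tag):
    """Qualify a tag name with the METS namespace."""
    return f"{{{METS_NS}}}{tag}"

mets = ET.Element(q("mets"))

# fileSec: the disparate parts that make up the complex digital object.
file_sec = ET.SubElement(mets, q("fileSec"))
source_grp = ET.SubElement(file_sec, q("fileGrp"), USE="source")
ET.SubElement(source_grp, q("file"),
              ID="FILE_CT_VOLUME", MIMETYPE="application/octet-stream")
derived_grp = ET.SubElement(file_sec, q("fileGrp"), USE="derived")
ET.SubElement(derived_grp, q("file"),
              ID="FILE_UNWRAPPED_TIFF", MIMETYPE="image/tiff")

# behaviorSec: the algorithmic step linking source to derived output.
behavior_sec = ET.SubElement(mets, q("behaviorSec"))
behavior = ET.SubElement(behavior_sec, q("behavior"),
                         ID="BEH_VIRTUAL_UNWRAP",
                         LABEL="Virtual unwrapping of CT volume")
# mechanism points at the executable/module that performed the step.
ET.SubElement(behavior, q("mechanism"),
              LABEL="unwrapping pipeline (hypothetical)", LOCTYPE="URL")

xml_text = ET.tostring(mets, encoding="unicode")
print(xml_text)
```

The print shows a single namespaced XML document; a real record would also carry descriptive and administrative metadata sections alongside the fileSec and behaviorSec.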

Original language: English
Title of host publication: Metadata and Semantic Research - 14th International Conference, MTSR 2020, Revised Selected Papers
Editors: Emmanouel Garoufallou, María-Antonia Ovalle-Perandones
Pages: 143-154
Number of pages: 12
DOIs
State: Published - 2021
Event: 14th International Conference on Metadata and Semantics Research, MTSR 2020 - Madrid, Spain
Duration: Dec 2, 2020 - Dec 4, 2020

Publication series

Name: Communications in Computer and Information Science
Volume: 1355 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 14th International Conference on Metadata and Semantics Research, MTSR 2020
Country/Territory: Spain
City: Madrid
Period: 12/2/20 - 12/4/20

Bibliographical note

Funding Information:
1 EduceLab, inspired by the Digital Restoration Initiative at the University of Kentucky (UK), is a research group in UK’s Computer Science department. EduceData refers to the image data used to build EduceLab’s complex born-digital objects and the accompanying digital provenance metadata.
2 Funded by the Andrew W. Mellon Foundation, grant number G-1810-06243.
3 For a comprehensive description of the P.Herc.118 project, see Bertlesman et al. (2021) and https://www.thinking3d.ac.uk/Seales/.

Publisher Copyright:
© 2021, Springer Nature Switzerland AG.

Keywords

  • 3D modeling
  • Cultural heritage
  • Digital libraries
  • Digital provenance
  • Herculaneum papyri
  • METS
  • Metadata
  • Virtual unwrapping

ASJC Scopus subject areas

  • Computer Science (all)
  • Mathematics (all)
