Abstract
Time domain continuous imaging (TDCI) models scene appearance as a set of continuous waveforms, each recording how the value of an individual pixel changes over time. When a set of timestamped still images is converted into a TDCI stream, a pixel value change record is created whenever a pixel's value differs from its previously recorded value by more than the value error model classifies as noise. Virtual exposures may then be rendered from the TDCI stream for arbitrary time intervals by integrating the area under the pixel value waveforms. With conventional cameras, both multispectral and high dynamic range imaging involve combining multiple exposures; the needed variations in exposure and/or spectral filtering generally skew the time periods represented by the component exposures or otherwise compromise capture quality. This paper describes a simple approach in which converting the image data to a TDCI representation supports generation of a higher-quality fusion of the separate captures.
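The conversion and rendering steps described in the abstract can be sketched roughly as follows. This is a minimal illustration only: it assumes a piecewise-constant waveform between change records and a single scalar noise threshold in place of the paper's per-value error model, and the function names and data layout are hypothetical rather than the authors' implementation.

```python
import numpy as np

def build_tdci_stream(frames, timestamps, noise_threshold):
    """Convert timestamped still images into per-pixel change records.

    A new (time, value) record is kept for a pixel only when its value
    differs from the last recorded value by more than noise_threshold,
    i.e. by more than the (here, simplified) error model calls noise.
    """
    h, w = frames[0].shape
    # One list of (timestamp, value) records per pixel, seeded from frame 0.
    stream = [[[(timestamps[0], float(frames[0][y, x]))] for x in range(w)]
              for y in range(h)]
    for t, frame in zip(timestamps[1:], frames[1:]):
        for y in range(h):
            for x in range(w):
                last_value = stream[y][x][-1][1]
                if abs(float(frame[y, x]) - last_value) > noise_threshold:
                    stream[y][x].append((t, float(frame[y, x])))
    return stream

def render_virtual_exposure(stream, t0, t1):
    """Render a virtual exposure over [t0, t1] by integrating the area
    under each pixel's piecewise-constant waveform, then dividing by the
    interval length to get the average pixel value."""
    h, w = len(stream), len(stream[0])
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            records = stream[y][x]
            area = 0.0
            for i, (t, v) in enumerate(records):
                seg_start = max(t, t0)
                seg_end = records[i + 1][0] if i + 1 < len(records) else t1
                seg_end = min(seg_end, t1)
                if seg_end > seg_start:
                    area += v * (seg_end - seg_start)
            out[y, x] = area / (t1 - t0)
    return out

if __name__ == "__main__":
    # Synthetic example: four 2x2 frames captured one second apart.
    times = [0.0, 1.0, 2.0, 3.0]
    frames = [np.full((2, 2), v, dtype=float) for v in (10, 11, 30, 31)]
    stream = build_tdci_stream(frames, times, noise_threshold=5.0)
    # Virtual exposure spanning 0.5 s to 2.5 s of the waveforms.
    print(render_virtual_exposure(stream, 0.5, 2.5))
```

Because the rendered interval is decoupled from the original capture times, the same mechanism can integrate component captures made with different exposures or spectral filters over a single common time window, which is the basis of the fusion described in the paper.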
| Original language | English |
| --- | --- |
| Journal | IS&T International Symposium on Electronic Imaging Science and Technology |
| DOIs | |
| State | Published - 2018 |
| Event | Photography, Mobile, and Immersive Imaging 2018, PMII 2018 - Burlingame, United States. Duration: Jan 28 2018 → Feb 1 2018 |
Bibliographical note
Funding Information: This work is supported in part under NSF Award #1422811.
Publisher Copyright: © 2018, Society for Imaging Science and Technology.
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design
- Computer Science Applications
- Human-Computer Interaction
- Software
- Electrical and Electronic Engineering
- Atomic and Molecular Physics, and Optics