Abstract
The primary goal in most uses of a camera is not to capture properties of light, but to use light to construct a model of the appearance of the scene being photographed. That model should change over time as the scene changes, but how does it change over different timescales? At low framerates, there are often large changes between temporally adjacent images, many of which are attributable to motion. However, as scene appearance is sampled at ever finer time intervals, the changes in the scene become simpler and eventually insignificant compared to noise in the sampling process (e.g., photon shot noise). Thus, increasing the temporal resolution of the scene model can be expected to produce a diminishing amount of additional data. This property can be leveraged to allow virtual still exposures, or video at other framerates, to be computationally extracted after capture of a high-temporal-resolution scene model, providing a variety of benefits. The current work attempts to quantify how scene appearance models change over time by examining properties of high-framerate video, with the goal of characterizing the relationship between temporal resolution and effectiveness of data compression.
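The abstract's central intuition can be illustrated with a small simulation. The sketch below is not the authors' method; it assumes a toy one-dimensional scene (a drifting Gaussian blob), arbitrary photon counts, and Poisson photon shot noise, and simply compares the magnitude of frame-to-frame changes against the shot-noise floor as the framerate increases. It also shows how a virtual still exposure could be synthesized after capture by summing consecutive high-framerate frames.

```python
# Illustrative sketch only: a toy model of shot-noise-limited capture.
# All scene parameters (width, blob speed, photons per pixel) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def scene(t, width=256, photons_per_pixel=200.0):
    """Ideal 1-D scene: a bright Gaussian blob drifting at 50 pixels/second.
    Returns the expected photon arrival rate per pixel (photons/second)."""
    x = np.arange(width)
    center = 0.2 * width + 50.0 * t
    intensity = 0.1 + np.exp(-0.5 * ((x - center) / 8.0) ** 2)
    return intensity * photons_per_pixel

def capture(t, exposure):
    """Shot-noise-limited capture: Poisson sample of the photons collected
    during an exposure starting at time t (scene treated as static within it)."""
    return rng.poisson(scene(t) * exposure)

for fps in (30, 240, 1000, 10000):
    dt = 1.0 / fps
    frames = np.stack([capture(k * dt, dt) for k in range(int(0.5 * fps))])
    diffs = np.diff(frames.astype(np.float64), axis=0)
    # Poisson variance equals the mean count, so the shot-noise RMS of a
    # difference of two frames is roughly sqrt(2 * mean count).
    noise_rms = np.sqrt(2.0 * frames.mean())
    print(f"{fps:6d} fps: mean |frame-to-frame change| = {np.abs(diffs).mean():7.3f}"
          f"  vs shot-noise RMS ~ {noise_rms:.3f}")

# A "virtual still exposure" can be synthesized after capture by summing
# consecutive high-framerate frames over the desired exposure time.
virtual_still = np.stack([capture(k / 10000, 1 / 10000) for k in range(100)]).sum(axis=0)
```

At low framerates the inter-frame differences are dominated by the blob's motion; at sufficiently high framerates they fall below the shot-noise floor, which is the regime in which additional temporal samples contribute little new data.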
Original language | English
---|---
Journal | IS&T International Symposium on Electronic Imaging Science and Technology
State | Published - 2016
Event | Digital Photography and Mobile Imaging XII 2016, San Francisco, United States. Duration: Feb 14 2016 → Feb 18 2016
Bibliographical note
Publisher Copyright: © 2016 Society for Imaging Science and Technology.
Funding
Funders | Funder number
---|---
National Science Foundation (NSF) | 1422811
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design
- Computer Science Applications
- Human-Computer Interaction
- Software
- Electrical and Electronic Engineering
- Atomic and Molecular Physics, and Optics