Abstract
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images, using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos, with or without time stamps, show that our approach is capable of maintaining photorealistic quality for a variety of real scenes.
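The abstract describes a two-stage pipeline: temporal interpolation to software-synchronize the unsynchronized inputs to a common time instant, followed by spatial interpolation to synthesize the novel view. The sketch below illustrates only the control flow of that pipeline under stated assumptions; it is not the paper's method. In particular, `synchronize_pair` uses a plain cross-fade as a stand-in for the paper's spatial-temporal registration and edge-preserving morphing, the final weighted blend stands in for the actual spatial light field interpolation, and all function names, parameters, and the toy data are hypothetical.

```python
import numpy as np

def synchronize_pair(frame_a, frame_b, alpha):
    """Stage 1 stand-in: temporal interpolation between two consecutive
    frames of one camera. The paper uses spatial-temporal image
    registration followed by edge-preserving image morphing; the plain
    cross-fade here is only a placeholder for that step."""
    return (1.0 - alpha) * frame_a + alpha * frame_b

def render_space_time(videos, offsets, t, spatial_weights):
    """Render a view at global time t from unsynchronized videos (sketch).

    videos:  list of (num_frames, H, W, 3) float arrays, one per camera
    offsets: per-camera subframe temporal offsets; in the paper these are
             estimated by a dedicated offset-estimation algorithm
    spatial_weights: blending weights for the target viewpoint (hypothetical)
    """
    synced = []
    for video, offset in zip(videos, offsets):
        # Map global time to this camera's local (fractional) frame index.
        local = float(np.clip(t - offset, 0.0, len(video) - 1))
        i = min(int(local), len(video) - 2)
        alpha = local - i
        # Stage 1: software-synchronize this camera to time t.
        synced.append(synchronize_pair(video[i], video[i + 1], alpha))
    # Stage 2 stand-in: spatial interpolation of the now-synchronized
    # images. The paper warps each image toward the novel viewpoint
    # before blending; a weighted blend is shown here for brevity.
    out = np.zeros_like(synced[0])
    for w, img in zip(spatial_weights, synced):
        out += w * img
    return out

# Usage: three 30-frame toy "videos" with assumed subframe offsets.
rng = np.random.default_rng(0)
videos = [rng.random((30, 48, 64, 3)) for _ in range(3)]
novel = render_space_time(videos, offsets=[0.0, 0.37, 0.81],
                          t=12.5, spatial_weights=[0.2, 0.5, 0.3])
print(novel.shape)  # (48, 64, 3)
```

In practice, the offsets passed in above would come from the paper's subframe offset-estimation step rather than being known a priori.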
| Original language | English |
| --- | --- |
| Pages (from-to) | 697-710 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Visualization and Computer Graphics |
| Volume | 13 |
| Issue number | 4 |
| DOIs | |
| State | Published - Jul 2007 |
Bibliographical note
Funding Information: The authors would like to thank Greg Turk for suggestions and Sifang Li for participation in the experiments. This work was done while the authors were at the University of Kentucky and is supported in part by the University of Kentucky Research Foundation, the US Department of Homeland Security, and US National Science Foundation Grant IIS-0448185.
Keywords
- Epipolar constraint
- Image morphing
- Image-based rendering
- Space-time light field
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Computer Graphics and Computer-Aided Design