In this paper we present a novel approach for depth map enhancement from an RGB-D video sequence. The basic idea is to exploit the photometric information in the color sequence. Instead of making any assumptions about surface albedo or requiring controlled object motion and lighting, we use the lighting variations introduced by casual object movement. In effect, we compute photometric stereo from a moving object under natural illumination. The key technical challenge is to establish correspondences over the entire image set. We therefore develop a lighting-insensitive robust pixel matching technique that outperforms optical-flow methods in the presence of lighting variations. In addition, we present an expectation-maximization framework to recover the surface normal and albedo simultaneously, without any regularization term. We have validated our method on both synthetic and real datasets to show its superior performance on both surface-detail recovery and intrinsic decomposition.
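The classical photometric-stereo formulation that underlies this line of work can be sketched as follows. This is a minimal illustration of the standard least-squares recovery of albedo and surface normal from multiple known lighting directions, not the paper's EM-based method; the function name and synthetic data are hypothetical.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover albedo and unit normal at one pixel from per-light intensities.

    intensities: (K,) observed intensities under K distant directional lights
    light_dirs:  (K, 3) unit lighting directions

    Lambertian model: I_k = albedo * dot(n, l_k). Stacking the K equations
    gives L g = I with g = albedo * n, solved in the least-squares sense.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)       # |g| is the albedo
    normal = g / albedo              # g / |g| is the unit normal
    return albedo, normal

# Synthetic sanity check: render one pixel with a known normal and albedo,
# then verify both are recovered.
true_normal = np.array([0.0, 0.0, 1.0])
true_albedo = 0.8
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
I = true_albedo * (L @ true_normal)
albedo, normal = photometric_stereo(I, L)
```

The paper's setting is harder than this sketch: lighting directions are not known or controlled, and correspondences across the moving-object sequence must be established first, which is what motivates the robust matching and EM estimation described above.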
|Title of host publication||Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017|
|Number of pages||10|
|State||Published - Dec 22 2017|
|Event||16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy|
Duration: Oct 22 2017 → Oct 29 2017
|Name||Proceedings of the IEEE International Conference on Computer Vision|
|Conference||16th IEEE International Conference on Computer Vision, ICCV 2017|
|Period||10/22/17 → 10/29/17|
Bibliographical note
Funding Information:
This work is partially supported by the US NSF (IIS-1231545, IIP-1543172), US Army Research grant W911NF-14-1-0437, NSFC (No. 61332017, 51475373, 61603302, 51375390), the Key Industrial Innovation Chain of Shaanxi Province Industrial Area (2015KTZDGY04-01, 2016KTZDGY06-01), and the Fundamental Research Funds for the Central Universities (No. 3102016ZY013). Jiangbin Zheng and Ruigang Yang are the co-corresponding authors for this paper.
© 2017 IEEE.
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition