Abstract
This article presents a novel approach for enhancing depth maps from an RGB-D video sequence. The basic idea is to exploit the photometric information in the color sequence to resolve the inherent ambiguity of the shape-from-shading problem. Instead of making assumptions about surface albedo or requiring controlled object motion and lighting, we use the lighting variations introduced by casual object movement, effectively computing photometric stereo from a moving object under natural illumination. One of the key technical challenges is establishing correspondences over the entire image set. We therefore develop a lighting-insensitive, robust pixel matching technique that outperforms optical flow methods in the presence of lighting variations. An adaptive reference-frame selection procedure is introduced to improve robustness to imperfect Lambertian reflections. In addition, we present an expectation-maximization framework that recovers surface normals and albedo simultaneously, without any regularization term. We validate our method on both synthetic and real datasets, demonstrating superior performance in both surface detail recovery and intrinsic decomposition.
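For context on the photometric-stereo formulation the abstract refers to, the following is a minimal sketch of classical per-pixel Lambertian estimation under the simplifying assumption of known directional lightings. It is illustrative only: the paper's actual method operates under natural illumination and recovers normals and albedo jointly within an EM framework, and the function name and array shapes here are assumptions, not the authors' code.

```python
import numpy as np

def lambertian_normals_albedo(intensities, lightings):
    """Per-pixel Lambertian photometric stereo (illustrative sketch).

    intensities: (K, P) array, K aligned observations of P pixels.
    lightings:   (K, 3) array of known directional lightings (an assumption;
                 the paper instead works with uncontrolled natural lighting).
    Returns per-pixel unit normals (P, 3) and albedo (P,).
    """
    # Lambertian model: I = albedo * (N . L). Solve L @ g = I for each pixel,
    # where g = albedo * normal, using least squares over all pixels at once.
    g, *_ = np.linalg.lstsq(lightings, intensities, rcond=None)  # (3, P)
    g = g.T                                                       # (P, 3)
    albedo = np.linalg.norm(g, axis=1)
    normals = g / np.maximum(albedo[:, None], 1e-8)
    return normals, albedo
```

With at least three non-coplanar lighting observations per pixel, the albedo-scaled normal is determined up to noise; the abstract's correspondence step is what makes such multi-observation, per-pixel reasoning possible for a moving object.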
| Original language | English |
| --- | --- |
| Article number | 8911257 |
| Pages (from-to) | 2720-2734 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 42 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 1 2020 |
Bibliographical note
Publisher Copyright: © 1979-2012 IEEE.
Keywords
- Depth enhancement
- intrinsic decomposition
- shape from shading
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition
- Computational Theory and Mathematics
- Artificial Intelligence
- Applied Mathematics