Time-of-flight range sensors have error characteristics that are complementary to those of passive stereo. They provide real-time depth estimates in conditions where passive stereo does not work well, such as on white walls. However, these sensors are noisy and often perform poorly on the textured scenes where stereo excels. We explore these complementary characteristics and introduce a method for combining the results of both modalities that achieves better accuracy than either alone. In our fusion framework, the depth probability distribution functions from each sensor modality are formulated and optimized. Robust and adaptive fusion is built on a pixel-wise reliability weighting function computed for each method. In addition, because time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated; we introduce a calibration method that substantially improves upon the manufacturer's. We demonstrate the improved accuracy and robustness of our techniques through an extensive set of experiments.
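A minimal sketch of the pixel-wise, reliability-weighted fusion the abstract describes (illustrative only: the weight maps here are hypothetical stand-ins for the paper's reliability functions, and the paper optimizes full depth probability distributions rather than taking a simple weighted average):

```python
import numpy as np

def fuse_depth(depth_tof, depth_stereo, w_tof, w_stereo, eps=1e-8):
    """Fuse two per-pixel depth maps by their reliability weights.

    depth_tof, depth_stereo : arrays of depth estimates (same units/shape).
    w_tof, w_stereo         : per-pixel reliability weights, e.g. derived
                              from sensor noise models; hypothetical
                              placeholders for the paper's weighting.
    """
    w_sum = w_tof + w_stereo + eps  # eps guards against zero total weight
    return (w_tof * depth_tof + w_stereo * depth_stereo) / w_sum

# Example: at this pixel the ToF estimate is trusted twice as much.
fused = fuse_depth(np.array([2.0]), np.array([4.0]),
                   np.array([2.0]), np.array([1.0]))
```

In practice the weights would vary per pixel with local texture (favoring stereo) and surface reflectivity (favoring time-of-flight), which is what makes the two modalities complementary.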
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
State: Published - 2011
Bibliographical note
Funding Information:
Ruigang Yang was supported by the University of Kentucky Research Foundation, the US Department of Homeland Security, US National Science Foundation (NSF) HCC-0448185, and CPA-0811647. James E. Davis was supported by NSF CCF-0746690. Zhigeng Pan was supported by the China NSFC 60533080 and the China 863 Plans 2006AA01Z335. The authors thank Qing Zhang and Xueqing Xiang for collecting part of the data. This work was done when Jiejie Zhu was with the University of Kentucky as a postdoctoral researcher.
Keywords
- Time-of-Flight sensor
- global optimization
- multisensor fusion
- stereo vision
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition
- Computational Theory and Mathematics
- Artificial Intelligence
- Applied Mathematics