TY - GEN
T1 - Joint depth and alpha matte optimization via fusion of stereo and time-of-flight sensor
AU - Zhu, Jiejie
AU - Liao, Miao
AU - Yang, Ruigang
AU - Pan, Zhigeng
PY - 2009
Y1 - 2009
N2 - We present a new approach to iteratively estimate both a high-quality depth map and an alpha matte from a single image or a video sequence. Scene depth, which is invariant to illumination changes, color similarity and motion ambiguity, provides a natural and robust cue for foreground/background segmentation - a prerequisite for matting. The image mattes, on the other hand, encode rich information near boundaries, where both passive and active sensing methods perform poorly. We develop a method that combines the complementary nature of scene depth and alpha matte to mutually enhance their qualities. We formulate depth inference as a global optimization problem in which information from passive stereo, an active range sensor and the matte is merged. The depth map is used in turn to enhance the matting. In addition, we extend this approach to video matting by incorporating temporal coherence, which reduces flickering in the composite video. We show that these techniques lead to improved accuracy and robustness for both static and dynamic scenes.
AB - We present a new approach to iteratively estimate both a high-quality depth map and an alpha matte from a single image or a video sequence. Scene depth, which is invariant to illumination changes, color similarity and motion ambiguity, provides a natural and robust cue for foreground/background segmentation - a prerequisite for matting. The image mattes, on the other hand, encode rich information near boundaries, where both passive and active sensing methods perform poorly. We develop a method that combines the complementary nature of scene depth and alpha matte to mutually enhance their qualities. We formulate depth inference as a global optimization problem in which information from passive stereo, an active range sensor and the matte is merged. The depth map is used in turn to enhance the matting. In addition, we extend this approach to video matting by incorporating temporal coherence, which reduces flickering in the composite video. We show that these techniques lead to improved accuracy and robustness for both static and dynamic scenes.
UR - http://www.scopus.com/inward/record.url?scp=70450206554&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=70450206554&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2009.5206520
DO - 10.1109/CVPRW.2009.5206520
M3 - Conference contribution
AN - SCOPUS:70450206554
SN - 9781424439935
T3 - 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009
SP - 453
EP - 460
BT - 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
T2 - 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009
Y2 - 20 June 2009 through 25 June 2009
ER -