Abstract
With the widespread adoption of consumer 3D-TV technology, stereoscopic videoconferencing systems are emerging. However, the special glasses participants wear to see 3D can create distracting artifacts in the captured images. This paper presents a computational framework to reduce undesirable artifacts in the eye regions caused by these 3D glasses. More specifically, we add polarized filters to the stereo camera so that partial images of reflection can be captured. A novel Bayesian model is then developed to describe the imaging process of the eye regions, including darkening and reflection, and to infer the eye regions based on Classification Expectation-Maximization (EM). The recovered eye regions under the glasses are brighter and contain little reflection, leading to a more natural videoconferencing experience. Qualitative evaluations and user studies are conducted to demonstrate the substantial improvement our approach can achieve.
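As a rough illustration of the Classification EM inference mentioned in the abstract, the sketch below hard-assigns pixels of an eye patch to one of two intensity classes (e.g. underlying eye vs. reflection) under a simple two-component Gaussian model. The function name `classification_em`, the Gaussian assumption, and all parameters are illustrative assumptions for this sketch; the paper's actual Bayesian imaging model of darkening and reflection is more elaborate.

```python
import numpy as np

def classification_em(pixels, n_iter=20):
    """Toy Classification EM: split pixel intensities into two classes
    under a two-component Gaussian model (hypothetical illustration)."""
    # Initialise class means from the data range, equal priors
    mu = np.array([pixels.min(), pixels.max()], dtype=float)
    sigma = np.array([pixels.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: per-class (unnormalised) log-posterior for every pixel
        log_lik = (-0.5 * ((pixels[:, None] - mu) / sigma) ** 2
                   - np.log(sigma) + np.log(pi))
        # C-step: hard-assign each pixel to its most likely class
        labels = np.argmax(log_lik, axis=1)
        # M-step: re-estimate parameters from the hard assignments
        for k in (0, 1):
            members = pixels[labels == k]
            if members.size:
                mu[k] = members.mean()
                sigma[k] = members.std() + 1e-6
                pi[k] = members.size / pixels.size
    return labels, mu, sigma

# Usage: separate a flattened grayscale eye patch into two intensity classes
patch = np.random.rand(64 * 64)
labels, mu, sigma = classification_em(patch)
```

The hard classification step (C-step) between the E- and M-steps is what distinguishes Classification EM from standard EM, which would instead weight the parameter updates by soft posterior probabilities.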
| Original language | English |
|---|---|
| Article number | 6619000 |
| Pages (from-to) | 1179-1186 |
| Number of pages | 8 |
| Journal | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
| DOIs | |
| State | Published - 2013 |
| Event | 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013 - Portland, OR, United States. Duration: Jun 23 2013 → Jun 28 2013 |
Keywords
- 3D Videoconferencing
- Reflection Reduction
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition