We propose a novel method for detecting horizontal vanishing points and the zenith vanishing point in man-made environments. The dominant trend in existing methods is to first find candidate vanishing points, then remove outliers by enforcing mutual orthogonality. Our method reverses this process: we propose a set of horizon line candidates and score each based on the vanishing points it contains. A key element of our approach is the use of global image context, extracted with a deep convolutional network, to constrain the set of candidates under consideration. Our method does not make a Manhattan-world assumption and can operate effectively on scenes with only a single horizontal vanishing point. We evaluate our approach on three benchmark datasets and achieve state-of-the-art performance on each. In addition, our approach is significantly faster than the previous best method.
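The candidate-scoring idea in the abstract can be sketched in simplified form. The toy below is illustrative only, not the paper's actual algorithm: it restricts horizon candidates to horizontal lines parameterized by image height, and scores each candidate by how strongly pairwise line-segment intersections (proxy vanishing points) cluster near it, using an assumed Gaussian vote. All function names and parameters are hypothetical.

```python
import itertools
import numpy as np

def intersect(l1, l2):
    """Intersect two lines in homogeneous coordinates via the cross product."""
    p = np.cross(l1, l2)
    # A near-zero third coordinate means the lines are (nearly) parallel.
    return p / p[2] if abs(p[2]) > 1e-9 else None

def score_horizon(horizon_y, intersections, sigma=5.0):
    """Sum Gaussian votes from intersection points close to the candidate horizon."""
    ys = np.array([p[1] for p in intersections])
    return np.exp(-((ys - horizon_y) ** 2) / (2.0 * sigma**2)).sum()

def best_horizon(segments, candidate_ys):
    """Pick the candidate horizon height best supported by segment intersections.

    segments: list of ((x1, y1), (x2, y2)) endpoint pairs.
    candidate_ys: candidate horizon heights (the simplified candidate set).
    """
    # Convert each segment to a homogeneous line via the cross product of its endpoints.
    lines = [np.cross([*a, 1.0], [*b, 1.0]) for a, b in segments]
    pts = [p for l1, l2 in itertools.combinations(lines, 2)
           if (p := intersect(l1, l2)) is not None]
    return max(candidate_ys, key=lambda y: score_horizon(y, pts))
```

For example, three segments whose extensions all meet at a point with y = 10 should select the candidate at height 10 over candidates at 0 or 50.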
Title of host publication: Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Number of pages: 9
State: Published - Dec 9 2016
Event: 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016 - Las Vegas, United States
Duration: Jun 26 2016 → Jul 1 2016
Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Conference: 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Period: 6/26/16 → 7/1/16
Bibliographical note (Funding Information):
We gratefully acknowledge the support of DARPA (contract CSSG D11AP00255). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.
© 2016 IEEE.
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition