Abstract
The horizon line is an important contextual attribute for a wide variety of image understanding tasks. As such, many methods have been proposed to estimate its location from a single image. These methods typically require the image to contain specific cues, such as vanishing points, coplanar circles, and regular textures, thus limiting their real-world applicability. We introduce a large, realistic evaluation dataset, Horizon Lines in the Wild (HLW), containing natural images with labeled horizon lines. Using this dataset, we investigate the application of convolutional neural networks for directly estimating the horizon line, without requiring any explicit geometric constraints or other special cues. An extensive evaluation shows that using our CNNs, either in isolation or in conjunction with a previous geometric approach, we achieve state-of-the-art results on the challenging HLW dataset and two existing benchmark datasets.
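For context, the sketch below shows one way the abstract's idea of directly estimating the horizon line with a CNN could be framed: a single forward pass that regresses two horizon parameters from an image. The (slope, offset) parameterization, the ResNet-18 backbone, and the helper name `HorizonRegressor` are illustrative assumptions, not details taken from the paper, which the abstract notes may also be combined with a geometric approach.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class HorizonRegressor(nn.Module):
    """Illustrative CNN that maps an image to horizon-line parameters.

    The horizon is parameterized as (slope, offset) in normalized image
    coordinates; this parameterization and the ResNet-18 backbone are
    assumptions for illustration, not the architecture from the paper.
    """

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any ImageNet-style backbone works
        # Replace the 1000-way classifier with a 2-unit regression head.
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)
        self.backbone = backbone

    def forward(self, images):
        # images: (N, 3, H, W) float tensor, roughly ImageNet-normalized
        return self.backbone(images)  # (N, 2): predicted (slope, offset)


if __name__ == "__main__":
    model = HorizonRegressor()
    prediction = model(torch.randn(1, 3, 224, 224))
    print(prediction.shape)  # torch.Size([1, 2])
```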
| Original language | English |
| --- | --- |
| Pages | 20.1-20.12 |
| DOIs | |
| State | Published - 2016 |
| Event | 27th British Machine Vision Conference, BMVC 2016 - York, United Kingdom. Duration: Sep 19 2016 → Sep 22 2016 |
Conference
| Conference | 27th British Machine Vision Conference, BMVC 2016 |
| --- | --- |
| Country/Territory | United Kingdom |
| City | York |
| Period | 9/19/16 → 9/22/16 |
Bibliographical note
Funding Information:
We are grateful to Jan-Michael Frahm, Jared Heinly, Yunpeng Li, Torsten Sattler, Noah Snavely, and Kyle Wilson for making SfM models available to us. This research was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory, contract FA8650-12-C-7212. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.
Publisher Copyright:
© 2016. The copyright of this document resides with its authors.
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition