Abstract
In this work, we propose a cross-view learning approach, in which images captured from a ground-level view are used as weakly supervised annotations for interpreting overhead imagery. The outcome is a convolutional neural network for overhead imagery that is capable of predicting the type and count of objects that are likely to be seen from a ground-level perspective. We demonstrate our approach on a large dataset of geotagged ground-level and overhead imagery and find that our network captures semantically meaningful features, despite being trained without manual annotations.
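To make the setup the abstract describes more concrete, here is a minimal sketch of cross-view weak supervision: a CNN over an overhead image tile regresses per-class object counts, with the targets standing in for counts derived from co-located, geotagged ground-level imagery (e.g., the output of an off-the-shelf detector) rather than manual annotations. Everything in the sketch is an illustrative assumption, not the authors' released code: the class count, the ResNet-18 backbone, the Poisson count loss, and the names `OverheadCountNet` and `weak_counts` are all hypothetical.

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 20  # assumed number of object categories (not from the paper)

class OverheadCountNet(nn.Module):
    """CNN over an overhead image tile that regresses per-class object counts."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        # Any CNN backbone would do; ResNet-18 is an arbitrary choice here.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, overhead: torch.Tensor) -> torch.Tensor:
        # Softplus keeps predicted counts strictly positive for the Poisson loss.
        return nn.functional.softplus(self.backbone(overhead))

model = OverheadCountNet()
# Object counts are naturally modeled as Poisson-distributed; log_input=False
# because the network outputs rates directly rather than log-rates.
loss_fn = nn.PoissonNLLLoss(log_input=False)

overhead = torch.randn(8, 3, 224, 224)  # a batch of overhead image tiles
# Weak labels: per-class counts that would come from running a detector on
# ground-level photos taken near each tile (simulated with random data here).
weak_counts = torch.randint(0, 5, (8, NUM_CLASSES)).float()

loss = loss_fn(model(overhead), weak_counts)
loss.backward()  # trained without any manual overhead annotations
```

The key design point this illustrates is that the overhead network never sees a hand-labeled overhead image; the ground-level view supplies the only (noisy) training signal.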
| Original language | English |
| --- | --- |
| Title of host publication | 2018 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 - Proceedings |
| Pages | 4375-4378 |
| Number of pages | 4 |
| ISBN (Electronic) | 9781538671504 |
| DOIs | |
| State | Published - Oct 31 2018 |
| Event | 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018, Valencia, Spain (Jul 22 2018 → Jul 27 2018) |
Publication series
| Name | International Geoscience and Remote Sensing Symposium (IGARSS) |
| --- | --- |
| Volume | 2018-July |
Conference
| Conference | 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 |
| --- | --- |
| Country/Territory | Spain |
| City | Valencia |
| Period | 7/22/18 → 7/27/18 |
Bibliographical note
Publisher Copyright: © 2018 IEEE.
Keywords
- Semantic transfer
- Weak supervision
ASJC Scopus subject areas
- Computer Science Applications
- General Earth and Planetary Sciences