What goes where: Predicting object distributions from above

Connor Greenwell, Scott Workman, Nathan Jacobs

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

In this work, we propose a cross-view learning approach, in which images captured from a ground-level view are used as weakly supervised annotations for interpreting overhead imagery. The outcome is a convolutional neural network for overhead imagery that is capable of predicting the type and count of objects that are likely to be seen from a ground-level perspective. We demonstrate our approach on a large dataset of geotagged ground-level and overhead imagery and find that our network captures semantically meaningful features, despite being trained without manual annotations.
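To make the weak-supervision idea concrete, here is a minimal numpy sketch, under assumptions not stated in the abstract: the ground-level imagery is reduced to a per-location object-count histogram, which (after normalization) serves as the target distribution for the network's prediction on the corresponding overhead patch, trained with a KL-divergence loss. The variable names, class list, and loss choice are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: turns raw scores into a distribution.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): divergence of the predicted distribution q
    # from the weakly supervised target p.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical target: object counts from ground-level images near this
# location, e.g. for the classes [person, car, tree, building].
counts = np.array([3.0, 5.0, 1.0, 1.0])
target = counts / counts.sum()        # normalize counts to a distribution

# Hypothetical network output: raw class scores for the overhead patch.
logits = np.array([0.2, 1.1, -0.5, -0.4])
pred = softmax(logits)

loss = kl_divergence(target, pred)    # training signal, no manual labels
```

In this formulation the only supervision is the geotagged ground-level imagery itself, which matches the paper's claim that the network is trained without manual annotations.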

Original language: English
Title of host publication: 2018 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 - Proceedings
Pages: 4375-4378
Number of pages: 4
ISBN (Electronic): 9781538671504
DOIs
State: Published - Oct 31 2018
Event: 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 - Valencia, Spain
Duration: Jul 22 2018 - Jul 27 2018

Publication series

Name: International Geoscience and Remote Sensing Symposium (IGARSS)
Volume: 2018-July

Conference

Conference: 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018
Country/Territory: Spain
City: Valencia
Period: 7/22/18 - 7/27/18

Bibliographical note

Publisher Copyright:
© 2018 IEEE.

Keywords

  • Semantic transfer
  • Weak supervision

ASJC Scopus subject areas

  • Computer Science Applications
  • General Earth and Planetary Sciences
