Learning to Map Nearly Anything

Tawfiq Salem, Connor Greenwell, Hunter Blanton, Nathan Jacobs

Research output: Working paper › Preprint


Abstract

Looking at the world from above, it is possible to estimate many properties of a given location, including the type of land cover and the expected land use. Historically, such tasks have relied on relatively coarse-grained categories due to the difficulty of obtaining fine-grained annotations. In this work, we propose an easily extensible approach that makes it possible to estimate fine-grained properties from overhead imagery. In particular, we propose a cross-modal distillation strategy to learn to predict the distribution of fine-grained properties from overhead imagery, without requiring any manual annotation of overhead imagery. We show that our learned models can be used directly for applications in mapping and image localization.
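The cross-modal distillation described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration only, not the authors' implementation: the model names, the ResNet-18 backbones, the KL-divergence objective, and the number of property bins (num_properties) are all assumptions made for the example; the paper's actual teacher model, architecture, and training details may differ.

# Minimal sketch of cross-modal distillation: a "student" network learns to
# predict, from an overhead image, the same property distribution that a
# frozen "teacher" produces from a co-located ground-level image, so no
# manual annotation of overhead imagery is required.
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

num_properties = 64  # hypothetical number of fine-grained property bins

# Student: maps an overhead image to a distribution over properties.
student = models.resnet18(weights=None)
student.fc = nn.Linear(student.fc.in_features, num_properties)

# Teacher: stands in for any pretrained predictor operating on
# ground-level imagery; it is frozen during distillation.
teacher = models.resnet18(weights=None)
teacher.fc = nn.Linear(teacher.fc.in_features, num_properties)
teacher.eval()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def distillation_step(overhead_batch, ground_batch):
    """One training step: match the student's predicted distribution
    to the teacher's soft labels via KL divergence."""
    with torch.no_grad():
        target = F.softmax(teacher(ground_batch), dim=1)  # soft labels
    log_pred = F.log_softmax(student(overhead_batch), dim=1)
    loss = F.kl_div(log_pred, target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a batch of paired images at one location.
overhead = torch.randn(8, 3, 224, 224)
ground = torch.randn(8, 3, 224, 224)
print(distillation_step(overhead, ground))

Once trained this way, only the student is needed at inference time, which is what makes the resulting model directly usable for mapping: it produces fine-grained property distributions from overhead imagery alone.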
Original language: Undefined/Unknown
State: Published - Sep 16, 2019

Keywords

  • cs.CV
