Cross-view convolutional networks

Nathan Jacobs, Scott Workman, Menghua Zhai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Billions of geotagged ground-level images are available via social networks and Google Street View. Recent work in computer vision has explored how these images could serve as a resource for understanding our world. However, most ground-level images are captured in cities and around famous landmarks; there are still very large geographic regions with few images. This leads to artifacts when estimating geospatial distributions. We propose to leverage satellite imagery, which has dense spatial coverage and increasingly high temporal frequency, to address this problem. We introduce Cross-view ConvNets (CCNs), a novel approach for estimating geospatial distributions in which semantic labels of ground-level imagery are transferred to satellite imagery to enable more accurate predictions.
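
This record contains only the abstract, so the paper's actual network design and training procedure are not specified here. The PyTorch sketch below is an assumption-laden illustration of the general label-transfer idea only: a satellite-image ConvNet (the hypothetical SatelliteLabelTransferNet) is trained to match class distributions presumed to have been derived from semantic labels of co-located ground-level images. The ResNet-18 backbone, the KL-divergence objective, and all identifiers are illustrative choices, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SatelliteLabelTransferNet(nn.Module):
    """Hypothetical satellite-image classifier supervised by ground-derived labels."""
    def __init__(self, num_classes):
        super().__init__()
        backbone = models.resnet18()  # randomly initialized; pretrained weights could be loaded
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
        self.head = nn.Linear(backbone.fc.in_features, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, satellite_images, ground_label_dist):
    # ground_label_dist: per-location class distributions assumed to come from
    # classifiers run on co-located ground-level photos (an illustrative stand-in).
    model.train()
    optimizer.zero_grad()
    log_probs = F.log_softmax(model(satellite_images), dim=1)
    loss = F.kl_div(log_probs, ground_label_dist, reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    num_classes = 10
    model = SatelliteLabelTransferNet(num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    satellite_images = torch.randn(4, 3, 224, 224)  # synthetic stand-in for satellite patches
    ground_label_dist = torch.softmax(torch.randn(4, num_classes), dim=1)
    print(train_step(model, optimizer, satellite_images, ground_label_dist))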

Original language: English
Title of host publication: 2016 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2016
ISBN (Electronic): 9781509032846
DOIs
State: Published - Aug 14 2017
Event: 2016 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2016 - Washington, United States
Duration: Oct 18 2016 → Oct 20 2016

Publication series

Name: Proceedings - Applied Imagery Pattern Recognition Workshop
ISSN (Print): 2164-2516

Conference

Conference: 2016 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2016
Country/Territory: United States
City: Washington
Period: 10/18/16 → 10/20/16

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

ASJC Scopus subject areas

  • General Engineering
