FACE2GPS: Estimating geographic location from facial features

Mohammad T. Islam, Scott Workman, Nathan Jacobs

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

The facial appearance of a person is a product of many factors, including their gender, age, and ethnicity. Methods for estimating these latent factors directly from an image of a face have been extensively studied for decades. We extend this line of work to include estimating the location where the image was taken. We propose a deep network architecture for making such predictions and demonstrate its superiority to other approaches in an extensive set of quantitative experiments on the GeoFaces dataset. Our experiments show that in 26% of the cases the ground truth location is the topmost prediction, and if we allow ourselves to consider the top five predictions, the accuracy increases to 47%. In both cases, the deep learning based approach significantly outperforms random chance as well as another baseline method.
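The abstract evaluates predictions with top-1 and top-5 accuracy (26% and 47% respectively on GeoFaces). As a minimal sketch of that metric, the toy scores and labels below are illustrative only, not data from the paper:

```python
# Top-k accuracy: fraction of samples whose true class is among the
# k highest-scoring candidate classes. Illustrative values, not paper data.

def top_k_accuracy(scores, labels, k):
    """scores: list of per-class score lists; labels: true class indices."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores for this sample
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# Toy example: 3 samples, 4 candidate locations
scores = [
    [0.1, 0.6, 0.2, 0.1],    # true class 1 -> a top-1 hit
    [0.4, 0.1, 0.3, 0.2],    # true class 2 -> only a top-2 hit
    [0.25, 0.25, 0.2, 0.3],  # true class 0 -> only a top-2 hit
]
labels = [1, 2, 0]
print(top_k_accuracy(scores, labels, 1))  # 0.333...
print(top_k_accuracy(scores, labels, 2))  # 1.0
```

Under this metric, increasing k can only keep accuracy the same or raise it, which is why the paper's top-5 figure (47%) exceeds its top-1 figure (26%).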

Original language: English
Title of host publication: 2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings
Pages: 1608-1612
Number of pages: 5
ISBN (Electronic): 9781479983391
DOIs
State: Published - Dec 9 2015
Event: IEEE International Conference on Image Processing, ICIP 2015 - Quebec City, Canada
Duration: Sep 27 2015 - Sep 30 2015

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2015-December
ISSN (Print): 1522-4880

Conference

Conference: IEEE International Conference on Image Processing, ICIP 2015
Country/Territory: Canada
City: Quebec City
Period: 9/27/15 - 9/30/15

Bibliographical note

Publisher Copyright:
© 2015 IEEE.

Keywords

  • facial features
  • image localization

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing
