The facial appearance of a person is a product of many factors, including their gender, age, and ethnicity. Methods for estimating these latent factors directly from an image of a face have been studied extensively for decades. We extend this line of work to estimating the location where the image was taken. We propose a deep network architecture for making such predictions and demonstrate its superiority to other approaches in an extensive set of quantitative experiments on the GeoFaces dataset. Our experiments show that the ground truth location is the topmost prediction in 26% of cases, and is among the top five predictions in 47% of cases. In both cases, the deep-learning-based approach significantly outperforms both random chance and a baseline method.
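The reported numbers are top-1 and top-5 accuracy: a prediction counts as correct if the ground truth location appears among the model's k highest-scoring candidates. A minimal sketch of this metric is below; the function name and toy scores are illustrative, not from the paper.

```python
def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring predictions.

    scores: list of per-sample score lists over candidate locations (illustrative)
    labels: list of ground-truth location indices
    """
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k highest-scoring locations for this sample.
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# Toy example: 4 face images, 3 candidate locations.
scores = [[0.7, 0.2, 0.1],
          [0.1, 0.5, 0.4],
          [0.3, 0.3, 0.4],
          [0.2, 0.1, 0.7]]
labels = [0, 2, 2, 1]
print(top_k_accuracy(scores, labels, 1))  # 0.5
print(top_k_accuracy(scores, labels, 2))  # 0.75
```

With k=1 only the single best guess is credited; raising k (the paper uses k=5 over many more candidate locations) relaxes the criterion, which is why the reported accuracy rises from 26% to 47%.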
Title of host publication: 2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings
Number of pages: 5
State: Published - Dec 9 2015
Event: IEEE International Conference on Image Processing, ICIP 2015 - Quebec City, Canada
Duration: Sep 27 2015 → Sep 30 2015
Publication series: Proceedings - International Conference on Image Processing, ICIP
Bibliographical note: Publisher Copyright © 2015 IEEE.
Keywords
- facial features
- image localization
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition
- Signal Processing