Abstract
In this paper we present a novel autonomous pipeline for building a personalized parametric model (a pose-driven avatar) using a single depth sensor. Our method first captures a few high-quality scans of the user rotating in multiple poses, viewed from different directions. We fit a generic human template to each incomplete scan using template fitting techniques, and register all scans across the different poses using global consistency constraints. After registration, these watertight models in different poses are used to train a parametric model in a fashion similar to the SCAPE method. Once the parametric model is built, it can be used as an animatable avatar or, more interestingly, to synthesize dynamic 3D models from single-view depth videos. Experimental results demonstrate the effectiveness of our system in producing dynamic models.
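For context, the SCAPE-style reconstruction step that the abstract refers to can be sketched briefly. The snippet below is a minimal, illustrative sketch and not the authors' code: it assumes per-triangle rigid rotations `R` taken from the driving pose and learned pose-dependent deformations `Q`, and recovers posed vertex positions with a sparse linear least-squares solve over the mesh edges, as in the SCAPE formulation. All names, shapes, and the single pinned vertex are illustrative assumptions.

```python
# Hedged sketch of a SCAPE-style mesh reconstruction step (not the paper's code).
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def reconstruct_posed_mesh(verts, tris, R, Q):
    """Recover posed vertex positions from per-triangle transforms.

    verts : (V, 3) rest-pose template vertices
    tris  : (T, 3) triangle vertex indices
    R     : (T, 3, 3) rigid rotations from the driving pose
    Q     : (T, 3, 3) learned pose-dependent (non-rigid) deformations
    """
    V = len(verts)
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for k, (i, j, l) in enumerate(tris):
        A = R[k] @ Q[k]                       # combined per-triangle transform
        for a, b in ((i, j), (i, l)):         # the two template edges
            target = A @ (verts[b] - verts[a])
            for d in range(3):                # one scalar equation per axis
                rows += [r, r]
                cols += [3 * b + d, 3 * a + d]
                vals += [1.0, -1.0]
                rhs.append(target[d])
                r += 1
    for d in range(3):                        # pin vertex 0 to fix global translation
        rows.append(r); cols.append(d); vals.append(1.0)
        rhs.append(verts[0, d])
        r += 1
    M = coo_matrix((vals, (rows, cols)), shape=(r, 3 * V)).tocsr()
    x = lsqr(M, np.asarray(rhs))[0]
    return x.reshape(V, 3)
```

In a full pipeline, `Q` would be predicted as a linear function of pose features and a learned shape component, and the same edge-based least-squares solve would be reused when fitting the trained model to incoming single-view depth frames.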
Original language | English |
---|---|
Title of host publication | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
Pages | 676-683 |
Number of pages | 8 |
ISBN (Electronic) | 9781479951178 |
DOIs | |
State | Published - Sep 24 2014 |
Event | 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014 - Columbus, United States. Duration: Jun 23 2014 → Jun 28 2014 |
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
---|---|
ISSN (Print) | 1063-6919 |
Conference
Conference | 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014 |
---|---|
Country/Territory | United States |
City | Columbus |
Period | 6/23/14 → 6/28/14 |
Bibliographical note
Publisher Copyright: © 2014 IEEE.
Keywords
- 3D Model
- Avatar
- Mesh Registration
- SCAPE
- Template Fitting
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition