SparseFusion: Dynamic Human Avatar Modeling from Sparse RGBD Images

Xinxin Zuo, Sen Wang, Jiangbin Zheng, Weiwei Yu, Minglun Gong, Ruigang Yang, Li Cheng

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

In this paper, we propose a novel approach to reconstructing 3D human body shapes from a sparse set of frames captured by a single RGBD camera. We specifically focus on the realistic setting where human subjects move freely during capture. The main challenge is how to robustly fuse these sparse frames into a canonical 3D model under pose changes and surface occlusions. Our framework addresses this in three steps. First, based on a generative human template, an initial pairwise alignment is performed for every two frames with sufficient overlap; this is followed by a global non-rigid registration procedure in which partial results from the RGBD frames are assembled into a unified 3D shape, guided by the correspondences from the pairwise alignment; finally, the texture map of the reconstructed human model is optimized to deliver a clear and spatially consistent texture. Empirical evaluations on synthetic and real datasets demonstrate, both quantitatively and qualitatively, the superior performance of our framework in reconstructing complete 3D human models with high fidelity. It is worth noting that our framework is flexible, with potential applications beyond shape reconstruction; as an example, we showcase its use in reshaping and reposing the model into a new avatar.
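As an illustration of the fusion pipeline sketched in the abstract, the following minimal Python snippet (not the authors' code) shows how pairwise alignments between overlapping partial scans can be chained into a single canonical point set. For brevity it substitutes a rigid Kabsch alignment for the paper's template-guided non-rigid registration; the function names, the data layout, and the assumption that pairwise correspondences are already available are all hypothetical.

    import numpy as np

    def rigid_align(src, dst):
        # Least-squares rigid transform (R, t) with R @ src_i + t ~= dst_i,
        # computed via the Kabsch algorithm. src, dst: (N, 3) corresponding points.
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
        t = c_dst - R @ c_src
        return R, t

    def fuse_to_canonical(frames, pair_corrs):
        # frames: list of (N_i, 3) partial scans; frame 0 defines the canonical pose.
        # pair_corrs: {(i, i + 1): (idx_i, idx_next)} corresponding point indices
        # between consecutive overlapping frames (assumed given here; in the paper
        # such correspondences come from fitting a generative human template).
        R_acc, t_acc = np.eye(3), np.zeros(3)
        fused = [frames[0]]
        for i in range(len(frames) - 1):
            idx_i, idx_next = pair_corrs[(i, i + 1)]
            # Align frame i+1 onto frame i, then compose with the accumulated
            # transform so every frame lands in frame 0's coordinate system.
            R, t = rigid_align(frames[i + 1][idx_next], frames[i][idx_i])
            R_acc, t_acc = R_acc @ R, R_acc @ t + t_acc
            fused.append(frames[i + 1] @ R_acc.T + t_acc)
        return np.vstack(fused)

In a full implementation along the lines of the paper, the rigid solve would be replaced by a non-rigid deformation optimization regularized by the human template, followed by the texture-map optimization described in the abstract.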

Original language: English
Article number: 9113759
Pages (from-to): 1617-1629
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 23
DOIs
State: Published - 2021

Bibliographical note

Publisher Copyright:
© 1999-2012 IEEE.

Funding

Manuscript received January 8, 2020; revised April 19, 2020 and May 28, 2020; accepted June 5, 2020. Date of publication June 10, 2020; date of current version May 26, 2021. This work was supported in part by the USDA under Grant 2018-67021-27416, in part by NSFC under Grants 61972321 and 61603302, in part by NSERC Discovery under Grant RGPIN-2019-04575, in part by University of Alberta-Huawei Joint Innovation Collaboration grants, in part by Key R & D plan of Shaanxi Province (No. 2019GY-120), and in part by the 111 Project under Grant B13044. The associate editor coordinating the review of this manuscript and approving it for publication was Sebastian Knorr. (Corresponding authors: Jiangbin Zheng; Li Cheng.) Xinxin Zuo and Sen Wang are with the Northwestern Polytechnical University, Xi’an 710072, China, and with the University of Kentucky, Lexington, KY 40508 USA, and also with the University of Alberta, Edmonton, AB T6G 2R3, Canada (e-mail: [email protected]; [email protected]).

Funders and funder numbers:
• Key R & D plan of Shaanxi Province: 2019GY-120
• University of Alberta-Huawei Joint Innovation Collaboration
• U.S. Department of Agriculture: 2018-67021-27416
• Natural Sciences and Engineering Research Council of Canada: RGPIN-2019-04575
• National Natural Science Foundation of China (NSFC): 61603302, 61972321
• Higher Education Discipline Innovation Project: B13044

Keywords

• RGBD
• human body
• non-rigid fusion

ASJC Scopus subject areas

• Signal Processing
• Media Technology
• Computer Science Applications
• Electrical and Electronic Engineering
