TY - GEN
T1 - A generative human-robot motion retargeting approach using a single depth sensor
AU - Wang, Sen
AU - Zuo, Xinxin
AU - Wang, Runxiao
AU - Cheng, Fuhua
AU - Yang, Ruigang
PY - 2017/7/21
Y1 - 2017/7/21
N2 - The goal of human-robot motion retargeting is to let a robot follow the movements performed by a human subject. This is traditionally achieved by applying the poses estimated by a human pose tracking system to a robot via explicit joint mapping strategies. In this paper, we present a novel approach that combines human pose estimation and the motion retargeting procedure in a unified generative framework. We propose a 3D parametric human-robot model that has the same joint and stability configurations as the robot while its shape resembles the human subject. Using a single depth camera to observe the human, we take its raw depth map as input and drive the human-robot model to fit the input 3D point cloud. The joint angles of the fitted model can then be applied to the robot for retargeting. The robot's joint angles are fitted globally, rather than individually, so that the transformed surface shape is as consistent as possible with the input point cloud. The robot configuration, including its skeleton proportions, joint limits, and degrees of freedom (DoFs), is enforced implicitly in the formulation; no explicit, pre-defined joint mapping strategies are needed. The framework is tested with both simulations and real robots whose skeleton proportions and DoFs differ from those of humans, demonstrating its effectiveness for motion retargeting.
AB - The goal of human-robot motion retargeting is to let a robot follow the movements performed by a human subject. This is traditionally achieved by applying the poses estimated by a human pose tracking system to a robot via explicit joint mapping strategies. In this paper, we present a novel approach that combines human pose estimation and the motion retargeting procedure in a unified generative framework. We propose a 3D parametric human-robot model that has the same joint and stability configurations as the robot while its shape resembles the human subject. Using a single depth camera to observe the human, we take its raw depth map as input and drive the human-robot model to fit the input 3D point cloud. The joint angles of the fitted model can then be applied to the robot for retargeting. The robot's joint angles are fitted globally, rather than individually, so that the transformed surface shape is as consistent as possible with the input point cloud. The robot configuration, including its skeleton proportions, joint limits, and degrees of freedom (DoFs), is enforced implicitly in the formulation; no explicit, pre-defined joint mapping strategies are needed. The framework is tested with both simulations and real robots whose skeleton proportions and DoFs differ from those of humans, demonstrating its effectiveness for motion retargeting.
UR - http://www.scopus.com/inward/record.url?scp=85027987469&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85027987469&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2017.7989632
DO - 10.1109/ICRA.2017.7989632
M3 - Conference contribution
AN - SCOPUS:85027987469
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 5369
EP - 5376
BT - ICRA 2017 - IEEE International Conference on Robotics and Automation
T2 - 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Y2 - 29 May 2017 through 3 June 2017
ER -