|Session Name:||Machine Learning Summit: Ragdoll Motion Matching|
|Company Name(s):||Ubisoft Montreal|
|Track / Format:||Machine Learning Summit|
|Overview:||Physical human simulation holds the promise of bringing unprecedented levels of interaction, fidelity, and variety to game animation. The intricate relations between a character's body and its environment can only be faithfully synthesized if we avoid cheating as much as possible and trust the physics engine as the ground truth of what is possible in the virtual world. On the other hand, data-driven animation systems built on large amounts of motion capture data have shown that artistic style and variety can be preserved even when game design imposes tight constraints on responsiveness. We present an animation system that combines both ideas. A virtual robot is trained using deep reinforcement learning to closely follow the output of motion matching. The ragdoll is powered by its own motors, without 'god forces' attaching its limbs to points in global space, and learns to balance itself as it follows the interactive, user-controlled motion capture and recovers from unplanned perturbations. We believe this result is an interesting milestone on the path towards realistic interactive virtual humans.|
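The core idea of the overview, a torque-driven ragdoll rewarded for staying close to the motion-matching reference pose, can be sketched as a per-step tracking reward. This is a minimal illustrative sketch only: the function name, error terms, and weights are assumptions for exposition, not Ubisoft's actual formulation.

```python
import numpy as np

def tracking_reward(sim_joint_angles, ref_joint_angles,
                    sim_root_vel, ref_root_vel,
                    w_pose=0.65, w_vel=0.35):
    """Hypothetical per-step reward: how closely the torque-driven
    ragdoll matches the motion-matching reference this frame.
    All weights and scales here are illustrative assumptions."""
    # Squared error between simulated and reference joint angles.
    pose_err = np.sum((np.asarray(sim_joint_angles)
                       - np.asarray(ref_joint_angles)) ** 2)
    # Squared error between simulated and reference root velocity.
    vel_err = np.sum((np.asarray(sim_root_vel)
                      - np.asarray(ref_root_vel)) ** 2)
    # Exponentiated errors keep each term bounded in (0, 1],
    # so a perfect match yields a reward of exactly 1.0.
    return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)
```

A deep RL policy would then output joint torques each frame and be optimized to maximize the discounted sum of this reward, which is what lets the character balance and recover from perturbations without any external 'god forces'.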