AI4Animation

Bringing Characters to Life with Computer Brains in Unity


AI4Animation: Deep Learning for Character Control

This repository explores the opportunities of deep learning for character animation and control. It aims to be a comprehensive framework for data-driven character animation, covering data processing, neural network training, and runtime control, developed in Unity3D / PyTorch. The projects below demonstrate these capabilities, using neural networks to animate biped locomotion, quadruped locomotion, and character-scene interactions with objects and the environment, as well as sports and fighting games and embodied avatar motions in AR/VR. Further advances in this research will continue to be added to this project.


SIGGRAPH 2024
Categorical Codebook Matching for Embodied Character Controllers

Sebastian Starke,
Paul Starke,
Nicky He,
Taku Komura,
Yuting Ye.
ACM Trans. Graph. 43, 4, Article 142.

Unlike existing methods for kinematic character control that learn a direct mapping between inputs and outputs or utilize a motion prior trained on the motion data alone, our framework learns from both the inputs and outputs simultaneously to form a motion manifold that is informed about the control signals. To learn such a setup in a supervised manner, we propose a technique that we call Codebook Matching, which enforces similarity between the two latent probability distributions $Z_X$ and $Z_Y$. In the context of motion generation, instead of directly predicting the motion outputs from the control inputs, we only predict the probability for each of them to appear. By introducing a matching loss between both categorical probability distributions, our codebook matching technique allows us to substitute $Z_Y$ with $Z_X$ at test time.

Training:
$$
\begin{cases}
    Y \rightarrow Z_Y \rightarrow Y \\
    X \rightarrow Z_X \\
    Z_X \sim Z_Y
\end{cases}
$$

Inference:
$$
X \rightarrow Z_X \rightarrow Y
$$
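
The minimal PyTorch sketch below shows one way the training and inference paths above can be wired up. The module sizes, the shared codebook, and the KL-based matching loss are illustrative assumptions, not the exact architecture from the paper.

```python
# Minimal PyTorch sketch of the codebook matching setup described above.
# Module sizes, the shared codebook, and the KL-based matching loss are
# illustrative assumptions, not the exact architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodebookMatching(nn.Module):
    def __init__(self, x_dim, y_dim, num_codes=128, code_dim=64):
        super().__init__()
        # Learnable codebook of discrete motion codes.
        self.codebook = nn.Parameter(torch.randn(num_codes, code_dim))
        # Motion encoder: Y -> categorical logits defining Z_Y.
        self.enc_y = nn.Sequential(nn.Linear(y_dim, 256), nn.ELU(), nn.Linear(256, num_codes))
        # Control encoder: X -> categorical logits defining Z_X.
        self.enc_x = nn.Sequential(nn.Linear(x_dim, 256), nn.ELU(), nn.Linear(256, num_codes))
        # Decoder: codebook embedding -> motion output Y.
        self.dec = nn.Sequential(nn.Linear(code_dim, 256), nn.ELU(), nn.Linear(256, y_dim))

    def embed(self, logits):
        # Probability-weighted lookup into the codebook.
        probs = F.softmax(logits, dim=-1)
        return probs @ self.codebook, probs

    def training_loss(self, x, y):
        # Training: Y -> Z_Y -> Y (reconstruction), X -> Z_X, Z_X ~ Z_Y (matching).
        z_y_logits = self.enc_y(y)
        z_x_logits = self.enc_x(x)
        code_y, p_y = self.embed(z_y_logits)
        recon = F.mse_loss(self.dec(code_y), y)
        # Matching loss between the two categorical distributions; treating Z_Y
        # as the fixed target here is a simplification for illustration.
        match = F.kl_div(F.log_softmax(z_x_logits, dim=-1), p_y.detach(), reduction="batchmean")
        return recon + match

    @torch.no_grad()
    def infer(self, x):
        # Inference: X -> Z_X -> Y, substituting Z_Y with Z_X.
        code_x, _ = self.embed(self.enc_x(x))
        return self.dec(code_x)

# Usage with random placeholder data and dimensions.
model = CodebookMatching(x_dim=32, y_dim=128)
x, y = torch.randn(8, 32), torch.randn(8, 128)
model.training_loss(x, y).backward()
y_pred = model.infer(x)
```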

Our method is not limited to three-point inputs: it can also generate embodied character movements with additional joystick or button controls in what we call hybrid control mode. In this setting, the user, engineer, or artist can additionally tell the character where to go via a simple goal location while preserving the original context of motion from the three-point tracking signals. This broadens the scope of applications: users can walk, run, or crouch in the virtual world while standing or even sitting in the real world.

Furthermore, our codebook matching architecture shares many similarities with motion matching and is able to learn a similar structure in an end-to-end manner. While motion matching can bypass ambiguity in the mapping from control to motion by selecting among candidates with similar query distances, our setup selects possible outcomes from predicted probabilities and naturally projects onto valid output motions when their probabilities are similar. However, in contrast to database searches, our codebook matching is able to effectively compress the motion data, where the same motions map to the same codes, and can bypass ambiguity issues that existing learning-based methods such as standard feed-forward networks (MLP) or variational models (CVAE) may struggle with. We demonstrate such capabilities by reconstructing the ambiguous toy example functions in the figure below.
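
As a self-contained toy sketch (not taken from the repository), the snippet below reproduces the kind of ambiguity referred to above: the same input has two equally valid outputs, and a plain MSE-trained feed-forward network regresses toward their mean rather than committing to either branch.

```python
# Standalone toy sketch (illustrative assumption, not from the repository) of the
# ambiguity problem described above: each input has two equally valid outputs,
# and an MSE-trained feed-forward network regresses toward their mean.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.rand(2048, 1)
sign = torch.randint(0, 2, (2048, 1)).float() * 2.0 - 1.0
y = sign * torch.sqrt(x)  # ambiguous mapping: y = +sqrt(x) or y = -sqrt(x)

mlp = nn.Sequential(nn.Linear(1, 64), nn.ELU(), nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    F.mse_loss(mlp(x), y).backward()
    opt.step()

# The regression collapses toward zero, the average of both branches, which is
# itself not a valid sample of the function. Codebook matching instead predicts
# probabilities over discrete codes and decodes a committed choice, so either
# branch can be reproduced without blending.
print(mlp(torch.tensor([[0.81]])))  # close to 0.0 rather than +/-0.9
```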


SIGGRAPH 2022
DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds

Sebastian Starke,
Ian Mason,
Taku Komura.
ACM Trans. Graph. 41, 4, Article 136.


SIGGRAPH 2021
Neural Animation Layering for Synthesizing Martial Arts Movements

Sebastian Starke,
Yiwei Zhao,
Fabio Zinno,
Taku Komura.
ACM Trans. Graph. 40, 4, Article 92.


SIGGRAPH 2020
Local Motion Phases for Learning Multi-Contact Character Movements

Sebastian Starke,
Yiwei Zhao,
Taku Komura,
Kazi Zaman.
ACM Trans. Graph. 39, 4, Article 54.


SIGGRAPH Asia 2019
Neural State Machine for Character-Scene Interactions

Sebastian Starke+,
He Zhang+,
Taku Komura,
Jun Saito.
ACM Trans. Graph. 38, 6, Article 178.
(+Joint First Authors)


SIGGRAPH 2018
Mode-Adaptive Neural Networks for Quadruped Motion Control

He Zhang+,
Sebastian Starke+,
Taku Komura,
Jun Saito.
ACM Trans. Graph. 37, 4, Article 145.
(+Joint First Authors)


SIGGRAPH 2017
Phase-Functioned Neural Networks for Character Control

Daniel Holden,
Taku Komura,
Jun Saito.
ACM Trans. Graph. 36, 4, Article 42.


Thesis Fast Forward Presentation from SIGGRAPH 2020

Copyright Information

This project is only for research or education purposes, and not freely available for commercial use or redistribution. The motion capture data is available only under the terms of the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.