Learning a Single Policy for Diverse Behaviors on a Quadrupedal Robot using Scalable Motion Imitation

arXiv Preprint 2023

Arnaud Klipfel (1)    Nitish Sontakke (1)    Ren Liu (2)    Sehoon Ha (1)
(1) Georgia Institute of Technology    (2) Meta Platforms, Inc., USA, renl@meta.com. Work done while at Georgia Tech.


Abstract

Learning various motor skills for quadrupedal robots is a challenging problem that requires careful design of task-specific mathematical models or reward descriptions. In this work, we propose to learn a single capable policy using deep reinforcement learning by imitating a large number of reference motions, including walking, turning, pacing, jumping, sitting, and lying. On top of an existing motion imitation framework, we first carefully design the observation space, the action space, and the reward function to improve the scalability of learning as well as the robustness of the final policy. In addition, we introduce a novel adaptive motion sampling (AMS) method, which maintains a balance between successful and unsuccessful behaviors. This technique allows the learning algorithm to focus on challenging motor skills and to avoid catastrophic forgetting. We demonstrate that the learned policy can exhibit diverse behaviors in simulation by successfully tracking both the training dataset and out-of-distribution trajectories. We further validate the importance of the proposed learning formulation and the adaptive motion sampling scheme in our experiments.
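The abstract does not spell out the AMS mechanism; as a rough illustration only, the sketch below shows one way a sampler could bias reference-clip selection toward motions the policy still fails on, while keeping a uniform component so already-mastered clips are not forgotten. The class and method names (`AdaptiveMotionSampler`, `update`, `sample`) and the mixing/smoothing parameters are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an adaptive motion sampler (not the authors' implementation).
import numpy as np

class AdaptiveMotionSampler:
    """Samples reference motions, biasing toward clips the policy still fails on."""

    def __init__(self, num_motions, uniform_mix=0.3, ema=0.9):
        self.success_rate = np.zeros(num_motions)  # running success estimate per clip
        self.uniform_mix = uniform_mix              # uniform floor to avoid forgetting mastered clips
        self.ema = ema                              # smoothing factor for success estimates

    def update(self, motion_id, succeeded):
        """Update the running success rate of one clip after a rollout."""
        self.success_rate[motion_id] = (
            self.ema * self.success_rate[motion_id] + (1.0 - self.ema) * float(succeeded)
        )

    def sample(self, rng=np.random):
        """Draw the next reference motion: low-success clips get more weight,
        but a uniform component keeps successful clips in rotation."""
        failure = 1.0 - self.success_rate
        adaptive = failure / max(failure.sum(), 1e-8)
        uniform = np.full_like(adaptive, 1.0 / len(adaptive))
        probs = self.uniform_mix * uniform + (1.0 - self.uniform_mix) * adaptive
        probs /= probs.sum()  # renormalize to guard against numerical drift
        return rng.choice(len(probs), p=probs)
```

Under this interpretation, the sampler would be queried at the start of each rollout and updated with a per-episode success signal, keeping the training distribution balanced between behaviors the policy has and has not yet mastered.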

Paper: [PDF]       Preprint: [arXiv]

Video