We present a unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain. We utilize on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans using a reinforcement learning (RL) policy. This RL policy is trained in simulation over a wide range of procedurally generated terrains. When run online, the system tracks the generated footstep plans using a model-based motion controller. We evaluate the robustness of our method over a wide variety of complex terrains; it exhibits behaviors that prioritize stability over aggressive locomotion. Additionally, we introduce two ancillary RL policies for corrective whole-body motion tracking and recovery control, which account for changes in physical parameters and external perturbations. We train and evaluate our framework on a complex quadrupedal system, ANYmal B, and demonstrate its transferability to a larger and heavier robot, ANYmal C, without requiring retraining.
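The abstract describes a hierarchy in which a learned footstep planner consumes proprioceptive and exteroceptive observations plus a base velocity command and feeds footstep targets to a model-based tracking controller. The following Python sketch only illustrates that data flow under assumed interfaces; `FootstepPolicy`, `WholeBodyController`, `control_step`, and all observation dimensions are hypothetical placeholders, not the authors' implementation or API.

```python
"""Minimal sketch of an RLOC-style planning/control loop.

All names and dimensions here are illustrative assumptions: the RL policy is a
random linear map standing in for a trained network, and the whole-body
controller returns zero torques instead of solving a tracking optimization.
"""

import numpy as np


class FootstepPolicy:
    """Stand-in for the trained RL footstep planner."""

    def __init__(self, obs_dim: int, num_legs: int = 4, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Placeholder weights; a real policy would be a trained neural network.
        self.W = 0.01 * rng.standard_normal((num_legs * 3, obs_dim))

    def plan_footsteps(self, observation: np.ndarray) -> np.ndarray:
        """Return one (x, y, z) footstep target per leg, shaped (num_legs, 3)."""
        return (self.W @ observation).reshape(-1, 3)


class WholeBodyController:
    """Stand-in for the model-based motion controller that tracks footstep plans."""

    def compute_joint_torques(self, proprio: np.ndarray,
                              footstep_targets: np.ndarray) -> np.ndarray:
        # A real controller would solve a whole-body tracking problem here;
        # zeros of the right shape are returned purely as a placeholder.
        return np.zeros(12)


def control_step(policy, controller, proprio, height_map, base_velocity_cmd):
    """One cycle: observations + velocity command -> footstep plan -> joint torques."""
    observation = np.concatenate([proprio, height_map.ravel(), base_velocity_cmd])
    footstep_targets = policy.plan_footsteps(observation)
    torques = controller.compute_joint_torques(proprio, footstep_targets)
    return footstep_targets, torques


if __name__ == "__main__":
    proprio = np.zeros(36)                          # e.g. joint/base state (assumed size)
    height_map = np.zeros((11, 11))                 # local elevation map around the robot
    base_velocity_cmd = np.array([0.5, 0.0, 0.0])   # desired forward velocity [m/s]

    obs_dim = proprio.size + height_map.size + base_velocity_cmd.size
    policy = FootstepPolicy(obs_dim)
    controller = WholeBodyController()

    targets, torques = control_step(policy, controller, proprio,
                                    height_map, base_velocity_cmd)
    print("footstep targets:\n", targets)
```

In this sketch the planner runs once per cycle at a low rate while the controller would normally run at a higher rate between plans; that rate separation is omitted here for brevity.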
RLOC: terrain-aware legged locomotion using reinforcement learning and optimal control
10.05.2022
Article (journal)
Electronic resource
English
DDC: 629
Europäisches Patentamt | 2025
IEEE | 2005
A Bayesian regression approach to terrain mapping and an application to legged robot locomotion | British Library Online Contents | 2009