Learning Physically Based Humanoid Climbing Movements
We propose a novel learning-based solution for motion planning of physically based humanoid climbing that allows fast and robust planning of complex climbing strategies and movements, including extreme maneuvers such as jumping. Similar to recent previous work, we combine a high-level graph-based path planner with low-level sampling-based optimization of climbing moves. Our contribution is to show that neural network models of move success probability, movement effort, and control policy can make both the high-level and low-level components more efficient and robust. The models can be trained through random simulation practice, without any prior data, and they eliminate the need for laboriously hand-tuned graph search heuristics. As a result, we can efficiently synthesize climbing sequences involving dynamic leaps and one-hand swings; there are no limits on movement complexity or on the number of limbs allowed to move simultaneously. Our supplemental video also provides comparisons between our AI climber and a real human climber.
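To illustrate the high-level idea, the following is a minimal sketch of how learned models could replace hand-tuned heuristics in graph search over climbing stances. The `success_probability` and `effort` functions here are hypothetical toy stand-ins (in the actual system these would be neural networks trained via simulation practice), and the stance graph, edge-cost formula, and all names are assumptions for illustration, not the paper's implementation.

```python
import heapq
import math

# Hypothetical stand-ins for learned models; in the real system these
# would be neural networks trained through random simulation practice.
def success_probability(move):
    # Assumption: longer reaches are less likely to succeed.
    return math.exp(-0.5 * move["distance"])

def effort(move):
    # Assumption: effort grows with reach distance.
    return 1.0 + move["distance"]

def edge_cost(move):
    p = success_probability(move)
    if p < 1e-3:
        return math.inf  # prune moves the model deems infeasible
    # Expected cost: effort inflated by the inverse success probability,
    # so risky moves are penalized without hand-tuned heuristics.
    return effort(move) / p

def plan(graph, start, goal):
    """Dijkstra-style search over a stance graph; edge costs come from
    the learned models instead of hand-tuned heuristics."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), d
        if d > dist.get(node, math.inf):
            continue
        for nxt, move in graph.get(node, []):
            nd = d + edge_cost(move)
            if nd < dist.get(nxt, math.inf):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    return None, math.inf

# Toy stance graph: two routes to the top, one requiring a risky long reach.
graph = {
    "start": [("mid", {"distance": 0.5}), ("ledge", {"distance": 2.0})],
    "mid": [("top", {"distance": 0.5})],
    "ledge": [("top", {"distance": 0.2})],
}
path, cost = plan(graph, "start", "top")
print(path)  # the planner prefers the safer two-step route via "mid"
```

The key design point is that the success and effort models jointly define the edge costs, so the same search procedure adapts to new walls without per-scene heuristic tuning.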