Standard robotic control works well under ordinary conditions, but when conditions change (e.g., damage to one of the motors), the robot can no longer achieve its task. We need an algorithm that provides the robot with the ability to adapt to unforeseen situations. Reinforcement learning provides a framework that meets these requirements, but it needs large data sets to learn robotic tasks, which is impractical. We propose using Gaussian processes to improve the efficiency of reinforcement learning: a GP learns a state transition model during an interaction phase with the robot, after which we use the GP to simulate trajectories and optimize the robot's controller in a simulation phase. We tested the algorithm on the cart-pole task with promising results: a working controller was learned after just 28 seconds of interaction with the real robot, and the whole training time, including training in simulation, was 21 minutes.
Toward faster reinforcement learning for robotics applications by using Gaussian processes
XLIII ACADEMIC SPACE CONFERENCE: dedicated to the memory of academician S.P. Korolev and other outstanding Russian scientists – Pioneers of space exploration ; 2019 ; Moscow, Russia
AIP Conference Proceedings ; 2171, 1
2019-11-15
7 pages
Conference paper
Electronic Resource
English
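As a minimal illustration of the two-phase scheme described in the abstract (not the authors' code), the sketch below learns a GP transition model from collected transitions and then optimizes a controller on simulated rollouts. It assumes scikit-learn's GaussianProcessRegressor as the dynamics model; the synthetic data, the linear tanh policy, the quadratic cost, and the random-search optimizer are all placeholder assumptions, since the record does not specify these details.

```python
# Sketch of GP-based model learning plus simulated-rollout controller
# optimization, assuming a cart-pole-like task with state s and action a.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# --- Interaction phase: collect (state, action) -> next_state transitions ---
# Synthetic transitions stand in for data gathered on the real robot.
state_dim, action_dim, n_samples = 4, 1, 200
X = rng.normal(size=(n_samples, state_dim + action_dim))               # [s, a]
Y = X[:, :state_dim] + 0.05 * rng.normal(size=(n_samples, state_dim))  # s'

# One GP per state dimension learns the transition model s' = f(s, a).
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
models = [GaussianProcessRegressor(kernel=kernel).fit(X, Y[:, d])
          for d in range(state_dim)]

# --- Simulation phase: roll out trajectories under a candidate controller ---
def simulate(policy_params, s0, horizon=50):
    """Propagate the GP dynamics model and return the trajectory cost."""
    s, cost = s0.copy(), 0.0
    for _ in range(horizon):
        a = np.array([np.tanh(policy_params @ s)])       # simple linear policy
        sa = np.concatenate([s, a])[None, :]
        s = np.array([m.predict(sa)[0] for m in models]) # predicted next state
        cost += np.sum(s ** 2)                           # quadratic state cost
    return cost

# Crude random search over controller parameters using only simulated
# rollouts; the paper's actual optimizer is not specified in this record.
best_params, best_cost = None, np.inf
for _ in range(100):
    p = rng.normal(size=state_dim)
    c = simulate(p, s0=rng.normal(size=state_dim))
    if c < best_cost:
        best_params, best_cost = p, c
print("best simulated cost:", best_cost)
```

Because all trajectory evaluations run against the learned GP model rather than the physical system, real-robot interaction time stays small (28 seconds in the reported experiment) while the bulk of the training happens in simulation.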