In this chapter, the authors modify reinforcement learning (RL) methods to overcome the overestimation and dimensionality problems in the robust control problem under worst-case uncertainty. The objective of robust control is to achieve robust performance in the presence of disturbances. Most robust controllers using RL are based on actor-critic algorithms and Q-learning. The authors prove that reinforcement learning with the k-nearest neighbors (kNN) and double Q-learning modifications guarantees that the robust controller converges to a near-optimal value under worst-case uncertainty. Optimization procedures can be used with the kNN rule to find the optimal number of nearest neighbors k. The authors discuss two cases: ideal control without uncertainty and robust control with worst-case uncertainty. They compare continuous-time critic learning with the H₂ solution and with the continuous-time actor-critic learning method. Simulation and experimental results show that the RL algorithms are robust, yielding a sub-optimal control policy with respect to the worst-case uncertainty.
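The two modifications named in the abstract, double Q-learning against overestimation and a kNN representation against the dimension problem, can be illustrated with a minimal sketch. The class below is an illustrative assumption, not the chapter's implementation: all names, the Euclidean-distance neighbour rule, and the parameter values are placeholders chosen for the example.

```python
# Hypothetical sketch: double Q-learning over a kNN state approximation.
# Not taken from the chapter; names and parameters are illustrative only.
import numpy as np

class KNNDoubleQ:
    def __init__(self, n_actions, k=5, alpha=0.1, gamma=0.95):
        self.n_actions = n_actions
        self.k = k              # number of nearest neighbours in the Q estimate
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.states = []        # visited (continuous) states
        self.QA = []            # first Q-table, one row per stored state
        self.QB = []            # second Q-table (double Q-learning)

    def _neighbours(self, s):
        # Indices of the k stored states closest to s (Euclidean distance).
        if not self.states:
            return []
        d = np.linalg.norm(np.asarray(self.states) - np.asarray(s), axis=1)
        return list(np.argsort(d)[: self.k])

    def q(self, table, s):
        # kNN estimate: average the Q-rows of the k nearest stored states.
        idx = self._neighbours(s)
        if not idx:
            return np.zeros(self.n_actions)
        return np.mean([table[i] for i in idx], axis=0)

    def act(self, s, eps=0.1):
        # Epsilon-greedy action on the combined estimate QA + QB.
        if np.random.rand() < eps:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q(self.QA, s) + self.q(self.QB, s)))

    def update(self, s, a, r, s_next):
        # Add a row for s initialised from its current kNN estimate, then do a
        # double Q-learning update: one table selects the greedy action, the
        # other evaluates it, which reduces the overestimation bias of
        # ordinary Q-learning.
        qa_init, qb_init = self.q(self.QA, s), self.q(self.QB, s)
        self.states.append(np.asarray(s, dtype=float))
        self.QA.append(qa_init)
        self.QB.append(qb_init)
        i = len(self.states) - 1
        if np.random.rand() < 0.5:
            a_star = int(np.argmax(self.q(self.QA, s_next)))
            target = r + self.gamma * self.q(self.QB, s_next)[a_star]
            self.QA[i][a] += self.alpha * (target - self.QA[i][a])
        else:
            b_star = int(np.argmax(self.q(self.QB, s_next)))
            target = r + self.gamma * self.q(self.QA, s_next)[b_star]
            self.QB[i][a] += self.alpha * (target - self.QB[i][a])
```

The averaging over k neighbours stands in for the dimension-reduction role the abstract attributes to the kNN rule; the chapter's own robust-control formulation under worst-case uncertainty is not reproduced here.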
Robot Control in Worst‐Case Uncertainty Using Reinforcement Learning
2021-10-05
34 pages
Article/Chapter (Book)
Electronic Resource
English
Flight Control Law Clearance Using Worst-Case Inputs Under Parameter Uncertainty
AIAA | 2020
Modeling and Computing Worst-Case Uncertainty Combinations for Flight Control Systems Analysis
Online Contents | 2002
Human-robot interaction control using reinforcement learning
TIBKAT | 2022
Linear-Quadratic Worst Case Control
British Library Conference Proceedings | 1996