One of the major challenges in Deep Reinforcement Learning for control is the need for extensive training to learn a policy. Motivated by this, we present the design of the Control-Tutored Deep Q-Networks (CT-DQN) algorithm, a Deep Reinforcement Learning algorithm that leverages a control tutor, i.e., an exogenous control law, to reduce learning time. The tutor can be designed using an approximate model of the system, without requiring knowledge of the system's dynamics, and it is not expected to achieve the control objective if used stand-alone. During learning, the tutor occasionally suggests an action, thus partially guiding exploration. We validate our approach on three scenarios from OpenAI Gym: the inverted pendulum, lunar lander, and car racing. We demonstrate that CT-DQN achieves data efficiency that is better than or comparable to classic function-approximation solutions.
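A minimal sketch of the kind of tutor-guided exploration described above is given below. The blending rule, the probability `tutor_prob`, the simple proportional tutor law, and the `q_network` interface (a callable returning Q-values, with a `num_actions` attribute) are illustrative assumptions for exposition, not the paper's implementation.

```python
# Illustrative sketch: blending a control tutor into DQN-style exploration.
# The tutor law, probabilities, and q_network interface are assumptions.
import random
import numpy as np

def tutor_policy(state):
    # Hypothetical tutor: a proportional law from an approximate model of a
    # pendulum-like system. It need not solve the task on its own.
    angle, angular_velocity = state[0], state[1]
    return 0 if angle + 0.5 * angular_velocity < 0 else 1  # discrete action

def select_action(q_network, state, epsilon, tutor_prob):
    """Epsilon-greedy action selection in which, with probability
    `tutor_prob`, the exploratory action is suggested by the control tutor
    instead of being drawn uniformly at random."""
    if random.random() < epsilon:
        if random.random() < tutor_prob:
            return tutor_policy(state)                       # tutor-guided exploration
        return random.randrange(q_network.num_actions)       # random exploration
    q_values = q_network(np.asarray(state, dtype=np.float32))
    return int(np.argmax(q_values))                          # greedy (learned) action
```

In this sketch the tutor only biases exploratory steps; greedy exploitation of the learned Q-values is left unchanged, which is one plausible way to keep the underlying off-policy Q-learning intact.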
CT-DQN: Control-Tutored Deep Reinforcement Learning
2023-06-16
In: Proceedings of The 5th Annual Learning for Dynamics and Control Conference, Volume 211 (pp. 941-953). PMLR: Philadelphia, PA, USA. (2023)
Paper
Electronic Resource
English
DDC: 629