Because designing aircraft flight controllers is complex, expensive, and time-consuming, the interchangeability of flight controllers between different aircraft platforms has been an active research area. This work presents the development of interchangeable, verifiable flight controllers for fixed-wing unmanned aircraft systems (UASs). A model-free deep reinforcement learning (RL) algorithm, proximal policy optimization, trains the RL-based control policy. Instead of relying on a high-fidelity dynamic model, the RL policy-based controller is trained in simulation with an engineering-level dynamic model, and its robustness to model uncertainty is improved by randomizing that model during training. An aircraft six-degree-of-freedom model is used in training to reduce the heavy reliance of modern controllers on dynamic models, which are sensitive to the accuracy of the trim information. The interchangeable flight controller is obtained by incorporating memory into the policy through long short-term memory, a variant of the recurrent neural network architecture. The developed flight controller is uniquely verified and validated in actual flight tests using fixed-wing autonomous aircraft. The interchangeable RL-based flight controller is also flight-tested on an entirely different aircraft, the first of its kind, and its performance is superior to that of a commercial off-the-shelf flight controller and a linear quadratic regulator (LQR)-based flight controller explicitly designed for that platform. Flight-test validation and verification data are used to assess flight controller performance and the comparison metrics.
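The abstract highlights two implementation ideas: a recurrent (long short-term memory) control policy that carries memory across time steps, and randomization of the engineering-level dynamic model used for training. The paper's own code is not part of this record, so the following is only a minimal sketch of those two ideas, assuming a PyTorch-style setup; the class, function, and parameter names (RecurrentPolicy, randomized_model_params, the observation and action dimensions, and the randomization ranges) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of an LSTM-based control policy
# and per-episode randomization of a 6-DOF dynamic model. All names,
# dimensions, and ranges below are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn


class RecurrentPolicy(nn.Module):
    """LSTM actor: maps observed aircraft states to normalized surface/throttle commands."""

    def __init__(self, obs_dim=12, act_dim=4, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, act_dim)          # mean of the Gaussian action
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # state-independent log std

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim); `hidden` carries memory between calls
        out, hidden = self.lstm(obs_seq, hidden)
        mean = torch.tanh(self.mu(out))                    # keep commands in [-1, 1]
        return mean, self.log_std.exp(), hidden


def randomized_model_params(rng):
    """Sample a perturbed dynamic model for one training episode (hypothetical ranges)."""
    return {
        "mass_scale": rng.uniform(0.9, 1.1),
        "inertia_scale": rng.uniform(0.9, 1.1),
        "cg_shift_m": rng.uniform(-0.02, 0.02),
        "control_effectiveness": rng.uniform(0.8, 1.2),
    }


# Usage: at the start of each simulated episode, sample new model parameters,
# reset the LSTM hidden state, roll out the policy in the randomized simulator,
# and update it with a PPO (clipped-surrogate) optimizer.
rng = np.random.default_rng(0)
policy = RecurrentPolicy()
episode_params = randomized_model_params(rng)
obs = torch.zeros(1, 1, 12)                                # placeholder observation
action_mean, action_std, hidden = policy(obs)
```

The memory (hidden state) is what lets a single trained policy adapt its behavior to different airframes at run time, which is the property the abstract refers to as interchangeability.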
Interchangeable Reinforcement-Learning Flight Controller for Fixed-Wing UASs
IEEE Transactions on Aerospace and Electronic Systems; 60, 2; 2305-2318
2024-04-01
4459791 bytes
Article (Journal)
Electronic Resource
English