As hundreds of Unmanned Aircraft Systems (UAS) begin to operate within urban airspace, automated and decentralized UAS traffic management (UTM) will be critical to maintaining safe and efficient operations. In this work, we present Learning-to-Fly with Reinforcement Learning (L2F-RL), a decentralized, on-demand Collision Avoidance (CA) framework that systematically combines machine learning with cooperative model predictive control (MPC) for UAS collision avoidance while preserving satisfaction of higher-level mission objectives. L2F-RL consists of: 1) an RL-based policy for conflict resolution (CR) with discrete decision-making, and 2) decentralized, cooperative MPC for CA. To accelerate RL training, we use reward shaping and curriculum learning. Our approach outperforms baseline methods with a 99.10% separation rate (the ratio of successful to total test cases) in the worst case, improving to 100% in the best case, with a 1000X improvement in computation time over centralized methods. These results demonstrate the potential of combining learning with optimization-based control, a significant step toward scalable, decentralized UTM.
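The abstract describes a two-layer architecture: a discrete RL policy chooses a conflict-resolution maneuver, and a decentralized cooperative MPC layer executes collision avoidance, with reward shaping and curriculum learning used to speed up training. The Python sketch below illustrates what those pieces could look like; the action set, reward constants, curriculum schedule, and the one-step-lookahead stand-in for the cooperative MPC layer are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical discrete conflict-resolution maneuvers; the paper states the
# CR policy is discrete but does not publish the action set.
CR_ACTIONS = np.array([
    [0.0, 0.0],    # maintain course
    [0.0, 1.0],    # offset left
    [0.0, -1.0],   # offset right
    [1.0, 0.0],    # accelerate
    [-1.0, 0.0],   # decelerate
])

def shaped_reward(separation, d_min, goal_progress):
    """Illustrative shaped reward: a hard penalty on loss of separation plus
    a dense proximity term so the learner gets signal between conflicts."""
    loss_of_separation = -10.0 if separation < d_min else 0.0
    proximity_shaping = -np.exp(d_min - separation)  # grows as spacing shrinks
    return loss_of_separation + proximity_shaping + 0.1 * goal_progress

def curriculum_intruders(episode, stages=(1, 2, 4, 8), episodes_per_stage=5000):
    """Curriculum learning: ramp up the number of intruder aircraft as
    training progresses (the schedule values here are assumed)."""
    return stages[min(episode // episodes_per_stage, len(stages) - 1)]

def avoidance_step(pos, vel, neighbors, action, d_min=10.0, dt=1.0):
    """Stand-in for the cooperative MPC layer: apply the RL-chosen maneuver,
    project one step ahead, and steer away from any predicted separation loss."""
    vel = vel + 0.5 * CR_ACTIONS[action]
    for n_pos, n_vel in neighbors:
        predicted_gap = (pos + vel * dt) - (n_pos + n_vel * dt)
        if np.linalg.norm(predicted_gap) < d_min:
            away = predicted_gap / (np.linalg.norm(predicted_gap) + 1e-9)
            vel = vel + away  # each agent repels symmetrically: cooperative
    return pos + vel * dt, vel
```

Because each agent runs `avoidance_step` using only its own state and that of nearby neighbors, the loop stays decentralized, which is the property the paper credits for its computation-time advantage over centralized methods.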
Learning-to-Fly RL: Reinforcement Learning-based Collision Avoidance for Scalable Urban Air Mobility
2020-10-11
1151179 bytes
Conference paper
Electronic Resource
English
REINFORCEMENT LEARNING-BASED MID-AIR COLLISION AVOIDANCE | European Patent Office | 2023
Pedestrian Collision Avoidance Using Deep Reinforcement Learning | Springer Verlag | 2022
ICAS2014_0892: A REINFORCEMENT LEARNING BASED UAVS AIR COLLISION AVOIDANCE | British Library Conference Proceedings | 2014