As unmanned aerial vehicles (UAVs) play an increasingly significant role in modern society, using reinforcement learning to build safe multi-UAV navigation algorithms has become a hot topic. A major challenge is coordinating multiple UAVs to navigate safely in complicated, unknown 3D environments. Existing decentralized navigation systems struggle with the partial observability of unknown environments and with the non-stationarity introduced by concurrently learning agents, which leads to poor learning performance. In this article, we propose a new multi-agent recurrent deterministic policy gradient (MARDPG) algorithm, based on the deep deterministic policy gradient (DDPG) algorithm, for controlling the navigation actions of multiple UAVs. Specifically, each critic network is trained centrally so that each UAV can estimate the policies of the other UAVs, which mitigates the slow convergence caused by the non-stationary environment. Decentralized execution eliminates the need for communication resources between UAVs, and sharing parameters across the critic networks accelerates training. An added LSTM network lets each UAV exploit its historical exploration information to improve action-value prediction without getting stuck in local traps. Finally, thorough simulation results in a realistic simulation environment demonstrate the algorithm's superiority over state-of-the-art DRL approaches in terms of convergence and efficacy.
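The abstract describes a centralized-critic, decentralized-actor architecture with shared critic parameters and an LSTM over the exploration history. Below is a minimal sketch of that structure in PyTorch; all class names, layer sizes, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the recurrent actor-critic structure described in the abstract:
# decentralized actors (one per UAV, local observations only) and a single
# centralized critic, shared by all UAVs, whose LSTM summarizes the joint
# observation-action history. All dimensions and names are assumptions.

import torch
import torch.nn as nn


class Actor(nn.Module):
    """Decentralized policy: maps one UAV's local observation to an action."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded control outputs
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class SharedRecurrentCritic(nn.Module):
    """Centralized critic with parameters shared by all UAVs: an LSTM encodes
    the history of joint observations and actions before estimating Q."""

    def __init__(self, n_agents: int, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.lstm = nn.LSTM(joint_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, 1)

    def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor) -> torch.Tensor:
        # joint_obs: (batch, time, n_agents * obs_dim)
        # joint_act: (batch, time, n_agents * act_dim)
        x = torch.cat([joint_obs, joint_act], dim=-1)
        h, _ = self.lstm(x)
        return self.q_head(h[:, -1])  # Q-value from the final hidden state


if __name__ == "__main__":
    n_agents, obs_dim, act_dim, T = 3, 12, 4, 8
    actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
    critic = SharedRecurrentCritic(n_agents, obs_dim, act_dim)  # one weight set

    obs_seq = torch.randn(1, T, n_agents, obs_dim)
    # Decentralized execution: each actor sees only its own observation slice.
    acts = torch.stack([actors[i](obs_seq[:, :, i]) for i in range(n_agents)], dim=2)
    q = critic(obs_seq.flatten(2), acts.flatten(2))
    print(q.shape)  # torch.Size([1, 1])
```

The shared critic is instantiated once and reused for every UAV, which is one plausible reading of "sharing parameters in critic networks accelerates training"; during execution only the lightweight per-UAV actors are needed, so no inter-UAV communication is required.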
Multi-Agent Deep Reinforcement Learning for UAVs Navigation in Unknown Complex Environment
IEEE Transactions on Intelligent Vehicles, vol. 9, no. 1, pp. 2290-2303
2024-01-01
8325965 bytes
Article (Journal)
Electronic Resource
English