Uncrewed Aerial Vehicles (UAVs) are increasingly deployed across diverse fields owing to their high mobility and flexibility. Concurrently, the rapid development of Artificial Intelligence (AI) has opened new possibilities for autonomous learning and evolution in robotics. This synergy enables AI-equipped UAVs to perform complex tasks such as real-time path planning and swarm management more capably than traditional approaches that rely on pre-programmed algorithms. This paper builds on our previously proposed multi-UAV dynamic target interception algorithm based on deep reinforcement learning and fuzzy logic, introducing several improvements and innovations aimed at safe application in the real world. First, several components of the original algorithm are redesigned and improved; second, a cross-platform simulation environment integrating MATLAB, ROS, and PX4 is established; finally, a programmable drone is constructed. The improved algorithm is validated through systematic phases of simulation and real flight tests under complex, dynamic conditions, establishing a solid link from algorithm design to practical application.
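The abstract names the approach (a deep reinforcement learning policy synthesized with fuzzy logic for moving-target interception) but gives no implementation details. As a rough, non-authoritative illustration of that kind of synthesis, the Python sketch below blends a placeholder pursuit policy with a small fuzzy-logic speed regulator in a 2-D kinematic rollout; the membership functions, rule table, vehicle parameters, and the proportional-pursuit stand-in for the trained network are all assumptions made for illustration, not the authors' design.

```python
# Illustrative sketch only: the membership functions, rule table, and the
# proportional-pursuit placeholder below are assumptions, not the paper's design.
import numpy as np

def ramp(x, a, b):
    """Linear membership ramp: 0 at a, 1 at b (a may be larger than b)."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def fuzzy_speed_gain(dist, closing_speed):
    """Map (distance to target, closing speed) to a speed gain in [0, 1].

    Assumed rule intent: fly fast while far away, throttle back when close and
    already closing quickly, so the final approach stays safe on real hardware.
    """
    near, far = ramp(dist, 10.0, 0.0), ramp(dist, 5.0, 30.0)
    slow, fast = ramp(closing_speed, 2.0, 0.0), ramp(closing_speed, 1.0, 4.0)
    rules = [                      # (firing strength via min as fuzzy AND, crisp gain)
        (min(far, slow), 1.0),     # far, barely closing  -> full speed
        (min(far, fast), 0.8),     # far, closing fast    -> high speed
        (min(near, slow), 0.6),    # near, not closing    -> moderate speed
        (min(near, fast), 0.3),    # near, closing fast   -> slow, careful approach
    ]
    num = sum(w * g for w, g in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 1e-9 else 0.5   # weighted-average defuzzification

def pursuit_policy(rel_pos, target_vel):
    """Stand-in for the trained DRL policy: unit direction with a crude 0.5 s lead."""
    aim = rel_pos + 0.5 * target_vel
    return aim / (np.linalg.norm(aim) + 1e-9)

# Minimal 2-D kinematic rollout: one pursuer UAV, one constant-velocity target.
dt, v_max = 0.1, 6.0
uav = np.array([0.0, 0.0])
target, target_vel = np.array([25.0, 15.0]), np.array([-1.0, 0.5])
uav_vel = np.zeros(2)
for step in range(400):
    rel = target - uav
    dist = np.linalg.norm(rel)
    if dist < 0.5:
        print(f"intercepted at t = {step * dt:.1f} s")
        break
    closing = float(np.dot(uav_vel - target_vel, rel / dist))  # rate the gap shrinks
    direction = pursuit_policy(rel, target_vel)
    uav_vel = fuzzy_speed_gain(dist, closing) * v_max * direction
    uav = uav + uav_vel * dt        # integrate simple point-mass kinematics
    target = target + target_vel * dt
```

In this sketch the learned policy only chooses the pursuit direction, while the fuzzy layer bounds the commanded speed; the paper's actual partitioning of roles between the DRL policy and the fuzzy logic may differ.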
Synthesized Control for In-Field UAV Moving Target Interception Via Deep Reinforcement Learning and Fuzzy Logic
14.05.2025
5048312 bytes
Conference paper
Electronic resource
English
DOAJ | 2024
Multi-Underwater Target Interception Strategy Based on Deep Reinforcement Learning
DOAJ | 2025
Dubins Vehicle Interception of a Moving Target
British Library Conference Proceedings | 2014
Loitering Munition Interception Decision-Making Technology Based on Deep Reinforcement Learning
Springer Verlag | 2025