This paper addresses the challenge of finding the shortest path in complex environments by integrating machine learning with classical algorithms to enhance path-planning techniques. The goal is to balance path length against processing time while ensuring reliable trajectories for Unmanned Aerial Vehicles (UAVs). We examine four methodologies: Reinforcement Learning, Sampling-Based, Geometric-Based, and Polynomial-Based Methods. Our main focus is on Reinforcement Learning for its adaptability and its ability to learn from experience in complex environments, despite its known slow convergence and high computational cost. Our proposed algorithm optimizes each step of the standard Reinforcement Learning method, Q-Learning, using classical techniques that refine its core behavior and overcome its limitations. Testing in various simulated complex and unknown environments demonstrates the algorithm's effectiveness in improving path-planning efficiency and accuracy. Compared with the original Q-Learning approach, our method reduces path length by 11%, flight time by 35%, and processing time by 64%.
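For context on the baseline the paper improves upon, the sketch below shows standard tabular Q-Learning applied to path planning on a small occupancy grid. This is a minimal illustration only, not the authors' hybrid algorithm; the grid size, obstacle layout, reward values, and hyperparameters are assumptions chosen for the demo.

```python
# Minimal sketch of tabular Q-Learning for grid-based path planning.
# Illustrative baseline only; grid, rewards, and hyperparameters are assumed.
import random

ROWS, COLS = 8, 8
START, GOAL = (0, 0), (7, 7)
OBSTACLES = {(3, 3), (3, 4), (4, 3), (5, 6)}        # hypothetical obstacle cells
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # up, down, left, right

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.2, 3000

# Q-table: state -> one value per action
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(ROWS) for c in range(COLS)}

def step(state, a):
    """Apply action a; bumping a wall or obstacle keeps the agent in place."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state, -5.0, False                    # penalize invalid moves
    if (r, c) == GOAL:
        return (r, c), 100.0, True                   # reaching the goal ends the episode
    return (r, c), -1.0, False                       # per-step cost favors short paths

for _ in range(EPISODES):
    s = START
    for _ in range(200):                             # episode step limit
        # Epsilon-greedy action selection
        a = (random.randrange(len(ACTIONS)) if random.random() < EPSILON
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        s2, reward, done = step(s, a)
        # Standard Q-Learning update rule
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Extract the greedy path from the learned Q-table
path, s = [START], START
while s != GOAL and len(path) < 100:
    s, _, _ = step(s, max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
    path.append(s)
print(path)
```

The slow convergence and high computational cost noted in the abstract stem from the many episodes such a tabular learner needs before the greedy path becomes reliable, which is the behavior the paper's classical optimizations target.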
Effective Path Planning for UAVs in Complex and Unknown Environments Through Integrated Q-Learning and Classical Algorithms
14.05.2025
283,046 bytes
Article (Conference)
Electronic resource
English