Unmanned aerial vehicle (UAV) path planning involves challenging tasks such as obstacle avoidance and efficient target reaching in complex environments. Beyond these fundamental challenges, agents are also needed that can handle diverse missions, such as round-trip navigation, without retraining for each specific task. In this study, we present a path-planning method based on reinforcement learning (RL) for a fully controllable UAV agent. We combine goal-conditioned RL and curriculum learning to enable agents to progressively master increasingly complex missions, from single-target reaching to round-trip navigation. Our experimental results demonstrate that the trained agent successfully completed 95% of simple target-reaching tasks and 70% of complex round-trip missions. The agent maintained stable performance even with multiple subgoals, achieving a success rate of over 75% in three-subgoal missions, which indicates strong potential for practical application in UAV path planning.
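
The combination described in the abstract can be illustrated with a minimal sketch: a goal-conditioned policy receives the current state together with the currently active subgoal, and a curriculum advances the agent from a single target to a round trip with several subgoals. The toy environment PointUAVEnv, the heuristic_policy placeholder, the reward shaping, and the specific curriculum stages below are illustrative assumptions, not the simulator, network, or RL algorithm used in the paper.

import numpy as np

class PointUAVEnv:
    """Toy 2-D environment: visit a sequence of subgoals (waypoints) in order."""
    def __init__(self, subgoals, step_size=0.1, tol=0.15, max_steps=300):
        self.subgoals = [np.asarray(g, dtype=float) for g in subgoals]
        self.step_size, self.tol, self.max_steps = step_size, tol, max_steps

    def reset(self):
        self.pos = np.zeros(2)
        self.idx = 0            # index of the currently active subgoal
        self.t = 0
        return self._obs()

    def _obs(self):
        # Goal-conditioned observation: current position concatenated with the active subgoal.
        goal = self.subgoals[min(self.idx, len(self.subgoals) - 1)]
        return np.concatenate([self.pos, goal])

    def step(self, action):
        self.pos = self.pos + self.step_size * np.clip(action, -1.0, 1.0)
        self.t += 1
        reached = np.linalg.norm(self.pos - self.subgoals[self.idx]) < self.tol
        reward = 1.0 if reached else -0.01   # simple shaping; an assumption, not the paper's reward
        if reached:
            self.idx += 1                    # switch the goal to the next waypoint
        done = self.idx == len(self.subgoals) or self.t >= self.max_steps
        return self._obs(), reward, done, {"success": self.idx == len(self.subgoals)}

def heuristic_policy(obs):
    # Placeholder for a learned goal-conditioned policy pi(a | s, g):
    # it simply steers toward the active subgoal encoded in the observation.
    pos, goal = obs[:2], obs[2:]
    return goal - pos

# Curriculum: missions grow from a single target to a round trip with several subgoals.
curriculum = [
    [(1.0, 0.0)],                                          # stage 1: straight to one target
    [(1.0, 0.0), (0.0, 0.0)],                              # stage 2: out and back (round trip)
    [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)],      # stage 3: round trip with subgoals
]

for stage, goals in enumerate(curriculum, start=1):
    env = PointUAVEnv(goals)
    successes = 0
    for _ in range(20):                 # in a real system, the policy would be trained per stage
        obs, done = env.reset(), False
        while not done:
            obs, reward, done, info = env.step(heuristic_policy(obs))
        successes += int(info["success"])
    print(f"stage {stage}: {successes}/20 successful episodes")
    # A real curriculum would advance to the next stage only once the success rate exceeds a threshold.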


    Title:

    A Fully Controllable UAV Using Curriculum Learning and Goal-Conditioned Reinforcement Learning: From Straight Forward to Round Trip Missions


    Contributors:
    Hyeonmin Kim (author) / Jongkwan Choi (author) / Hyungrok Do (author) / Gyeong Taek Lee (author)


    Publication date:

    2024


    Media type:

    Article (journal)


    Format:

    Electronic resource


    Language:

    Unknown




    Hierarchical Planning Through Goal-Conditioned Offline Reinforcement Learning

    Huang, Minglei / Zhan, Wei / Tomizuka, Masayoshi et al. | European Patent Office | 2024

    Free access

    GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation

    Ransiek, Joshua / Plaum, Johannes / Langner, Jacob et al. | IEEE | 2024



    Launch window analysis for round trip Mars missions.

    Deerwester, J. M. / Manning, L. A. / Swenson, B. L. | NTRS | 1968