In the multi-agent cooperative navigation problem (MCNP), each agent must reach a different target, and the objective is to minimize the time taken by the last agent to reach its target. The traditional deep reinforcement learning (DRL) approach to MCNP uses deep deterministic policy gradient (DDPG) with continuous actions, which has a huge action space and results in long training times. This article proposes a three-layer hierarchical deep reinforcement learning method for solving MCNP in environments with U-shaped obstacles. The high-level networks learn target selection, the middle-level networks learn a right-turn strategy after target selection to avoid getting stuck in U-shaped obstacles, and the low-level networks learn an obstacle-avoidance strategy. When an agent observes no obstacles, it moves toward the selected target. When it detects an obstacle, the middle-level networks are activated to obtain a direction $K$ in which to move. If there are no obstacles within a set distance in the $K$ direction, the agent takes a step in that direction; otherwise, the moving direction is determined by the low-level networks. The three layers of networks are trained by sequential interlaced learning. Because the high- and middle-level networks adopt double deep Q-networks (DDQN) with discrete action outputs, and the low-level DDPG policy is activated only when an obstacle is observed within the set distance ahead, the action space and training time are reduced considerably. Because the middle level can output right-turn actions during training, a policy that bypasses U-shaped obstacles can be learned. Simulation results show that the proposed method reduces training time and reaches the targets without collision while bypassing U-shaped obstacles.
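
The following is a minimal Python sketch of the execution-time decision flow described in the abstract, under stated assumptions: the class names (HighLevelDDQN, MidLevelDDQN, LowLevelDDPG), the observation dictionary layout, the OBSTACLE_RANGE threshold, and the toy decision rules inside each class are hypothetical placeholders standing in for the trained networks, not the authors' implementation.

```python
# Minimal sketch of the three-layer interlaced decision flow.
# All classes, observation fields, and thresholds are hypothetical
# placeholders; they are not taken from the paper's implementation.
import numpy as np

OBSTACLE_RANGE = 1.0  # assumed "set distance" for obstacle checks


class HighLevelDDQN:
    """Discrete target selection (placeholder for the high-level DDQN)."""
    def select_target(self, obs):
        # Toy rule: pick the nearest target index.
        dists = np.linalg.norm(obs["targets"] - obs["position"], axis=1)
        return int(np.argmin(dists))


class MidLevelDDQN:
    """Discrete heading choice K when an obstacle is seen (placeholder)."""
    def select_direction(self, obs, target_pos):
        # Toy rule: rotate the goal direction right by 45 degrees,
        # mimicking the learned right-turn bias for U-shaped obstacles.
        goal_dir = target_pos - obs["position"]
        angle = np.arctan2(goal_dir[1], goal_dir[0]) - np.pi / 4
        return np.array([np.cos(angle), np.sin(angle)])


class LowLevelDDPG:
    """Continuous obstacle-avoidance policy (placeholder for a DDPG actor)."""
    def act(self, obs):
        # Toy rule: steer directly away from the nearest detected obstacle.
        away = obs["position"] - obs["nearest_obstacle"]
        return away / (np.linalg.norm(away) + 1e-8)


def direction_is_clear(obs, direction, dist=OBSTACLE_RANGE):
    """True if no detected obstacle lies within `dist` along `direction`."""
    to_obstacle = obs["nearest_obstacle"] - obs["position"]
    along = float(np.dot(to_obstacle, direction))
    return along < 0 or np.linalg.norm(to_obstacle) > dist


def step_direction(obs, high, mid, low):
    """One decision step of the interlaced three-layer policy."""
    target = obs["targets"][high.select_target(obs)]
    if obs["nearest_obstacle"] is None:          # no obstacle observed:
        goal_dir = target - obs["position"]      # head straight for the target
        return goal_dir / (np.linalg.norm(goal_dir) + 1e-8)
    k_dir = mid.select_direction(obs, target)    # obstacle seen: ask mid level
    if direction_is_clear(obs, k_dir):           # K direction free at set dist
        return k_dir
    return low.act(obs)                          # else fall back to DDPG


if __name__ == "__main__":
    # Usage example with a hypothetical observation.
    obs = {
        "position": np.array([0.0, 0.0]),
        "targets": np.array([[3.0, 1.0], [5.0, -2.0]]),
        "nearest_obstacle": np.array([0.6, 0.2]),
    }
    print(step_direction(obs, HighLevelDDQN(), MidLevelDDQN(), LowLevelDDPG()))
```

In this sketch the continuous-action policy is consulted only when the discrete $K$ direction is blocked, which reflects the paper's argument for why the effective action space, and hence training time, shrinks compared with running DDPG at every step.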


    Title:

    Multi-Agent Cooperative Navigation with Interlaced Deep Reinforcement Learning


    Contributors:
    Zheng, Shengxuan (author) / Luo, Liang (author)


    Publication date:

    23.10.2024


    Format / Extent:

    940262 bytes




    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Microscopic Traffic Simulation by Cooperative Multi-agent Deep Reinforcement Learning

    Bacchiani, Giulio / Molinari, Daniele / Patander, Marco | ArXiv | 2019

    Free access


    Multi-Agent Deep Reinforcement Learning for Decentralized Cooperative Traffic Signal Control

    Zhao, Yang / Hu, Jian-Ming / Gao, Ming-Yang et al. | ASCE | 2020



    Multi-Agent Navigation with Reinforcement Learning Enhanced Information Seeking

    Zhang, Siwei / Guerra, Anna / Guidi, Francesco et al. | Deutsches Zentrum für Luft- und Raumfahrt (DLR) | 2022

    Free access