Cooperation in mixed cooperative-competitive tasks has drawn significant attention in multi-agent deep reinforcement learning: agents must cooperate with their teammates while competing against their opponents. However, most existing works treat cooperative and competitive agents identically, performing the same operations on all agents. Without distinguishing between the two groups, they may suffer from information disorder when learning an optimal cooperative policy and struggle to decide on the next-step action. To address these issues, we decompose the final Q-value into a weighted combination of three parts: the Q-values of the cooperative agents, the competitive group, and the current agent, and we provide a theoretical proof of the correctness of this decomposition. The decomposition allows cooperative and competitive agents to be considered separately. Accordingly, we propose a multi-agent actor-critic algorithm, actor-double-attention-critic (ADAC), trained under the centralized-training-with-decentralized-execution paradigm. In ADAC, networks with group-specific attention and an attentional weighting network are specially designed; with this double-attention structure, ADAC can capture the distributions of different agents and improve cooperation performance. Extensive experiments are conducted in three scenarios with nine settings against six representative methods. The results demonstrate the superiority of the proposed ADAC model over state-of-the-art methods in various mixed cooperative-competitive tasks. The code is available at https://github.com/CrazyBayes/ADAC
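
To make the decomposition concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' implementation: only the three-part weighted Q-value combination, the group-specific attention, and the attentional weighting network are taken from the abstract, while all names (DoubleAttentionCritic, weight_net, q_self/q_coop/q_comp), the scaled dot-product attention form, and the layer sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleAttentionCritic(nn.Module):
    """Illustrative sketch of a double-attention critic in the spirit of ADAC:
    group-specific attention over teammates and opponents, plus an attentional
    weighting network that mixes three Q-value parts (current agent, cooperative
    group, competitive group). Not the authors' code; details are assumptions."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        # Shared encoder for (observation, action) pairs; assumes all agents
        # share the same observation/action dimensions.
        self.encode = nn.Linear(obs_dim + act_dim, hidden)
        # Projections for scaled dot-product attention of the ego agent over a group.
        self.q_proj = nn.Linear(hidden, hidden)
        self.k_proj = nn.Linear(hidden, hidden)
        self.v_proj = nn.Linear(hidden, hidden)
        # Three Q-value heads: current agent, cooperative group, competitive group.
        self.q_self = nn.Linear(hidden, 1)
        self.q_coop = nn.Linear(2 * hidden, 1)
        self.q_comp = nn.Linear(2 * hidden, 1)
        # Attentional weighting network producing the three mixing weights.
        self.weight_net = nn.Linear(3 * hidden, 3)

    def attend(self, ego, group):
        # ego: (B, H); group: (B, N, H) -> attended group context (B, H)
        q = self.q_proj(ego).unsqueeze(1)                      # (B, 1, H)
        k, v = self.k_proj(group), self.v_proj(group)          # (B, N, H)
        scores = torch.matmul(q, k.transpose(1, 2)) / k.size(-1) ** 0.5
        return torch.matmul(F.softmax(scores, dim=-1), v).squeeze(1)

    def forward(self, ego_oa, coop_oa, comp_oa):
        # ego_oa: (B, obs+act); coop_oa / comp_oa: (B, N_group, obs+act)
        ego = self.encode(ego_oa)
        coop_ctx = self.attend(ego, self.encode(coop_oa))      # teammates
        comp_ctx = self.attend(ego, self.encode(comp_oa))      # opponents
        q_parts = torch.cat([
            self.q_self(ego),
            self.q_coop(torch.cat([ego, coop_ctx], dim=-1)),
            self.q_comp(torch.cat([ego, comp_ctx], dim=-1)),
        ], dim=-1)                                             # (B, 3)
        mix = torch.cat([ego, coop_ctx, comp_ctx], dim=-1)
        w = F.softmax(self.weight_net(mix), dim=-1)            # (B, 3)
        return (w * q_parts).sum(dim=-1, keepdim=True)         # weighted combination

# Example: batch of 32, ego agent plus 2 teammates and 3 opponents.
critic = DoubleAttentionCritic(obs_dim=10, act_dim=4)
q = critic(torch.rand(32, 14), torch.rand(32, 2, 14), torch.rand(32, 3, 14))  # (32, 1)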





    Title:

    ADAC: Actor-Double-Attention-Critic for Multi-Agent Cooperation in Mixed Cooperative-Competitive Environments


    Contributors:
    Kong, He (author) / Xing, Qianli (author) / Wang, Qi (author) / Niu, Runliang (author) / Chen, Hechang (author) / Wang, Yu (author) / Wang, Shiqi (author) / Duan, Zhiyi (author) / Chang, Yi (author)


    Publication date:

    July 1, 2025


    Format / extent:

    3861215 bytes


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Altruistic Maneuver Planning for Cooperative Autonomous Vehicles Using Multi-agent Advantage Actor-Critic

    Toghi, Behrad / Valiente, Rodolfo / Sadigh, Dorsa et al. | ArXiv | 2021

    Free access

    Multi-agent deep reinforcement learning with actor-attention-critic for traffic light control

    Wang, Bin / He, ZhengKun / Sheng, JinFang et al. | SAGE Publications | 2024


    Factored Multi-Agent Soft Actor-Critic for Cooperative Multi-Target Tracking of UAV Swarms

    Longfei Yue / Rennong Yang / Jialiang Zuo et al. | DOAJ | 2023

    Free access

    Actor-Critic Policy Learning in Cooperative Planning

    Redding, Joshua / Geramifard, Alborz / Choi, Han-Lim et al. | AIAA | 2010

