The utility of multiple reinforcement learning (RL) agents training collaboratively in a shared environment toward common objectives is increasingly evident in the Internet of Vehicles (IoV). The multi-agent Advantage Actor-Critic (MA2C) algorithm is a prominent example of such a Multi-Agent Reinforcement Learning (MARL) system. However, MA2C requires agents to share their policies, including state-action pairs and even trained models, with neighboring agents in order to overcome the challenge of partial observability. Unfortunately, this requirement amplifies communication overhead and raises privacy concerns. Federated learning (FL), a privacy-preserving machine learning method, can be applied in the MARL context by having a central server aggregate the weights of the agents' models. However, this technique assumes that all agents can execute identical actions, which may be impractical. In this paper, we introduce a novel federated A2C algorithm called Advantage Actor Federated Critic (A2FC). The proposed algorithm aggregates the agents' critic models on the server while offloading the training of the actor models to the individual agents' local machines. An experiment on an adaptive traffic signal control (ATSC) system demonstrates the method's effectiveness in personalizing agents' actions, preserving agents' privacy during training, and mitigating communication overhead.
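As a rough illustration of the split the abstract describes, the sketch below assumes a FedAvg-style mean over critic parameters; the names Agent and federated_critic_average are hypothetical, not from the paper, and the paper's exact aggregation rule and training loop are omitted.

```python
import torch
import torch.nn as nn

class Agent:
    """A2C agent with a local actor (personalized action space) and a
    critic whose weights participate in federated aggregation."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        # Actor stays on the agent's machine: its output size may
        # differ per agent (heterogeneous action spaces).
        self.actor = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )
        # Critic uses an identical architecture across agents so that
        # its weights can be averaged on the central server.
        self.critic = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

def federated_critic_average(agents):
    """FedAvg-style aggregation applied to critic weights only.

    Actor parameters and raw state-action trajectories never leave
    the agent, which is the privacy and communication argument made
    in the abstract above.
    """
    critic_states = [a.critic.state_dict() for a in agents]
    global_state = {
        key: torch.stack([s[key].float() for s in critic_states]).mean(dim=0)
        for key in critic_states[0]
    }
    for a in agents:
        a.critic.load_state_dict(global_state)

# Usage: agents with differing action counts share only their critics.
agents = [Agent(obs_dim=8, n_actions=k) for k in (2, 3, 5)]
federated_critic_average(agents)  # one aggregation round
```

Because only the critic, whose architecture can be shared even when action spaces differ, is exchanged, each agent keeps a personalized actor and transmits nothing about its observed states or chosen actions.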
Federated Multi-Agent Reinforcement Learning for Heterogeneous Action Spaces
24.06.2024
884736 bytes
Article (Conference)
Electronic resource
English
Communication-efficient and federated multi-agent reinforcement learning
BASE | 2022
Vehicular cooperative perception through action branching and federated reinforcement learning
BASE | 2022
Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning
ArXiv | 2020
British Library Conference Proceedings | 2021