This paper delves into the dynamic resource allocation challenges of spectrum sharing in Vehicle-to-Everything (V2X) communication under time-varying channel conditions. We concentrate on the dual goals of sub-channel and power allocation in a complex V2X environment, underscoring the pivotal role that the Age of Information (AoI) plays in preserving the reliability of safety-critical data across Vehicle-to-Vehicle (V2V) links. To address the complex interplay between minimizing AoI for V2V links and maximizing the overall capacity of vehicle-to-infrastructure (V2I) links, we introduce a novel approach grounded in multi-agent reinforcement learning. This strategy enables adaptive learning in response to rapidly changing V2X channel conditions, with V2V links conceptualized as agents. These agents employ an actor network to select actions and a critic network to evaluate those actions through Q-values, incorporating observations, actions, and the individual contributions of other agents via attention mechanisms. Our method is further refined by incorporating a maximum-entropy term to enhance action exploration. Through extensive simulations, we demonstrate that our algorithm allows agents to effectively sample and assess the states of their counterparts, leading to optimized decision-making.
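To illustrate the kind of attention-based critic described in the abstract, the following is a minimal PyTorch-style sketch: each agent's (observation, action) pair is embedded, and an agent's Q-value is estimated by attending over the embeddings of the other agents. All names, dimensions, and architectural details here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of an attention-based multi-agent critic, loosely in the
# spirit of the approach summarized above. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionCritic(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        # Per-agent encoder of its own (observation, action) pair.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU()
        )
        # Query/key/value projections for attending over the other agents.
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        # Q-value head combines an agent's own embedding with the attended context.
        self.q_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, obs: torch.Tensor, acts: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim); acts: (batch, n_agents, act_dim)
        e = self.encoder(torch.cat([obs, acts], dim=-1))       # (B, N, H)
        q, k, v = self.query(e), self.key(e), self.value(e)    # (B, N, H)
        scale = e.shape[-1] ** 0.5
        logits = torch.matmul(q, k.transpose(1, 2)) / scale    # (B, N, N)
        # Mask self-attention so each agent attends only to its counterparts.
        n = e.shape[1]
        mask = torch.eye(n, dtype=torch.bool, device=e.device)
        logits = logits.masked_fill(mask, float("-inf"))
        attn = F.softmax(logits, dim=-1)                       # (B, N, N)
        context = torch.matmul(attn, v)                        # (B, N, H)
        # One Q-value per agent, conditioned on its embedding plus context.
        return self.q_head(torch.cat([e, context], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    critic = AttentionCritic(obs_dim=10, act_dim=4)
    obs = torch.randn(32, 3, 10)    # batch of 32, 3 V2V agents (assumed sizes)
    acts = torch.randn(32, 3, 4)    # e.g. encoded sub-channel/power choices
    print(critic(obs, acts).shape)  # torch.Size([32, 3])
```

In a maximum-entropy actor-critic training loop, the Q-values from such a critic would be combined with an entropy bonus on the actors' policies to encourage exploration; that loop is omitted here for brevity.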
AoI-Aware Resource Allocation for C-V2X Networks via Multi-Agent Reinforcement Learning with Attention
2024-10-07
Conference paper
English