The rapid changes in high-mobility vehicular environments make it difficult for base stations (BSs) to obtain complete channel state information. Moreover, road and traffic safety demand low-latency, high-reliability communication, which poses significant challenges for spectrum resource allocation in vehicular networks. To address these challenges, this paper proposes a method that combines dueling double deep Q-network (D3QN) reinforcement learning (RL) with a long short-term memory (LSTM) network. Using a Manhattan Grid Layout City Model as the underlying environment, a multi-agent model is constructed in which each vehicle-to-vehicle (V2V) link acts as an individual agent. The agents interact with the environment and with one another, receive feedback, and then determine the optimal resource allocation to ensure both high-quality mobile service and a safe driving environment. Experimental results show that the proposed method outperforms the conventional D3QN network on both the vehicle-to-infrastructure (V2I) links and the V2V links.
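
The paper itself is not reproduced here, so the following is only a minimal sketch of the kind of architecture the abstract describes: a dueling double deep Q-network whose per-agent state encoder is an LSTM over a recent observation history, together with the double-DQN training target. It assumes a PyTorch implementation; the names D3QNWithLSTM and double_dqn_target and all observation/action dimensions are hypothetical and not taken from the paper.

# Hedged sketch: dueling double DQN (D3QN) with an LSTM state encoder for one V2V agent.
# All sizes and names below are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class D3QNWithLSTM(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden_dim=128):
        super().__init__()
        # LSTM summarizes the agent's recent observation history (fast-varying channel state).
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # Dueling heads: state value V(s) and action advantages A(s, a).
        self.value_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))
        self.adv_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, action_dim))

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim); use the last LSTM output as the state summary.
        out, _ = self.lstm(obs_seq)
        h = out[:, -1, :]
        value = self.value_head(h)        # (batch, 1)
        adv = self.adv_head(h)            # (batch, action_dim)
        # Dueling combination: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        return value + adv - adv.mean(dim=1, keepdim=True)

def double_dqn_target(online_net, target_net, next_obs_seq, reward, done, gamma=0.99):
    # Double DQN: the online network selects the next action, the target network evaluates it.
    # reward and done are float tensors of shape (batch,).
    with torch.no_grad():
        next_actions = online_net(next_obs_seq).argmax(dim=1, keepdim=True)
        next_q = target_net(next_obs_seq).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

if __name__ == "__main__":
    net = D3QNWithLSTM(obs_dim=16, action_dim=20)   # hypothetical sizes
    q_values = net(torch.randn(4, 10, 16))          # batch of 4, history length 10
    print(q_values.shape)                           # torch.Size([4, 20])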


    Title: Enhanced Resource Allocation in Vehicular Networks via Multi-Agent Reinforcement Learning

    Contributors: Zhang, Yu (author) / Wang, Shufei (author) / Hua, Minyu (author) / Zhang, Yibin (author) / Wang, Yu (author) / Ohtsuki, Tomoaki (author) / Sari, Hikmet (author) / Gui, Guan (author)

    Publication date: 2024-06-24

    Size: 620030 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    Multi-Agent Reinforcement Learning for Slicing Resource Allocation in Vehicular Networks
    Cui, Yaping / Shi, Hongji / Wang, Ruyan et al. | IEEE | 2024

    Cooperative perception in vehicular networks using multi-agent reinforcement learning
    Abdel-Aziz, M. K. (Mohamed K.) / Samarakoon, S. (Sumudu) / Perfecto, C. (Cristina) et al. | BASE | 2021