In recent years, vehicular communications have attracted great interest in both academia and industry for their potential to promote safety and autonomous driving. Unlike classical communication scenarios, in vehicular communications the optimal resource allocation must be accomplished in real time in order to minimize the response delay. This poses a substantial challenge for current machine-learning-based resource optimization methods, which may be sample inefficient, especially when the problem space becomes extremely large. In this study, we develop a fast reinforcement learning (RL) framework for the real-time resource optimization of vehicular communications, in which the transmit power and the frequency channels to be accessed must be jointly allocated. The main idea of our new method is to incorporate a sample-efficient structured exploration mechanism in the action space, which first sets aside local exploitation and focuses on randomized global exploration. Thus, our exploration-first method, in contrast to classical exploitation-first RL, can reconstruct the coarse-grained global landscape of a huge Q-table from only a few samples. This learned prior knowledge then remarkably accelerates the convergence of the subsequent incremental learning process by concentrating on the identified attentional subspace of the Q-table. As demonstrated by numerical results, our new method reduces the time complexity, or the response delay, by roughly tenfold. As such, our fast RL method holds great potential for challenging optimization problems in which acquiring massive training samples is time-consuming, and hence provides great promise for emerging vehicular networks.
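The abstract describes a two-stage scheme: a randomized, exploration-first pass that sketches a coarse Q landscape from a handful of samples, followed by incremental learning confined to the identified attentional subspace. Since the paper's environment model, state definition, and hyperparameters are not given in this record, the following is only a minimal single-state (bandit-style) sketch of that idea, assuming a toy channel/power grid, a noisy reward stand-in, and a simple top-k subspace rule; none of these choices are the authors' actual method.

```python
import numpy as np

# Hypothetical toy setup: the true reward of each (channel, power) pair is
# unknown to the learner and only observed through noisy samples.
rng = np.random.default_rng(0)
N_CHANNELS, N_POWERS = 20, 20              # joint action space: 400 actions
true_q = rng.normal(size=(N_CHANNELS, N_POWERS))

def sample_reward(ch, pw):
    """Noisy observation of the unknown reward for one action (illustrative)."""
    return true_q[ch, pw] + 0.3 * rng.normal()

# ---- Phase 1: exploration-first, randomized global probing -----------------
# Draw only a few random actions and build a coarse estimate of the Q landscape.
n_probe = 60                               # far fewer probes than 400 actions
q_est = np.zeros((N_CHANNELS, N_POWERS))
counts = np.zeros_like(q_est)
for _ in range(n_probe):
    ch, pw = rng.integers(N_CHANNELS), rng.integers(N_POWERS)
    counts[ch, pw] += 1
    q_est[ch, pw] += (sample_reward(ch, pw) - q_est[ch, pw]) / counts[ch, pw]

# Identify an "attentional subspace": the top-k probed actions (assumed rule).
k = 10
probed = np.argwhere(counts > 0)
top = probed[np.argsort(q_est[probed[:, 0], probed[:, 1]])[-k:]]

# ---- Phase 2: incremental learning restricted to the subspace --------------
alpha, eps = 0.1, 0.1
for _ in range(200):
    if rng.random() < eps:                 # light exploration inside the subspace
        ch, pw = top[rng.integers(len(top))]
    else:                                  # greedy action within the subspace
        ch, pw = top[np.argmax(q_est[top[:, 0], top[:, 1]])]
    q_est[ch, pw] += alpha * (sample_reward(ch, pw) - q_est[ch, pw])

best = top[np.argmax(q_est[top[:, 0], top[:, 1]])]
print("selected channel/power:", best, "estimated reward:", q_est[tuple(best)])
```

The same structure would carry over to a full Q-table by running the random-probe pass per state and restricting the subsequent epsilon-greedy updates to each state's top-k actions; the specific subspace size and learning rates above are placeholders.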


    Title:
    Fast Reinforcement Learning for Resource Optimization in Dynamic Vehicular Communications

    Contributors:
    Jiang, Shuwen (author) / Li, Bin (author) / Zhao, Chenglin (author)

    Publication date:
    2025-05-01

    Size:
    2508472 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English



    Similar titles:

    Reinforcement Learning for Resource Provisioning in Vehicular Cloud

    Salahuddin, Mohammad A. / Al-Fuqaha, Ala / Guizani, Mohsen | ArXiv | 2018

    Meta-Reinforcement Learning Based Resource Allocation for Dynamic V2X Communications

    Yuan, Y / Zheng, G / Wong, KK et al. | BASE | 2021

    Multi-Agent Reinforcement Learning for Slicing Resource Allocation in Vehicular Networks

    Cui, Yaping / Shi, Hongji / Wang, Ruyan et al. | IEEE | 2024