In this paper, the distributed edge caching problem in fog radio access networks (F-RANs) is investigated. Considering the unknown spatio-temporal content popularity and user preferences, a user request model based on a hidden Markov process is proposed to characterize the fluctuating spatio-temporal traffic demands in F-RANs. A Q-learning method within the reinforcement learning (RL) framework is then put forth to seek the optimal caching policy in a distributed manner, enabling fog access points (F-APs) to learn and track the underlying dynamic process without extra communication cost. Furthermore, a more efficient Q-learning method with value function approximation (Q-VFA-learning) is proposed to reduce complexity and accelerate convergence. Simulation results show that the proposed method outperforms traditional methods.
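The Q-VFA-learning idea in the abstract can be made concrete with a minimal sketch: tabular Q-learning stores one value per state-action pair, while a linear value function approximation stores only a fixed-size weight vector. The Python sketch below illustrates this at a single F-AP; the library size, cache capacity, hit-based reward, feature design and popularity drift are all illustrative assumptions, not the paper's exact model:

```python
import numpy as np

# Minimal sketch of Q-learning with linear value function approximation
# (Q-VFA) at a single fog access point (F-AP). Library size, cache
# capacity, features, reward and the popularity drift are illustrative
# assumptions, not the paper's exact formulation.

rng = np.random.default_rng(0)

N_FILES = 20       # content library size (assumed)
CACHE_SIZE = 4     # F-AP cache capacity in files (assumed)
ALPHA = 0.05       # learning rate
GAMMA = 0.9        # discount factor
EPSILON = 0.1      # exploration rate

def features(state, action):
    """phi(s, a): decayed per-file request counts plus a one-hot
    encoding of the file selected for caching."""
    return np.concatenate([state, np.eye(N_FILES)[action]])

def q_value(w, state, action):
    """Linear approximation Q(s, a) = w^T phi(s, a)."""
    return w @ features(state, action)

def choose_action(w, state):
    """Epsilon-greedy choice of the file to place in the cache."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_FILES))
    return int(np.argmax([q_value(w, state, a) for a in range(N_FILES)]))

# Stand-in for the hidden-Markov request model: a request
# distribution that drifts slowly over time.
popularity = rng.dirichlet(np.ones(N_FILES))

w = np.zeros(2 * N_FILES)                       # weights of the linear VFA
cache = set(rng.choice(N_FILES, CACHE_SIZE, replace=False).tolist())
state = np.zeros(N_FILES)                       # decayed request counts

for t in range(5000):
    # Caching decision: possibly replace the least-requested cached file.
    action = choose_action(w, state)
    if action not in cache:
        evict = min(cache, key=lambda f: state[f])
        cache.discard(evict)
        cache.add(action)

    # A user request arrives; the reward is a cache hit (assumed model).
    request = int(rng.choice(N_FILES, p=popularity))
    reward = 1.0 if request in cache else 0.0

    next_state = 0.95 * state                   # exponential count decay
    next_state[request] += 1.0

    # TD(0) update of the linear weights.
    best_next = max(q_value(w, next_state, a) for a in range(N_FILES))
    td_error = reward + GAMMA * best_next - q_value(w, state, action)
    w += ALPHA * td_error * features(state, action)

    state = next_state
    if t % 50 == 0:                             # slow popularity drift
        popularity = 0.9 * popularity + 0.1 * rng.dirichlet(np.ones(N_FILES))
        popularity = popularity / popularity.sum()

print("final cached files:", sorted(cache))
```

Because the weight vector has a fixed size regardless of how many states are visited, the per-request update cost stays constant; this is the intuition behind the reduced complexity and faster convergence attributed to Q-VFA-learning over tabular Q-learning.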


    Title :

    Distributed Edge Caching via Reinforcement Learning in Fog Radio Access Networks


    Contributors :
    Lu, Liuyang (author) / Jiang, Yanxiang (author) / Bennis, Mehdi (author) / Ding, Zhiguo (author) / Zheng, Fu-Chun (author) / You, Xiaohu (author)


    Publication date :

    2019-04-01


    Size :

    2406041 bytes (approx. 2.3 MB)





    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English



    Similar titles :

    Cooperative edge caching via multi-agent reinforcement learning in fog radio access networks

    Chang, Q. (Qi) / Jiang, Y. (Yanxiang) / Zheng, F.-C. (Fu-Chun) et al. | BASE | 2022


    Distributed Edge Caching in Ultra-Dense Fog Radio Access Networks: A Mean Field Approach

    Hu, Yabai / Jiang, Yanxiang / Bennis, Mehdi et al. | IEEE | 2018



    Cooperative Edge Caching via Federated Deep Reinforcement Learning in Fog-RANs

    Zhang, M. (Min) / Jiang, Y. (Yanxiang) / Zheng, F.-C. (Fu-Chun) et al. | BASE | 2021
