We consider a reinforcement learning (RL) based joint cache placement and delivery (CPD) policy for cellular networks with limited caching capacity at both Base Stations (BSs) and User Equipments (UEs). The dynamics of the users' file preferences are modeled by a Markov process. User requests depend on the current preferences and on the content of the user's cache. We assume probabilistic models for cache placement at both the UEs and the BSs. When the network receives a request for an un-cached file, it fetches the file from the core network via a backhaul link. File delivery is based on network-level orthogonal multipoint multicast transmissions, in which all BSs caching a specific file transmit it collaboratively in a dedicated resource. File reception depends on the state of the wireless channels. We design the CPD policy to account for the user Quality of Service and the backhaul load, using an Actor-Critic RL framework with two neural networks. Simulation results demonstrate the merits of the devised CPD policy.
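To make the described architecture concrete, the following is a minimal, hypothetical sketch of a two-network Actor-Critic setup along the lines the abstract outlines. The state and action dimensions, network sizes, and the reward (assumed to trade off user QoS against backhaul load) are illustrative assumptions, not the authors' implementation.

    # Minimal Actor-Critic sketch (PyTorch). All dimensions and hyperparameters are assumptions.
    import torch
    import torch.nn as nn

    STATE_DIM = 16   # assumed: encodes user preference state and cache occupancy
    N_FILES = 8      # assumed: one caching probability per file in the library

    class Actor(nn.Module):
        """Maps the observed state to per-file caching probabilities (probabilistic placement)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, N_FILES), nn.Sigmoid(),
            )
        def forward(self, state):
            return self.net(state)

    class Critic(nn.Module):
        """Estimates the value of a state under the current CPD policy."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )
        def forward(self, state):
            return self.net(state)

    actor, critic = Actor(), Critic()
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def actor_critic_step(state, action, reward, next_state, gamma=0.99):
        """One TD(0) actor-critic update from a single observed transition."""
        value = critic(state)
        with torch.no_grad():
            target = reward + gamma * critic(next_state)
        td_error = target - value

        # Critic: regress the value estimate toward the TD target.
        critic_loss = td_error.pow(2).mean()
        critic_opt.zero_grad()
        critic_loss.backward()
        critic_opt.step()

        # Actor: increase the log-probability of placements with positive TD error.
        probs = actor(state)
        dist = torch.distributions.Bernoulli(probs=probs)
        actor_loss = -(dist.log_prob(action).sum(-1) * td_error.detach().squeeze(-1)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()

    # Example transition with placeholder data: sample a placement, then update both networks.
    s = torch.rand(STATE_DIM)
    a = torch.distributions.Bernoulli(probs=actor(s)).sample()
    actor_critic_step(s, a, reward=torch.tensor(1.0), next_state=torch.rand(STATE_DIM))

The two networks mirror the abstract's description: the actor outputs the probabilistic cache placement, and the critic supplies the value estimate used to reduce the variance of the policy-gradient update.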


    Title:
    Joint Cache Placement and Delivery Design using Reinforcement Learning for Cellular Networks

    Contributors:
    Amidzadeh, Mohsen (author) / Al-Tous, Hanan (author) / Tirkkonen, Olav (author) / Zhang, Junshan (author)

    Publication date:
    01.04.2021

    Format / Extent:
    4133943 bytes

    Media type:
    Conference paper

    Format:
    Electronic resource

    Language:
    English




    Joint Edge Content Cache Placement and Recommendation: Bayesian Approach

    Krishnendu, S. / Bharath, B. N. / Bhatia, Vimal | IEEE | 2021



    Cache Placement Solutions in Software-Defined Radio Access Networks

    Dao, Ngoc-Dung / Farmanbar, Hamid / Zhang, Hang | IEEE | 2017


    Cache Placement and Power Allocation in Offshore Maritime Wireless Networks

    Sun, Shixuan / Dai, Yanpeng / Lyu, Ling | IEEE | 2023