Using a UAV to build an aerial mobile small cell can provide more flexible and efficient access services for ground terminal users. Constrained by the UAV's coverage range and limited energy, it is necessary to study how to build a fast, efficient, and energy-saving air-ground collaborative network. To cope with complex dynamic scenarios, the UAV needs to be deployed at an optimal coverage position while reducing both path loss and energy consumption during the deployment process. Based on deep reinforcement learning, a strategy for autonomous UAV deployment and energy efficiency optimization was proposed. The coverage state set of the UAV was established, and energy efficiency was used as the reward function. A deep neural network and Q-learning were used to guide the UAV to make autonomous decisions and deploy to the optimal position. Simulation results show that the proposed method reduces deployment time by 60% and energy consumption by 10%~20%.
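
The abstract outlines a deep Q-learning controller: the UAV's coverage state is the input, energy efficiency is the reward, and a neural network approximates the Q-values that drive the positioning decisions. The following is a minimal, hypothetical sketch of such a loop in PyTorch; the grid size, state layout, action set, and reward form are illustrative assumptions, since the abstract does not specify them and this is not the authors' code.

import random
import torch
import torch.nn as nn

GRID = 10                    # candidate positions form a GRID x GRID square (assumed)
ACTIONS = 5                  # hover, move north, south, east, west (assumed)
STATE_DIM = 2 + GRID * GRID  # UAV (x, y) plus a flattened user-coverage map (assumed)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

def energy_efficiency_reward(covered_users, move_energy, hover_energy):
    # Assumed reward form: users served per unit of energy spent in this step.
    return covered_users / (move_energy + hover_energy + 1e-6)

q_net = QNet()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.95, 0.1

def select_action(state):
    # Epsilon-greedy action selection over the Q-network's outputs.
    if random.random() < epsilon:
        return random.randrange(ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def td_update(state, action, reward, next_state):
    # One-step Q-learning target: r + gamma * max_a' Q(s', a').
    s = torch.tensor(state, dtype=torch.float32)
    s2 = torch.tensor(next_state, dtype=torch.float32)
    with torch.no_grad():
        target = reward + gamma * q_net(s2).max()
    prediction = q_net(s)[action]
    loss = nn.functional.mse_loss(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In use, the UAV would repeatedly observe its coverage state, call select_action to pick a repositioning step, receive the energy-efficiency reward for that step, and apply td_update, converging toward the position that maximizes users served per unit of energy.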





    Title :

    Autonomous deployment and energy efficiency optimization strategy of UAV based on deep reinforcement learning


    Contributors:
    Yi ZHOU (author) / Xiaoyong MA (author) / Fuxiao GAO (author) / Wei LI (author) / Nan CHENG (author) / Ning LU (author)


    Publication date :

    2019



    Type of media :

    Article (Journal)


    Type of material :

    Electronic Resource


    Language :

    Unknown





    Similar titles :

    Deep reinforcement learning-based vehicle energy efficiency autonomous learning system

    Qi, Xuewei / Luo, Yadan / Wu, Guoyuan et al. | IEEE | 2017


    Deep Reinforcement Learning-Based Vehicle Energy Efficiency Autonomous Learning System

    Qi, Xuewei / Luo, Yadan / Wu, Guoyuan et al. | British Library Conference Proceedings | 2017



    Train temporary parking strategy optimization method based on deep reinforcement learning

    XU KAI / ZHANG HAOTONG / HUANG DEQING et al. | European Patent Office | 2023
