Utilizing a UAV to build an aerial mobile small cell can provide more flexible and efficient access services for ground terminal users. Constrained by the UAV's limited coverage and energy, it is necessary to study how to build a fast, efficient and energy-saving air-ground collaborative network. To cope with complex dynamic scenarios, the UAV needs to be deployed at the optimal coverage position while reducing both path loss and energy consumption during the deployment process. Based on deep reinforcement learning, a strategy for autonomous UAV deployment and energy efficiency optimization was proposed. The coverage state set of the UAV was established, and energy efficiency was used as the reward function. A deep neural network and Q-learning were used to guide the UAV to make autonomous decisions and deploy to the optimal position. The simulation results show that the proposed method reduces deployment time by about 60% and energy consumption by 10%-20%.
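The abstract describes a DQN-style scheme: the UAV's coverage situation forms the state, energy efficiency is the reward, and a deep neural network trained with Q-learning selects deployment actions. The record does not give implementation details, so the following Python code is only a minimal illustrative sketch under assumed settings: a toy grid world, a simple distance-based path-loss/rate model, and hypothetical constants (GRID, N_USERS, ALT, P_TX, P_MOVE, P_HOVER). It is not the authors' code.

import math, random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

GRID, N_USERS, ALT = 20, 15, 50.0          # grid size, ground users, UAV altitude (assumed)
P_TX, P_MOVE, P_HOVER = 1.0, 3.0, 2.0      # transmit / move / hover power (assumed units)
ACTIONS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # hover or move one cell

rng = np.random.default_rng(0)
users = rng.uniform(0, GRID, size=(N_USERS, 2))       # fixed ground-user positions

def step(pos, a):
    """Apply an action; return next position and an energy-efficiency reward."""
    nxt = np.clip(pos + np.array(ACTIONS[a]), 0, GRID - 1)
    d = np.sqrt(((users - nxt) ** 2).sum(axis=1) + ALT ** 2)   # 3-D user distances
    rate = np.log2(1.0 + 1e4 / d ** 2).sum()                   # crude SNR -> sum rate
    energy = P_TX + (P_MOVE if a else P_HOVER)                  # per-step energy
    return nxt, rate / energy                                    # reward = rate per unit energy

class QNet(nn.Module):
    """Small MLP mapping the (normalized) UAV position to action values."""
    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                               nn.Linear(64, 64), nn.ReLU(),
                               nn.Linear(64, len(ACTIONS)))
    def forward(self, x):
        return self.f(x)

q, q_tgt = QNet(), QNet()
q_tgt.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buf, GAMMA, EPS = deque(maxlen=10_000), 0.95, 0.2

for ep in range(200):                          # training episodes
    pos = rng.uniform(0, GRID, size=2)
    for t in range(50):                        # steps per episode
        s = torch.tensor(pos / GRID, dtype=torch.float32)
        with torch.no_grad():
            greedy = int(q(s).argmax())
        a = random.randrange(len(ACTIONS)) if random.random() < EPS else greedy
        nxt, r = step(pos, a)
        buf.append((pos / GRID, a, r, nxt / GRID))
        pos = nxt
        if len(buf) >= 64:                     # one SGD step on a sampled minibatch
            batch = random.sample(buf, 64)
            s_b = torch.tensor(np.array([b[0] for b in batch]), dtype=torch.float32)
            a_b = torch.tensor([b[1] for b in batch])
            r_b = torch.tensor([b[2] for b in batch], dtype=torch.float32)
            n_b = torch.tensor(np.array([b[3] for b in batch]), dtype=torch.float32)
            with torch.no_grad():
                target = r_b + GAMMA * q_tgt(n_b).max(dim=1).values
            pred = q(s_b).gather(1, a_b.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, target)
            opt.zero_grad(); loss.backward(); opt.step()
    if ep % 10 == 0:
        q_tgt.load_state_dict(q.state_dict())  # periodic target-network sync

After training, the greedy policy moves the UAV toward a position that trades sum rate against movement and hovering energy, which is the same energy-efficiency-as-reward idea the abstract outlines; the environment and constants here are placeholders rather than the paper's simulation setup.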
Autonomous deployment and energy efficiency optimization strategy of UAV based on deep reinforcement learning
2019
Article (Journal)
Electronic Resource
Unknown
Metadata by DOAJ is licensed under CC BY-SA 1.0
Deep Reinforcement Learning-Based Vehicle Energy Efficiency Autonomous Learning System
British Library Conference Proceedings | 2017
Train temporary parking strategy optimization method based on deep reinforcement learning
European Patent Office | 2023