Abstract: Recently, the trends of automation and intelligence in vehicular networks have led to the emergence of intelligent connected vehicles (ICVs), and intelligent applications such as autonomous driving have developed rapidly. These applications are typically compute-intensive and require large amounts of computational resources, which conflicts with the limited resources of vehicles. This contradiction has become a bottleneck in the development of vehicular networks. To address this challenge, researchers have combined mobile edge computing (MEC) with vehicular networks and proposed vehicular edge computing networks (VECNs). Deploying MEC servers near vehicles allows compute-intensive applications to be offloaded to the MEC servers for execution, alleviating the vehicles' computational burden. However, the high dynamics of vehicular networks, which make traditional optimization methods such as convex/non-convex optimization less suitable, are often inadequately considered in existing task offloading schemes. Toward this end, we propose a reinforcement learning based task offloading scheme, i.e., a deep Q-learning algorithm, to solve the delay minimization problem in VECNs. Extensive numerical results corroborate the superior performance of our proposed scheme in reducing the processing delay of vehicles' computation tasks.
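The abstract does not spell out the algorithmic details, but the core of such a scheme is standard deep Q-learning over offloading decisions. The following is a minimal sketch, assuming a toy state of (task size, required CPU cycles, channel gain), a binary action (local execution vs. offloading to a MEC server), and a reward equal to the negative processing delay; all environment dynamics, rates, and hyperparameters here are illustrative assumptions, not the paper's actual system model.

# Minimal deep Q-learning sketch for binary task offloading (local vs. MEC).
# All dynamics, rewards, and hyperparameters below are illustrative
# assumptions; the paper's actual system model is not reproduced here.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

class QNet(nn.Module):
    def __init__(self, state_dim=3, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def random_state():
    # Hypothetical state: task size (Mbit), CPU cycles (Gcycles), channel gain.
    return torch.tensor([random.uniform(0.1, 1.0),
                         random.uniform(0.1, 1.0),
                         random.uniform(0.1, 1.0)])

def delay(state, action):
    size, cycles, gain = state.tolist()
    if action == 0:                      # local execution on the vehicle
        return cycles / 0.5              # assumed 0.5 Gcycles/s vehicle CPU
    rate = 2.0 * gain                    # assumed uplink rate (Mbit/s)
    return size / rate + cycles / 5.0    # transmission delay + MEC execution

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
opt = optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)
gamma, eps = 0.99, 0.1

for step in range(5_000):
    s = random_state()
    # Epsilon-greedy action selection over the two offloading choices.
    if random.random() < eps:
        a = random.randrange(2)
    else:
        with torch.no_grad():
            a = q_net(s).argmax().item()
    r = -delay(s, a)                     # reward: negative processing delay
    s_next = random_state()              # next task arrives (i.i.d. toy model)
    buffer.append((s, a, r, s_next))

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        ss = torch.stack([b[0] for b in batch])
        aa = torch.tensor([b[1] for b in batch])
        rr = torch.tensor([b[2] for b in batch])
        sn = torch.stack([b[3] for b in batch])
        q = q_net(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
        with torch.no_grad():            # bootstrap target from frozen network
            target = rr + gamma * target_net(sn).max(1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad(); loss.backward(); opt.step()

    if step % 500 == 0:                  # periodic target-network sync
        target_net.load_state_dict(q_net.state_dict())

In the paper's actual setting, the state would presumably also encode vehicle mobility and server load, and the action space would cover multiple MEC servers; the replay buffer and periodic target-network synchronization above are the standard DQN ingredients that stabilize training under such highly dynamic conditions.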
A Reinforcement Learning Based Task Offloading Scheme for Vehicular Edge Computing Network
01.01.2019
12 pages
Article/chapter (book)
Electronic resource
English