With the explosion of connected devices and Internet-of-Things (IoT) services in smart cities, the challenge of meeting the demands of urban computing is increasingly prominent. Recent advances in vehicle-to-everything (V2X) communications make Internet-connected vehicles in urban areas excellent candidates for carrying out computing tasks. However, because of the limited computing capacity of vehicles, computation within the vehicular network alone cannot satisfy the demands of smart city applications. Edge computing, which delivers computing tasks to edge servers (e.g., base stations (BSs) or roadside units (RSUs)) with abundant computing resources, is a possible solution. However, the static deployment of edge servers may cause severe load imbalance among servers in both real-time communication and computation, thereby degrading system performance. This paper explores a two-hop vehicle-assisted edge computing network framework in which vehicles can offload tasks beyond their capabilities to underloaded edge servers by relaying via neighboring vehicles. According to the state of the time-varying vehicular environment and the dynamic traffic loads among RSUs, we formulate the joint task offloading, relay node selection, and resource allocation problem as a Markov decision process (MDP), aiming to maximize the computation offloading capacity of the system under load-balancing and latency constraints. We propose a deep reinforcement learning (DRL) algorithm that uses a deep neural network (DNN) as the Q action-value function approximator to solve this problem. Extensive simulation results show that the proposed scheme significantly improves system performance compared with state-of-the-art algorithms.
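To make the solution approach concrete, the following is a minimal sketch of a DQN-style agent of the kind the abstract describes (a DNN approximating the Q action-value function over discrete offloading actions). It is an illustration under assumptions, not the paper's implementation: the state/action dimensions, hyperparameters, and the interpretation of actions (local execution vs. offloading via a relay vehicle to an RSU) are hypothetical placeholders.

    # Illustrative DQN sketch (PyTorch); names and dimensions are hypothetical,
    # not taken from the paper.
    import random
    from collections import deque

    import torch
    import torch.nn as nn
    import torch.optim as optim

    class QNetwork(nn.Module):
        """DNN approximating Q(s, a): maps a state vector to one Q-value per
        discrete action (e.g., local execution, or offload via relay vehicle i
        to RSU j)."""
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            )

        def forward(self, s):
            return self.net(s)

    class DQNAgent:
        def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3, eps=0.1):
            self.q = QNetwork(state_dim, n_actions)
            self.target = QNetwork(state_dim, n_actions)
            self.target.load_state_dict(self.q.state_dict())
            self.opt = optim.Adam(self.q.parameters(), lr=lr)
            self.replay = deque(maxlen=10_000)  # experience replay buffer
            self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

        def act(self, state):
            # Epsilon-greedy selection over offloading actions.
            if random.random() < self.eps:
                return random.randrange(self.n_actions)
            with torch.no_grad():
                q_values = self.q(torch.as_tensor(state, dtype=torch.float32))
            return q_values.argmax().item()

        def step(self, transition, batch_size=64):
            # transition = (state, action, reward, next_state, done_flag);
            # the reward would encode offloading capacity, load balance, and
            # latency penalties in the MDP the abstract formulates.
            self.replay.append(transition)
            if len(self.replay) < batch_size:
                return
            batch = random.sample(self.replay, batch_size)
            s, a, r, s2, done = (torch.as_tensor(x, dtype=torch.float32)
                                 for x in zip(*batch))
            q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = r + self.gamma * self.target(s2).max(1).values * (1 - done)
            loss = nn.functional.mse_loss(q_sa, target)
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()

In use, an environment loop would observe the vehicular/RSU state, call act() to pick an offloading decision, and feed the resulting transition to step(); the target network would be periodically synchronized with the online network, as is standard for DQN.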
Intelligent Offloading Balance for Vehicular Edge Computing and Networks
IEEE Transactions on Intelligent Transportation Systems; Vol. 26, No. 5; pp. 5792-5803
2025-05-01
3,282,348 bytes
Journal article
Electronic resource
English