The cloud-edge-terminal collaborative network (CETCN) has become a key enabler of next-generation wireless networks. However, due to privacy concerns, terminals and the edge server may be unwilling to expose individual data to the cloud server. At the same time, the limited computing capability of the edge server and terminals leads to long latency. Therefore, in this paper, we employ a federated deep reinforcement learning (DRL) algorithm, named the federated learning-based Double Deep Q-Network (FL-DDQN) algorithm, to solve the task offloading problem in the CETCN. Specifically, we first model the task offloading problem as the minimization of the weighted energy consumption and latency of the CETCN, subject to subcarrier constraints and a maximum task processing latency. Second, we propose a DRL algorithm to obtain a suboptimal solution to the optimization problem. Third, to improve data security and reduce the workload of the terminals and the edge server, the FL-DDQN algorithm is further employed, in which the DDQN model is trained cooperatively by the terminals and the edge server. Finally, simulation results demonstrate that the proposed method is superior to benchmark solutions in terms of total energy consumption and latency.
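The optimization step the abstract describes can be made concrete with a schematic formulation. This is a sketch based only on the quantities the abstract names (weighted energy and latency, subcarrier constraints, a maximum processing latency); all symbols here (offloading decisions x, binary subcarrier assignments c, weights w_E and w_T) are hypothetical notation, not necessarily the paper's.

```latex
% Schematic objective, assuming N terminals, offloading decisions x,
% and binary subcarrier assignments c_{n,k}; notation is hypothetical.
\min_{\mathbf{x},\,\mathbf{c}} \; \sum_{n=1}^{N}
      \Bigl( w_E \, E_n(\mathbf{x},\mathbf{c}) + w_T \, T_n(\mathbf{x},\mathbf{c}) \Bigr)
\quad \text{s.t.} \quad
      \sum_{n=1}^{N} c_{n,k} \le 1 \;\; \forall k, \qquad
      T_n(\mathbf{x},\mathbf{c}) \le T^{\mathrm{max}} \;\; \forall n .
```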
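Likewise, the FL-DDQN scheme the abstract outlines, where local Double-DQN models are trained at the terminals and the edge server and only model weights are aggregated, can be sketched as below. This is a minimal illustration, not the paper's implementation: the FedAvg aggregation rule, the reward sign convention (negative weighted cost), and all function names are assumptions.

```python
# Minimal sketch of FL-DDQN-style federated training, assuming each
# client (terminal or edge server) trains a local DDQN and shares only
# model weights with the aggregator; names and shapes are illustrative.
import numpy as np

GAMMA = 0.9  # assumed discount factor

def double_dqn_target(reward, next_q_online, next_q_target, done):
    """Double-DQN target: the online net selects the greedy action,
    the target net evaluates it, which reduces overestimation bias."""
    a_star = np.argmax(next_q_online)          # action chosen by online net
    return reward + (0.0 if done else GAMMA * next_q_target[a_star])

def fedavg(client_weights, client_sizes):
    """FedAvg: sample-size-weighted average of client parameter vectors.
    Only weights are exchanged, so raw task/channel data stays local."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

if __name__ == "__main__":
    # Three hypothetical clients with Q-networks flattened to vectors.
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=8) for _ in range(3)]
    global_w = fedavg(weights, client_sizes=[100, 250, 150])
    print("aggregated weights:", np.round(global_w, 3))

    # One Double-DQN backup for a sample transition; the reward is taken
    # as the negative weighted energy-latency cost (an assumption).
    y = double_dqn_target(reward=-1.2,
                          next_q_online=np.array([0.1, 0.4, 0.3, 0.0]),
                          next_q_target=np.array([0.2, 0.1, 0.5, 0.3]),
                          done=False)
    print("TD target:", round(y, 3))
```

The privacy property the abstract claims follows from the exchange pattern: clients upload parameter vectors rather than raw task or channel data, and the Double-DQN target decouples action selection from action evaluation.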
Federated Deep Reinforcement Learning-enabled Task Offloading in Cloud-Edge-Terminal Collaborative Networks
24.06.2024
1644387 bytes
Article (Conference)
Electronic resource
English
Adaptive task offloading in V2X networks based on deep reinforcement learning
British Library Conference Proceedings | 2022