This paper presents a novel two-stage deep reinforcement learning (DRL) algorithm built on a Transformer Encoder-based Deep Deterministic Policy Gradient (TEDDPG) framework, named TS-TEDDPG, which jointly optimizes the learning latency, energy consumption, and model accuracy of Asynchronous Federated Learning (AFL) systems with prescribed security. In the first stage, the TEDDPG learns the CPU configuration for local training and the transmit power for model uploading. In the second stage, a linear programming-based device scheduling and cooperative jamming strategy efficiently optimizes the remaining decisions and evaluates the immediate reward used to train the TEDDPG. Experimental results based on a CNN model and the MNIST dataset demonstrate that the proposed TS-TEDDPG reduces training latency and energy consumption by 68.6% compared to its benchmarks when the required test accuracy is 0.9.
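The abstract describes a two-stage decision loop: a Transformer-encoder actor outputs continuous per-device actions (stage 1), then a linear program settles the scheduling decisions (stage 2). The following is a minimal sketch of that structure, not the paper's implementation; the class and function names (TEDDPGActor, schedule_devices_lp), the utility model, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch of a two-stage TEDDPG-style decision step.
# Names, dimensions, and the utility model are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linprog


class TEDDPGActor(nn.Module):
    """Transformer encoder over per-device states, followed by a head that
    outputs continuous actions (CPU frequency, transmit power) per device."""

    def __init__(self, state_dim, d_model=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)  # [cpu_freq, tx_power], scaled to (0, 1)

    def forward(self, states):  # states: (batch, n_devices, state_dim)
        h = self.encoder(self.embed(states))
        return torch.sigmoid(self.head(h))


def schedule_devices_lp(utility, max_selected):
    """Stage 2 (assumed form): LP relaxation of device scheduling that
    maximizes total utility with at most `max_selected` devices, x in [0, 1]."""
    n = len(utility)
    res = linprog(
        c=-utility,                       # maximize utility => minimize -utility
        A_ub=np.ones((1, n)), b_ub=[max_selected],
        bounds=[(0, 1)] * n, method="highs",
    )
    return res.x


# One decision step of the assumed two-stage loop.
actor = TEDDPGActor(state_dim=5)
state = torch.rand(1, 8, 5)                    # 8 devices, 5 state features each
action = actor(state).detach().numpy()[0]      # stage 1: per-device actions
utility = action[:, 0] - 0.5 * action[:, 1]    # placeholder utility trade-off
sched = schedule_devices_lp(utility, max_selected=4)  # stage 2: scheduling
print("scheduled fractions:", np.round(sched, 2))
```

In the paper's loop, the stage-2 LP solution would also determine the immediate reward stored in the replay buffer to train the TEDDPG actor and critic; that feedback path is omitted here.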
Privacy-Preserving Resource Allocation for Asynchronous Federated Learning
24.06.2024
1558231 bytes
Article (Conference)
Electronic resource
English
Preserving Privacy with Federated Learning in Route Choice Behavior Modeling
Transportation Research Record | 2021
Efficient privacy-preserving federated learning method for Internet of Ships
DOAJ | 2022