Compared with modularized rule-based frameworks, end-to-end deep reinforcement learning (DRL) algorithms have demonstrated greater adaptability in autonomous driving (AD) scenarios. However, DRL algorithms often struggle with model convergence and sample dependence, which limits their applicability to complex driving tasks, and they lack interpretability. To address these limitations, we present a novel hybrid algorithmic framework called federated learning (FL)-based distributed proximal policy optimization (FLDPPO). The framework combines modularized rule-based complex network cognition with end-to-end DRL, fusing the mechanism model with data-driven learning. Our algorithm generates dynamic driving recommendations that guide the agent's rule learning, enabling the model to handle complex driving environments. In addition, FLDPPO mitigates model robustness and sample dependence issues through a distributed multiagent aggregation architecture based on model confidence. By measuring model confidence, the architecture learns to effectively aggregate knowledge from each agent's unique experience distribution. Simulation results show that the proposed FLDPPO algorithm achieves competitive performance on various benchmarks.
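The confidence-based aggregation described above can be pictured as a variant of federated averaging in which each agent's policy parameters are weighted by a normalized confidence score rather than averaged uniformly. The following Python sketch is purely illustrative: the function name aggregate_policies, the parameter layout, and the idea of deriving confidences from recent episode returns are assumptions for exposition, not the authors' implementation.

from typing import Dict, List
import numpy as np

def aggregate_policies(
    local_params: List[Dict[str, np.ndarray]],
    confidences: List[float],
) -> Dict[str, np.ndarray]:
    """Aggregate per-agent policy parameters, weighting each agent by its
    normalized model confidence instead of a plain uniform FedAvg."""
    weights = np.asarray(confidences, dtype=np.float64)
    weights = weights / weights.sum()  # normalize to a convex combination
    global_params: Dict[str, np.ndarray] = {}
    for name in local_params[0]:
        global_params[name] = sum(
            w * p[name] for w, p in zip(weights, local_params)
        )
    return global_params

# Toy usage: three agents, each holding a small linear policy layer.
rng = np.random.default_rng(0)
agents = [{"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)} for _ in range(3)]
conf = [0.9, 0.5, 0.2]  # hypothetical confidences, e.g., from recent returns
print(aggregate_policies(agents, conf)["b"])

Under this weighting, agents whose local experience yields higher-confidence models contribute more to the global policy, which is one plausible way to aggregate knowledge across heterogeneous experience distributions.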
Complex Network Cognition-Based Federated Reinforcement Learning for End-to-End Urban Autonomous Driving
IEEE Transactions on Transportation Electrification; 10, 3; 7513-7525
2024-09-01
Article (Journal)
Electronic Resource
English