In autonomous driving (AD) tasks, data-driven deep reinforcement learning (DRL) outperforms rule-based methods in continuous decision-making and adaptability. However, traditional DRL relies on hand-crafted reward functions, which introduce objective-alignment challenges and reward loopholes. Moreover, its black-box structure makes the decision-making process difficult to explain, which directly limits DRL performance in complex driving situations. To address these shortcomings, a preference-based decomposable proximal policy optimization algorithm (PDPPO) is proposed for reliable interactive urban AD. The framework decomposes the federated reinforcement learning (FRL) algorithm from multiple perspectives using a rule-based preference model, yielding highly available algorithmic performance for AD. PDPPO employs a data-rule fusion-driven hybrid vision transformer to overcome the objective-alignment and high-dimensional state-space representation challenges of traditional DRL in complex urban traffic environments. To address algorithmic trustworthiness, PDPPO further models the multi-agent FRL co-optimization process as an interpretable self-organized group collaboration process, enabling the algorithm to balance model robustness and sample efficiency through preference-heuristic parameter aggregation. Simulation results demonstrate that the proposed PDPPO framework supports interpretable single-agent decision control and multi-agent co-optimization, and exhibits competitive performance on various benchmark tests.
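The abstract does not detail how preference-heuristic parameter aggregation is computed; the sketch below is a minimal illustration, assuming each federated agent uploads local policy parameters together with a rule-based preference score, and that the server combines them via a preference-weighted average. The names aggregate_parameters and preference_scores are illustrative and not the authors' API.

```python
# Minimal sketch of preference-weighted federated parameter aggregation
# (illustrative, not the paper's actual implementation).
import numpy as np

def aggregate_parameters(local_params, preference_scores, temperature=1.0):
    """Fuse per-agent parameter dicts into global parameters.

    local_params      : list of {layer_name: np.ndarray}, one dict per agent
    preference_scores : list of floats from a rule-based preference model
    temperature       : softmax temperature controlling how strongly the
                        aggregation favours high-preference agents
    """
    scores = np.asarray(preference_scores, dtype=np.float64) / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()  # normalized preference-heuristic weights

    global_params = {}
    for name in local_params[0]:
        stacked = np.stack([p[name] for p in local_params], axis=0)
        # Weighted average: higher-preference agents contribute more,
        # trading robustness (consensus across agents) against sample
        # efficiency (exploiting the best-behaved local policies).
        global_params[name] = np.tensordot(weights, stacked, axes=1)
    return global_params

# Example: three agents with identical layer shapes but different
# (hypothetical) rule-compliance scores.
agents = [{"policy/w": np.random.randn(4, 2)} for _ in range(3)]
prefs = [0.9, 0.4, 0.7]
print(aggregate_parameters(agents, prefs)["policy/w"].shape)  # (4, 2)
```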
A Preference-Based Multi-Agent Federated Reinforcement Learning Algorithm Framework for Trustworthy Interactive Urban Autonomous Driving
IEEE Transactions on Intelligent Transportation Systems; 26(7); 10131-10145
2025-07-01
4404833 bytes
Article (Journal)
Electronic Resource
English