Autonomous driving has made significant advances in recent years, with reinforcement learning emerging as a promising approach for learning high-level driving policies in urban traffic scenarios. One challenge in applying reinforcement learning, however, is that Q-value overestimation leads to suboptimal and unstable driving policies. To address this, we propose a novel distributional reinforcement learning method that incorporates implicit quantiles into the actor-critic framework, enabling more accurate estimation of Q-values. A second issue is sample inefficiency. To strengthen the representation learning of urban traffic scenarios and improve sample efficiency, we introduce a temporal-wise attention-based model that effectively aggregates heterogeneous types of state information. Extensive experiments show that our approach outperforms the baselines on the NoCrash and CoRL benchmarks: the proposed method not only learns improved policies but also surpasses the baselines in dense traffic scenarios, while achieving comparable performance in the remaining traffic scenarios.
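The two components named in the abstract can be sketched concretely. First, the distributional critic: rather than regressing a scalar Q-value, an implicit quantile network (IQN, Dabney et al., 2018) conditions the critic on sampled quantile fractions and trains it with a quantile Huber loss, which is the mechanism that curbs the overestimation the abstract refers to. The following is a minimal, hypothetical PyTorch sketch; the layer widths, cosine-embedding size, and class names are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitQuantileCritic(nn.Module):
    """Maps (state, action, tau) to the tau-quantile of the return Z(s, a)."""
    def __init__(self, state_dim, action_dim, hidden_dim=256, n_cos=64):
        super().__init__()
        self.n_cos = n_cos
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim), nn.ReLU())
        # Cosine embedding of the quantile fraction tau, as in IQN.
        self.tau_embed = nn.Linear(n_cos, hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, state, action, taus):
        # taus: (batch, n_taus) quantile fractions drawn from U(0, 1).
        h = self.body(torch.cat([state, action], dim=-1))        # (B, H)
        i = torch.arange(1, self.n_cos + 1, device=taus.device).float()
        cos = torch.cos(taus.unsqueeze(-1) * i * torch.pi)       # (B, T, n_cos)
        phi = F.relu(self.tau_embed(cos))                        # (B, T, H)
        return self.head(h.unsqueeze(1) * phi).squeeze(-1)      # (B, T)

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """Quantile Huber loss between predicted and target return quantiles."""
    # pred: (B, T_pred), target: (B, T_tgt), taus: (B, T_pred).
    td = target.unsqueeze(1) - pred.unsqueeze(2)                 # (B, T_pred, T_tgt)
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    # Asymmetric weighting pushes each output toward its own quantile.
    weight = (taus.unsqueeze(2) - (td.detach() < 0).float()).abs()
    return (weight * huber / kappa).mean()
```

Second, the heterogeneous-state fusion: one plausible reading of "temporal-wise attention over heterogeneous state information" is a set of per-modality encoders projecting inputs such as camera, route, and vehicle-measurement features to a common width, with attention pooling the resulting temporal sequence into a single state embedding. Again a hypothetical sketch; `TemporalAttentionFusion` and its dimensions are illustrative rather than the authors' architecture.

```python
class TemporalAttentionFusion(nn.Module):
    def __init__(self, modal_dims, d_model=128, n_heads=4):
        super().__init__()
        # One encoder per heterogeneous input modality (assumed, not specified).
        self.encoders = nn.ModuleList(nn.Linear(d, d_model) for d in modal_dims)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, d_model))

    def forward(self, modal_seqs):
        # modal_seqs: list of tensors, each shaped (batch, time, modal_dim).
        fused = sum(enc(x) for enc, x in zip(self.encoders, modal_seqs))
        q = self.query.expand(fused.size(0), -1, -1)   # learned query, (B, 1, D)
        out, _ = self.attn(q, fused, fused)            # attend over time steps
        return out.squeeze(1)                          # (B, D) state embedding
```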
Heterogeneous Information Fusion-Based Distributional Reinforcement Learning for Autonomous Driving
2024-09-24
Conference paper
Electronic Resource
English
Distributional expert demonstrations for autonomous driving
European Patent Office | 2023
Distributional expert demonstrations for autonomous driving
European Patent Office | 2025
Navigating autonomous vehicles in uncertain environments with distributional reinforcement learning
SAGE Publications | 2024