Given the complexity of interaction in ambiguous right-of-way scenarios, interactions between Autonomous Vehicles (AVs) and Human-driven Vehicles (HVs) pose considerable challenges to the safety and efficiency of the traffic system. Existing AVs struggle to comprehend and apply common HV social norms, especially the proactive behavior exhibited by adept human drivers in ambiguous right-of-way scenarios. In this study, we propose a novel framework that leverages expert priors for proactive-aware decision-making under ambiguous right-of-way, merging Reinforcement Learning (RL) with parameterized modeling. Building upon unprotected-turning interactions from real-world driving datasets, we select typical cases with ambiguous right-of-way as human-expert priors, which are used to guide the learning of the RL agent. A Hidden Markov Model (HMM), governed by interpretable parameters derived from the expert priors, then introduces a human-like decision-updating mechanism into the AV strategy. In experiments on typical driving tasks, our approach balances safety and efficiency when handling right-of-way ambiguities, and the guidance of expert priors yields superior decision-making performance compared with established baselines. Furthermore, the results indicate that the proposed method enables AVs to accelerate convergence of the interaction through consistent probing and decision updates.
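As a rough illustration of the HMM-based decision-updating idea summarized in the abstract, the minimal Python sketch below filters a belief over the interacting human driver's intent (yield vs. assert right-of-way) and maps it to a proactive action. The hidden states, transition and emission probabilities, observation encoding, and the probe_threshold rule are all illustrative assumptions for exposition, not parameters or logic taken from the paper.

import numpy as np

# Hidden states of the interacting human driver (assumed): 0 = yields, 1 = asserts right-of-way.
TRANSITION = np.array([[0.9, 0.1],
                       [0.2, 0.8]])      # assumed state-transition probabilities
# Observations of the HV's motion (assumed): 0 = decelerating, 1 = holding speed, 2 = accelerating.
EMISSION = np.array([[0.6, 0.3, 0.1],
                     [0.1, 0.3, 0.6]])   # assumed emission probabilities

def update_belief(belief, observation):
    """One forward-filtering step of the HMM: predict the next state, then correct with the observation."""
    predicted = TRANSITION.T @ belief
    corrected = predicted * EMISSION[:, observation]
    return corrected / corrected.sum()

def proactive_action(belief, probe_threshold=0.6):
    """Toy decision rule: probe (creep forward) while intent is ambiguous,
    commit to the unprotected turn once the belief that the HV yields is high enough."""
    p_yield = belief[0]
    if p_yield > probe_threshold:
        return "commit_turn"
    elif p_yield > 1.0 - probe_threshold:
        return "probe_creep"
    return "wait"

belief = np.array([0.5, 0.5])            # uninformative prior over the HV's intent
for obs in [1, 0, 0]:                    # e.g. HV holds speed, then decelerates twice
    belief = update_belief(belief, obs)
    print(proactive_action(belief), belief)

In a full RL setting, such a belief update would run inside the environment loop, with the filtered intent feeding the agent's observation or reward shaping; here it only demonstrates how consistent probing sharpens the belief and speeds up the decision to commit.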
Toward Proactive-Aware Autonomous Driving: A Reinforcement Learning Approach Utilizing Expert Priors During Unprotected Turns
IEEE Transactions on Intelligent Transportation Systems; 26, 3; 3700-3712
2025-03-01
2532800 bytes
Article (Journal)
Electronic Resource
English