Reinforcement learning (RL) shows promise for autonomous driving decision-making. However, designing reward functions that guide RL agents toward complex optimization objectives is challenging. This article proposes a framework that learns the reward function from human driving data to guide the RL agent's learning. The proposed framework consists of three components: trajectory sampling, offline preference learning, and RL. First, feasible trajectories are generated by sampling end targets from a reachable state space. Subsequently, a novel offline preference-learning framework trains a transformer network by comparing the generated feasible trajectories with human driving trajectories; the transformer models the human driving decision-making process and thereby yields a reward function. Finally, the learned reward function is incorporated into an RL framework to obtain the final driving decision network. To validate the proposed method, a highway simulator is established in which the surrounding vehicles' trajectories are derived from real-world driving scenarios. Compared with baseline algorithms, the proposed method achieves the best performance in terms of decision safety and human-likeness. The learned policy network also performs well in driving decision-making tasks with longer total decision steps. Experimental results demonstrate that the proposed method obviates the need to manually design sophisticated reward functions in RL-based autonomous driving decision-making systems.
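The offline preference-learning step in the abstract can be illustrated with a minimal sketch: a small transformer encoder scores whole trajectories, and a Bradley-Terry-style pairwise loss pushes the score of a recorded human trajectory above that of a sampled feasible trajectory. The module names, feature dimensions, pooling scheme, and choice of loss below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): preference learning of a
# trajectory-level reward model with a transformer encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrajectoryRewardModel(nn.Module):
    """Scores a trajectory (sequence of state-action features) with a scalar reward."""

    def __init__(self, feat_dim: int = 16, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, horizon, feat_dim) -> one scalar reward per trajectory
        h = self.encoder(self.embed(traj))            # (batch, horizon, d_model)
        return self.head(h.mean(dim=1)).squeeze(-1)   # mean-pool over time -> (batch,)


def preference_loss(model: nn.Module,
                    human_traj: torch.Tensor,
                    sampled_traj: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: the human trajectory should score higher than the sampled one."""
    r_human = model(human_traj)
    r_sampled = model(sampled_traj)
    return -F.logsigmoid(r_human - r_sampled).mean()


if __name__ == "__main__":
    model = TrajectoryRewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    human = torch.randn(8, 20, 16)    # stand-in for recorded human trajectories
    sampled = torch.randn(8, 20, 16)  # stand-in for sampled feasible trajectories
    loss = preference_loss(model, human, sampled)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a full pipeline of the kind the abstract describes, the trained reward model would then replace a hand-designed reward inside the RL loop, scoring the agent's trajectories or trajectory segments during policy optimization.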
HGRL: Human-Driving-Data Guided Reinforcement Learning for Autonomous Driving
IEEE Transactions on Intelligent Vehicles; Vol. 9, No. 12; pp. 8089-8103
01.12.2024
3243451 bytes
Article (journal)
Electronic resource
English