In autonomous driving, the ego vehicle and its surrounding traffic environment are subject to uncertainties such as parameter and structural errors and the behavioral randomness of other road users; moreover, environmental sensors are noisy or even biased. This problem can be formulated as a partially observable Markov decision process (POMDP). Existing methods lack a good representation of historical information, which makes finding an optimal policy very challenging. This paper proposes a belief state separated reinforcement learning (RL) algorithm for decision-making of autonomous driving in uncertain environments. We extend the separation principle from linear Gaussian systems to general nonlinear stochastic environments, where the belief state, defined as the posterior distribution of the true state given the observation-action history, is shown to be a sufficient statistic of historical information. The belief state is estimated from historical information by action-enhanced variational inference and is proved to satisfy the Markov property, which allows the optimal policy to be obtained with standard RL algorithms for Markov decision processes. To improve learning performance, the policy gradient computed from a task-specific prior model is mixed with that computed from the interaction data. The proposed algorithm is evaluated on a multi-lane autonomous driving task in which the surrounding vehicles exhibit behavioral uncertainty and the observations are noisy. Simulation results show that, compared with existing RL algorithms, the proposed method achieves a higher average return and better driving performance.
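As a concrete illustration of the pipeline described in the abstract, the snippet below is a minimal sketch (not the authors' implementation), assuming a PyTorch setup: a recurrent variational encoder maps the observation-action history to a Gaussian belief state, a policy acts on that belief, and the policy gradient from a task-specific prior-model batch is mixed with the gradient from real interaction data. The names BeliefEncoder, BeliefPolicy, mixed_policy_gradient_loss, and the mixing weight beta are illustrative assumptions, not identifiers from the paper.

```python
# Minimal sketch of belief-state RL under the assumptions stated above.
import torch
import torch.nn as nn


class BeliefEncoder(nn.Module):
    """Action-enhanced variational encoder: q(s_t | o_1:t, a_1:t-1)."""

    def __init__(self, obs_dim, act_dim, belief_dim, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        self.mean = nn.Linear(hidden, belief_dim)
        self.log_std = nn.Linear(hidden, belief_dim)

    def forward(self, obs_seq, act_seq):
        # obs_seq: (B, T, obs_dim); act_seq: (B, T, act_dim) with a_0 = 0.
        h, _ = self.rnn(torch.cat([obs_seq, act_seq], dim=-1))
        h_t = h[:, -1]                                 # summary of the history
        mean, std = self.mean(h_t), self.log_std(h_t).exp()
        belief = mean + std * torch.randn_like(std)    # reparameterized sample
        return belief, mean, std


class BeliefPolicy(nn.Module):
    """Gaussian policy acting on the belief state (Markovian by construction)."""

    def __init__(self, belief_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(belief_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, belief):
        return torch.distributions.Normal(self.net(belief), self.log_std.exp())


def mixed_policy_gradient_loss(policy, prior_batch, real_batch, beta=0.5):
    """REINFORCE-style loss mixing a prior-model batch with real interaction data.

    Each batch is a tuple (beliefs, actions, returns); beta weights the
    prior-model gradient against the data gradient (an assumed mixing rule).
    """
    def pg_loss(beliefs, actions, returns):
        log_prob = policy.dist(beliefs).log_prob(actions).sum(-1)
        return -(log_prob * returns).mean()

    return beta * pg_loss(*prior_batch) + (1.0 - beta) * pg_loss(*real_batch)


if __name__ == "__main__":
    # Dummy shapes only, to show how the pieces fit together.
    B, T, obs_dim, act_dim, belief_dim = 8, 10, 12, 2, 16
    enc = BeliefEncoder(obs_dim, act_dim, belief_dim)
    pol = BeliefPolicy(belief_dim, act_dim)
    belief, _, _ = enc(torch.randn(B, T, obs_dim), torch.randn(B, T, act_dim))
    actions, returns = pol.dist(belief).sample(), torch.randn(B)
    loss = mixed_policy_gradient_loss(pol, (belief, actions, returns),
                                      (belief, actions, returns))
    loss.backward()
```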
Belief state separated reinforcement learning for autonomous vehicle decision making under uncertainty
2021-09-19
1005283 bytes
Conference paper
Electronic Resource
English
Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation
British Library Conference Proceedings | 2020
Highway Traffic Modeling and Decision Making for Autonomous Vehicle Using Reinforcement Learning
British Library Conference Proceedings | 2018