An automated vehicle needs to predict the future evolution of a perceived traffic situation to interact safely and comfortably with surrounding vehicles. This work focuses on generating predictions by executing a traffic simulation. The advantage of this simulation-based prediction is that the predictions of all vehicles are constructed simultaneously and can interact with each other. Moreover, conditional predictions become possible, e.g., "How would the traffic situation evolve if the automated vehicle merged in front of or behind another vehicle?" The behavior model is crucial for the accuracy of the prediction. This thesis therefore investigates three approaches to learning a behavior model: Multi-Step Behavior Cloning, Reinforcement Learning, and Inverse Reinforcement Learning.

For Multi-Step Behavior Cloning, the behavior model is trained to select an action sequence, and hence a trajectory, as similar as possible to those of human drivers starting from the same initial situation. The training requires a differentiable simulation environment, which is introduced in this work.

In contrast, the training goal of Reinforcement Learning (RL) is to maximize a hand-defined reward function. This allows explicit goals to be formulated, such as avoiding collisions, remaining on the road, and maintaining safety distances. A modification of the method is proposed to represent different driving styles, e.g., sporty or careful driving, with a single behavior model. To model human driving with RL, the reward function must be adapted until the resulting trajectories are sufficiently similar to human trajectories. This tedious procedure can be automated with Inverse Reinforcement Learning (IRL); to this end, Adversarial Inverse Reinforcement Learning (AIRL) is employed. With the reconstructed reward function, the behavior model is additionally trained in fictional critical situations to obtain a more robust model.

Finally, all trained models are compared under equal conditions on a roundabout not seen during training. The IRL algorithms achieve the best results, with collision rates below 1% and a root mean squared prediction error (RMSE) below 22 m. RL and IRL reduce the collision rate compared to Behavior Cloning because, beyond the goal of pure imitation, they directly penalize collisions.
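To make the Multi-Step Behavior Cloning objective above concrete, the following minimal sketch shows how a trajectory loss can be backpropagated through a differentiable simulation into a policy network. It is an illustration only: the class names, network sizes, and the sim.step interface are assumptions, not the thesis implementation.

    import torch

    class PolicyNet(torch.nn.Module):
        """Maps a vehicle state vector to an action (e.g., acceleration, steering)."""
        def __init__(self, state_dim=16, action_dim=2):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(state_dim, 64), torch.nn.Tanh(),
                torch.nn.Linear(64, action_dim))

        def forward(self, state):
            return self.net(state)

    def rollout_loss(policy, sim, init_state, human_traj):
        """Unroll the policy alongside a recorded human trajectory.

        Because each sim.step is differentiable (a hypothetical interface
        standing in for the differentiable environment), the accumulated
        trajectory error can be backpropagated through the whole rollout
        into the policy weights, the core idea of multi-step cloning."""
        state, loss = init_state, 0.0
        for human_state in human_traj:
            action = policy(state)
            state = sim.step(state, action)  # differentiable dynamics step
            loss = loss + ((state - human_state) ** 2).mean()
        return loss / len(human_traj)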
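An RL reward with explicit goals and a driving-style parameter could look roughly as follows. The terms and weights are invented for illustration; the thesis defines its own reward, and only the general structure (collision penalty, road and gap terms, style-dependent trade-off between progress and comfort) is meant to carry over.

    def reward(state, action, style):
        """Illustrative hand-defined reward; all weights are placeholders.

        style in [0, 1] blends careful (0) and sporty (1) driving. Feeding
        the same parameter to the policy lets a single behavior model
        represent a range of driving styles."""
        r = 0.0
        if state.collided:
            r -= 100.0                          # strongly penalize collisions
        if not state.on_road:
            r -= 10.0                           # remain on the road
        gap_deficit = max(0.0, state.safe_gap - state.front_gap)
        r -= 1.0 * gap_deficit                  # maintain the safety distance
        r += (0.5 + style) * state.speed        # sporty styles value progress
        r -= (1.5 - style) * abs(action.accel)  # careful styles value comfort
        return r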
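For the IRL step, AIRL recovers a reward from demonstrations by training a discriminator against the current policy. A minimal sketch of the resulting policy reward, assuming a learned reward network f_net and a stochastic policy exposing a log_prob method (both hypothetical interfaces):

    def airl_reward(f_net, policy, state, action):
        """AIRL policy reward: r(s, a) = f(s, a) - log pi(a | s).

        In AIRL the discriminator has the form D = exp(f) / (exp(f) + pi),
        so log D - log(1 - D) reduces to f - log pi. The recovered f can
        then serve as the reconstructed reward for retraining the model
        in additional fictional critical situations."""
        f = f_net(state, action)                 # learned reward term
        log_pi = policy.log_prob(state, action)  # policy density at (s, a)
        return f - log_pi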


    Title:

    Learning Driver Behavior Models for Predicting Urban Traffic Situations
    (German title: Lernen von Fahrermodellen zur Prognose urbaner Verkehrssituationen)



    Publication date:

    2024


    Size:

    256 pages, 10 MB



    Type of media:

    Miscellaneous


    Type of material:

    Electronic Resource


    Language:

    English