Ensuring safe and capable motion planning is paramount for automated vehicles. Traditional methods are limited in their ability to handle complex and unpredictable traffic situations. Model-free reinforcement learning (RL) addresses this limitation by generalizing across different traffic situations without requiring explicit knowledge of all possible outcomes. However, it introduces its own challenge: an inherent lack of safety guarantees. To bridge this gap, we integrate online reachability analysis into model-free RL to provide real-time safety guarantees. Reachability analysis identifies unsafe states and actions, enabling provably safe decision-making for automated vehicles. We evaluate the effectiveness of our approach through extensive numerical experiments. Our results demonstrate that we can efficiently provide safety guarantees without impairing the performance of the learned agent.
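
Since this is a catalog record, no method details are given beyond the abstract. Purely as an illustration of the general idea the abstract names (verifying the learned agent's actions online against a reachability check and falling back to a verified-safe action otherwise), the following Python sketch wraps a safety filter around an RL policy. The point-mass dynamics, the discrete acceleration set, and names such as `safety_filter` and `is_action_safe` are assumptions made for this example, not the authors' published method or toolchain.

```python
# Illustrative sketch only: a reachability-based safety layer that verifies the
# RL agent's proposed action online and substitutes a verified-safe fallback.
# The point-mass model, action set, and function names are assumptions for this
# example, not the authors' implementation.
import random

ACTIONS = [-2.0, 0.0, 2.0]  # assumed discrete longitudinal accelerations in m/s^2


def reachable_position_bound(position, velocity, accel, horizon=1.0):
    """Over-approximate the farthest position reachable within the horizon."""
    v_end = max(0.0, velocity + accel * horizon)
    return position + max(velocity, v_end) * horizon


def is_action_safe(position, velocity, accel, obstacle_position, horizon=1.0):
    """An action is considered safe if its reachable set stays clear of the obstacle."""
    return reachable_position_bound(position, velocity, accel, horizon) < obstacle_position


def safety_filter(proposed_action, position, velocity, obstacle_position):
    """Keep the learned action if verified safe, otherwise fall back to a safe one."""
    if is_action_safe(position, velocity, proposed_action, obstacle_position):
        return proposed_action
    safe_set = [a for a in ACTIONS if is_action_safe(position, velocity, a, obstacle_position)]
    if not safe_set:
        return min(ACTIONS)  # last resort: strongest braking
    return min(safe_set, key=lambda a: abs(a - proposed_action))


# Example: the learned policy proposes an acceleration; the filter verifies it online.
proposed = random.choice(ACTIONS)
executed = safety_filter(proposed, position=0.0, velocity=10.0, obstacle_position=11.0)
print(f"proposed={proposed}, executed={executed}")
```

In practice, the reachable sets would come from a dedicated online reachability computation rather than this closed-form bound; the sketch only shows where such a check slots into the action-selection loop.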


    Title: Safe Reinforcement Learning for Automated Vehicles via Online Reachability Analysis

    Contributors:

    Published in:

    Publication date: 2024-09-01

    Size: 3,473,062 bytes (approx. 3.3 MB)

    Type of media: Article (Journal)

    Type of material: Electronic Resource

    Language: English




    Safe Planning with Game-Theoretic Formulation, Reachability Analysis and Reinforcement Learning

    Shang, Xu / Sagheb, Shahabedin / Eskandarian, Azim | IEEE | 2023


    Safe Platooning of Unmanned Aerial Vehicles via Reachability

    Chen, Mo / Hu, Qie / Mackin, Casey et al. | arXiv | 2015




    CommonRoad-Reach: A Toolbox for Reachability Analysis of Automated Vehicles

    Liu, Edmond Irani / Würsching, Gerald / Klischat, Moritz et al. | IEEE | 2022