Learning Urban Driving Policies using Deep Reinforcement Learning

Autonomous driving in urban settings requires intelligent decision-making to deal with complex behaviors in dense traffic scenarios. Traditional modular methods address these challenges with classical rule-based approaches but require heavy engineering effort to scale to diverse and unseen environments. Recently, Deep Reinforcement Learning (DRL) has provided a data-driven framework for decision-making and has been applied to urban driving. However, prior works that employ end-to-end DRL with high-dimensional sensor inputs report poor performance on complex urban driving tasks. In this work, we present a framework that combines modular and DRL approaches to solve the planning and control subproblems in urban driving. We design an input representation that enables our DRL agent to learn the complex urban driving tasks of lane-following, driving around turns and intersections, avoiding collisions with other dynamic actors, and following traffic-light rules. The agent trained with our proposed approach achieves state-of-the-art performance on the NoCrash benchmark in the CARLA urban driving simulator.
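As a rough illustration of the idea in the abstract (a structured, low-dimensional input representation fed to a standard DRL algorithm instead of raw sensor data), the sketch below defines a toy Gymnasium environment and trains a PPO policy on it. The environment, its observation layout, the reward shaping, and the use of PPO via stable-baselines3 are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: an affordance-style, low-dimensional observation for urban
# driving, trained with an off-the-shelf DRL algorithm. All names, dynamics, and
# the reward are illustrative assumptions, not the paper's method.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class UrbanDrivingEnv(gym.Env):
    """Toy stand-in for a simulator-backed environment exposing a structured state."""

    def __init__(self):
        # [lateral offset, heading error, speed, distance to lead vehicle,
        #  distance to next traffic light, light state (-1 red / +1 green)]
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(6,), dtype=np.float32)
        # [steering, throttle/brake]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(-0.1, 0.1, size=6).astype(np.float32)
        return self.state, {}

    def step(self, action):
        steer, accel = np.clip(action, -1.0, 1.0)
        # Placeholder dynamics: in practice the simulator (e.g. CARLA) advances the world.
        self.state = np.clip(
            self.state + 0.01 * np.array([steer, steer, accel, -accel, -0.01, 0.0],
                                         dtype=np.float32),
            -1.0, 1.0)
        lateral, heading = self.state[0], self.state[1]
        # Illustrative reward: stay centered in the lane and aligned with it.
        reward = float(1.0 - abs(lateral) - abs(heading))
        terminated = bool(abs(lateral) >= 1.0)  # treat leaving the lane as a failure
        return self.state, reward, terminated, False, {}


if __name__ == "__main__":
    env = UrbanDrivingEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # short run just to show the training loop
```

In a real setup, the placeholder dynamics would be replaced by queries to the driving simulator, and the structured observation would be produced by the modular components that the framework combines with the DRL policy.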
19.09.2021
1693142 bytes
Conference paper
Electronic resource
English
Autonomous Driving using Deep Reinforcement Learning in Urban Environment
BASE | 2019
Deep Reinforcement Learning for Optimizing Carpooling Policies
Europäisches Patentamt | 2019