Traffic oscillations degrade efficiency, increase safety risks, and lead to excessive energy consumption. To address this, we propose the Bilateral Control Model with Deep Reinforcement Learning (BCM-DRL), which integrates Deep Reinforcement Learning (DRL), specifically the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, to suppress oscillations and enhance stability. Using the I-80 Next Generation Simulation (NGSIM) dataset, BCM-DRL is trained and evaluated against the Bilateral Control Model (BCM) and the Car-Following Model with Deep Reinforcement Learning (CFM-DRL). Simulation results show that BCM-DRL reduces the cumulative damping ratio by 75%, decreases fuel consumption by 21.6%, and achieves near-zero Time-to-Instability (TIT) values, significantly improving stability and efficiency. These findings validate BCM-DRL as an effective approach to mitigating traffic oscillations and optimizing vehicle control.
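As a rough illustration of the control structure the abstract names, the sketch below combines a standard bilateral control law (which weighs gaps and relative speeds to both the leader and the follower) with a residual correction term standing in for the TD3 policy output. The gains `k_d`, `k_v` and the helper `bcm_acceleration` are illustrative assumptions, not the paper's formulation.

```python
def bcm_acceleration(x_lead, x_ego, x_follow,
                     v_lead, v_ego, v_follow,
                     k_d=0.2, k_v=0.4, u_drl=0.0):
    """Bilateral control acceleration with an optional learned correction.

    k_d, k_v : illustrative gap and speed gains (hypothetical values).
    u_drl    : residual action from a TD3 policy (assumed interface),
               expected to be bounded by the vehicle's acceleration limits.
    """
    gap_term = (x_lead - x_ego) - (x_ego - x_follow)    # front gap minus rear gap
    speed_term = (v_lead - v_ego) - (v_ego - v_follow)  # symmetric relative speeds
    return k_d * gap_term + k_v * speed_term + u_drl


# Example: the ego vehicle sits closer to its leader than to its follower
# and is faster than both, so the law commands a deceleration.
a = bcm_acceleration(x_lead=30.0, x_ego=18.0, x_follow=0.0,
                     v_lead=12.0, v_ego=13.0, v_follow=12.5)
print(f"commanded acceleration: {a:.2f} m/s^2")
```

In this sketch the DRL agent only perturbs the bilateral base law rather than replacing it; how the paper actually couples the TD3 output to the controller is not specified in the abstract.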
Bilateral Control Model for Autonomous Vehicles Based on Deep Reinforcement Learning
IEEE Transactions on Intelligent Transportation Systems; 26, 5; 6216-6230
01.05.2025
6383835 bytes
Journal article
Electronic resource
English
Deep Reinforcement-Learning-based Driving Policy for Autonomous Road Vehicles
ArXiv | 2019
Deep reinforcement-learning-based driving policy for autonomous road vehicles
IET | 2019
Deep reinforcement-learning-based driving policy for autonomous road vehicles
Wiley | 2020
Reinforcement Learning-Based Predictive Control for Autonomous Electrified Vehicles
British Library Conference Proceedings | 2018