This paper applies Deep Reinforcement Learning (DRL) to traffic signal regulation as a means of mitigating urban traffic congestion. Using the Simulation of Urban Mobility (SUMO) platform, intricate traffic conditions are simulated with microscopic accuracy. A state representation and an action space are defined as the foundation for DRL. The learning process is driven by a Deep Q-Network (DQN) architecture, carefully tuned hyperparameters, and epsilon-greedy exploration. To improve stability, training transitions are gathered and stored in an experience replay buffer. The DQN-based controller is integrated into the SUMO simulation and adjusts traffic signals dynamically, and periodic updates to a target network further increase stability. Evaluation metrics include average waiting time, traffic flow efficiency, and congestion levels, on which the approach compares favorably to baseline methods. In conclusion, this study demonstrates the usefulness of DRL for optimizing traffic signal control, offering viable strategies to reduce urban traffic congestion and increase transportation efficiency.
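The abstract names the standard DQN ingredients (epsilon-greedy exploration, an experience replay buffer, and periodic target-network updates) driving SUMO signal phases. The sketch below shows one minimal way these pieces typically fit together via SUMO's TraCI interface; the scenario file name, state definition (halting vehicles per controlled lane), reward, action space, and all hyperparameters are illustrative assumptions, not values taken from the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import traci  # SUMO's TraCI Python bindings (shipped with SUMO)

GAMMA, EPS, BATCH, SYNC_EVERY = 0.95, 0.1, 32, 200  # assumed hyperparameters


def build_net(n_in: int, n_out: int) -> nn.Module:
    """Small fully connected Q-network: state -> one Q-value per phase."""
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))


def get_state(tls_id: str) -> torch.Tensor:
    """Assumed state: halting-vehicle count on each lane the signal controls."""
    lanes = traci.trafficlight.getControlledLanes(tls_id)
    return torch.tensor(
        [traci.lane.getLastStepHaltingNumber(l) for l in lanes],
        dtype=torch.float32)


def main() -> None:
    traci.start(["sumo", "-c", "intersection.sumocfg"])  # hypothetical scenario
    tls_id = traci.trafficlight.getIDList()[0]            # control one signal

    n_actions = 4  # assumed to equal the number of phases in the signal program
    state = get_state(tls_id)
    q_net = build_net(len(state), n_actions)
    target_net = build_net(len(state), n_actions)
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)  # experience replay buffer

    for step in range(1, 5001):
        # Epsilon-greedy choice of the next signal phase.
        if random.random() < EPS:
            action = random.randrange(n_actions)
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax())

        traci.trafficlight.setPhase(tls_id, action)
        traci.simulationStep()

        next_state = get_state(tls_id)
        reward = -float(next_state.sum())  # fewer halted vehicles is better
        replay.append((state, action, reward, next_state))
        state = next_state

        # One gradient step on the TD error over a sampled minibatch.
        if len(replay) >= BATCH:
            batch = random.sample(replay, BATCH)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = r + GAMMA * target_net(s2).max(dim=1).values
            loss = nn.functional.mse_loss(q_sa, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # Periodic target-network synchronisation for training stability.
        if step % SYNC_EVERY == 0:
            target_net.load_state_dict(q_net.state_dict())

    traci.close()


if __name__ == "__main__":
    main()
```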
Deep Adaptive Algorithms for Local Urban Traffic Control: Deep Reinforcement Learning with DQN
04.01.2024
702,440 bytes
Article (Conference)
Electronic resource
English