Traffic congestion is a critical issue in urban areas, contributing to increased travel time, fuel consumption, and environmental pollution. Traditional traffic signal control methods, such as fixed-time systems, cannot adapt to real-time changes in traffic conditions. This paper presents a novel approach using Deep Q-Networks (DQN) to dynamically control traffic signals based on real-time traffic data. Our simulation environment models a four-way intersection, where vehicle densities and wait times are continuously monitored. The DQN agent optimizes green light durations by minimizing vehicle wait times and maximizing traffic throughput. Extensive simulation results demonstrate that our system significantly reduces congestion and improves traffic flow compared to traditional fixed-time systems. Furthermore, the approach shows potential for real-world applications in urban traffic control systems, particularly in smart cities equipped with Internet of Things (IoT) devices.
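The sketch below illustrates the general shape of a DQN agent of the kind the abstract describes: a small Q-network maps an intersection state (e.g. queue lengths and wait times per approach) to discrete green-duration choices, and is trained from a replay buffer on a temporal-difference loss. It is a minimal, generic example, not the authors' implementation; the state and action dimensions and the environment interface are assumptions for illustration.

```python
# Minimal DQN sketch for adaptive signal control (illustrative only; the
# simulation environment, state layout, and action set are assumptions,
# not the paper's actual code).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8    # e.g. queue length and average wait per approach (assumption)
N_ACTIONS = 4    # e.g. candidate green-light durations for the active phase (assumption)
GAMMA = 0.95

class QNetwork(nn.Module):
    """Maps an intersection state vector to Q-values over green-duration actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # stores (state, action, reward, next_state) tuples

def select_action(state, epsilon):
    """Epsilon-greedy selection over candidate green durations."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy_net(torch.as_tensor(state, dtype=torch.float32)).argmax().item()

def train_step(batch_size=64):
    """One gradient step on the TD error of a sampled minibatch.
    The reward stored in the buffer would be, e.g., the negative total
    vehicle wait time accumulated during the chosen green phase."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states, actions, rewards, next_states = map(
        lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch)
    )
    q = policy_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + GAMMA * target_net(next_states).max(1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup, the target network would be periodically synchronized with the policy network, and the agent would be evaluated against a fixed-time baseline on average wait time and throughput, mirroring the comparison reported in the abstract.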
Reinforcement Learning for Adaptive Traffic Signal Control Using Deep Q-Networks
21.02.2025
369169 bytes
Article (Conference paper)
Electronic resource
English
Wiley | 2020