To address the collaborative train timetable optimization problem in large-scale urban rail transit (URT) network operations, this paper proposes an adaptive real-time control framework based on the Soft Actor-Critic (SAC) deep reinforcement learning (DRL) method, featuring flexible train scheduling capabilities. First, by analyzing dynamic passenger travel behavior (e.g., entering/exiting stations, transferring) and train operation events (e.g., dispatching, interstation running, station dwelling), the control problem is modeled as a Markov Decision Process (MDP) and an efficient URT simulation environment is constructed. Then, considering constraints such as train capacity and dispatch intervals, a train scheduling model is developed to minimize both passenger costs and operational costs. Subsequently, the real-time state of the URT system is represented by the number of passengers present at each platform, and the train dispatch intervals on all lines are used as decision variables; a solution algorithm based on the SAC framework is developed. Finally, experimental results on a large-scale URT network comprising 10 lines demonstrate the effectiveness of the proposed framework, which outperforms other reinforcement learning algorithms as well as traditional heuristic optimization algorithms. Compared with the second-best TD3 algorithm, the proposed approach reduces average passenger waiting time by 1.63% (2.09 seconds) while using 49 fewer trains (a 2.97% decrease).
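To illustrate the state/action design described in the abstract, the following is a minimal, hypothetical sketch rather than the authors' implementation: a toy URT dispatch environment whose observation is the number of passengers present at every platform and whose continuous action is the dispatch interval on each of 10 lines, trained with the off-the-shelf SAC implementation from stable-baselines3. All demand dynamics, capacities, horizons, and cost weights below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch only: not the paper's simulation environment or solver.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class ToyURTDispatchEnv(gym.Env):
    """Simplified URT network: n_lines lines, n_platforms platforms per line."""

    def __init__(self, n_lines=10, n_platforms=20, step_minutes=5.0):
        self.n_lines, self.n_platforms = n_lines, n_platforms
        self.step_minutes = step_minutes
        # State: passengers currently present at every platform (flattened).
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(n_lines * n_platforms,), dtype=np.float32)
        # Action: dispatch interval (minutes) per line, bounded to mimic interval constraints.
        self.action_space = spaces.Box(low=2.0, high=10.0, shape=(n_lines,), dtype=np.float32)
        self.train_capacity = 2000.0   # assumed capacity per train
        self.horizon = 216             # assumed number of 5-minute decision steps

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.waiting = self.np_random.uniform(0, 200, size=(self.n_lines, self.n_platforms))
        return self.waiting.astype(np.float32).ravel(), {}

    def step(self, action):
        interval = np.clip(action, self.action_space.low, self.action_space.high)
        # Stand-in for dynamic passenger demand arriving during the decision step.
        self.waiting += self.np_random.uniform(5, 50, size=self.waiting.shape)
        # Trains dispatched on each line during this step, limited by capacity.
        trains = self.step_minutes / interval
        served = np.minimum(self.waiting,
                            (trains * self.train_capacity / self.n_platforms)[:, None])
        self.waiting -= served
        # Reward: negative weighted sum of passenger waiting cost and operation cost.
        passenger_cost = self.waiting.sum() * self.step_minutes
        operation_cost = trains.sum() * 500.0  # assumed cost per dispatched train
        reward = -(passenger_cost + operation_cost) / 1e5
        self.t += 1
        truncated = self.t >= self.horizon
        return self.waiting.astype(np.float32).ravel(), float(reward), False, truncated, {}


if __name__ == "__main__":
    env = ToyURTDispatchEnv()
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)   # tiny training budget, for illustration only
```

In this sketch the SAC agent outputs one continuous dispatch interval per line at each decision step, mirroring the abstract's decision variables, while the reward trades off passenger cost against operational cost; the paper's actual event-driven simulator and cost model are more detailed.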
Soft Actor-Critic Deep Reinforcement Learning for Train Timetable Collaborative Optimization of Large-Scale Urban Rail Transit Network Under Dynamic Demand
IEEE Transactions on Intelligent Transportation Systems; 26(5); 7021-7035
01.05.2025
2827433 bytes
Journal article
Electronic resource
English
Similar items:
Taylor & Francis Verlag | 2023 | Train timetable adjusting method under sudden interruption scene of urban rail transit
Europäisches Patentamt | 2023