Bus bunching, a pervasive phenomenon in public transit systems, significantly undermines passenger satisfaction and operational efficiency. Recent studies on mitigating bus bunching have turned to model-free reinforcement learning (RL) methods for developing holding control strategies, demonstrating superior performance over traditional model-based approaches. However, a systematic evaluation of how different learning configurations affect the overall performance of these methods is still lacking. To this end, this study develops an open-source framework to systematically examine the control performance of various RL approaches, including both value-based and policy-based algorithms, under different configurations using real-world bus data. Our findings reveal that: 1) simpler state representations (e.g., considering only the forward and backward spacings between adjacent buses) may outperform more complex ones that additionally incorporate stop-specific information; 2) actions with discrete holding times achieve results comparable to their continuous counterparts while offering practical advantages for implementation; 3) although multi-agent methods can slightly improve performance, their computational costs can outweigh the benefits in certain scenarios; and 4) RL approaches demonstrate strong generalizability across diverse routes and fleet sizes, suggesting potential for widespread application with minimal retraining. These insights can guide public transit agencies in selecting and implementing practical RL-based control strategies.
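
To illustrate findings 1) and 2) from the abstract, the following minimal Python sketch encodes the state of a controlled bus as its normalized forward and backward spacings and maps that state to a discrete holding time. This is a hypothetical illustration, not the paper's released framework: the holding-time grid, the loop-route geometry, and the greedy stand-in policy are assumptions made for demonstration only.

import numpy as np

# Hypothetical sketch, not the authors' framework: a spacing-only state
# (finding 1) and a discrete holding-time action set (finding 2).

HOLDING_TIMES_S = np.array([0, 30, 60, 90, 120])  # assumed discrete holding options, in seconds


def spacing_state(bus_positions, route_length, bus_idx):
    """Return the normalized (forward, backward) spacings of one bus on a loop route."""
    pos = np.asarray(bus_positions, dtype=float)
    me = pos[bus_idx]
    others = np.delete(pos, bus_idx)
    forward = np.min((others - me) % route_length)    # gap to the nearest bus ahead
    backward = np.min((me - others) % route_length)   # gap to the nearest bus behind
    ideal = route_length / len(pos)                   # equal-spacing target
    return np.array([forward / ideal, backward / ideal])


def holding_action(state):
    """Toy stand-in for a learned policy: hold longer when the forward gap
    is small relative to the backward gap (the bus is catching its leader)."""
    forward, backward = state
    imbalance = np.clip(backward - forward, 0.0, 1.0)
    return int(HOLDING_TIMES_S[int(round(imbalance * (len(HOLDING_TIMES_S) - 1)))])


if __name__ == "__main__":
    positions = [0.0, 1800.0, 2500.0, 6000.0]  # bus positions (m) on a 10 km loop
    s = spacing_state(positions, route_length=10_000.0, bus_idx=2)
    print("state:", s, "-> hold for", holding_action(s), "s")

In a full RL pipeline, this discrete action set would typically serve as the output space of a value-based agent (e.g., DQN), whereas the continuous alternative would be parameterized by a policy-based actor.
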
Reinforcement Learning for Bus Bunching Mitigation: A Systematic Evaluation of Configurations and Performances
IEEE Transactions on Intelligent Transportation Systems; 26(7); 10443-10455
2025-07-01
Article (Journal)
Electronic Resource
English
Similar items:
- A Reinforcement Learning Approach to Streetcar Bunching Control (Online Contents, 2005)
- Reducing Bus Bunching with Asynchronous Multi-Agent Reinforcement Learning (arXiv, 2021)