The large-scale adoption of electric buses (EBs) marks a significant stride in sustainable development strategies aimed at environmental conservation. A critical concern for bus companies is reducing the operational costs of charging these vehicles. The task is particularly challenging due to uncertainties in travel time, energy consumption, and fluctuating electricity prices, compounded by the constraints of limited charging infrastructure. This paper tackles these complexities by leveraging Deep Reinforcement Learning (DRL), which learns directly from environmental interactions without relying on pre-defined models. We formulate two augmented Markov Decision Processes (MDPs) and propose a novel Hierarchical Deep Reinforcement Learning (HDRL) algorithm called Double Actor-Critic Multi-Agent Proximal Policy Optimization (DAC-MAPPO), which improves learning efficiency and convergence speed by integrating the MAPPO algorithm into the DAC architecture. Specifically, a centralized high-level agent makes charger allocation decisions, while multiple decentralized low-level agents determine the charging power for each EB at every time step. Experimental evaluations on real-world data demonstrate the superior performance and effectiveness of DAC-MAPPO.
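To illustrate the two-level decision structure the abstract describes, the following is a minimal Python sketch, assuming a toy fleet with placeholder random policies standing in for the learned MAPPO policies. All class names, the fleet size, the discrete power levels, the 15-minute step, and the 200 kWh battery figure are illustrative assumptions, not details from the paper; only the control flow (centralized charger allocation, then per-bus power selection) reflects the described architecture.

```python
import random

# Hypothetical sketch of the hierarchical decision loop: a centralized
# high-level agent assigns scarce chargers, then one decentralized
# low-level agent per bus picks a charging power. Random choices below
# are placeholders for the policies DAC-MAPPO would learn.

NUM_BUSES = 4
NUM_CHARGERS = 2
POWER_LEVELS_KW = [0.0, 30.0, 60.0]   # assumed discrete charging-power actions
STEP_HOURS = 0.25                     # assumed 15-minute scheduling interval
BATTERY_KWH = 200.0                   # assumed battery pack capacity

class HighLevelAgent:
    """Centralized agent: allocates the limited chargers among waiting buses."""
    def allocate(self, waiting_buses):
        k = min(NUM_CHARGERS, len(waiting_buses))
        chosen = random.sample(waiting_buses, k)   # placeholder for learned policy
        return {bus: charger for charger, bus in enumerate(chosen)}

class LowLevelAgent:
    """Decentralized agent: selects charging power for one plugged-in bus."""
    def choose_power(self, soc, price):
        return random.choice(POWER_LEVELS_KW)      # placeholder for learned policy

high_level = HighLevelAgent()
low_level = [LowLevelAgent() for _ in range(NUM_BUSES)]
socs = [0.4, 0.6, 0.3, 0.8]                        # initial state of charge per bus

for t in range(3):                                 # a few scheduling time steps
    price = 0.12                                   # assumed flat electricity price
    waiting = [b for b in range(NUM_BUSES) if socs[b] < 1.0]
    allocation = high_level.allocate(waiting)      # high-level: charger assignment
    for bus, charger in allocation.items():
        power = low_level[bus].choose_power(socs[bus], price)  # low-level action
        socs[bus] = min(1.0, socs[bus] + power * STEP_HOURS / BATTERY_KWH)
        print(f"t={t} bus={bus} charger={charger} power={power:.0f}kW soc={socs[bus]:.2f}")
```

In the paper both levels are trained jointly under the DAC architecture with MAPPO; here they are random stand-ins meant only to show how the centralized and decentralized decisions interleave at each time step.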
Hierarchical Deep Reinforcement Learning for Charging Scheduling of Electric Buses with Uncertainties
24.09.2024
Conference paper
Electronic resource
English
Charging Scheduling Systems and Methods Thereof for Electric Buses
European Patent Office | 2022
Charging scheduling systems and methods thereof for electric buses
European Patent Office | 2023
Scheduling and Balancing of Electric Buses’ Charging Operations in Public Transportation
Springer Verlag | 2021