The application of deep reinforcement learning (DRL) techniques in intelligent transportation systems has garnered significant attention. In this field, reward function design is a crucial factor for DRL performance. Current research predominantly relies on a trial-and-error approach for designing reward functions, lacking mathematical support and necessitating extensive empirical experimentation. Our research uses vehicle velocity control as a case study, builds training and test sets, and develops a DRL framework for speed control. This framework examines both single-objective and multi-objective optimization in reward function design. In single-objective optimization, we introduce “expected optimal velocity” as an optimization objective and analyze how different reward functions affect performance, providing a mathematical perspective on optimizing reward functions. In multi-objective optimization, we propose a reward function design paradigm and validate its effectiveness. Our findings offer a versatile framework and theoretical guidance for developing and optimizing reward functions in DRL, particularly for intelligent transportation systems.
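To make the notion of competing reward-function designs concrete, the sketch below shows generic candidate rewards for velocity control: single-objective shapes built around an assumed "expected optimal velocity" and a weighted multi-objective form. All function names, weights, and the target velocity are illustrative assumptions, not the formulations used in the paper.

```python
# Illustrative sketch only: candidate reward shapes for DRL velocity control.
# V_OPT, the weights, and the function forms are assumptions for illustration,
# not the paper's exact reward functions.
import numpy as np

V_OPT = 15.0  # assumed "expected optimal velocity" in m/s (illustrative value)

def reward_linear(v, v_opt=V_OPT):
    """Single-objective: negative absolute deviation from the optimal velocity."""
    return -abs(v - v_opt)

def reward_quadratic(v, v_opt=V_OPT):
    """Single-objective: negative squared deviation, penalizing large errors more."""
    return -(v - v_opt) ** 2

def reward_gaussian(v, v_opt=V_OPT, sigma=2.0):
    """Single-objective: bounded bell-shaped reward peaking at the optimal velocity."""
    return float(np.exp(-((v - v_opt) ** 2) / (2.0 * sigma ** 2)))

def reward_multi_objective(v, accel, energy, v_opt=V_OPT,
                           w_speed=1.0, w_comfort=0.1, w_energy=0.05):
    """Multi-objective: weighted sum of speed-tracking, comfort, and energy terms."""
    return (-w_speed * abs(v - v_opt)
            - w_comfort * accel ** 2
            - w_energy * energy)

if __name__ == "__main__":
    # Compare how the single-objective shapes score a few velocities.
    for v in (10.0, 15.0, 20.0):
        print(v, reward_linear(v), reward_quadratic(v), round(reward_gaussian(v), 3))
```

The shapes differ in how sharply they penalize deviation from the target velocity and whether the reward is bounded; comparing such variants is the kind of analysis the single-objective part of the study addresses, while the weighted sum illustrates one common multi-objective pattern.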
Exploring the design of reward functions in deep reinforcement learning-based vehicle velocity control algorithms
Y. HE ET AL.
TRANSPORTATION LETTERS
Transportation Letters; 16(10): 1338-1352
25.11.2024
15 pages
Article (Journal)
Electronic resource
English