Eco-driving control for connected and automated vehicles (CAVs) aims to co-optimize energy efficiency, ride comfort, and travel time while adhering to safety regulations. Model-based eco-driving strategies have proven robust and effective in simplified traffic scenarios, but their application to complex tasks incurs high computational costs because they rely on precise nonlinear models that accurately reflect real-world physical systems. Model-free deep reinforcement learning (DRL) methods show promise in handling the high-dimensional state/action spaces encountered in real-time CAV control; however, they require extensive training data and time, and are prone to converging to suboptimal solutions, especially in complex urban traffic scenarios. To leverage the advantages of both model-based controllers and DRL algorithms, this study develops a novel model-based controller online-assisted twin delayed deep deterministic policy gradient (MCOA-TD3) algorithm. The proposed algorithm integrates imitation learning into the vanilla TD3 agent: during training, the MCOA-TD3 agent learns from demonstrations generated by a model predictive control (MPC)-based expert controller. The performance of the proposed strategy is evaluated through simulations conducted in a dynamic traffic scenario replicating the test field in Hamburg, Germany. The results show that our proposed strategy improves energy efficiency and ride comfort while maintaining driving times comparable to the vanilla TD3 strategy. Notably, compared with the vanilla TD3 strategy, our proposed strategy demonstrates superior adaptability and online fine-tuning capability, making it better suited to complex and dynamic real-world scenarios.
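
The abstract gives no implementation details, so the following is a minimal, hypothetical sketch of the core idea it describes: a TD3-style actor update augmented with a behavior-cloning (imitation) term computed on demonstrations from an MPC-based expert controller. The state/action dimensions, network sizes, and the bc_weight parameter are illustrative assumptions, not taken from the paper.

# Minimal sketch (not the authors' code): TD3 actor update plus an
# imitation term on MPC expert demonstrations.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 1  # assumed sizes for a longitudinal-control task

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

def actor_update(rl_states, demo_states, demo_actions, bc_weight=0.5):
    """One actor step: deterministic policy gradient plus behavior cloning.

    rl_states: states sampled from the agent's own replay buffer.
    demo_states, demo_actions: (state, action) pairs generated by the
        MPC-based expert controller (the demonstrations).
    """
    # Standard TD3 actor objective: maximize Q(s, pi(s)).
    rl_loss = -critic(torch.cat([rl_states, actor(rl_states)], dim=1)).mean()
    # Imitation term: pull the policy toward the expert's actions.
    bc_loss = nn.functional.mse_loss(actor(demo_states), demo_actions)
    loss = rl_loss + bc_weight * bc_loss
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()

In a full implementation the critic would be trained with TD3's clipped double-Q targets, and bc_weight would typically be annealed during training so the learned policy can eventually outperform the expert; the paper's actual loss formulation and scheduling are not given in this record.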


    Title:

    Intelligent Eco-Driving Control for Urban CAVs Using a Model-Based Controller Assisted Deep Reinforcement Learning


    Contributors:
    Li, Jie (author) / Wu, Xiaodong (author) / Bai, Xianxu (author) / Liu, Yonggang (author) / Xu, Min (author)


    Publication date:

    01.06.2025


    Format / Extent:

    4093888 bytes




    Media type:

    Article (journal)


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Simulated CAVs Driving and Characteristics of the Mixed Traffic Using Reinforcement Learning Method

    Guo, Jingqiu / Liu, Yangzexi / Fang, Shouen | Springer Verlag | 2019


    Integrated Routing and Traffic Signal Control for CAVs via Reinforcement Learning Approach

    Park, Jiho / Zhang, Guohui / Wang, Chieh et al. | IEEE | 2024


    Deep Double Q-Learning Method for CAVs Traffic Signal Control

    Zhao, Chunxia / Lin, Peiqun / Liu, QingChao et al. | SAE Technical Papers | 2020


    Virtual Platoon based CAVs Cooperative Driving at Unsignalized Intersection

    Cong, Xiangyue / Yang, Bo / Gao, Fengkun et al. | IEEE | 2022


    CAVS 2019 Reviewers

    IEEE | 2019

    Free access