This paper proposes a cooperative on-ramp merging control strategy for connected and automated vehicles (CAVs) based on distributed multi-agent deep deterministic policy gradient (MADDPG). First, the on-ramp merging scenario and vehicle model are built, accounting for safe merging distances and acceleration limits. Second, MADDPG is adopted to learn the cooperative control strategy, considering rear-end safety, lateral safety, and vehicle energy consumption, and a distributed architecture is proposed to improve training efficiency. Finally, several on-ramp merging scenarios are simulated. The results show that the distributed MADDPG merging strategy reduces energy consumption by 7.4% and travel time by 5.3% compared with the regular merging strategy.
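
As a rough illustration of the learning setup described in the abstract, the sketch below outlines an MADDPG-style agent: a decentralized actor per CAV, a centralized critic over the joint observations and actions, and a reward that trades off rear-end safety, lateral safety, and energy use. The number of agents, network sizes, observation/action dimensions, gap thresholds, and reward weights are illustrative assumptions, not the authors' implementation.

    # Minimal MADDPG-style sketch for a cooperative on-ramp merging setting.
    # All names, dimensions, thresholds, and weights below are assumptions for
    # illustration only; they are not taken from the paper.
    import torch
    import torch.nn as nn

    N_AGENTS = 3   # hypothetical number of cooperating CAVs near the merge point
    OBS_DIM = 6    # e.g. position, speed, gap to leader, distance to merge point, ...
    ACT_DIM = 1    # longitudinal acceleration command

    class Actor(nn.Module):
        """Decentralized policy: maps one CAV's local observation to an acceleration."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(OBS_DIM, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, ACT_DIM), nn.Tanh(),  # rescaled later to the acceleration limits
            )
        def forward(self, obs):
            return self.net(obs)

    class CentralCritic(nn.Module):
        """Centralized critic: scores the joint observation-action of all CAVs."""
        def __init__(self):
            super().__init__()
            joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
            self.net = nn.Sequential(
                nn.Linear(joint_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )
        def forward(self, joint_obs, joint_act):
            return self.net(torch.cat([joint_obs, joint_act], dim=-1))

    def merging_reward(rear_gap, lateral_gap, energy, w=(1.0, 1.0, 0.1)):
        """Illustrative reward combining rear-end safety, lateral safety, and energy use."""
        rear_penalty = -1.0 if rear_gap < 10.0 else 0.0     # following gap too small
        lateral_penalty = -1.0 if lateral_gap < 2.0 else 0.0  # unsafe lateral spacing at merge
        return w[0] * rear_penalty + w[1] * lateral_penalty - w[2] * energy

In this kind of setup, each CAV executes its actor from local observations, while the critic sees all agents' observations and actions during training only; the distributed architecture mentioned in the abstract concerns how this training is organized for efficiency and is not shown in the sketch.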


    Title: Cooperative On-Ramp Merging Control of Connected and Automated Vehicles: Distributed Multi-Agent Deep Reinforcement Learning Approach

    Contributors:

    Publication date: 2022-10-08

    Size: 886485 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English




    Cooperative Ramp Merging Control for Connected and Automated Vehicles

    Tianchuang, Meng / Biao, Xu / Xiaohui, Qin et al. | British Library Conference Proceedings | 2020


    Cooperative Ramp Merging Control for Connected and Automated Vehicles

    Manjiang, Hu / Jin, Huang / Tianchuang, Meng et al. | SAE Technical Papers | 2020