This paper proposes a cooperative merging control strategy for connected and automated vehicles (CAVs) using distributed multi-agent deep deterministic policy gradient (MADDPG). First, the on-ramp merging scenario and vehicle model are built, considering safe merging distances and acceleration limits. Second, MADDPG is adopted to learn the cooperative control strategy, accounting for rear-end safety, lateral safety, and vehicle energy consumption, and a distributed architecture is proposed to improve training efficiency. Finally, several on-ramp merging scenarios are simulated. Simulation results show that the distributed MADDPG merging strategy reduces energy consumption by 7.4% and travel time by 5.3% compared with the regular merging strategy.
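
The sketch below illustrates the general MADDPG structure the abstract refers to: decentralized actors that act on local observations, and centralized critics that score the joint observation-action during training. The two-agent setup, observation/action dimensions, network sizes, and the single dummy update step are illustrative assumptions and are not taken from the paper.

```python
# Minimal MADDPG skeleton (centralized critics, decentralized actors).
# All dimensions and the two-CAV setup are assumptions for illustration;
# critic updates, replay buffer, and target networks are omitted.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 2, 6, 1  # assumed: 2 CAVs, longitudinal acceleration command

class Actor(nn.Module):
    """Maps one agent's local observation to a bounded acceleration command."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Scores the joint observation-action of all agents (centralized training)."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critics = [CentralCritic() for _ in range(N_AGENTS)]
actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]

# One (dummy) actor update: each agent improves its own action against its
# centralized critic while the other agents' actions are held fixed.
batch_obs = torch.randn(32, N_AGENTS, OBS_DIM)  # placeholder batch of local observations
for i, (actor, critic, opt) in enumerate(zip(actors, critics, actor_opts)):
    acts = [actors[j](batch_obs[:, j]).detach() for j in range(N_AGENTS)]
    acts[i] = actor(batch_obs[:, i])                       # keep gradient for agent i only
    q = critic(batch_obs.flatten(1), torch.cat(acts, -1))  # Q(o_1..o_N, a_1..a_N)
    loss = -q.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this arrangement the critics are only needed during training; at execution time each vehicle runs its own actor on local information, which is what makes a distributed deployment possible.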



    Title:

    Cooperative On-Ramp Merging Control of Connected and Automated Vehicles: Distributed Multi-Agent Deep Reinforcement Learning Approach


    Contributors:
    Zhou, Shanxing (author) / Zhuang, Weichao (author) / Yin, Guodong (author) / Liu, Haoji (author) / Qiu, Chunlong (author)


    Publication date:

    2022-10-08


    Format / Extent:

    886485 bytes




    Media type:

    Article (conference)


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Cooperative Ramp Merging Control for Connected and Automated Vehicles

    Manjiang, Hu / Jin, Huang / Tianchuang, Meng et al. | SAE Technical Papers | 2020


    Cooperative Ramp Merging Control for Connected and Automated Vehicles

    Tianchuang, Meng / Biao, Xu / Xiaohui, Qin et al. | British Library Conference Proceedings | 2020


    A Deep Reinforcement Learning Approach for Automated On-Ramp Merging

    Zhao, Ruibin / Sun, Zhanbo / Ji, Ang | IEEE | 2022