With the rapid development of connected automated vehicles (CAVs), trajectory control of CAVs has become a focus in traffic engineering. This paper proposes a distributed deep reinforcement learning-based longitudinal control strategy for CAVs that combines an attention mechanism to enhance the stability of mixed traffic, car-following efficiency, energy efficiency, and safety. A longitudinal control strategy is built on a deep reinforcement learning model, in which the CAVs gradually learn an optimal car-following strategy during training to improve safety, stability, fuel economy, mobility, and driving comfort. To further capture the interactions among the vehicles in each platoon, a graph attention network is introduced to facilitate the car-following control strategy. To verify the effectiveness of the proposed method, a comparative analysis is conducted; the results indicate that the proposed method can dramatically dampen oscillations, enhance traffic efficiency, reduce fuel consumption, and improve driving safety under different scenarios.
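
The article's own architecture is not reproduced here; as a rough illustration of the idea summarized in the abstract, the minimal PyTorch sketch below shows how a single graph-attention layer could aggregate platoon-level interactions before a deep-reinforcement-learning actor head outputs a longitudinal acceleration for the ego CAV. The class name PlatoonGATActor, the three-dimensional per-vehicle observation (gap, relative speed, ego speed), the acceleration bound a_max, and the choice of index 0 as the ego vehicle are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the authors' implementation): a graph-attention layer
    # aggregates platoon-level interactions, then an actor head outputs a
    # bounded longitudinal acceleration for the ego CAV.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PlatoonGATActor(nn.Module):
        def __init__(self, state_dim=3, hidden_dim=32, a_max=3.0):
            super().__init__()
            self.a_max = a_max                                   # assumed acceleration bound (m/s^2)
            self.embed = nn.Linear(state_dim, hidden_dim)        # per-vehicle state embedding
            self.attn = nn.Linear(2 * hidden_dim, 1)             # pairwise attention score
            self.policy = nn.Sequential(                         # actor head for the ego CAV
                nn.Linear(2 * hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
                nn.Tanh(),                                       # bounded action in [-1, 1]
            )

        def forward(self, states, adjacency):
            # states:    (N, state_dim) observations of the N vehicles in the platoon
            # adjacency: (N, N) binary mask, 1 where vehicle j influences vehicle i
            h = F.relu(self.embed(states))                       # (N, hidden)
            n = h.size(0)
            pairs = torch.cat(
                [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)], dim=-1
            )                                                    # (N, N, 2*hidden) feature pairs
            scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)  # (N, N) attention logits
            scores = scores.masked_fill(adjacency == 0, float("-inf"))
            alpha = torch.softmax(scores, dim=-1)                # attention weights per vehicle
            context = alpha @ h                                  # neighbour-aggregated features
            ego = torch.cat([h[0], context[0]])                  # assume index 0 is the ego CAV
            return self.a_max * self.policy(ego)                 # acceleration command (m/s^2)

    # Usage: a three-vehicle platoon; the ego CAV (index 0) attends to its predecessors.
    states = torch.tensor([[20.0, -1.5, 25.0], [18.0, 0.5, 26.0], [22.0, 0.0, 25.5]])
    adjacency = torch.tensor([[1, 1, 1], [0, 1, 1], [0, 0, 1]])
    print(PlatoonGATActor()(states, adjacency))

In a distributed setting, each CAV would run such an actor on its local observations of neighbouring vehicles, which is consistent with the per-platoon attention aggregation described in the abstract; the exact state, reward, and training algorithm used by the authors are given in the full article.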





    Title:

    A distributed deep reinforcement learning-based longitudinal control strategy for connected automated vehicles combining attention mechanism



    Contributors:
    Liu, Chunyu (author) / Sheng, Zihao (author) / Li, Pei (author) / Chen, Sikai (author) / Luo, Xia (author) / Ran, Bin (author)

    Published in:

    Transportation Letters, 17(2), 183-199


    Publication date:

    2025-02-07


    Size:

    17 pages




    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English