Slow convergence is a thorny problem for current end-to-end autonomous driving paradigms, which must handle a variety of traffic elements and tasks. In this paper, we propose FEN-DQN, an end-to-end autonomous driving framework that simplifies the problem by using a feature extraction network (FEN) with explicit affordances, together with associated driving measurements such as vehicle speed and position. FEN-DQN can be divided into two parts. First, the FEN maps forward-looking camera images to explicit affordances, which represent traffic information in a low-dimensional form. Second, a deep Q-network (DQN) maps the explicit affordances to vehicle actions. Based on the CARLA simulator, we use OpenAI Gym to construct a simulation scenario at traffic intersections to evaluate the proposed framework. We also conduct comparative experiments with different inputs to demonstrate the effectiveness of our framework. The results show that, with the assistance of the FEN, FEN-DQN converges faster and performs better than the alternative inputs at traffic intersections in the simulation scenario.
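The abstract describes a two-stage pipeline: a perception network that compresses camera images into a few explicit affordances, followed by a Q-network that combines those affordances with driving measurements to pick an action. The following is a minimal sketch of that structure, assuming a PyTorch implementation; the layer sizes, the number of affordances, the choice of measurements, and the discrete action set are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class FEN(nn.Module):
    """Feature extraction network: camera image -> explicit affordances (low-dimensional)."""
    def __init__(self, num_affordances=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_affordances)

    def forward(self, image):
        return self.head(self.backbone(image))

class DQN(nn.Module):
    """Q-network: affordances + driving measurements -> Q-values over discrete actions."""
    def __init__(self, num_affordances=6, num_measurements=2, num_actions=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_affordances + num_measurements, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, affordances, measurements):
        return self.net(torch.cat([affordances, measurements], dim=-1))

# Greedy action selection at inference time (hypothetical shapes and values).
fen, dqn = FEN(), DQN()
image = torch.zeros(1, 3, 128, 128)         # forward-looking camera frame
measurements = torch.tensor([[5.0, 0.1]])   # e.g. vehicle speed and lateral position
action = dqn(fen(image), measurements).argmax(dim=-1)
```

In this reading, the FEN acts as a fixed, interpretable bottleneck, so the DQN learns over a small state vector instead of raw pixels, which is consistent with the faster convergence the abstract reports.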


    Title:

    FEN-DQN: An End-to-End Autonomous Driving Framework Based on Reinforcement Learning with Explicit Affordance


    Contributors:
    Bai, Yulong (Author) / Du, Jiatong (Author) / Zhang, Yuanjian (Author) / Huang, Yanjun (Author)


    Publication date:

    27.10.2023


    Format / Extent:

    2058468 bytes




    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English






    VISION-BASED SAMPLE-EFFICIENT REINFORCEMENT LEARNING FRAMEWORK FOR AUTONOMOUS DRIVING

    CHIANG SU-HUI / LIU MING-CHANG | European Patent Office | 2019

    Free access
