Urban autonomous driving decision making is challenging due to complex road geometry and multi-agent interactions. Current decision-making methods mostly rely on manually designed driving policies, which can yield suboptimal solutions and are expensive to develop, generalize, and maintain at scale. With reinforcement learning (RL), by contrast, a policy can be learned and improved automatically without manual design. However, current RL methods generally do not work well in complex urban scenarios. In this paper, we propose a framework that enables model-free deep reinforcement learning in challenging urban autonomous driving scenarios. We design a specific input representation and use visual encoding to capture low-dimensional latent states. Several state-of-the-art model-free deep RL algorithms are implemented in our framework, along with several techniques to improve their performance. We evaluate our method on a challenging roundabout task with dense surrounding vehicles in a high-definition driving simulator. The results show that our method solves the task well and significantly outperforms the baseline.
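
The pipeline described in the abstract (an image-like input representation, a visual encoder that compresses it into a low-dimensional latent state, and a model-free RL policy acting on that latent) can be pictured with a short PyTorch sketch. This is a minimal illustration under assumed details, not the paper's implementation: the layer sizes, the 64x64 bird's-eye-view resolution, and the two-dimensional continuous action (for example steering and throttle) are all assumptions.

# Minimal sketch of "visual encoding -> low-dimensional latent state -> RL policy".
# All module names, layer sizes, and the 64x64 input resolution are illustrative
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class VisualEncoder(nn.Module):
    """Convolutional encoder: image-like observation -> low-dimensional latent state."""

    def __init__(self, in_channels: int = 3, latent_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size for an assumed 64x64 input.
        with torch.no_grad():
            n_flat = self.conv(torch.zeros(1, in_channels, 64, 64)).shape[1]
        self.fc = nn.Linear(n_flat, latent_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.fc(self.conv(obs)))


class PolicyHead(nn.Module):
    """Maps the latent state to continuous driving actions (e.g. steering, throttle)."""

    def __init__(self, latent_dim: int = 64, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.net(latent)


if __name__ == "__main__":
    encoder, policy = VisualEncoder(), PolicyHead()
    obs = torch.rand(1, 3, 64, 64)   # placeholder bird's-eye-view style observation
    action = policy(encoder(obs))    # observation -> latent state -> action
    print(action.shape)              # torch.Size([1, 2])

In an actual training loop, the encoder and policy head would be optimized by whichever model-free algorithm is plugged into the framework, with the latent state standing in for the raw observation.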


    Title :

    Model-free Deep Reinforcement Learning for Urban Autonomous Driving


    Contributors :


    Publication date :

    2019-10-01


    Size :

    3177920 bytes




    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English



    Similar titles :

    Autonomous Driving using Deep Reinforcement Learning in Urban Environment

    Hashim Shakil Ansari / Goutam R | BASE | 2019

    Free access

    Autonomous Driving with Deep Reinforcement Learning

    Zhu, Yuhua / Technische Universität Dresden | SLUB | 2023




    Interpretable End-to-End Urban Autonomous Driving With Latent Deep Reinforcement Learning

    Chen, Jianyu / Li, Shengbo Eben / Tomizuka, Masayoshi | IEEE | 2022