Highlights Reinforcement learning (RL) can learn from past failures and has the potential to provide self-improvement and higher-level intelligence. However, current RL algorithms still face reliability challenges, especially compared with the rule/model-based algorithms that are pre-engineered, human-input intensive, but widely used in autonomous vehicles. This work designs a decision-making framework that leverages RL while using an existing rule-based policy as its performance lower bound, named trustworthy improvement RL (TiRL). The basic idea is to activate the RL policy only in cases where RL has learned a better policy than the existing rule-based policy, i.e., one with a higher policy value. This work proves that the final TiRL policy can outperform the existing rule-based policy. The TiRL framework is evaluated in a highway-driving environment with more than 42,000 km of driving. The results show that TiRL outperforms an arbitrarily given rule-based driving policy, indicating that TiRL retains the potential for self-learning while guaranteeing better system performance than the integrated rule-based policy.

    Abstract Reinforcement learning (RL) can learn from past failures and has the potential to provide self-improvement and higher-level intelligence. However, current RL algorithms still face reliability challenges, especially compared with the rule/model-based algorithms that are pre-engineered, human-input intensive, but widely used in autonomous vehicles. To take advantage of both RL and rule-based algorithms, this work designs a decision-making framework that leverages RL while using an existing rule-based policy as its performance lower bound. In this way, the final policy retains the potential for self-learning while guaranteeing better system performance than the integrated rule-based policy. This decision-making framework is called trustworthy improvement RL (TiRL). The basic idea is to have the RL policy iteration process synchronously estimate the value function of the given rule-based policy. The autonomous vehicle then uses the RL policy to drive only in cases where RL has learned a better policy, i.e., one with a higher policy value. This work takes safe highway driving as the case study. The results are obtained from more than 42,000 km of driving in stochastic simulated traffic calibrated with naturalistic driving data. The TiRL planner is given two typical rule-based highway-driving policies for comparison. The results show that TiRL can outperform an arbitrarily given rule-based driving policy. In summary, the proposed TiRL can leverage the learning-based method in stochastic and emergent scenarios while achieving a trustworthy safety improvement over the existing rule-based policies.
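
    The gating idea described in the abstract can be illustrated with a minimal Python sketch. It is not the authors' implementation: the estimators q_rl and v_rule, the rule_policy callback, the optional margin, and the toy states are illustrative assumptions standing in for the learned action-value function, the synchronously estimated value of the rule-based policy, and the rule-based planner.

    import numpy as np

    class TiRLGate:
        """Illustrative value-based gate: act with the RL policy only when its
        learned value estimate exceeds the estimated value of the rule-based policy."""

        def __init__(self, q_rl, v_rule, rule_policy, margin=0.0):
            self.q_rl = q_rl                # state -> array of RL action values (assumed learned)
            self.v_rule = v_rule            # state -> scalar value of the rule-based policy (assumed estimated)
            self.rule_policy = rule_policy  # state -> action chosen by the rule-based planner
            self.margin = margin            # optional margin required before trusting the RL policy

        def act(self, state):
            q_values = self.q_rl(state)
            rl_value = float(np.max(q_values))
            rule_value = float(self.v_rule(state))
            if rl_value > rule_value + self.margin:
                return int(np.argmax(q_values))  # RL has learned a better policy here
            return self.rule_policy(state)       # otherwise fall back to the rule-based policy

    # Toy usage with hand-crafted estimators (purely illustrative numbers).
    gate = TiRLGate(
        q_rl=lambda s: np.array([0.2, 0.9]) if s == "gap_ahead" else np.array([0.1, 0.0]),
        v_rule=lambda s: 0.5,
        rule_policy=lambda s: 0,
    )
    print(gate.act("gap_ahead"))  # -> 1: RL action, its value 0.9 beats the rule value 0.5
    print(gate.act("dense"))      # -> 0: rule-based action, the RL value 0.1 is not better

    In this sketch the rule-based policy is always available as the fall-back whenever the learned values do not beat its estimated value, which is the sense in which it serves as a performance lower bound.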





    Title:

    Trustworthy safety improvement for autonomous driving using reinforcement learning


    Contributors:
    Cao, Zhong (author) / Xu, Shaobing (author) / Jiao, Xinyu (author) / Peng, Huei (author) / Yang, Diange (author)


    Publication date:

    2022-03-17




    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English




    Deep Reinforcement Learning with Enhanced Safety for Autonomous Highway Driving

    Baheri, Ali / Nageshrao, Subramanya / Tseng, H. Eric et al. | IEEE | 2020


    Autonomous Driving with Deep Reinforcement Learning

    Zhu, Yuhua / Technische Universität Dresden | SLUB | 2023




    Autonomous Driving using Deep Reinforcement Learning in Urban Environment

    Hashim Shakil Ansari / Goutam R | BASE | 2019
