Deep Reinforcement Learning (DRL) has been successfully applied to learn policies in simulation for safety-critical systems with unknown model dynamics. DRL controllers, however, while optimal with respect to reward, provide no safety or stability guarantees. When model information is available, safety conditions can be encoded as Control Barrier Functions (CBFs) and performance objectives as Control Lyapunov Functions (CLFs) within real-time optimization-based controllers. In this work, we combine model-free RL with model-based controllers to establish safety and stability. We first design CLF-CBF Quadratic Programs (QPs) for different driving manoeuvres on nominal vehicle dynamics. Reinforcement Learning (RL) agents are then trained to learn policies for the actual vehicle with enhanced dynamics. To incorporate safety and stability while retaining optimal behaviour, we selectively guide the RL agents using the CLF-CBF QPs, yielding safe and stable (S2RL) policies. We empirically validate the proposed methodology on different driving manoeuvres.
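For background, a minimal sketch of the standard pointwise relaxed CLF-CBF QP (in the style of Ames et al.) for control-affine dynamics $\dot{x} = f(x) + g(x)u$; the cost weights $H$, $p$ and the rates $\lambda$, $\gamma$ below are illustrative assumptions, not values taken from this paper:

```latex
% Standard relaxed CLF-CBF QP; H, p, lambda, gamma are illustrative
% design parameters, not the paper's values.
\begin{aligned}
u^{*}(x) \;=\; \operatorname*{arg\,min}_{(u,\,\delta)} \quad
  & \tfrac{1}{2}\, u^{\top} H u \;+\; p\,\delta^{2} \\
\text{s.t.} \quad
  & L_f V(x) + L_g V(x)\,u + \lambda V(x) \;\le\; \delta
      && \text{(CLF: stability, relaxed by } \delta\text{)} \\
  & L_f h(x) + L_g h(x)\,u + \gamma h(x) \;\ge\; 0
      && \text{(CBF: safety, } h(x)\ge 0 \text{ defines the safe set)} \\
  & u_{\min} \;\le\; u \;\le\; u_{\max}
      && \text{(actuation limits)}
\end{aligned}
```

The slack $\delta$ relaxes the stability constraint so the QP stays feasible when safety and tracking conflict; the hard CBF constraint keeps the state inside the safe set.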


Title: Safe and Stable RL (S2RL) Driving Policies Using Control Barrier and Control Lyapunov Functions
Contributors:
Published in:
Publication date: 2023-02-01
Size: 2,698,770 bytes
Type of media: Article (Journal)
Type of material: Electronic Resource
Language: English