In this paper, we address the chance-constrained safe Reinforcement Learning (RL) problem using function approximators based on Stochastic Model Predictive Control (SMPC) and Distributionally Robust Model Predictive Control (DRMPC). We use Conditional Value at Risk (CVaR) to measure the probability of constraint violation and, hence, safety. To obtain a policy that is safe by construction, we first propose using a parameterized nonlinear DRMPC scheme at each time step. DRMPC optimizes a finite-horizon cost function subject to the worst-case constraint violation over an ambiguity set. As the ambiguity set, we use a statistical ball around the empirical distribution whose radius is measured by the Wasserstein metric. Unlike sample-average-approximation SMPC, DRMPC provides a probabilistic guarantee on the out-of-sample risk and requires fewer disturbance samples. Q-learning is then used to optimize the DRMPC parameters so as to achieve the best closed-loop performance. Wheeled Mobile Robot (WMR) path planning with obstacle avoidance is considered to illustrate the efficiency of the proposed method.
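For orientation, the following is a minimal sketch of the Wasserstein ambiguity set and the CVaR-based chance-constraint reformulation that this class of DRMPC builds on; the symbols N, ε, β, and the constraint function h are generic illustrations, not notation taken from the paper itself.

\[
\hat{\mathbb{P}}_N = \frac{1}{N}\sum_{i=1}^{N} \delta_{\hat{w}^{(i)}}, \qquad
\mathcal{P} = \left\{ \mathbb{P} \,:\, W\big(\mathbb{P}, \hat{\mathbb{P}}_N\big) \le \varepsilon \right\},
\]

where \(\hat{w}^{(1)},\dots,\hat{w}^{(N)}\) are observed disturbance samples and \(W\) denotes the Wasserstein metric. The chance constraint \(\mathbb{P}[h(x) \le 0] \ge 1-\beta\) is then enforced through its CVaR upper bound, robustified over the ambiguity set:

\[
\sup_{\mathbb{P} \in \mathcal{P}} \mathrm{CVaR}_{\beta}^{\mathbb{P}}\big[h(x)\big] \le 0, \qquad
\mathrm{CVaR}_{\beta}^{\mathbb{P}}[h] = \min_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{\beta}\, \mathbb{E}^{\mathbb{P}}\big[\max(h - t,\, 0)\big] \Big\}.
\]

Because CVaR upper-bounds the Value at Risk at the same level, satisfying this constraint implies the chance constraint holds for every distribution in the ball, including the unknown true one whenever it lies within radius \(\varepsilon\). The Q-learning step that tunes the DRMPC parameters \(\theta\) can be sketched similarly (again a schematic of the standard MPC-based Q-learning update, not necessarily the paper's exact rule):

\[
\delta = \ell(s,a) + \gamma \min_{a'} Q_\theta(s',a') - Q_\theta(s,a), \qquad
\theta \leftarrow \theta + \alpha\, \delta\, \nabla_\theta Q_\theta(s,a),
\]

where \(Q_\theta(s,a)\) is the optimal value of the parameterized DRMPC problem with its first input fixed to \(a\), \(\ell\) is the stage cost, and \(\alpha\) is a learning rate.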


    Title:

    Safe Reinforcement Learning Using Wasserstein Distributionally Robust MPC and Chance Constraint


    Contributors:

    Kordabad, A. B.; Wisniewski, R.; Gros, S.

    Publication date:

    2022-01-01


    Notes:

    Kordabad, A. B., Wisniewski, R. & Gros, S. 2022, 'Safe Reinforcement Learning Using Wasserstein Distributionally Robust MPC and Chance Constraint', IEEE Access, vol. 10, art. no. 9982609, pp. 130058-130067. https://doi.org/10.1109/ACCESS.2022.3228922



    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629