We investigate the problem of risk-averse robot path planning from the perspectives of deep reinforcement learning and distributionally robust optimization. We model the robot as a stochastic linear dynamical system and assume that a collection of process noise samples is available. We cast the risk-averse motion planning problem as a Markov decision process and propose a continuous reward function that explicitly accounts for the risk of collision with obstacles while encouraging the robot's motion towards the goal. To hedge against uncertainty in the noise distribution, we learn risk-averse control actions through Lipschitz-approximated Wasserstein distributionally robust deep Q-learning. The learned control actions yield a safe, risk-averse trajectory from the source to the goal that avoids all obstacles. Numerical simulations are presented to demonstrate the proposed approach.
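
The abstract's key computational step is to replace the Bellman expectation over the unknown noise distribution with a worst case over a Wasserstein ball centred at the empirical noise samples, approximated via a Lipschitz bound. Below is a minimal sketch of that target computation. It is not the authors' code: the dynamics (A, B), the reward weights, the obstacle map, the Wasserstein radius, and the Lipschitz constant are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Stochastic linear dynamics x' = A x + B u + w; only samples of the
    # process noise w are available, per the problem setup in the abstract.
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [0.1]])
    noise_samples = 0.02 * rng.standard_normal((100, 2))  # empirical noise data

    goal = np.array([1.0, 0.0])
    obstacle, obstacle_radius = np.array([0.5, 0.0]), 0.15  # illustrative map

    def reward(x):
        # Continuous reward: pull toward the goal, plus a smooth penalty that
        # grows as the state approaches the obstacle (collision-risk term).
        goal_term = -np.linalg.norm(x - goal)
        risk_term = -np.exp(-(np.linalg.norm(x - obstacle) - obstacle_radius) / 0.05)
        return goal_term + 5.0 * risk_term

    def dr_q_target(x, u, q_max, gamma=0.95, radius=0.01, lip_const=10.0):
        # Lipschitz approximation of the Wasserstein-DR Bellman target: by
        # Kantorovich-Rubinstein duality, for an L-Lipschitz integrand the
        # worst case over a type-1 Wasserstein ball of radius eps is bounded
        # below by (empirical mean - L * eps); that bound is used as the target.
        next_states = A @ x + B @ u + noise_samples        # one row per sample
        empirical = np.mean([reward(xn) + gamma * q_max(xn) for xn in next_states])
        return empirical - lip_const * radius              # pessimistic shift

    # Usage with a placeholder value function standing in for the deep Q-network.
    x0, u0 = np.array([0.0, 0.0]), np.array([0.3])
    print(dr_q_target(x0, u0, q_max=lambda xn: reward(xn)))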



    Title :

    Path Planning Using Wasserstein Distributionally Robust Deep Q-learning


    Contributors:

    Publication date :

    2023-01-01


    Type of media :

    Article/Chapter (Book)


    Type of material :

    Electronic Resource


    Language :

    English



    Classification :

    DDC:    629



    Similar titles :

    Distributionally robust airline fleet assignment problem

    Silva, Marco / Poss, Michael | DataCite | 2019


    Safe Reinforcement Learning Using Wasserstein Distributionally Robust MPC and Chance Constraint

    Kordabad, Arash Bahari / Wisniewski, Rafael / Gros, Sebastien | BASE | 2022


    Distributionally robust origin–destination demand estimation

    Wang, Jingxing / Song, Jun / Zhao, Chaoyue et al. | Elsevier | 2024


    Distributionally robust ramp metering under traffic demand uncertainty

    Gu, Chuanye / Wu, Changzhi / Wu, Yonghong et al. | Taylor & Francis Verlag | 2022


    A Distributionally Robust Approach to Black-Box Optimization

    Kapteyn, Michael G. / Willcox, Karen E. / Philpott, Andy | AIAA | 2018