We propose a novel formulation of the Inverse Reinforcement Learning (IRL) problem that jointly accounts for the compatibility of the identified reward with the expert's behavior and for its effectiveness in the subsequent forward learning phase. Although quite natural, especially when the final goal is apprenticeship learning (learning policies from an expert), this aspect has so far been completely overlooked by IRL approaches. We propose a new model-free IRL method that autonomously finds a trade-off between the error induced on the learned policy when a potentially sub-optimal reward is chosen and the estimation error caused by using finite samples in the forward learning phase; the latter can be controlled by also explicitly optimizing the discount factor of the associated learning problem. The approach is based on a min-max formulation for the robust selection of the reward parameters and the discount factor, so that the distance between the expert's policy and the learned policy is minimized in the subsequent forward learning task when only a finite, and possibly small, number of samples is available. Unlike the majority of other IRL techniques, our approach requires no planning or forward Reinforcement Learning problems to be solved. After presenting the formulation, we provide a numerical scheme for the optimization and demonstrate its effectiveness on an illustrative numerical case.
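The abstract does not reproduce the objective itself, but the trade-off it describes can be pictured numerically. The following is a minimal, purely illustrative sketch, not the paper's method: every function shape, constant, and name below is our own assumption. It models a bias term that grows as the chosen discount factor's effective horizon 1/(1-γ) departs from the expert's, plus a finite-sample estimation term with the familiar 1/(1-γ)² dependence, and picks the discount factor minimizing their sum over a grid.

```python
import numpy as np

# Purely illustrative sketch of the trade-off described in the abstract.
# All shapes and constants are our own assumptions, not the paper's actual
# objective: `suboptimality_bias` mimics a discount-mismatch bound
# (difference of effective horizons 1/(1 - gamma)), and `estimation_error`
# mimics the usual 1/(1 - gamma)^2 * sqrt(log(1/delta)/n) finite-sample bound.

def suboptimality_bias(gamma, gamma_expert=0.99, reward_gap=1.0):
    """Bias from planning with a discount factor other than the expert's."""
    return reward_gap * abs(1.0 / (1.0 - gamma) - 1.0 / (1.0 - gamma_expert))

def estimation_error(gamma, n_samples, delta=0.05):
    """Finite-sample estimation error; shrinks with n, blows up as gamma -> 1."""
    return np.sqrt(np.log(1.0 / delta) / n_samples) / (1.0 - gamma) ** 2

def best_gamma(n_samples, grid=np.linspace(0.5, 0.99, 200)):
    """Discount factor minimizing the (assumed) total error over the grid."""
    total = suboptimality_bias(grid) + estimation_error(grid, n_samples)
    return grid[int(np.argmin(total))]

if __name__ == "__main__":
    for n in (100, 1_000, 10_000):
        print(f"n = {n:>6d}  ->  best gamma ~ {best_gamma(n):.3f}")
```

With these assumed error shapes, the minimizing discount factor drifts toward the expert's as the sample size grows, which is the kind of balance the paper's min-max formulation is designed to strike automatically rather than by hand-tuning.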


    Title: Balancing Sample Efficiency and Suboptimality in Inverse Reinforcement Learning

    Publication date: 2022-01-01

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English

    Classification: DDC 629



    Suboptimality of Cascaded and Federated Kalman Filters

    Levy, L. / Institute of Navigation | British Library Conference Proceedings | 1996


    Interconnection between communication and suboptimality for distributed control systems

    Sprodowski, Tobias / Universität Bremen | TIBKAT | 2021


    Hybrid vehicle fuel efficiency using inverse reinforcement learning

    Gupta, Rakesh / Ramachandran, Deepak / Vogel, Adam C. et al. | European Patent Office | 2015


    Curricular Subgoals for Inverse Reinforcement Learning

    Liu, Shunyu / Qing, Yunpeng / Xu, Shuqi et al. | IEEE | 2025


    Efficient Routing with Inverse Reinforcement Learning

    Subramanian, Srikrishnan / Sankar, Adithya Raam | BASE | 2018