We propose a novel formulation of the Inverse Reinforcement Learning (IRL) problem that jointly accounts for the compatibility of the identified reward with the expert behavior and for its effectiveness in the subsequent forward learning phase. Although quite natural, especially when the final goal is apprenticeship learning (learning policies from an expert), this aspect has so far been overlooked by IRL approaches. We introduce a new model-free IRL method that autonomously finds a trade-off between the error induced on the learned policy when a potentially sub-optimal reward is chosen and the estimation error caused by using finite samples in the forward learning phase; the latter can be controlled by also explicitly optimizing the discount factor of the related learning problem. The approach is based on a min-max formulation for the robust selection of the reward parameters and the discount factor, so that the distance between the expert's policy and the learned policy is minimized in the subsequent forward learning task when only a finite, possibly small, number of samples is available. Unlike most other IRL techniques, our approach does not require solving any planning or forward Reinforcement Learning problems. After presenting the formulation, we provide a numerical scheme for the optimization, and we show its effectiveness on an illustrative numerical case.
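As a rough illustration of the trade-off described in the abstract, the Python sketch below selects the discount factor that minimizes the sum of two hypothetical error terms: a suboptimality term assumed to shrink as the effective horizon 1/(1 - gamma) grows, and a finite-sample estimation term assumed to grow with that horizon and shrink as 1/sqrt(n). Both terms are invented stand-ins for illustration only, not the bounds derived in the paper; the point is merely that the selected discount factor moves toward 1 as the sample budget grows.

    import numpy as np

    # Illustrative sketch (not the paper's algorithm): pick the discount factor
    # that balances two hypothetical error sources in the forward phase.
    # Both term definitions below are assumptions made for this example only.

    def suboptimality_term(gamma):
        # Bias from acting with a possibly sub-optimal reward: assumed to
        # shrink as the effective horizon 1/(1 - gamma) grows.
        return 1.0 - gamma

    def estimation_term(gamma, n_samples):
        # Finite-sample estimation error of forward learning: assumed to grow
        # with the effective horizon and shrink as 1/sqrt(n).
        return 1.0 / ((1.0 - gamma) * np.sqrt(n_samples))

    gammas = np.linspace(0.50, 0.99, 200)  # candidate discount factors
    for n in (10, 100, 10_000):            # forward-learning sample budgets
        bounds = [suboptimality_term(g) + estimation_term(g, n) for g in gammas]
        g_star = gammas[int(np.argmin(bounds))]
        print(f"n = {n:6d}  ->  selected gamma = {g_star:.3f}")

Running the sketch shows the selected discount factor increasing with the sample budget n, mirroring the abstract's claim that the discount factor can be tuned to the number of samples available in the forward learning phase.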


    Title:
    Balancing Sample Efficiency and Suboptimality in Inverse Reinforcement Learning

    Publication date:
    01.01.2022

    Media type:
    Conference paper

    Format:
    Electronic resource

    Language:
    English

    Classification:
    DDC: 629



    Similar documents:

    Suboptimality of Cascaded and Federated Kalman Filters
    Levy, L. / Institute of Navigation | British Library Conference Proceedings | 1996

    Interconnection between communication and suboptimality for distributed control systems
    Sprodowski, Tobias / Universität Bremen | TIBKAT | 2021 | Free access

    Hybrid vehicle fuel efficiency using inverse reinforcement learning
    Gupta, Rakesh / Ramachandran, Deepak / Vogel, Adam C. et al. | Europäisches Patentamt | 2015 | Free access

    Curricular Subgoals for Inverse Reinforcement Learning
    Liu, Shunyu / Qing, Yunpeng / Xu, Shuqi et al. | IEEE | 2025

    Efficient Routing with Inverse Reinforcement Learning
    Subramanian, Srikrishnan / Sankar, Adithya Raam | BASE | 2018 | Free access