Tackling overestimation in Q-learning is an important problem that has been extensively studied in single-agent reinforcement learning, but has received comparatively little attention in the multi-agent setting. In this work, we empirically demonstrate that QMIX, a popular Q-learning algorithm for cooperative multi-agent reinforcement learning (MARL), suffers in practice from more severe overestimation than previously acknowledged, and that this overestimation is not mitigated by existing approaches. We rectify this with a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline, and demonstrate its effectiveness in stabilizing learning. Furthermore, we propose to employ a softmax operator, which we approximate efficiently in a novel way in the multi-agent setting, to further reduce the potential overestimation bias. Our approach, Regularized Softmax (RES) Deep Multi-Agent Q-Learning, is general and can be applied to any Q-learning-based MARL algorithm. We demonstrate that, when applied to QMIX, RES avoids severe overestimation and significantly improves performance, yielding state-of-the-art results on a variety of cooperative multi-agent tasks, including the challenging StarCraft II micromanagement benchmarks.
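To make the abstract's two ingredients concrete, the following is a minimal, illustrative sketch of how a softmax backup and a baseline-anchored regularizer could be combined in a Q-learning loss. This is not the authors' implementation: the function names (softmax_operator, res_loss), the hyperparameters beta and lam, the quadratic form of the penalty, and the assumption that the softmax is computed over a pre-sampled subset of joint actions (standing in for the paper's efficient approximation) are all assumptions made here for illustration.

```python
import torch

def softmax_operator(q_sampled, beta=5.0):
    """Softmax-weighted value estimate: sum_a softmax(beta * Q)_a * Q_a.

    q_sampled: (batch, k) joint action-values for k sampled joint actions,
    a stand-in for an efficient approximation of the softmax over the
    exponentially large joint action space.
    """
    weights = torch.softmax(beta * q_sampled, dim=-1)
    return (weights * q_sampled).sum(dim=-1)

def res_loss(q_tot, q_baseline, q_next_sampled, rewards, dones,
             gamma=0.99, lam=0.1, beta=5.0):
    """Hypothetical RES-style loss: softmax backup plus regularization.

    q_tot:          (batch,) joint action-values of the actions taken
    q_baseline:     (batch,) baseline values (e.g., from a target network)
    q_next_sampled: (batch, k) target values for sampled next joint actions
    """
    # The softmax operator replaces the usual max in the bootstrap target,
    # softening the overestimation bias that the max operator induces.
    targets = rewards + gamma * (1.0 - dones) * softmax_operator(q_next_sampled, beta)
    td_loss = (q_tot - targets.detach()).pow(2).mean()
    # Penalize joint action-values that rise above the baseline, discouraging
    # unjustifiably large estimates (the exact penalty form is assumed here).
    reg = torch.clamp(q_tot - q_baseline.detach(), min=0.0).pow(2).mean()
    return td_loss + lam * reg
```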


    Title :

    Regularized Softmax Deep Multi-Agent Q-Learning


    Contributors:
    Pan, L (author) / Rashid, T (author) / Peng, B (author) / Huang, L (author) / Whiteson, S (author)

    Publication date :

    2021-11-23


    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English


    Classification :

    DDC: 629 / 006




    Similar titles :

    Multi-output regularized projection

    Yu, K. / Yu, S. / Tresp, V. | IEEE | 2005


    Multi-Agent Deep Reinforcement Learning in Vehicular OCC

    Islam, Amirul / Musavian, Leila / Thomos, Nikolaos | IEEE | 2022



    Deep Reinforcement Learning for Multi-Agent Autonomous Satellite Inspection

    Lei, Henry H. / Shubert, Matt / Damron, Nathan et al. | Springer Verlag | 2024