To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents' policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.
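
The high-level loop described in the abstract (train an approximate best response to the other agents' policy mixtures, rebuild the empirical game, then recompute meta-strategies for policy selection) can be summarized in a short sketch. The Python code below is a minimal, hypothetical illustration under assumed interfaces; the helpers train_best_response, estimate_payoffs, and meta_solve are placeholder names introduced here and are not from the paper.

    import random

    def train_best_response(player, opponent_mixtures):
        # Placeholder: in practice this would be a deep RL run that trains an
        # approximate best response against the fixed opponent mixtures.
        return "policy_p%d_%d" % (player, random.randrange(10**6))

    def estimate_payoffs(policy_sets):
        # Placeholder: simulate joint policy combinations to fill an empirical
        # payoff table (the empirical game).
        return {}

    def meta_solve(payoff_table, policy_sets):
        # Placeholder meta-solver: uniform mixture over each player's policies;
        # a game-theoretic meta-solver would be used instead.
        return [[1.0 / len(pols)] * len(pols) for pols in policy_sets]

    def generalized_response_loop(num_players=2, num_iterations=3):
        # Each player starts with one policy and a trivial mixture over it.
        policy_sets = [["initial_policy_p%d" % p] for p in range(num_players)]
        meta_strategies = [[1.0] for _ in range(num_players)]

        for _ in range(num_iterations):
            for player in range(num_players):
                # Approximate best response to the other players' current mixtures.
                opponents = [(policy_sets[j], meta_strategies[j])
                             for j in range(num_players) if j != player]
                policy_sets[player].append(train_best_response(player, opponents))

            # Empirical game-theoretic analysis over the expanded policy sets,
            # followed by meta-strategy computation for policy selection.
            payoff_table = estimate_payoffs(policy_sets)
            meta_strategies = meta_solve(payoff_table, policy_sets)

        return policy_sets, meta_strategies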





    Title:

    A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning


    Contributors:
    Lanctot, M (Author) / Zambaldi, V (Author) / Gruslys, A (Author) / Lazaridou, A (Author) / Tuyls, K (Author) / Perolat, J (Author) / Silver, D (Author) / Graepel, T (Author) / Guyon, I / Luxburg, UV

    Publication date:

    2017-12-09


    Notes:

    In: Guyon, I and Luxburg, UV and Bengio, S and Wallach, H and Fergus, R and Vishwanathan, S and Garnett, R, (eds.) Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017). Neural Information Processing Systems (NIPS): Long Beach, CA, USA. (2017)


    Media type:

    Paper


    Format:

    Electronic resource


    Language:

    English


    Classification:

    DDC:    629




    A Cooperative Multiagent Approach for Optimal Drone Deployment Using Reinforcement Learning

    Acosta‐González, Rigoberto / Klaine, Paulo V. / Montejo‐Sánchez, Samuel et al. | Wiley | 2021


    UAV Swarm Confrontation Using Hierarchical Multiagent Reinforcement Learning

    Baolai Wang / Shengang Li / Xianzhong Gao et al. | DOAJ | 2021

    Open access


    Autonomous Bus Fleet Control Using Multiagent Reinforcement Learning

    Sung-Jung Wang / S. K. Jason Chang | DOAJ | 2021

    Open access