Many real-world scenarios involve a team of agents that have to coordinate their policies to achieve a shared goal. Previous studies mainly focus on decentralized control to maximize a common reward and barely consider the coordination among control policies, which is critical in dynamic and complicated environments. In this work, we propose factorizing the joint team policy into a graph generator and a graph-based coordinated policy to enable coordinated behaviours among agents. The graph generator adopts an encoder-decoder framework that outputs directed acyclic graphs (DAGs) to capture the underlying dynamic decision structure. We also apply DAGness-constrained and DAG depth-constrained optimization in the graph generator to balance efficiency and performance. The graph-based coordinated policy exploits the generated decision structure. The graph generator and coordinated policy are trained simultaneously to maximize the discounted return. Empirical evaluations on Collaborative Gaussian Squeeze, Cooperative Navigation, and Google Research Football demonstrate the superiority of the proposed method. The code is available at https://github.com/Amanda-1997/GCS_aamas337.
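The abstract does not spell out how the DAGness constraint is imposed. A common differentiable formulation (from the NOTEARS line of work) penalizes a weighted adjacency matrix A with h(A) = tr(exp(A ∘ A)) - d, which is zero exactly when A encodes a DAG. The sketch below is illustrative only, assuming such a NOTEARS-style penalty; it is not the authors' exact objective, and all names are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def dagness_penalty(adj: np.ndarray) -> float:
    """NOTEARS-style acyclicity penalty: h(A) = tr(exp(A ∘ A)) - d.

    h(A) == 0 exactly when the weighted adjacency matrix A encodes a DAG.
    This is one common differentiable DAGness constraint; the paper's exact
    formulation may differ (illustrative sketch only).
    """
    d = adj.shape[0]
    # Hadamard square keeps entries non-negative and the penalty differentiable.
    m = np.multiply(adj, adj)
    # Trace of the matrix exponential counts weighted closed walks of all lengths;
    # it exceeds d precisely when the graph contains a cycle.
    return float(np.trace(expm(m)) - d)

# Example: a 3-agent decision graph 0 -> 1 -> 2 (a DAG) has zero penalty,
# while adding the edge 2 -> 0 creates a cycle and a positive penalty.
A_dag = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])
A_cyc = A_dag.copy()
A_cyc[2, 0] = 1.
print(dagness_penalty(A_dag))  # ~0.0
print(dagness_penalty(A_cyc))  # > 0
```

In such a setup the penalty is typically added to the training loss (e.g. via an augmented-Lagrangian weight) so that the graph generator is pushed toward acyclic decision structures while the discounted return is maximized.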



    Title:

    GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning


    Contributors:
    Ruan, J. (Author) / Du, Y. (Author) / Xiong, X. (Author) / Xing, D. (Author) / Li, X. (Author) / Meng, L. (Author) / Zhang, H. (Author) / Wang, J. (Author) / Xu, B. (Author)

    Publication date:

    2022-05-01


    Notes:

    In: AAMAS '22: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (pp. 1128-1136). ACM Press: New York, NY, USA. (2022)


    Media type:

    Paper


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629