A growing number of real-world control problems require teams of software agents to solve a joint task through cooperation. Such tasks arise naturally whenever human workers are replaced by machines, such as robot arms in manufacturing or autonomous cars in transportation. At the same time, new technologies have given rise to novel cooperative control problems that are beyond human reach, such as package routing. Whether due to physical constraints such as partial observability, robustness requirements, or the need to manage large joint action spaces, cooperative agents are often required to function in a fully decentralised fashion. This means that each agent has access only to its own local sensory input during task execution and has no explicit communication channels to other agents. Deep multi-agent reinforcement learning (DMARL) is a natural framework for learning control policies in such settings.

When trained in simulation or in a laboratory, learning algorithms often have access to additional information that will not be available at execution time. Such centralised training with decentralised execution (CTDE) poses a number of technical challenges to DMARL algorithms that try to exploit the centralised setting in order to facilitate the training of decentralised policies. These difficulties arise primarily from the apparent incongruence between joint policy learning, which can represent arbitrary policies but is not naively decentralisable and scales poorly with the number of agents, and independent learning, which is readily decentralisable and scalable but provably less expressive and prone to environment non-stationarity due to the presence of other learning agents.

The first part of this thesis develops algorithms that use the technique of value decomposition in order to exploit centralised training for learning decentralised policies. In Monotonic Value Factorisation for Deep Multi-Agent Reinforcement Learning, we introduce the novel Q-learning algorithm QMIX. QMIX uses a centralised monotonic mixing network in ...
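To make the value-decomposition idea concrete, the following is a minimal PyTorch sketch of a QMIX-style monotonic mixing network, reconstructed from the description above rather than taken from the thesis code; the hyper-network layout, embedding size, and ELU nonlinearity are assumptions. The key point is that taking the absolute value of the state-generated mixing weights enforces dQ_tot/dQ_a >= 0, so each agent's greedy action with respect to its own utility also maximises the joint value, keeping execution fully decentralised.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MonotonicMixer(nn.Module):
        """Sketch of a monotonic mixing network (QMIX-style, assumed details).

        Combines per-agent utilities Q_a into a joint Q_tot while keeping
        Q_tot monotonic in every Q_a. Hyper-networks conditioned on the
        global state generate the mixing weights; their absolute value
        guarantees non-negative weights and hence monotonicity.
        """

        def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
            super().__init__()
            self.n_agents, self.embed_dim = n_agents, embed_dim
            # State-conditioned hyper-networks producing weights and biases
            # of the two mixing layers.
            self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
            self.hyper_b1 = nn.Linear(state_dim, embed_dim)
            self.hyper_w2 = nn.Linear(state_dim, embed_dim)
            self.hyper_b2 = nn.Sequential(
                nn.Linear(state_dim, embed_dim),
                nn.ReLU(),
                nn.Linear(embed_dim, 1),
            )

        def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
            # agent_qs: (batch, n_agents); state: (batch, state_dim)
            b = agent_qs.size(0)
            # abs() keeps mixing weights non-negative -> monotonic Q_tot.
            w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
            b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
            hidden = F.elu(torch.bmm(agent_qs.view(b, 1, self.n_agents), w1) + b1)
            w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
            b2 = self.hyper_b2(state).view(b, 1, 1)
            return (torch.bmm(hidden, w2) + b2).view(b, 1)  # Q_tot: (batch, 1)

    # Example usage with made-up sizes: 3 agents, 16-dim state, batch of 8.
    mixer = MonotonicMixer(n_agents=3, state_dim=16)
    q_tot = mixer(torch.randn(8, 3), torch.randn(8, 16))  # shape (8, 1)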


    Title: Coordination and communication in deep multi-agent reinforcement learning
    Publication date: 2022-04-12
    Type of media: Theses
    Type of material: Electronic Resource
    Language: English
    Classification: DDC 006 / 629



    GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning

    Ruan, J / Du, Y / Xiong, X et al. | BASE | 2022


    Multi-Agent Deep Reinforcement Learning in Vehicular OCC

    Islam, Amirul / Musavian, Leila / Thomos, Nikolaos | IEEE | 2022


    Deep Reinforcement Learning for Multi-Agent Autonomous Satellite Inspection

    Lei, Henry H. / Shubert, Matt / Damron, Nathan et al. | Springer Verlag | 2024


    Autonomous Separation Assurance with Deep Multi-Agent Reinforcement Learning

    Brittain, Marc W. / Yang, Xuxi / Wei, Peng | AIAA | 2021


    Communication-efficient and federated multi-agent reinforcement learning

    Krouka, M. (Mounssif) / Elgabli, A. (Anis) / Issaid, C. B. (Chaouki Ben) et al. | BASE | 2022
