To collaborate efficiently with unknown partners in cooperative control settings, each partner must adapt based on online experience. This rather general and widely applicable control setting, in which each cooperation partner may pursue individual goals while the control laws and objectives of the other partners are unknown, entails several challenges: the non-stationarity of the environment, the multi-agent credit assignment problem, the alter-exploration problem and the coordination problem. We propose new, modular, deep, decentralized Multi-Agent Reinforcement Learning mechanisms to address these challenges. To this end, our method uses a time-dependent prioritization of samples, incorporates a model of the system dynamics, and utilizes variable, accountability-driven learning rates as well as simulated, artificial experiences to guide the learning process. The effectiveness of our method is demonstrated on a simulated, nonlinear cooperative control task.
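
No code accompanies this record, so the following is only a minimal sketch of what one of the named mechanisms, "time-dependent prioritization of samples", could look like in practice: a replay buffer whose sampling weights decay with sample age, so that transitions collected while the partners still followed older policies are replayed less often. All identifiers (TimePrioritizedBuffer, decay, and the exponential-decay scheme itself) are assumptions chosen for this illustration, not taken from the paper.

    import math
    import random
    from collections import deque

    class TimePrioritizedBuffer:
        """Replay buffer whose sampling weights decay with sample age.

        Hypothetical sketch only: in a non-stationary multi-agent
        setting, older transitions become less informative, so newer
        samples are drawn more often. The decay scheme is an
        assumption, not the paper's published method.
        """

        def __init__(self, capacity=10000, decay=1e-3):
            self.buffer = deque(maxlen=capacity)  # stores (timestamp, transition)
            self.decay = decay                    # age-based decay rate (assumed)
            self.t = 0                            # global step counter

        def add(self, transition):
            self.buffer.append((self.t, transition))
            self.t += 1

        def sample(self, batch_size):
            # Weight each stored transition by exp(-decay * age), so recent
            # experience dominates the minibatch while old experience fades.
            pool = list(self.buffer)
            weights = [math.exp(-self.decay * (self.t - ts)) for ts, _ in pool]
            batch = random.choices(pool, weights=weights, k=batch_size)
            return [transition for _, transition in batch]

    # Minimal usage: fill the buffer with dummy transitions, draw a batch.
    buf = TimePrioritizedBuffer(capacity=1000, decay=0.01)
    for step in range(500):
        buf.add(("state", "action", "reward", "next_state"))
    minibatch = buf.sample(32)

Exponential decay is only one possible weighting; the paper does not specify its prioritization scheme on this page.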


    Title:

    Deep Decentralized Reinforcement Learning for Cooperative Control


    Contributors:
    Köpf, Florian (author) / Tesfazgi, Samuel (author) / Flad, Michael (author) / Hohmann, Sören (author)

    Publication date:

    2020-03-10


    Notes:

    ISSN: 2405-8963


    Media type:

    Article (journal)


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 620 / 629



    Multi-Agent Deep Reinforcement Learning for Decentralized Cooperative Traffic Signal Control

    Zhao, Yang / Hu, Jian-Ming / Gao, Ming-Yang et al. | ASCE | 2020



    Decentralized control and local information for robust and adaptive decentralized Deep Reinforcement Learning

    Schilling, Malte / Melnik, Andrew / Ohl, Frank W. et al. | BASE | 2021

    Open Access

    DDRL: A Decentralized Deep Reinforcement Learning Method for Vehicle Repositioning

    Xi, Jinhao / Zhu, Fenghua / Chen, Yuanyuan et al. | IEEE | 2021