In order to collaborate efficiently with unknown partners in cooperative control settings, adapting to the partners based on online experience is required. This rather general and widely applicable control setting, in which each cooperation partner may pursue individual goals while the control laws and objectives of the partners are unknown, entails various challenges such as the non-stationarity of the environment, the multi-agent credit assignment problem, the alter-exploration problem and the coordination problem. We propose new, modular deep decentralized Multi-Agent Reinforcement Learning mechanisms to account for these challenges. To this end, our method uses a time-dependent prioritization of samples, incorporates a model of the system dynamics, and utilizes variable, accountability-driven learning rates as well as simulated, artificial experiences in order to guide the learning process. The effectiveness of our method is demonstrated by means of a simulated, nonlinear cooperative control task.
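
The abstract names a time-dependent prioritization of samples as one mechanism for coping with the non-stationarity introduced by learning partners, but it gives no implementation details. The Python sketch below illustrates one plausible form of such a prioritization for a replay buffer, in which the sampling probability of a stored transition decays exponentially with its age; the class name, the decay parameter and the exponential weighting are illustrative assumptions and not the mechanism used in the paper.

    import math
    import random
    from collections import deque

    class TimePrioritizedReplayBuffer:
        """Replay buffer that biases sampling toward recent transitions.

        In a non-stationary multi-agent setting, old transitions were generated
        against partner policies that have since changed, so recency-weighted
        sampling is one simple way to prioritize samples by time.
        (Hypothetical sketch; names and decay form are assumptions.)
        """

        def __init__(self, capacity=10000, decay=1e-3):
            self.capacity = capacity
            self.decay = decay            # how quickly old samples lose priority
            self.buffer = deque(maxlen=capacity)
            self.step = 0                 # global time index of the newest transition

        def add(self, transition):
            # store (insertion_time, transition) so priorities can depend on age
            self.buffer.append((self.step, transition))
            self.step += 1

        def sample(self, batch_size):
            # priority decays exponentially with the age of each transition
            weights = [math.exp(-self.decay * (self.step - t)) for t, _ in self.buffer]
            indices = random.choices(range(len(self.buffer)), weights=weights, k=batch_size)
            return [self.buffer[i][1] for i in indices]

    if __name__ == "__main__":
        buf = TimePrioritizedReplayBuffer(capacity=1000, decay=0.01)
        for i in range(500):
            buf.add({"obs": i, "action": 0, "reward": 0.0, "next_obs": i + 1})
        batch = buf.sample(32)
        print(len(batch), "transitions sampled, biased toward recent experience")

In such a sketch, the decay rate trades off data efficiency against responsiveness to the partners' changing behavior; the paper's actual prioritization scheme may differ.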



    Title:
    Deep Decentralized Reinforcement Learning for Cooperative Control

    Contributors:

    Publication date:
    2020-03-10

    Remarks:
    ISSN: 2405-8963

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English

    Classification:
    DDC: 620 / 629



    Multi-Agent Deep Reinforcement Learning for Decentralized Cooperative Traffic Signal Control

    Zhao, Yang / Hu, Jian-Ming / Gao, Ming-Yang et al. | ASCE | 2020



    Decentralized control and local information for robust and adaptive decentralized Deep Reinforcement Learning

    Schilling, Malte / Melnik, Andrew / Ohl, Frank W. et al. | BASE | 2021



    DDRL: A Decentralized Deep Reinforcement Learning Method for Vehicle Repositioning

    Xi, Jinhao / Zhu, Fenghua / Chen, Yuanyuan et al. | IEEE | 2021