Image-based Multi-Agent Coverage Path Planning (MACPP) uses images as input to steer multiple agents so that they collectively visit every node in a map while minimizing task duration and repeated node visits. State-of-the-art (SOTA) studies have applied Multi-Agent Deep Reinforcement Learning (MADRL) to automate MACPP, focusing primarily on minimizing task duration. However, these approaches overlook repeated node visits, which lengthens task duration and limits real-world applicability. To tackle this challenge, we develop a novel MADRL solution, referred to as MADRL with Mask Soft Attention, that minimizes task duration and node revisiting simultaneously. Our method uses mask soft attention to extract key features from raw image observations while masking out task-independent features, reducing computational complexity and improving sample efficiency. We also adopt a cascaded multi-actor-critic architecture that readily scales to larger numbers of agents: each agent is equipped with its own actor to learn an action policy, while a shared critic estimates the state value. To validate our approach, we implement seven SOTA MADRL methods from the MACPP literature as baselines. Simulation results show that our method significantly outperforms these baselines in both task duration and the number of repeated node visits.
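The abstract's two core ideas — soft attention whose weights are suppressed on task-independent feature locations, and per-agent actors paired with a single shared critic — could be organized roughly as in the PyTorch sketch below. Since only the abstract is available here, every module name, layer size, and the origin of the mask (supplied by the environment versus learned) is an assumption for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming: image observations, a binary spatial mask marking
# task-relevant cells, and a per-agent actor + shared critic layout. All
# names and hyperparameters are hypothetical; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskSoftAttention(nn.Module):
    """Soft attention over CNN feature-map locations; masked (task-independent)
    positions are driven to ~zero weight before normalization."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location logit

    def forward(self, feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); mask: (B, 1, H, W) with 1 = keep, 0 = drop
        logits = self.score(feats)                       # (B, 1, H, W)
        logits = logits.masked_fill(mask == 0, -1e9)     # suppress masked cells
        attn = torch.softmax(logits.flatten(2), dim=-1)  # normalize over H*W
        attn = attn.view_as(logits)                      # back to (B, 1, H, W)
        return (feats * attn).sum(dim=(2, 3))            # (B, C) pooled feature

class ActorCriticMACPP(nn.Module):
    """One actor head per agent plus a single shared critic head."""
    def __init__(self, n_agents: int, n_actions: int, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.attention = MaskSoftAttention(channels)
        self.actors = nn.ModuleList(
            nn.Linear(channels, n_actions) for _ in range(n_agents)
        )
        self.critic = nn.Linear(channels, 1)  # shared state-value estimate

    def forward(self, obs: torch.Tensor, mask: torch.Tensor):
        # obs: (B, 3, H, W) image observation of the map
        z = self.attention(self.encoder(obs), mask)
        policies = [F.softmax(actor(z), dim=-1) for actor in self.actors]
        value = self.critic(z)  # one value shared across all agents
        return policies, value

if __name__ == "__main__":
    # Smoke test on a random 16x16 map image with an all-ones mask.
    model = ActorCriticMACPP(n_agents=3, n_actions=5)
    obs, mask = torch.randn(2, 3, 16, 16), torch.ones(2, 1, 16, 16)
    policies, value = model(obs, mask)
    print(len(policies), policies[0].shape, value.shape)  # 3, (2, 5), (2, 1)
```

In this reading, tying all agents to one critic keeps the parameter count roughly constant as agents are added and gives every actor the same value baseline, which is one plausible interpretation of the abstract's claim that the architecture accommodates more agents with ease.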


    Title:
    Deep Reinforcement Learning for Image-Based Multi-Agent Coverage Path Planning

    Contributors:
    Xu, Meng (author) / She, Yechao (author) / Jin, Yang (author) / Wang, Jianping (author)

    Publication date:
    2023-10-10

    Size:
    4174999 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English