With recent technological developments, the UAS (unmanned aerial system) has been recognized for its value and usefulness in various fields. Prior researchers have deployed several drones collaboratively to achieve common goals such as target tracking, rescue operations, and target finding with multi-UAS systems. Multi-agent reinforcement learning is a class of artificial intelligence techniques in which many agents cooperate to perform tasks. When a multi-UAS cooperative navigation technique is deployed in a complex environment such as an urban logistics system, learning becomes considerably harder for the agents. In this study, we present the improved Multi-Actor-Attention-Critic (iMAAC) approach, a modified multi-agent reinforcement learning method for urban air mobility logistics services. A Unity-based virtual simulation environment is created to validate the proposed method; it replicates a real-world UAS logistics delivery service. When the results are compared with those of other landmark reinforcement learning algorithms in the multi-agent setting, iMAAC shows a faster learning rate.
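
The abstract names iMAAC as a modification of the Multi-Actor-Attention-Critic family, in which a centralized critic uses attention over the other agents' encoded observation-action pairs. The following is a minimal, hedged sketch of such an attention critic, assuming PyTorch; the class name, dimensions, and architecture are illustrative assumptions and do not reproduce the paper's iMAAC implementation.

```python
# Minimal sketch of an attention-based centralized critic in the MAAC family,
# which iMAAC modifies. Illustrative only; names and dimensions are hypothetical,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    """Centralized critic: each agent's Q-values attend over the other agents'
    encoded observation-action pairs, so the critic scales to many UAS agents."""
    def __init__(self, obs_dim, act_dim, n_agents, hidden=64):
        super().__init__()
        self.n_agents = n_agents
        self.encoder = nn.Linear(obs_dim + act_dim, hidden)  # per-agent (o, a) encoder
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        self.q_head = nn.Linear(2 * hidden, act_dim)  # own encoding + attended context -> Q per action

    def forward(self, obs, acts):
        # obs: (batch, n_agents, obs_dim); acts: (batch, n_agents, act_dim), one-hot
        e = F.relu(self.encoder(torch.cat([obs, acts], dim=-1)))          # (B, N, H)
        q, k, v = self.query(e), self.key(e), self.value(e)
        scores = torch.matmul(q, k.transpose(1, 2)) / (k.shape[-1] ** 0.5)  # (B, N, N)
        # Mask the diagonal so each agent attends only to the other agents.
        mask = torch.eye(self.n_agents, device=obs.device, dtype=torch.bool)
        scores = scores.masked_fill(mask, float("-inf"))
        context = torch.matmul(F.softmax(scores, dim=-1), v)               # (B, N, H)
        return self.q_head(torch.cat([e, context], dim=-1))                # (B, N, act_dim)

# Usage example: 3 cooperating UAS agents, 10-dim observations, 5 discrete actions.
critic = AttentionCritic(obs_dim=10, act_dim=5, n_agents=3)
obs = torch.randn(4, 3, 10)
acts = F.one_hot(torch.randint(0, 5, (4, 3)), num_classes=5).float()
print(critic(obs, acts).shape)  # torch.Size([4, 3, 5])
```

In this family of methods, each agent also has a decentralized actor trained against this shared critic; the attention weighting is what keeps the critic tractable as the number of agents grows.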

Title: Multi-agent Reinforcement Learning-Based UAS Control for Logistics Environments

Additional title: Lect. Notes Electrical Eng.

Contributors:

Conference: Asia-Pacific International Symposium on Aerospace Technology; 2021; Korea (Republic of); November 15-17, 2021

Publication date: 2022-09-30

Size: 10 pages

Type of media: Article/Chapter (Book)

Type of material: Electronic Resource

Language: English