In recent years, the growing intelligence and falling cost of Unmanned Aerial Vehicles (UAVs) have attracted broad interest in swarm-fleet applications. In particular, urban applications such as public transportation, logistics, and rescue operations require a fleet of UAVs that can plan and coordinate intelligently without human intervention, and a multi-agent reinforcement learning framework that can learn control policies for such systems is urgently needed. This paper proposes an AI-based Bio-inspired Decentralized Multi-Agent Reinforcement Learning (B-DMARL) framework, a multi-agent actor-critic model for executing an assigned task in a highly dynamic environment. B-DMARL is a distributed control architecture with two levels: a low-level controller built on an AI-based bio-inspired steering behavior algorithm handles group coordination and collision avoidance, while a high-level controller trained with a Proximal Policy Optimization (PPO) based reinforcement learning method learns to execute tasks correctly in more dynamic environments. In a virtual simulation environment, the proposed B-DMARL framework is applied to persistent surveillance tasks that require the cooperation and collaboration of UAVs. Simulation results demonstrate that the proposed methods achieve an improved learning rate and reward signal.
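The two-level design described in the abstract can be pictured with a small sketch: the low-level controller resembles a Reynolds-style boids steering rule (separation, alignment, cohesion), while the high-level PPO policy would supply task-level goals that bias those velocities. This is not the authors' code; the 2-D kinematic model and the weights `w_sep`, `w_align`, `w_coh` are illustrative assumptions.

```python
# Minimal sketch of a boids-style low-level steering rule of the kind the
# abstract attributes to the B-DMARL low-level controller. Illustrative only.
import numpy as np

def steering_velocity(positions, velocities, i,
                      w_sep=1.5, w_align=1.0, w_coh=1.0,
                      neighbor_radius=10.0, max_speed=5.0):
    """Combine separation, alignment, and cohesion for UAV i."""
    offsets = positions - positions[i]
    dists = np.linalg.norm(offsets, axis=1)
    mask = (dists > 0) & (dists < neighbor_radius)
    if not mask.any():
        return velocities[i]
    # Separation: steer away from nearby neighbors (collision avoidance).
    sep = -(offsets[mask] / dists[mask, None] ** 2).sum(axis=0)
    # Alignment: match the average heading of neighbors.
    align = velocities[mask].mean(axis=0) - velocities[i]
    # Cohesion: steer toward the neighbors' centroid (group coordination).
    coh = positions[mask].mean(axis=0) - positions[i]
    v = velocities[i] + w_sep * sep + w_align * align + w_coh * coh
    speed = np.linalg.norm(v)
    return v / speed * max_speed if speed > max_speed else v

# Example: 5 UAVs in a 2-D plane. The high-level PPO policy (not shown here)
# would provide goal waypoints for the surveillance task on top of this rule.
pos = np.random.uniform(0.0, 20.0, size=(5, 2))
vel = np.random.uniform(-1.0, 1.0, size=(5, 2))
new_vel = np.array([steering_velocity(pos, vel, i) for i in range(5)])
print(new_vel)
```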
Deep Multi Agent Reinforcement Learning Based Decentralized Swarm UAV Control Framework for Persistent Surveillance
Lecture Notes in Electrical Engineering
Asia-Pacific International Symposium on Aerospace Technology (APISAT 2021); Korea (Republic of); November 15-17, 2021
The Proceedings of the 2021 Asia-Pacific International Symposium on Aerospace Technology (APISAT 2021), Volume 2; Chapter 70; pp. 951-962
30 September 2022
12 pages
Article/Chapter (Book)
Electronic resource
English
A Multi-agent Deep Reinforcement Learning Framework for UAV Swarm
Springer Verlag | 2024
ArXiv | 2021