Deploying decentralized control strategies for outdoor multi-agent Uncrewed Aircraft Systems (UASs) is challenging due to timing variations, packet loss, and limited computing resources. In this work we address robustness to these conditions through a novel co-regulated control strategy that varies the periodicity of both control inputs and communication with other agents. Co-regulation is applied to a decentralized hierarchical controller consisting of a global component that governs inter-group coordination toward multiple targets and a local component that governs intra-group coordination of the agents as they progress to the target of interest. The control gains are "gain scheduled" according to current conditions, while a cyber controller schedules the control and communication tasks for execution based on swarm performance. The control gains are found via reinforcement learning, and the entire algorithm is deployed on a swarm of seven custom agents. Our results show the impact of rethinking swarming algorithms with computation and communication resource limitations in mind, and indicate that effective swarm control can be achieved using fewer resources while also improving the quality of service for an onboard, anytime collision avoidance algorithm.
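To make the co-regulation idea concrete, the sketch below shows one way a cyber controller could adjust the period of the control/communication task from a swarm performance signal, with the physical control gain scheduled against that period. This is a minimal illustrative assumption, not the authors' implementation; all names (co_regulate_period, scheduled_gain, the error target and rate constants) are hypothetical.

import numpy as np

def co_regulate_period(period, error, err_target=0.5,
                       period_min=0.02, period_max=0.5, rate=0.05):
    """Cyber controller: run tasks more often when tracking error grows,
    and relax the period when the swarm performs well, freeing CPU and
    network resources for other work (e.g., collision avoidance)."""
    new_period = period - rate * (error - err_target)
    return float(np.clip(new_period, period_min, period_max))

def scheduled_gain(period, k_base=1.0):
    """Gain scheduling: scale a discrete-time proportional gain with the
    sampling period so closed-loop behaviour stays roughly consistent."""
    return k_base * period

# Toy loop standing in for one agent tracking a target position.
pos, target = 0.0, 10.0
period = 0.1      # current control/communication period in seconds
t = 0.0
while t < 20.0:
    error = abs(target - pos)
    k = scheduled_gain(period)
    pos += k * (target - pos)                    # proportional step toward target
    period = co_regulate_period(period, error)   # cyber controller update
    t += period
print(f"final position {pos:.3f}, final period {period:.3f}s")

In the paper the scheduling signal is learned via reinforcement learning and applied across the hierarchical (inter-group and intra-group) controllers; the fixed update rules above merely illustrate the coupling between performance, task period, and gain.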
Co-Regulated Hierarchical Reinforcement Learning for Uncrewed Aircraft System Swarms
2025-05-14
2242588 bytes
Conference paper
Electronic Resource
English
Related items:
European Patent Office | 2022
Multi-agent Deep Reinforcement Learning for Countering Uncrewed Aerial Systems | Springer Verlag | 2024