Interception games between groups of unmanned aerial vehicles (UAVs) will be crucial in future intelligent warfare. To address the cooperative interception game against aerial cluster attacks, a multi-agent deep reinforcement learning (DRL) framework based on the twin delayed deep deterministic policy gradient (TD3) method is proposed. The framework combines the single-agent delayed policy-gradient algorithm with a centralized-evaluation, distributed-execution architecture. To improve convergence, a generalized advantage function is designed. Simulation results show that the learned strategy enables the UAVs to intelligently assign interception targets according to real-time battlefield conditions.
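
The abstract names a centralized-evaluation, distributed-execution variant of TD3. The following is a minimal sketch of that general multi-agent TD3 pattern, not the authors' implementation: it assumes PyTorch, a shared team reward, and illustrative network sizes and replay-batch layout, none of which are specified in the abstract; the paper's generalized advantage function is likewise not reproduced here.

```python
# Minimal sketch (assumptions, not the paper's code) of multi-agent TD3:
# decentralized actors, centralized twin critics, delayed policy updates.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Actor(nn.Module):
    """Decentralized actor: maps one UAV's local observation to its action."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, act_dim), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)


class TwinCritic(nn.Module):
    """Centralized twin critics scoring the joint observation-action pair."""
    def __init__(self, joint_obs_dim, joint_act_dim):
        super().__init__()
        def make():
            return nn.Sequential(nn.Linear(joint_obs_dim + joint_act_dim, 256),
                                 nn.ReLU(), nn.Linear(256, 1))
        self.q1, self.q2 = make(), make()

    def forward(self, joint_obs, joint_act):
        x = torch.cat([joint_obs, joint_act], dim=-1)
        return self.q1(x), self.q2(x)


def matd3_update(actors, critic, tgt_actors, tgt_critic,
                 actor_opts, critic_opt, batch, step,
                 gamma=0.99, tau=0.005, noise=0.2, noise_clip=0.5, delay=2):
    """One TD3-style update for n cooperating interceptors (shared team reward)."""
    obs, act, rew, next_obs, done = batch          # lists of per-agent tensors
    n = len(actors)
    with torch.no_grad():
        # Target policy smoothing: clipped noise on each agent's target action.
        next_act = []
        for i in range(n):
            a = tgt_actors[i](next_obs[i])
            eps = (torch.randn_like(a) * noise).clamp(-noise_clip, noise_clip)
            next_act.append((a + eps).clamp(-1.0, 1.0))
        q1_t, q2_t = tgt_critic(torch.cat(next_obs, -1), torch.cat(next_act, -1))
        y = rew + gamma * (1.0 - done) * torch.min(q1_t, q2_t)   # clipped double-Q target

    q1, q2 = critic(torch.cat(obs, -1), torch.cat(act, -1))
    critic_loss = F.mse_loss(q1, y) + F.mse_loss(q2, y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    if step % delay == 0:                          # delayed policy updates
        for i in range(n):
            # Re-evaluate only agent i's action; other agents' actions stay fixed.
            joint = [a.detach() for a in act]
            joint[i] = actors[i](obs[i])
            actor_loss = -critic(torch.cat(obs, -1), torch.cat(joint, -1))[0].mean()
            actor_opts[i].zero_grad(); actor_loss.backward(); actor_opts[i].step()
        # Polyak-average all target networks toward the learned networks.
        for net, tgt in list(zip(actors, tgt_actors)) + [(critic, tgt_critic)]:
            for p, p_t in zip(net.parameters(), tgt.parameters()):
                p_t.data.mul_(1 - tau).add_(tau * p.data)


# Illustrative setup: 3 interceptor UAVs, 10-dim local observation, 2-dim action.
n, obs_dim, act_dim, B = 3, 10, 2, 64
actors = [Actor(obs_dim, act_dim) for _ in range(n)]
critic = TwinCritic(n * obs_dim, n * act_dim)
tgt_actors, tgt_critic = copy.deepcopy(actors), copy.deepcopy(critic)
actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
batch = ([torch.randn(B, obs_dim) for _ in range(n)],
         [torch.rand(B, act_dim) * 2 - 1 for _ in range(n)],
         torch.randn(B, 1),
         [torch.randn(B, obs_dim) for _ in range(n)],
         torch.zeros(B, 1))
matd3_update(actors, critic, tgt_actors, tgt_critic, actor_opts, critic_opt, batch, step=0)
```

In this sketch the actor objective is the plain centralized Q-value; a scheme like the paper's generalized advantage function would presumably replace or reshape that term, but the abstract gives no details, so none are assumed here.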


Title: A Multi-agent Reinforcement Learning Framework for Coordinated Multi-UAV Interception Strategies

Additional title: Lecture Notes in Electrical Engineering

Contributors: Yan, Liang (editor) / Duan, Haibin (editor) / Deng, Yimin (editor) / Chen, Hong (author) / Li, Bochen (author) / Wang, Chenggang (author) / Ding, Lu (author) / Song, Lei (author)

Conference: International Conference on Guidance, Navigation and Control; Changsha, China; August 9-11, 2024

Publication date: 2025-03-06

Size: 11 pages

Type of media: Article/Chapter (Book)

Type of material: Electronic Resource

Language: English




Similar titles:

Multi-Agent Coordinated Interception of Multiple Rogue Drones
Valianti, Panayiota / Papaioannou, Savvas / Kolios, Panayiotis et al. | IEEE | 2020

Multi-Agent Reinforcement Learning for Multiple Rogue Drone Interception
Valianti, Panayiota / Malialis, Kleanthis / Kolios, Panayiotis et al. | IEEE | 2023

Multi-Underwater Target Interception Strategy Based on Deep Reinforcement Learning
Gan, Wenhao / Peng, Yunfei / Qiao, Lei | DOAJ | 2025