Future aerial combat will demand more tactical flight maneuvers for aerial engagement, and conventional rule-based flight maneuvers alone cannot cover every unpredictable air combat scenario. This study designs an intelligent agent model for autonomous air combat, focusing on close dogfight scenarios. We apply deep reinforcement learning so that the ownship aircraft learns offensive maneuvers for tracking and shooting down a target aircraft. We train the agent's policy model with two representative state-of-the-art algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC). The agent learns its combat strategy in Digital Combat Simulator (DCS), a realistic flight environment with high fidelity for simulating air combat scenarios. To verify the proposed approach, we design baseline policy models that vary the learning algorithm and the method for handling time-delayed state transitions. In the evaluation, the proposed policy model handles the delayed state transitions of the aircraft system and shows better target-tracking performance across various air combat scenarios.
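
The abstract does not spell out how the delayed state transition is handled. As an illustration only, one common way to cope with delayed dynamics in reinforcement learning is to augment the policy's observation with a short history of recent actions, so the agent can account for control inputs whose effects it has not yet observed. The sketch below shows this idea under that assumption; the wrapper class, its parameters, and the Gym-style reset()/step() interface are hypothetical and not taken from the paper.

from collections import deque

import numpy as np


class DelayAwareObservationWrapper:
    """Wraps an RL environment so observations include recent actions.

    `env` is assumed to expose Gym-style reset()/step(action) methods
    returning NumPy observations; all names here are illustrative.
    """

    def __init__(self, env, action_dim, history_len=4):
        self.env = env
        # Fixed-length buffer of the most recent actions, initially zeros.
        self.history = deque(
            [np.zeros(action_dim) for _ in range(history_len)],
            maxlen=history_len,
        )

    def _augment(self, obs):
        # Concatenate the raw state with the stacked action history.
        return np.concatenate([obs] + list(self.history))

    def reset(self):
        obs = self.env.reset()
        for i in range(len(self.history)):
            self.history[i] = np.zeros_like(self.history[i])
        return self._augment(obs)

    def step(self, action):
        # Record the action before the (delayed) observation arrives.
        self.history.append(np.asarray(action, dtype=float))
        obs, reward, done, info = self.env.step(action)
        return self._augment(obs), reward, done, info

An environment wrapped this way can then be trained with any off-the-shelf PPO or SAC implementation, since the delay handling is confined to the observation.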





    Title:

    Deep Reinforcement Learning-based Intelligent Agent for Autonomous Air Combat


    Contributors:
    Yoo, Jaewoong (author) / Seong, Hyunki (author) / Shim, David Hyunchul (author) / Bae, Jung Ho (author) / Kim, Yong-Duk (author)


    Publication date:

    18 September 2022


    Format / Extent:

    1916210 bytes





    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar items:

    Research on intelligent combat decision making based on deep reinforcement learning

    Wang, Yao / Chen, Qijie / Ma, Haiqiang et al. | SPIE | 2023




    Multi-UAV Cooperative Offensive Combat Intelligent Planning Based on Deep Reinforcement Learning

    LI Junsheng / YUE Longfei / ZUO Jialiang et al. | DOAJ | 2022

    Free access