Abstract

Obstacle avoidance and path planning for unmanned aerial vehicles (UAVs) is an essential and challenging task, especially in unknown environments with dynamic obstacles. To address this problem, a UAV path planning method based on Deep Q-Learning is proposed. An experience replay mechanism is introduced into the deep reinforcement learning (DRL) process, and a value network is established to compute the optimal value of each UAV action. The optimal flight policy of the UAV is determined through the $$\epsilon$$-greedy algorithm. The experimental results show that the UAV with the well-trained model can reliably avoid moving obstacles, and its cruise time is reduced by half compared with that of the untrained UAV.
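
The abstract describes three standard Deep Q-Learning ingredients: a value (Q) network, an experience replay buffer, and an ε-greedy exploration policy. The following is a minimal, hypothetical sketch of those pieces in PyTorch; the state dimension, discrete action set, network size, and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Sketch of a DQN-style setup: value network, experience replay, epsilon-greedy policy.
# All dimensions and hyperparameters below are assumptions for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 6      # assumed: e.g. UAV position/velocity plus nearest-obstacle information
N_ACTIONS = 5      # assumed: a small set of discrete steering commands
GAMMA = 0.99       # discount factor
EPSILON = 0.1      # exploration rate for the epsilon-greedy policy

# Value network: maps a state to one Q-value per discrete action.
q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Experience replay memory; transitions are stored as
# (state, action, reward, next_state, done) tuples.
replay_buffer = deque(maxlen=10_000)


def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy policy: explore with probability EPSILON,
    otherwise take the action with the highest predicted Q-value."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def train_step(batch_size: int = 64) -> None:
    """Sample a random minibatch from the replay buffer and perform one
    Q-learning update toward the bootstrapped target."""
    if len(replay_buffer) < batch_size:
        return
    batch = random.sample(replay_buffer, batch_size)
    states, actions, rewards, next_states, dones = zip(*batch)
    states = torch.stack(states)
    next_states = torch.stack(next_states)
    actions = torch.tensor(actions)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    dones = torch.tensor(dones, dtype=torch.float32)

    # Q-values of the actions actually taken.
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # One-step bootstrapped targets; terminal transitions get no future value.
    with torch.no_grad():
        targets = rewards + GAMMA * (1 - dones) * q_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full training loop, transitions collected while the UAV flies would be appended to replay_buffer and train_step would be called once per environment step, so updates are computed from decorrelated, randomly sampled experience rather than from consecutive observations.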


Title:

    Unmanned Aerial Vehicles Path Planning Based on Deep Reinforcement Learning


Contributors:
    Wang, Guoqiu (author) / Zheng, Xuanyu (author) / Zhao, Haitong (author) / Zhao, Qidong (author) / Zhang, Changsheng (author) / Zhang, Bin (author)


Publication date:

    07.11.2019


Format / Extent:

    8 pages





Media type:

    Article/Chapter (Book)


Format:

    Electronic resource


Language:

    English




    Cooperative path planning of unmanned aerial vehicles

    Tsourdos, Antonios / White, Brian / Shanmugavel, Madhavan | TIBKAT | 2011



    Cooperative path planning of unmanned aerial vehicles

Tsourdos, Antonios / White, Brian / Shanmugavel, Madhavan | SLUB | 2011


    Deep Reinforcement Learning for Mapless Navigation of Unmanned Aerial Vehicles

    Grando, Ricardo B. / de Jesus, Junior C. / Drews-Jr, Paulo L. J. | IEEE | 2020