In recent years, the number of UAV applications in diverse situations has grown, including missions in resource-limited environments where the space and time available to perform tasks are constrained. In this paper, we propose a new framework that leverages machine learning, specifically deep reinforcement learning, to enable a UAV to accomplish navigation tasks in complex resource-limited environments. The proposed framework adopts a PID algorithm to control the UAV's attitude and position during flight and uses the PPO algorithm to optimize navigation planning. Technical details include the use of domain-specific knowledge and a well-designed reward function and state representation. We run general tests with a single quadcopter UAV in a simulated PyBullet environment using the developed framework. The experimental results show that the proposed framework achieves high performance on navigation tasks in resource-limited environments. This enables continuing research and active development of deep reinforcement learning based frameworks for UAV autonomous navigation in more complex applications and environments.
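
For orientation, the sketch below illustrates the kind of two-layer control structure the abstract describes: a classical PID loop tracking setpoints that a learned (PPO) navigation planner would provide. It is a minimal, self-contained toy under stated assumptions, not the authors' implementation; the gains, the 1-D double-integrator dynamics, and the altitude example are all invented for illustration.

    class PID:
        """Textbook PID controller. Gains and time step used below are
        illustrative assumptions, not values from the paper."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return (self.kp * error
                    + self.ki * self.integral
                    + self.kd * derivative)


    # Toy 1-D altitude-hold loop: a double integrator stands in for the
    # quadcopter's vertical dynamics. In the paper's framework, a trained
    # PPO policy would supply the navigation setpoints that this low-level
    # PID layer tracks.
    G = 9.81
    dt = 0.02
    pid = PID(kp=8.0, ki=0.5, kd=4.0, dt=dt)
    z, vz = 0.0, 0.0      # altitude [m] and vertical velocity [m/s]
    target = 1.0          # hypothetical waypoint altitude from the planner
    for _ in range(500):  # simulate 10 s
        accel_cmd = G + pid.update(target, z)  # gravity feedforward + PID
        vz += (accel_cmd - G) * dt
        z += vz * dt
    print(f"altitude after 10 s: {z:.3f} m")  # settles near 1.000

Separating the low-level stabilizer from the learned planner in this way keeps the reinforcement learning problem small, which is consistent with the abstract's emphasis on resource-limited settings.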


    Title:
    Autonomous Navigation of UAVs in Resource Limited Environment Using Deep Reinforcement Learning

    Contributors:
    Sha, Peng (author) / Wang, Qingling (author)

    Publication date:
    2022-11-19

    Size:
    584757 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English


    Similar titles:

    A new deep reinforcement learning architecture for autonomous UAVs

    Muñoz Ferran, Guillem | BASE | 2018

    Free access

    Autonomous vehicle navigation with deep reinforcement learning

    Cabañeros López, Àlex | BASE | 2019

    Free access

    Autonomous UAV Navigation Using Reinforcement Learning

    Pham, Huy X. / La, Hung M. / Feil-Seifer, David et al. | ArXiv | 2018

    Free access