To address the autonomous obstacle avoidance problem of a UAV in a multi-obstacle map environment, a UAV obstacle avoidance algorithm based on an improved Q-learning method is proposed. By analyzing the UAV flight dynamics, a kinematic model of the UAV is built, from which a Markov jump system model is further obtained. Taking into account the safe distance to obstacles and the position of the target point, an improved immediate reward function is presented, and a Q-learning algorithm for UAV obstacle avoidance is developed using the ε-greedy strategy, which improves learning efficiency, realizes autonomous obstacle avoidance, and optimizes the route to the target position. In the simulation experiments, the UAV tracks the planned route in different environments and the accumulated rewards are compared and analyzed, which shows the effectiveness and advantages of the UAV self-learning algorithm proposed in this paper.
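To make the described approach concrete, the sketch below shows one plausible reading of the learning loop: tabular Q-learning with an ε-greedy policy on a small 2-D grid map, where the immediate reward combines attraction toward the target with a penalty whenever the agent comes closer than a safe distance to an obstacle. The grid size, obstacle layout, reward weights, and hyperparameters (ALPHA, GAMMA, EPSILON, SAFE_DIST) are illustrative assumptions, not the paper's values or its exact kinematic model.

```python
# Minimal sketch, assuming a discretized grid world stands in for the
# UAV kinematic model. All constants below are illustrative assumptions.
import random

GRID = 10                                        # assumed 10 x 10 map
OBSTACLES = {(3, 3), (3, 4), (6, 7), (7, 2)}     # assumed obstacle cells
START, GOAL = (0, 0), (9, 9)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # four axis-aligned (dx, dy) moves
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1            # assumed learning-rate, discount, exploration
SAFE_DIST = 1.5                                  # assumed safe distance to obstacles

def reward(state):
    """Improved immediate reward: attraction toward the target plus a
    penalty when the UAV is within SAFE_DIST of the nearest obstacle."""
    if state == GOAL:
        return 100.0
    if state in OBSTACLES:
        return -100.0
    # shaping term: closer to the goal -> smaller penalty
    r = -0.5 * (abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))
    nearest = min(((state[0] - o[0]) ** 2 + (state[1] - o[1]) ** 2) ** 0.5
                  for o in OBSTACLES)
    if nearest < SAFE_DIST:
        r -= 10.0 * (SAFE_DIST - nearest)        # safety-distance penalty
    return r

def step(state, action):
    """Apply an action, clamping the UAV inside the map boundary."""
    nxt = (min(max(state[0] + action[0], 0), GRID - 1),
           min(max(state[1] + action[1], 0), GRID - 1))
    done = nxt == GOAL or nxt in OBSTACLES
    return nxt, reward(nxt), done

# tabular Q function over (state, action) pairs
Q = {((x, y), a): 0.0 for x in range(GRID) for y in range(GRID) for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy: explore with probability EPSILON, otherwise exploit."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(2000):
    s = START
    for _ in range(500):                         # cap episode length
        a = choose_action(s)
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])   # Q-learning update
        s = s2
        if done:
            break
```

After training, following the greedy action argmax over Q from the start cell traces an obstacle-avoiding route toward the goal, mirroring the route optimization behavior described in the abstract.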

