This paper addresses the challenge of finding the shortest path in complex environments by integrating machine learning with traditional algorithms to enhance path planning. The goal is to strike a balance between path length and processing time while ensuring reliable trajectories for Unmanned Aerial Vehicles (UAVs). We examine four families of methods: reinforcement learning, sample-based, geometric-based, and polynomial-based approaches. Our main focus is on reinforcement learning for its adaptability and ability to learn from experience in complex environments, despite its known slow convergence and high computational cost. The proposed algorithm optimizes each step of the standard reinforcement learning method, Q-Learning, using classical techniques to refine its core behavior and overcome its limitations. Testing in various simulated complex and unknown environments demonstrates the algorithm's effectiveness in improving path planning efficiency and accuracy. Compared with the original Q-Learning approach, our method reduces path length by 11%, flight time by 35%, and processing time by 64%.
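The abstract describes refining tabular Q-Learning with classical techniques, but the record does not include the paper's implementation. As a rough, non-authoritative illustration of the Q-Learning baseline the work builds on, the sketch below trains a grid-world path planner with an epsilon-greedy policy; the occupancy grid, reward values, and hyperparameters are hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

# Minimal Q-Learning sketch for grid-based path planning (illustrative only).
# Assumptions: a 2D occupancy grid, 4-connected moves, a fixed start/goal pair,
# and simple step/collision/goal rewards. None of these come from the paper.
GRID = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
])  # 1 = obstacle, 0 = free cell
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA, EPSILON, EPISODES, MAX_STEPS = 0.1, 0.95, 0.2, 2000, 200
rng = np.random.default_rng(0)
Q = np.zeros((*GRID.shape, len(ACTIONS)))  # Q-table: one value per (cell, action)

def step(state, action):
    """Apply an action; blocked or out-of-bounds moves keep the agent in place."""
    r, c = state[0] + ACTIONS[action][0], state[1] + ACTIONS[action][1]
    if not (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]) or GRID[r, c]:
        return state, -5.0, False       # collision penalty
    if (r, c) == GOAL:
        return (r, c), 100.0, True      # goal reward
    return (r, c), -1.0, False          # per-step cost encourages short paths

for _ in range(EPISODES):
    state = START
    for _ in range(MAX_STEPS):
        if rng.random() < EPSILON:                      # epsilon-greedy exploration
            action = int(rng.integers(len(ACTIONS)))
        else:
            action = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, action)
        # Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += ALPHA * (reward + GAMMA * np.max(Q[nxt]) - Q[state][action])
        state = nxt
        if done:
            break

# Greedy rollout of the learned policy to extract a path from start to goal.
state, path = START, [START]
while state != GOAL and len(path) < GRID.size:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    path.append(state)
print(path)
```

The per-step cost in this sketch is what biases the learned policy toward shorter paths; the paper's contribution, per the abstract, is layering classical techniques onto each step of such a Q-Learning loop to speed convergence and reduce processing time.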




Title: Effective Path Planning for UAVs in Complex and Unknown Environments Through Integrated Q-Learning and Classical Algorithms

Publication date: 2025-05-14

Size: 283046 bytes

Type of media: Conference paper

Type of material: Electronic Resource

Language: English





3D Real-Time Path Planning of UAVs in Dynamic Environments

Zammit, Christian / Van Kampen, Erik-Jan | TIBKAT | 2021


    Path Planning of Unmanned Aerial Vehicles (UAVs) in Windy Environments

    Herath M. P. C. Jayaweera / Samer Hanoun | DOAJ | 2022


    3D real-time path planning of UAVs in dynamic environments

    Zammit, Christian / Van Kampen, Erik-Jan | AIAA | 2021