Based on a comparative review of recent research on UAV distribution centers, our group selected unmanned ground vehicles as the carrier for UAV distribution, used to collect and transport the UAVs, and split the application workflow into five main functional modules: UAV autonomous navigation and obstacle avoidance, UAV landing on a moving target, unmanned vehicle autonomous navigation and obstacle avoidance, moving-object recognition and tracking, and gesture recognition. This paper introduces two of the unmanned vehicle modules in a simulation environment: autonomous navigation with obstacle avoidance, and object detection. In particular, it focuses on the experimental methods and results of AirSim-based self-driving simulation with Deep Reinforcement Learning under the UE4 engine, analyzes the simulation results and proposes corresponding optimizations, and describes the YOLO-based object detection method and its concrete implementation details. Together, these modules provide a more complete solution for the unmanned vehicle part of the UAV distribution center management problem. The simulation results show that the Deep Q Network and the simulation environment used in this paper are suitable for verifying unmanned vehicle control: after a certain period of training, the neural network makes stable decisions that bring the unmanned vehicle to its destination in a specific indoor simulation environment. This verification of the unmanned vehicle provides a solid foundation for implementing these technologies in a UAV distribution center.
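For illustration only, the sketch below shows how a trained Deep Q Network policy could drive a simulated car through AirSim's Python CarClient API, in the spirit of the setup described in the abstract. The network architecture, state features, discrete action set, and helper names (QNet, ACTIONS, make_state) are assumptions for this sketch, not the paper's actual implementation.

    # Minimal sketch: driving an AirSim car with a (pre-trained) DQN policy.
    # Assumes the AirSim UE4 environment is running and `airsim` + `torch` are installed.
    # QNet, ACTIONS and the state features are illustrative assumptions, not the paper's design.
    import airsim
    import torch
    import torch.nn as nn

    # Hypothetical discrete action set: (steering, throttle) pairs.
    ACTIONS = [(-0.5, 0.5), (0.0, 0.7), (0.5, 0.5)]

    class QNet(nn.Module):
        """Small fully connected Q-network: state -> one Q-value per action."""
        def __init__(self, state_dim=4, n_actions=len(ACTIONS)):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, x):
            return self.net(x)

    def make_state(car_state, goal=(20.0, 0.0)):
        """Illustrative state: vehicle position, speed, and distance to a fixed goal."""
        pos = car_state.kinematics_estimated.position
        dist = ((goal[0] - pos.x_val) ** 2 + (goal[1] - pos.y_val) ** 2) ** 0.5
        return torch.tensor([pos.x_val, pos.y_val, car_state.speed, dist],
                            dtype=torch.float32)

    def drive(policy, max_steps=500):
        client = airsim.CarClient()
        client.confirmConnection()
        client.enableApiControl(True)
        client.reset()
        controls = airsim.CarControls()

        for _ in range(max_steps):
            state = make_state(client.getCarState())
            with torch.no_grad():
                action_idx = policy(state).argmax().item()   # greedy action from the DQN
            controls.steering, controls.throttle = ACTIONS[action_idx]
            client.setCarControls(controls)
            if client.simGetCollisionInfo().has_collided:    # stop the episode on collision
                break

        client.enableApiControl(False)

    if __name__ == "__main__":
        drive(QNet())  # untrained weights here; a real run would load trained parameters

A full training loop would additionally use an experience replay buffer, an epsilon-greedy exploration schedule, and a target network, as is standard for DQN; none of that is reproduced here.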



    Title:

    Automatic Control of Unmanned Vehicles Based on Deep Reinforcement Learning and YOLO Algorithm Using Airsim Simulation


    Additional title information:

    Lect. Notes Electrical Eng.


    Contributors:
    Long, Shengzhao (editor) / Dhillon, Balbir S. (editor) / Ye, Long (editor) / Zhang, Xiulin (author) / Qu, Xiaolei (author) / Yang, Shuting (author) / Dong, Junbiao (author) / Zhang, Jingcheng (author) / Li, Ke (author)

    Conference:

    International Conference on Man-Machine-Environment System Engineering, Beijing, China, October 18-20, 2024



    Publication date:

    September 29, 2024


    Format / Extent:

    7 pages





    Media type:

    Article/Chapter (Book)


    Format:

    Electronic resource


    Language:

    English





    Deep Reinforcement Learning Algorithm and Simulation Verification Analysis for Automatic Control of Unmanned Vehicles

    Chen, Yonghong / Zhang, Yuxiang / Chen, Jiaao et al. | British Library Conference Proceedings | 2023


    A Parameter Sharing Method for Reinforcement Learning Model between AirSim and UAVs

    Tseng, Shau Yin / Lai, Chin Feng / Wang, Ming Shi et al. | IEEE | 2018



    Motion control of unmanned underwater vehicles via deep imitation reinforcement learning algorithm

    Chu, Zhenzhong / Sun, Bo / Zhu, Daqi et al. | Wiley | 2020

    Free access