Object tracking is one of the most challenging and least researched tasks in computer vision, especially with respect to unmanned aerial vehicles. This is primarily due to several challenges such as the large distance to the tracked objects, the variety of object sizes, camera motion, etc. This paper demonstrates the implementation of a tracking pipeline using a YOLOv4-based network, referred to as YOLOv4eff, combined with a doubled long short-term memory (LSTM) network for tracking road objects. Our optimization of YOLOv4eff targets higher accuracy by improving the network architecture, using modified CSP techniques, the Swish activation function, etc. Our object tracker uses a convolutional network as a feature-map extractor applied to differences of sequential frames, YOLOv4eff as the detector, and a doubled long short-term memory (LSTM) network as the predictor of tracked object locations. Extensive experimental results on a self-collected dataset captured at heights of 10–30 meters, together with a performance comparison against other state-of-the-art tracking methods, show that our LYOLOv4eff is more accurate and effectively applicable to object tracking from a drone.
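
The abstract describes a three-stage pipeline: a convolutional feature extractor applied to differences of sequential frames, the YOLOv4eff detector, and a doubled (two-layer) LSTM that predicts object locations. Since this record does not include the paper's actual architecture details, the following is only a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired up; all module names, layer sizes, and tensor shapes are assumptions for illustration.

# Minimal sketch of the described pipeline, assuming PyTorch and placeholder
# module/tensor shapes; the real YOLOv4eff backbone, feature dimensions, and
# LSTM configuration are not given in this record.
import torch
import torch.nn as nn


class DifferentialFeatureExtractor(nn.Module):
    """Convolutional feature extractor over frame differences (hypothetical sizes)."""

    def __init__(self, out_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.SiLU(),  # Swish activation, as mentioned in the abstract
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.SiLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, prev_frame: torch.Tensor, curr_frame: torch.Tensor) -> torch.Tensor:
        diff = curr_frame - prev_frame  # difference of sequential frames
        x = self.conv(diff).flatten(1)
        return self.proj(x)


class DoubledLSTMPredictor(nn.Module):
    """Two stacked LSTM layers predicting the next object location (x, y, w, h)
    from per-frame features concatenated with the current detection."""

    def __init__(self, feat_dim: int = 256, box_dim: int = 4, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim + box_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, box_dim)

    def forward(self, feats: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, feat_dim); boxes: (B, T, 4) detections from the detector stage
        seq = torch.cat([feats, boxes], dim=-1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])  # predicted location for the next frame


if __name__ == "__main__":
    extractor = DifferentialFeatureExtractor()
    predictor = DoubledLSTMPredictor()
    prev = torch.rand(1, 3, 256, 256)
    curr = torch.rand(1, 3, 256, 256)
    feat = extractor(prev, curr)              # (1, 256)
    feats = feat.unsqueeze(1).repeat(1, 5, 1) # toy 5-frame history
    boxes = torch.rand(1, 5, 4)               # toy per-frame detections
    print(predictor(feats, boxes).shape)      # torch.Size([1, 4])

In a real tracker, the per-frame detections would come from the YOLOv4eff detector rather than random tensors, and the LSTM state would be carried across frames instead of re-running on a fixed-length history.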


    Title:

    Efficient objects tracking from an unmanned aerial vehicle


    Contributors:


    Publication date:

    23.06.2021


    Format / Extent:

    731947 bytes





    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Unmanned aerial vehicle target tracking method and system, unmanned aerial vehicle holder and unmanned aerial vehicle

    HU HUAZHI / HE WEIXIONG / HU HAISHENG | Europäisches Patentamt | 2022

    Free access


    Efficient Road Detection and Tracking for Unmanned Aerial Vehicle

    Zhou, Hailing / Kong, Hui / Wei, Lei et al. | IEEE | 2015


    Pedestrian tracking from an unmanned aerial vehicle

    Bian, Chao / Yang, Zhen / Zhang, Tao et al. | IEEE | 2016