Because traditional recovery systems lack visual perception, it is difficult to monitor a UAV's real-time status in communication-constrained or GPS-denied environments. This limits decision-making and parameter adjustment and increases the uncertainty and risk of recovery. Visual detection technology can compensate for the limitations of GPS and communication links and improve the autonomy and adaptability of the system. However, the existing RT-DETR algorithm is constrained by single-path feature extraction, a simplified fusion mechanism, and high-frequency information loss, which makes it difficult to balance detection accuracy with computational efficiency. This paper therefore proposes a lightweight, transformer-based visual detection model that further optimizes computational efficiency. First, to address the performance bottleneck of existing models, a Parallel Backbone is proposed: a shared initial feature-extraction module feeds a dual-branch structure that captures local features and global semantic information, and a progressive fusion mechanism adaptively integrates the resulting multiscale features, balancing detection accuracy against model size. Second, an adaptive multiscale feature pyramid network (AMFPN) is designed, which integrates information across scales through multi-level feature fusion and an information-transmission mechanism, alleviating information loss in small-target detection and improving accuracy in complex backgrounds. Finally, a wavelet frequency-domain-optimized reverse feature fusion mechanism (WT-FORM) is proposed: the wavelet transform decomposes shallow features into multiple frequency bands, and a weighted-calculation and feature-compensation strategy reduces computational complexity while further enhancing the representation of global context. Experimental results on three datasets show that the improved model reduces parameter count and computational load by 43.2% and 58%, respectively, while maintaining detection accuracy comparable to the original RT-DETR. Even in complex environments with low light, occlusion, or small targets, it provides more accurate detection results.
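
The record does not describe the architecture in detail, but the dual-branch idea in the abstract can be pictured with a short PyTorch sketch: a shared input is processed by a convolutional branch (local detail) and a self-attention branch (global semantics), then merged by a learned gate standing in for the progressive fusion step. The module name, channel widths, and gating form below are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ParallelBackboneBlock(nn.Module):
    """Illustrative dual-branch block: a convolutional branch for local
    detail and a self-attention branch for global context, merged by a
    learned per-channel gate (a stand-in for progressive fusion)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: depthwise-separable convolution for spatial detail.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        # Global branch: multi-head self-attention over flattened tokens.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Fusion gate: per-channel weight balancing the two branches.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, HW, C)
        global_, _ = self.attn(tokens, tokens, tokens)      # global semantics
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        g = self.gate(torch.cat([local, global_], dim=1))   # (B, C, 1, 1)
        return g * local + (1 - g) * global_                # adaptive merge


if __name__ == "__main__":
    block = ParallelBackboneBlock(channels=64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```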


    Title: FUR-DETR: A Lightweight Detection Model for Fixed-Wing UAV Recovery

    Contributors: Yu Yao (author) / Jun Wu (author) / Yisheng Hao (author) / Zhen Huang (author) / Zixuan Yin (author) / Jiajing Xu (author) / Honglin Chen (author) / Jiahua Pi (author)

    Publication date: 2025

    Type of media: Article (Journal)

    Type of material: Electronic Resource

    Language: Unknown




    WRRT-DETR: Weather-Robust RT-DETR for Drone-View Object Detection in Adverse Weather

    Bei Liu / Jiangliang Jin / Yihong Zhang et al. | DOAJ | 2025

    Free access

    Pipers Row - the DETR view

    Desai, S. / British Cement Association | British Library Conference Proceedings | 1997



    FIXED WING DRONE

    European Patent Office | 2021

    Free access

    Fixed-Wing UAV Model

    Yu, Ziquan / Zhang, Youmin / Jiang, Bin et al. | Springer Verlag | 2023