Because traditional recovery systems lack visual perception, monitoring a UAV's real-time status in communication-constrained or GPS-denied environments is difficult; this weakens decision-making and parameter adjustment and increases the uncertainty and risk of recovery. Visual detection technology can compensate for the limitations of GPS and communication links and improve the autonomy and adaptability of the recovery system. However, the existing RT-DETR algorithm is limited by single-path feature extraction, a simplified fusion mechanism, and high-frequency information loss, which makes it difficult to balance detection accuracy and computational efficiency. This paper therefore proposes a lightweight, transformer-based visual detection model that further optimizes computational efficiency. Firstly, to address the performance bottleneck of existing models, a Parallel Backbone is proposed: a shared initial feature extraction module feeds a dual-branch structure that captures local features and global semantic information, respectively, and a progressive fusion mechanism adaptively integrates multiscale features, balancing detection accuracy and lightweight design. Secondly, an adaptive multiscale feature pyramid network (AMFPN) is designed, which integrates information across scales through multi-level feature fusion and an information transmission mechanism, alleviating information loss in small-target detection and improving accuracy in complex backgrounds. Finally, a wavelet frequency-domain-optimized reverse feature fusion mechanism (WT-FORM) is proposed: the wavelet transform decomposes shallow features into multiple frequency bands, and a combination of weighted computation and feature compensation reduces computational complexity while further enhancing the representation of global context. Experimental results on three datasets show that the improved model reduces parameter count and computational load by 43.2% and 58%, respectively, while maintaining detection accuracy comparable to the original RT-DETR, and it provides more accurate detection even in complex environments with low light, occlusion, or small targets.
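The abstract only names the wavelet decomposition step of WT-FORM; the sketch below illustrates the general idea in PyTorch, assuming a single-level Haar transform. It is not the authors' implementation: the module and parameter names (haar_dwt, WaveletFusionSketch, band_weights) are illustrative assumptions. The point it shows is that splitting a shallow feature map into four sub-bands halves the spatial resolution, so weighting and fusing the bands is cheaper than operating on the full-resolution map.

```python
# Minimal sketch of a wavelet-based feature decomposition and weighted fusion,
# loosely following the WT-FORM description in the abstract. Names and the
# exact fusion recipe are assumptions, not the paper's code.
import torch
import torch.nn as nn


def haar_dwt(x: torch.Tensor) -> torch.Tensor:
    """Single-level 2D Haar transform; returns LL, LH, HL, HH stacked on channels."""
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return torch.cat([ll, lh, hl, hh], dim=1)


class WaveletFusionSketch(nn.Module):
    """Weights the four sub-bands and projects them back to the original width."""

    def __init__(self, channels: int):
        super().__init__()
        self.band_weights = nn.Parameter(torch.ones(4))        # learnable per-band weights
        self.proj = nn.Conv2d(4 * channels, channels, kernel_size=1)

    def forward(self, shallow: torch.Tensor) -> torch.Tensor:
        bands = haar_dwt(shallow)                               # (B, 4C, H/2, W/2)
        _, c4, _, _ = bands.shape
        w = self.band_weights.repeat_interleave(c4 // 4)        # one weight per band, per channel
        bands = bands * w.view(1, -1, 1, 1)                     # weighted sub-bands
        return self.proj(bands)                                 # compensated, fused features


# Example: a 256-channel shallow map at 80x80 becomes 256 channels at 40x40.
feat = torch.randn(1, 256, 80, 80)
out = WaveletFusionSketch(256)(feat)
print(out.shape)  # torch.Size([1, 256, 40, 40])
```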
FUR-DETR: A Lightweight Detection Model for Fixed-Wing UAV Recovery
2025
Article (Journal)