Enabling obstacle detection capabilities onboard manned or unmanned aircraft during the landing phase using vision sensors is a challenging task. In fact, detecting and tracking static and moving objects such as other vehicles, which typically lie below the horizon, is hindered by the cluttered background and by ownship motion. Motion-based detection approaches (such as those exploiting homography) may attain reasonable performance in flat scenarios, but encounter challenges in three-dimensional environments due to the highly variable distance of imaged features. This paper explores obstacle detection algorithms for landing, considering both motion-based algorithms exploiting homography and appearance-based techniques built on Convolutional Neural Networks (CNNs), with the aim of combining them. The obstacle detection function is conceived to complement a previously developed precision navigation system for landing, exploiting the same vision sensors. Different approaches are considered for static and moving obstacles. The pipeline has been validated on both synthetic and flight-test data, showing promising results in view of future, more structured integrations.
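The record does not detail the homography-based branch mentioned in the abstract. Purely as an illustration, the sketch below shows one common way such motion-based detection can be set up with OpenCV: estimate a homography from background feature matches between consecutive frames, warp the previous frame onto the current one, and flag regions whose residual violates the planar-scene assumption. All function names, parameters, and thresholds here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical homography-based motion detection sketch (not the paper's code).
# Assumes grayscale frames from a downward/forward-looking landing camera.
import cv2
import numpy as np

def detect_motion_candidates(prev_frame, curr_frame, ratio=0.75, diff_thresh=40):
    """Warp prev_frame onto curr_frame using a homography estimated from
    matched features; regions that do not fit the dominant planar background
    (e.g. moving or elevated obstacles) remain in the difference image."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    if des1 is None or des2 is None:
        return None  # not enough texture to match features

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return None  # a homography needs at least four correspondences

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return None

    h, w = curr_frame.shape[:2]
    warped = cv2.warpPerspective(prev_frame, H, (w, h))
    residual = cv2.absdiff(curr_frame, warped)
    _, mask = cv2.threshold(residual, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask  # candidate obstacle regions violating the planar assumption
```

In practice such a mask would be post-processed (morphology, connected components) and, as the abstract suggests, fused with appearance-based CNN detections rather than used on its own.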
Integrated Vision-Aided Precision Navigation and Obstacle Detection Sensing Pipeline for UAM Approach and Landing
29.09.2024
781,042 bytes
Article (Conference)
Electronic resource
English
Vision-aided inertial navigation for pinpoint planetary landing
Online Contents | 2007
Terrain Aided Navigation for Precision Landing on Lunar Surface
British Library Conference Proceedings | 2000