Fault-tolerant flight control systems require multiple independent sources of navigation data, especially during low-altitude maneuvers. During final approach, the relative position and orientation of the aircraft can be computed from monocular camera images in which the runway is visible. However, this navigation sensor has different error characteristics than ILS or GNSS. This paper presents the first steps towards a runway detection method with subpixel accuracy and identifies the main potential bottlenecks of vision-aided navigation.
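As context for the abstract, the sketch below shows one common way such a relative pose could be recovered once the runway is detected in a monocular image: solving a perspective-n-point (PnP) problem over the runway corners. This is not the paper's method; the runway dimensions, camera intrinsics, and pixel coordinates are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper): camera pose relative to a runway
# from its four detected corners, using OpenCV's PnP solver.
import numpy as np
import cv2

# Runway corners in a runway-fixed frame (metres); an assumed 3000 m x 45 m runway.
object_points = np.array([
    [0.0,    -22.5, 0.0],   # near-left threshold corner
    [0.0,     22.5, 0.0],   # near-right threshold corner
    [3000.0,  22.5, 0.0],   # far-right corner
    [3000.0, -22.5, 0.0],   # far-left corner
], dtype=np.float64)

# Hypothetical subpixel corner detections in the image (pixel coordinates).
image_points = np.array([
    [589.4, 412.7],
    [698.2, 411.9],
    [652.1, 355.3],
    [637.8, 355.6],
], dtype=np.float64)

# Assumed pinhole intrinsics (focal length and principal point in pixels).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume the image has already been undistorted

# Solve for the camera pose with respect to the runway frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)        # rotation: runway frame -> camera frame
    camera_position = -R.T @ tvec     # camera position expressed in the runway frame
    print("Camera position w.r.t. runway threshold [m]:", camera_position.ravel())
```

The accuracy of such a pose estimate depends directly on how precisely the runway corners are localized in the image, which is why the abstract emphasizes subpixel runway detection.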
Navigation data extraction from monocular camera images during final approach
2018-06-01
1250607 bytes
Conference paper
Electronic Resource
English
Optimal Maneuvering for Autonomous Relative Navigation Using Monocular Camera Sequential Images
AIAA | 2021
A Bayesian approach to simultaneously recover camera pose and non-rigid shape from monocular images
British Library Online Contents | 2016
A Bayesian approach to simultaneously recover camera pose and non-rigid shape from monocular images
BASE | 2016
Scale-aware navigation of a low-cost quadrocopter with a monocular camera
Tema Archive | 2014
MONOCULAR VISION RANGING METHOD, STORAGE MEDIUM, AND MONOCULAR CAMERA
European Patent Office | 2022