Vision-Based Localization and Autonomous Homing for UAVs
Conference paper, 2024-09-29

In GPS-denied environments, vision-based techniques for simultaneous localization and mapping (SLAM) can be beneficial, even in cases where maneuvers to precise locations are still necessary. In this work, we explore monocular vision-based SLAM as the basis for a guidance method that enables an unmanned aerial vehicle (UAV) to perform a “homing” maneuver towards location(s) in the flight environment that may be selected “on-the-go” during flight. The estimation uses a Harris corner algorithm that generates “feature points” from the images of a monocular camera. An Extended Kalman Filter (EKF) fuses these feature points at each instant with measurements from an inertial measurement unit (IMU). This fusion performs SLAM, i.e., it both estimates the state of the ownship and maintains a database of estimated “world points” (features) in the flight environment. The resulting localization drives the homing guidance framework. The vision-based estimation framework was first implemented in simulation; the full vision-based estimation and homing framework was then validated in an indoor flight test using a small UAV with an onboard monocular camera. This flight test demonstrates that the vision-based framework can guide a UAV and its operator through a semi-automated homing maneuver towards a selected object in an unknown environment.
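To make the described pipeline concrete, the following is a minimal Python sketch of its three pieces: Harris feature extraction (here via OpenCV’s goodFeaturesToTrack with the Harris response enabled, one common implementation), a generic EKF measurement update, and a simple proportional homing command. The function names, parameter values, and the toy measurement model are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np
import cv2

def detect_feature_points(gray, max_corners=200):
    """Harris-based "feature points" (pixel coordinates) from a grayscale frame."""
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=10, useHarrisDetector=True, k=0.04)
    return np.empty((0, 2)) if corners is None else corners.reshape(-1, 2)

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: fuse a vision measurement z into the
    SLAM state x (ownship pose plus mapped world points) with covariance P."""
    y = z - h(x)                      # innovation: measured minus predicted
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def homing_command(p_ownship, p_target, gain=0.5, v_max=1.0):
    """Proportional velocity command from the estimated ownship position
    toward a selected world point (gain and speed limit are illustrative)."""
    v = gain * (p_target - p_ownship)
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

# Toy usage on a synthetic frame: a bright square whose corners fire the detector.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(frame, (100, 80), (220, 160), 255, -1)
pts = detect_feature_points(frame)

# Fuse one detected pixel into a toy 2-state filter (identity measurement model),
# then compute a homing command toward an assumed target point.
x, P = np.zeros(2), np.eye(2)
x, P = ekf_update(x, P, z=pts[0], h=lambda s: s, H=np.eye(2), R=0.5 * np.eye(2))
print("feature points:", len(pts), "| homing command:", homing_command(x, np.array([120.0, 100.0])))
```

In the actual filter described above, h(x) would be the camera projection of each estimated world point, the state would include the ownship pose and IMU biases propagated between frames, and the update would run per image against the world-point database rather than this toy two-state example.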