Pose estimation using a monocular camera is an area of active research for visual-inertial unmanned aerial vehicles. Common methods fuse pose estimates from two sources using a Kalman filter, or use a Lie-algebraic representation to solve global motion constraints. Some systems couple an inertial measurement unit (IMU) with the camera's visual pose estimate for robust pose estimation. This work extends the idea of a tightly coupled visual-inertial system to a photometric-error-based method of semi-dense visual odometry. The pose estimated from the camera alone is prone to error spikes when keyframe tracking is lost. Through extensive experimentation we show that our visual-inertial system for semi-dense visual odometry outperforms visual odometry alone. We also demonstrate the robustness of our method and compare its performance against existing state-of-the-art tightly coupled visual-inertial systems in an outdoor environment.
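To make the fusion idea mentioned in the abstract concrete, the following is a minimal sketch of loosely coupled pose fusion with a linear Kalman filter: an IMU-driven prediction step corrected by a visual-odometry position fix. This is an illustrative assumption for clarity, not the paper's tightly coupled photometric method; the state layout, noise values, and measurement model are all hypothetical choices.

```python
# Hedged sketch: linear Kalman filter fusing IMU-propagated state with VO position.
# Not the paper's tightly coupled formulation; all parameters are assumed.
import numpy as np

class PoseFusionKF:
    """Fuses IMU-propagated position/velocity with visual-odometry position fixes."""

    def __init__(self, dt: float):
        self.dt = dt
        self.x = np.zeros(6)                 # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                   # state covariance
        # Constant-acceleration process model driven by the IMU measurement.
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)
        self.B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
        self.Q = 0.01 * np.eye(6)            # process noise (assumed)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # VO observes position only
        self.R = 0.05 * np.eye(3)            # VO measurement noise (assumed)

    def predict(self, accel_world: np.ndarray):
        """Propagate the state with a gravity-compensated IMU acceleration."""
        self.x = self.F @ self.x + self.B @ accel_world
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, vo_position: np.ndarray):
        """Correct the prediction with a visual-odometry position estimate."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (vo_position - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P

kf = PoseFusionKF(dt=0.01)
kf.predict(np.array([0.0, 0.0, 0.1]))        # one IMU step
kf.update(np.array([0.001, 0.0, 0.0005]))    # one VO position fix
print(kf.x[:3])                               # fused position estimate
```

In a loosely coupled scheme like this sketch, the VO output is treated as a black-box measurement; the tightly coupled approach described in the abstract instead folds IMU information directly into the photometric tracking itself.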
Multi-modal pose fusion for monocular flight with unmanned aerial vehicles
2018-03-01
989654 bytes
Conference paper
Electronic Resource
English
SAGE Publications | 2015
POSE ESTIMATION OF UNMANNED AERIAL VEHICLES BASED ON A VISION-AIDED MULTI-SENSOR FUSION | DOAJ | 2016