Autonomous UAV landing based on GPS data alone is considered imprecise, because GPS inaccuracy is inherently attributable to the low update rate of the satellite signal. Moreover, GPS data carries no information about the condition of the landing area, so a purely GPS-guided system may land in a hazardous place. The visually augmented precision landing (VAPL) guidance system proposed in this paper guides the landing rocket model (LRM) through vertical-takeoff vertical-landing (VTVL) maneuvers by fusing data from a vision sensor, GPS, and an ultrasonic sensor as navigational aids. The system uses the MAVLink protocol to exchange data between the sensor board and the flight controller. Tracking an object with a vision sensor is difficult for several reasons, e.g., light intensity, color saturation, parallax, and aspect angle. This paper addresses these obstacles by refining the calibration procedure and replacing the area segmentation method with a proportional feedback method; an unwanted inverted flight response is also eliminated by the improved program algorithm. The results show that the proportional feedback approach reduces the error of the area segmentation method significantly, from 100% error to an accuracy of up to 80%. In this study, the landing rocket model could land on the target, but data errors persist because the vision sensor occasionally fails to discriminate the target from the larger surrounding block area while focusing on it. The miss distance between the LRM and the target is approximately 1 meter; the proposed VAPL can therefore be considered more precise than a solely GPS-guided landing system, whose average miss distance is 3 meters.
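The proportional feedback method mentioned in the abstract can be illustrated with a short sketch. The Python fragment below is a minimal illustration under stated assumptions, not the authors' implementation: the HSV color bounds for the landing marker and the gain K_P are hypothetical values, and OpenCV is assumed for centroid extraction. The steering correction is simply the gain times the normalized pixel offset of the target centroid from the image center, which is the essence of a proportional feedback loop as opposed to thresholding on segmented area.

    import cv2
    import numpy as np

    K_P = 0.4                            # hypothetical proportional gain (not from the paper)
    HSV_LO = np.array([20, 100, 100])    # assumed lower HSV bound for the marker color
    HSV_HI = np.array([35, 255, 255])    # assumed upper HSV bound for the marker color

    def proportional_correction(frame):
        """Return (x, y) steering corrections from one BGR camera frame."""
        h, w = frame.shape[:2]
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, HSV_LO, HSV_HI)            # isolate the target color
        m = cv2.moments(mask)
        if m["m00"] == 0:                                  # target not visible
            return 0.0, 0.0
        cx = m["m10"] / m["m00"]                           # target centroid (pixels)
        cy = m["m01"] / m["m00"]
        err_x = (cx - w / 2) / (w / 2)                     # normalized offset, -1..1
        err_y = (cy - h / 2) / (h / 2)
        return K_P * err_x, K_P * err_y                    # proportional feedback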
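Offsets from such a vision loop would then travel from the sensor board to the flight controller over MAVLink. The following sketch uses pymavlink; the serial device, baud rate, and the choice of the standard LANDING_TARGET message are assumptions for illustration, since the abstract does not specify which MAVLink messages are used.

    import time
    from pymavlink import mavutil

    # Assumed serial link from the sensor board to the flight controller.
    master = mavutil.mavlink_connection("/dev/ttyAMA0", baud=57600)
    master.wait_heartbeat()  # block until the flight controller is heard

    def send_landing_target(angle_x, angle_y, distance):
        """Forward vision-derived target offsets as a LANDING_TARGET message."""
        master.mav.landing_target_send(
            int(time.time() * 1e6),               # time_usec
            0,                                    # target_num
            mavutil.mavlink.MAV_FRAME_BODY_NED,   # reference frame (assumed)
            angle_x,                              # angular offset X (rad)
            angle_y,                              # angular offset Y (rad)
            distance,                             # range to target (m), e.g. from the ultrasonic sensor
            0.0, 0.0)                             # size_x, size_y (unused here)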
Visually Augmented Guidance System Realization for Landing Rocket Model
2021-11-03
2512900 bytes
Conference paper
Electronic Resource
English