There is growing interest in autonomous taxiing, in which an air vehicle senses its environment and maneuvers safely with little or no human input. This technology is similar to that developed for driverless cars, which synthesize information from multiple sensors to perceive the surrounding environment and detect road surfaces, lanes, obstacles, and signage. This paper presents an application of computer vision and machine learning to an autonomous method for the surface movement of an air vehicle. We present a system and method that uses pattern recognition to aid unmanned aircraft systems (UAS) and to enhance manned air vehicle landing and taxiing. Encouraged by our previous results [1], we extend our research to include multiple object classes relevant to taxiing. The objective of the current project is to build a training dataset of annotated objects acquired from an overhead perspective. Such a dataset is useful for training a deep neural network to detect and count specific airport objects in a video or image. This paper details the procedure and parameters used to create a training dataset for running convolutional neural networks (CNNs) on a set of aerial images for efficient and automated object recognition. In this method, a dataset of multiple airport surface signage classes drawn from satellite images is used to train the pattern recognition system. The trained system identifies and locates important visual references from imaging sensors and could support decision making during the taxiing phase.
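As a minimal sketch of the kind of pipeline the abstract describes, the following Python example fine-tunes an off-the-shelf detector on annotated overhead imagery. The class names, dataset loader, and hyperparameters are illustrative assumptions and are not taken from the paper; only the general idea (multiclass object detection on aerial images with a CNN) reflects the described work.

```python
# Hypothetical sketch: fine-tuning a COCO-pretrained Faster R-CNN on an
# annotated aerial-image dataset of airport surface objects. Class names,
# paths, and hyperparameters are assumptions for illustration only.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Illustrative taxiing-relevant classes; index 0 is reserved for background.
CLASSES = ["__background__", "runway_marking", "taxiway_sign",
           "hold_short_line", "aircraft"]

def build_model(num_classes: int) -> torch.nn.Module:
    # Start from a pretrained detector and replace the box predictor so it
    # outputs the airport-specific classes instead of the COCO classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train_one_epoch(model, loader, optimizer, device):
    model.train()
    for images, targets in loader:  # targets: list of {"boxes", "labels"} dicts
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # detector returns a dict of losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = build_model(len(CLASSES)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    # A data loader yielding (image_tensor, target_dict) pairs built from the
    # annotated overhead imagery would be constructed here; it is omitted.
```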
Multiclass Geospatial Object Detection using Machine Learning-Aviation Case Study
2020-10-11
2546138 bytes
Conference paper
Electronic Resource
English
MULTICLASS CONFIDENCE AND LOCALIZATION CALIBRATION FOR OBJECT DETECTION
European Patent Office | 2025
Sharing Features: Efficient Boosting Procedures for Multiclass Object Detection
British Library Conference Proceedings | 2004