For an autonomous military robot to navigate a complex environment appropriately, it must have an in-depth understanding of its immediate surroundings. We have developed a scene understanding system that uses an operator-trained rule base to analyze pixel-level attributes across a set of imaging sensors with diverse phenomenology. Each pixel is registered to range information, so we know not only what features are in the environment but also where they are. The resulting three-dimensional labeled world model can then be used to control the speed and steering of the vehicle in an appropriate manner. In this paper we discuss our multi-sensor system, the operator-trained analysis algorithm called ONAV (opportunistic navigation), and the reactive control algorithm used to control the vehicle's speed and steering.
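To make the described pipeline concrete, the following is a minimal sketch of the two steps the abstract names: registering per-pixel class labels to range data to form a labeled 3-D world model, and deriving a reactive speed/steering command from it. All function names, class labels, field-of-view parameters, and thresholds are illustrative assumptions, not details from the paper or the authors' ONAV implementation.

```python
import numpy as np

def labels_to_world(labels, ranges, hfov_deg=60.0, vfov_deg=40.0):
    """Project a labeled image into labeled 3-D points using per-pixel range.

    labels: (H, W) int array of terrain classes (e.g. 0=road, 1=grass, 2=obstacle)
    ranges: (H, W) float array of range to each pixel, in meters
    Returns an (N, 4) array of [x, y, z, label] in the sensor frame.
    """
    h, w = labels.shape
    # Per-pixel azimuth/elevation under a simple angular camera model (assumed).
    az = np.deg2rad(np.linspace(-hfov_deg / 2, hfov_deg / 2, w))
    el = np.deg2rad(np.linspace(vfov_deg / 2, -vfov_deg / 2, h))
    az, el = np.meshgrid(az, el)
    x = ranges * np.cos(el) * np.cos(az)   # forward
    y = ranges * np.cos(el) * np.sin(az)   # lateral (positive = left)
    z = ranges * np.sin(el)                # vertical
    return np.stack([x.ravel(), y.ravel(), z.ravel(),
                     labels.ravel().astype(float)], axis=1)

def reactive_command(world, obstacle_label=2, stop_dist=2.0, slow_dist=8.0):
    """Choose speed and steering from the nearest obstacle in the world model."""
    obstacles = world[world[:, 3] == obstacle_label]
    if obstacles.size == 0:
        return 1.0, 0.0                     # clear path: full speed, straight
    dists = np.hypot(obstacles[:, 0], obstacles[:, 1])
    i = np.argmin(dists)
    if dists[i] < stop_dist:
        return 0.0, 0.0                     # obstacle too close: stop
    # Slow down linearly as the nearest obstacle approaches, steer away from it.
    speed = min(1.0, (dists[i] - stop_dist) / (slow_dist - stop_dist))
    steer = -np.sign(obstacles[i, 1])
    return speed, steer

# Example: a 20 m open scene with one obstacle patch at 5 m, right of center.
labels = np.zeros((40, 60), dtype=int)
ranges = np.full((40, 60), 20.0)
labels[15:25, 40:50] = 2
ranges[15:25, 40:50] = 5.0
speed, steer = reactive_command(labels_to_world(labels, ranges))
print(speed, steer)  # reduced speed, steering away from the obstacle
```

The design point this illustrates is the one the abstract emphasizes: once labels carry 3-D position, the controller can react to *where* a feature is, not just *what* it is, so speed and steering fall out of simple distance tests against the labeled world model.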
A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation
2000-01-01
629343 bytes
Conference paper
Electronic Resource
English