We introduce a method that uses a single camera to localize a vehicle within a pre-constructed map consisting of a voxel occupancy grid and road-line marker positions. Sophisticated mapping hardware can create high-accuracy 3D maps of road environments, but localizing a vehicle within such maps remains one of the challenges at the forefront of automated driving. Achieving localization that is robust to dynamic environments while using only inexpensive sensors is a difficult problem. In addition, maps that enable precise localization require large amounts of data, which is impractical for the expansive environments encountered in real-world road networks. We show how the area of edge regions shared between rendered views of a compact voxel map and in-vehicle camera images can be coupled with non-linear optimization methods to determine the camera position and pose.
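To make the pose-estimation idea concrete, the following Python sketch casts the abstract's edge-overlap criterion as a scalar cost minimized over a 6-DoF camera pose with SciPy's Nelder-Mead optimizer. The renderer render_map_edges, the toy single-edge scene, and the choice of optimizer are illustrative assumptions for this sketch, not the implementation described in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the map renderer: projects the voxel map's edge
# regions into the image plane for a candidate pose (x, y, z, roll, pitch, yaw).
# Here a single smooth vertical "road-line" edge shifts with lateral offset and
# yaw; a real system would rasterize the occupancy grid and lane markers.
def render_map_edges(pose, shape=(64, 64)):
    h, w = shape
    cols = np.arange(w)
    center = w / 2 + 40.0 * pose[0] + 60.0 * pose[5]
    profile = np.exp(-0.5 * ((cols - center) / 1.5) ** 2)
    return np.tile(profile, (h, 1))

def overlap_cost(pose, observed_edges):
    # Negative shared edge area: maximizing the overlap between rendered and
    # observed edge regions is cast as minimizing this scalar cost.
    rendered = render_map_edges(pose, observed_edges.shape)
    return -float(np.sum(rendered * observed_edges))

# Synthetic "camera" edge image generated from a known ground-truth pose so the
# example is self-contained; in practice this would come from an edge detector
# run on the in-vehicle camera frame.
true_pose = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.01])
observed_edges = render_map_edges(true_pose)

# Non-linear optimization over the 6-DoF pose, started from a rough prior
# (e.g. GNSS plus odometry). Nelder-Mead avoids differentiating the renderer.
result = minimize(overlap_cost, x0=np.zeros(6), args=(observed_edges,),
                  method="Nelder-Mead")
print("estimated pose:", np.round(result.x, 3))
```

A derivative-free optimizer is used here only because the toy renderer is not differentiated; the paper's particular non-linear optimization scheme may differ.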
Monocular localization within sparse voxel maps
2017-06-01
867215 bytes
Conference paper
Electronic Resource
English
A GPU Accelerated Particle Filter Based Localization Using 3D Evidential Voxel Maps | SAE Technical Papers | 2019
A GPU Accelerated Particle Filter Based Localization Using 3D Evidential Voxel Maps | British Library Conference Proceedings | 2019
Planetary Rover Localization Within Orbital Maps | NTRS | 2014
Monocular Localization of a Mobile Robot | British Library Conference Proceedings | 1993
Sparse Voxel Transformer for Camera-Based 3D Semantic Scene Completion | European Patent Office | 2024