In this study, we introduce a novel fusion technique that combines vehicle perception with a motion model to achieve robust and accurate localization in the Frenet frame. Our approach begins with early-stage fusion of radar and LiDAR point cloud data, creating a rich and efficient environmental representation. This enhanced point cloud is then processed by a modified version of the KISS-ICP method to generate an initial estimate of the vehicle's position. Recognizing the critical importance and inherent difficulty of lateral position estimation in the Frenet frame, our methodology integrates the output of camera-based lane detection to refine lateral accuracy. We then employ an Extended Kalman Filter (EKF) whose prediction step is driven by the vehicle's motion model and whose measurement update fuses the augmented KISS-ICP estimate with the camera-derived lateral position. The efficacy of our approach was tested and validated on WATonoBus, an autonomous shuttle navigating the University of Waterloo's campus ring road.
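The record does not include the paper's actual formulation, so the following is only a minimal sketch of the fusion scheme the abstract describes: an EKF over an assumed Frenet state [s, d, v] (longitudinal position, lateral offset, speed) with a constant-velocity motion model (under which the EKF Jacobians reduce to constant matrices), updated by a registration-derived (s, d) measurement and a tighter camera-derived measurement of d alone. The class name, state layout, and all noise values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class FrenetEKF:
    """Hypothetical Frenet-frame EKF; state x = [s, d, v]."""

    def __init__(self, x0, P0, Q):
        self.x = np.asarray(x0, dtype=float)   # state estimate
        self.P = np.asarray(P0, dtype=float)   # state covariance
        self.Q = np.asarray(Q, dtype=float)    # assumed process noise

    def predict(self, dt):
        # Constant-velocity motion model: s advances with v; d and v persist.
        F = np.array([[1.0, 0.0, dt],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, H, R):
        # Standard Kalman measurement update for a (locally) linear model.
        z = np.asarray(z, dtype=float)
        y = z - H @ self.x                      # innovation
        S = H @ self.P @ H.T + R                # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(self.x.size) - K @ H) @ self.P

ekf = FrenetEKF(x0=[0.0, 0.0, 5.0], P0=np.eye(3), Q=np.diag([0.05, 0.02, 0.1]))
ekf.predict(dt=0.1)

# Scan-registration pose projected onto the reference path: measures [s, d].
H_icp = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
ekf.update(z=[0.52, 0.12], H=H_icp, R=np.diag([0.2, 0.3]))

# Camera lane detection: a lower-noise measurement of the lateral offset d.
H_cam = np.array([[0.0, 1.0, 0.0]])
ekf.update(z=[0.05], H=H_cam, R=np.array([[0.02]]))
```

Processing the lane measurement as a separate scalar update with a small R is one plausible way to realize the abstract's point that the camera chiefly refines the lateral coordinate while registration constrains both coordinates.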
Robust Localization for Autonomous Vehicles via Multisensor Fusion
24.09.2024
4,196,188 bytes
Conference paper
Electronic resource
English
MULTISENSOR SAFETY ROUTE SYSTEM FOR AUTONOMOUS VEHICLES
Europäisches Patentamt | 2019
Autonomous Multisensor Calibration and Closed-loop Fusion for SLAM
British Library Online Contents | 2015
Multisensor Integrated Autonomous Navigation Based on Intelligent Information Fusion
AIAA | 2024
Performance evaluation of a multisensor system for autonomous guidance vehicles
Tema Archiv | 2000