Detection and tracking of on-road moving objects are crucial for vehicle perception. Fusing data from perception sensors such as cameras, lidar, and radar makes automotive perception more robust and performant, and the quality of that perception in turn determines how reliably a vehicle can make safety-critical driving decisions. In this article, we propose two approaches for detecting and tracking on-road moving objects by combining information from lidar and radar sensors: a probabilistic occupancy grid that fuses the radar and lidar point clouds, and a modified YOLO DeepSORT pipeline that detects, classifies, and tracks the moving objects. By combining traditional and deep learning approaches, this data fusion improves the robustness of the automotive perception pipeline. We evaluate both approaches on the nuScenes dataset and compare them with state-of-the-art trackers and detectors, achieving an AMOTA score of 68.3% on the tracking task and a mAP score of 68.7% on the detection task.
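The occupancy-grid fusion step named in the abstract can be illustrated with a short sketch. The Python example below is illustrative only, not the paper's implementation: it assumes lidar and radar returns have already been projected to 2D bird's-eye-view coordinates in a common vehicle frame, and the grid size, resolution, per-sensor hit probabilities, and occupancy threshold are placeholder values chosen for demonstration.

    # Minimal sketch of probabilistic occupancy-grid fusion of lidar and radar
    # points. All parameters here are illustrative assumptions, not the paper's.
    import numpy as np

    class OccupancyGrid:
        def __init__(self, size_m=100.0, resolution_m=0.5):
            self.res = resolution_m
            self.n = int(size_m / resolution_m)
            # log-odds of 0 corresponds to p(occupied) = 0.5, i.e. unknown
            self.log_odds = np.zeros((self.n, self.n))

        def _to_cells(self, points_xy):
            # Shift so the ego vehicle sits at the grid centre, then discretise.
            idx = np.floor(points_xy / self.res).astype(int) + self.n // 2
            inside = np.all((idx >= 0) & (idx < self.n), axis=1)
            return idx[inside]

        def update(self, points_xy, p_hit):
            # Bayesian log-odds update: cells containing sensor returns become
            # more likely occupied; np.add.at accumulates repeated hits correctly.
            idx = self._to_cells(points_xy)
            np.add.at(self.log_odds, (idx[:, 0], idx[:, 1]),
                      np.log(p_hit / (1.0 - p_hit)))

        def occupancy(self):
            # Convert log-odds back to occupancy probabilities.
            return 1.0 / (1.0 + np.exp(-self.log_odds))

    # Fuse both modalities into the same grid. Giving radar a lower hit
    # confidence than lidar is an assumed stand-in for sensor-specific models.
    grid = OccupancyGrid()
    lidar_xy = np.random.uniform(-40, 40, size=(500, 2))  # placeholder point clouds
    radar_xy = np.random.uniform(-40, 40, size=(60, 2))
    grid.update(lidar_xy, p_hit=0.9)
    grid.update(radar_xy, p_hit=0.7)
    occupied = grid.occupancy() > 0.65  # candidate cells for the detection stage

In a full pipeline, the thresholded occupied cells would then feed a detection and tracking stage such as the modified YOLO and DeepSORT components described in the abstract.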
LiDAR and Radar Sensor Fusion for Detection and Tracking of Dynamic Objects in Autonomous Vehicles using Probabilistic Occupancy Grid, YOLO and DeepSORT
2022-10-08
1019820 bytes
Conference paper
Electronic resource
English