Multimodal sensor fusion techniques have advanced autonomous driving, yet perception in complex environments remains a challenging problem. This chapter presents the Open Multimodal Perception Dataset (OpenMPD), a multimodal perception benchmark aimed at difficult examples. Compared with existing datasets, OpenMPD focuses more on complex urban traffic scenes with overexposure or darkness, crowded environments, unstructured roads, and intersections. The data were acquired by a vehicle equipped with 6 cameras and 4 LiDARs covering a 360-degree field of view, yielding 180 clips of 20 s each, with images synchronized at 20 Hz and point clouds at 10 Hz. In particular, we used a 128-beam LiDAR to provide high-resolution point clouds for a better understanding of the 3D environment and better sensor fusion. We sampled 15K keyframes at equal intervals from the clips and annotated them for 2D/3D object detection, 3D object tracking, and 2D semantic segmentation. Moreover, we provide benchmarks for all four tasks to evaluate algorithms and conduct extensive 2D/3D detection and segmentation experiments on OpenMPD. Data and further information are available at http://www.openmpd.com/.
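At the stated rates, each 20-s clip contains roughly 400 images per camera and 200 LiDAR sweeps. The chapter's actual synchronization pipeline is not described in this abstract; purely as an illustration of cross-modal alignment, the minimal sketch below pairs each LiDAR sweep with the nearest camera frame by timestamp. The helper name, the timestamp representation, and the 25 ms tolerance are assumptions for this sketch, not part of OpenMPD.

```python
# Hypothetical sketch: pairing 20 Hz camera frames with 10 Hz LiDAR sweeps
# by nearest timestamp. Only the frame rates come from the abstract; the
# timestamp format and tolerance are assumptions, not OpenMPD's actual API.
from bisect import bisect_left

def pair_by_timestamp(cam_ts, lidar_ts, tol=0.025):
    """For each LiDAR sweep, find the closest camera frame in time.

    cam_ts, lidar_ts: sorted lists of timestamps in seconds.
    tol: maximum allowed offset; 25 ms is half the 20 Hz camera period,
         so each sweep matches at most one unambiguous frame.
    Returns a list of (lidar_index, camera_index) pairs.
    """
    pairs = []
    for i, t in enumerate(lidar_ts):
        j = bisect_left(cam_ts, t)
        # Candidates: the camera frame just before and just after t.
        best = min(
            (k for k in (j - 1, j) if 0 <= k < len(cam_ts)),
            key=lambda k: abs(cam_ts[k] - t),
        )
        if abs(cam_ts[best] - t) <= tol:
            pairs.append((i, best))
    return pairs

# Example: one 20-s clip as described in the abstract.
cam_ts = [k / 20.0 for k in range(400)]    # 20 Hz -> 400 images
lidar_ts = [k / 10.0 for k in range(200)]  # 10 Hz -> 200 sweeps
print(len(pair_by_timestamp(cam_ts, lidar_ts)))  # 200 matched pairs
```

Setting the tolerance to half the faster sensor's period guarantees that a sweep is dropped rather than matched to a frame more than one camera interval away, which is one common convention for nearest-timestamp association.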
OpenMPD: An Open Multimodal Perception Dataset
Multi-sensor Fusion for Autonomous Driving; Chapter 7; pp. 153-175
2023-05-11
23 pages
Article/Chapter (Book)
Electronic resource
English