This paper investigates neural network architectures that fuse feature-level radar and vision sensor data in order to improve automotive environment perception for advanced driver assistance systems. Fusion is performed on occupancy grids, which encode sensor-specific information mapped from the individual detection lists. The fusion step is evaluated on three types of neural networks: (1) fully convolutional, (2) auto-encoder, and (3) auto-encoder with skip connections. These networks are trained to fuse radar and camera occupancy grids against ground truth obtained from lidar scans. A detailed analysis of network architectures and parameters is performed. Results are compared to classical Bayesian occupancy fusion on typical evaluation metrics for pixel-wise classification tasks, such as intersection over union and pixel accuracy. The paper shows that grid fusion of feature-level sensor data is feasible with the proposed system architecture. In particular, the auto-encoder architectures show significant improvements in the evaluation metrics over the classical Bayesian fusion method.
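As a minimal sketch of the third network variant described in the abstract, the PyTorch model below fuses a stacked 2-channel (radar, camera) occupancy grid into a single fused occupancy grid via an auto-encoder with skip connections. The layer counts, channel widths, and the class name SkipFusionAutoEncoder are illustrative assumptions, not the paper's reported configuration; in the paper such a network would be trained (e.g. with binary cross-entropy) against occupancy grids derived from lidar scans.

```python
import torch
import torch.nn as nn

class SkipFusionAutoEncoder(nn.Module):
    """Fuses a 2-channel (radar, camera) occupancy grid into one fused grid."""
    def __init__(self):
        super().__init__()
        # Encoder: progressively downsample the stacked sensor grids
        self.enc1 = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: upsample back; input widths account for concatenated skips
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 16, 2, stride=2), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)  # per-cell occupancy probability

    def forward(self, x):
        e1 = self.enc1(x)                           # full resolution
        e2 = self.enc2(e1)                          # 1/2 resolution
        e3 = self.enc3(e2)                          # 1/4 resolution (bottleneck)
        d2 = self.dec2(e3)
        d1 = self.dec1(torch.cat([d2, e2], dim=1))  # skip connection
        return torch.sigmoid(self.head(torch.cat([d1, e1], dim=1)))

model = SkipFusionAutoEncoder()
grids = torch.rand(4, 2, 128, 128)   # batch of stacked radar/camera grids
fused = model(grids)                 # (4, 1, 128, 128) fused occupancy estimate
```

The classical baseline and the evaluation metrics named in the abstract can be sketched similarly. The function below uses the standard independent-sensor log-odds combination for Bayesian grid fusion (the paper's exact formulation may differ), and the 0.5 binarization threshold for intersection over union and pixel accuracy is an assumption.

```python
def log_odds(p, eps=1e-6):
    p = p.clamp(eps, 1 - eps)
    return torch.log(p / (1 - p))

def bayes_fuse(p_radar, p_camera):
    # Per-cell Bayesian fusion assuming independent sensor evidence
    return torch.sigmoid(log_odds(p_radar) + log_odds(p_camera))

def pixel_accuracy(pred, target, thresh=0.5):
    p, t = pred > thresh, target > thresh
    return (p == t).float().mean().item()

def intersection_over_union(pred, target, thresh=0.5):
    p, t = pred > thresh, target > thresh
    inter = (p & t).float().sum()
    union = (p | t).float().sum()
    return (inter / union.clamp(min=1)).item()
```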
Deep Grid Fusion of Feature-Level Sensor Data with Convolutional Neural Networks
2019-11-01
1285476 bytes
Conference paper
Electronic resource
English
Similar items:
Feature-Based Satellite Detection Using Convolutional Neural Networks | British Library Conference Proceedings | 2019
Automated Vehicle Recognition with Deep Convolutional Neural Networks | Transportation Research Record | 2017