Intelligent vehicles (IVs) are being pursued in both research laboratories and industry to revolutionize transportation systems. Since driving surroundings can be cluttered and weather conditions may vary, environment perception is a challenging task for IVs, and multi-modal sensors are therefore employed. Deep learning algorithms achieve outstanding perception performance, but they typically express prediction uncertainty through probabilities, whereas evidence theory offers a richer formalism for handling it. To address this, this work combines evidence theory with a camera-lidar deep learning fusion architecture. The coupling generates basic belief functions from distances to prototypes and applies a distance-based decision rule. Because IVs have constrained computational power, the formulation leverages a reduced deep-learning architecture. On the task of road detection, the evidential approach outperforms the probabilistic one; moreover, ambiguous features can be prudently assigned to ignorance rather than forced into a possibly wrong probabilistic decision. The coupling is also extended to semantic segmentation, showing how the evidential formulation adapts easily to the multi-class case. The evidential formulation is therefore generic and produces more accurate and versatile predictions while maintaining the trade-off between performance and computational cost in IVs. Experiments are conducted on the KITTI dataset.
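For illustration, the sketch below shows one common way to build basic belief assignments from distances to class prototypes and to fuse them with Dempster's rule, in the spirit of Denoeux's distance-based evidential classifier. The abstract does not give the paper's exact parameterisation, so the function names (prototype_bbas, dempster_combine) and the exponential discounting with hyperparameters alpha and gamma are assumptions for this sketch, not the authors' method.

import numpy as np

def prototype_bbas(x, prototypes, proto_labels, n_classes, alpha=0.95, gamma=1.0):
    # One simple BBA per prototype: mass on the prototype's class singleton,
    # decreasing with squared distance, and the remainder on the full frame
    # Omega (ignorance). alpha and gamma are assumed hyperparameters.
    bbas = []
    for p, k in zip(prototypes, proto_labels):
        d2 = float(np.sum((np.asarray(x) - np.asarray(p)) ** 2))
        s = alpha * np.exp(-gamma * d2)
        m = np.zeros(n_classes + 1)  # slots 0..n-1: class singletons, last slot: Omega
        m[k] = s
        m[-1] = 1.0 - s
        bbas.append(m)
    return bbas

def dempster_combine(m1, m2):
    # Dempster's rule specialised to BBAs whose focal sets are the class
    # singletons plus Omega; mass on conflicting singletons is normalised out.
    n = len(m1) - 1
    m = np.zeros(n + 1)
    conflict = 0.0
    for a in range(n):
        for b in range(n):
            if a == b:
                m[a] += m1[a] * m2[b]
            else:
                conflict += m1[a] * m2[b]
    for a in range(n):
        m[a] += m1[a] * m2[-1] + m1[-1] * m2[a]
    m[-1] = m1[-1] * m2[-1]
    return m / (1.0 - conflict)  # assumes total conflict < 1

# Example: fuse the prototype-level BBAs for one feature vector. A feature
# far from every prototype keeps most mass on Omega, so the pixel can be
# labelled as ignorance instead of being forced into a class.
x = np.array([0.2, 0.8])
prototypes = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
bbas = prototype_bbas(x, prototypes, proto_labels=[0, 1], n_classes=2)
m = bbas[0]
for mi in bbas[1:]:
    m = dempster_combine(m, mi)
print("masses on {road}, {not road}, Omega:", m)

A feature lying far from all prototypes yields a fused mass concentrated on Omega, which matches the behaviour described in the abstract: ambiguity is retained as ignorance rather than converted into a possibly wrong decision.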
Evidential deep learning-based multi-modal environment perception for intelligent vehicles
2023-06-04
5070550 bytes
Conference paper
Electronic Resource
English
Multi-sensor environment perception for automated vehicles with semantic evidential grid maps
TIBKAT | 2023