Quantifying uncertainty can significantly improve perceptual performance and provide richer environmental information for the decision-making and planning modules of autonomous vehicles. Unfortunately, most perception methods achieve excellent accuracy but fall short in estimating the associated uncertainty. To fill this gap, a variance inference ensemble network is proposed to enhance environmental perception and quantify uncertainty for 3-D object detection in point clouds. The method consists of three parts. First, several variance inference neural networks, which directly model detections with a multivariate Gaussian distribution, are constructed through a two-stage training strategy and extract object attributes and their variances from point cloud data in parallel. Next, an uncertainty-aware fusion strategy integrates and filters these parallel detections according to their associated uncertainty, yielding reliable and comprehensive results. Furthermore, a novel metric, the uncertainty index, is introduced to estimate the uncertainty of detected objects for both a single deterministic network and the ensemble network in a unified, quantitative manner. Finally, we validate our method on the KITTI dataset; the experiments demonstrate that it outperforms the original baseline and recent uncertainty quantification methods across different scenarios.
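The abstract names two mechanisms: heads that predict a variance alongside each box regression target (a multivariate Gaussian with a diagonal covariance), and an uncertainty-aware fusion of the ensemble's detections. The sketch below is a minimal, hypothetical PyTorch illustration of those two ideas, not the paper's implementation: all names (VarianceHead, gaussian_nll, fuse_by_inverse_variance) are assumptions, the fusion shown is simple inverse-variance weighting, and the scalar summary only stands in for the paper's uncertainty index, whose exact definition is not given in this record.

```python
# Hypothetical sketch: Gaussian variance head + inverse-variance fusion
# across ensemble members. Names and structure are illustrative assumptions.
import torch
import torch.nn as nn


class VarianceHead(nn.Module):
    """Predicts a mean and a log-variance for each box regression target."""

    def __init__(self, in_dim: int, box_dim: int = 7):  # (x, y, z, w, l, h, yaw)
        super().__init__()
        self.mean = nn.Linear(in_dim, box_dim)
        self.log_var = nn.Linear(in_dim, box_dim)  # log-variance for numerical stability

    def forward(self, feats: torch.Tensor):
        return self.mean(feats), self.log_var(feats)


def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood of targets under a diagonal Gaussian."""
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()


def fuse_by_inverse_variance(means, variances):
    """Fuse matched detections from several ensemble members.

    means, variances: tensors of shape (num_members, box_dim).
    Returns the precision-weighted mean, its fused variance, and a scalar
    summary of the remaining uncertainty (a stand-in for an uncertainty index).
    """
    precision = 1.0 / variances.clamp_min(1e-6)
    fused_var = 1.0 / precision.sum(dim=0)
    fused_mean = (means * precision).sum(dim=0) * fused_var
    uncertainty_score = fused_var.mean()  # lower = more confident detection
    return fused_mean, fused_var, uncertainty_score


if __name__ == "__main__":
    # Toy usage: 4 ensemble members, each predicting one matched 7-D box.
    feats = torch.randn(4, 128)
    head = VarianceHead(in_dim=128)
    mean, log_var = head(feats)
    target = torch.randn(1, 7).expand(4, 7)
    loss = gaussian_nll(mean, log_var, target)  # training objective for one member
    fused, var, score = fuse_by_inverse_variance(mean.detach(), log_var.detach().exp())
    print(loss.item(), score.item())
```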
Uncertainty Quantification Using Variance Inference Ensemble Network for Object Detection
IEEE Transactions on Intelligent Transportation Systems; 26, 8; 11728-11740
01.08.2025
3003449 bytes
Article (Journal)
Electronic Resource
English