Radar Enlightens the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion
Sensor fusion is a crucial technique for augmenting the accuracy and reliability of perception systems for automated vehicles under diverse driving conditions. However, adverse weather and low-light conditions remain challenging: sensor performance degrades significantly, exposing vehicle safety to potential risks. Advanced sensors such as LiDAR can help mitigate the issue, but at an extremely high marginal cost. In this paper, we propose REDFormer, a novel transformer-based 3D object detection model that tackles low-visibility conditions with a more practical and cost-effective solution: bird's-eye-view camera-radar fusion. Using the nuScenes dataset with multi-radar point clouds, weather information, and time-of-day data, our model outperforms state-of-the-art (SOTA) models in classification and detection accuracy. Finally, we provide extensive ablation studies quantifying each model component's contribution to addressing these challenges. In particular, our experiments show that the model achieves a significant performance improvement over the baseline in low-visibility scenarios, with a 31.31% gain in rainy scenes and a 46.99% gain in nighttime scenes. The source code of this study is publicly available at https://github.com/PurdueDigitalTwin/REDFormer.
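As a minimal sketch of the general idea (not the authors' implementation), bird's-eye-view camera-radar fusion can be pictured as rasterizing radar points onto the same BEV grid as the camera features and merging the two feature maps before detection heads. The module name, channel widths, and the simple concatenation-based fusion below are assumptions for illustration only; REDFormer's actual architecture is in the linked repository.

import torch
import torch.nn as nn

class BEVCameraRadarFusion(nn.Module):
    """Hypothetical BEV-level camera-radar fusion block (illustrative only)."""

    def __init__(self, cam_channels: int = 256, radar_channels: int = 64,
                 out_channels: int = 256):
        super().__init__()
        # Project the concatenated camera+radar BEV features back to a
        # common embedding width for downstream detection modules.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + radar_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev: torch.Tensor, radar_bev: torch.Tensor) -> torch.Tensor:
        # cam_bev:   (B, C_cam, H, W) camera features projected to the BEV grid
        # radar_bev: (B, C_rad, H, W) radar points rasterized onto the same grid
        return self.fuse(torch.cat([cam_bev, radar_bev], dim=1))

# Usage: fuse a 200x200 BEV grid from both modalities.
fusion = BEVCameraRadarFusion()
cam = torch.randn(1, 256, 200, 200)
radar = torch.randn(1, 64, 200, 200)
fused = fusion(cam, radar)  # shape: (1, 256, 200, 200)

Because radar returns survive rain and darkness far better than camera pixels, a fused BEV representation of this kind lets the detector fall back on radar evidence exactly in the low-visibility scenes where the paper reports its largest gains.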
24.09.2023
5,154,450 bytes
Conference paper
Electronic resource
English
Object Tracking System With Radar/Vision Fusion For Automated Vehicles
Europäisches Patentamt | 2017
Object tracking system with radar/vision fusion for automated vehicles
Europäisches Patentamt | 2020