A crucial part of the safe navigation of autonomous vehicles is the robust detection of surrounding objects. While there are numerous approaches covering object detection in images or LiDAR point clouds, this paper addresses the problem of object detection in radar data. For this purpose, the fully convolutional neural network YOLOv3 is adapted to operate on sparse radar point clouds. In order to apply convolutions, the point cloud is transformed into a grid-like structure. The impact of this representation transformation is shown by comparison with a network based on Frustum PointNets, which processes the point cloud data directly. The presented networks are trained and evaluated on the public nuScenes dataset. While experiments show that the point cloud-based network outperforms the grid-based approach in detection accuracy, the latter achieves a significantly faster inference time (neglecting the grid conversion), which is crucial for applications such as autonomous driving.
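The abstract describes rasterizing the sparse radar point cloud into a grid-like (bird's-eye-view) representation so that convolutions can be applied. The following is a minimal sketch of such a conversion step, not the authors' implementation: the grid extent, cell size, and feature channels (point count, RCS, radial velocity) are illustrative assumptions and are not specified in the abstract.

import numpy as np

def radar_to_bev_grid(points, x_range=(0.0, 100.0), y_range=(-50.0, 50.0), cell_size=0.5):
    """points: (N, 4) array with columns [x, y, rcs, v_radial] in ego coordinates."""
    nx = int(round((x_range[1] - x_range[0]) / cell_size))
    ny = int(round((y_range[1] - y_range[0]) / cell_size))
    # Channels: number of returns, strongest RCS, largest absolute radial velocity.
    grid = np.zeros((3, nx, ny), dtype=np.float32)

    # Keep only points that fall inside the grid extent.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Map metric coordinates to integer cell indices.
    ix = ((pts[:, 0] - x_range[0]) // cell_size).astype(np.intp)
    iy = ((pts[:, 1] - y_range[0]) // cell_size).astype(np.intp)

    # Scatter per-cell features; radar data is sparse, so most cells stay empty.
    np.add.at(grid[0], (ix, iy), 1.0)                    # returns per cell
    np.maximum.at(grid[1], (ix, iy), pts[:, 2])          # strongest RCS per cell
    np.maximum.at(grid[2], (ix, iy), np.abs(pts[:, 3]))  # largest |v_radial| per cell
    return grid

With the assumed defaults, the resulting tensor has shape (3, 200, 200) and can be fed to a 2D convolutional detector such as YOLOv3; this conversion step is the overhead that the abstract notes is neglected in the reported inference times.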
Radar-based 2D Car Detection Using Deep Neural Networks
2020-09-20
Conference paper
Electronic Resource
English
Improving Performance in Pulse Radar Detection Using Neural Networks
Online Contents | 1995
Radar target discrimination using neural networks
IEEE | 2010