3D object detection is a vital task in many fields, especially autonomous driving. Among the technology paths for this task, detection based on monocular images has proven to be an efficient, low-cost approach. However, the performance of most current algorithms is far from satisfactory: the leading systems cannot reach accuracy comparable to LiDAR-based algorithms, and real-time operation remains a problem. In this paper, we propose a novel anchor-free model for monocular 3D object detection. We choose an effective modified DenseNet as the feature extraction backbone. Joint Pyramid Upsampling is applied to fuse feature maps across multiple scales, and Atrous Spatial Pyramid Pooling is used to maximize context information. Finally, five convolution layers are attached to the fused feature map to predict the outputs. We call the model Dense-JANet and train it on the large autonomous driving dataset nuScenes, which has more scenes and data than KITTI. Experiments show that Dense-JANet outperforms the state-of-the-art model on small-object and orientation prediction, while fully meeting real-time requirements.
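The abstract only names the building blocks (modified DenseNet backbone, Joint Pyramid Upsampling fusion, Atrous Spatial Pyramid Pooling, a 5-convolution prediction head), so the following is a minimal PyTorch sketch of how such a pipeline could be wired together. The choice of DenseNet-121, the stage split, the channel widths, the simplified JPU-style fusion, and the number of output channels (num_outputs) are all assumptions for illustration; this is not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet121


class ASPP(nn.Module):
    # Atrous Spatial Pyramid Pooling: parallel dilated 3x3 convolutions
    # whose outputs are concatenated and projected back to out_ch channels.
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
             for r in rates]
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class JPUFusion(nn.Module):
    # Simplified Joint-Pyramid-Upsampling-style fusion: each feature map is
    # reduced to a common channel width, upsampled to the finest resolution,
    # and summed.
    def __init__(self, in_chs, out_ch):
        super().__init__()
        self.reduce = nn.ModuleList([nn.Conv2d(c, out_ch, 1) for c in in_chs])

    def forward(self, feats):
        size = feats[0].shape[-2:]  # finest (highest-resolution) map
        ups = [F.interpolate(r(f), size=size, mode="bilinear",
                             align_corners=False)
               for r, f in zip(self.reduce, feats)]
        return sum(ups)


class DenseJANetSketch(nn.Module):
    def __init__(self, num_outputs=10):  # num_outputs is an assumption
        super().__init__()
        features = densenet121(weights=None).features
        # Split the DenseNet feature extractor after dense blocks 1-3 so the
        # intermediate multi-scale maps (256/512/1024 channels) can be fused.
        self.stage1 = features[:5]   # stem + denseblock1        (1/4 res.)
        self.stage2 = features[5:7]  # transition1 + denseblock2 (1/8 res.)
        self.stage3 = features[7:9]  # transition2 + denseblock3 (1/16 res.)
        self.fuse = JPUFusion(in_chs=(256, 512, 1024), out_ch=256)
        self.aspp = ASPP(256, 256)
        # Prediction head: five convolution layers, as stated in the abstract.
        self.head = nn.Sequential(
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_outputs, 1),
        )

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        fused = self.fuse([f1, f2, f3])
        return self.head(self.aspp(fused))


if __name__ == "__main__":
    model = DenseJANetSketch()
    out = model(torch.randn(1, 3, 384, 640))
    print(out.shape)  # [1, 10, 96, 160]: dense predictions at 1/4 resolution

The dense per-pixel output is what makes the design anchor-free: each location regresses object attributes directly instead of scoring predefined anchor boxes.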
Dense-JANet for Monocular 3D Object Detection
20.09.2020
2713758 bytes
Article (Conference)
Electronic Resource
English
Shape-Aware Monocular 3D Object Detection
IEEE | 2023
Incorporating scene priors to dense monocular mapping
British Library Online Contents | 2015
Robust Environmental Perception of Monocular 3D Object Detection
Springer Verlag | 2023