LiDAR-based 3D object detection is essential for autonomous driving. To extract information from sparse and unordered point cloud data, pillar-based methods make the data compact and ordered by converting the point cloud into pseudo-images. However, these methods suffer from limited feature extraction capabilities and tend to lose key information during the conversion, leading to lower detection accuracy than voxel-based or point-based methods, especially for small objects. In this paper, we propose SCNet3D, a novel pillar-based method that tackles the challenges of feature enhancement, information preservation, and small-object detection from the perspectives of both features and data. We first introduce a Feature Enhancement Module (FEM), which uses attention to weight features along three dimensions and enhances 3D features layer by layer, from local to global. Then, an STMod-Convolution Network (SCNet) is designed, which achieves sufficient feature extraction and fusion of bird's-eye-view (BEV) pseudo-images through two channels, one for basic features and one for advanced features. Moreover, a Shape and Distance Aware Data Augmentation (SDAA) approach is proposed to add more samples to the point cloud during training while preserving the samples' original shape and distance. Extensive experiments demonstrate that SCNet3D delivers superior performance and excellent robustness. Remarkably, SCNet3D achieves an AP of 82.35% on the moderate Car category, 44.64% on the moderate Pedestrian category, and 67.55% on the moderate Cyclist category of the KITTI 3D detection test split, outperforming many state-of-the-art 3D detectors.
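To make the abstract's two core ideas concrete, below is a minimal PyTorch sketch of (a) re-weighting a BEV pseudo-image with attention along three dimensions (channel, height, width) and (b) fusing a shallow "basic" branch with a deeper "advanced" branch. The class names (`PillarAttention`, `DualPathBEVNet`), the gating scheme, and the tensor shapes are assumptions for illustration only; they are not the authors' FEM/SCNet implementation, whose details are given in the paper itself.

```python
# Illustrative sketch only: module names and gating design are hypothetical,
# not the released SCNet3D code.
import torch
import torch.nn as nn


class PillarAttention(nn.Module):
    """Re-weight a BEV pseudo-image (N, C, H, W) along three dimensions."""

    def __init__(self, channels: int):
        super().__init__()
        # Squeeze-and-excite style gate for the channel dimension.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # (N, C, 1, 1)
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.height_conv = nn.Conv2d(channels, 1, kernel_size=1)
        self.width_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: global context -> per-channel weights.
        x = x * self.channel_gate(x)
        # Height attention: average over width, gate each row.
        h_weight = torch.sigmoid(self.height_conv(x).mean(dim=3, keepdim=True))
        x = x * h_weight
        # Width attention: average over height, gate each column.
        w_weight = torch.sigmoid(self.width_conv(x).mean(dim=2, keepdim=True))
        return x * w_weight


class DualPathBEVNet(nn.Module):
    """Two parallel branches over the pseudo-image: a shallow 'basic' path
    and a deeper dilated 'advanced' path, fused by concatenation."""

    def __init__(self, channels: int):
        super().__init__()
        self.basic = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.advanced = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.basic(x), self.advanced(x)], dim=1))


if __name__ == "__main__":
    bev = torch.randn(2, 64, 496, 432)     # toy BEV pseudo-image
    feats = PillarAttention(64)(bev)
    out = DualPathBEVNet(64)(feats)
    print(out.shape)                       # torch.Size([2, 64, 496, 432])
```

The dilated convolutions in the "advanced" branch are one plausible way to widen the receptive field without losing BEV resolution; the paper's actual SCNet channels may use a different operator.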
SCNet3D: Rethinking the Feature Extraction Process of Pillar-Based 3D Object Detection
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 1, pp. 770-784
2025-01-01
Article (Journal)
Electronic Resource
English