Knowledge distillation has recently proven effective for compressing and accelerating point cloud-based 3D object detectors. However, complementary network pruning is often overlooked during knowledge distillation. In this paper, we propose a pre-pruned distillation framework that combines network pruning and knowledge distillation to better transfer knowledge from the teacher to the student. To maintain feature consistency between the student and the teacher, we first train a teacher model and then generate a compact student model from it by structural channel pruning. We then employ multi-source knowledge distillation to transfer both mid-level and high-level information to the student. Additionally, to improve the student's detection performance, we propose a soft pivotal position selection mask that emphasizes the features of foreground regions during distillation. Experiments on both pillar- and voxel-based 3D object detectors on the Waymo dataset demonstrate the effectiveness of our approach in compressing point cloud-based 3D detectors.
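The pruning and masked-distillation steps outlined in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration only: the L1-norm channel-pruning criterion, the heatmap-derived soft foreground mask, and the helper names (prune_channels, soft_fg_mask, masked_feature_kd) are assumptions for exposition, not the authors' implementation.

# Minimal sketch of pre-pruned distillation, assuming a PyTorch setting.
import torch
import torch.nn.functional as F

def prune_channels(conv: torch.nn.Conv2d, keep_ratio: float) -> torch.nn.Conv2d:
    """Structural channel pruning: keep the output channels with the largest
    L1 weight norm (an assumed criterion), so the student layer is a direct
    subset of the teacher layer it was pruned from."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # per-channel L1 norm
    keep = torch.topk(scores, n_keep).indices.sort().values
    pruned = torch.nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                             stride=conv.stride, padding=conv.padding,
                             bias=conv.bias is not None)
    pruned.weight.data.copy_(conv.weight.data[keep])
    if conv.bias is not None:
        pruned.bias.data.copy_(conv.bias.data[keep])
    return pruned

def soft_fg_mask(heatmap: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Soft foreground mask from a teacher classification heatmap: a sigmoid
    emphasis on pivotal, object-centric positions (assumed construction)."""
    return torch.sigmoid((heatmap.max(dim=1, keepdim=True).values - tau) / 0.1)

def masked_feature_kd(f_student: torch.Tensor, f_teacher: torch.Tensor,
                      mask: torch.Tensor) -> torch.Tensor:
    """Mid-level feature distillation weighted by the soft foreground mask.
    Assumes student features have already been projected to the teacher's
    channel width after pruning."""
    return (mask * F.mse_loss(f_student, f_teacher, reduction="none")).mean()

One plausible reading of why distillation pairs well with a pre-pruned student is that, under such a channel-subset pruning scheme, the student's retained channels correspond directly to teacher channels, which keeps the distilled mid-level features comparable.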
Pre-pruned Distillation for Point Cloud-based 3D Object Detection
2024 IEEE Intelligent Vehicles Symposium (IV); 3192-3198
2024-06-02
1519633 bytes
Conference paper
Electronic Resource
English
Local pruning global pruned network under knowledge distillation
British Library Conference Proceedings | 2021
Fast interpolation for line-pruned images
British Library Online Contents | 2011
Covariance based point cloud descriptors for object detection and recognition
British Library Online Contents | 2016