3D scene flow represents the 3D motion of each point in a point cloud; it is a fundamental 3D perception task for autonomous driving, analogous to optical flow for 2D images. Because non-learning methods are often inefficient or struggle to establish accurate correspondences in the complex 3D real world, recent works have turned to supervised learning methods, which require ground-truth labels. However, acquiring the ground truth of 3D scene flow is challenging, mainly due to the lack of sensors capable of capturing point-level motion and the difficulty of accurately tracking each point in real-world environments. It is therefore important to resort to self-supervised methods, which do not require ground-truth labels. In this paper, a novel unsupervised scene flow learning method with LiDAR odometry assistance is proposed, which enables the scene flow network to be trained directly on real-world LiDAR data without scene flow labels. In this structure, the supervised odometry provides a more accurate shared cost volume for the inter-frame association of 3D scene flow. In addition, because static and occluded points are better handled by the pose transform while dynamic and non-occluded points are better handled by the scene flow transform, a static mask and an occlusion mask are designed to classify the states of points, and a mask-weighted warp layer is proposed to transform source points in a divide-and-conquer manner. Experiments demonstrate that this divide-and-conquer strategy makes the predicted scene flow more accurate. Comparisons with other methods further show the applicability of the proposed method to real-world data. Our source code is released at: https://github.com/IRMVLab/PSFNet.
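
As a rough illustration of the mask-weighted warp layer described in the abstract, the following PyTorch sketch blends the rigid pose transform and the non-rigid scene flow transform per point. All names (mask_weighted_warp, static_mask, occ_mask) and the specific way the two masks are combined are assumptions made for illustration; the released PSFNet code may organize this differently (e.g., batched tensors or per-level soft masks).

import torch

def mask_weighted_warp(src_pts, flow, pose, static_mask, occ_mask):
    # src_pts:     (N, 3) source point cloud
    # flow:        (N, 3) predicted scene flow
    # pose:        (4, 4) predicted ego-motion (odometry) transform
    # static_mask: (N,)   soft score in [0, 1]; 1 = static
    # occ_mask:    (N,)   soft score in [0, 1]; 1 = occluded
    R, t = pose[:3, :3], pose[:3, 3]
    pose_warped = src_pts @ R.T + t   # rigid ego-motion (pose) transform
    flow_warped = src_pts + flow      # non-rigid scene flow transform
    # Static or occluded points follow the pose transform; dynamic,
    # non-occluded points follow the scene flow (divide-and-conquer).
    w = torch.clamp(static_mask + occ_mask, 0.0, 1.0).unsqueeze(-1)
    return w * pose_warped + (1.0 - w) * flow_warped

# Usage with random tensors:
pts = torch.randn(1024, 3)
flow = 0.1 * torch.randn(1024, 3)
pose = torch.eye(4)
warped = mask_weighted_warp(pts, flow, pose,
                            torch.rand(1024), torch.rand(1024))
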
Unsupervised Learning of 3D Scene Flow With LiDAR Odometry Assistance
IEEE Transactions on Intelligent Transportation Systems; 26(4); 4557-4567
2025-04-01
3479751 bytes
Article (Journal)
Electronic Resource
English
DeLiO: Decoupled LiDAR Odometry
British Library Conference Proceedings | 2019
IEEE | 2019