We present RoarNet, a new approach for 3D object detection from a 2D image and 3D Lidar point clouds. Based on the two-stage object detection framework ([1], [2]) with PointNet [3] as our backbone network, we propose several novel ideas to improve 3D object detection performance. The first part of our method, RoarNet_2D, estimates the 3D poses of objects from a monocular image, which approximates where to examine further, and derives multiple geometrically feasible candidates. This step significantly narrows down the feasible 3D regions, which would otherwise require demanding processing of 3D point clouds over a huge search space. The second part, RoarNet_3D, then takes the candidate regions and conducts in-depth inference to conclude the final poses in a recursive manner. Inspired by PointNet, RoarNet_3D processes 3D point clouds directly without any loss of data, leading to precise detection. We evaluate our method on KITTI, a 3D object detection benchmark. Our results show that RoarNet outperforms publicly available state-of-the-art methods. Remarkably, RoarNet also outperforms state-of-the-art methods even in settings where the Lidar and camera are not time synchronized, which is practically important for actual driving environments.
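The abstract outlines a two-stage pipeline: RoarNet_2D produces coarse 3D pose estimates and geometrically feasible candidate regions from the image, and RoarNet_3D recursively refines each candidate directly on the Lidar points. The following Python sketch only illustrates that control flow under stated assumptions; the function names (estimate_pose_2d, propose_candidate_regions, refine_in_point_cloud) and the placeholder geometry are hypothetical, not the authors' implementation, which uses PointNet-based networks at both stages.

```python
# Minimal structural sketch of the two-stage pipeline described in the abstract.
# All names and the geometry below are placeholders, not the authors' API.
import numpy as np

def estimate_pose_2d(image):
    """RoarNet_2D (sketch): coarse 3D pose per object from the monocular image.
    Returns a dummy (x, y, z) center guess here."""
    return [np.array([10.0, 1.5, 20.0])]

def propose_candidate_regions(pose, num_candidates=5, spacing=1.0):
    """Expand one coarse pose into several geometrically feasible candidate
    centers along the viewing ray (placeholder geometry)."""
    offsets = (np.arange(num_candidates) - num_candidates // 2) * spacing
    direction = pose / np.linalg.norm(pose)
    return [pose + o * direction for o in offsets]

def refine_in_point_cloud(candidate, points, steps=2):
    """RoarNet_3D (sketch): recursively refine a candidate using the raw Lidar
    points near it (here simply recentering on nearby points)."""
    center = candidate
    for _ in range(steps):
        near = points[np.linalg.norm(points - center, axis=1) < 2.0]
        if len(near):
            center = near.mean(axis=0)
    return center

if __name__ == "__main__":
    image = None                                        # stand-in camera frame
    lidar = np.random.randn(1000, 3) * 5 + np.array([10.0, 1.5, 20.0])
    refined = [refine_in_point_cloud(c, lidar)
               for pose in estimate_pose_2d(image)
               for c in propose_candidate_regions(pose)]
    print(f"{len(refined)} refined candidate poses")
```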
RoarNet: A Robust 3D Object Detection based on RegiOn Approximation Refinement
2019 IEEE Intelligent Vehicles Symposium (IV), pp. 2510-2515
2019-06-01
1741907 bytes
Conference paper
Electronic resource
English