Bird-eye-view (BEV) perception for autonomous driving has become popular in recent years. Among the various BEV perception tasks, moving-obstacle segmentation is particularly important, since it provides necessary information for downstream tasks, such as motion planning and decision making, in dynamic traffic environments. Many existing methods segment moving obstacles from LiDAR point clouds. The point-wise segmentation results can be easily projected into BEV since point clouds are 3-D data. However, these methods cannot produce dense 2-D BEV segmentation maps, because LiDAR point clouds are usually sparse. Moreover, 3-D LiDARs are still expensive for vehicles. To address these issues, this paper proposes a semantics-assisted moving-obstacle segmentation network that uses only low-cost visual cameras to produce segmentation results as dense 2-D BEV maps. Our network takes as input the visual images from six surrounding cameras, together with the corresponding semantic segmentation maps, at the current and previous moments, and directly outputs the BEV moving-obstacle map for the current moment. We also propose a movable-obstacle segmentation auxiliary task that provides semantic information to further benefit moving-obstacle segmentation. Extensive experimental results on the public nuScenes and Lyft datasets demonstrate the effectiveness and superiority of our network.
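The abstract specifies the network's interface (six surround-view images plus matching semantic maps at two time steps in, a BEV moving-obstacle map plus an auxiliary movable-obstacle map out) but not its internals. The following is a minimal PyTorch sketch of that interface only: the class name `SemanticMoSegSketch`, the tensor shapes, and the trivial encoder/fusion used here are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical I/O sketch of the described network interface. Only the inputs
# and outputs follow the abstract; the encoder, view transformation, and
# temporal fusion below are placeholder assumptions.
import torch
import torch.nn as nn

class SemanticMoSegSketch(nn.Module):
    def __init__(self, bev_size=200, feat_dim=64):
        super().__init__()
        # Shared per-camera encoder over concatenated RGB (3 ch) + semantics (1 ch).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.bev_size = bev_size
        self.moving_head = nn.Conv2d(feat_dim, 1, 1)   # main task: moving obstacles
        self.movable_head = nn.Conv2d(feat_dim, 1, 1)  # auxiliary: movable obstacles

    def forward(self, images, semantics):
        # images:    (B, T=2, N=6, 3, H, W) surround-view frames (previous, current)
        # semantics: (B, T=2, N=6, 1, H, W) matching semantic segmentation maps
        B, T, N, _, H, W = images.shape
        x = torch.cat([images, semantics], dim=3).flatten(0, 2)  # (B*T*N, 4, H, W)
        feats = self.encoder(x)                                  # (B*T*N, C, h, w)
        # Crude camera/time fusion by averaging; a real model would lift
        # per-camera features into the BEV plane and fuse temporally.
        feats = feats.view(B, T * N, *feats.shape[1:]).mean(1)   # (B, C, h, w)
        bev = nn.functional.interpolate(
            feats, size=(self.bev_size, self.bev_size),
            mode="bilinear", align_corners=False)
        return self.moving_head(bev), self.movable_head(bev)

if __name__ == "__main__":
    net = SemanticMoSegSketch()
    imgs = torch.randn(1, 2, 6, 3, 224, 400)
    sems = torch.randn(1, 2, 6, 1, 224, 400)
    moving, movable = net(imgs, sems)
    print(moving.shape, movable.shape)  # torch.Size([1, 1, 200, 200]) each
```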
Semantic-MoSeg: Semantics-Assisted Moving-Obstacle Segmentation in Bird-Eye-View for Autonomous Driving
IEEE Transactions on Intelligent Transportation Systems; vol. 26, no. 7; pp. 9251-9262
2025-07-01
5260696 bytes
Article (Journal)
Electronic Resource
English