Autonomous mobile systems such as vehicles or robots are equipped with multiple sensor modalities, including Lidar, RGB cameras, and Radar. Fusing multi-modal information can enhance task accuracy, but indiscriminate sensing and fusion across all modalities increase the demand on available system resources. This paper presents a task-driven approach to input fusion that minimizes the use of resource-heavy sensors and demonstrates its application to Visual-Lidar fusion for object tracking and path planning. The proposed spatiotemporal sampling algorithm activates Lidar only at regions-of-interest identified by analyzing the visual input and reduces the Lidar ‘base frame rate’ according to the kinematic state of the system. This significantly reduces Lidar usage, in terms of data sensed/transferred and potentially power consumed, without a severe reduction in performance compared with both a baseline decision-level fusion and state-of-the-art deep multi-modal fusion.
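To make the spatiotemporal sampling idea in the abstract concrete, the sketch below illustrates the two mechanisms it describes: gating Lidar sensing to visually identified regions-of-interest and scaling the Lidar base frame rate with the platform's kinematic state. This is only a minimal illustration, not the paper's implementation; the detector stub, the ROI-to-sector mapping, and the speed thresholds (detect_rois, select_lidar_sectors, full_rate_speed_mps) are hypothetical placeholders chosen to keep the example self-contained.

```python
# Illustrative sketch of task-driven RGB-gated Lidar sampling.
# All interfaces and thresholds here are assumptions, not the authors' code.

from dataclasses import dataclass
from typing import List, Tuple

BBox = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in image pixels


@dataclass
class EgoState:
    speed_mps: float  # ego speed in metres per second (kinematic state)


def detect_rois(rgb_frame) -> List[BBox]:
    """Hypothetical stand-in for a visual detector/tracker that proposes
    regions-of-interest from the RGB stream."""
    # Fixed boxes returned only to keep the sketch runnable without a detector.
    return [(100, 80, 220, 200), (400, 120, 520, 260)]


def lidar_frame_rate(base_rate_hz: float, ego: EgoState,
                     min_rate_hz: float = 2.0,
                     full_rate_speed_mps: float = 15.0) -> float:
    """Scale the Lidar 'base frame rate' with ego speed: a slow platform is
    sampled less often, capped below by min_rate_hz. Thresholds are illustrative."""
    scale = min(1.0, ego.speed_mps / full_rate_speed_mps)
    return max(min_rate_hz, base_rate_hz * scale)


def select_lidar_sectors(rois: List[BBox], image_width: int,
                         n_sectors: int) -> List[int]:
    """Map image-space ROIs to horizontal Lidar sectors so that only those
    sectors are sensed and transferred (region-of-interest gating)."""
    active = set()
    for x_min, _, x_max, _ in rois:
        first = int(x_min / image_width * n_sectors)
        last = int(x_max / image_width * n_sectors)
        active.update(range(first, min(last, n_sectors - 1) + 1))
    return sorted(active)


if __name__ == "__main__":
    rgb_frame = None                     # placeholder for a camera frame
    ego = EgoState(speed_mps=8.0)        # example kinematic state

    rois = detect_rois(rgb_frame)
    rate_hz = lidar_frame_rate(base_rate_hz=10.0, ego=ego)
    sectors = select_lidar_sectors(rois, image_width=640, n_sectors=16)

    print(f"Lidar rate: {rate_hz:.1f} Hz, active sectors: {sectors}")
```

Running the sketch prints a reduced Lidar rate for the slow-moving example and a small subset of active sectors, which is the sense in which Lidar data sensed/transferred is cut without abandoning the regions the task actually needs.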
Task-Driven RGB-Lidar Fusion for Object Tracking in Resource-Efficient Autonomous System
IEEE Transactions on Intelligent Vehicles; Vol. 7, No. 1; pp. 102-112
2022-03-01
Article (Journal)
Electronic Resource
English
OBJECT TRACKING USING LiDAR DATA FOR AUTONOMOUS MACHINE APPLICATIONS
European Patent Office | 2023
Object Tracking with 3D LIDAR Via Multi-task Sparse Learning
British Library Conference Proceedings | 2015
ONLINE CAMERA LIDAR FUSION AND OBJECT DETECTION ON HYBRID DATA FOR AUTONOMOUS DRIVING
British Library Conference Proceedings | 2018