Autonomous driving research has predominantly focused on LiDAR- and vision-based methods. LiDAR excels in accuracy and robustness, but its high cost is prohibitive; vision-based systems, by contrast, are more economical but limited in range and precision. To overcome these limitations, this paper presents a cross-modal scene recognition algorithm that integrates semantic information to enable positional transformation between vision devices and LiDAR maps. The core objective is precise initial localization within LiDAR point cloud maps, establishing a consistent link between visual perception and spatial mapping. The algorithm uses a cross-modal interaction network to fuse features from both modalities, significantly narrowing the semantic gap, and further employs graph neural networks to deepen semantic understanding and improve the alignment of scenes across modalities. The method interprets complex environmental contexts efficiently and improves matching precision. Validated on the KITTI dataset, the algorithm achieved an average F1 score of 0.815, supporting its value for more accurate and reliable scene recognition in autonomous navigation systems.
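The abstract describes the pipeline only at a high level. The sketch below is a hypothetical illustration of the two components it names: a cross-modal interaction step that fuses visual and LiDAR features, and a graph-based semantic refinement step, followed by a similarity search for initial localization. All module names, dimensions, and the cosine-similarity matching are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the described pipeline (not the paper's code).
import torch
import torch.nn as nn


class CrossModalInteraction(nn.Module):
    """Fuse visual and LiDAR tokens with cross-attention to narrow the modality gap."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, vis_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # vis_feat, lidar_feat: (batch, tokens, dim); visual tokens attend to LiDAR tokens.
        fused, _ = self.attn(query=vis_feat, key=lidar_feat, value=lidar_feat)
        return self.proj(fused + vis_feat)


class SemanticGraphLayer(nn.Module):
    """One message-passing step over a semantic scene graph (adjacency assumed given)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, dim); adj: (batch, num_nodes, num_nodes)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neighbor_mean = adj @ nodes / deg  # average neighbor features
        return torch.relu(self.update(torch.cat([nodes, neighbor_mean], dim=-1)))


def scene_similarity(query_desc: torch.Tensor, map_descs: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between a query descriptor and stored map-scene descriptors."""
    q = torch.nn.functional.normalize(query_desc, dim=-1)
    m = torch.nn.functional.normalize(map_descs, dim=-1)
    return m @ q  # higher score = better candidate for initial localization


if __name__ == "__main__":
    B, T, N, D = 1, 32, 16, 256
    vis = torch.randn(B, T, D)      # visual tokens (e.g. from an image backbone)
    lidar = torch.randn(B, T, D)    # LiDAR tokens (e.g. from a point-cloud backbone)
    nodes = torch.randn(B, N, D)    # semantic graph nodes (e.g. object instances)
    adj = (torch.rand(B, N, N) > 0.5).float()

    fused = CrossModalInteraction(D)(vis, lidar)
    nodes = SemanticGraphLayer(D)(nodes, adj)
    query = torch.cat([fused.mean(1), nodes.mean(1)], dim=-1).squeeze(0)
    candidates = torch.randn(100, 2 * D)  # stand-in for precomputed map descriptors
    scores = scene_similarity(query, candidates)
    print("best map candidate:", scores.argmax().item())
```

In a real system the candidate descriptors would come from scenes in the LiDAR point cloud map, and the best-scoring candidate would seed the initial pose estimate; here they are random stand-ins.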
A Novel Cross-Modal Scene Recognition Algorithm Leveraging Semantic Information
24.06.2024
1477907 bytes
Conference paper
Electronic resource
English
Leveraging motion and semantic cues for 3D scene understanding
TIBKAT | 2020
LEVERAGING SEMANTIC INFORMATION FOR A MULTI-DOMAIN VISUAL AGENT
Europäisches Patentamt | 2025