Multimodal fusion of point cloud data is a challenging task in 3D computer vision, particularly for fine-grained object segmentation. In this study, we propose a unified multimodal LiDAR segmentation network, named Align and Blend (A2Blend for short), which integrates three representative forms of point clouds: point view, voxel view, and range view. The core innovation of A2Blend lies in addressing two primary tasks: Align and Blend. For the “Align” task, we design a learnable cross-modal association module whose core is a “Cross-Modal Triplet Alignment Loss”. To the best of our knowledge, this is the first application of such a mechanism in LiDAR segmentation. Drawing on contrastive learning, it pulls samples of the same semantic class into tight clusters both within and across modalities, while pushing clusters of different classes apart in the feature space, which enhances the discriminative power and representational capacity of the feature embeddings. For the “Blend” task, we propose a fusion strategy that combines intra-group self-attention within each modality with inter-group attention across modalities, merging key ideas from standard self-attention and cross-attention to achieve more comprehensive multimodal fusion. On two large-scale outdoor datasets and one indoor dataset, the segmentation performance surpasses that of most state-of-the-art methods.
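
The “Cross-Modal Triplet Alignment Loss” described in the abstract is contrastive in spirit: embeddings of the same semantic class are pulled together within and across modalities, while embeddings of different classes are pushed apart beyond a margin. The following is a minimal sketch of that idea, assuming paired per-point embeddings from two modality views and shared semantic labels; the function name cross_modal_triplet_loss, the cosine-distance metric, the batch-hard mining, and the margin value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(feats_a, feats_b, labels, margin=0.5):
    """Illustrative contrastive/triplet-style alignment between two modality views.

    feats_a, feats_b: (N, D) embeddings of the same N points from two views
                      (e.g. point view and voxel view), assumed pre-associated.
    labels:           (N,) shared semantic class ids.
    """
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)

    # Pairwise cosine distance between the two modality views.
    dist = 1.0 - a @ b.t()                                    # (N, N)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)         # same semantic class?

    # Hardest positive: farthest same-class cross-modal pair (to pull together).
    pos = torch.where(same, dist, torch.full_like(dist, -1e9)).max(dim=1).values
    # Hardest negative: closest different-class cross-modal pair (to push apart).
    neg = torch.where(~same, dist, torch.full_like(dist, 1e9)).min(dim=1).values

    # Triplet margin: positives should be closer than negatives by at least `margin`.
    return F.relu(pos - neg + margin).mean()
```

Intuitively, minimizing such a term drives same-class embeddings from different views toward a common cluster and separates different classes, which is the “Align” behaviour the abstract describes; the “Blend” stage would then fuse the aligned features with intra- and inter-modal attention.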


    Title:

    Align and Blend: A Unified Multi-Modal LiDAR Segmentation Network


    Contributors:
    Wang, Chuanxu (Author) / Li, Jiajiong (Author) / Chen, Xin (Author) / Song, Da (Author) / Wang, Binghui (Author)


    Publication date:

    May 1, 2025


    Format / Extent:

    4,807,836 bytes


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry

    Wisth, D / Camurri, M / Das, S et al. | BASE | 2022

    Free access

    M2S-RoAD: Multi-Modal Semantic Segmentation for Road Damage Using Camera and LiDAR Data

    Tseng, Tzu-Yun / Lyu, Hongyu / Li, Josephine et al. | ArXiv | 2025

    Free access


    MULTI-MODAL SEGMENTATION NETWORK FOR ENHANCED SEMANTIC LABELING IN MAPPING

    WIDJAJA SERGI ADIPRAJA / SHARMA DHANANJAI / LIONG VENICE ERIN B | European Patent Office | 2023

    Free access

    Multi-modal segmentation network for enhanced semantic labeling in mapping

    SERGI ADIPRAJA WIDJAJA / DHANANJAI SHARMA / VENICE ERIN BAYLON LIONG | European Patent Office | 2025

    Free access