Multimodal fusion of point cloud data is a challenging task in 3D computer vision, particularly for fine-grained object segmentation. In this study, we propose a unified multimodal LiDAR segmentation network, Align and Blend (A2Blend for short), which integrates three representative forms of point clouds: point view, voxel view, and range view. The core innovation of A2Blend lies in addressing two primary tasks: Align and Blend. For the “Align” task, we design a learnable cross-modal association module built around a “Cross-Modal Triplet Alignment Loss”; to the best of our knowledge, this is the first application of such a mechanism to LiDAR segmentation. Following contrastive-learning principles, the loss pulls semantically similar groups closer together, both within and across modalities, while pushing dissimilar groups apart in the feature space, which significantly enhances the discriminative power and representational capacity of the feature embeddings. For the “Blend” task, we propose a fusion strategy that combines intragroup self-attention within each modality with intergroup self-attention across modalities, drawing on the ideas of standard self-attention and cross-attention to achieve more comprehensive multimodal fusion. On two large-scale outdoor datasets and one indoor dataset, A2Blend surpasses most state-of-the-art algorithms in segmentation performance.
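The abstract does not give the exact formulation of the Cross-Modal Triplet Alignment Loss, so the following is only a minimal illustrative sketch of a margin-based triplet loss between two point-aligned modalities (for example, point-view and voxel-view features mapped back to the same points). The function name cross_modal_triplet_alignment_loss, the centroid-based anchor/positive/negative construction, and the margin value are assumptions for illustration, not the authors' implementation.

    # Minimal sketch (not the authors' code): a margin-based triplet loss that
    # aligns per-class feature centroids across two modalities. Shapes, names,
    # and the margin are assumptions.
    import torch
    import torch.nn.functional as F

    def cross_modal_triplet_alignment_loss(feat_a, feat_b, labels, margin=0.5):
        """feat_a, feat_b: (N, C) point-aligned features from two modalities.
        labels: (N,) semantic labels shared by the same N points."""
        losses = []
        for cls in labels.unique():
            mask = labels == cls
            if mask.sum() < 2 or (~mask).sum() < 1:
                continue
            # Anchor: class centroid in modality A; positive: same-class centroid in modality B.
            anchor = F.normalize(feat_a[mask].mean(dim=0), dim=0)
            positive = F.normalize(feat_b[mask].mean(dim=0), dim=0)
            # Negative: centroid of all other classes in modality B.
            negative = F.normalize(feat_b[~mask].mean(dim=0), dim=0)
            d_pos = 1.0 - torch.dot(anchor, positive)  # cosine distance to positive
            d_neg = 1.0 - torch.dot(anchor, negative)  # cosine distance to negative
            losses.append(F.relu(d_pos - d_neg + margin))
        if not losses:
            return feat_a.new_zeros(())
        return torch.stack(losses).mean()

In this sketch, minimizing the loss draws same-class features from the two modalities together while enforcing a margin against other-class features, which is one plausible reading of the contrastive alignment described above.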


    Title:
    Align and Blend: A Unified Multi-Modal LiDAR Segmentation Network

    Contributors:
    Wang, Chuanxu (author) / Li, Jiajiong (author) / Chen, Xin (author) / Song, Da (author) / Wang, Binghui (author)

    Publication date:
    2025-05-01

    Size:
    4807836 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English



    Similar items:

    Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry
    Wisth, D. / Camurri, M. / Das, S. et al. | BASE | 2022 | Free access

    M2S-RoAD: Multi-Modal Semantic Segmentation for Road Damage Using Camera and LiDAR Data
    Tseng, Tzu-Yun / Lyu, Hongyu / Li, Josephine et al. | ArXiv | 2025 | Free access

    Multi-Modal Segmentation Network for Enhanced Semantic Labeling in Mapping
    Widjaja, Sergi Adipraja / Sharma, Dhananjai / Liong, Venice Erin B. | European Patent Office | 2023 | Free access

    Multi-modal segmentation network for enhanced semantic labeling in mapping
    Widjaja, Sergi Adipraja / Sharma, Dhananjai / Liong, Venice Erin Baylon | European Patent Office | 2025 | Free access