Vehicle re-identification (Re-ID) aims to retrieve, from a set of images captured by non-overlapping cameras, the images most similar to a given query vehicle image. It plays a crucial role in intelligent transportation systems and has made impressive advances in recent years. In real-world scenarios, a text description of the target vehicle can often be obtained from witness accounts, after which image queries for vehicle Re-ID must be assembled manually, which is time-consuming and labor-intensive. To solve this problem, this paper introduces a new fine-grained cross-modal retrieval task, text-to-image vehicle re-identification, which seeks to retrieve target vehicle images from a given text description. To bridge the significant gap between the language and visual modalities, we propose a novel Multi-scale multi-view Cross-modal Alignment Network (MCANet). In particular, we incorporate view masks and multi-scale features to align image and text features in a progressive way. In addition, we design a Masked Bidirectional InfoNCE (MB-InfoNCE) loss to enhance training stability and make the best use of negative samples. To provide an evaluation platform for text-to-image vehicle re-identification, we create the Text-to-Image Vehicle Re-Identification dataset (T2I VeRi), which contains 2,465 image-text pairs of 776 vehicles with an average sentence length of 26.8 words. Extensive experiments conducted on T2I VeRi demonstrate that MCANet outperforms the current state-of-the-art (SOTA) method by 2.2% in rank-1 accuracy.
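The record does not give the exact formulation of the MB-InfoNCE loss. As a rough illustration only, a bidirectional InfoNCE objective with a mask over invalid negatives can be sketched as below; the masking rule (dropping off-diagonal pairs that share a vehicle identity from the negatives) and the function name `masked_bidirectional_infonce` are assumptions for this sketch, not the paper's definition.

```python
# Minimal sketch of a bidirectional InfoNCE loss with a negative-sample mask.
# Assumed behaviour: image-text pairs that share a vehicle ID are excluded
# from the negatives; this is an illustration, not the paper's MB-InfoNCE.
import torch
import torch.nn.functional as F

def masked_bidirectional_infonce(img_emb, txt_emb, ids, temperature=0.07):
    """img_emb, txt_emb: (B, D) embeddings; ids: (B,) vehicle identity labels."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature        # (B, B) similarity matrix

    # Mask off-diagonal pairs with the same vehicle ID so they are not
    # counted as negatives in either retrieval direction.
    same_id = ids.unsqueeze(0) == ids.unsqueeze(1)      # (B, B) boolean
    eye = torch.eye(len(ids), dtype=torch.bool, device=logits.device)
    logits = logits.masked_fill(same_id & ~eye, float('-inf'))

    targets = torch.arange(len(ids), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)          # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)      # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)
```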





    Title:

    Text-to-Image Vehicle Re-Identification: Multi-Scale Multi-View Cross-Modal Alignment Network and a Unified Benchmark


    Contributors:
    Ding, Leqi (author) / Liu, Lei (author) / Huang, Yan (author) / Li, Chenglong (author) / Zhang, Cheng (author) / Wang, Wei (author) / Wang, Liang (author)


    Publication date:

    2024-07-01


    Format / Extent:

    2685350 bytes


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    UNIAA: A Unified Multi-modal Image Aesthetic Assessment Baseline and Benchmark

    Zhou, Zhaokun / Wang, Qiulin / Lin, Bin et al. | ArXiv | 2024

    Free access

    Align and Blend: A Unified Multi-Modal LiDAR Segmentation Network

    Wang, Chuanxu / Li, Jiajiong / Chen, Xin et al. | IEEE | 2025


    Multi-modal vehicle

    KARADIA NARENDRA H | European Patent Office | 2024

    Free access

    Multi-modal vehicle

    BENEDICT MOBLE / DENTON HUNTER / HRISHIKESHAVAN VIKRAM | European Patent Office | 2023

    Free access

    Multi-modal vehicle

    KARADIA NARENDRA HIRALAL | European Patent Office | 2021

    Free access