Due to their low storage cost and fast retrieval speed, hashing techniques have received much attention in cross-modal retrieval. However, several issues remain to be explored. First, some existing hashing methods use labels to construct a semantic similarity matrix between pairwise data, ignoring the potential manifold structure between heterogeneous data. Second, some existing methods underestimate the importance of multi-label information and the gaps between different class labels, making the learned hash codes less discriminative. Third, few of them embed both manifold and balanced structures within the same model, and relaxing the discrete constraints leads to increased quantization error. To mitigate these problems, this paper proposes a novel supervised hashing method, termed Label Guided Discrete Hashing (LGDH), which simultaneously preserves in Hamming space the comprehensive manifold structure and the discriminative balanced codes, both constructed from label information. We develop a local category distribution over nearest neighbors to uncover the underlying manifold structure of heterogeneous data. To maximize the gaps between different categories, a balanced matrix is constructed from labels to generate hash codes with balanced bits. For multi-label data, we also design a novel multi-label manifold and balanced structure matrix to adapt to real-world scenarios. An effective discrete optimization method is used to optimize the proposed objective function instead of a relaxed one. Extensive experiments on three benchmark datasets verify the effectiveness of LGDH. The comparison results demonstrate that LGDH achieves average improvements of about 2% and 3% on different cross-modal tasks.
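
To make the two label-driven ingredients in the abstract concrete, the sketch below illustrates, in generic NumPy rather than the authors' LGDH formulation, how a pairwise semantic similarity matrix can be derived from multi-hot label vectors and how real-valued embeddings can be binarized into roughly balanced bits. The function names and the median-centering choice are illustrative assumptions only; LGDH itself builds a local category distribution over nearest neighbors and solves the objective with a discrete optimization scheme, which this toy code does not reproduce.

    # Illustrative sketch (not the LGDH algorithm): label-based pairwise
    # similarity and balanced binarization for cross-modal hashing.
    import numpy as np

    def label_similarity(labels):
        # labels: (n, c) binary multi-hot matrix; S[i, j] = 1 if samples i and j
        # share at least one class label, else 0.
        overlap = labels @ labels.T
        return (overlap > 0).astype(np.float64)

    def balanced_binary_codes(embeddings):
        # Subtract the per-bit median before taking the sign so that roughly
        # half of the samples fall on each side of every bit (balanced bits).
        centered = embeddings - np.median(embeddings, axis=0, keepdims=True)
        codes = np.sign(centered)
        codes[codes == 0] = 1  # break ties deterministically
        return codes.astype(np.int8)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        labels = (rng.random((6, 4)) > 0.6).astype(int)  # toy multi-label matrix
        print(label_similarity(labels))
        print(balanced_binary_codes(rng.standard_normal((6, 16))))

A real supervised cross-modal hashing method would learn modality-specific projections so that image and text codes agree with this label-derived similarity, rather than binarizing random embeddings as in this toy example.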


    Title:

    Label Guided Discrete Hashing for Cross-Modal Retrieval


    Contributors:
    Lan, Rushi (author) / Tan, Yu (author) / Wang, Xiaoqin (author) / Liu, Zhenbing (author) / Luo, Xiaonan (author)

    Published in:

    Publication date:

    2022-12-01


    Format / extent:

    3880304 bytes


    Media type:

    Article (journal)


    Format:

    Electronic resource


    Language:

    English



    Cross-Modal Generation and Pair Correlation Alignment Hashing

    Ou, Weihua / Deng, Jiaxin / Zhang, Lei et al. | IEEE | 2023


    Relaxed Energy Preserving Hashing for Image Retrieval

    Sun, Yuan / Dai, Jian / Ren, Zhenwen et al. | IEEE | 2024


    Deep Supervised Auto-encoder Hashing for Image Retrieval

    Tang, Sanli / Chi, Haoyuan / Yang, Jie et al. | British Library Conference Proceedings | 2018


    ConTra: (Con)text (Tra)nsformer for Cross-Modal Video Retrieval

    Fragomeni, Adriano / Wray, Michael / Damen, Dima | British Library Conference Proceedings | 2023


    Lightweight Image Hashing Based on Knowledge Distillation and Optimal Transport for Face Retrieval

    Feng, Ping / Zhang, Hanyun / Sun, Yingying et al. | British Library Conference Proceedings | 2023