TNL (Tracking by Natural Language) aims to locate, in a video, the target described by a natural language sentence. Most existing TNL methods consist of three modules: object grounding, object tracking, and switching. Their performance is limited by weak grounding and switching, caused by complex backgrounds and by inaccurate information stored in memory. This paper presents a global-local framework that addresses these issues with a prompt-guided grounding module, a trained local tracking module, and a memory-based switching module. The prompt-guided grounding module uses noun prompts to guide the CLIP model to focus on target regions and to align visual features semantically with linguistic features, so the grounder is not misled by distractors or background clutter. The memory-based switching module stores higher-quality historical information, allowing the model to make more accurate decisions from reliable data and thus improving overall performance. Experiments on TNL2K, LaSOT, and OTB-Lang demonstrate the effectiveness and generalizability of the proposed framework.
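As a rough illustration (not the authors' released code), the following Python sketch shows the core idea behind prompt-guided grounding: scoring candidate regions with CLIP against a noun prompt extracted from the language description, then grounding on the best-matching region. The region proposals, the noun extraction step, and the function name `ground_with_noun_prompt` are assumptions made for illustration only.

```python
# Minimal sketch of noun-prompt-guided grounding with CLIP.
# Assumes candidate boxes come from an external proposal stage and
# that a noun (e.g. "zebra") has already been extracted from the sentence.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def ground_with_noun_prompt(frame: Image.Image, boxes, noun: str):
    """Return the candidate box whose crop best matches the noun prompt."""
    # Encode the noun prompt once.
    text = clip.tokenize([f"a photo of a {noun}"]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(text)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

        # Encode each candidate region crop (boxes are (l, t, r, b) tuples).
        crops = torch.stack([preprocess(frame.crop(b)) for b in boxes]).to(device)
        img_feat = model.encode_image(crops)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

        # Cosine similarity between each region and the noun prompt.
        scores = (img_feat @ text_feat.T).squeeze(-1)
    return boxes[scores.argmax().item()], scores
```

Restricting the text query to the noun prompt (rather than the full sentence) is one plausible way to keep CLIP focused on the target category instead of background context; the paper's actual module design may differ.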


    Title:
    Boost Tracking by Natural Language With Prompt-Guided Grounding

    Contributors:
    Li, Hengyou (author) / Liu, Xinyan (author) / Li, Guorong (author) / Wang, Shuhui (author) / Qing, Laiyun (author) / Huang, Qingming (author)

    Publication date:
    2025-01-01

    Size:
    2524134 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English




    Similar titles:

    Vision-Language Tracking With CLIP and Interactive Prompt Learning
    Zhu, Hong / Lu, Qingyang / Xue, Lei et al. | IEEE | 2025



    Wire-guided trolleys boost engine productivity
    Scott, D. / Fiat, IT | Automotive engineering | 1982


    Multi-Level Query Interaction for Temporal Language Grounding
    Tang, Haoyu / Zhu, Jihua / Wang, Lin et al. | IEEE | 2022