Automatic vehicle detection and annotation for streaming video data with complex scenes is an interesting but challenging task for intelligent transportation systems. In this paper, we present a fast algorithm, detection and annotation for vehicles (DAVE), which effectively combines vehicle detection and attributes annotation into a unified framework. DAVE consists of two convolutional neural networks: a shallow fully convolutional fast vehicle proposal network (FVPN) for extracting all vehicles’ positions, and a deep attributes learning network (ALN), which aims to verify each detection candidate and infer each vehicle’s pose, color, and type information simultaneously. These two nets are jointly optimized so that abundant latent knowledge learned from the deep ALN can be exploited to guide the training of the much simpler FVPN. Once the system is trained, DAVE achieves efficient vehicle detection and attributes annotation on real-world traffic surveillance data, while the FVPN can also be adopted independently as a real-time, high-performance vehicle detector. We evaluate DAVE on a new self-collected urban traffic surveillance data set and on the public PASCAL VOC2007 car and LISA 2010 data sets, with consistent improvements over existing algorithms.
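
The abstract describes a two-network design: a shallow fully convolutional net that scores vehicle locations over the whole frame, and a deeper net that verifies each proposal and predicts pose, color, and type. Below is a minimal PyTorch sketch of such a layout; all layer sizes, attribute class counts, and module names are illustrative assumptions and are not taken from the paper.

# Minimal sketch (assuming PyTorch) of a two-branch setup in the spirit of DAVE:
# a shallow fully convolutional proposal net (FVPN) and a deeper attributes
# net (ALN). Layer sizes, class counts, and names are illustrative only.
import torch
import torch.nn as nn

class FVPN(nn.Module):
    """Shallow fully convolutional net producing a per-location vehicle score map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),            # 1x1 conv -> vehicle/background score map
        )

    def forward(self, x):
        return self.features(x)             # (N, 1, H/2, W/2) proposal heat map

class ALN(nn.Module):
    """Deeper net that verifies a proposal and predicts pose, color, and type."""
    def __init__(self, n_pose=8, n_color=12, n_type=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.verify = nn.Linear(128, 2)     # vehicle vs. background verification
        self.pose   = nn.Linear(128, n_pose)
        self.color  = nn.Linear(128, n_color)
        self.vtype  = nn.Linear(128, n_type)

    def forward(self, crop):
        f = self.backbone(crop)
        return self.verify(f), self.pose(f), self.color(f), self.vtype(f)

# Usage: the FVPN scans the full frame; high-scoring regions would be cropped
# and passed to the ALN for verification and attribute annotation.
fvpn, aln = FVPN(), ALN()
frame = torch.randn(1, 3, 480, 640)
score_map = fvpn(frame)
crop = torch.randn(1, 3, 128, 128)          # one resized proposal crop
verify, pose, color, vtype = aln(crop)

In the paper's framing, the two networks are trained jointly so that the richer ALN guides the simpler FVPN; the sketch above shows only the forward structure, not that joint optimization.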


    Title: Fast Automatic Vehicle Annotation for Urban Traffic Surveillance
    Contributors: Zhou, Yi (author) / Liu, Li (author) / Shao, Ling (author) / Mellor, Matt (author)
    Publication date: 2018-06-01
    Size: 5326346 byte
    Type of media: Article (Journal)
    Type of material: Electronic Resource
    Language: English



    Similar titles:

    Airborne moving vehicle detection for urban traffic surveillance
    Lin, Renjun / Cao, Xianbin / Xu, Yanwu et al. | IEEE | 2008

    Automatic traffic surveillance system for vehicle tracking and classification
    Jun-Wei Hsieh / Shih-Hao Yu / Yung-Sheng Chen et al. | IEEE | 2006