Object classification in far-field video sequences is a challenging problem because of low-resolution imagery and projective image distortion. Most existing far-field classification systems are trained to work well in a constrained set of scenes, but can fail dramatically when applied to new scenes, or even to different views of the same scene. We identify discriminative object features for classifying vehicles and pedestrians and develop a scene-invariant classification system that is trained on a small number of labeled examples from a few scenes, yet transfers well to a wide range of new scenes. At the same time, we demonstrate that scene-specific context features (such as image position and direction of motion of objects) can greatly improve classification in any given scene. To combine these ideas, we propose a new algorithm for adapting a scene-invariant classifier to scene-specific features by retraining with the help of unlabeled data in a novel scene. Experimental results demonstrate the effectiveness of our context features and scene-transfer/adaptation algorithm on multiple urban and highway scenes.
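The retraining-with-unlabeled-data idea described in the abstract lends itself to a short self-training sketch. The snippet below is a minimal illustration, assuming scikit-learn-style classifiers and NumPy feature arrays; the function name `adapt_to_new_scene`, the logistic-regression base model, and the confidence threshold are illustrative assumptions, not the paper's actual algorithm.

```python
# A minimal sketch of scene adaptation via self-training, assuming
# scikit-learn-style classifiers. All names and thresholds here are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt_to_new_scene(X_src, y_src, X_new_invariant, X_new_context, conf_thresh=0.9):
    """Adapt a scene-invariant vehicle/pedestrian classifier to a new scene.

    X_src, y_src    -- labeled scene-invariant features from a few source scenes
    X_new_invariant -- unlabeled scene-invariant features from the new scene
    X_new_context   -- scene-specific context features (e.g. image position,
                       direction of motion) for the same unlabeled objects
    """
    # 1. Train the scene-invariant classifier on the labeled source scenes.
    base = LogisticRegression(max_iter=1000).fit(X_src, y_src)

    # 2. Pseudo-label the unlabeled objects in the new scene, keeping only
    #    predictions above the confidence threshold.
    probs = base.predict_proba(X_new_invariant)
    confident = probs.max(axis=1) >= conf_thresh
    pseudo_labels = base.classes_[probs.argmax(axis=1)[confident]]

    # 3. Retrain a scene-specific classifier on invariant + context features,
    #    using the confidently pseudo-labeled examples as supervision.
    X_adapted = np.hstack([X_new_invariant[confident], X_new_context[confident]])
    scene_clf = LogisticRegression(max_iter=1000).fit(X_adapted, pseudo_labels)
    return base, scene_clf
```

In use, `base` would classify objects in arbitrary new scenes, while `scene_clf` exploits the scene-specific context features once enough confidently pseudo-labeled tracks have accumulated in that scene.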
Improving object classification in far-field video
2004-01-01
960888 bytes
Conference paper
Electronic Resource
English
Improving Object Classification in Far-Field Video | British Library Conference Proceedings | 2004
Method and system for improving object detection and object classification | European Patent Office | 2020
Improving Run Time Efficiency of Semantic Video Event Classification | Springer Verlag | 2023
Gaussian Mixture Classification for Moving Object Detection in Video Surveillance Environment | British Library Conference Proceedings | 2005