In this paper, we describe a framework for video analysis and a method to detect and understand the class of events we refer to as "split and merge events" in single or multiple video streams. We start with automatic detection of scene changes, including camera operations such as zooms, pans, tilts, and scene cuts. For each new scene, camera calibration is performed and the scene geometry is estimated to determine the absolute position of each detected object. Objects in the video scenes are detected using an adaptive background subtraction method and tracked over consecutive frames. Detection and tracking are designed to identify the key split and merge behaviors, in which one object splits into two or more objects or two or more objects merge into one. We have identified split and merge behaviors as the key behavioral components of several higher-level activities, such as package drop-offs, exchanges between people, people getting out of cars, and crowd formation. We embed the data about scenes, camera parameters, object features, and positions into the video stream as metadata in order to correlate, compare, and associate results across several related scenes and achieve better video event understanding. Storing this detailed syntactic information in the stream physically associates it with the video itself and guarantees that analysis results are preserved in archival storage and when sub-clips are created for distribution to other users. We present preliminary results on single and multiple video streams.
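The abstract does not give implementation details for the detection step. The sketch below illustrates the split and merge idea under simplified assumptions: OpenCV's MOG2 subtractor stands in for the authors' adaptive background subtraction, and a bounding-box overlap test stands in for full object tracking. All names here (split_merge_events, boxes_overlap, the area and overlap thresholds) are ours for illustration, not from the paper.

```python
import cv2

def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def detect_blobs(fg_mask, min_area=200):
    """Bounding boxes of foreground regions above a minimum area.
    Assumes OpenCV 4.x, where findContours returns (contours, hierarchy)."""
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def split_merge_events(video_path):
    """Yield (frame_index, event, boxes). A split is one previous blob
    overlapping two or more current blobs; a merge is two or more
    previous blobs overlapping a single current blob."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    prev_blobs = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Adaptive background subtraction; MOG2 marks shadow pixels with
        # value 127, so thresholding at 200 keeps only true foreground.
        mask = subtractor.apply(frame)
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        blobs = detect_blobs(mask)
        for prev in prev_blobs:
            hits = [b for b in blobs if boxes_overlap(prev, b)]
            if len(hits) >= 2:
                yield frame_idx, "split", hits
        for cur in blobs:
            hits = [p for p in prev_blobs if boxes_overlap(p, cur)]
            if len(hits) >= 2:
                yield frame_idx, "merge", [cur]
        prev_blobs = blobs
        frame_idx += 1
    cap.release()

# Example use: for i, kind, boxes in split_merge_events("scene.avi"): print(i, kind, boxes)
```

A real system, like the one the abstract describes, would track object identities across frames rather than relying on frame-to-frame box overlap, which can misfire when unrelated objects pass close together.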
Scene and Content Analysis from Multiple Video Streams
2001-01-01
688624 bytes
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2001
Dynamic Depth Recovery from Multiple Synchronized Video Streams (British Library Conference Proceedings, 2001)
A motion-based scene tree for compressed video content management (British Library Online Contents, 2006)