In this paper, we describe a framework for video analysis and a method to detect and understand the class of events we refer to as "split and merge events" from single or multiple video streams. We start with automatic detection of scene changes, including camera operations such as zoom, pan, and tilt, as well as scene cuts. For each new scene, camera calibration is performed and the scene geometry is estimated to determine the absolute position of each detected object. Objects in the video scenes are detected using an adaptive background subtraction method and tracked over consecutive frames. Detection and tracking are designed to identify the key split and merge behaviors, in which one object splits into two or more objects or two or more objects merge into one. We have identified split and merge behaviors as the key behavioral components of several higher-level activities such as package drop-off, exchanges between people, people getting out of cars, or people forming crowds. We embed data about scenes, camera parameters, object features, and positions into the video stream as metadata to correlate, compare, and associate the results for several related scenes and thereby achieve better video event understanding. Storing this detailed syntactic information in the stream physically associates it with the video itself and guarantees that analysis results are preserved in archival storage and when sub-clips are created for distribution to other users. We present preliminary results on single and multiple video streams.
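The detection and split/merge stages summarized above can be illustrated with a short sketch. The snippet below is a minimal, illustrative approximation only: it assumes OpenCV's MOG2 subtractor in place of the paper's adaptive background model, uses a naive bounding-box overlap test between consecutive frames as the split/merge heuristic rather than the authors' tracking method, and the input file name "stream.mp4" is hypothetical.

import cv2
import numpy as np

def detect_objects(frame, subtractor, min_area=500):
    """Segment moving objects via adaptive background subtraction."""
    mask = subtractor.apply(frame)
    # Drop shadow pixels (marked as 127 by MOG2) and clean up noise.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def overlaps(a, b):
    """True if two (x, y, w, h) bounding boxes intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def split_merge_events(prev_boxes, curr_boxes):
    """Flag split (one box -> several) and merge (several -> one) transitions."""
    events = []
    for p in prev_boxes:
        children = [c for c in curr_boxes if overlaps(p, c)]
        if len(children) > 1:
            events.append(("split", p, children))
    for c in curr_boxes:
        parents = [p for p in prev_boxes if overlaps(p, c)]
        if len(parents) > 1:
            events.append(("merge", parents, c))
    return events

if __name__ == "__main__":
    cap = cv2.VideoCapture("stream.mp4")  # hypothetical input clip
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    prev_boxes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = detect_objects(frame, subtractor)
        for kind, *detail in split_merge_events(prev_boxes, boxes):
            print(kind, detail)
        prev_boxes = boxes
    cap.release()

A full system along the lines of the paper would additionally handle scene-change detection, camera calibration, and metadata embedding before split/merge events could be correlated across multiple streams.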


    Title:

    Scene and content analysis from multiple video streams


    Contributors:
    Guler, S. (author)


    Publication date:

    01.01.2001


    Format / Extent:

    688624 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Scene and Content Analysis from Multiple Video Streams

    Guler, S. | British Library Conference Proceedings | 2001


    Dynamic depth recovery from multiple synchronized video streams

    Tao, H. / Sawhney, H. S. / Kumar, R. | IEEE | 2001


    Dynamic Depth Recovery from Multiple Synchronized Video Streams

    Tao, H. / Sawhney, H. S. / Kumar, R. et al. | British Library Conference Proceedings | 2001


    A motion-based scene tree for compressed video content management

    Yi, H. / Rajan, D. / Chia, L. T. | British Library Online Contents | 2006


    Video surveillance applications using multiple views of a scene

    Meyer, M. / Ohmacht, T. / Bosch, R. et al. | IEEE | 1999