We propose an algorithm for estimating dense depth information of dynamic scenes from multiple video streams captured by unsynchronized stationary cameras. We solve this problem by first imposing two assumptions about the scene motion and the temporal offset between cameras: the scene motion is described by a local constant-velocity model, and the camera temporal offset is assumed to be constant within a short period of time. Based on these models, we investigate the geometric relations between the images of moving scene points, the scene depth, the scene motion, and the camera temporal offset, and develop a method for estimating the camera temporal offset. The proposed algorithm has three main steps: 1) estimation of the temporal offset between cameras; 2) synthesis of synchronized image pairs based on the estimated camera temporal offset and the optical flow fields computed in each view; and 3) stereo computation on the synthesized synchronous image pairs. The proposed algorithm has been tested on both synthetic data and real image sequences, and promising quantitative and qualitative experimental results are demonstrated in the paper.
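To make step 2 concrete, the following is a minimal sketch of flow-based synthesis of a virtual synchronized frame under the local constant-velocity assumption stated in the abstract. It is an illustration only, not the authors' implementation: the function name, the (dx, dy) pixel-unit flow convention, and the nearest-neighbor sampling are all assumptions made for brevity.

```python
import numpy as np

def synthesize_offset_frame(frame, flow, delta):
    """Approximate the frame at time t + delta from the frame at time t.

    Assumes the local constant-velocity model: pixel motion scales
    linearly with time, so moving by a fraction `delta` of the flow
    to the next frame approximates the intermediate image.

    frame : (H, W) or (H, W, 3) array, image at time t
    flow  : (H, W, 2) array, optical flow to the next frame in (dx, dy) pixels
    delta : fractional temporal offset in [0, 1)
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Backward sampling: for each target pixel, estimate where it was at
    # time t. Using the flow at the target location as a proxy for the
    # flow at the source is a common small-motion approximation.
    src_x = np.clip(xs - delta * flow[..., 0], 0, w - 1)
    src_y = np.clip(ys - delta * flow[..., 1], 0, h - 1)
    # Nearest-neighbor lookup keeps the sketch dependency-free; a real
    # implementation would interpolate bilinearly and handle occlusions.
    return frame[np.round(src_y).astype(int), np.round(src_x).astype(int)]
```

In step 2, one such virtual frame would be synthesized in one view at the time instant of the other view's real frame, so that step 3 reduces to standard synchronized stereo.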
Dynamic Depth Recovery from Unsynchronized Video Streams
01.01.2003
592,511 bytes
Conference paper
Electronic resource
English
British Library Conference Proceedings | 2003

Similar items:
Dynamic Depth Recovery from Multiple Synchronized Video Streams (British Library Conference Proceedings, 2001)
Wide Baseline Matching between Unsynchronized Video Sequences (British Library Online Contents, 2006)
Frame-level temporal calibration of video sequences from unsynchronized cameras (British Library Online Contents, 2008)