In this paper we propose a method to construct a virtual sequence for a camera moving through a static environment, given an input sequence from a different camera trajectory. Existing image-based rendering techniques can generate photorealistic images given a set of input views, though the output images almost unavoidably contain small regions where the colour has been incorrectly chosen. In a single image these artifacts are often hard to spot, but they become more obvious when viewing a real image with its virtual stereo pair, and even more so when a sequence of novel views is generated, since the artifacts are rarely temporally consistent.
To address this problem of consistency, we propose a new spatio-temporal approach to novel video synthesis. The pixels in the output video sequence are modelled as nodes of a 3-D graph. We define an MRF on this graph which encodes photoconsistency of pixels as well as texture priors in both space and time. Unlike methods based on scene geometry, which yield highly connected graphs, our approach results in a graph whose degree is independent of scene structure. The MRF energy is therefore tractable, and we solve it for the whole sequence using a state-of-the-art message-passing optimisation algorithm. We demonstrate the effectiveness of our approach in reducing temporal artifacts.
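The abstract describes the model only at a high level. As a rough, illustrative sketch of the kind of energy involved (not the paper's actual formulation: the cost arrays, the weights, and the Potts penalty standing in for the learned texture priors are all assumptions here), a spatio-temporal MRF over video pixels could be written as:

```python
# Illustrative sketch of a spatio-temporal MRF energy over video pixels.
# All names, weights, and cost functions are assumptions, not the paper's
# actual formulation; a Potts penalty stands in for its texture priors.
import numpy as np

def mrf_energy(labels, unary_cost, lambda_s=1.0, lambda_t=1.0):
    """Energy of a labelling of a (T, H, W) pixel grid.

    labels     : int array (T, H, W), one candidate-colour index per pixel
    unary_cost : float array (T, H, W, K), photoconsistency cost of each
                 of K candidate colours at each pixel
    lambda_s   : weight on spatial smoothness within a frame
    lambda_t   : weight on temporal smoothness across frames
    """
    # Unary (photoconsistency) terms: cost of the chosen colour at each pixel.
    e = np.take_along_axis(unary_cost, labels[..., None], axis=-1).sum()
    # Pairwise spatial terms: 4-connected neighbours within each frame.
    e += lambda_s * (labels[:, 1:, :] != labels[:, :-1, :]).sum()
    e += lambda_s * (labels[:, :, 1:] != labels[:, :, :-1]).sum()
    # Pairwise temporal terms: the same pixel in consecutive frames.
    # Each pixel has at most 4 spatial + 2 temporal neighbours, so the graph
    # degree is fixed regardless of scene structure, as the abstract claims.
    e += lambda_t * (labels[1:, :, :] != labels[:-1, :, :]).sum()
    return e

# Toy usage: 3 frames of 4x4 pixels, K = 2 candidate colours per pixel.
rng = np.random.default_rng(0)
costs = rng.random((3, 4, 4, 2))
labels = costs.argmin(axis=-1)  # greedy per-pixel initialisation
print(mrf_energy(labels, costs))
```

A message-passing optimiser, as used in the paper, would minimise such an energy jointly over the whole sequence rather than choosing each pixel's colour independently.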
Temporal Priors for Novel Video Synthesis
Asian Conference on Computer Vision (ACCV 2007), Tokyo, Japan, November 18-22, 2007
2007
10 pages
Article/Book chapter
Electronic resource
English
Bayesian video matting using learnt image priors (IEEE, 2004)
Bayesian Video Matting Using Learnt Image Priors (British Library Conference Proceedings, 2004)
Behavioral Priors for Detection and Tracking of Pedestrians in Video Sequences (British Library Online Contents, 2006)