Multi-view scene capture by surfel sampling: from video streams to non-rigid 3D motion, shape and reflectance

Conference paper, 2001. Electronic resource (1091502 bytes). English.

In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion of a dynamic 3D scene. Because these properties are completely unknown, our approach uses multiple views to build a piecewise continuous geometric and radiometric representation of the scene's trace in space-time. The basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called surfel sampling, which combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectance model and complex real scenes (clothing, skin, shiny objects) illustrate our method's ability to explain pixels and pixel variations in terms of their physical causes: shape, reflectance, motion, illumination, and visibility.
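To make the central primitive concrete, here is a minimal sketch of a dynamic surfel that stores local shape, motion, and Phong reflectance parameters and predicts appearance under a known point light. This is an illustrative assumption, not the authors' implementation; the class name, fields, and the point-light Phong evaluation are ours.

```python
# Hypothetical sketch of a dynamic surfel (not the paper's code).
from dataclasses import dataclass
import numpy as np

@dataclass
class DynamicSurfel:
    """Local planar patch: instantaneous shape, motion, and Phong reflectance."""
    center: np.ndarray     # 3D position of the patch (assumed field)
    normal: np.ndarray     # unit surface normal (assumed field)
    velocity: np.ndarray   # instantaneous 3D velocity of the patch (assumed field)
    k_diffuse: float       # Phong diffuse albedo
    k_specular: float      # Phong specular coefficient
    shininess: float       # Phong exponent

    def predict_radiance(self, light_pos, view_pos, light_intensity=1.0):
        """Predict outgoing radiance under the Phong model for a point light."""
        n = self.normal / np.linalg.norm(self.normal)
        l = light_pos - self.center
        l /= np.linalg.norm(l)
        v = view_pos - self.center
        v /= np.linalg.norm(v)
        # Mirror reflection of the light direction about the normal.
        r = 2.0 * np.dot(n, l) * n - l
        diffuse = self.k_diffuse * max(np.dot(n, l), 0.0)
        specular = self.k_specular * max(np.dot(r, v), 0.0) ** self.shininess
        return light_intensity * (diffuse + specular)

    def advance(self, dt):
        """Propagate the patch along its instantaneous motion."""
        return DynamicSurfel(self.center + dt * self.velocity, self.normal,
                             self.velocity, self.k_diffuse, self.k_specular,
                             self.shininess)
```

Because the surfel predicts appearance, fitting one to image data reduces to minimizing the discrepancy between predicted and observed pixel intensities across views.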
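The surfel sampling step described in the abstract combines random sampling with parameter estimation inside a bounded space-time region. The sketch below, which reuses the hypothetical DynamicSurfel class above, shows one plausible sample-then-refine loop; the function name, the caller-supplied photometric_error callback, and the perturbation-based refinement are all assumptions standing in for the paper's actual estimator.

```python
def surfel_sampling(region_center, region_radius, photometric_error,
                    n_candidates=200, n_refine=50, rng=None):
    """Hypothetical sketch: draw random surfel hypotheses in a bounded region,
    keep the most photoconsistent one, then refine its parameters locally."""
    rng = np.random.default_rng() if rng is None else rng
    best, best_err = None, np.inf
    for _ in range(n_candidates):
        direction = rng.normal(size=3)
        cand = DynamicSurfel(
            center=region_center + region_radius * rng.uniform(-1, 1, size=3),
            normal=direction / np.linalg.norm(direction),
            velocity=rng.uniform(-1, 1, size=3),
            k_diffuse=rng.uniform(0, 1),
            k_specular=rng.uniform(0, 1),
            shininess=rng.uniform(1, 50),
        )
        err = photometric_error(cand)
        if err < best_err:
            best, best_err = cand, err
    # Crude parameter-estimation step via local random perturbation; a real
    # implementation would use nonlinear least squares on the same error.
    for _ in range(n_refine):
        c = best.center + 0.05 * region_radius * rng.normal(size=3)
        n = best.normal + 0.1 * rng.normal(size=3)
        cand = DynamicSurfel(c, n / np.linalg.norm(n), best.velocity,
                             best.k_diffuse, best.k_specular, best.shininess)
        err = photometric_error(cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

In the paper's setting, photometric_error would compare the surfel's predicted dynamic appearance against the input video streams over the bounded space-time region, accounting for visibility; here it is left as a caller-supplied callback, and a complete reconstruction would repeat this fit over many regions.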