Abstract In this paper we address the problem of synthesizing a novel view corresponding to a virtual camera, given a scene description in the form of images captured from other viewpoints. We project each line of sight emerging from the virtual camera onto each of the given views, align them geometrically, and assign a color that is photo-consistent under the radiance model. Since this problem is ill-conditioned, a smooth variation of depth across the scene is used as the regularizing constraint. This leads to an algorithm that is computationally fast and generates visually realistic images with negligible artifacts, even with a limited number of input views. The proposed approach places no restriction on the viewpoints from which the input images are captured.
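
The abstract describes sweeping candidate depths along each line of sight of the virtual camera, projecting the resulting 3D point into the input views, and preferring depths that are photo-consistent and vary smoothly. Below is a minimal sketch of that idea under simplifying assumptions (calibrated pinhole cameras given as 3x4 projection matrices, a Lambertian variance-based consistency measure, and a quadratic smoothness penalty against a neighboring pixel's depth); all function names and parameters such as `best_depth` and `lam` are illustrative and not taken from the paper.

```python
import numpy as np

def backproject(K, R, t, u, v, depth):
    # Back-project pixel (u, v) of the virtual camera to a world point at `depth`.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    X_cam = ray * depth                    # point along the line of sight
    X_world = R.T @ (X_cam - t)            # camera frame -> world frame
    return np.append(X_world, 1.0)         # homogeneous coordinates

def project(P, X):
    # Project a homogeneous world point X (4,) with a 3x4 matrix P to a pixel.
    x = P @ X
    return x[:2] / x[2]

def photo_consistency(colors):
    # Lower variance across views = more photo-consistent (Lambertian assumption).
    return np.var(colors, axis=0).sum()

def best_depth(K, R, t, u, v, views, depths, prev_depth=None, lam=0.1):
    # Sweep candidate depths along one line of sight; pick the one that minimizes
    # photo-consistency cost plus a penalty tying it to a neighboring pixel's depth.
    costs = []
    for d in depths:
        Xw = backproject(K, R, t, u, v, d)
        samples = []
        for img, P in views:               # each input view: (H x W x 3 image, 3x4 P)
            px, py = project(P, Xw)
            xi, yi = int(round(px)), int(round(py))
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                samples.append(img[yi, xi])
        cost = photo_consistency(np.array(samples)) if len(samples) >= 2 else np.inf
        if prev_depth is not None:         # smooth-depth regularization
            cost += lam * (d - prev_depth) ** 2
        costs.append(cost)
    return depths[int(np.argmin(costs))]
```

Once a depth is selected, the virtual pixel's color could be taken as the mean of the sampled colors across views; the locally adaptive regularization named in the title would presumably vary the strength of the smoothness term per region, which this fixed-`lam` sketch does not attempt.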





    Title : Novel View Synthesis Using Locally Adaptive Depth Regularization

    Contributors : Shah, H. / Chaudhuri, S.

    Publication date : 2006-01-01

    Size : 11 pages

    Type of media : Article/Chapter (Book)

    Type of material : Electronic Resource

    Language : English




    Novel View Synthesis Using Locally Adaptive Depth Regularization | Shah, H. / Chaudhuri, S. | British Library Conference Proceedings | 2006

    A locally-adaptive imager | Acosta Serafini, Pablo M. (Pablo Manuel), 1971- | DSpace@MIT | 1998


    Adaptive Range Guided Multi-view Depth Estimation with Normal Ranking Loss | Ding, Yikang / Li, Zhenyang / Huang, Dihe et al. | British Library Conference Proceedings | 2023

    Improved Locally Adaptive Vector Quantization | Cheung, Kar-Ming / Sayano, Masahiro | NTRS | 1994

    Locally adaptive multiscale contrast optimization | Bonnier, N. / Simoncelli, E.P. | IEEE | 2005