Dynamic scene modeling and rendering pose significant challenges in 3D vision. Previous methods rely on neural radiance fields (NeRF) and implicit representations, leading to slow rendering because of the many multi-layer perceptron (MLP) evaluations required. Although accelerated NeRF variants based on explicit voxel-grid features have been proposed, their storage demands grow cubically with grid resolution, which limits their applicability to dynamic scenes. Additionally, existing dynamic scene modeling methods typically employ deformation fields to map points at different time steps into a canonical space; however, a poorly chosen canonical space struggles to accommodate motion smoothly. To address these challenges, this paper proposes a multi-resolution hybrid explicit representation for novel view synthesis of dynamic scenes. Specifically, we introduce an efficient hybrid feature representation that combines multi-resolution 3D hash grids with dense 2D plane features. Compared with dense voxel grids, 2D planes can be scaled to much higher resolutions, compensating for reconstruction errors in dynamic regions at minimal parameter and time cost. Moreover, we employ a self-adaptive deformation module that learns the canonical moment, identifying the optimal one for an arbitrary scene. Our method achieves high rendering fidelity while maintaining a compact model size. We evaluate our approach on both synthetic and real-world scenes and compare it to state-of-the-art techniques. Experimental results demonstrate that our approach achieves superior or comparable rendering quality while being far more computationally efficient (more than 100 times faster than the original D-NeRF).
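The abstract describes the two key components concretely enough to sketch: a hybrid field that fuses multi-resolution 3D hash grids with dense 2D plane features, and a deformation module that warps query points to a learned canonical moment. The PyTorch sketch below is an illustrative reconstruction under my own assumptions, not the authors' released code: the level count, resolutions, feature dimensions, the Instant-NGP-style hash function, and the dt-scaled warp design are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HashGrid3D(nn.Module):
    """One resolution level of a 3D hash grid (Instant-NGP-style spatial hashing)."""

    def __init__(self, res, table_size=2 ** 16, feat_dim=2):
        super().__init__()
        self.res = res
        self.table = nn.Parameter(torch.randn(table_size, feat_dim) * 1e-2)
        # Large primes from the Instant-NGP hash function.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, x):  # x: (N, 3), coordinates in [0, 1]
        xg = x.clamp(0.0, 1.0) * (self.res - 1)
        x0 = xg.floor().long()
        w = xg - x0.to(xg.dtype)          # trilinear weights, (N, 3)
        feat = 0.0
        for dx in (0, 1):                 # accumulate over the 8 cell corners
            for dy in (0, 1):
                for dz in (0, 1):
                    off = torch.tensor([dx, dy, dz], device=x.device)
                    c = x0 + off
                    h = (c[:, 0] * self.primes[0]) ^ (c[:, 1] * self.primes[1]) \
                        ^ (c[:, 2] * self.primes[2])
                    wd = (w * off + (1 - w) * (1 - off)).prod(-1, keepdim=True)
                    feat = feat + wd * self.table[h % self.table.shape[0]]
        return feat


class SelfAdaptiveDeform(nn.Module):
    """Warps points at time t to a learnable canonical moment t_c (assumed design)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.t_c = nn.Parameter(torch.tensor(0.5))   # learned canonical moment
        self.mlp = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, x, t):  # x: (N, 3), t: (N, 1)
        dt = t - self.t_c
        # Scaling by dt makes the warp vanish exactly at the canonical moment.
        return x + self.mlp(torch.cat([x, dt], dim=-1)) * dt


class HybridField(nn.Module):
    """Multi-resolution 3D hash grids fused with dense 2D plane features."""

    def __init__(self, plane_res=256, plane_dim=8):
        super().__init__()
        self.deform = SelfAdaptiveDeform()
        self.grids = nn.ModuleList(HashGrid3D(r) for r in (16, 32, 64, 128))
        # One dense feature plane per axis-aligned projection (xy, xz, yz).
        self.planes = nn.ParameterList(
            nn.Parameter(torch.randn(1, plane_dim, plane_res, plane_res) * 1e-2)
            for _ in range(3)
        )

    def forward(self, x, t):
        xc = self.deform(x, t)            # warp into the canonical moment
        feats = [g(xc) for g in self.grids]
        for plane, ij in zip(self.planes, ([0, 1], [0, 2], [1, 2])):
            uv = (xc[:, ij] * 2 - 1).view(1, -1, 1, 2)        # grid_sample wants [-1, 1]
            s = F.grid_sample(plane, uv, align_corners=True)  # (1, C, N, 1)
            feats.append(s.view(plane.shape[1], -1).t())
        return torch.cat(feats, dim=-1)   # features for a small decoder MLP


pts = torch.rand(1024, 3)                 # sampled ray points, normalized to [0, 1]^3
t = torch.full((1024, 1), 0.3)            # query time for every point
feats = HybridField()(pts, t)             # (1024, 32): 4 levels x 2 + 3 planes x 8
```

Scaling the predicted offset by (t − t_c) is one simple way to keep the canonical moment itself undeformed while gradients remain free to move t_c toward whichever time step the scene is easiest to reconstruct from; the planes live at a much higher resolution than the hash grids, matching the abstract's claim that 2D features enlarge resolution cheaply.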




Title: Multi-Resolution Hybrid Explicit Representation for Novel View Synthesis of Dynamic Scenes

Contributors: Chen, Yanshun (author) / Yan, Weiqing (author) / Yue, Guanghui (author) / Zhou, Wujie (author)

Published in:

Publication date: 2025-02-01

Format / extent: 8410432 bytes

Media type: Journal article

Format: Electronic resource

Language: English



Similar items:

Parametric top-view representation of scenes
SCHULTER SAMUEL / WANG ZIYAN / LIU BUYU et al. | European Patent Office | 2022
Free access

Parametric top-view representation of scenes
SCHULTER SAMUEL / WANG ZIYAN / LIU BUYU et al. | European Patent Office | 2020
Free access

Modelling dynamic scenes by registering multi-view image sequences
Pons, J.-P. / Keriven, R. / Faugeras, O. | IEEE | 2005

Dynamic Environment Prediction in Urban Scenes using Recurrent Representation Learning
Itkina, Masha / Driggs-Campbell, Katherine / Kochenderfer, Mykel J. | IEEE | 2019