Dynamic scene modeling and rendering pose significant challenges in 3D vision. Previous methods rely on neural radiance fields (NeRF) and implicit representations, which lead to slow rendering speeds due to the large number of multi-layer perceptron (MLP) evaluations required. Although expedited NeRF variants based on explicit voxel grid features have been proposed, the exponential growth in storage demands with increasing grid resolution limits their applicability to dynamic scenes. Additionally, existing dynamic scene modeling methods typically employ deformation fields to map points at different time steps into a canonical space. However, an inappropriately chosen canonical space struggles to accommodate motion smoothly. To address these challenges, this paper proposes a multi-resolution hybrid explicit representation for novel view synthesis of dynamic scenes. Specifically, we introduce an efficient hybrid feature representation for dynamic scenes that combines multi-resolution 3D hash grids with dense 2D plane features. Compared to dense voxel grid representations, two-dimensional planes can increase resolution more effectively, compensating for reconstruction errors in dynamic regions at minimal parameter and time cost. Moreover, we utilize a self-adaptive deformation module that learns to identify the optimal canonical moment for arbitrary scenes. Our method achieves commendable rendering fidelity while maintaining a compact model size. We evaluate our approach in both synthetic and real-world environments and compare it to state-of-the-art techniques. Experimental results demonstrate that our approach achieves superior or comparable rendering quality and is more computationally efficient (more than 100 times faster than the original D-NeRF).
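For illustration only, the sketch below shows one plausible form of the hybrid feature representation described in the abstract: a simplified multi-resolution 3D hash encoding concatenated with bilinearly sampled dense 2D feature planes, written in PyTorch. All class names, resolutions, table sizes, and the nearest-vertex hash lookup (instead of full trilinear interpolation) are assumptions made for brevity; the paper's actual architecture, deformation module, and decoder are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F


class HashGrid3D(nn.Module):
    """Simplified multi-resolution 3D hash grid (nearest-vertex lookup for brevity)."""

    def __init__(self, n_levels=4, base_res=16, growth=2.0, table_size=2 ** 14, feat_dim=2):
        super().__init__()
        self.resolutions = [int(base_res * growth ** i) for i in range(n_levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in self.resolutions]
        )
        # Spatial-hashing constants; a full implementation would also interpolate trilinearly.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, x):  # x: (N, 3) coordinates in [0, 1]
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            idx = (x * res).long()                             # nearest grid vertex per level
            h = (idx * self.primes).sum(-1) % table.shape[0]   # hash vertex index into the table
            feats.append(table[h])
        return torch.cat(feats, dim=-1)                        # (N, n_levels * feat_dim)


class PlaneFeatures2D(nn.Module):
    """Dense 2D feature planes (xy, xz, yz), sampled bilinearly with grid_sample."""

    def __init__(self, res=128, feat_dim=8):
        super().__init__()
        self.planes = nn.Parameter(1e-4 * torch.randn(3, feat_dim, res, res))

    def forward(self, x):  # x: (N, 3) coordinates in [0, 1]
        coords = torch.stack([x[:, [0, 1]], x[:, [0, 2]], x[:, [1, 2]]])  # (3, N, 2) projections
        grid = coords.unsqueeze(2) * 2.0 - 1.0                            # (3, N, 1, 2) in [-1, 1]
        out = F.grid_sample(self.planes, grid, align_corners=True)        # (3, C, N, 1)
        return out.squeeze(-1).permute(2, 0, 1).reshape(x.shape[0], -1)   # (N, 3 * C)


# Hybrid feature for a batch of (canonical-space) 3D points: concatenate both
# representations before feeding a small decoder MLP for density and color.
if __name__ == "__main__":
    pts = torch.rand(1024, 3)
    hybrid = torch.cat([HashGrid3D()(pts), PlaneFeatures2D()(pts)], dim=-1)
    print(hybrid.shape)  # torch.Size([1024, 32]) = 4 levels * 2 + 3 planes * 8

In this sketch the 2D planes carry most of the high-resolution capacity, which reflects the abstract's point that planes enlarge resolution far more cheaply than dense voxel grids.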
Multi-Resolution Hybrid Explicit Representation for Novel View Synthesis of Dynamic Scenes
IEEE Transactions on Intelligent Vehicles; 10(2); 1117-1127
2025-02-01
8410432 bytes
Article (Journal)
Electronic Resource
English