Simulation plays a critical role in the development and testing of autonomous driving systems, yet it faces significant challenges in synthesizing complex driving scenarios and realistic sensor data. Existing scene simulation methods either fail to capture the intricate physical characteristics of the 3D world or struggle to generalize to autonomous driving datasets with unevenly distributed viewpoints. This paper proposes a point-based neural rendering approach that reconstructs and extends scenes, generating real-world test data for autonomous driving systems from varied views. Collected LiDAR data are used, with sparse regions of the point cloud filled in, to provide accurate depth and position references. Additionally, the neural descriptors are enhanced with supplementary features that depend on the observation view and sampling frequency, and multi-scale descriptors are rendered to capture comprehensive information about the scene's appearance. Experimental results demonstrate that our method achieves high-quality rendering of large-scale autonomous driving scenes and enables scene editing to synthesize more diverse and adaptable test scenes.
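
The core mechanism the abstract describes, splatting learnable per-point neural descriptors into image-space feature maps at several resolutions, can be illustrated with a minimal sketch. This is not the paper's implementation: the class name, descriptor dimension, and scale choices below are hypothetical, and the view- and frequency-dependent feature enhancement, the LiDAR densification step, and any downstream refinement network are omitted.

import torch
import torch.nn as nn


class MultiScalePointRasterizer(nn.Module):
    """Splats learnable per-point neural descriptors into image-space
    feature maps at several resolutions, with a hard z-buffer so the
    nearest point wins each pixel (a common neural point rendering scheme)."""

    def __init__(self, num_points: int, descriptor_dim: int = 8,
                 scales: tuple = (1, 2, 4)):
        super().__init__()
        # One learnable descriptor per 3D point, optimized jointly with
        # whatever network consumes the rendered feature maps.
        self.descriptors = nn.Parameter(0.01 * torch.randn(num_points, descriptor_dim))
        self.descriptor_dim = descriptor_dim
        self.scales = scales

    def forward(self, uv: torch.Tensor, depth: torch.Tensor,
                height: int, width: int) -> list:
        """uv: (N, 2) projected pixel coordinates (x, y) at full resolution.
        depth: (N,) camera-space depths used for the z-test.
        Returns one (descriptor_dim, H/s, W/s) feature map per scale s."""
        maps = []
        for s in self.scales:
            h, w = height // s, width // s
            px = (uv / s).long()
            valid = ((px[:, 0] >= 0) & (px[:, 0] < w) &
                     (px[:, 1] >= 0) & (px[:, 1] < h))
            idx = valid.nonzero(as_tuple=True)[0]
            flat = px[idx, 1] * w + px[idx, 0]  # flattened pixel index
            d = depth[idx]

            # Hard z-buffer: keep, per pixel, only the nearest point.
            zbuf = torch.full((h * w,), float("inf"))
            zbuf.scatter_reduce_(0, flat, d, reduce="amin", include_self=True)
            win = d <= zbuf[flat]

            fmap = torch.zeros(h * w, self.descriptor_dim)
            fmap[flat[win]] = self.descriptors[idx[win]]
            maps.append(fmap.t().reshape(self.descriptor_dim, h, w))
        return maps


# Usage sketch: 10,000 random points splatted into a 320x240 view at three scales.
points = 10_000
uv = torch.rand(points, 2) * torch.tensor([320.0, 240.0])
depth = 1.0 + 50.0 * torch.rand(points)
renderer = MultiScalePointRasterizer(num_points=points)
feature_maps = renderer(uv, depth, height=240, width=320)
for fm in feature_maps:
    print(fm.shape)  # (8, 240, 320), (8, 120, 160), (8, 60, 80)

Rendering the descriptors at multiple scales, as the abstract suggests, lets coarse maps fill the holes that sparse point coverage leaves in the full-resolution map.
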
Enhancing Scene Simulation for Autonomous Driving with Neural Point Rendering
2023-09-24
1321208 bytes
Conference paper
Electronic Resource
English
Point-Based Neural Scene Rendering for Street Views | IEEE | 2024
Automatic driving scene data generation method and system based on implicit neural rendering | European Patent Office | 2023