Reconstructing dynamic traffic scenes has a wide range of applications in the development of modern autonomous driving systems. Recently, novel-view synthesis techniques such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3D GS) have emerged as promising paradigms for the reconstruction of 3D scenes. However, previous works in this area rely heavily on LiDAR to provide accurate geometric priors and motion cues across frames for reconstructing dynamic objects, which hinders the use of the vast amount of vision data collected by mass-production vehicles. In this paper, we propose VC-Gaussian, a novel vision-centric reconstruction framework based on 3D GS that enables high-quality novel-view synthesis and dynamic scene reconstruction for autonomous driving. A composite Gaussian model is designed to represent the background and foreground objects separately. To facilitate the initialization and optimization of Gaussians without LiDAR, we leverage easy-to-obtain monocular geometric priors, namely metric depth and surface normals. Experimental results on a real autonomous driving dataset demonstrate that our method outperforms other reconstruction methods with monocular vision inputs and is even competitive with LiDAR-based methods.
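The abstract's key vision-only ingredient is seeding the Gaussians from monocular metric depth and normal predictions instead of LiDAR. Below is a minimal sketch of one plausible initialization step, assuming a pinhole camera with intrinsics K; the function names, the subsampling stride, and the flattened-covariance heuristic are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def unproject_depth(depth, K):
    """Unproject a metric depth map (H, W) into camera-space points (H*W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1)

def normal_to_rotation(n):
    """Rotation whose third column is the unit normal n (the shortest Gaussian axis)."""
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(a, n)
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return np.stack([t, b, n], axis=-1)

def init_gaussians_from_priors(depth, normals, K, stride=4):
    """Seed Gaussian means from depth and flatten each Gaussian along its normal."""
    H, W = depth.shape
    pts = unproject_depth(depth, K).reshape(H, W, 3)[::stride, ::stride].reshape(-1, 3)
    nrm = normals[::stride, ::stride].reshape(-1, 3)
    nrm = nrm / (np.linalg.norm(nrm, axis=-1, keepdims=True) + 1e-8)
    rots = np.stack([normal_to_rotation(n) for n in nrm])  # (N, 3, 3)
    scales = np.tile([0.05, 0.05, 0.01], (len(pts), 1))    # thin along the normal
    return pts, rots, scales
```

Unprojecting the depth map gives candidate Gaussian means, and orienting the shortest covariance axis along the predicted normal encodes a surface-like prior; the paper's composite model would presumably then assign these Gaussians to the background or to per-object foreground groups.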
VC-Gaussian: Vision-Centric Gaussian Splatting for Dynamic Autonomous Driving Scenes
2024-09-24
1460324 bytes
Conference paper
Electronic Resource
English
Spatiotemporal Gaussian mixture model to detect moving objects in dynamic scenes
British Library Online Contents | 2007