Stereo-LiDAR fusion is widely used in autonomous systems such as self-driving cars because the two modalities complement each other. Existing stereo-LiDAR fusion methods operate mostly at the feature level or the outcome level, without considering the uncertainty of the depth estimate in each modality. To this end, we propose a holistic and contextual evidential stereo-LiDAR fusion network (HCENet) for depth estimation, which considers both intra-modality and inter-modality uncertainties arising from stereo matching and LiDAR point cloud depth completion. We design a dual-branch structure consisting of a stereo matching branch and a LiDAR depth completion branch, each equipped with a newly introduced uncertainty estimation module. Specifically, a multi-scale depth-guided feature aggregation module first enables information propagation at the early input stage; the predicted depths from the two branches are then fused based on their evidential uncertainties to produce the final output. Extensive experiments on the KITTI depth completion and Virtual KITTI2 datasets achieve RMSEs of 599.3 and 2253.1, respectively, outperforming the state-of-the-art SLFNet by 6.52% and 20.7%.
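The abstract describes fusing the depth predictions of the two branches according to their evidential uncertainties. The following is a minimal, illustrative sketch of such uncertainty-weighted fusion, assuming each branch has an evidential (Normal-Inverse-Gamma) regression head as in deep evidential regression; the function names, the epistemic-uncertainty formula beta / (nu * (alpha - 1)), the toy parameter values, and the inverse-uncertainty weighting are assumptions for illustration, not the paper's actual HCENet implementation.

import numpy as np

def evidential_uncertainty(nu, alpha, beta):
    # Epistemic uncertainty of the predicted mean under a Normal-Inverse-Gamma
    # evidential head: Var[mu] = beta / (nu * (alpha - 1)).
    return beta / (nu * (alpha - 1.0))

def fuse_depths(d_stereo, d_lidar, u_stereo, u_lidar, eps=1e-6):
    # Per-pixel inverse-uncertainty weighting: the more confident branch dominates.
    w_s = 1.0 / (u_stereo + eps)
    w_l = 1.0 / (u_lidar + eps)
    return (w_s * d_stereo + w_l * d_lidar) / (w_s + w_l)

# Toy 2x2 example with hypothetical per-pixel NIG parameters for each branch:
# stereo is confident on the left column, LiDAR on the right.
nu_s, alpha_s = np.full((2, 2), 10.0), np.full((2, 2), 3.0)
nu_l, alpha_l = np.full((2, 2), 10.0), np.full((2, 2), 3.0)
beta_s = np.array([[0.2, 4.0], [0.2, 4.0]])
beta_l = np.array([[4.0, 0.2], [4.0, 0.2]])
u_stereo = evidential_uncertainty(nu_s, alpha_s, beta_s)
u_lidar = evidential_uncertainty(nu_l, alpha_l, beta_l)
d_stereo = np.array([[10.0, 10.0], [12.0, 12.0]])
d_lidar = np.array([[11.0, 11.0], [13.0, 13.0]])
print(fuse_depths(d_stereo, d_lidar, u_stereo, u_lidar))

In this sketch, each output pixel is pulled toward the branch with the lower evidential uncertainty, which is the general idea behind uncertainty-aware late fusion of stereo and LiDAR depth estimates.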
Holistic and Contextual Evidential Stereo-LiDAR Fusion for Depth Estimation
IEEE Transactions on Intelligent Vehicles; 9(11); 7437-7448
01.11.2024
4747011 bytes
Article (Journal)
Electronic Resource
English
Evidential occupancy grid mapping with stereo-vision
IEEE | 2015
Fusion of Lidar and Stereo Point Clouds using Bayesian Networks
Deutsches Zentrum für Luft- und Raumfahrt (DLR) | 2018