In this work, a simple yet effective deep neural network is proposed to generate a dense depth map of the scene by exploiting both the sparse LiDAR point cloud and the monocular camera image. Specifically, a feature pyramid network is first employed to extract feature maps from images across time. The relative pose is then calculated by minimizing the feature distance between aligned pixels in inter-frame feature maps. Finally, the feature maps and the relative pose are used to compute a feature-metric loss for training the depth completion network. The key novelty of this work is a self-supervised mechanism that trains the depth completion network directly from visual-LiDAR odometry between consecutive frames. Comprehensive experiments and ablation studies on the KITTI benchmark dataset demonstrate superior performance over other state-of-the-art methods in both pose estimation and depth completion. Detailed results of the proposed approach (referred to as SelfCompDVLO) can be found on the KITTI depth completion benchmark. The source code, models, and data have been made available on GitHub.
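To make the training signal described in the abstract concrete, the sketch below shows one way a feature-metric loss of this kind can be computed in PyTorch: target-frame pixels are back-projected with the predicted dense depth, transformed by the estimated relative pose, re-projected into the source frame, and the feature distance between the aligned pixels is penalized. This is a minimal sketch under assumed conventions, not the paper's released implementation; all names (feature_metric_loss, feat_t, feat_s, depth_t, T_t2s, K) and the L1 distance are illustrative assumptions.

```python
# Minimal sketch of a feature-metric loss via differentiable warping (PyTorch).
# Assumed, illustrative interface; not the authors' released code.
import torch
import torch.nn.functional as F

def feature_metric_loss(feat_t, feat_s, depth_t, T_t2s, K):
    """Warp source-frame features into the target frame using the predicted
    dense depth and relative pose, then penalize the feature distance.

    feat_t, feat_s: (B, C, H, W) feature maps of the target/source frames
    depth_t:        (B, 1, H, W) predicted dense depth of the target frame
    T_t2s:          (B, 4, 4) relative pose from target to source camera
    K:              (B, 3, 3) camera intrinsics
    """
    B, C, H, W = feat_t.shape
    device = feat_t.device

    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(3, -1)

    # Back-project target pixels to 3-D camera points: X = depth * K^-1 * pix.
    cam = torch.linalg.inv(K) @ pix.unsqueeze(0).expand(B, -1, -1)  # (B, 3, HW)
    cam = cam * depth_t.view(B, 1, -1)

    # Transform into the source frame and project with the intrinsics.
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    cam_s = (T_t2s @ cam_h)[:, :3]                                  # (B, 3, HW)
    proj = K @ cam_s
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)  # clamp avoids divide-by-0

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)

    warped = F.grid_sample(feat_s, grid, padding_mode="border",
                           align_corners=True)

    # Mean feature distance between aligned pixels (L1 here as an assumption).
    return (feat_t - warped).abs().mean()
```

Because the warp is differentiable in both the depth map and the pose, minimizing this distance can supervise the depth completion network (and the pose) without ground-truth dense depth, which is the self-supervised mechanism the abstract describes.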
Self-Supervised Depth Completion From Direct Visual-LiDAR Odometry in Autonomous Driving
IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 8, pp. 11654-11665
2022-08-01
5,583,079 bytes
Article (Journal)
Electronic Resource
English
Visual Odometry Integrated Semantic Constraints towards Autonomous Driving
SAE Technical Papers | 2022
Heterogeneous Multi-Threaded Visual Odometry in Autonomous Driving
European Patent Office | 2023
Visual Odometry Integrated Semantic Constraints towards Autonomous Driving
British Library Conference Proceedings | 2022