Visual Odometry (VO) estimation is an important source of information for vehicle state estimation and autonomous driving. Recently, deep learning-based approaches have begun to appear in the literature. However, in the context of driving, single-sensor approaches are often prone to failure because of degraded image quality caused by environmental factors, camera placement, etc. To address this issue, we propose a deep sensor fusion framework which estimates vehicle motion using both pose and uncertainty estimates from multiple onboard cameras. We extract spatio-temporal feature representations from a set of consecutive images using a hybrid CNN-RNN model. We then utilise a Mixture Density Network (MDN) to estimate the 6-DoF pose as a mixture of distributions, and a fusion module to estimate the final pose from the MDN outputs of the multiple cameras. We evaluate our approach on the publicly available, large-scale autonomous vehicle dataset nuScenes. The results show that the proposed fusion approach surpasses the state of the art and provides robust estimates and accurate trajectories compared with individual camera-based estimations.
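To make the described pipeline concrete, the following is a minimal PyTorch sketch of the two ideas named in the abstract: an MDN head that predicts the 6-DoF relative pose as a mixture of Gaussians, and a fusion of per-camera estimates guided by the predicted uncertainty. This is not the authors' released code; the component count, feature dimension, and the simple inverse-variance fusion rule (standing in for the paper's learned fusion module) are illustrative assumptions.

```python
# Sketch only: assumed shapes and a hand-written fusion rule, not the paper's implementation.
import torch
import torch.nn as nn


class MDNPoseHead(nn.Module):
    """Maps a spatio-temporal feature vector to a K-component Gaussian mixture
    over the 6-DoF relative pose (3 translation + 3 rotation parameters)."""

    def __init__(self, feat_dim: int = 512, n_components: int = 3, pose_dim: int = 6):
        super().__init__()
        self.k, self.d = n_components, pose_dim
        self.pi = nn.Linear(feat_dim, n_components)                     # mixture weights
        self.mu = nn.Linear(feat_dim, n_components * pose_dim)          # component means
        self.log_sigma = nn.Linear(feat_dim, n_components * pose_dim)   # diag std (log)

    def forward(self, feat: torch.Tensor):
        pi = torch.softmax(self.pi(feat), dim=-1)                       # (B, K)
        mu = self.mu(feat).view(-1, self.k, self.d)                     # (B, K, 6)
        sigma = self.log_sigma(feat).view(-1, self.k, self.d).exp()     # (B, K, 6)
        return pi, mu, sigma


def fuse_cameras(pis, mus, sigmas):
    """Fuse per-camera MDN outputs into one pose by inverse-variance weighting
    of each camera's mixture mean (a simple stand-in for the learned fusion module)."""
    poses, weights = [], []
    for pi, mu, sigma in zip(pis, mus, sigmas):
        mean = (pi.unsqueeze(-1) * mu).sum(dim=1)                                   # (B, 6)
        var = (pi.unsqueeze(-1) * (sigma ** 2 + mu ** 2)).sum(dim=1) - mean ** 2    # (B, 6)
        poses.append(mean)
        weights.append(1.0 / (var + 1e-6))
    poses, weights = torch.stack(poses), torch.stack(weights)                       # (C, B, 6)
    return (weights * poses).sum(dim=0) / weights.sum(dim=0)
```

In a nuScenes-style setup, each of the surround-view cameras would feed its own feature sequence through an MDN head, and the per-camera mixtures would then be combined as above, so that cameras reporting high uncertainty contribute less to the fused pose.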
Multi-Camera Sensor Fusion for Visual Odometry using Deep Uncertainty Estimation
19.09.2021
1548355 bytes
Article (Conference)
Electronic resource
English
Uncertainty Estimation for Stereo Visual Odometry | British Library Conference Proceedings | 2021
Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry | Deutsches Zentrum für Luft- und Raumfahrt (DLR) | 2020
Deep 4D Automotive Radar-Camera Fusion Odometry with Cross-Modal Transformer Fusion | SAE Technical Papers | 2023
IEEE | 2022