This paper proposes a novel method for establishing correspondences between visual measurements and map points in a visual-inertial navigation system. The algorithm minimizes the photometric error at sparse locations in the image and gains robustness by eliminating the need for feature extraction during correspondence. The system is compared to a standard feature-extraction-based approach within a visual-inertial EKF formulation. High-fidelity simulation results show that the proposed method reduces the horizontal RMS error by increasing the number of features the algorithm is able to correspond.
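As a rough illustration of the sparse photometric alignment described in the abstract (not the paper's actual formulation, which couples the minimization with the visual-inertial EKF), the Python sketch below refines a single patch correspondence by Gauss-Newton minimization of the intensity error over a 2D translation. All function names and parameters here are hypothetical.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate image intensities at sub-pixel (x, y) locations."""
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

def align_patch(ref_img, cur_img, center, half=4, iters=10):
    """Refine the location of one sparse patch in cur_img by Gauss-Newton
    minimization of the photometric error against the reference patch."""
    cy, cx = center
    ys, xs = np.mgrid[cy - half:cy + half + 1, cx - half:cx + half + 1]
    template = ref_img[ys, xs].astype(float).ravel()
    cur = cur_img.astype(float)
    p = np.zeros(2)  # patch translation (du, dv) in the current image
    for _ in range(iters):
        u, v = xs + p[0], ys + p[1]
        warped = bilinear(cur, u, v).ravel()
        # Image gradients at the warped locations (central differences)
        gx = 0.5 * (bilinear(cur, u + 1, v) - bilinear(cur, u - 1, v)).ravel()
        gy = 0.5 * (bilinear(cur, u, v + 1) - bilinear(cur, u, v - 1)).ravel()
        J = np.stack([gx, gy], axis=1)   # Jacobian of intensities w.r.t. (du, dv)
        r = warped - template            # photometric residual
        dp = np.linalg.lstsq(J, -r, rcond=None)[0]
        p += dp
        if np.linalg.norm(dp) < 1e-3:
            break
    return (cx + p[0], cy + p[1])
```

In a full system, the refined patch locations would serve as the visual measurements fed to the filter; this sketch only shows the per-patch photometric minimization step.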
Direct feature correspondence in vision-aided inertial navigation for unmanned aerial vehicles
2017-06-01
1532187 bytes
Conference paper
Electronic Resource
English
Vision-Aided Inertial Navigation for Pose Estimation of Aerial Vehicles
British Library Conference Proceedings | 2009
A model aided inertial navigation system for automatic landing of unmanned aerial vehicles
British Library Online Contents | 2018
Vision-aided terrain referenced navigation for unmanned aerial vehicles using ground features
SAGE Publications | 2014