Object-based visual-inertial odometry (VIO) is widely applied to localization services for ground vehicles and robots, but it has largely been confined to pure-vision navigation research because object proposals suffer from large errors in dynamic scenes. To address this limitation, this paper proposes a LiDAR-aided object-based visual-inertial odometry for ground vehicles in dynamic scenes, which refines object detection by adjusting bounding box heights on top of a coarse frustum-based joint detection from vision and LiDAR. In addition, we model the Euclidean distance of anchored points between consecutive states and encode it as an anchored residual in the final sliding-window optimization. Both an ablation study and comparisons against state-of-the-art visual-inertial algorithms are carried out on the UrbanNav dataset, the Loco sequences, and self-collected data. The experimental results demonstrate that in dynamic scenes with multiple objects, the proposed odometry achieves a 20.7% improvement in localization accuracy, along with more intuitive object proposals and enhanced robustness.
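As a rough sketch (our own illustration, not the paper's exact formulation), the anchored residual described in the abstract can be read as penalizing the displacement of an anchored object point between consecutive sliding-window states $k$ and $k+1$, where $\mathbf{p}^{a}_{k}$ denotes the estimated position of the anchored point at state $k$ (symbols here are assumptions for illustration only):

$$r_{\mathrm{anchor}}(k) \;=\; \bigl\lVert \mathbf{p}^{a}_{k+1} - \mathbf{p}^{a}_{k} \bigr\rVert_{2}$$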
LiDAR-Aided Object Visual-Inertial Odometry Using Anchored Residual in Dynamic Scene
IEEE Transactions on Intelligent Transportation Systems; 26(7); 10146-10159
2025-07-01
2027676 bytes
Article (Journal)
Electronic Resource
English