Long-term visual place recognition (VPR) has recently become a popular research topic in autonomous driving. In urban scenarios, variations in scene appearance caused by changes in season and illumination pose great challenges for scene description. Several learning-based VPR techniques can learn latent descriptors that are invariant to appearance variations and show excellent performance on long-term VPR tasks. However, these methods require large datasets and substantial computational resources (e.g., GPUs) for training and inference, which mobile platforms such as autonomous vehicles often cannot provide. To address this issue, this paper proposes a training-free, lightweight global image descriptor for VPR, named SSR-VLAD, which runs accurately in real time without GPUs, even on embedded platforms. The contribution of this work is twofold: (1) a novel semantic skeleton representation (SSR) is proposed to describe the semantic spatial distribution of a scene using its semantic spatial context; (2) inspired by the Vector of Locally Aggregated Descriptors (VLAD), a spatial-temporal aggregation framework is constructed to aggregate all SSR features into a single SSR-VLAD descriptor, which encodes spatial and temporal information into a fixed-size global descriptor. SSR-VLAD is robust to appearance variations of scenes. On three public datasets with challenging urban scenes, experimental results show that SSR-VLAD achieves VPR performance competitive with several state-of-the-art (SoTA) VPR methods, and it attains SoTA real-time computational performance with lower RAM consumption in computationally constrained scenarios.
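
To make the VLAD-style aggregation mentioned above concrete, the following is a minimal, generic sketch of VLAD aggregation in Python/NumPy. It is not the paper's implementation: the SSR feature extraction, the choice of cluster centres (vocabulary), and the spatial-temporal weighting used in SSR-VLAD are not specified in the abstract, so the function below only illustrates how local descriptors are hard-assigned to centres, residuals are accumulated, and the result is normalised into one fixed-size global descriptor.

    import numpy as np

    def vlad_aggregate(local_feats, centroids):
        # local_feats: (N, D) local descriptors extracted from one frame (or a
        # short sequence of frames); centroids: (K, D) cluster centres acting
        # as the vocabulary. Returns a fixed-size (K*D,) global descriptor.
        dists = np.linalg.norm(local_feats[:, None, :] - centroids[None, :, :], axis=2)
        nearest = np.argmin(dists, axis=1)          # hard-assign each feature

        K, D = centroids.shape
        vlad = np.zeros((K, D))
        for k in range(K):
            members = local_feats[nearest == k]
            if len(members):
                # accumulate residuals between features and their cluster centre
                vlad[k] = (members - centroids[k]).sum(axis=0)

        # intra-normalise each cluster's residual block, then L2-normalise globally
        norms = np.linalg.norm(vlad, axis=1, keepdims=True)
        norms[norms == 0.0] = 1.0
        vlad = (vlad / norms).ravel()
        total = np.linalg.norm(vlad)
        return vlad / total if total > 0 else vlad

    # Usage: the fixed-size descriptors of a query and a database place can then
    # be compared with a simple L2 or cosine distance for place retrieval.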


    Title:

    A Training-Free, Lightweight Global Image Descriptor for Long-Term Visual Place Recognition Toward Autonomous Vehicles


    Contributors:
    Nie, Jiwei (author) / Feng, Joe-Mei (author) / Xue, Dingyu (author) / Pan, Feng (author) / Liu, Wei (author) / Hu, Jun (author) / Cheng, Shuai (author)


    Publication date:

    2024-02-01


    Size:

    2,926,292 bytes




    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English




    Similar titles:

    Monocular Visual Localization for Autonomous Vehicles Based on Lightweight Landmark Map

    Xue, Feng / Zhuo, Guirong / Fu, Wufei | SAE Technical Papers | 2022



    Monocular Visual Localization for Autonomous Vehicles Based on Lightweight Landmark Map

    Zhuo, Guirong / Fu, Wufei / Xue, Feng | British Library Conference Proceedings | 2022


    Weak Supervised Hierarchical Place Recognition with VLAD-Based Descriptor

    Fang, Kai / Wang, Yafei / Li, Zexing | SAE Technical Papers | 2022