Visual place recognition (VPR) aims to enable robots to identify and localize specific places within an environment using visual cues, facilitating navigation, mapping, and context-aware applications. However, deeper networks impose a heavy computational burden on robots and greatly impede the development of real-time robot applications. Moreover, most prior distillation studies either learn from only a single teacher, ignoring the possibility of students learning from multiple teachers, or fail to account for the fact that teachers place varying importance on specific examples. To address these issues, we propose a novel adaptive soft-hard label teaching, feature-level knowledge distillation framework, named ASHT-KD, for all-day mobile robot VPR tasks. The framework learns a compact and fast all-day place recognizer through knowledge transfer from several teachers to a small number of students. Specifically, depending on the complexity of the environment, teachers impart knowledge to two types of students in two teaching modes, soft-label teaching and hard-label teaching: one type of student learns a new and uncomplicated environment (the query image), while the other learns a more complex environment (the database images). To balance memory consumption and performance, the teacher network is designed as a two-level down-sampling ViT pipeline, while the Siamese student network is a lightweight pipeline consisting of only a one-level down-sampling ViT for place matching. In addition, a cross-entropy loss is introduced to further improve VPR performance by strengthening the correlation between the feature representations from the two Siamese branches. Extensive experiments demonstrate the effectiveness and superiority of ASHT-KD, and its practicality is further verified through outdoor testing.
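The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of how a soft-label teaching loss, a hard-label teaching loss, and feature-level alignment could be combined, assuming a PyTorch setting. The function names, the routing of the query-branch student to soft labels and the database-branch student to hard labels, and the hyperparameters T and alpha are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of soft- vs. hard-label teaching combined with
# feature-level distillation. Names, routing, and weights are assumed
# for illustration; they are not taken from the ASHT-KD paper.
import torch
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label teaching: match the teacher's temperature-softened
    class distribution (the standard KD objective)."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def hard_label_loss(student_logits, teacher_logits):
    """Hard-label teaching: treat the teacher's argmax prediction
    as a pseudo ground-truth label."""
    pseudo_labels = teacher_logits.argmax(dim=-1)
    return F.cross_entropy(student_logits, pseudo_labels)

def feature_alignment_loss(student_feat, teacher_feat):
    """Feature-level distillation: align intermediate representations."""
    return F.mse_loss(student_feat, teacher_feat)

def asht_kd_loss(query_out, db_out, teacher_out, alpha=0.5):
    """Assumed routing: the query-branch student (simpler environment)
    is taught with soft labels, the database-branch student (more
    complex environment) with hard labels; both align features."""
    l_query = soft_label_loss(query_out["logits"], teacher_out["logits"])
    l_db = hard_label_loss(db_out["logits"], teacher_out["logits"])
    l_feat = (feature_alignment_loss(query_out["feat"], teacher_out["feat"])
              + feature_alignment_loss(db_out["feat"], teacher_out["feat"]))
    return l_query + l_db + alpha * l_feat

# Toy usage with random tensors (batch of 8, 128 classes, 256-d features).
t = {"logits": torch.randn(8, 128), "feat": torch.randn(8, 256)}
q = {"logits": torch.randn(8, 128), "feat": torch.randn(8, 256)}
d = {"logits": torch.randn(8, 128), "feat": torch.randn(8, 256)}
loss = asht_kd_loss(q, d, t)
```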
Feature-Level Knowledge Distillation for Place Recognition Based on Soft-Hard Labels Teaching Paradigm
IEEE Transactions on Intelligent Transportation Systems; 26(2): 2091–2101
2025-02-01
Article (Journal)
Electronic Resource
English