"Angle sensitivity" aggravates the difficulty of radar-based omnidirectional human motion recognition. Earlier work addresses this issue by training on omnidirectional radar data, but that practice requires labor-intensive radar measurements and a time-consuming annotation process. To tackle this issue, this article first introduces active learning to the radar-based omnidirectional recognition problem and presents a hybrid-uncertainty active learning method that significantly reduces the annotation expense required to train an omnidirectional motion classifier. To cope with complex motions and varying angles, we propose a pixelwise similarity assessment methodology in addition to semantic-based sampling; this approach significantly alleviates the "imbalanced sampling" issue in active learning by rebalancing the selected samples across categories. Furthermore, a hybrid-uncertainty dimension is introduced to quantify the uncertainty of unlabeled samples at both the pixel and semantic levels; it is evaluated from three perspectives: a consistency factor, a difficulty factor, and pixelwise similarity. The experimental results show that our algorithm achieves a recognition accuracy of 76.06% using only 40% of the labeled data, a decrease of only 0.15% from the accuracy achieved with 100% labeled data. Our approach surpasses six state-of-the-art active learning methods on the omnidirectional problem, and ablation studies confirm the efficacy of each component of our model.
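The abstract names the three scoring perspectives (consistency factor, difficulty factor, pixelwise similarity) and the category-rebalanced selection, but gives no formulas. The sketch below is a minimal illustration of how such a hybrid-uncertainty score and balanced selection could be composed, assuming common choices for each term: prediction disagreement between two stochastic forward passes for the consistency factor, predictive entropy for the difficulty factor, and pixel-space distance to the labeled pool for pixelwise similarity. All function names, signatures, and weighting choices here are hypothetical, not the authors' implementation.

```python
import numpy as np

def hybrid_uncertainty(probs_a, probs_b, pixels, labeled_pixels):
    """Illustrative hybrid-uncertainty score for unlabeled samples.

    probs_a, probs_b : (N, C) softmax outputs from two stochastic
        forward passes (e.g., different augmentations or dropout masks).
    pixels           : (N, D) flattened spectrogram pixels, unlabeled pool.
    labeled_pixels   : (M, D) flattened spectrogram pixels, labeled pool.
    """
    # Consistency factor (assumed form): disagreement between the two
    # predictions; higher disagreement -> higher semantic uncertainty.
    consistency = np.abs(probs_a - probs_b).sum(axis=1)

    # Difficulty factor (assumed form): entropy of the mean prediction.
    mean_probs = 0.5 * (probs_a + probs_b)
    difficulty = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

    # Pixelwise similarity (assumed form): distance to the nearest
    # labeled sample in pixel space; distant samples are more novel.
    dists = np.linalg.norm(
        pixels[:, None, :] - labeled_pixels[None, :, :], axis=2)
    novelty = dists.min(axis=1)

    # Normalize each term to [0, 1] so no single factor dominates.
    def norm(x):
        return (x - x.min()) / (np.ptp(x) + 1e-12)

    return norm(consistency) + norm(difficulty) + norm(novelty)

def select_balanced(scores, pseudo_labels, budget, num_classes):
    """Pick high-scoring samples while rebalancing across predicted
    categories, mitigating "imbalanced sampling". If a class has fewer
    candidates than its quota, the budget is simply under-spent."""
    per_class = budget // num_classes
    chosen = []
    for c in range(num_classes):
        idx = np.where(pseudo_labels == c)[0]
        top = idx[np.argsort(scores[idx])[::-1][:per_class]]
        chosen.extend(top.tolist())
    return chosen

if __name__ == "__main__":
    # Toy usage on random data standing in for radar spectrograms.
    rng = np.random.default_rng(0)
    N, M, C, D = 200, 50, 6, 64
    pa = rng.dirichlet(np.ones(C), N)
    pb = rng.dirichlet(np.ones(C), N)
    scores = hybrid_uncertainty(pa, pb,
                                rng.normal(size=(N, D)),
                                rng.normal(size=(M, D)))
    pseudo = (0.5 * (pa + pb)).argmax(axis=1)
    picked = select_balanced(scores, pseudo, budget=30, num_classes=C)
    print(f"selected {len(picked)} samples for annotation")
```

The per-class quota is the simplest way to realize the rebalancing described in the abstract; the paper itself may weight or threshold the three factors differently.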
Omnidirectional Human Motion Recognition With Monostatic Radar System Using Active Learning
IEEE Transactions on Aerospace and Electronic Systems; 61, 2; 3456-3469
2025-04-01
5136716 bytes
Article (Journal)
Electronic Resource
English