Recognizing human actions accurately from video data is difficult because of the variability in human motion, appearance, and context. This study proposes a new approach to human action recognition that combines a Variational Autoencoder (VAE) with a Transformer model. The VAE, trained with IWGECO optimization, learns a compact and discriminative representation of the 2D skeletons extracted from video frames, together with their spatial relations. This encoded representation of the 2D poses is then fed into a Transformer for sequence modeling and classification, yielding effective recognition of human actions. The proposed approach is evaluated on the MPOSE2021 benchmark dataset, where it achieves an average classification accuracy of 92.0%, outperforming existing methods (90.6%). The model also performs well on real-life data, demonstrating its ability to generalize. The study further addresses the domain-shift problem that arises in real-world settings by using a pre-trained VAE and subsequently training the Transformer on the MPOSE2021 dataset. The results demonstrate the effectiveness of combining VAE and Transformer models for human action recognition.
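The abstract describes a two-stage pipeline: a VAE encodes each frame's 2D skeleton into a compact latent vector, and a Transformer classifies the resulting latent sequence. The following is a minimal PyTorch-style sketch of that pipeline under stated assumptions; the class names, layer sizes, joint count, and latent dimension are illustrative choices, not the authors' implementation, and the IWGECO training objective is omitted.

```python
# Sketch of the described pipeline (assumptions: 17 joints, 32-dim latent,
# mean-pooled Transformer features; none of these come from the paper).
import torch
import torch.nn as nn

class PoseVAE(nn.Module):
    """Encodes one 2D skeleton (flattened x,y joints) into a latent vector."""
    def __init__(self, n_joints=17, latent_dim=32):
        super().__init__()
        in_dim = n_joints * 2                       # (x, y) per joint
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)     # posterior mean
        self.fc_logvar = nn.Linear(128, latent_dim) # posterior log-variance
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(z), mu, logvar

class ActionTransformer(nn.Module):
    """Classifies a sequence of per-frame latent vectors into an action."""
    def __init__(self, latent_dim=32, n_classes=20, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(latent_dim, n_classes)

    def forward(self, z_seq):                       # (batch, frames, latent)
        h = self.encoder(z_seq)
        return self.head(h.mean(dim=1))             # pool over time, classify

# Usage: encode each frame with the (pre-trained, frozen) VAE encoder,
# then classify the latent sequence with the Transformer.
vae, clf = PoseVAE(), ActionTransformer()
poses = torch.randn(8, 30, 17 * 2)                  # batch of 30-frame clips
with torch.no_grad():
    mu, _ = vae.encode(poses)                       # per-frame latents
logits = clf(mu)                                    # (8, n_classes)
```

Freezing the pre-trained VAE and training only the Transformer, as sketched in the usage above, mirrors the domain-shift strategy the abstract describes.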
Improved ELBO-assisted Transformer for Skeleton-Based Action Recognition
2023-09-24
8172729 bytes
Conference paper
Electronic Resource
English