This paper addresses the problem of classifying motion signals acquired via wearable sensors for human activity recognition. Automatic and accurate classification of motion signals is important for developing effective automated health monitoring systems for the elderly. We gather hip motion signals from two different waist-mounted sensors and, for each sensor, convert the motion signal into a sequence of spectral images. These images are used as inputs to independently train two Convolutional Neural Networks (CNNs), one for each of the image sequences generated from the two sensors. The outputs of the trained CNNs are then fused to predict the final class of the human activity. We evaluate the performance of the proposed method using a cross-subject testing protocol. Our method achieves a recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset, surpassing the result reported by another state-of-the-art method on the same dataset.
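For readers unfamiliar with this kind of two-stream pipeline, the sketch below illustrates the general shape of the approach the abstract describes: each sensor's motion signal is turned into a spectral image, a separate CNN scores each image, and the two per-sensor class scores are fused into one prediction. This is not the authors' code; the layer sizes, spectrogram parameters, number of classes, and the score-averaging fusion rule are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

NUM_CLASSES = 6  # assumed number of activity classes, not taken from the paper

def signal_to_spectral_image(sig, fs=50.0, size=64):
    """Convert a 1-D motion signal into a fixed-size log-spectrogram 'image'."""
    _, _, sxx = spectrogram(sig, fs=fs, nperseg=64, noverlap=32)
    img = np.log1p(sxx)  # compress the dynamic range of the power spectrum
    img = torch.tensor(img, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
    # Resize so every signal window yields the same image shape for the CNN.
    return nn.functional.interpolate(img, size=(size, size),
                                     mode="bilinear", align_corners=False)

class SmallCNN(nn.Module):
    """One CNN per sensor; the paper's actual architecture is not specified here."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

cnn_a, cnn_b = SmallCNN(), SmallCNN()  # one network per waist-mounted sensor

# Random signals stand in for one window of each sensor's recording.
sig_a, sig_b = np.random.randn(256), np.random.randn(256)
logits_a = cnn_a(signal_to_spectral_image(sig_a))
logits_b = cnn_b(signal_to_spectral_image(sig_b))

# Late fusion: average the per-sensor class probabilities, then take argmax.
probs = (logits_a.softmax(dim=1) + logits_b.softmax(dim=1)) / 2
predicted_class = probs.argmax(dim=1)
print(predicted_class)
```

Averaging softmax outputs is just one common late-fusion strategy; the paper may combine the CNN outputs differently.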
Deep human activity recognition using wearable sensors
05.06.2019
In: Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments (pp. 45-48). ACM: New York, NY, USA. (2019)
Paper
Electronic resource
English
DDC: 629
Similar items:
Comparison of Different Sets of Features for Human Activity Recognition by Wearable Sensors (BASE, 2018)
Enhancing Activity Recognition of Self-Localized Robot Through Depth Camera and Wearable Sensors (BASE, 2018)