Cross-view person identification (CVPI) from multiple temporally synchronized videos captured by different wearable cameras from differing viewpoints is a challenging but important problem that has recently attracted increasing attention. The current state-of-the-art CVPI performance is achieved by matching appearance and motion features across videos, while matching pose features has proven less effective because 3D human pose estimation on videos/images taken in the wild is highly unreliable. To address this issue, a new confidence measure is first introduced for each body joint in the estimated 3D human pose. In CVPI pose matching, joints with higher confidence are given larger weights. Finally, the estimated pose information is combined with the appearance and motion features to further improve CVPI accuracy. The proposed approach is evaluated on three wearable-camera video datasets and compared against several existing CVPI methods. The experimental results show that the proposed confidence measure is effective and that combining pose, appearance, and motion leads to a new state-of-the-art CVPI performance.
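As a rough illustration of the confidence-weighted pose matching described in the abstract, the sketch below computes a per-joint confidence-weighted distance between two estimated 3D poses and fuses it with appearance and motion similarities. The function names, the way the two confidence scores are combined, and the fusion weights are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def weighted_pose_distance(pose_a, conf_a, pose_b, conf_b):
    """Confidence-weighted distance between two estimated 3D poses.

    pose_a, pose_b: (J, 3) arrays of estimated 3D joint positions.
    conf_a, conf_b: (J,) per-joint confidence scores in [0, 1].
    Joints that both estimates trust contribute more to the match.
    """
    # Combine the two per-joint confidences (the product is one simple choice).
    w = conf_a * conf_b
    w = w / (w.sum() + 1e-8)                     # normalize weights across joints
    d = np.linalg.norm(pose_a - pose_b, axis=1)  # per-joint Euclidean distance
    return float(np.sum(w * d))

def combined_score(app_sim, motion_sim, pose_dist, alpha=0.4, beta=0.4, gamma=0.2):
    """Fuse appearance, motion, and pose cues into one matching score.

    app_sim and motion_sim are similarities (higher = more alike); pose_dist is
    a distance, so it enters with a negative sign. The weights alpha, beta, and
    gamma are placeholders, not values reported in the paper.
    """
    return alpha * app_sim + beta * motion_sim - gamma * pose_dist
```

A pair of views would then be matched by computing `combined_score` for every candidate identity and selecting the highest-scoring one; the key idea carried over from the abstract is only that unreliable joints are down-weighted before the pose cue is fused with appearance and motion.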
A Novel Methodology for Human Posture Recognition Using CVPI
02.12.2021
3947149 bytes
Conference paper
Electronic resource
English
Validation methodology development for predicted posture | Kraftfahrwesen | 2006
Validation Methodology Development for Predicted Posture | British Library Conference Proceedings | 2007
Posture Recognition and Segmentation from 3D Human Body Scans | British Library Conference Proceedings | 2002
Motion and posture recognition for identifying human emotional reactions | British Library Online Contents | 2015