Automatically understanding human actions from video sequences is a challenging problem. It involves extracting relevant visual information from a video sequence, representing that information in a suitable form, and interpreting it for recognition and learning. We first present a view-invariant representation of action consisting of dynamic instants and intervals, computed from the spatiotemporal curvature of a motion trajectory. Our system then uses this representation to learn human actions without any prior training. The system automatically segments video into individual actions and computes a view-invariant representation for each. Starting with no model, it incrementally learns different actions and discovers instances of the same action performed by different people and seen from different viewpoints. To validate our approach, we present results on video clips in which roughly 50 actions were performed by five people from different viewpoints; the system correctly interpreted most of these actions.
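The abstract describes the core computation only in prose. The following is a minimal Python sketch of that idea, assuming a tracked 2-D point (e.g., a hand centroid) sampled once per frame: the trajectory (x(t), y(t), t) is treated as a 3-D space curve, its spatiotemporal curvature is computed, and candidate dynamic instants are taken at local curvature maxima. The function names and the `min_prominence` threshold are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def spatiotemporal_curvature(x, y):
    """Curvature of the spatiotemporal curve (x(t), y(t), t).

    x, y: 1-D arrays of tracked coordinates at uniform frame
    intervals (dt = 1). Real tracker output would typically be
    smoothed before differentiation.
    """
    xp, yp = np.gradient(x), np.gradient(y)      # first derivatives
    xpp, ypp = np.gradient(xp), np.gradient(yp)  # second derivatives
    # With A = (x', y', 1) and B = (x'', y'', 0):
    # kappa = |A x B| / |A|^3
    cross = np.sqrt(ypp**2 + xpp**2 + (xp * ypp - xpp * yp) ** 2)
    speed = np.sqrt(xp**2 + yp**2 + 1.0)
    return cross / speed**3

def dynamic_instants(kappa, min_prominence=0.05):
    """Frames at local curvature maxima: candidate dynamic instants.

    min_prominence is a hypothetical noise threshold, not a value
    from the paper.
    """
    return [t for t in range(1, len(kappa) - 1)
            if kappa[t] > kappa[t - 1]
            and kappa[t] >= kappa[t + 1]
            and kappa[t] > min_prominence]

# Example: a sharp turn in the trajectory yields a curvature peak.
t = np.arange(20, dtype=float)
x = np.where(t < 10, t, 10.0)   # move right, then stop
y = np.where(t < 10, 0.0, t - 10)  # then move up
print(dynamic_instants(spatiotemporal_curvature(x, y)))  # ~frame 10
```

Intervals would then be the trajectory segments between consecutive detected instants, which is consistent with the representation the abstract outlines.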
View-invariant representation and learning of human action
01.01.2001
1174533 bytes
Article (Conference)
Electronic Resource
English
View-Invariant Representation and Learning of Human Action
British Library Conference Proceedings | 2001
View-Invariant Representation and Recognition of Actions
British Library Online Contents | 2002
View invariant action recognition using projective depth
British Library Online Contents | 2014
View invariant action recognition using weighted fundamental ratios
British Library Online Contents | 2013
Improving face representation learning with center invariant loss
British Library Online Contents | 2018