We present a computational framework for labeling the effort of an action according to the performer's perceived level of exertion (low to high). The approach first factorizes examples of an action performed at different efforts into their three-mode principal components to reduce dimensionality. A learning phase then computes expressive-feature weights that adjust the model's effort estimates to conform to perceptual labels given for the examples. Experiments demonstrate recognition of the efforts of a person carrying bags of different weights and of multiple people walking at different paces.
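The abstract's two stages, a three-mode PCA factorization for dimensionality reduction followed by learning expressive-feature weights against perceptual labels, might be sketched roughly as follows in NumPy. The tensor layout (examples x pose features x time), the chosen ranks, and the regularized least-squares fit of the weights are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def mode_unfold(X, mode):
    # Unfold a 3-way tensor along the given mode into a matrix.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def three_mode_pca(X, ranks):
    # Higher-order SVD: an orthonormal basis per mode plus the reduced core tensor.
    U = [np.linalg.svd(mode_unfold(X, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = np.einsum('ift,ia,fb,tc->abc', X, U[0], U[1], U[2])
    return core, U

def fit_effort_weights(features, labels, reg=1e-3):
    # Regularized least-squares fit of expressive-feature weights to effort labels.
    F, y = np.asarray(features), np.asarray(labels)
    return np.linalg.solve(F.T @ F + reg * np.eye(F.shape[1]), F.T @ y)

# Hypothetical data: 20 examples of one action, 30 pose features, 50 time samples,
# each example paired with a scalar perceived-effort label (0 = low, 1 = high).
X = np.random.rand(20, 30, 50)
labels = np.linspace(0.0, 1.0, 20)

_, bases = three_mode_pca(X, ranks=(10, 8, 12))
# Keep the example mode intact: project each example's feature-by-time slice onto
# the learned feature and time bases, then flatten into an expressive-feature vector.
reduced = np.einsum('ift,fb,tc->ibc', X, bases[1], bases[2]).reshape(len(X), -1)
weights = fit_effort_weights(reduced, labels)
predicted_effort = reduced @ weights  # model's effort estimate per example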
Recognizing human action efforts: an adaptive three-mode PCA framework
Proceedings of the Ninth IEEE International Conference on Computer Vision; pp. 1463-1469, vol. 2
2003-01-01
3742437 bytes
Conference paper
Electronic Resource
English
Recognizing Human Action Efforts: An Adaptive Three-Mode PCA Framework
British Library Conference Proceedings | 2003
Recognizing 50 human action categories of web videos
British Library Online Contents | 2013
Recognizing Planned, Multiperson Action
British Library Online Contents | 2001
Human eyebrow recognition in the matching-recognizing framework
British Library Online Contents | 2013
Recognizing Action at a Distance
British Library Conference Proceedings | 2003