We present an approach to detecting and recognizing spoken isolated phrases based solely on visual input. We adopt an architecture that first employs discriminative detection of visual speech and articulatory features, and then performs recognition using a model that accounts for the loose synchronization of the feature streams. Discriminative classifiers detect the subclass of lip appearance corresponding to the presence of speech, and further decompose it into features corresponding to the physical components of articulatory production. These components often evolve in a semi-independent fashion, and conventional viseme-based approaches to recognition fail to capture the resulting co-articulation effects. We present a novel dynamic Bayesian network with a multi-stream structure and observations consisting of articulatory feature classifier scores, which can model varying degrees of co-articulation in a principled way. We evaluate our visual-only recognition system on a command utterance task, and show comparative results on lip detection and speech/non-speech classification, as well as recognition performance against several baseline systems.
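To make the loose-synchronization idea concrete, the sketch below (not the authors' exact DBN) decodes two feature streams that each advance monotonically through their own unit states, with per-frame observations standing in for articulatory-feature classifier scores; the streams' state indices may drift apart by at most `max_async` positions. The function name, score matrices, and shapes are all illustrative assumptions, not from the paper.

```python
# A minimal sketch of decoding under a loose-synchrony constraint.
# All names and shapes below are illustrative, not the authors' code.
import numpy as np

def loosely_synced_viterbi(log_obs_a, log_obs_b, max_async=1):
    """Best joint log-score for two monotone state sequences.

    log_obs_a: (T, Na) array, log-score of stream A's state i at frame t.
    log_obs_b: (T, Nb) array, log-score of stream B's state j at frame t.
    """
    T, Na = log_obs_a.shape
    _, Nb = log_obs_b.shape
    delta = np.full((Na, Nb), -np.inf)       # delta[i, j]: best score so far
    delta[0, 0] = log_obs_a[0, 0] + log_obs_b[0, 0]
    for t in range(1, T):
        new = np.full((Na, Nb), -np.inf)
        for i in range(Na):
            for j in range(Nb):
                if abs(i - j) > max_async:   # loose-synchrony constraint
                    continue
                # Each stream independently stays put or advances one state.
                best_prev = max(delta[pi, pj]
                                for pi in (i - 1, i) if pi >= 0
                                for pj in (j - 1, j) if pj >= 0)
                if best_prev > -np.inf:
                    new[i, j] = best_prev + log_obs_a[t, i] + log_obs_b[t, j]
        delta = new
    return delta[-1, -1]                     # both streams end in last state

# Example: random classifier scores for a 20-frame utterance, 5 units/stream.
rng = np.random.default_rng(0)
print(loosely_synced_viterbi(rng.standard_normal((20, 5)),
                             rng.standard_normal((20, 5)), max_async=1))
```

Setting `max_async=0` recovers a fully synchronous, viseme-like model, while larger values admit more inter-stream asynchrony, which is one simple way to read the abstract's claim of modeling varying degrees of co-articulation.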
Visual speech recognition with loosely synchronized feature streams
Tenth IEEE International Conference on Computer Vision (ICCV'05), Vol. 2, pp. 1424-1431
2005
198526 bytes
Conference paper
Electronic Resource
English