Videotext recognition is challenging due to low resolution, diverse fonts and styles, and cluttered backgrounds. Past methods have enhanced recognition through multiple-frame averaging, image interpolation, and lexicon correction, but recognition using multi-modality language models has not been explored. In this paper, we present a formal Bayesian framework for videotext recognition that combines multiple knowledge sources using mixture models, and we describe a learning approach based on Expectation-Maximization (EM). To handle unseen words, we also present a back-off smoothing approach derived from the Bayesian model. We built a prototype that fuses a word model derived from closed captions with one derived from the British National Corpus; the closed-caption model is based on a unique time-distance distribution between videotext words and closed-caption words. Our method achieves a significant performance gain, with a word recognition rate of 76.8% and a character recognition rate of 86.7%. The proposed methods also reduce false videotext detection significantly, yielding a false alarm rate of 8.2% without substantial loss of recall.
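For illustration only, here is a minimal sketch of the kind of mixture-model fusion the abstract describes: two hypothetical unigram word models (`p_cc`, standing in for the closed-caption model, and `p_bnc`, for the British National Corpus model) are combined as a weighted mixture, the mixture weight is estimated by EM, and out-of-vocabulary words back off to a small constant mass. All function names, parameters, and probabilities below are assumptions for the sketch; the paper's actual formulation (including the time-distance distribution) is not reproduced here.

```python
def mixture_prob(word, p_cc, p_bnc, a, backoff=1e-8):
    """P(word) = a * P_cc(word) + (1 - a) * P_bnc(word), with back-off.

    If neither model has seen the word, return a small constant
    back-off probability instead of zero (a crude stand-in for the
    paper's back-off smoothing).
    """
    p1 = p_cc.get(word, 0.0)
    p2 = p_bnc.get(word, 0.0)
    if p1 == 0.0 and p2 == 0.0:
        return backoff  # back-off mass for unseen words
    return a * p1 + (1.0 - a) * p2

def em_weight(words, p_cc, p_bnc, a=0.5, iters=20):
    """Estimate the mixture weight a by EM on a held-out word list."""
    for _ in range(iters):
        resp_sum, n = 0.0, 0
        for w in words:
            p1, p2 = p_cc.get(w, 0.0), p_bnc.get(w, 0.0)
            denom = a * p1 + (1.0 - a) * p2
            if denom == 0.0:
                continue  # skip words unseen by both models
            resp_sum += a * p1 / denom  # E-step: responsibility of model 1
            n += 1
        if n:
            a = resp_sum / n  # M-step: updated mixture weight
    return a

# Toy usage with made-up unigram probabilities.
p_cc = {"election": 0.4, "senate": 0.3, "vote": 0.3}
p_bnc = {"election": 0.1, "the": 0.5, "vote": 0.4}
a = em_weight(["election", "vote"], p_cc, p_bnc)
print(a, mixture_prob("election", p_cc, p_bnc, a))
```

The EM loop here is the standard single-weight mixture update: the E-step computes each word's responsibility under the first component, and the M-step sets the weight to the average responsibility.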
A Bayesian framework for fusing multiple word knowledge models in videotext recognition
2003-01-01
273248 bytes
Conference paper
Electronic Resource
English
A Bayesian Framework for Fusing Multiple Word Knowledge Models in Videotext Recognition
British Library Conference Proceedings | 2003
Kabeltext Dortmund - Videotext im internationalen Vergleich [Kabeltext Dortmund: videotext in international comparison]
TIBKAT | 1989
Fusing multiple sources with Bayesian networks to achieve accurate object descriptions [2589-11]
British Library Conference Proceedings | 1995