Videotext recognition is challenging due to low resolution, diverse fonts and styles, and cluttered backgrounds. Past methods have enhanced recognition by using multi-frame averaging, image interpolation, and lexicon correction, but recognition using multi-modality language models has not been explored. In this paper, we present a formal Bayesian framework for videotext recognition that combines multiple knowledge sources using mixture models, and we describe a learning approach based on Expectation-Maximization (EM). To handle unseen words, a back-off smoothing approach derived from the Bayesian model is also presented. We developed a prototype that fuses a word model derived from closed captions with one built from the British National Corpus. The closed-caption model is based on a unique model of the time-distance distribution between videotext words and closed-caption words. Our method achieves a significant performance gain, with a word recognition rate of 76.8% and a character recognition rate of 86.7%. The proposed methods also significantly reduce false videotext detection, yielding a false alarm rate of 8.2% without substantial loss of recall.
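The abstract outlines a general technique: word-probability models from several knowledge sources are fused as a mixture whose weights are learned with EM, and a back-off covers words unseen by every source. Below is a minimal Python sketch of that general mixture-plus-EM idea, not the authors' implementation; the function names, toy word counts, and fixed back-off mass are all illustrative assumptions.

from collections import Counter

def em_mixture_weights(models, words, iters=50):
    # Learn mixture weights lambda_k that maximize the likelihood of
    # `words` under P(w) = sum_k lambda_k * P_k(w)  (standard EM for
    # mixture interpolation of language models).
    k = len(models)
    lam = [1.0 / k] * k
    for _ in range(iters):
        resp = [0.0] * k  # expected counts (responsibilities) per model
        for w in words:
            joint = [lam[j] * models[j].get(w, 0.0) for j in range(k)]
            z = sum(joint)
            if z == 0.0:
                continue  # unseen by every model; left to the back-off
            for j in range(k):
                resp[j] += joint[j] / z
        total = sum(resp)
        if total > 0.0:
            lam = [r / total for r in resp]  # M-step: renormalize
    return lam

def fused_prob(word, models, lam, backoff_mass=1e-6):
    # Mixture probability, with a small constant back-off mass for words
    # unseen in all component models (a crude stand-in for the paper's
    # Bayesian-derived back-off smoothing).
    p = sum(l * m.get(word, 0.0) for l, m in zip(lam, models))
    return p if p > 0.0 else backoff_mass

# Toy usage with two hypothetical sources: a closed-caption word model
# and a general-corpus word model (the counts are made up).
cc = Counter({"news": 5, "weather": 3, "sports": 2})
bnc = Counter({"news": 50, "weather": 10, "the": 500})
models = [
    {w: c / sum(cc.values()) for w, c in cc.items()},
    {w: c / sum(bnc.values()) for w, c in bnc.items()},
]
lam = em_mixture_weights(models, ["news", "weather", "news", "sports"])
print("weights:", lam)
print("P(news):", fused_prob("news", models, lam))
print("P(zzz): ", fused_prob("zzz", models, lam))

In the paper itself, the mixture weights and the back-off smoothing are derived within the Bayesian framework rather than fixed by hand as in this sketch.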


    Title:

    A Bayesian framework for fusing multiple word knowledge models in videotext recognition


    Contributors:

    Zhang, D.-Q. / Chang, S.-F.


    Publication date:

    2003-01-01


    Size:

    273248 bytes


    Type of media:

    Conference paper


    Type of material:

    Electronic Resource


    Language:

    English



    Similar titles:

    A Bayesian Framework for Fusing Multiple Word Knowledge Models in Videotext Recognition

    Zhang, D.-Q. / Chang, S.-F. / IEEE | British Library Conference Proceedings | 2003


    Kabeltext Dortmund - Videotext im internationalen Vergleich [Videotext in international comparison]

    Nordrhein-Westfalen, Presse- und Informationsamt | TIBKAT | 1989




    Fusing multiple sources with Bayesian networks to achieve accurate object descriptions [2589-11]

    Davies, S. J. / Marshall, A. D. / Martin, R. R. et al. | British Library Conference Proceedings | 1995