We describe a linguistic postprocessor for character recognizers. The central module of our system is a trainable variable memory length Markov model (VLMM) that predicts the next character given a variable-length window of past characters. The overall system is composed of several finite-state automata, including the main VLMM and a proper-noun VLMM. The best model reported in the literature (Brown et al., 1992) achieves 1.75 bits per character on the Brown corpus. On that same corpus, our model, trained on ten times less data, reaches 2.19 bits per character and is 200 times smaller (≈160,000 parameters). The model was designed for handwriting recognition applications but could also be used for other OCR problems and for speech recognition.
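As a rough illustration of the prediction mechanism the abstract describes, the sketch below implements a character-level model in Python that backs off from longer to shorter contexts and reports bits per character. It is a simplified, assumption-laden sketch: the CharVLMM class name, the fixed maximum context length, and the add-one smoothing are illustrative choices, not the authors' pruned context-tree construction or the finite-state composition used in the paper.

import math
from collections import defaultdict

class CharVLMM:
    """Illustrative character predictor with longest-suffix backoff (not the paper's implementation)."""

    def __init__(self, max_order=5, alphabet=None):
        self.max_order = max_order                           # longest context length kept
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> next char -> count
        self.alphabet = set(alphabet) if alphabet else set()

    def train(self, text):
        for i, ch in enumerate(text):
            self.alphabet.add(ch)
            # record next-character counts for every suffix of the history, up to max_order
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                context = text[i - k:i]
                self.counts[context][ch] += 1

    def prob(self, context, ch):
        # back off to the longest stored suffix of the context; add-one smoothing
        for k in range(min(len(context), self.max_order), -1, -1):
            suffix = context[len(context) - k:]
            if suffix in self.counts:
                dist = self.counts[suffix]
                total = sum(dist.values())
                return (dist[ch] + 1) / (total + len(self.alphabet))
        return 1.0 / max(len(self.alphabet), 1)

    def bits_per_character(self, text):
        # average negative log2 probability: the metric quoted in the abstract
        bits = 0.0
        for i, ch in enumerate(text):
            context = text[max(0, i - self.max_order):i]
            bits += -math.log2(self.prob(context, ch))
        return bits / len(text)

model = CharVLMM(max_order=5)
model.train("the quick brown fox jumps over the lazy dog ")
print(model.bits_per_character("the lazy fox"))

Unlike this sketch, which stores every context up to a fixed maximum length, a true VLMM grows its context tree selectively, keeping a longer context only where it measurably improves prediction; that pruning is what keeps the model compact.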







    Title: Design of a linguistic postprocessor using variable memory length Markov models

    Contributors: Guyon, I. (author) / Pereira, F. (author)

    Publication date: 1995-01-01

    Size: 384,613 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Design of a Linguistic Postprocessor Using Variable Memory Length Markov Models

    Guyon, I. / Pereira, F. | British Library Conference Proceedings | 1995


    Postprocessor and vehicle

    HUANG KAI / YANG ZECHEN / ZHU GUANGZHEN et al. | European Patent Office | 2021


    Side-hung postprocessor hoop

    TANG LINSEN | European Patent Office | 2020


    Vehicle, postprocessor assembly and heavy truck type China VI postprocessor support

    XU SHUN / YANG XINLEI / LIU ZHEN et al. | European Patent Office | 2021


    Learning structured behaviour models using variable length Markov models

    Galata, A. / Johnson, N. / Hogg, D. | IEEE | 1999