We describe a linguistic postprocessor for character recognizers. The central module of our system is a trainable variable memory length Markov model (VLMM) that predicts the next character given a variable-length window of past characters. The overall system is composed of several finite state automata, including the main VLMM and a proper noun VLMM. The best model reported in the literature (Brown et al., 1992) achieves 1.75 bits per character on the Brown corpus. On the same corpus, our model, trained on one-tenth as much data, reaches 2.19 bits per character and is 200 times smaller (≈160,000 parameters). The model was designed for handwriting recognition applications but could also be used for other OCR problems and for speech recognition.
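The abstract gives no implementation detail, so the following is only an illustrative sketch of the general idea it describes: a next-character predictor that backs off from long contexts to shorter ones, evaluated by the bits-per-character measure used above to compare models. The class name `VLMM`, the `max_order` limit, the alphabet size, and the add-one smoothing are assumptions made for this sketch, not the construction, pruning, or training procedure used in the paper.

```python
from collections import defaultdict
import math


class VLMM:
    """Sketch of a variable memory length Markov model for next-character
    prediction: counts are kept for contexts of length 0..max_order, and
    prediction backs off to the longest context seen in training."""

    def __init__(self, max_order=5, alphabet_size=27):
        self.max_order = max_order
        self.alphabet_size = alphabet_size  # assumed alphabet size, used for smoothing
        # context string -> {next character -> count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Record every (context, next character) pair for contexts up to max_order.
        for i in range(len(text)):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                self.counts[text[i - k:i]][text[i]] += 1

    def prob(self, context, char):
        # Back off from the longest suffix of the context observed in training.
        for k in range(min(self.max_order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            if ctx in self.counts:
                dist = self.counts[ctx]
                total = sum(dist.values())
                # Add-one smoothing keeps unseen characters at nonzero probability.
                return (dist.get(char, 0) + 1) / (total + self.alphabet_size)
        return 1.0 / self.alphabet_size

    def bits_per_character(self, text):
        # Average code length -log2 p(c | context) over the evaluation text.
        total_bits = 0.0
        for i in range(len(text)):
            context = text[max(0, i - self.max_order):i]
            total_bits += -math.log2(self.prob(context, text[i]))
        return total_bits / len(text)


if __name__ == "__main__":
    model = VLMM(max_order=5)
    model.train("the cat sat on the mat and the cat ran")
    print(model.bits_per_character("the cat sat"))
```

In a practical VLMM the stored contexts are typically pruned, keeping a longer context only when it changes the predictive distribution appreciably; that pruning is what keeps the parameter count small, as in the ≈160,000-parameter model reported above.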
Design of a linguistic postprocessor using variable memory length Markov models
Proceedings of 3rd International Conference on Document Analysis and Recognition ; 1 ; 454-457 vol.1
1995-01-01
384613 bytes
Conference paper
Electronic Resource
English