A probabilistic neural network structure is designed for estimating the parameters of a standard finite normal mixture (SFNM) model in medical image analysis. The network employs an unsupervised learning scheme based on the unification of Bayesian and least-relative-entropy principles, and uses Bayes and maximum-likelihood neurons that adaptively update the local fuzzy variables in the classification space, allowing flexible boundary shapes. The optimal network size, and hence the number of regions in the SFNM model, is determined by various information-theoretic criteria, whose performance is compared on images with different stochastic characterizations. A Lloyd-Max quantizer is used to improve the initialization of this self-learning procedure. The learning technique is tested on both simulated and real medical images and is shown to be an efficient learning scheme.
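The abstract outlines a three-part pipeline: SFNM parameter estimation, model-order (number of regions) selection by information-theoretic criteria, and Lloyd-Max initialization. The sketch below is a minimal illustration of that pipeline, not the authors' network: the Bayes/maximum-likelihood neuron updates are approximated here by ordinary EM updates for a one-dimensional normal mixture of pixel intensities, the Lloyd-Max quantizer is implemented as 1-D Lloyd iteration, and AIC and MDL stand in for the unspecified "various information theoretic criteria". All function names and constants are illustrative assumptions.

```python
import numpy as np

def lloyd_max_init(x, K, iters=20):
    """Lloyd-Max quantizer on 1-D intensities: returns K representative levels."""
    levels = np.linspace(x.min(), x.max(), K)
    for _ in range(iters):
        # nearest-level assignment, then recompute each level as its cell mean
        idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
        for k in range(K):
            if np.any(idx == k):
                levels[k] = x[idx == k].mean()
    return levels

def fit_sfnm(x, K, iters=100):
    """EM estimation of SFNM parameters (mixing weights, means, variances)."""
    n = x.size
    mu = lloyd_max_init(x, K)          # Lloyd-Max levels seed the region means
    var = np.full(K, x.var() / K)
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: posterior probability of each region for every pixel
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = p / p.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of the SFNM parameters
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    # final mixture log-likelihood, used by the model-order criteria below
    ll = np.log((w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                 / np.sqrt(2 * np.pi * var)).sum(axis=1)).sum()
    return w, mu, var, ll

def choose_K(x, K_range=range(2, 8)):
    """Score each candidate K with AIC and MDL; return the K minimizing MDL."""
    n = x.size
    scores = []
    for K in K_range:
        _, _, _, ll = fit_sfnm(x, K)
        m = 3 * K - 1                   # K means + K variances + (K - 1) weights
        aic = -2 * ll + 2 * m
        mdl = -ll + 0.5 * m * np.log(n)
        scores.append((mdl, aic, K))
    return min(scores)[2]

# Usage on a synthetic "image": three tissue classes with different mean intensities.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(m, 5.0, 1000) for m in (40.0, 100.0, 170.0)])
print("selected number of regions:", choose_K(pixels))
```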
Probabilistic neural networks for medical image quantification
Proceedings of 1st International Conference on Image Processing; vol. 3, pp. 889-892
1994-01-01
312605 bytes
Conference paper
Electronic Resource
English
Probabilistic Neural Networks for Medical Image Quantification
British Library Conference Proceedings | 1994
Probabilistic Neural Networks Application for Vehicle Classification
Online Contents | 2006
Probabilistic Neural Networks Application for Vehicle Classification
British Library Online Contents | 2006