Training multilayer feedforward neural networks of hard-limiting units is difficult because the units' output functions are nondifferentiable. However, if zero-mean Gaussian noise is added to each weight, the weights become random variables with smooth distribution functions, and the expected network output is then a differentiable function of the weight means, so gradient methods can be used to optimize the weights. This method was first presented by P.L. Bartlett and T. Downs (1992). To evaluate its effectiveness, the authors ran experiments training one-, two- and three-layer feedforward neural networks. The results show that the method of Bartlett and Downs is effective for training one- and two-layer networks but very time-consuming for networks with three or more layers.
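
A minimal sketch of the smoothing idea for a single hard-limiting unit (not the authors' implementation; the noise level sigma, learning rate, squared-error loss and toy AND task are assumptions for illustration). With zero-mean Gaussian noise z ~ N(0, sigma^2 I) added to the weights, the pre-activation (w + z) . x is Gaussian with mean w . x and standard deviation sigma * ||x||, so the expected step output is the standard normal CDF Phi(w . x / (sigma * ||x||)), which is differentiable in the weight means w:

    import numpy as np
    from scipy.stats import norm

    def expected_output(w, x, sigma):
        # E[step((w + z) @ x)] with z ~ N(0, sigma^2 I):
        # the noisy pre-activation is N(w @ x, sigma^2 * ||x||^2),
        # so the probability it exceeds zero is a Gaussian CDF.
        return norm.cdf((w @ x) / (sigma * np.linalg.norm(x)))

    def gradient(w, x, target, sigma):
        # Gradient of the squared error 0.5 * (p - target)^2
        # w.r.t. the weight means w (chain rule through Phi).
        nx = np.linalg.norm(x)
        a = (w @ x) / (sigma * nx)
        p = norm.cdf(a)
        return (p - target) * norm.pdf(a) * x / (sigma * nx)

    # Toy one-layer example: learn AND, first input component is a bias.
    X = np.array([[1., 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
    T = np.array([0., 0, 0, 1])
    w, sigma, lr = np.zeros(3), 0.5, 0.5
    for _ in range(2000):
        for x, t in zip(X, T):
            w -= lr * gradient(w, x, t, sigma)
    print([round(expected_output(w, x, sigma), 2) for x in X])

The same smoothed gradient extends to multilayer networks by propagating expectations through each layer of noisy threshold units, which is where the cost reported for three or more layers arises.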





    Title :

    Some experiments on training multilayer feedforward neural networks of hard-limiting units using random weights


    Contributors:
    Fu Jin (author) / Zhen-Ming Chai (author)


    Publication date :

    1994-01-01


    Size :

    206497 bytes




    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English


    Similar items:

    Some Experiments on Training Multilayer Feedforward Neural Networks of Hard-Limiting Units Using Random Weights

    Jin, F. / Chai, Z. M. / IEEE; Hong Kong Chapter of Signal Processing | British Library Conference Proceedings | 1994


    Efficient Supervised Learning of Multilayer Feedforward Neural Networks

    Osowski, S. / Stodolski, M. / Bojarczak, P. et al. | British Library Conference Proceedings | 1994


    Efficient supervised learning of multilayer feedforward neural networks

    Osowski, S. / Stodolski, M. / Bojarczak, P. | IEEE | 1994



    Backpropagation Learning Using Positive Weights for Multilayer Optoelectronic Neural Networks

    IEEE; Lasers and Electro-Optics Society | British Library Conference Proceedings | 1996