Artificial neural networks (ANNs) have many characteristics that make them suitable for massively parallel computation: simple processing units (neurons), a small local memory requirement for each neuron, and highly parallel operations. Neural network implementation is therefore a natural target for massively parallel computing. This paper presents and discusses techniques for mapping a neural network algorithm onto a massively parallel computer system. The goal is to maximize parallelism by breaking the ANN computation into basic units and processing these units in parallel. The following strategies are discussed: (1) design special highly parallel computers for artificial neural networks; (2) map neural network algorithms directly onto existing general-purpose parallel computers; (3) map neural network algorithms onto optical-bus-based systems; (4) design new structured neural networks whose topologies resemble those of existing parallel systems; (5) use a divide-and-conquer technique to break a large neural network into many small ones, each of which is processed by a PC or workstation.
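As a rough illustration of the decomposition idea in the abstract (breaking the ANN computation into basic units that are processed in parallel), the following Python sketch is not taken from the paper; the function names, the tanh activation, and the use of NumPy are assumptions for illustration only. It splits the output neurons of one fully connected layer across a number of workers, each computing its own slice of y = f(Wx + b), and then concatenates the partial results, with the loop standing in for the processing elements of a massively parallel machine.

import numpy as np

def worker_slice(W_part, b_part, x):
    """Compute the activations of one worker's block of neurons (illustrative)."""
    return np.tanh(W_part @ x + b_part)

def parallel_layer(W, b, x, num_workers=4):
    """Split the rows of W (one row per output neuron) into num_workers blocks."""
    row_blocks = np.array_split(np.arange(W.shape[0]), num_workers)
    # On a real massively parallel system each block would be mapped to its own
    # processing element; the list comprehension here only simulates that mapping.
    parts = [worker_slice(W[rows], b[rows], x) for rows in row_blocks]
    return np.concatenate(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 64))
    b = rng.standard_normal(128)
    x = rng.standard_normal(64)
    y = parallel_layer(W, b, x)
    # The block-parallel result matches the serial layer computation.
    assert np.allclose(y, np.tanh(W @ x + b))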
On embeddings of neural networks into massively parallel computer systems
1997-01-01
680272 bytes
Conference paper
Electronic Resource
English
On Embedding of Neural Networks into Massively Parallel Computer Systems
British Library Conference Proceedings | 1997
Scalable Massively Parallel Artificial Neural Networks
AIAA | 2008
Scalable Massively Parallel Artificial Neural Networks
AIAA | 2005
Optical Interconnections for the Massively Parallel Computer
British Library Conference Proceedings | 1996