ABSTRACT
Different neural network architectures are widely described in the literature [W89,Z95,W96,WJK99,H99,WB01,W07]. Feedforward neural networks allow signals to flow in only one direction, and most of them are organized in layers. An example of a three-layer feedforward neural network is shown in Figure 6.1. This network consists of three input nodes, two hidden layers, and an output layer. Typical activation functions are shown in Figure 6.2. These continuous activation functions allow for gradient-based training of multilayer networks. It is usually difficult to predict the required size of a neural network, and the size is often found by trial and error. Another approach is to start with a neural network much larger than required and to reduce its size by applying one of the pruning algorithms [FF02,FFN01,FFJC09].

Figure 6.1: MLP-type architecture 3-3-4-1 (without connections across layers).

Figure 6.2: Typical activation functions: (a) bipolar and (b) unipolar.
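The bipolar and unipolar activation functions mentioned above can be sketched as follows. This is a minimal illustration, assuming the common choices of the hyperbolic tangent for the bipolar function and the logistic sigmoid for the unipolar one; the chapter's exact formulas may differ (e.g., include a gain parameter). The derivatives are written in terms of the output value, the form typically used in gradient-based training.

```python
import math

def unipolar(net):
    """Unipolar (logistic sigmoid) activation: output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def bipolar(net):
    """Bipolar (hyperbolic tangent) activation: output in (-1, 1)."""
    return math.tanh(net)

def unipolar_deriv(out):
    """Derivative of the unipolar function, expressed via its output."""
    return out * (1.0 - out)

def bipolar_deriv(out):
    """Derivative of the bipolar function, expressed via its output."""
    return 1.0 - out * out
```

Because both functions are continuous and differentiable, and their derivatives can be computed directly from the neuron's output, they are well suited to the gradient-based training of multilayer networks described in the text.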