## ABSTRACT

Consider each candidate solution of an optimization problem as a point in d-dimensional space, and call it a particle. Each particle updates itself iteratively according to equations (1) and (2), following two optimal points (two extrema) until the stopping criterion is reached [3]:

v_i = w · v_{i-1} + c_1 · r_1 · (pbest − x_{i-1}) + c_2 · r_2 · (gbest − x_{i-1}),  (1)

x_i = x_{i-1} + v_i,  (2)

where pbest and gbest are the current individual extreme point and the swarm's extreme point, respectively; v_i and v_{i-1} are the current and previous velocities of the particle, respectively; r_1 and r_2 are random numbers between 0 and 1; c_1 and c_2 are learning factors; and w is the inertia weight factor. In addition, the velocity is regulated as follows:

v = V_max,   if v > V_max
v = −V_max,  if v < −V_max  (3)
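One iteration of equations (1)-(3) can be sketched as follows. This is a minimal NumPy sketch; the function name `pso_step` and the default parameter values (w = 0.7, c1 = c2 = 2.0, V_max = 1.0) are illustrative assumptions, not values prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=1.0):
    """One PSO iteration per equations (1)-(3).

    x, v:   (K, d) arrays of particle positions and velocities
    pbest:  (K, d) per-particle best positions found so far
    gbest:  (d,)   best position found by the whole swarm
    """
    K, d = x.shape
    r1 = rng.random((K, d))  # r_1 in [0, 1), drawn per particle and dimension
    r2 = rng.random((K, d))  # r_2 in [0, 1)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # eq. (1)
    v_new = np.clip(v_new, -v_max, v_max)                          # eq. (3)
    x_new = x + v_new                                              # eq. (2)
    return x_new, v_new
```

Clamping the velocity before the position update keeps particles from overshooting the search region when the attraction terms are large.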

Proper parameter selection guarantees convergence of the iteration [4]. The concrete steps of training a neural network with PSO are as follows:

## 1 INTRODUCTION

Particle Swarm Optimization (PSO), introduced by Kennedy and Eberhart, draws inspiration from the social behavior of bird flocking and fish schooling, and has been applied successfully in many areas. Unlike traditional search algorithms, PSO, like other evolutionary computation techniques, operates on a population of potential solutions in the search space. Through cooperation and competition among these potential solutions, the technique can often find optima quickly. Furthermore, unlike the Genetic Algorithm (GA), PSO has no evolution operators such as crossover and mutation; it is therefore easy to implement and has few parameters to adjust. The neural network application described in ref. [1] showed that the particle swarm optimizer can train Artificial Neural Network (ANN) weights as effectively as the usual error backpropagation method. Intriguing informal indications are that the weights found by particle swarms sometimes generalize from a training set to a test set better than solutions found by gradient descent [2]. However, this PSO-based network training algorithm is still at an early stage of development. In some fields, experiments show that training an ANN with PSO is not an efficient way to optimize: for example, when applied to determining cement quality, it failed to perform a global search in nearly every run. Therefore, a modified PSO is presented and used to train the ANN. Its application shows that the new algorithm strikes some balance between global and local search.

1. Random initialization of the particle swarm. Suppose there are d weights and thresholds to be determined in a neural network with a single hidden layer. Each particle is then a point in d-dimensional space, and the number of particles is K. Initialize the velocity v_i = (v_{i1}, v_{i2}, …, v_{id}) and position x_i = (x_{i1}, x_{i2}, …, x_{id}) of each particle with random numbers between 0 and 1. The initial value of pbest for each particle is its initial x_i.
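The initialization step above can be sketched in NumPy as follows; the function name `init_swarm` is an illustrative assumption, and d stands for the total count of weights and thresholds of the single-hidden-layer network:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_swarm(K, d):
    """Step 1: randomly initialize K particles in d-dimensional space.

    Positions and velocities are drawn uniformly from [0, 1), matching the
    paper's initialization; each position vector encodes one candidate set
    of network weights and thresholds.
    """
    x = rng.random((K, d))   # positions = candidate weight/threshold vectors
    v = rng.random((K, d))   # initial velocities
    pbest = x.copy()         # initial personal best = initial position
    return x, v, pbest
```

Copying `x` into `pbest` (rather than aliasing it) matters: the personal bests must persist unchanged while the positions are updated each iteration.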