

Evolutionary predator and prey strategy for global optimization


J.J. Chen a, Q.H. Wu a,b, T.Y. Ji a, P.Z. Wu c, M.S. Li a,∗
a School of Electric Power Engineering, South China University of Technology, Guangzhou 510641, China
b Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ, UK
c Institute of Advanced Technology (SIAT), Chinese Academy of Sciences, Institute of Biomedical and Health Engineering, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen 518055, China

∗ Corresponding author. E-mail addresses: qhenrywu@gmail.com (Q.H. Wu), mengshili@scut.edu.cn (M.S. Li).

Article history: Received 29 January 2015; Revised 29 May 2015; Accepted 7 August 2015; Available online xxx

Keywords: Escaping; Evolutionary algorithm; Global optimization; Predator-prey; Safe location

Abstract: This paper presents a novel evolutionary predator and prey strategy (EPPS) for global optimization. The EPPS is based on a dynamic predator-prey model stemming from the study of animal group living behaviors. In the model, concepts of experienced predators, strategic predators, the prey and its safe location are developed to simulate three typical animal behaviors: scanning, hunting and escaping. To validate the applicability and practicability of EPPS, experiments were undertaken on a set of 20 benchmark functions and three real-world problems, respectively. The results show that EPPS has a superior performance in comparison with other recently developed methods reported in the literature.

© 2015 Published by Elsevier Inc.
http://dx.doi.org/10.1016/j.ins.2015.08.014

1. Introduction

In the past decade, nature has served as a fertile source of concepts, principles and mechanisms for designing various artificial computation systems to optimize many types of complex computational problems [23,54,8,3]. Based on studying and simulating the search behaviors of animals in the wild, several different types of evolutionary algorithms (EAs) have been developed. For instance, ant colony optimization (ACO), which develops from the collective intelligent behaviors of real ants, has been investigated in both applications and theory [5,9]. Differential evolution (DE) stems from the study of adaptation in natural and artificial systems and has been widely used for problem solving [50,40]. Artificial bee colony (ABC) is one of the most recent swarm intelligence based optimization algorithms, which simulates the foraging behavior of honey bee colonies [27,39]. Particle swarm optimization (PSO) takes inspiration from the social behaviors of bird flocking or fish schooling and has been shown to be an effective tool for solving global optimization problems [29,32]. Group search optimizer (GSO), which works on the basis of animal search behaviors and group living theory, has been successfully applied to solving large-scale multimodal benchmark functions and real-world problems [23,60]. Recently, teaching-learning-based optimization (TLBO) has been developed based on the influence of a teacher on the output of learners in a class [43,7]. It is well known that local exploitation and global exploration abilities are the two critical criteria for evaluating the performance of population-based optimization algorithms. However, most of the aforementioned EAs involve just one single species, which limits the interaction among multiple species in the evolutionary process. Therefore, the local exploitation and global exploration abilities of these EAs cannot be well balanced.

In the animal kingdom, group living is a widespread phenomenon and has been comprehensively studied during the last decade [24,46]. One consequence of living together is that group hunting allows group members to increase finding rates as well as to reduce the variance of search success [41].



Another consequence of living together is that group escaping is able to produce the dilution protection effect, which can provide a greater chance for group members to survive [49,19]. Dingoes and sheep are two representative group living species found mainly in Australia, as well as Southeast Asia [51]. As early as 1992, Thomson [51] argued that dingoes could adjust their hunting paths to suit circumstances and mainly possess two kinds of behaviors: a direct hunting behavior once locked on the sheep, and an exploratory behavior of communicating with other dingoes to adjust their hunting paths. In addition, Allen [1] argued that a dingo may have the ability to chase and outrun a sheep, only to turn away suddenly if the sheep escaped to a safe place or a more desirable sheep was found. Inspired by this natural phenomenon, this paper develops a dynamic predator-prey model, in which the concepts of experienced predators, strategic predators, the prey and its safe location are transplanted to design an evolutionary algorithm, and proposes an evolutionary predator and prey strategy (EPPS) to construct a good balance between the algorithm's local search and global search abilities.

The EPPS works on the philosophy of the predators' hunting and the prey's escaping, in which the interaction behaviors between the dingoes and the sheep have been comprehensively studied. In consideration of the two kinds of hunting behaviors of the dingoes and the escaping behavior of the sheep, this paper transplants the concepts of experienced predators, strategic predators, the prey and its safe location to simulate three typical scenarios: dingoes hunting, sheep scanning and sheep escaping. In EPPS, the experienced predators denote the dingoes associated with direct hunting and the strategic predators denote the dingoes that can adjust their hunting paths. The prey denotes the sheep pursued by the dingoes, and the place that the prey scans for is denoted as its safe location. Based on these concepts, an animal scanning strategy, an animal hunting strategy and an animal escaping strategy are employed metaphorically by EPPS to design the optimum search mechanisms. Numerical studies undertaken on a set of 20 well-known benchmark functions and three real-world problems show that EPPS has a superior performance in comparison with several other peer algorithms.

2. Evolutionary predator and prey strategy

The EPPS is a population-based optimization algorithm, which takes inspiration from the group living behaviors of dingo hunting and sheep escaping. The population of EPPS is called a group and each individual in the group is called a member. Each member represents a position vector in an n-dimensional search space and is randomly positioned in the beginning. Here, n is the dimension of the objective function. In each generation, the members of the group are classified into four different types, representing experienced predators, strategic predators, the prey and its safe location, to cope with three typical scenarios: predators hunting, prey scanning and prey escaping.

In EPPS, the member that corresponds to the best fitness value of the group is chosen as the prey, and the member that corresponds to the worst fitness value of the group is chosen as the safe location of the prey; the rest of the members are classified randomly as either experienced predators or strategic predators. The EPPS investigates three processes from the perspective of the predators' hunting and the prey's escaping. When a group of predators lock onto a prey, the experienced predators run experientially for hunting; the prey realizes the danger and tries to escape from its dilemma by scanning for its safe location; as for the strategic predators, they run strategically for hunting. The search behaviors of the predators and the prey are described in detail as follows.

2.1. Experienced predators' search mechanism

For each search generation, a number of group members are selected as the experienced predators. The experienced predators determine their search paths by accumulatively learning from the successful paths of the predators of the group. Here, the successful paths indicate the directions along which the fitness value decreases. In order to obtain a reliable estimator for the paths, the experienced predators adopt the concept of the adaptive covariance matrix [22]. The adaptive mechanism is based on the assumption that the successful evolutionary paths of the predators used in recent past generations may also be successful in the following generation. Gradually, the most suitable evolutionary paths can be developed automatically to guide the search behavior of each experienced predator in different evolutionary stages. The predatory behavior of the ith experienced predator at generation (g + 1) can be modeled as follows:

x_i^{(g+1)} = m^{(g)} + \sigma^{(g)} N(0, C^{(g)}), \quad i = 1, \ldots, \lambda \quad (1)
where m and C are the mean value and covariance matrix of the predators, respectively, developed from the position vectors of the predators of the group, N(0, C) corresponds to a multivariate normal distribution with zero mean and covariance matrix C, σ (σ > 0) is the step size and λ is the number of experienced predators.
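For illustration, the sampling step of Eq. (1) can be sketched as follows. This is a minimal Python/NumPy sketch, assuming the mean vector m, step size sigma and covariance matrix C are maintained elsewhere by a CMA-ES-style adaptation as in [22]; the function and variable names are ours, not those of the original implementation.

import numpy as np

def sample_experienced_predators(m, sigma, C, lam, rng=None):
    """Sample lam experienced predators according to Eq. (1):
    x_i = m + sigma * N(0, C), i = 1, ..., lam."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw lam correlated Gaussian steps with covariance matrix C.
    steps = rng.multivariate_normal(mean=np.zeros(len(m)), cov=C, size=lam)
    return m + sigma * steps

# Example: five experienced predators in a 3-dimensional search space.
x_new = sample_experienced_predators(m=np.zeros(3), sigma=0.5, C=np.eye(3), lam=5)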
During the search process of EPPS, if an experienced predator finds a better location than the current prey and the other predators, in the next search generation it will switch to be the prey and all the other predators, including the prey of the previous search generation, will perform the hunting mechanism; and if an experienced predator finds a worse location than the current safe location, in the next search generation it will switch to be the safe location and the prey performs the escaping mechanism with respect to this location. The prey and the strategic predators, which will be introduced in the following paragraphs, also implement these switching mechanisms in each search generation. Thus, different types of members can play different roles during each search generation, and even the same member can play different roles during different search generations. Thereupon, EPPS can escape from local minima in the earlier search bouts and obtain a good balance between its local exploitation and global exploration abilities.


2.2. Prey's search mechanism

During each search generation, x_p and x_s denote the prey and its safe location, respectively. When the prey is aware of its dangerous situation, it will scan for its safe location so as to escape from the dilemma.

In order to reach an efficient search performance, the basic scanning strategy inspired by white crappies [37], which is characterized by the maximum pursuit angle and maximum pursuit height, is employed to generate the scanning directions of the prey. Additionally, a scanning distance, shown in Eq. (5), has been proposed based on the position vectors of the prey and its safe location to shorten the prey's scope for searching, which enables the prey to explore the non-searched areas with a higher probability than that reached by scanning the whole search scope [23].

Based on the scanning directions and scanning distance, the prey initially scans at zero degree by using Eq. (2) and then scans laterally by randomly sampling two points in the scanning field by using Eqs. (3) and (4). At the (g + 1)th generation, the first position that the prey escapes to by scanning at zero degree is

x_z = x_p^{(g)} + l_{max}^{(g)} D_p^{(g)}(\varphi^{(g)}) \quad (2)

the second position, on the right hand side, is

x_r = x_p^{(g)} + l_{max}^{(g)} D_p^{(g)}(\varphi^{(g)} + r_1 \theta_{max}/2) \quad (3)

and the third position, on the left hand side, is

x_l = x_p^{(g)} + l_{max}^{(g)} D_p^{(g)}(\varphi^{(g)} - r_1 \theta_{max}/2) \quad (4)

where r_1 \in R^{n-1} is a uniformly distributed random sequence in the range (0,1), \varphi^{(g)} \in R^{n-1} is the heading angle and the unit vector D(\varphi) \in R^n can be calculated from \varphi via a polar to Cartesian coordinate transformation [36].
The scanning distance at the gth generation can be calculated as follows:

l_{max}^{(g)} = \| x_p^{(g)} - x_s^{(g)} \| = \sqrt{ \sum_{i=1}^{n} ( x_{p_i}^{(g)} - x_{s_i}^{(g)} )^2 } \quad (5)

where x_{p_i} and x_{s_i} are the elements of the ith dimension of x_p and x_s, respectively.
If the fitness value of the current prey is worse than one of the other members' fitness values, in the following generation a new prey will be chosen from the group, which will execute an escaping mechanism by turning its head to a new randomly generated angle

\varphi^{(g+1)} = \varphi^{(g)} + r_1 \alpha_{max} \quad (6)

where \alpha_{max} \in R^1 is the maximum turning angle.

If the fitness value of the current prey does not change after a generations, it will turn its head back to zero degree

\varphi^{(g+a)} = \varphi^{(g)} \quad (7)

where a \in R^1 is a constant.
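A compact sketch of the prey's scanning step, Eqs. (2)-(5), is given below. It is a simplified Python/NumPy illustration: the polar-to-Cartesian mapping D(φ) follows the transformation used in GSO [23,36], and all names are illustrative assumptions rather than code from the paper.

import numpy as np

def polar_to_cartesian(phi):
    """Map n-1 heading angles to an n-dimensional unit direction D(phi) [36]."""
    n = len(phi) + 1
    d = np.ones(n)
    for i in range(n):
        if i > 0:
            d[i] *= np.sin(phi[i - 1])
        d[i] *= np.prod(np.cos(phi[i:]))  # empty product gives 1 for the last component
    return d

def prey_scan(x_p, x_s, phi, theta_max, rng=None):
    """Generate the three scanned points of Eqs. (2)-(4) for the prey."""
    rng = np.random.default_rng() if rng is None else rng
    l_max = np.linalg.norm(x_p - x_s)                 # scanning distance, Eq. (5)
    r1 = rng.random(len(phi))                         # uniform random sequence in (0, 1)
    x_z = x_p + l_max * polar_to_cartesian(phi)                        # zero degree, Eq. (2)
    x_r = x_p + l_max * polar_to_cartesian(phi + r1 * theta_max / 2)   # right hand side, Eq. (3)
    x_l = x_p + l_max * polar_to_cartesian(phi - r1 * theta_max / 2)   # left hand side, Eq. (4)
    return x_z, x_r, x_l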

2.3. Strategic predators' search mechanism

The remaining members are selected as the strategic predators. Compared to the experienced predators, the strategic predators adjust their search paths according to the prey's position in each search bout. That is, the strategic predators do not run to the prey's position directly. Instead, they flock toward the prey's escaping direction, which is developed based on the position vectors of the prey and its safe location. At the (g + 1)th generation, the jth strategic predator can be described as

x_j^{(g+1)} = x_p^{(g)} - r_2 \cdot (x_p^{(g)} - x_s^{(g)}), \quad j = 1, \ldots, \mu. \quad (8)

However, in reality, the strategic predators may not be able to exactly follow the prey's escaping traces. In consideration of this, a perturbation vector has been added to Eq. (8) so as to maintain group diversity for jumping out of potential local optima:

x_j^{(g+1)} = x_p^{(g)} - r_2 \cdot (x_p^{(g)} - x_s^{(g)}) + r_3 \cdot (x_{t_1}^{(g)} - x_{t_2}^{(g)}) + r_4 \cdot (x_{t_3}^{(g)} - x_{t_4}^{(g)}) \quad (9)

where x_{t_1}, x_{t_2}, x_{t_3} and x_{t_4} are mutually different members randomly chosen from the set of strategic predators, r_2, r_3, r_4 are uniformly distributed random numbers in the range [0,1], and \mu is the number of strategic predators.

Eq. (9) can help the strategic predators explore more resources distributed around the prey and achieve their own historical best positions, rather than crowd around a prey that is likely to be associated with a local optimum. The strategic predator's search path is shown in Fig. 1.
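The strategic predators' update of Eq. (9) might be implemented as in the following sketch (a Python/NumPy illustration under our own assumptions: r2, r3 and r4 are drawn as scalars per predator, and the set of strategic predators contains at least four members).

import numpy as np

def strategic_predator_step(x_p, x_s, strategic_pop, rng=None):
    """Update all strategic predators according to Eq. (9)."""
    rng = np.random.default_rng() if rng is None else rng
    mu = strategic_pop.shape[0]           # number of strategic predators (assumed >= 4)
    new_pop = np.empty_like(strategic_pop)
    for j in range(mu):
        # Four mutually different members t1..t4 for the perturbation term.
        t1, t2, t3, t4 = rng.choice(mu, size=4, replace=False)
        r2, r3, r4 = rng.random(3)        # uniformly distributed factors in [0, 1)
        new_pop[j] = (x_p - r2 * (x_p - x_s)
                      + r3 * (strategic_pop[t1] - strategic_pop[t2])
                      + r4 * (strategic_pop[t3] - strategic_pop[t4]))
    return new_pop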
In order to maximize the chances of finding resources, animals adopt several strategies to restrict their search to a profitable patch. One of the most efficient strategies is turning back into a patch when its edge is detected [18]. In the current paper, EPPS employs this strategy to handle the bounded search space: when a member is outside the search space, it will turn back into the search space by setting the variables that violated the bounds to their previous values, as sketched below.
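The turn-back rule can be written in one line, as in this sketch (our own illustration): any dimension that leaves the feasible box is reset to its value from the previous generation.

import numpy as np

def turn_back(x_new, x_prev, lower, upper):
    """Reset out-of-bound variables of x_new to their previous values in x_prev."""
    violated = (x_new < lower) | (x_new > upper)
    return np.where(violated, x_prev, x_new)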


Fig. 1. The typical search path of the strategic predator.

Fig. 2. The typical search paths of the predators and the prey.

It can be seen from the above description that the search mechanisms of EPPS are similar to those of DE, PSO, or the covariance matrix adaptation evolution strategy (CMA-ES) [22]. However, there are many differences, with the most marked one being the distinction in concept. EPPS is inspired by animal search behavior and group living theory, which are adopted to develop an escaping mechanism and a classification mechanism to construct a good balance between its local search and global search abilities. Our EPPS consists of experienced predators, strategic predators, the prey and its safe location, which have never been used to develop an evolutionary computation algorithm. In addition, the search mechanisms of EPPS are radically different from those of DE and PSO. This is the novelty of the work presented in this paper. The predator-prey procedure over two adjacent generations in EPPS is presented in Fig. 2. It is worth mentioning that, in order to depict a dynamic predator-prey scenario, in this figure we artificially placed the prey and its safe location in two different positions in the two adjacent generations. Therefore, the experienced predators and the strategic predators adjust their search directions, respectively. The steps involved are presented in Algorithm 1.

Algorithm 1 The pseudocode of the EPPS algorithm.

1: Generate the initial group and set up parameters for each member;
2: Evaluate each member, and determine the prey and the safe location;
3: Set g := 0, the maximum number of generations := Max_Gen, the population size := Pop, and the index of the prey := index(0);
4: while g ≤ Max_Gen do
5:     Calculate the scanning distance, l_max^{(g)}, according to (5);
6:     for i = 1 : Pop do
7:         if i == index(g) then
8:             Perform the prey's search mechanism according to (2)-(4);
9:         else if rand < 0.3 then
10:            Perform the experienced predators' search mechanism according to (1);
11:        else
12:            Perform the strategic predators' search mechanism according to (9);
13:        end if
14:    end for
15:    Modify the position of each member to satisfy the constraints, if necessary;
16:    Calculate the fitness value of each member, and update the prey, the safe location and index;
17:    g = g + 1;
18: end while



Table 1
Twenty high-dimensional benchmark functions, where n is the dimension of the function, S is the search range, and f_min is the global minimum value of the function.

Unimodal functions:
f_1 = \sum_{i=1}^{n} x_i^2,  n = 30,  S = [-100, 100]^n,  f_min = 0
f_2 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|,  n = 30,  S = [-10, 10]^n,  f_min = 0
f_3 = \sum_{i=1}^{n} (\sum_{j=1}^{i} x_j)^2,  n = 30,  S = [-100, 100]^n,  f_min = 0
f_4 = (x_1 - 1)^2 + \sum_{i=2}^{n} i(2x_i^2 - x_{i-1})^2,  n = 30,  S = [-10, 10]^n,  f_min = 0
f_5 = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2,  n = 30,  S = [-100, 100]^n,  f_min = 0
f_6 = \sum_{i=1}^{n} i x_i^4 + random[0, 1),  n = 30,  S = [-1.28, 1.28]^n,  f_min = 0
f_7 = \sum_{i=1}^{n} i x_i^2,  n = 30,  S = [-10, 10]^n,  f_min = 0

Multimodal functions:
f_8 = \sum_{i=1}^{n-1} (100(x_i^2 - x_{i+1})^2 + (x_i - 1)^2),  n = 30,  S = [-30, 30]^n,  f_min = 0
f_9 = -\sum_{i=1}^{n} x_i \sin(\sqrt{|x_i|}),  n = 30,  S = [-500, 500]^n,  f_min = -12569.5
f_10 = \sum_{i=1}^{n} (x_i^2 - 10\cos(2\pi x_i) + 10),  n = 30,  S = [-5.12, 5.12]^n,  f_min = 0
f_11 = -20 \exp(-0.2 \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}) - \exp(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)) + 20 + e,  n = 30,  S = [-32, 32]^n,  f_min = 0
f_12 = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos(\frac{x_i}{\sqrt{i}}) + 1,  n = 30,  S = [-600, 600]^n,  f_min = 0
f_13 = \frac{\pi}{n} \{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10\sin^2(\pi y_{i+1})] + (y_n - 1)^2\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4),  y_i = 1 + \frac{1}{4}(x_i + 1),  n = 30,  S = [-50, 50]^n,  f_min = 0
f_14 = 0.1 \{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 [1 + \sin^2(3\pi x_{i+1})] + (x_n - 1)^2 [1 + \sin^2(2\pi x_n)]\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4),  n = 30,  S = [-50, 50]^n,  f_min = 0

Rotated and shifted multimodal functions:
f_15 = \sum_{i=1}^{n-1} (100(z_i^2 - z_{i+1})^2 + (z_i - 1)^2),  z = M(\frac{2.048(x - o)}{100}) + 1,  n = 30,  S = [-100, 100]^n,  f_min = 0
f_16 = -20 \exp(-0.2 \sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}) - \exp(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi z_i)) + 20 + e,  z = M(x - o),  n = 30,  S = [-100, 100]^n,  f_min = 0
f_17 = \sum_{i=1}^{n} \sum_{k=0}^{k_{max}} [a^k \cos(2\pi b^k (z_i + 0.5))] - n \sum_{k=0}^{k_{max}} [a^k \cos(2\pi b^k \cdot 0.5)],  a = 0.5, b = 3, k_{max} = 20,  z = M(\frac{0.5(x - o)}{100}),  n = 30,  S = [-100, 100]^n,  f_min = 0
f_18 = \frac{1}{4000}\sum_{i=1}^{n} z_i^2 - \prod_{i=1}^{n} \cos(\frac{z_i}{\sqrt{i}}) + 1,  z = M(\frac{600(x - o)}{100}) + 1,  n = 30,  S = [-100, 100]^n,  f_min = 0
f_19 = \frac{10}{n^2} \prod_{i=1}^{n} (1 + i \sum_{j=1}^{32} \frac{|2^j z_i - round(2^j z_i)|}{2^j})^{10/n^{1.2}} - \frac{10}{n^2},  z = M(\frac{5(x - o)}{100}) + 1,  n = 30,  S = [-100, 100]^n,  f_min = 0
f_20 = g(z_1, z_2) + g(z_2, z_3) + \ldots + g(z_{n-1}, z_n) + g(z_n, z_1),  g(x, y) = 0.5 + \frac{\sin^2(\sqrt{x^2 + y^2}) - 0.5}{(1 + 0.001(x^2 + y^2))^2},  z = M(x - o) + 1,  n = 30,  S = [-100, 100]^n,  f_min = 0

In f_13 and f_14,
u(x_i, k_1, k_2, k_3) = k_2 (x_i - k_1)^{k_3} if x_i > k_1;  0 if -k_1 \leq x_i \leq k_1;  k_2 (-x_i - k_1)^{k_3} if x_i < -k_1.

In f_15 - f_20, o is a shifted vector and M is a transformation matrix. Please refer to [34].

3. Numerical studies

3.1. Experimental settings

To evaluate the applicability of EPPS, we conduct experiments on 20 canonical benchmark functions in the 30-dimensional case. These test functions, which are shown in Table 1, can be classified into three groups. The first seven functions, f_1 - f_7, are unimodal functions. As the preservation of diversity in many EAs comes at the cost of slower convergence, the unimodal functions are used to test whether EPPS has the fast-convergence feature. The next seven functions, f_8 - f_14, are multimodal functions with many local optima. These functions are used to test the global search ability of EPPS in avoiding premature convergence. Finally, the last six functions, f_15 - f_20, are shifted and rotated with more complex characteristics, and can be used to compare the performance of different algorithms in a more systematic manner [8].

All the simulations are carried out in MATLAB 7.11 on an Intel Core i5 3.1 GHz computer with 4 GB RAM. During each run, the maximum number of function evaluations is set to 150,000 for f_1 - f_14 and 300,000 for f_15 - f_20, respectively.


Table 2
Comparison of EPPS with CMA-ES, DE, GSO and SPSO on benchmark functions f1 − f7 . All
results have been averaged over 30 runs.

f Algorithms EPPS CMA-ES DE GSO SPSO

1 Mean 0 0 0 1.9481E-8 0
Std. 0 0 0 1.1629E-8 0
h − 0 0 1 0
2 Mean 0 0 0 3.7039E-5 0
Std. 0 0 0 8.6185E-5 0
h − 0 0 1 0
3 Mean 0 0 8.7670E-6 5.7829 6.1780E-10
Std. 0 0 1.3560E-6 3.6813 7.5592E-10
h − 0 1 1 1
4 Mean 0 0.6667 0.8664 0.1078 0.7125
Std. 0 0 5.4937E-3 3.9981E-2 2.5918E-4
h − 1 1 1 1
5 Mean 0 0 0 1.6000E-2 0.9667
Std. 0 0 0 0.1333 1.2726
h − 0 0 1 1
6 Mean 1.1069E-5 0.2180 1.0816E-2 7.3773E-2 3.6191E-3
Std. 1.1103E-5 0.1692 1.1105E-2 9.2557E-2 4.8209E-2
h − 1 1 1 1
7 Mean 0 0 0 3.8926E-8 0
Std. 0 0 0 6.7362E-8 0
h − 0 0 1 0
(#+, #−, # ∼ ) − (2,0,5) (3,0,4) (7,0,0) (4,0,3)

In order to make a coherent comparison, fitness values below 10^{-16} are assumed to be 0 in all experiments. To validate the effectiveness of EPPS, we compared EPPS with the group search optimizer (GSO) [23], the latest standard PSO (SPSO) [38], the covariance matrix adaptation evolution strategy (CMA-ES) [22] and differential evolution (DE) [50]. The selection of these EAs for the comparison is based on the following reasons. For one thing, CMA-ES and SPSO are characterized by their fast-converging feature on simple unimodal functions. Therefore, by comparing EPPS with CMA-ES and SPSO, we can learn whether EPPS presents the fast-converging feature. For another, GSO and DE are representative and well-performing algorithms in terms of global search ability. Thus, by comparing EPPS with GSO and DE, we can learn whether EPPS can prevent premature convergence while still maintaining the fast-convergence feature. To reduce statistical errors, each test is repeated 30 times independently.

The parameters of EPPS are set as follows: a = round(\sqrt{n + 1}) (round(X) rounds the elements of X to the nearest integers), \theta_{max} = \pi / a^2, \alpha_{max} = \theta_{max} / 2, the population size is 200, \sigma^{(0)} = 0.5, and the percentage of the strategic predators is 30%. These parameter settings are, empirically, applied to all the benchmark functions used in this paper, and are collected in the snippet below for reference. The parameter settings of GSO, DE, CMA-ES and SPSO used in the comparisons can be found in the original papers [23,50,22,38], respectively.
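The settings above can be gathered in a small helper such as the following sketch; the dictionary keys are our own naming, not part of the original code.

import numpy as np

def epps_parameters(n):
    """Default EPPS parameters used for all benchmark functions in this paper."""
    a = int(np.round(np.sqrt(n + 1)))      # generations before the heading is reset, Eq. (7)
    theta_max = np.pi / a**2               # maximum pursuit angle
    return {
        "a": a,
        "theta_max": theta_max,
        "alpha_max": theta_max / 2,        # maximum turning angle, Eq. (6)
        "pop_size": 200,
        "sigma0": 0.5,                     # initial step size of Eq. (1)
        "strategic_fraction": 0.3,         # percentage of strategic predators
    }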

3.2. Results analysis

Tables 2-4 list the mean and standard deviation of the fitness values obtained by EPPS, CMA-ES, DE, GSO, and SPSO over 30 independent runs on functions f_1 - f_20. It should be mentioned that the algorithm that performs best on a problem is highlighted in boldface. In order to assess whether the results obtained by EPPS are statistically different from the results obtained by the other four algorithms, the non-parametric Wilcoxon signed-rank test [15,17] is employed for pairwise comparisons, where the confidence level has been fixed to 95%. In the following tables, an h value of one indicates that the performances of the two algorithms are statistically different with 95% certainty, whereas an h value of zero implies that the performances are not statistically different. In addition, #+, #-, and #~ mean that the performance of EPPS is significantly better than, significantly worse than, and statistically equivalent to the performance of its rival in terms of the statistical test results, respectively. For example, the simulation results obtained by comparing EPPS with CMA-ES on the unimodal benchmark functions are (2, 0, 5), which means that EPPS achieves significantly better results than, significantly worse results than, and statistically equivalent results to CMA-ES on 2, 0 and 5 problems, respectively. In [56], it is claimed that the convergence speed can be measured by the mean number of function evaluations required. Therefore, in the following tables, if algorithm A obtains a smaller fitness value than algorithm B within the same number of function evaluations, it indicates that the convergence speed of algorithm A is faster than that of algorithm B.
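As a side note, the pairwise test used here can be reproduced with standard statistical software; the sketch below uses SciPy's wilcoxon on two hypothetical vectors of 30 paired run results, setting h = 1 when p < 0.05.

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical final fitness values of two algorithms over 30 paired runs.
rng = np.random.default_rng(0)
results_epps = rng.random(30) * 1e-6
results_rival = rng.random(30) * 1e-3

stat, p_value = wilcoxon(results_epps, results_rival)
h = int(p_value < 0.05)   # 1: statistically different at the 95% confidence level
print(f"p = {p_value:.3g}, h = {h}")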
On unimodal functions f_1 - f_7, it is relatively easy to converge to the global optimum, and thus we focus on comparing the performance of the algorithms in terms of solution accuracy and convergence speed. From the comparison of the results on these functions, we can see that EPPS performs better than GSO in terms of the mean and the standard deviation on f_1 - f_7. EPPS surpasses all other algorithms on functions f_4 and f_6, and has the same performance as CMA-ES, DE and SPSO on functions f_1, f_2 and f_7. As for functions f_3 and f_5, EPPS performs better than three other algorithms on f_3 and two algorithms on f_5.


Table 3
Comparison of EPPS with CMA-ES, DE, GSO and SPSO on benchmark functions f8 − f14 . All results have been
averaged over 30 runs.

f Algorithms EPPS CMA-ES DE GSO SPSO

8 Mean 9.1667E-4 29.5072 8.3493 49.8359 33.1737


Std. 4.2813E-3 5.4334 2.2737 30.1771 31.2810
h − 1 1 1 1
9 Mean −12569.4882 −8930.7622 −6680.9926 −12569.4882 −9776.3545
Std. 1.9249E-4 672.6874 194.9523 2.2140E-2 375.5870
h − 1 1 0 1
10 Mean 0 49.2348 99.6636 2.7415 22.5064
Std. 0 11.6365 12.4016 1.4651 6.3818
h − 1 1 1 1
11 Mean 8.8818E-16 14.1209 7.3477E-8 2.6548E-5 0.9879
Std. 0 6.3862 1.4371E-8 3.0820E-5 0.8246
h − 1 1 1 1
12 Mean 0 6.3818E-4 1.9362E-8 3.0792E-2 5.8107E-3
Std. 0 7.5655E-4 5.3913E-8 3.0867E-2 7.7213E-3
h − 1 1 1 1
13 Mean 0 3.4606E-3 1.3629E-10 2.7648E-11 1.7316E-2
Std. 0 1.8862 3.3128E-10 9.1674E-11 6.1435E-2
h − 1 1 1 1
14 Mean 0 7.3223E-4 3.9023E-10 4.6948E-5 2.9116E-4
Std. 0 2.7904E-3 5.0326E-10 7.0010E-4 4.9549E-4
h − 1 1 1 1
(#+, #−, # ∼ ) − (7,0,0) (7,0,0) (6,0,1) (7,0,0)

Table 4
Comparison of EPPS with CMA-ES, DE, GSO and SPSO on benchmark functions f15 − f20 .
All results have been averaged over 30 runs.

f Algorithms EPPS CMA-ES DE GSO SPSO

15 Mean 0 105.1028 177.8154 45.7762 201.1326


Std. 0 36.7813 58.9623 33.2185 97.6771
h − 1 1 1 1
16 Mean 20.0006 21.1835 21.0342 20.0362 20.7861
Std. 2.0012E-4 0.6389 0.8325 9.6251E-2 0.2711
h − 1 1 0 1
17 Mean 0.9014 46.2053 27.1266 17.6507 28.6448
Std. 0.7817 4.9311 3.6881 8.6319 2.9650
h − 1 1 1 1
18 Mean 0 13.1172 25.1039 1.4128E-3 13.6717
Std. 0 11.6325 16.7864 2.6925E-3 10.1201
h − 1 1 1 1
19 Mean 0.2827 5.7714 2.3029 0.3225 1.6543
Std. 0.3108 2.0183 1.4388 0.4128 1.3192
h − 1 1 1 1
20 Mean 11.4117 14.9326 13.4917 12.9826 13.2468
Std. 1.2309 2.6368 2.5771 2.0117 7.8195
h − 1 1 1 1
(#+, #−, # ∼ ) − (5,0,0) (5,0,0) (4,0,1) (5,0,0)

According to the results of the non-parametric Wilcoxon signed-rank test, EPPS significantly outperforms the four other algorithms on f_4 and f_6. Overall, EPPS manages to find accurate solutions under the same running conditions on all of these unimodal functions.

On multimodal functions f_8 - f_14, the global optimum is much more difficult to locate. Therefore, in the comparison, we can study the performance of the algorithms in terms of solution accuracy, convergence speed and reliability. In Table 3, it can be clearly seen that EPPS performs better than the four other algorithms on f_8 and f_10 - f_14, and better than three algorithms on f_8 - f_14, in terms of mean value and standard deviation. EPPS shows the same performance as GSO on function f_9 in terms of the mean value. According to the results of the non-parametric Wilcoxon signed-rank test, EPPS significantly outperforms the four other algorithms on f_8 and f_10 - f_14. These comparison results validate the capability of EPPS in optimizing multimodal functions in terms of solution accuracy, convergence speed, and robustness.

On shifted and rotated functions f_15 - f_20, the variables of these functions become nonseparable, and thus the resulting problems become more difficult for EAs to solve. The comparison results on the shifted and rotated functions are tabulated in Table 4. From the table, it can be seen that EPPS achieves significantly better results than the four other algorithms on functions f_15 and f_17 - f_20, and than three other algorithms on functions f_15 - f_20. Therefore, EPPS is among the best-performing EAs within the compared set of algorithms for solving these shifted and rotated functions.


Fig. 3. The comparison of convergence rates among CMA-ES, DE, GSO, SPSO and EPPS on the multimodal benchmark functions f8 − f13, respectively.

Fig. 4. The comparison of function evaluations consumed by EPPS, CMA-ES, DE, GSO, and SPSO with different dimensionality, on function f14.

The comparison of convergence rates among EPPS, CMA-ES, DE, GSO, and SPSO is also carried out on the seven multimodal functions, by observing the evolution of the fitness values recorded during the optimization process. Fig. 3 only shows the convergence rates on functions f_8 - f_13, since the convergence rates on functions f_13 and f_14 are similar. For functions f_10 - f_13, it is obvious that EPPS converges much faster than the other four algorithms. As for function f_8, EPPS has a slower convergence rate at the beginning but shows a faster convergence rate toward the end. In addition, EPPS has almost the same performance as GSO on function f_9.

3.3. Computational complexity

In order to investigate the relationship between the dimensionality of the multimodal functions to be solved and the number of consumed function evaluations, EPPS, CMA-ES, DE, GSO, and SPSO are used to solve f_14, over 30 independent runs, whose dimensionality n is set to 15, 30, 50, 100, 150, 200, 250 and 300, respectively. For each dimension of the function, the iteration is terminated when the fitness value reaches an acceptable accuracy of 1 × 10^{-3} or the number of function evaluations reaches the maximum of 3 × 10^6. The reason for choosing function f_14 is that this benchmark function is representative in function optimization and the number of local optima of the function increases with increasing dimension. Moreover, CMA-ES, DE, GSO, and SPSO could all reach the acceptable accuracy (1 × 10^{-3}) in the 30-dimensional case. Fig. 4 illustrates the number of function evaluations consumed in optimizing f_14 by the five algorithms as the dimensionality increases from 15 to 300. From the figure, it can be clearly seen that the number of function evaluations consumed by CMA-ES and SPSO increases sharply as the dimensionality increases linearly.

Fig. 5. Convergence of lmax evaluated on functions f8 and f14, respectively.

Even worse, these two algorithms could not converge when the dimensionality reached 50 and 60, respectively. DE and GSO could converge to the acceptable accuracy, and the number of function evaluations they consume increases approximately as 55000 × e^{(n/80)}. As for the proposed EPPS, it always converges for the different dimensions, and the number of function evaluations it consumes increases approximately as 10000 × e^{(n/80)}. That is to say, EPPS offers a 5.5 times higher speed than DE and GSO in optimizing f_14, measured by the mean number of function evaluations needed to reach an acceptable solution. Therefore, EPPS is more robust and much faster than CMA-ES, DE, GSO, and SPSO.
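The 5.5-fold figure follows directly from the two fitted evaluation-count models; the short check below is our own illustration of the arithmetic, not code from the paper.

import numpy as np

dims = np.array([15, 30, 50, 100, 150, 200, 250, 300])
evals_de_gso = 55000 * np.exp(dims / 80)   # fitted trend for DE and GSO
evals_epps = 10000 * np.exp(dims / 80)     # fitted trend for EPPS

# The ratio is constant, 55000 / 10000 = 5.5, independent of the dimension.
print(evals_de_gso / evals_epps)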

3.4. Performance analysis

In order to exhibit the optimization process of EPPS, the scanning distance, l_max, evaluated on functions f_8 and f_14 is shown in Fig. 5. From the figure, we can see that the scanning distance, l_max, does not shrink to zero as the number of generations increases. This indicates that EPPS keeps a global search ability at all times with a certain level of population diversity. According to the results shown in Tables 2-4, EPPS can effectively optimize these benchmark functions and obtain more accurate solutions in most cases. This implies that EPPS achieves a good balance between its local search ability and global search ability.

According to the above comparisons and discussions, we can draw the following conclusions. First, based on the prey's search mechanism, the diversity of the group members is maintained throughout the whole optimization process. Thus, EPPS offers a novel global search ability. Second, the experienced predators' search mechanism provides a reliable estimator for the evolution path and step size, and thus EPPS provides an efficient local search ability. Finally, based on the strategic predators' search mechanism, more resources distributed around the best member are explored, which means that EPPS can preserve the group diversity without significantly impairing the fast-converging feature, and thus both its global search ability and its local search ability are enhanced.

4. Comparison between EPPS and other state-of-the-art algorithms

4.1. Comparison of EPPS with seven algorithms on functions f1 − f14

This section presents a comparative study of EPPS with seven other state-of-the-art algorithms on functions f_1 - f_14. These algorithms are the Backtracking Search Optimization Algorithm (BSA) [13], the Cuckoo-search Algorithm (CK) [14], Artificial Cooperative Search (ACS) [12], the Strategy Adaptation Based Differential Evolution Algorithm (SADE) [42], the Differential Search Algorithm (DSA) [11], Biogeography-Based Optimization (BBO) [47], and the Comprehensive Learning Particle Swarm Optimizer (CLPSO) [33]. Their experimental results on functions f_1 - f_14, which were reported in Refs. [13,12], respectively, are directly adopted for comparison in this paper. In order to have a fair comparison, the maximum number of function evaluations is set to 2,000,000, which is the same as that suggested in [14,12]. The comparisons of the mean values obtained by EPPS and the seven other algorithms are listed in Table 5.

It can be seen from Table 5 that EPPS outperforms SADE, CLPSO, BBO, CK, DSA, ACS and BSA on functions f_4, f_6, f_9 and f_11, and has the same performance as these algorithms on functions f_1, f_2, f_5, f_7, f_13 and f_14. EPPS surpasses CLPSO, BBO, DSA and ACS


Table 5
Comparison of EPPS with BSA, CK, ACS, SADE, DSA, BBO, and CLPSO on benchmark functions f1 − f14 . All results have been averaged over 30 runs.

f SADE CLPSO BBO CK DSA ACS BSA EPPS

1 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0
3 0 3.2611 7.7318 0 3.8238E-10 1.6643E-11 0 0
4 0.6667 7.1037E-4 0.6673 6.7783E-3 4.3093E-10 0.3333 0.6444 0
5 0 0 0 0 0 0 0 0
6 1.5317E-3 1.1305E-3 5.5095E-4 1.2437E-3 5.8586E-3 1.4076E-3 1.9955E-3 0
7 0 0 0 0 0 0 0 0
8 2.1984E-2 2.7530 71.9104 0 0 0 0.3987 0
9 −12569.4866 −12214.1716 −12569.4836 −12569.4866 −12569.4866 −12569.4866 −12569.4866 −12569.4882
10 0.9950 0 0 0 0 0 0 0
11 0.9313 8.0000E-15 9.0000E-16 4.4000E-15 2.2200E-14 8.0000E-15 1.0500E-14 8.8818E-16
12 1.7239E-2 0 2.5964E-2 0 0 0 4.9307E-4 0
13 0 0 0 0 0 0 0 0
14 0 0 0 0 0 0 0 0

Table 6
Comparison of EPPS with iCMAES-ILS and NBIPOP-aCMA on benchmark functions f15 − f20 . All re-
sults have been averaged over 30 runs.

f Algorithms Best Worst Median Mean Std

15 NBIPOP-aCMA 0 0 0 0 0
iCMAES-ILS 0 0 0 0 0
EPPS 0 0 0 0 0
16 NBIPOP-aCMA 20.80 21.01 20.95 20.94 4.80 × 10−2
iCMAES-ILS 20.80 21.00 20.90 20.90 6.23 × 10−2
EPPS 20.0000 20.0011 20.0000 20.0006 0.0002
17 NBIPOP-aCMA 0.40 7.63 2.77 3.30 1.83
iCMAES-ILS 7.10 × 10−2 8.06 4.53 4.34 1.72
EPPS 3.4222 × 10−2 4.1381 0 0.9014 0.7817
18 NBIPOP-aCMA 0 0 0 0 0
iCMAES-ILS 0 0 0 0 0
EPPS 0 0 0 0 0
19 NBIPOP-aCMA 1.40 × 10−2 2.78 4.10 × 10−2 0.44 0.93
iCMAES-ILS 1.48 × 10−2 1.21 0.43 0.38 0.27
EPPS 5.7074 × 10−2 0.6915 0.2172 0.2827 0.3108
20 NBIPOP-aCMA 11.12 13.64 13.13 12.94 0.60
iCMAES-ILS 12.10 15.00 14.50 14.40 0.74
EPPS 9.0741 12.3264 11.2537 11.4117 1.2309

on functions f_3 and f_8. As for function f_12, SADE, BBO and BSA cannot find the global optimum. In addition, SADE also cannot find the global optimum on function f_10.

4.2. Comparison of EPPS with two algorithms on functions f15 − f20

iCMAES-ILS [34] and NBIPOP-aCMA [35] rank among the top two algorithms for rotated and shifted benchmark functions [34]. By comparing EPPS with these two algorithms, we can validate the performance of EPPS more comprehensively. Table 6 lists the simulation results obtained by EPPS, NBIPOP-aCMA and iCMAES-ILS, including the best, worst, median, mean and standard deviation values. The experimental results obtained by iCMAES-ILS and NBIPOP-aCMA, which were reported in [34,35], respectively, are directly adopted for comparison in this paper, because the codes of these algorithms are not available. From Table 6, it is clearly seen that EPPS performs better than the other two algorithms on functions f_16, f_17 and f_20. EPPS has the same best fitness values as iCMAES-ILS and NBIPOP-aCMA on functions f_15 and f_18. As for function f_19, the best fitness value of EPPS is inferior to those of iCMAES-ILS and NBIPOP-aCMA.

5. Application of EPPS to three real-world problems

5.1. Solving the unit commitment problem

Unit commitment is a significant practical task in power system operation and plays an important role in deregulated electricity markets. The unit commitment problem in a power system refers to finding a unit commitment schedule that minimizes the commitment and dispatch costs, subject to various constraints [30]. This problem is commonly formulated as a complex nonlinear and mixed-integer combinatorial optimization problem with a series of prevailing equality and inequality


Table 7
Simulation results of 10 unit system with 10% of spinning reserve.

Algorithms Best cost($) Mean cost($) Worst cost($)

EP 564,551 565,352 566,231


PSO 564,212 565,103 565,783
IPSO 563,954 564,162 564,579
HPSO 563,942 NA NA
SA 565,828 565,988 566,260
QEA-UC 563,938 564,012 564,711
IQEA-UC 563,938 563,938 563,938
BCPSO 563,947 564,285 565,002
C&B 563,938 NA NA
GSA 563,938 564,008 564,241
GSO 563,938.5123 563,938.9536 563,939.2514
EPPS 563,937.6863 563,937.6872 563,937.8490

NA: not available.

constraints [58]. Moreover, the number of combinations of the 0-1 variables grows exponentially, making it a large-scale problem. Therefore, the problem is considered one of the most difficult problems in power systems.

In this paper, in order to investigate the capability of EPPS to solve practical problems, it is used to optimize a 10-unit system taken from the literature [28]. The mathematical formulation of unit commitment is as follows [55]:
minimize F = \sum_{t=1}^{T} \sum_{i=1}^{N} [f_i(P_i^t) + ST_i^t (1 - u_i^{t-1})] u_i^t

subject to
\sum_{i=1}^{N} P_i^t u_i^t = P_D^t
\sum_{i=1}^{N} P_{i,max}^t u_i^t \geq P_D^t + P_R^t
P_{i,min} \leq P_i^t \leq P_{i,max}
T_{i,ON}^t \geq T_{i,up}
T_{i,OFF}^t \geq T_{i,down}
(10)
where

f_i(P_i^t) = a_i + b_i P_i^t + c_i (P_i^t)^2 \quad (11)

and

ST_i^t = \begin{cases} Sh_i & \text{if } T_{i,OFF}^t \leq T_{i,down} + T_{cold_i} \\ Sc_i & \text{if } T_{i,OFF}^t > T_{i,down} + T_{cold_i} \end{cases} \quad (12)

where f_i is the fuel cost of the ith unit, which is taken as a quadratic function; N = 10 is the number of generators; T = 24 is the total scheduling period; P_i^t is the generation of unit i at time t; u_i^t \in \{0, 1\} is the ON/OFF status of unit i at time t (ON = 1 and OFF = 0); ST_i^t is the start-up cost of unit i at time t; a_i, b_i, c_i represent the unit cost coefficients; Sh_i is the hot start-up cost of unit i; Sc_i is the cold start-up cost of unit i; T_{cold_i} is the cold start time of unit i; T_{i,down} is the minimum down time of unit i; and T_{i,OFF}^t is the continuous down time of unit i up to time t.

In the equality and inequality constraints of (10), P_D^t denotes the system load demand at time t; P_R^t is the spinning reserve at time t; P_{i,min} and P_{i,max} are the minimum and maximum generation limits of unit i, respectively; T_{i,ON}^t is the continuous up time of unit i up to time t; and T_{i,up} is the minimum up time of unit i.
The scheduling time horizon T is chosen as one day with 24 intervals of 1 h each. The spinning reserve requirement is set to 10% of the total load demand. The input data are described in [28].
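To make the objective of Eq. (10) concrete, the per-unit fuel-cost and start-up-cost terms of Eqs. (11) and (12) can be evaluated as in the sketch below; the coefficient values shown are placeholders for illustration, not the data of [28].

def fuel_cost(p, a, b, c):
    """Quadratic fuel cost of one unit, Eq. (11): f(P) = a + b*P + c*P^2."""
    return a + b * p + c * p * p

def startup_cost(t_off, t_down, t_cold, hot_cost, cold_cost):
    """Start-up cost of one unit, Eq. (12): a hot start applies while the unit
    has been off for at most t_down + t_cold hours, a cold start otherwise."""
    return hot_cost if t_off <= t_down + t_cold else cold_cost

# Illustrative call with placeholder coefficients for a single unit.
print(fuel_cost(p=455.0, a=1000.0, b=16.19, c=0.00048))
print(startup_cost(t_off=6, t_down=8, t_cold=5, hot_cost=4500.0, cold_cost=9000.0))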
The 10-unit system with 10% spinning reserve is considered in this subsection to further demonstrate the effectiveness of the proposed algorithm. The optimum dispatch of the committed generating units, the fuel cost, the start-up cost and the spinning reserve over all time horizons are shown in Table 8. To validate the computational efficiency of the proposed approach, the simulation results of EPPS are compared with those obtained by EP [26], PSO [57], IPSO [57], HPSO [52], SA [48], QEA-UC [10], IQEA-UC [10], C&B [59], BCPSO [6], GSA [44], and GSO. From Table 7, it is clearly suggested that EPPS is more efficient than the other methods in terms of solution quality.

5.2. Solving the economic emission dispatch problem

In the past few years, the economic emission dispatch (EED) problem has become an important and active research area because it considers pollutant emissions as well as economic advantages. In general, the unit outputs of the best economic dispatch do not lead to minimum pollution emissions, and vice versa. Therefore, such a problem cannot be solved simply by optimizing a single economic dispatch (ED) problem. In this case, the emission dispatch is added as a second objective to the economic dispatch problem, which leads to combined economic emission dispatch (CEED) [53,21]. In consideration of valve loading effects,


Table 8
Generation schedule of the 10-unit system with 10% of spinning reserve obtained by EPPS for 24 h.

Hour Unit Operating Startup Reserve

1 2 3 4 5 6 7 8 9 10 cost($) cost($) %

1 455 245 0 0 0 0 0 0 0 0 13,683 0 30


2 455 295 0 0 0 0 0 0 0 0 14,554 0 21.33
3 455 370 0 0 25 0 0 0 0 0 16,809 900 26.12
4 455 455 0 0 40 0 0 0 0 0 18,598 0 12.84
5 455 390 0 130 25 0 0 0 0 0 20,020 560 20.20
6 455 360 130 130 25 0 0 0 0 0 22,387 1100 21.09
7 455 410 130 130 25 0 0 0 0 0 23,262 0 15.83
8 455 455 130 130 30 0 0 0 0 0 24,150 0 11.00
9 455 455 130 130 85 20 25 0 0 0 27,251 860 15.15
10 455 455 130 130 162 33 25 10 0 0 30,058 60 10.86
11 455 455 130 130 162 73 25 10 10 0 31,916 60 10.83
12 455 455 130 130 162 80 25 43 10 10 33,890 60 10.80
13 455 455 130 130 162 33 25 10 0 0 30,058 0 10.86
14 455 455 130 130 85 20 25 0 0 0 27,251 0 15.15
15 455 455 130 130 30 0 0 0 0 0 24,150 0 11.00
16 455 310 130 130 25 0 0 0 0 0 21,514 0 26.86
17 455 260 130 130 25 0 0 0 0 0 20,642 0 33.20
18 455 360 130 130 25 0 0 0 0 0 22,387 0 21.09
19 455 455 130 130 30 0 0 0 0 0 24,150 0 11.00
20 455 455 130 130 162 33 25 10 0 0 30,058 490 10.86
21 455 455 130 130 85 20 25 0 0 0 27,251 0 15.15
22 455 455 0 0 145 20 25 0 0 0 22,736 0 12.45
23 455 425 0 0 0 20 0 0 0 0 17,645 0 10.00
24 455 345 0 0 0 0 0 0 0 0 15,427 0 13.75
Total Cost ($) = 563,937 559,847 4090

the CEED problem is mathematically characterized by a non-smooth and non-convex generation objective function with heavy equality as well as inequality constraints. In general, the formulation of the CEED problem is expressed as follows:

C_T = F(P) + P_\lambda E(P) \quad (13)
where the total cost function F ($/h) can be expressed as follows [2]:

F(P) = \sum_{i=1}^{N_G} \left( a_i + b_i P_i + c_i P_i^2 + |e_i \times \sin(f_i \times (P_{i,min} - P_i))| \right) \quad (14)

and the total emission function E (ton/h) is defined as follows [20]:

E(P) = \sum_{i=1}^{N_G} 10^{-2} \left( \alpha_i + \beta_i P_i + \gamma_i P_i^2 + \xi_i \exp(\eta_i P_i) \right) \quad (15)

where P_i is the real power output of unit i; a_i, b_i and c_i are the cost coefficients of unit i; e_i and f_i are the coefficients of unit i reflecting the valve point effects; N_G is the number of units; P_{i,min} is the minimum generation limit of the ith unit; and \alpha_i, \beta_i, \gamma_i, \xi_i and \eta_i are the emission coefficients of unit i. P is the vector of real power outputs of the units, defined as

P = [P_1, P_2, \ldots, P_{N_G}]^T. \quad (16)
P_\lambda = (P_{\lambda_1}, \ldots, P_{\lambda_{N_G}}) is the price penalty factor ($/ton), which is described as follows [53]:

P_{\lambda_i} = \frac{a_i + b_i P_{i,max} + c_i P_{i,max}^2 + |e_i \times \sin(f_i \times (P_{i,min} - P_{i,max}))|}{10^{-2} \left( \alpha_i + \beta_i P_{i,max} + \gamma_i P_{i,max}^2 + \xi_i \exp(\eta_i P_{i,max}) \right)} \quad (17)
where P_{i,max} is the maximum output of unit i.


NG
Pi = PD (18)
i=1
285
Pi,min ≤ Pi ≤ Pi,max (19)
286 where PD is the total load demand.
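One reading of the combined objective of Eqs. (13)-(17) is sketched below in Python/NumPy, applying the price penalty factor per unit; the coefficient arrays are assumed inputs and the power-balance and limit constraints of Eqs. (18) and (19) are left to the optimizer. This is our schematic interpretation, not the 40-unit data or code of [53].

import numpy as np

def ceed_total_cost(P, a, b, c, e, f, alpha, beta, gamma, xi, eta, P_min, P_max):
    """Combined economic-emission cost C_T = F(P) + P_lambda * E(P), Eqs. (13)-(17).
    All arguments are arrays of length NG (one entry per generating unit)."""
    F = a + b * P + c * P**2 + np.abs(e * np.sin(f * (P_min - P)))           # Eq. (14), per unit
    E = 1e-2 * (alpha + beta * P + gamma * P**2 + xi * np.exp(eta * P))      # Eq. (15), per unit
    F_max = a + b * P_max + c * P_max**2 + np.abs(e * np.sin(f * (P_min - P_max)))
    E_max = 1e-2 * (alpha + beta * P_max + gamma * P_max**2 + xi * np.exp(eta * P_max))
    P_lambda = F_max / E_max                                                 # Eq. (17), per unit
    return np.sum(F) + np.sum(P_lambda * E)                                  # Eq. (13)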
Here, a system of 40 generating units with valve point effects and emission is considered. The input data for this system come from [53]. The best compromise cost of the test system obtained by EPPS is 191582.0515 $/h and the best compromise solutions

Table 9
Best compromise solution found with EPPS for 40-unit system.

Unit Pi, min Pi, max Generation Unit Pi, min Pi, max Generation

1 36 114 114 21 254 550 437.3795


2 36 114 114 22 254 550 437.5076
3 60 120 120 23 254 550 438.0195
4 80 190 178.2003 24 254 550 437.9089
5 47 97 97 25 254 550 437.8020
6 68 140 129.4232 26 254 550 437.7312
7 110 300 300 27 10 150 19.5172
8 135 300 299.5459 28 10 150 19.5123
9 135 300 298.6392 29 10 150 19.5183
10 130 300 130 30 47 97 97
11 94 375 307.5167 31 60 190 175.7608
12 94 375 306.9795 32 60 190 175.8295
13 125 500 433.9502 33 60 190 175.7765
14 125 500 409.2836 34 90 200 200
15 125 500 411.5396 35 90 200 200
16 125 500 411.2092 36 90 200 200
17 220 500 452.0986 37 25 110 104.2640
18 220 500 452.1572 38 25 110 104.2654
19 242 550 437.4438 39 25 110 104.2351
20 242 550 437.4653 40 242 550 437.5198
TP 10500 FC 128729.6391
TE 178569.2016 PPF 0.35198
EC 62852.7876 TC 191582.0515

TP: total power generation (MW); FC: fuel cost ($/h); TE: total emission (ton/h); PPF: price
penalty factor ($/ton); EC: emission cost ($/h); TC: total generation cost ($/h).

Table 10
Comparison of compromise cost of different methods on 40-unit system.

Methods Minimum total cost ($/h) Maximum total cost ($/h) Mean total cost ($/h)

DE 191594.5053 192251.3565 191695.4643


MBFA 190149.1967 NA NA
DE-HS 191589.5164 191828.5087 191607.8309
PSO 191598.5325 192305.1021 191699.3392
GSO 191588.5563 191852.2013 191611.3926
EPPS 191582.0515 191726.0181 191590.4817

Fig. 6. Convergence of EPPS for minimum compromise cost.

are listed in Table 9. The simulation results obtained by EPPS are compared with those of DE [4], MBFA [25], DE-HS [45], PSO [38] and GSO [23], and the comparison results are given in Table 10. It is clearly seen that EPPS provides better results than the other methods in terms of the minimum, maximum and mean total costs.

The convergence characteristic of EPPS for the minimum compromise cost is shown in Fig. 6. We can see from the figure that EPPS converges to the global optimum at very early iterations. The statistical results on CEED over 30 independent runs of EPPS are depicted in Fig. 7. From this figure, it is observed that EPPS consistently produces solutions at or very near the global optimum, indicating a good convergence characteristic.


Fig. 7. Compromise solution cost obtained by EPPS over 30 trials.

Table 11
Comparison of estimation error of an FM synthesizer.

Methods Minimum value Maximum value Mean value Standard deviation

SLPSO 0 13.79 4.18 26.99


APSO 0 34.22 11.33 41.13
CLPSO 0.007 14.08 3.82 23.53
CPSOH 3.45 42.53 27.08 60.61
SPSO 0 18.27 9.88 33.85
JADE 0 13.92 7.55 26.18
HRCGA 0 17.59 8.41 32.54
G-CMA-ES 3.326 55.09 38.75 16.77
GSO 0.002 16.89 8.76 32.21
EPPS 0 13.90 3.69 23.07

5.3. Estimating the parameters of an FM synthesizer

The third real-world problem is to estimate the parameters of an FM synthesizer [16]. It is a highly complex multimodal problem with six parameters, where the vector to be optimized is x = (a_1, \omega_1, a_2, \omega_2, a_3, \omega_3). The fitness function is the sum of the squared errors between the estimated wave and the target wave, as follows:

f(x) = \sum_{t=0}^{100} (y(t) - y_0(t))^2 \quad (20)

where the estimated sound is

y(t) = a_1 \cdot \sin(\omega_1 \cdot t \cdot \theta + a_2 \cdot \sin(\omega_2 \cdot t \cdot \theta + a_3 \cdot \sin(\omega_3 \cdot t \cdot \theta))) \quad (21)
and the target sound is

y_0(t) = 1.0 \cdot \sin(5.0 \cdot t \cdot \theta + 1.5 \cdot \sin(4.8 \cdot t \cdot \theta + 2.0 \cdot \sin(4.9 \cdot t \cdot \theta))) \quad (22)

where \theta = 2\pi/100 and the parameters are defined in the range [-6.4, 6.35].
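The fitness function of Eqs. (20)-(22) is straightforward to implement; the sketch below is a direct Python/NumPy transcription, with names chosen by us.

import numpy as np

THETA = 2 * np.pi / 100
T = np.arange(0, 101)   # t = 0, 1, ..., 100

def fm_wave(a1, w1, a2, w2, a3, w3, t=T):
    """Frequency-modulated sound wave of the form used in Eqs. (21) and (22)."""
    return a1 * np.sin(w1 * t * THETA
                       + a2 * np.sin(w2 * t * THETA
                                     + a3 * np.sin(w3 * t * THETA)))

TARGET = fm_wave(1.0, 5.0, 1.5, 4.8, 2.0, 4.9)   # target sound y0(t), Eq. (22)

def fm_fitness(x):
    """Sum of squared errors between the estimated and target waves, Eq. (20)."""
    return np.sum((fm_wave(*x) - TARGET) ** 2)

# The global optimum x* = (1.0, 5.0, 1.5, 4.8, 2.0, 4.9) gives a fitness of 0.
print(fm_fitness(np.array([1.0, 5.0, 1.5, 4.8, 2.0, 4.9])))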
The total number of function evaluations is set to 30,000 for this problem. Table 11 summarizes the minimum, mean, maximum and standard deviation values achieved by EPPS, compared with those of SLPSO [31], APSO [56], CLPSO [33], CPSOH [31], SPSO [38], JADE [31], HRCGA [31], G-CMA-ES [31] and GSO. The minimum value demonstrates the local search ability of an algorithm, the mean value represents the quality of the results obtained by each algorithm, the maximum value reveals the global search ability of an algorithm, and the standard deviation shows the robustness of an algorithm in optimizing the fitness function. It can be seen from Table 11 that none of the algorithms can find the global optimum in all of the 30 independent runs. By observing the minimum values, SLPSO, APSO, SPSO, JADE, HRCGA and EPPS have found the global optimum at least once in 30 runs, while CLPSO, CPSOH, G-CMA-ES and GSO did not manage to find the global optimum. As for the maximum value and standard deviation, EPPS ranks second, as SLPSO reaches a smaller maximum value and G-CMA-ES reaches a smaller standard deviation. Additionally, EPPS performs better than the other nine algorithms in terms of the mean value.

6. Conclusion

This paper has proposed a novel global optimization algorithm, the evolutionary predator and prey strategy (EPPS). The EPPS is conceptually simple and easy to implement. To validate its applicability, EPPS has been applied to optimize 20 canonical benchmark functions, including unimodal, multimodal, and shifted and rotated ones, and the results obtained have been compared with those of other EAs. The comparison results have demonstrated that EPPS has advantages over the other algorithms in terms of convergence speed, solution quality and robustness. Moreover, the convergence speed of EPPS is more than 5 times faster than those of SPSO, GSO, CMA-ES and DE in optimizing the multimodal benchmark function f_14 evaluated from 15 to 300 dimensions. Additionally, by analyzing the scanning distance, we have found that EPPS keeps a global search ability at all times with a certain level of population diversity. To validate its practicability, EPPS has also been used to solve the unit commitment problem, the economic emission dispatch problem and the parameter estimation of an FM synthesizer. The advantages of EPPS have been demonstrated by offering much better performance than the other EAs in terms of minimum cost, maximum cost and mean cost. That is, EPPS can, to some extent, overcome the problem of premature convergence without significantly impairing the fast-converging feature. It can be concluded that EPPS provides high applicability and practicability, and thus it is a viable alternative for solving global optimization problems.

Acknowledgment

The work is funded by the State Key Program of the National Natural Science Foundation of China (No. 51437006), the Guangdong Innovative Research Team Program, Guangdong, China (No. 201001N0104744201), and the Shenzhen Basic Research Program (No. JCYJ20130401170306880).

332 References

333 [1] L. Allen, P. Fleming, Review of canid management in Australia for the protection of livestock and wildlife–potential application to coyote management,
334 Sheep Goat Res. J. 19 (2004) 97–104.
335 [2] V. Aragón, S. Esquivel, C.C. Coello, An immune algorithm with power redistribution for solving economic dispatch problems, Inform. Sci. 295 (2015) 609–632.
336 [3] A. Banitalebi, M.I.A. Aziz, A. Bahar, Z.A. Aziz, Enhanced compact artificial bee colony, Inform. Sci. 298 (2015) 491–511.
337 [4] M. Basu, Economic environmental dispatch using multi-objective differential evolution, Appl. Soft Comput. 11 (2) (2011) 2845–2853.
338 [5] E. Bonabeau, M. Dorigo, G. Theraulaz, Inspiration for optimization from social insect behaviour, Nature 406 (6791) (2000) 39–42.
339 [6] S. Chakraborty, T. Ito, T. Senjyu, A.Y. Saber, Unit commitment strategy of thermal generators by using advanced fuzzy controlled binary particle swarm
340 optimization algorithm, Int. J. Elect. Power Energy Syst. 43 (1) (2012) 1072–1080.
341 [7] D. Chen, F. Zou, Z. Li, J. Wang, S. Li, An improved teaching–learning-based optimization algorithm for solving global optimization problem, Inform. Sci. 297
342 (2015) 171–190.
343 [8] W.N. Chen, J. Zhang, Y. Lin, N. Chen, Z.H. Zhan, H.S.H. Chung, Y. Li, Y.H. Shi, Particle swarm optimization with an aging leader and challengers, IEEE Trans.
344 Evolut. Comput. 17 (2) (2013) 241–258.
345 [9] S.C. Chu, J.F. Roddick, J.S. Pan, Ant colony system with communication strategies, Inform. Sci. 167 (1) (2004) 63–76.
346 [10] C. Chung, H. Yu, K.P. Wong, An advanced quantum-inspired evolutionary algorithm for unit commitment, IEEE Trans. Power Syst. 26 (2) (2011) 847–854.
347 [11] P. Civicioglu, Transforming geocentric cartesian coordinates to geodetic coordinates by using differential search algorithm, Comput. Geosci. 46 (2012) 229–
348 247.
349 [12] P. Civicioglu, Artificial cooperative search algorithm for numerical optimization problems, Inform. Sci. 229 (2013) 58–76.
350 [13] P. Civicioglu, Backtracking search optimization algorithm for numerical optimization problems, Appl. Math. Comput. 219 (15) (2013) 8121–8144.
351 [14] P. Civicioglu, E. Besdok, A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algo-
352 rithms, Artif. Intell. Rev. 39 (4) (2013) 315–346.
353 [15] W.J. Conover, W. Conover, Practical Nonparametric Statistics, Wiley, New York, 1980.
354 [16] S. Das, P. Suganthan, Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization
355 problems, Jadavpur University, Nanyang Technological University, Kolkata, India, 2010.
356 [17] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and
357 swarm intelligence algorithms, Swarm Evolut. Comput. 1 (1) (2011) 3–18.
358 [18] A.F.G. Dixon, An experimental study of the searching behaviour of the predatory coccinellid beetle Adalia decempunctata (L.), J. Animal Ecol. 28 (2) (1959)
359 259–281.
360 [19] W. Foster, J. Treherne, Evidence for the dilution effect in the selfish herd from fish predation on a marine insect, Nature 293 (1981).
361 [20] M.R. Gent, J.W. Lamont, Minimum emission dispatch, IEEE Trans. Power Appar. Syst. 90 (6) (1971) 2650–2660.
362 [21] A. Glotić, A. Zamuda, Short-term combined economic and emission hydrothermal optimization by surrogate differential evolution, Appl. Energy 141 (2015)
363 42–56.
364 [22] N. Hansen, S. Müller, P. Koumoutsakos, Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES),
365 Evolut. Comput. 11 (1) (2003) 1–18.
366 [23] S. He, Q.H. Wu, J.R. Saunders, Group search optimizer: an optimization algorithm inspired by animal searching behavior, IEEE Trans. Evolut. Comput. 13 (5)
367 (2009) 973–990.
368 [24] R.A. Hinde, Animal Behavior: A Synthesis of Ethology and Comparative Psychology, McGraw-Hill, New York, 1966.
369 [25] P. Hota, A. Barisal, R. Chakrabarti, Economic emission load dispatch through fuzzy based bacterial foraging algorithm, Int. J. Elect. Power Energy Syst. 32 (7)
370 (2010) 794–803.
371 [26] K. Juste, H. Kita, E. Tanaka, J. Hasegawa, An evolutionary programming solution to the unit commitment problem, IEEE Trans. Power Syst. 14 (4) (1999)
372 1452–1459.
373 [27] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev. 42 (1)
374 (2014) 21–57.
375 [28] S.A. Kazarlis, A. Bakirtzis, V. Petridis, A genetic algorithm solution to the unit commitment problem, IEEE Trans. Power Syst. 11 (1) (1996) 83–92.
376 [29] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, 4, IEEE Press, Piscataway, NJ,
377 1995, pp. 1942–1948.
378 [30] C. Lee, C. Liu, S. Mehrotra, M. Shahidehpour, Modeling transmission line constraints in two-stage robust unit commitment problem, IEEE Trans. Power Syst.
379 29 (3) (2014) 1221–1231.
380 [31] C. Li, S. Yang, T.T. Nguyen, A self-learning particle swarm optimizer for global optimization problems, IEEE Trans. Syst. Man, Cybernet. Part B 42 (3) (2012)
381 627–646.
382 [32] Y. Li, Z.H. Zhan, S. Lin, J. Zhang, X. Luo, Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimiza-
383 tion problems, Inform. Sci. 293 (2015) 370–382.
384 [33] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans.
385 Evolut. Comput. 10 (3) (2006) 281–295.
386 [34] T.J. Liao, T. Stutzle, Benchmark results for a simple hybrid algorithm on the CEC 2013 benchmark set for real-parameter optimization, in: Proceedings of IEEE
387 Congress on Evolutionary Computation, 2013, pp. 1938–1944.
388 [35] I. Loshchilov, CMA-ES with restarts for solving CEC 2013 benchmark problems, in: Proceedings of IEEE Congress on Evolutionary Computation, 2013, pp. 369–
389 376.
390 [36] D. Mustard, Numerical integration over the n-dimensional spherical shell, Math. Comput. 18 (88) (1964) 578–589.

391 [37] W.J. O’Brien, B.I. Evans, G.L. Howick, A new view of the predation cycle of a planktivorous fish, white crappie (Pomoxis annularis), Can. J. Fisheries Aquatic
392 Sci. 43 (10) (1986) 1894–1899.
393 [38] M.G.H. Omran, M. Clerc, Standard particle swarm optimisation, 2011, http://www.particleswarm.info/.
394 [39] C. Ozturk, E. Hancer, D. Karaboga, A novel binary artificial bee colony algorithm based on genetic operators, Inform. Sci. 297 (2015) 154–170.
395 [40] I. Poikolainen, F. Neri, F. Caraffini, Cluster-based population initialization for differential evolution frameworks, Inform. Sci. 297 (2015) 216–235.
396 [41] H.R. Pulliam, G.C. Millikan, Social organization in the nonreproductive season, Avian Biol. 6 (1982) 169–197.
397 [42] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evolut. Comput.
398 13 (2) (2009) 398–417.
399 [43] R. Rao, V. Savsani, D. Vakharia, Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems, Inform. Sci.
400 183 (1) (2012) 1–15.
401 [44] P.K. Roy, Solution of unit commitment problem using gravitational search algorithm, Int. J. Elect. Power Energy Syst. 53 (2013) 85–94.
402 [45] S. Sayah, A. Hamouda, A. Bekrar, Efficient hybrid optimization approach for emission constrained economic dispatch with nonsmooth cost curves, Int.
403 J. Elect. Power Energy Syst. 56 (2014) 127–139.
404 [46] G.B. Schaller, The Serengeti Lion: A Study of Predator-Prey Relations, University of Chicago Press, 2009.
405 [47] D. Simon, Biogeography-based optimization, IEEE Trans. Evolut. Comput. 12 (6) (2008) 702–713.
406 [48] D.N. Simopoulos, S.D. Kavatza, C.D. Vournas, Unit commitment by an enhanced simulated annealing algorithm, IEEE Trans. Power Syst. 21 (1) (2006) 68–76.
407 [49] J.M. Smith, The theory of games and the evolution of animal conflicts, J. Theor. Biol. 47 (1) (1974) 209–221.
408 [50] R. Storn, K. Price, Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optimiz. 11 (4) (1997)
409 341–359.
410 [51] P.C. Thomson, The behavioural ecology of dingoes in north-western Australia. II. Activity patterns, breeding season and pup rearing, Wildlife Res. 19 (5)
411 (1992) 519–529.
412 [52] T. Ting, M. Rao, C. Loo, A novel approach for unit commitment problem via an effective hybrid particle swarm optimization, IEEE Trans. Power Syst. 21 (1)
413 (2006) 411–418.
414 [53] P. Venkatesh, R. Gnanadass, N.P. Padhy, Comparison and application of evolutionary programming techniques to combined economic emission dispatch
415 with line flow constraints, IEEE Trans. Power Syst. 18 (2) (2003) 688–697.
416 [54] Q.H. Wu, H.L. Liao, Function optimisation by learning automata, Inform. Sci. 220 (2013) 379–398.
417 [55] L. Yang, J. Jian, Y. Zhu, Z. Dong, Tight relaxation method for unit commitment problem using reformulation and lift-and-project, IEEE Trans. Power Syst. 30
418 (2015) 13–23.
419 [56] Z.H. Zhan, J. Zhang, Y. Li, H.H. Chung, Adaptive particle swarm optimization, IEEE Trans. Syst. Man, Cybernet. Part B 39 (6) (2009) 1362–1381.
420 [57] B. Zhao, C.X. Guo, B.R. Bai, Y.J. Cao, An improved particle swarm optimization algorithm for unit commitment, Int. J. Elect. Power Energy Syst. 28 (7) (2006)
421 482–490.
422 [58] C. Zhao, J. Wang, J.P. Watson, Y. Guan, Multi-stage robust unit commitment considering wind and demand response uncertainties, IEEE Trans. Power Syst.
423 28 (3) (2013) 2708–2717.
424 [59] H. Zheng, J. Jian, L. Yang, R. Quan, A deterministic method for the unit commitment problem in power systems, Comput. Operat. Res. 1 (2015) 1–7.
425 [60] J.H. Zheng, J.J. Chen, Q.H. Wu, Z.X. Jing, Reliability constrained unit commitment with combined hydro and thermal generation embedded using self-learning
426 group search optimizer, Energy 81 (2015) 245–254.
