Penalty functions are often used in constrained optimization. However, it is very difficult to strike the right
balance between objective and penalty functions. This paper introduces a novel approach to balance objective
and penalty functions stochastically, i.e., stochastic ranking, and presents a new view on penalty function
methods in terms of the dominance of penalty and objective functions. [Runarsson and Yao, IEEE Transactions
on Evolutionary Computation, 2000]
II. CONSTRAINT HANDLING BY STOCHASTIC RANKING
A. Penalty Method
For a given penalty coefficient r_g > 0, let the ranking of the λ individuals be

    ψ(x_1) ≤ ψ(x_2) ≤ … ≤ ψ(x_λ)

where ψ is the transformation function given by equation (4). Let us examine the adjacent pair i and i+1 in
the ranked order:

    f_i + r_g·φ_i ≤ f_{i+1} + r_g·φ_{i+1}    (7)

where the notation f_i = f(x_i) and φ_i = φ(g_j(x_i), j = 1, …, m) is used for convenience. We now introduce
a parameter, ř_i, which will be referred to as the critical penalty coefficient for the adjacent pair i and i+1:

    ř_i = (f_{i+1} − f_i) / (φ_i − φ_{i+1}),   for φ_i ≠ φ_{i+1}    (8)
For the given choice of r_g ≥ 0, there are three different cases which may give rise to inequality (7):

1. f_i ≤ f_{i+1} and φ_i ≥ φ_{i+1}: the comparison is said to be dominated by the objective function, and
0 < r_g ≤ ř_i, because the objective function f plays the dominant role in determining the inequality. When
both individuals are feasible, φ_i = φ_{i+1} = 0 and ř_i = ∞.

2. f_i ≥ f_{i+1} and φ_i ≤ φ_{i+1}: the comparison is said to be dominated by the penalty function, and
ř_i ≤ r_g < ∞, because the penalty function φ plays the dominant role in determining the inequality.

3. f_i < f_{i+1} and φ_i < φ_{i+1}: the inequality holds for any r_g ≥ 0, since in this case ř_i < 0; neither
function alone decides the comparison.

When r_g exceeds the critical penalty coefficient of every adjacent pair, all comparisons are based only on
the penalty function: r_g is so large that the impact of the objective function can be ignored. We will call
this over-penalization. Conversely, when r_g is below every critical penalty coefficient, all comparisons are
based only on the objective function and infeasibility is effectively ignored; we will call this
under-penalization.

Because a fixed coefficient must fail in one of these two directions, a dynamic penalty method (a small
initial r_g that is gradually increased) is often used in order to locate a good feasible individual. Such a dynamic
method would work well for problems for which the unconstrained global optimum is close to its constrained
global optimum. It is unlikely to work well for problems for which the constrained global optimum is far away
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2000. TEC#311R
from its unconstrained one, because the initial low r_g value would drive the search towards the unconstrained
global optimum and thus further away from the constrained one.
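To make the dominance cases concrete, the critical penalty coefficient of an adjacent pair can be computed directly from its definition. Below is a minimal Python sketch; the helper name and the sample values are ours, not from the paper:

```python
def critical_penalty(f_i, phi_i, f_next, phi_next):
    """Critical penalty coefficient for an adjacent ranked pair.

    Defined only when the penalty values differ; when both individuals are
    feasible (phi_i == phi_next == 0) the coefficient is infinite and the
    comparison is decided by the objective function alone.
    """
    if phi_i == phi_next:
        return float("inf")
    return (f_next - f_i) / (phi_i - phi_next)

# Objective-dominated pair: f_i <= f_next and phi_i >= phi_next, so the
# order f_i + r_g*phi_i <= f_next + r_g*phi_next holds for 0 < r_g <= r_i.
r_i = critical_penalty(f_i=1.0, phi_i=3.0, f_next=2.0, phi_next=1.0)
print(r_i)  # 0.5: with r_g = 0.5 both sides of the inequality equal 2.5
```

For any r_g above this critical value the same pair would be ordered by the penalty function instead, which is exactly the balance the stochastic ranking scheme below sidesteps.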
The traditional constraint-handling technique used in evolution strategies falls roughly into the category
of over-penalization, since all infeasible individuals are regarded as worse than feasible ones [20], [4], [11]. In
fact, canonical evolution strategies (ES) allow only feasible individuals in the initial population. To perform
constrained optimization, an ES may be used to find a feasible initial population by minimizing the penalty
function ([20], page 115). Once a feasible population is found, the ES algorithm will then minimize the
objective function and reject all infeasible solutions generated.
It has been widely recognized that neither under- nor over-penalization is a good constraint-handling tech-
nique, and that there should be a balance between preserving feasible individuals and rejecting infeasible ones [7].
In other words, ranking should be dominated by a combination of objective and penalty functions, and so the
penalty coefficient r_g should be within the bounds

    ř_min < r_g < ř_max

where ř_min and ř_max denote the smallest and largest critical penalty coefficients among the adjacent pairs.
    I = random permutation of {1, …, λ}
    for i = 1 to N do
        for j = 1 to λ − 1 do
            sample u ∈ U(0, 1)
            if (φ(I_j) = φ(I_{j+1}) = 0) or (u < P_f) then
                if f(I_j) > f(I_{j+1}) then
                    swap(I_j, I_{j+1})
                fi
            else
                if φ(I_j) > φ(I_{j+1}) then
                    swap(I_j, I_{j+1})
                fi
            fi
        od
        if no swap done break fi
    od
Fig. 1. Stochastic ranking using a bubble-sort-like procedure, where U(0, 1) is a uniform random number generator and
N is the number of sweeps going through the whole population. When P_f = 0 the ranking is an over-penalization,
and for P_f = 1 the ranking is an under-penalization. The initial ranking is always generated at random.
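The procedure of Fig. 1 translates almost line for line into code. Below is a Python sketch (the function and variable names are ours); `f` and `phi` hold the objective and penalty values of the λ individuals:

```python
import random

def stochastic_rank(f, phi, p_f, n_sweeps=None):
    """Rank indices by the stochastic bubble-sort of Fig. 1.

    With probability p_f (or always, when both individuals of an adjacent
    pair are feasible) the pair is compared by objective value f; otherwise
    it is compared by penalty value phi.  Returns the index permutation,
    best individual first.
    """
    lam = len(f)
    n_sweeps = lam if n_sweeps is None else n_sweeps
    idx = list(range(lam))
    random.shuffle(idx)                 # initial ranking is random
    for _ in range(n_sweeps):
        swapped = False
        for j in range(lam - 1):
            a, b = idx[j], idx[j + 1]
            if (phi[a] == 0 and phi[b] == 0) or random.random() < p_f:
                if f[a] > f[b]:         # objective-based comparison
                    idx[j], idx[j + 1] = b, a
                    swapped = True
            elif phi[a] > phi[b]:       # penalty-based comparison
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:                 # no swap in a whole sweep: done
            break
    return idx

# With p_f = 0 the ranking is an over-penalization: feasible individuals
# (phi == 0) always rise above infeasible ones and are sorted by f.
f, phi = [3.0, 1.0, 2.0], [0.0, 5.0, 0.0]
print(stochastic_rank(f, phi, p_f=0.0))  # [2, 0, 1]
```

With `p_f = 1.0` the same call sorts purely by objective value, the under-penalization extreme described in the caption.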
In the case where adjacent individuals are both feasible, P_w = P_fw, where P_w denotes the probability that an
individual wins a comparison and P_fw the probability that it wins a comparison based on the objective function.
We would like to examine the probability of winning k more comparisons than are lost. The total number of
wins must then be k' = (N + k)/2, where N is the total number of comparisons made. The probability of
winning k' comparisons out of N is given by the binomial distribution

    P(y = k') = C(N, k') · P_w^{k'} · (1 − P_w)^{N−k'}    (10)

The probability of winning at least k' comparisons is

    P(y ≥ k') = 1 − Σ_{j=0}^{k'−1} C(N, j) · P_w^j · (1 − P_w)^{N−j}    (11)
Equations (10) and (11) show that the greater the number of comparisons N, the less influence the initial
ranking will have. It is worth noting that the probability P_w is usually different for different individuals at
different stages of the ranking (sorting). Now consider a case where P_w is constant during the entire ranking
procedure, which is the case when f_i < f_j and φ_i > φ_j for all j ≠ i, j = 1, …, λ. Then P_fw = 1 and
P_φw = 0, so that P_w = P_f. If we choose P_f = 1/2, then P_w = 1/2: there will be an equal chance for a
comparison to be made based on the objective or the penalty function. Since we are only interested in feasible
individuals as final solutions, P_f should be less than 1/2 so that there is a bias against infeasible solutions.
The strength of the bias can be adjusted easily by adjusting only P_f. When the parameter N, the number of
sweeps, approaches ∞, the ranking will be determined entirely by the bias P_f: if P_f > 1/2 the ranking is
based on the objective function, and when P_f < 1/2 the ranking is the over-penalty ranking. Hence, an
increase in the number of ranking sweeps is effectively equivalent to changing the parameter P_f, i.e., making
it smaller or larger. Thus we can fix N = λ and adjust P_f to achieve the best performance. We illustrate
these points by optimizing the set of benchmark functions presented in Appendix A using different P_f values.
Table I presents the average results over 30 independent runs of our algorithm. The numbers in the table
indicate the percentage of feasible individuals in the final population. The details of the experiment will be
given in the following section. It is quite clear from the table that when P_f > 1/2, finding feasible solutions
becomes very difficult unless the unconstrained optimum happens to be the same as the constrained optimum,
as is the case for problem g12.
The standard deviation of the binomial distribution is √(N·P_w·(1 − P_w)).
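Equations (10) and (11) are straightforward to evaluate numerically; here is a short sketch using only the Python standard library (the function names are ours):

```python
from math import comb

def p_wins(n, k_prime, p_w):
    """P(y = k'): probability of winning exactly k' of n comparisons, Eq. (10)."""
    return comb(n, k_prime) * p_w**k_prime * (1 - p_w)**(n - k_prime)

def p_wins_at_least(n, k_prime, p_w):
    """P(y >= k'): probability of winning at least k' comparisons, Eq. (11)."""
    return 1.0 - sum(p_wins(n, j, p_w) for j in range(k_prime))

# A slight bias against infeasible solutions (P_f < 1/2, hence P_w < 1/2 in
# the constant-P_w case above) makes long winning streaks unlikely, and more
# so as the number of comparisons N grows:
print(round(p_wins_at_least(10, 8, 0.45), 4))   # 0.0274
print(p_wins_at_least(100, 80, 0.45) < 1e-10)   # True: vanishingly small
```

This is the mechanism by which a small bias in P_f, amplified over many comparisons, reliably pushes infeasible individuals down the ranking.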
TABLE I
AVERAGE PERCENTAGE OF FEASIBLE INDIVIDUALS IN THE FINAL POPULATION AS A FUNCTION OF P_f (N = λ) FOR EACH
TEST FUNCTION PRESENTED IN APPENDIX A

[The body of Table I is not recoverable from this copy. Its rows are the test functions g01–g13 and its columns
the values P_f = 0.525, 0.500, 0.475, 0.450, …, 0.000. The legible entries show the percentage of feasible
individuals collapsing towards 0 once P_f exceeds 1/2, except for problems g08 and g12, which retain 100%
feasible individuals.]
III. EXPERIMENTAL STUDIES
A. Evolution Strategy
The evolutionary optimization algorithm described in this section is based on ES [20]. One reason for
choosing ES is that it does not introduce any specialized constraint-handling variation operators. We would
like to show that specialized and complex variation operators for constrained optimization problems are
unnecessary, although they may be quite useful for particular types of problems (see for example [17]). A
simple extension to the ES, i.e., the use of the stochastic ranking scheme proposed in the previous section,
can achieve significantly better results than other more complicated techniques. The constraint-handling
technique based on the stochastic ranking scheme can be used in any evolutionary algorithm, not just ES.
In the (μ, λ)-ES algorithm each individual consists of a pair of real-valued vectors (x_i, σ_i), ∀ i ∈ {1, …, λ}. The
initial population of x is generated according to a uniform n-dimensional probability distribution over the
search space S. Let Δx be an approximate measure of the expected distance to the global optimum; the
initial setting for the 'mean step sizes' should then be ([20], page 117)

    σ_{i,j} = Δx_j/√n ≈ (x̄_j − x_j)/√n,   i ∈ {1, …, λ}, j ∈ {1, …, n}    (12)

where σ_{i,j} denotes the j-th component of the vector σ_i, and x̄_j and x_j are the upper and lower parametric
bounds. We use these initial values as upper bounds on σ.
Following the stochastic ranking scheme given previously, the evaluated objective function f(x_i) and penalty
function φ(g_k(x_i), k = 1, …, m) for each individual (x_i, σ_i), ∀ i ∈ {1, …, λ}, are used to rank the individuals in the
population, and the best (highest-ranked) μ individuals out of the λ are selected for the next generation. The
truncation level is set at μ/λ ≈ 1/7 ([1], page 79).
Variation of the strategy parameters is performed before the modification of the objective variables. We generate
λ new strategy parameters from the μ old ones so that we can use the λ new strategy parameters in generating λ
offspring later. The 'mean step sizes' are updated according to the lognormal update rule [20]: for i = 1, …, λ
and j = 1, …, n,

    σ_{i,j}^{(g+1)} = σ_{i,j}^{(g)} · exp(τ′·N(0, 1) + τ·N_j(0, 1))    (13)

where N(0, 1) is a normally distributed one-dimensional random variable with an expectation of zero and
variance one. The subscript j in N_j(0, 1) indicates that the random number is generated anew for each value
of j. The learning rates τ and τ′ are set equal to φ*/√(2√n) and φ*/√(2n), respectively, where φ* is the
expected rate of convergence ([20], page 144) and is set to one ([1], page 72). Recombination is performed on
the self-adaptive parameters before applying the update rule given by (13). In particular, global intermediate
recombination (the average) between two parents ([20], page 148) is implemented as

    σ_{i,j}^{(g)} ← (σ_{i,j}^{(g)} + σ_{h_j,j}^{(g)})/2,   h_j ∈ {1, …, μ}
where h_j is an index generated at random, anew for each j.
Having varied the strategy parameters, each individual (x_i, σ_i), ∀ i ∈ {1, …, μ}, creates λ/μ offspring on
average, so that a total of λ offspring are generated:

    x_{i,j}^{(g+1)} = x_{i,j}^{(g)} + σ_{i,j}^{(g+1)}·N_j(0, 1)    (14)

Recombination is not used in the variation of the objective variables. When an offspring is generated outside
the parametric bounds defined by the problem, the mutation (variation) of the objective variable is
retried until the variable is within its bounds. In order to save computation time, the mutation is retried only
10 times and then ignored, leaving the object variable in its original state within the parameter bounds.
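The variation steps above can be sketched in a few lines. This is a minimal Python illustration of the lognormal step-size update (13) and the bounded Gaussian mutation for a single parent; recombination and the (μ, λ) selection loop are omitted, and all names are ours:

```python
import math
import random

def vary(parent_x, parent_sigma, lower, upper, phi_star=1.0):
    """Create one offspring from one parent: lognormal self-adaptation of the
    'mean step sizes' followed by Gaussian mutation with up to 10 retries
    whenever a component falls outside its parametric bounds."""
    n = len(parent_x)
    tau = phi_star / math.sqrt(2.0 * math.sqrt(n))   # per-component rate
    tau_prime = phi_star / math.sqrt(2.0 * n)        # global rate
    global_n = random.gauss(0.0, 1.0)                # shared across components
    sigma = [s * math.exp(tau_prime * global_n + tau * random.gauss(0.0, 1.0))
             for s in parent_sigma]
    x = []
    for j in range(n):
        xj = parent_x[j] + sigma[j] * random.gauss(0.0, 1.0)
        retries = 0
        while not (lower[j] <= xj <= upper[j]) and retries < 10:
            xj = parent_x[j] + sigma[j] * random.gauss(0.0, 1.0)
            retries += 1
        if not (lower[j] <= xj <= upper[j]):
            xj = parent_x[j]     # give up: keep the parent value, in bounds
        x.append(xj)
    return x, sigma
```

Following equation (12), `parent_sigma` would be initialized to `(upper[j] - lower[j]) / sqrt(n)` for each component.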
B. Experimental Results and Discussions

Thirteen benchmark functions were used. The first twelve were taken from [11] and the thirteenth from [15]. The
details of these functions, including their original sources, are listed in the appendix. Problems g02, g03, g08,
and g12 are maximization problems; they were transformed into minimization problems using −f(x). For
each of the benchmark problems, 30 independent runs were performed using a (30, 200)-ES. All experiments
were performed in MATLAB. The source code may be obtained from the authors upon request. All runs were
terminated after G = 1750 generations, except for g12, which was run for 175 generations. Problem g12 is the
harder version studied in [11], where the feasible region of the search space consists of 9³ disjoint spheres
with a radius of 0.25.
Table II summarizes the experimental results we obtained using P_f = 0.45. The median number of gen-
erations for finding the best solution in each run is indicated by G_m in the table. The table also shows the
known 'optimal' solution for each problem and statistics for the 30 independent runs: the best objective
value found, the median, mean, standard deviation, and worst value found. The statistics are based on feasible
solutions only. All equality constraints have been converted into inequality constraints, |h(x)| − δ ≤ 0, using
the degree of violation δ = 0.0001. As a result of this approximation, some results might be better than the
stated optimum. However, the tolerated violation is more stringent than in other work [15], where δ = 0.001
was used.
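The equality-to-inequality conversion just described is mechanical; a small sketch (the helper and the sample constraint are ours, with the paper's δ = 0.0001):

```python
def equality_to_inequality(h, delta=1e-4):
    """Turn an equality constraint h(x) = 0 into the inequality
    g(x) = |h(x)| - delta <= 0, tolerating violations up to delta."""
    return lambda x: abs(h(x)) - delta

# A hypothetical equality constraint x1 + x2 = 1:
g = equality_to_inequality(lambda x: x[0] + x[1] - 1.0)
print(g([0.49996, 0.5]) <= 0)  # True: violation 4e-5 is within delta
print(g([0.4, 0.5]) <= 0)      # False: violation 0.1 exceeds delta
```

Points inside the δ-band around the equality surface count as feasible, which is why a minimizer may report objective values slightly better than the true constrained optimum.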
In comparison with the latest results in the literature [14], the results in Table II are significantly better for
all but one problem. While 70 × 20000 function evaluations were used for each problem and only 20 runs were
carried out in [14] (for Experiment #2 in [14], which gave better results than Experiment #1), we have used
a maximum of only 200 × 1750 function evaluations for each problem and carried out 30 independent runs for
all problems.
For problems g01, g03, g04, g08, g11, and g12, our algorithm has consistently found the optimal solution
for all 30 runs, while the algorithm in [14] did not find any for problems g01, g03, g04, and g12 (the more
difficult version), and found the optimal solution only in some out of 20 runs for problem g08. The average
result in [14] for g08 was −0.0891568, when the optimal solution is −0.095825.
For problem g02, the algorithm given by [14] was more consistent and performed better on average, but
worse in terms of the best result. Its average result was −0.79671 over 20 runs, while ours was −0.781975 over
30 runs. However, our algorithm was capable of finding better solutions. The best solution found by our
algorithm was −0.803515, while the best in [14] was −0.79953. It is also interesting to note that the median
of our results was −0.785800, which was much better than the average. A closer look at our results revealed
that 6 out of 30 runs obtained a solution better than the best offered in [14].
For problem g04, −30664.5 was reported as being "by far the best value reported by any evolutionary
system for this test case" [11]. Our algorithm has now improved this 'record' substantially by finding the
optimum consistently. Homaifar et al. [10] found a similar solution to ours for g04 using a genetic algorithm.
Unfortunately, that solution violated two constraints. Another similar solution was found by Colville [3] using
The minus sign was added to the average result because we transformed the maximization problem into the minimization one.
After this paper had been submitted, Petrowski and Hamida [18] reported another algorithm which could also find the optimum
consistently. However, few details about the algorithm, the parameters used, and the experimental setup were described in their one-
page paper. The optimal solution found by them was given to only one digit after the decimal point. Problems g03, g05, g11,
g12, and g13 were not included in their studies.
TABLE II
EXPERIMENTAL RESULTS ON THIRTEEN BENCHMARK FUNCTIONS USING ES WITH STOCHASTIC RANKING (P_f = 0.45).
30 INDEPENDENT RUNS WERE CARRIED OUT.

fcn    optimal      best         median       mean         st. dev.  worst        G_m
g01    −15.000      −15.000      −15.000      −15.000      0.0E+00   −15.000       741
g02    −0.803619    −0.803515    −0.785800    −0.781975    2.0E−02   −0.726288    1086
g03    −1.000       −1.000       −1.000       −1.000       1.9E−04   −1.000       1146
g04    −30665.539   −30665.539   −30665.539   −30665.539   2.0E−05   −30665.539    441
g05    5126.498     5126.497     5127.372     5128.881     3.5E+00   5142.472      258
g06    −6961.814    −6961.814    −6961.814    −6875.940    1.6E+02   −6350.262     590
g07    24.306       24.307       24.357       24.374       6.6E−02   24.642          —
g08    −0.095825    −0.095825    −0.095825    −0.095825    2.6E−17   −0.095825     381
g09    680.630      680.630      680.641      680.656      3.4E−02   680.763       577
g10    7049.331     7054.316     7372.613     7559.192     5.3E+02   8835.655      642
g11    0.750        0.750        0.750        0.750        8.0E−05   0.750           —
g12    −1.000000    −1.000000    −1.000000    −1.000000    0.0E+00   −1.000000       —
g13    0.053950     0.053957     0.057006     0.067543     3.1E−02   0.216915      349

(Entries marked — were not legible in this copy.)
a mathematical programming technique. However, it is unclear how those two techniques [10], [3] would
perform on a larger set of benchmark functions such as we used here.
For problem g05, which involves equality constraints, the algorithm given in [14] "did not provide quality
results," hence no results were given in their paper. Our algorithm has consistently found feasible solutions,
and some very good results were obtained. For example, the best result found was 5126.497 and the average was
5128.881. The best result was even better than the optimal solution of 5126.498. This is the consequence of
using inequalities to approximate each equality, although we used a very small δ.
For problem g06, our algorithm performed significantly better than the algorithm in [14] in terms of the
average as well as the best results. Our average result was −6875.940, while theirs was −6342.6. Our algorithm
has found the global optimum 20 times out of 30 runs, while their algorithm never found the optimal
solution.
For problems g07, g09, and g10, our algorithm outperformed the algorithm given in [14] again in terms of
all three criteria: the average, best, and worst results. Both algorithms performed well and found the optimal
solution for problem g11.
For problem g13, our algorithm outperformed all six constraint-handling methods studied in [15] in terms
of the best, median, and worst results. Table III summarizes the comparison between our results and the
latest results [14], [15] we can find in the literature.
TABLE III
COMPARISON BETWEEN OUR ALGORITHM (INDICATED BY RY) AND KOZIEL AND MICHALEWICZ'S (INDICATED BY KM) [14].
FOR PROBLEM g13, THE RESULT WAS TAKEN FROM METHOD #4 IN [15]. THE TWO VALUES IN THE
MEAN COLUMN FOR PROBLEM g13 REPRESENT MEDIANS.

fcn    optimal      best (RY)    best (KM)    mean (RY)    mean (KM)     worst (RY)   worst (KM)
g01    −15.000      −15.000      −14.7864     −15.000      −14.7082      −15.000      −14.6154
g02    −0.803619    −0.803515    −0.79953     −0.781975    −0.79671      −0.726288            —
g03    −1.000       −1.000       −0.9989      −1.000       −0.9978       −1.000               —
g04    −30665.539   −30665.539   −30664.5     −30665.539   −30655.3      −30665.539   −30645.9
g05    5126.498     5126.497            —     5128.881            —      5142.472             —
g06    −6961.814    −6961.814    −6952.1      −6875.940    −6342.6       −6350.262    −5473.9
g07    24.306       24.307       24.620       24.374       24.826        24.642               —
g08    −0.095825    −0.095825    −0.0958250   −0.095825    −0.0891568    −0.095825    −0.0291438
g09    680.630      680.630      680.91       680.656      681.16        680.763      683.18
g10    7049.331     7054.316     7147.9       7559.192     8163.6        8835.655             —
g11    0.750        0.750        0.750        0.750        0.75          0.750        0.75
g12    −1.000000    −1.000000    −0.99999857  −1.000000    −0.999134613  −1.000000    −0.991950498
g13    0.053950     0.053957     0.054        0.057006     0.064         0.216915     0.557

(Entries marked — were not legible in this copy; no KM results are available for g05.)
Our algorithm will consistently find the optimum when P_f = 0.425; see also the results for P_f = 0 in Table IV.
TABLE IV
EXPERIMENTAL RESULTS ON THIRTEEN BENCHMARK FUNCTIONS USING ES WITH STOCHASTIC RANKING (P_f = 0).
30 INDEPENDENT RUNS WERE CARRIED OUT.

fcn    optimal      best         median       mean         st. dev.  worst        G_m
g01    −15.000      −15.000      −15.000      −15.000      0.0E+00   −15.000       697
g02    −0.803619    −0.803578    −0.785253    −0.783049    1.5E−02   −0.750656    1259
g03    −1.000       −0.327       −0.090       −0.105       7.2E−02   −0.014          —
g04    −30665.539   −30665.539           —    −30664.710   3.8E+00   −30649.7      632
g05    5126.498     5126.945     5225.100     5348.683     2.7E+02   6050.66         —
g06    −6961.814    −6961.814    −6961.814    −6961.814    1.9E−12   −6961.814     946
g07    24.306       24.322       24.367       24.382       5.9E−02   24.598        546
g08    −0.095825    −0.095825    −0.095825    −0.095825    2.7E−17   −0.095825     647
g09    680.630      680.632      680.657      680.671      3.8E−02   680.772       414
g10    7049.331     7117.416     7336.280     7457.597     3.4E+02   8464.816      530
g11    0.750        0.750        0.953        0.937        5.4E−02   0.973        1750
g12    −1.000000    −0.999972    −0.999758    −0.999766    1.4E−04   −0.999488       —
g13    0.053950     0.919042     0.997912     0.993372     1.5E−02   0.998316     1750

(Entries marked — were not legible in this copy.)
In order to evaluate the impact of P_f on the results generated by our algorithm, we have run the same set
of experiments many times using P_f = 0, 0.025, …, 0.525. As expected, neither small nor large (i.e., > 0.5)
values of P_f gave very good results. The best results were obtained when 0.4 < P_f < 0.5.