
Self-adaptive dual-strategy differential evolution algorithm


Authors: Meijun Duan aff001;  Hongyu Yang aff001;  Shangping Wang aff003;  Yu Liu aff002
Authors place of work: National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu, China aff001;  College of Computer Science, Sichuan University, Chengdu, China aff002;  Science and Technology on Electronic Information Control Laboratory, Chengdu, China aff003
Published in the journal: PLoS ONE 14(10)
Category: Research Article
doi: https://doi.org/10.1371/journal.pone.0222706

Summary

Exploration and exploitation are contradictory in the differential evolution (DE) algorithm. In order to better balance the search behavior between exploitation and exploration, a novel self-adaptive dual-strategy differential evolution algorithm (SaDSDE) is proposed. Firstly, a dual-strategy mutation operator is presented based on the “DE/best/2” mutation operator, which has better local exploitation ability, and the “DE/rand/2” mutation operator, which has stronger global exploration ability. Secondly, a scaling factor self-adaptation strategy is proposed in an individual-dependent and fitness-dependent way without extra parameters. Thirdly, an exploration ability control factor is introduced to adjust the global exploration ability dynamically during the evolution process. In order to verify and analyze the performance of SaDSDE, we compare it with 7 state-of-the-art DE variants and 3 non-DE algorithms on 30 benchmark test functions in 30 and 100 dimensions, respectively. The experimental results demonstrate that SaDSDE improves global optimization performance remarkably. Moreover, the performance superiority of SaDSDE becomes more significant as the problem dimension increases.

Keywords:

Algorithms – Optimization – Species diversity – Mutation detection – Evolutionary algorithms – Convergent evolution – Evolutionary immunology – Bacterial evolution

Introduction

DE is a population-based stochastic optimization method for real-parameter optimization problems, proposed by Storn and Price [1]. As a heuristic global-search evolutionary algorithm, differential evolution (DE) has an evolution mechanism similar to other evolutionary algorithms, all of which employ mutation, crossover and selection. Unlike other evolutionary algorithms (EAs), which rely on a predefined probability distribution function, DE selects individuals from the current population, computes their differences and scales them by a scaling factor. It is currently a very attractive evolutionary algorithm for optimization in continuous search spaces, mainly because of its simplicity, the small number of parameters to tune and its notable performance. At present, DE is widely used in research and engineering fields such as classification [2], neural network training [3, 4], data clustering [5] and solar energy [6].

Exploration and exploitation are contradictory for most EAs [7]. To further improve the exploitation and exploration capability of the DE algorithm, a novel self-adaptive dual-strategy differential evolution algorithm (SaDSDE) is proposed. In summary, the main contributions of this paper are as follows. (1) A dual-strategy mutation operator is proposed, in which the exploitation and exploration capabilities of the mutation operator are considered simultaneously. (2) A scaling factor self-adaptation strategy is proposed in an individual-dependent and fitness-dependent way. (3) A dynamic adjustment strategy for the exploration ability control factor is introduced to adjust the exploration ability of DE in different stages of the evolution process. (4) The performance of SaDSDE is verified by comparing it with 7 existing state-of-the-art DE variants and 3 non-DE algorithms on 30 benchmark functions. The experimental results demonstrate the effectiveness of the proposed SaDSDE. The performance sensitivities to population size and crossover probability, as well as the impacts of the proposed algorithmic components, are also investigated.

The rest of this paper is organized as follows. Section 2 briefly introduces the original DE algorithm and reviews related work on DE variants. In Section 3, the proposed SaDSDE algorithm is presented in detail. The experimental comparisons and analysis with 7 state-of-the-art DE variants and 3 non-DE algorithms are given in Section 4. Finally, Section 5 draws the conclusions of this work and outlines future work.

Related work

2.1 Original differential evolution algorithm

DE is based on population evolution. Generally, DE is composed of four components: initialization, mutation, crossover and selection. After initialization, DE enters a loop of mutation, crossover and selection. The main components are introduced as follows. Without loss of generality, this paper considers minimization problems. A D-dimensional minimization problem can be stated as follows:
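In standard notation, consistent with the definitions below, the problem and its candidate solutions can be written as

$$\min f(X_i), \qquad X_i = (x_{i,1}, x_{i,2}, \dots, x_{i,D}), \qquad x_{i,j} \in \left[x_j^{\min},\, x_j^{\max}\right],$$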

where i = {1, 2, ⋯, Np}, Np is the population size, xjmin and xjmax are respectively the lower bound and upper bound of xj.

2.1.1 Mutation

The mutation operator mainly generates a mutation vector by scaling the difference between distinct individuals. Many mutation operators have been proposed by Storn and Price [8, 9]; a widely used mutation strategy is listed as follows:
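In the notation defined below, this widely used “DE/rand/1” strategy reads

$$V_i^t = X_{r_1}^t + F \cdot \left(X_{r_2}^t - X_{r_3}^t\right),$$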

where t indicates the generation, the indexes r1 ~ r3 are distinct integers randomly chosen from the set {1, 2, ⋯, Np}\{i}, and V_i^t is the mutation vector of the ith target vector X_i^t. The scaling factor F controls the amplification of the difference vector and is closely related to convergence speed. A small scaling factor favors exploitation, whereas a large scaling factor favors exploration. Note that the optimization performance of DE mainly depends on the choice of mutation strategies and the setting of control parameters.

2.1.2 Crossover

The trial vector is generated by a crossover operation on the mutation vector V_i^t and the ith target vector X_i^t. The diversity of the population can be increased by the crossover operation. There are mainly three classic crossover operators: binomial crossover, exponential crossover and rotationally invariant arithmetic crossover. The following binomial crossover operator is the most commonly used.
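Following the condition used in line #23 of Algorithm 1, the binomial crossover can be written as

$$U_{i,j}^t = \begin{cases} V_{i,j}^t, & \text{if } rand_j < CR \ \text{or} \ j = j_{rand}, \\ X_{i,j}^t, & \text{otherwise,} \end{cases} \qquad j = 1, 2, \dots, D,$$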

where rand_j is a uniformly distributed random variable within (0, 1), and j_rand is randomly chosen from the set {1, 2, ⋯, D}, which prevents the trial vector U_i^t from being a direct copy of X_i^t. The crossover probability CR is a user-defined crossover factor restricted to (0, 1], which controls the diversity of the population and is closely connected with exploration power.

2.1.3 Selection

After crossover, the objective function value of the trial vector is obtained according to the optimization problem. The newly generated trial vectors are evaluated and compared with the target vectors. A greedy strategy is adopted to perform the selection operation in DE: the better of the target vector X_i^t and the trial vector U_i^t survives into the next generation. Mathematically,
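$$X_i^{t+1} = \begin{cases} U_i^t, & \text{if } f(U_i^t) \le f(X_i^t), \\ X_i^t, & \text{otherwise,} \end{cases}$$

which matches lines #29–#33 of Algorithm 1.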

2.2 Literature review

The performance of DE is mainly influenced by the mutation mode, the control parameters (i.e., population size (Np), scaling factor (F) and crossover probability (CR)) and the crossover mode. Inappropriate configurations of mutation strategies and control parameters can cause stagnation or premature convergence. Therefore, many scholars have proposed a series of improved DE algorithms [10–46].

To avoid manually tuning parameters, researchers have developed techniques to set the parameter values automatically. Tanabe and Fukunaga [10] used a historical memory of successful control parameters to guide the selection of future control parameters and proposed a parameter adaptation technique for DE (SHADE). Rcr-JADE [11] is an improved version of JADE [12] that employs successful parameters to repair the crossover rate. To enhance the performance of the L-SHADE algorithm, Awad et al. [13] proposed LSHADE-EpSin, which uses an adaptive approach based on sinusoidal formulas to adapt the scaling factor. In addition, some scholars have proposed adaptive strategies for the population size. Zhu et al. [14] proposed an adaptive population tuning scheme. In SapsDE [15], a self-adaptive population resizing mechanism is employed to adjust the population size. Chen et al. [16] proposed population adaptive differential evolution (PADE). Awad et al. [17] presented an ensemble sinusoidal differential evolution with niching-based population reduction (EsDEr-NR).

A large number of studies have been carried out on improving mutation strategies. Part of these studies focused on single-mutation modes. In JADE [12], an optional external archive is combined with a “DE/current-to-pbest/1” mutation strategy that utilizes historical information to direct the population search. Wang et al. [18] proposed a self-adaptive differential evolution algorithm with an improved mutation mode (IMMSADE) by improving “DE/rand/1”. Cai et al. [19] designed a neighborhood-dependent directional mutation operator and presented a neighborhood-adaptive DE (NaDE). A novel “DE/current-to-SP-best-ring/1” mutation operation is introduced in the decentralizing and coevolving differential evolution (DCDE) proposed by Tang [20]. A novel and effective adaptation scheme is used to update the crossover rate in the adaptive guided differential evolution algorithm (AGDE) [21]. He and Zhou [22] presented a novel DE variant with covariance matrix self-adaptation (DECMSA). In EFADE [23], a new triangular mutation operator is introduced. Cai et al. [24] presented an adaptive social learning (ASL) strategy to extract neighborhood relationship information. The best search strategies and parameters of an EA generally differ across optimization problems. Therefore, in order to improve the performance of an EA like DE, researchers have made efforts to realize ensembles of multiple strategies and parameters [25–33]. Qin et al. [25] proposed a self-adaptive DE algorithm (SaDE) in which both the trial vector generation strategies and their associated control parameter values are gradually self-adapted. Wang et al. [26] introduced a composite differential evolution algorithm (CoDE), which uses three trial vector generation strategies and three control parameter settings. Mallipeddi et al. [27] employed an ensemble of mutation strategies and control parameters with DE (EPSDE). Elsayed et al. [28] used multiple search operators in conjunction with multiple constraint handling techniques. Wu et al. [29] proposed a multi-population ensemble DE (MPEDE). Yeh et al. [30] mixed Gaussian mutation and the “DE/best/1” operator. Cui et al. [31] proposed an adaptive multiple-elites-guided composite differential evolution algorithm with a shift mechanism (AMECoDEs). Wu et al. [32] focused on the high-level ensemble of different DE variants and proposed a new algorithm named EDEV. Lin et al. [33] proposed an adaptive immune-inspired multi-objective algorithm (AIMA).

In addition to the modification of mutation strategies and the optimization of control parameters, enhancements of crossover operators have also been investigated, such as a covariance matrix learning operator [34], hybrid linkage crossover [35], a crossover operator utilizing eigenvectors of the covariance matrix of individual solutions [36], a superior-inferior crossover scheme [37], an adaptive hybrid crossover operator (AHX) [38], an optional blending crossover scheme [39] and a multiple exponential recombination that inherits all the main advantages of existing crossover operators [40].

Through a synergistic mechanism, a hybrid algorithm can take advantage of the merits of different algorithms and thereby yield more favorable performance than a single algorithm. Some preliminary research shows that hybrid optimizers are effective and competent for global optimization. Li et al. [41] proposed a new hybrid algorithm combining differential evolution (DE) and the artificial bee colony (ABC) algorithm. Vaisakh et al. [42] came up with a hybrid approach involving differential evolution (DE) and the bacterial foraging optimization algorithm (BFOA). Ponsich and Coello [43] hybridized DE with Tabu Search (TS). Gu et al. [44] mixed binary differential evolution (BDE) and Tabu search (TS) to propose a memetic algorithm. Le et al. [45] merged differential evolution and harmony search. Nenavath and Jatoth [46] hybridized the sine cosine algorithm with differential evolution.

SaDSDE algorithm

3.1 Dual-strategy mutation operator

Among the original mutation strategies [8, 9], “DE/best/1” and “DE/best/2” conduct mutation on the best individual; they have better local exploitation ability and fast convergence speed, but are prone to premature convergence. “DE/rand/1” and “DE/rand/2” mutate based on random individuals; they have stronger global exploration ability, but lack the guidance of the optimal individual, so their convergence speed is slow. “DE/current-to-best/1” performs mutation based on the parent individual and the optimal individual with high convergence precision, but it easily falls into local optima. The above mutation strategies are either too greedy or too stochastic. Therefore, in order to better balance exploitation and exploration, a novel dual-strategy mutation operator is proposed based on “DE/best/2” and “DE/rand/2”, which is shown in Eqs (5)–(7).
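Reconstructed from lines #19, #20 and #24 of Algorithm 1 (where the combination of Eq (7) is applied component-wise during crossover), the dual-strategy operator of Eqs (5)–(7) can be written as

$$V_{i\_1}^t = X_b^t + F_i^t \cdot \left(X_{r_1}^t - X_{r_2}^t\right) + F_i^t \cdot \left(X_{r_3}^t - X_{r_4}^t\right), \tag{5}$$

$$V_{i\_2}^t = X_{r_1}^t + F_i^t \cdot \left(X_{r_2}^t - X_{r_3}^t\right) + F_i^t \cdot \left(X_{r_4}^t - X_{r_5}^t\right), \tag{6}$$

$$V_i^t = \omega_{i\_1} \cdot V_{i\_1}^t + \lambda \cdot \omega_{i\_2} \cdot V_{i\_2}^t, \tag{7}$$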

where X_b^t is the best solution at the current generation t, r1, r2, r3, r4 and r5 are randomly chosen from the set {1, ⋯, Np}\{i}, and F_i^t is the scaling factor of the ith individual. ω_{i_1} and ω_{i_2} are the weights of the two mutation operators, which take the value of either 0 or 1: if rand_i() < 0.5, ω_{i_1} = 1 and ω_{i_2} = 0; otherwise, ω_{i_1} = 0 and ω_{i_2} = 1. The two weights satisfy ω_{i_1} + ω_{i_2} = 1. In the above mutation scheme, only one of the two strategies is used for each vector, depending on a uniformly distributed random value within the range (0, 1). λ is the exploration ability control factor and adjusts the exploration ability of the mutation operator.

3.2 The scaling factor self-adaption strategy

The solution quality and the fitness are closely related: the larger the fitness, the better the solution quality. Therefore, the scaling factor is tuned according to the fitness in a self-adaptive and individual-dependent way, as described in Eq (8).
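As used in line #08 of Algorithm 1,

$$F_i^t = \frac{f_{\max}^t - f\left(X_i^t\right)}{f_{\max}^t - f_{\min}^t}, \tag{8}$$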

where f(X_i^t) is the fitness of the individual X_i^t, f_max^t is the maximum fitness at the current generation, and f_min^t is the minimum fitness. In particular, when f_max^t is equal to f_min^t, F_i^t is set to a uniformly distributed random variable within (0, 1) to increase disturbance. Superior individuals tend to be assigned smaller parameter values so as to exploit their neighborhoods, in which better solutions may be found, while inferior individuals tend to be assigned larger parameter values so as to explore further areas of the solution space.

3.3 The dynamic adjustment strategy of the exploration ability control factor λ

A key problem in many evolutionary algorithms is premature convergence, especially in the later stage of evolution. However, promoting strong global exploration ability at all stages of an evolutionary process might even be counterproductive in a phase where high exploitation is needed. In general, most algorithms perform more effectively with a high convergence rate in the earlier stage of the searching process relative to the later stage, especially for multimodal functions. At the later stage, the algorithms are easily trapped in local optimal solutions due to poor population diversity. Therefore, the dynamic adjustment strategy of the exploration ability control factor is introduced according to the evolution property, which is shown in Eq (9).
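As computed in line #04 of Algorithm 1,

$$\lambda = 1 - \cos\!\left(\left(\frac{t}{T}\right)^{2}\right), \tag{9}$$

so that λ increases smoothly from 0 towards 1 − cos(1) ≈ 0.46 as t approaches the maximum generation T.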

Taking 1000 generations as an example, the dynamic adjustment curve of the exploration ability control factor is shown in Fig 1. At the earlier stage of evolution, the population diversity is good, so λ is assigned a smaller value. In response to the decreased diversity of the population at the later stage of evolution, λ is assigned a larger value to increase the proportion of the explorative mutation operator.

Fig. 1. The dynamic adjustment curve of λ.

Based on all the components described above, the pseudo-code of the complete procedure of SaDSDE is outlined in Algorithm 1. Compared with the basic DE, the time complexity of the proposed SaDSDE is still O(T * Np * D), without extra time complexity. Herein, T, Np and D are respectively the maximum generation number, the population size and the dimension of the problem.

Algorithm 1. Pseudo-Code for the SaDSDE

Begin
#01 t = 0;
#02 Randomly initialize an initial population X(0) = {X_{i,j}(0) | ∀i, i = 1,…,Np; ∀j, j = 1,…,D}, X_{i,j}(0) ∈ [x_j^min, x_j^max]; set T = 1000, CR = 0.9;
#03 For t = 1 to T Do
#04  Calculate the exploration ability control factor λ = 1 − cos((t/T)^2);
#05  Evaluate f(X_i^t) for all i = 1,…,Np and find X_b^t;
#06  For each individual X_i^t in the current population Do
#07   If f_max^t ≠ f_min^t Then
#08    Update F_i^t = (f_max^t − f(X_i^t)) / (f_max^t − f_min^t);
#09   Else
#10    Set F_i^t = rand(0,1) to increase disturbance;
#11   End If
#12   Generate a random real number r_i^t = rand(0,1);
#13   If r_i^t < 0.5 Then
#14    ω_{i_1} = 1 and ω_{i_2} = 0;
#15   Else
#16    ω_{i_1} = 0 and ω_{i_2} = 1;
#17   End If
#18   Generate r1, r2, r3, r4, r5 from the set {1,…,Np}\{i} randomly;
#19   V_{i_1}^t = X_b^t + F_i^t·(X_{r1}^t − X_{r2}^t) + F_i^t·(X_{r3}^t − X_{r4}^t);
#20   V_{i_2}^t = X_{r1}^t + F_i^t·(X_{r2}^t − X_{r3}^t) + F_i^t·(X_{r4}^t − X_{r5}^t);
#21   For j = 1 to D Do
#22    Generate j_rand = randint(1, D);
#23    If j = j_rand or rand(0,1) < CR Then
#24     U_{i,j}^t = ω_{i_1}·V_{i_1,j}^t + λ·ω_{i_2}·V_{i_2,j}^t;
#25    Else
#26     U_{i,j}^t = X_{i,j}^t;
#27    End If
#28   End For
#29   If f(U_i^t) ≤ f(X_i^t) Then
#30    X_i^{t+1} = U_i^t;
#31   Else
#32    X_i^{t+1} = X_i^t;
#33   End If
#34  End For
#35  t = t + 1;
#36 End For
End
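For illustration, a minimal Python sketch of Algorithm 1 is given below. The objective function, bounds and the sphere-function test at the bottom are illustrative placeholders rather than part of the original study; boundary handling is omitted, as in the pseudo-code, and the five random indices are assumed to be mutually distinct.

import numpy as np

def sadsde(f, lower, upper, Np=100, T=1000, CR=0.9, seed=None):
    """Minimal sketch of SaDSDE (Algorithm 1) for minimizing f over box bounds."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    D = lower.size
    X = lower + rng.random((Np, D)) * (upper - lower)   # #02: random initialization
    fit = np.array([f(x) for x in X])                   # objective values of the population

    for t in range(1, T + 1):
        lam = 1.0 - np.cos((t / T) ** 2)                # #04: exploration ability control factor
        b = int(np.argmin(fit))                         # #05: index of the best individual X_b^t
        f_min, f_max = fit.min(), fit.max()

        for i in range(Np):
            # #07-#11: scaling factor self-adaptation (Eq 8); random disturbance if all equal
            F = (f_max - fit[i]) / (f_max - f_min) if f_max != f_min else rng.random()

            # #12-#17: dual-strategy weights, exactly one mutation vector is active
            w1, w2 = (1, 0) if rng.random() < 0.5 else (0, 1)

            # #18: five random indices different from i (assumed mutually distinct)
            r1, r2, r3, r4, r5 = rng.choice(
                [k for k in range(Np) if k != i], size=5, replace=False)

            # #19-#20: "DE/best/2" and "DE/rand/2" mutation vectors
            V1 = X[b] + F * (X[r1] - X[r2]) + F * (X[r3] - X[r4])
            V2 = X[r1] + F * (X[r2] - X[r3]) + F * (X[r4] - X[r5])

            # #21-#28: binomial crossover on the weighted combination of V1 and V2
            U = X[i].copy()
            mask = rng.random(D) < CR
            mask[rng.integers(D)] = True                # j_rand: at least one component from V
            U[mask] = (w1 * V1 + lam * w2 * V2)[mask]

            # #29-#33: greedy selection
            fU = f(U)
            if fU <= fit[i]:
                X[i], fit[i] = U, fU

    best = int(np.argmin(fit))
    return X[best], fit[best]

# Illustrative usage on the sphere function (not one of the paper's benchmark settings).
if __name__ == "__main__":
    best_x, best_f = sadsde(lambda x: float(np.sum(x ** 2)),
                            lower=[-100.0] * 10, upper=[100.0] * 10,
                            Np=50, T=200, seed=1)
    print("best objective:", best_f)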

Experimental analysis

4.1 Experimental setup

In our experiments, 30 benchmark test functions from references [47–49] are used to test the performance of SaDSDE; they are listed in Table 1. f1–f11 are unimodal functions and f12–f30 are multimodal functions.

Tab. 1. Benchmark test functions.

4.2 Time complexity

The time complexity is calculated as described in [47]. The codes are implemented in Matlab 2015a and run on a PC with an Intel (R) Core (TM) i5-6500 CPU (3.20GHz) and 8GB RAM. The algorithm complexity is listed in Table 2. In Table 2, T0 denotes the running time of the following program:
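The timing program itself is specified in [47]; the following Python rendering of one commonly cited variant of that loop is an assumption, included only to make the measurement concrete.

import math
import time

# Hypothetical rendering of the CEC reference timing loop used to obtain T0 (see [47]).
start = time.perf_counter()
x = 0.55
for _ in range(1_000_000):
    x = x + x; x = x / 2; x = x * x
    x = math.sqrt(x); x = math.log(x); x = math.exp(x)
    y = x / x
T0 = time.perf_counter() - start
print("T0 =", T0, "seconds")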

Tab. 2. Time complexity (time in seconds).

T1 is the computing time of 200,000 evaluations of Function 3 (the Elliptic function) alone at a given dimension D. T2 is the complete computing time of the algorithm with 200,000 evaluations of the D-dimensional Elliptic function. T2 is evaluated five times, and T̂2 denotes the mean T2. Finally, the complexity of the algorithm is reflected by T0, T1, T̂2 and (T̂2 − T1)/T0.

4.3 Sensitivity analysis to control parameters

4.3.1 Sensitivity analysis to population size

The effect of the population size in SaDSDE is investigated on 30-dimensional problems. Np is set to 50, 75, 100, 125, 150, 175, 200, 225 and 250. For each test function and population size, the maximum generation number is still set to 1000. From the statistical Friedman test [50] results in Fig 2, the performance of SaDSDE decreases as the population size is reduced, because a larger population provides higher exploration ability.

Fig. 2. Friedman test results at D = 30 over 30 independent runs.

Although SaDSDE with Np = 250 is the best in Fig 2, there is no obvious performance difference in Table 3. Therefore, we can say that SaDSDE is not sensitive to the population size.

Tab. 3. The results of Wilcoxon’s rank-sum test over 30 independent runs.

4.3.2 Sensitivity analysis to crossover probability

The impact of the crossover probability CR on the performance of the proposed algorithm is also analyzed. The candidate values of CR are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9. We perform the Friedman test and Wilcoxon’s rank-sum test [50] on the optimization results for the different CR values. The test results are shown in Fig 3 and Table 4, respectively. From Fig 3, we can observe that the performance of SaDSDE is best at CR = 0.9.

Fig. 3. The result of Friedman test with 30 variables over 30 independent runs.
Tab. 4. The results of Wilcoxon’s rank-sum test over 30 independent runs.

From Table 4, it can be observed that SaDSDE is not sensitive to CR, except for CR = {0.1, 0.2, 0.3} at the 0.1 significance level. Considering the trade-off between convergence precision and convergence rate, CR = 0.9 is an appropriate value. Therefore, we set CR = 0.9 in the following experiments unless noted otherwise.

4.4 Parameter settings and involved algorithms

For rigorous performance verification, SaDSDE is compared with the following 7 well-established DE variants (i.e., Rcr-JADE [11], EsDEr-NR [17], IMMSADE [18], AGDE [21], EFADE [23], MPEDE [29] and EDEV [32]) and 3 non-DE algorithms (i.e., social learning particle swarm optimization (SL-PSO) [51], grey wolf optimization (GWO) [52] and the whale optimization algorithm (WOA) [53]). Most parameters of the compared algorithms are kept the same as in their original publications, as shown in Table 5.

Tab. 5. Parameter settings.

The common parameters are set as follows: the maximum number of iterations is T = 1000, the population size is Np = 100, and the number of independent runs is 30.

4.5 Experimental comparisons

4.5.1 Numerical analysis

The optimization performance is evaluated in terms of the mean and standard deviation (STD) of the function solutions, and the best and the second best results are marked in bold and italic, respectively. In order to show the performance clearly, the numbers of best and second best results are summarized graphically. In addition, 4 non-parametric statistical tests [50] are used to compare the performance of all the algorithms: the Wilcoxon signed-rank test, the Friedman test, the Kruskal–Wallis test and the Wilcoxon rank-sum test. The Friedman test and the Kruskal–Wallis test are conducted on the optimization results to evaluate the overall performance and obtain the rankings of the algorithms. The Wilcoxon signed-rank test at the 0.05 significance level and the Wilcoxon rank-sum test are used to verify the differences among algorithms for single and multiple problems, respectively. The signs “+”, “-” and “≈” indicate that the performance of SaDSDE is significantly better than, worse than and similar to its competitor, respectively. R+ denotes the sum of ranks for the test problems on which the first algorithm performs better than the second algorithm, and R− denotes the sum of ranks for the test problems on which the first algorithm performs worse than the second algorithm; larger rank sums indicate a larger performance discrepancy. “p-value” indicates the probability associated with rejecting the null hypothesis. “Yes” indicates that the performance of the first algorithm is significantly better than that of the second algorithm, and “No” indicates that there is no significant performance discrepancy.
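As an illustration of how such tests can be reproduced, the sketch below applies the four nonparametric tests with scipy.stats to synthetic result vectors; the algorithm names and data here are placeholders, not the paper’s actual results.

import numpy as np
from scipy import stats

# Synthetic stand-ins for the best objective values of three algorithms over 30 runs.
rng = np.random.default_rng(0)
results = {
    "SaDSDE":   rng.random(30) * 1e-8,
    "Rcr-JADE": rng.random(30) * 1e-4,
    "MPEDE":    rng.random(30) * 1e-5,
}

# Wilcoxon signed-rank test (paired) between SaDSDE and one competitor, alpha = 0.05.
stat, p = stats.wilcoxon(results["SaDSDE"], results["Rcr-JADE"])
print("signed-rank p-value:", p)

# Wilcoxon rank-sum test between two independent samples.
print("rank-sum p-value:", stats.ranksums(results["SaDSDE"], results["MPEDE"]).pvalue)

# Friedman and Kruskal-Wallis tests across all algorithms for overall ranking.
print("Friedman p-value:", stats.friedmanchisquare(*results.values()).pvalue)
print("Kruskal-Wallis p-value:", stats.kruskal(*results.values()).pvalue)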

A. Comparison with 7 improved DE variants

In this experiment, the proposed algorithm is compared with 7 improved DE variants on 30-dimensional and 100-dimensional problems. The experimental results are listed in Tables 6 and 7. To show the performance clearly, we summarize the number of the best, the second best results and global optimums graphically in Figs 4 and 5, respectively. Wilcoxon signed-rank test results and the average rankings of all the DEs are shown in Figs 6 and 7, respectively. In addition, Wilcoxon rank-sum test results are represented in Table 8.

Tab. 6. The optimization results obtained by SaDSDE and 7 DE variants at D = 30.
Tab. 7. The optimization results obtained by SaDSDE and 7 DE variants at D = 100.
Tab. 8. The results of Wilcoxon rank-sum test over 30 independent runs.
Fig. 4. Number of cases on which each algorithm performs the best and second best in the comparison.
(A) D = 30. (B) D = 100.
Fig. 5. Number of cases on which each algorithm obtains global optimum.
(A) D = 30. (B) D = 100.
Fig. 6. Wilcoxon signed-rank test results at the 0.05 significance level.
(A) D = 30. (B) D = 100.
Fig. 7. Non-parametric statistical test results over 30 independent runs.
(A) Friedman test results. (B) Kruskal_Wallis test results.

From Tables 6 and 7, we can draw the following conclusions. (1) SaDSDE can obtain the global optimal solution on 10 unimodal functions (f1–f10) and 12 multimodal functions (f12–f16, f18, f20, f22–f24 and f25–f28). (2) Except for f11, f17, f20–f21, f27 and f29–f30, the convergence precision of SaDSDE is the best at D = 30 and D = 100, and SaDSDE is comparable to the other compared algorithms on f19, f22 and f28. In addition, at D = 30, Rcr-JADE is best on f17 and f29, EsDEr-NR is best on f11, f25 and f29, IMMSADE is best on f22, AGDE is best on f21, and MPEDE is best on f17, f20 and f29; at D = 100, EsDEr-NR is best on f11, IMMSADE is best on f17, MPEDE is best on f20–f21, f25 and f29, and EDEV is best on f30. (3) The optimization performance of the other algorithms decreases rapidly in high dimensions, whereas SaDSDE still keeps excellent optimization performance.

From the statistical histogram in Fig 4, SaDSDE, Rcr-JADE, EsDEr-NR, IMMSADE, AGDE, EFADE, MPEDE and EDEV obtain the best results on 23, 6, 7, 6, 2, 1, 8 and 5 functions and the second best results on 0, 1, 10, 2, 1, 4, 9 and 3 functions at D = 30, respectively. For the high-dimensional tests, the advantage of the proposed SaDSDE is more prominent: SaDSDE and the 7 improved DE variants obtain the best results on 23, 0, 1, 4, 0, 0, 5 and 1 functions and the second best results on 2, 1, 2, 11, 1, 2, 9 and 2 functions at D = 100, respectively.

According to the statistical results in Fig 5, SaDSDE obtains the global optimum on 22 (= 10 unimodal + 12 multimodal) functions in all dimensions, while the 7 DE variants in turn obtain the global optimum on 3 (= 1+2), 3 (= 1+2), 5 (= 1+4), 1 (= 1+0), 1 (= 1+0), 4 (= 1+3) and 4 (= 1+3) functions at D = 30. As the dimension increases, the number of global optima obtained by the compared algorithms decreases greatly.

As shown in Fig 6, SaDSDE is significantly better than Rcr-JADE, EsDEr-NR, IMMSADE, AGDE, EFADE, MPEDE and EDEV on 19, 18, 19, 22, 21, 17 and 18 functions out of 30 at D = 30, respectively. Conversely, SaDSDE is worse than the 7 DE variants on only 6, 7, 2, 5, 6, 6 and 7 functions, respectively. At D = 100, SaDSDE is significantly better than all the competitors on at least 20 out of 30 functions, and the advantage of the proposed SaDSDE is more prominent.

According to Fig 7, SaDSDE is the best, and EsDEr-NR and MPEDE are respectively the second best at D = 30 and D = 100.

As shown in Table 8, SaDSDE obtains higher R+ values than R− values against all the compared DEs, and all the p-values are less than 0.05. This indicates that SaDSDE significantly outperforms the other compared DE algorithms.

B. Comparison with 3 non-DE algorithms

DE simulates natural evolution (mutation, crossover and selection) for global search. Unlike DE, SL-PSO [51], GWO [52] and WOA [53] are swarm intelligence optimization algorithms that simulate the social behavior of natural biological groups (such as foraging, nesting, migration, hunting and predation). SL-PSO, GWO and WOA are highly competitive swarm intelligence optimization algorithms. Therefore, in order to further verify the effectiveness of SaDSDE, it is compared with the SL-PSO, GWO and WOA algorithms. The obtained results (i.e., mean and STD) and the Wilcoxon signed-rank test results at the 0.05 significance level are shown in Tables 9 and 10. The numbers of best and second best results are shown in Fig 8. Additionally, the statistical analysis results for all test functions are given in Fig 9 and the Wilcoxon rank-sum test results are reported in Table 11.

Tab. 9. The optimization results obtained by SaDSDE and 3 non-DE algorithms at D = 30.
Tab. 10. The optimization results obtained by SaDSDE and 3 non-DE algorithms at D = 100.
Tab. 11. The results of Wilcoxon rank-sum test over 30 independent runs.
Fig. 8. Number of cases on which each algorithm performs the best and second best in the comparison.
(A) D = 30. (B) D = 100.
Fig. 9. Non-parametric statistical test results over 30 independent runs.
(A) Friedman test results. (B) Kruskal_Wallis test results.

Analyzing the test results in Tables 9 and 10, SaDSDE is significantly better than SL-PSO, GWO and WOA on 17, 20 and 15 functions out of 30 at D = 30, respectively. At D = 100, SaDSDE performs better than the 3 non-DE algorithms on 23, 19 and 12 functions, respectively. From the diagram in Fig 8, SaDSDE obtains the best results on 24 and 23 functions at D = 30 and D = 100, respectively. Fig 9 shows that SaDSDE gets the best ranking, followed by WOA. As shown in Table 11, SaDSDE is significantly better than the 3 non-DE algorithms.

4.5.2 Convergence analysis

In order to study the evolution properties, convergence curves are utilized to illustrate the differences, as shown in Fig 10. Furthermore, to illustrate the distribution of the results of each algorithm, box plots of the function solutions of all the algorithms on 6 functions (1 unimodal function and 5 multimodal functions) at D = 30 are depicted in Fig 11.

Fig. 10. Convergence curves of the mean function error values for six test functions at D = 30.
The horizontal axis and the vertical axis are generations and the mean function error values over 30 independent runs. The legends of Fig 10 (B-F) are the same as Fig 10(A). (A) f8. (B) f12. (C) f18. (D) f23. (E) f24. (F) f26.
Fig. 11. Box plots of the result of solution error at D = 30 over independent 30 runs.
(A) f8. (B) f12. (C) f18. (D) f23. (E) f24. (F) f26.

Fig 10 shows that SaDSDE can obtain the global optimum and has the best convergence rate on the 6 functions, reaching the optimum in the fewest generations, while the other compared algorithms are trapped in "evolution stagnation" on different functions. For example, the Non-Continuous Rastrigin’s function (f26) is a multimodal, non-separable and asymmetrical function with a huge number of local optima, so DE easily suffers premature convergence and becomes trapped in one of its many local minima. SaDSDE can find the global optimal solution, whereas IMMSADE, EFADE, AGDE and GWO are trapped in "evolution stagnation". In addition, Fig 11 shows that SaDSDE performs more consistently than the other algorithms on these problems.

As evident from the above analysis, the proposed SaDSDE demonstrates better convergence precision and robustness than most of the compared algorithms. To summarize, the proposed SaDSDE has the best convergence precision and stability. The reason is that the proposed mutation strategy, the scaling factor self-adaptation strategy and the dynamic adjustment strategy of the exploration ability control factor achieve a better balance between exploration and exploitation.

4.6 Efficiency analysis of proposed algorithmic components

The proposed algorithm represents a combined effect. Therefore, we analyze the efficiency of the proposed algorithmic components, including the dual-strategy mutation operator, the scaling factor self-adaptation strategy and the dynamic adjustment strategy of the exploration ability control factor. The SaDSDE variants used for this analysis are listed as follows.

  • To study the effectiveness of the scaling factor self-adaptation strategy, SaDSDE variants adopting the dynamic λ and a fixed scaling factor of F = 0.3, F = 0.7 or a random real number in [0, 1] are named SaDSDE-1, SaDSDE-2 and SaDSDE-3, respectively.

  • To verify the contribution of the dynamic adjustment strategy of the exploration ability control factor (λ), SaDSDE variants with the self-adaptive F and λ = 0.3, λ = 0.7 or a random real number in [0, 1] are named SaDSDE-4, SaDSDE-5 and SaDSDE-6, respectively.

In order to evaluate and compare the performance of the different SaDSDE variants, the Friedman test and Wilcoxon’s rank-sum test are employed, and the test results are given in Fig 12 and Table 12, respectively. Observing these results, we can draw the following conclusions. (1) From Fig 12, the proposed SaDSDE and SaDSDE-1 are respectively the best and the second best, followed by SaDSDE-4, SaDSDE-3, SaDSDE-2, SaDSDE-6 and SaDSDE-5. It is clear that the combined effect of the proposed algorithmic components is the best. (2) From Table 12, there is no significant performance difference between the proposed SaDSDE and the SaDSDE variants with the exploration ability control factor, while the proposed SaDSDE is significantly better than SaDSDE-5, which uses a larger exploration ability control factor. The validity of the two proposed strategies is verified by the above experimental comparisons. Note that the contribution of the dynamic adjustment strategy of the exploration ability control factor is larger than that of the scaling factor self-adaptation strategy. In other words, although the scaling factor self-adaptation is effective, SaDSDE is less susceptible to the scaling factor.

Fig. 12. The Friedman test results over 30 independent runs.
Tab. 12. The results of Wilcoxon’s rank-sum test of proposed SaDSDE and 6 SaDSDE variants over 30 independent runs.

4.7 Adaptability analysis to high-dimensional problems

In order to evaluate the adaptability of each algorithm to high-dimensional problems, we conduct the Wilcoxon signed-rank test and the Wilcoxon rank-sum test at the 0.05 significance level for each algorithm between its own results on 30 variables and 100 variables. The nonparametric statistical test results are shown in Fig 13 and Table 13. From Fig 13, we can obtain the following observations: (1) For EsDEr-NR, AGDE and EFADE, the low-dimensional performance is significantly better than the high-dimensional performance on all the test functions. (2) For SaDSDE, the low-dimensional performance is significantly better than and similar to the high-dimensional performance on 7 and 23 out of 30 functions, respectively. (3) For IMMSADE, GWO, WOA and SaDSDE, the difference is not statistically significant in Table 13, while the gap between R+ and R− for SaDSDE is significantly smaller than that for IMMSADE, GWO and WOA. Therefore, SaDSDE has the best adaptability to high-dimensional problems.

Fig. 13. The Wilcoxon’s signed-rank test results with a significance level of 0.05 over 30 independent runs.
Signs “+”, “-” and “≈” indicate the number of functions for which the performance of an algorithm on low-dimensional problems is better than, worse than and similar to its performance on high-dimensional problems, respectively.
Tab. 13. The results of Wilcoxon rank-sum test over 30 independent runs.

4.8 Discussions on the comparison results

A series of experimental comparisons has proved the effectiveness of SaDSDE. The reasons can be summarized as follows. (1) The novel dual-strategy mutation operator pays attention to exploitation and exploration ability simultaneously. Exploration enables the algorithm to search every promising solution area with good diversity, while exploitation enables the algorithm to execute a local search in promising solution areas to find the optimal point with a high convergence rate. (2) In the scaling factor self-adaptation strategy, the scaling factor of each individual is set to make full use of the differences in fitness. Therefore, superior individuals are assigned relatively smaller values to exploit their neighborhoods, in which better solutions may be found; conversely, inferior individuals are assigned larger values to explore further areas of the solution space. (3) The exploration ability control factor is adjusted dynamically according to the generation, which takes into account the evolutionary properties of different evolution stages. This achieves a good balance between exploration and exploitation, making the algorithm search promising solution areas and helping it escape local optima.

Conclusions

In this paper, a novel mutation operator combining the “DE/best/2” operator and the “DE/rand/2” operator is proposed. The “DE/rand/2” mutation operator can expand the search area in the later stage of evolution and avoid premature convergence, while the “DE/best/2” mutation operator can greatly accelerate the convergence speed. The performance of the algorithm is excellent even if CR is set to a fixed value. Experimental results show that: (1) SaDSDE is not sensitive to the population size and is easy to implement. (2) SaDSDE has the best performance among all the compared variants. (3) For the high-dimensional functions, with the same population size and number of iterations, SaDSDE still obtains excellent global optimization performance, while the other compared algorithms suffer serious performance degradation. In other words, SaDSDE has the best adaptability to high-dimensional problems.

As a continuation of this research, we will focus on multi-objective optimization (MOO) problems and some actual engineering applications, such as flight conflict resolution and image registration, etc.


References

1. Storn R, Price K. Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. California: University of California, Berkeley. 1995.

2. Zhai S, Jiang T. A new sense-through-foliage target recognition method based on hybrid differential evolution and self-adaptive particle swarm optimization-based support vector machine. Neurocomputing. 2015; 149:573–584.

3. Bui NT, Hasegawa H. Training Artificial Neural Network Using Modification of Differential Evolution Algorithm. International Journal of Machine Learning and Computing. 2015; 5(1):1–6.

4. Arce F, Zamora E, Sossa H, Barrón R. Differential evolution training algorithm for dendrite morphological neural networks. Applied Soft Computing. 2018; 68:303–313.

5. Das S, Abraham A, Konar A. Automatic clustering using an improved differential evolution algorithm. IEEE Trans. Syst., Man, Cybern. A, Syst., Humans. 2008; 38(1): 218–236.

6. El-Quliti SA, Mohamed AW. A large-scale nonlinear mixed binary goal programming model to assess candidate locations for solar energy stations: an improved real-binary differential evolution algorithm with a case study. J Comput Theor Nanosci. 2016; 13(11):7909–7921.

7. Črepinšek M, Liu SH, Mernik M. Exploration and exploitation in evolutionary algorithms: a survey. ACM Computing Surveys. 2013; 45:1–33.

8. Price K, Storn R, Lampinen J. Differential Evolution: A Practical Approach to Global Optimization. Berlin, Germany: Springer-Verlag. 2005.

9. Storn R, Price K. Home Page of Differential Evolution. Int. Comput. Sci. Inst., Berkeley, CA, USA. 2010.

10. Tanabe R, Fukunaga A. Success-History Based Parameter Adaptation for Differential Evolution. IEEE Congress on Evolutionary Computation (CEC). 2013; 71–78.

11. Gong WY, Cai ZH, Wang Y. Repairing the crossover rate in adaptive differential evolution. Applied Soft Computing. 2014; 15:149–168.

12. Zhang J, Sanderson AC. JADE: Adaptive differential evolution with optional external archive. IEEE Trans on Evolutionary Computation. 2009; 13(5):945–958.

13. Awad NH, Ali MZ, Suganthan PN, Reynolds RG. An ensemble sinusoidal parameter adaptation incorporated with L-SHADE for solving CEC2014 benchmark problems. Proceedings of the IEEE Congress on Evolutionary Computation. 2016; 2958–2965.

14. Wu Z, Yang T, Fang JA, Zhang WB. Adaptive population tuning scheme for differential evolution. Information Sciences. 2013; 223:164–191

15. Wang X, Zhao SG. Differential Evolution Algorithm with Self-Adaptive Population Resizing Mechanism. Mathematical Problems in engineering. 2013; Article ID:419372.

16. Chen L, Zhao S, Zhu W, L YY, Z WB. A self-Adaptive differential evolution algorithm for parameters identification of stochastic genetic regulatory networks with random delays. Arabian Journal for Science and Engineering. 2014; 39(2):821–835.

17. Awad NH, Ali MZ, Suganthan PN. Ensemble of parameters in a sinusoidal differential evolution with niching-based population reduction. Swarm and Evolutionary Computation. 2018; 39:141–156.

18. Wang SH, Li YZ, Yang HY. Self-adaptive differential evolution algorithm with improved mutation mode. Applied Intelligence. 2017; 47:644–658.

19. Cai YQ, Sun G, Wang T, Tian H, Chen YH, Wang JH. Neighborhood-adaptive differential evolution for global numerical optimization. Applied Soft Computing. 2017; 59:659–706.

20. Tang RL. Decentralizing and coevolving differential evolution for large-scale global optimization problems. Applied Intelligence. 2017; 47:1208–1223.

21. Mohamed AW, Mohamed AK. Adaptive guided differential evolution algorithm with novel mutation for numerical optimization. International Journal of Machine Learning and Cybernetics. 2017; 1–23.

22. He XY, Zhou YR. Enhancing the performance of differential evolution with covariance matrix self-adaptation. Applied Soft Computing. 2018; 64:227–243.

23. Mohamed AW, Suganthan PN. Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Computing. 2018; 22(10):3215–3235.

24. Cai YQ, Liao JL, Wang T, Chen YH, Tian H. Social learning differential evolution. Information Sciences. 2018; 433–444:464–509.

25. Qin AK, Huang VL, Suganthan PN. Differential evolution algorithm with strategy adaption for global numerical optimization. IEEE Trans on Evolutionary Computation. 2009; 13:398–417.

26. Wang Y, Cai Z, Zhang Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans on Evolutionary Computation. 2011; 15(1):55–66.

27. Mallipeddi R, Suganthan PN, Pan QK, Tasgetiren MF. Differential evolution algorithm with ensemble of parameters and mutation strategies. Applied Soft Computing. 2011; 11(2): 1679–1696.

28. Elsayed SM, Sarker RA, Essam DL. A self-adaptive combined strategies algorithm for constrained optimization using differential evolution. Applied Mathematics and Computation. 2014; 241:267–282.

29. Wu GH, Mallipeddi R, Suganthan PN, Wang R, Chen HK. Differential evolution with multi-population based ensemble of mutation strategies. Information Sciences. 2016; 329:329–345.

30. Yeh MF, Lu HC, Chen TH, Leu MS. Modified Gaussian barebones differential evolution with hybrid crossover strategy. Proceedings of the 2016 International Conference on Machine Learning and Cybernetics. 2017; 7–12.

31. Cui LZ, Li GH, Zhu ZX, Lin QZ, Wong KC, Chen JY, et al. Adaptive multiple-elites-guided composite differential evolution algorithm with a shift mechanism. Information Sciences. 2018; 422:122–142.

32. Wu GH, Shen X, Li HF, Chen HK, Lin AP, Suganthan PN. Ensemble of differential evolution variants. Information Sciences. 2018; 423:172–186.

33. Lin QZ, Ma YP, Chen JY, Zhu QL, Coello CAC, Wong KC, et al. An adaptive immune-inspired multi-objective algorithm with multiple differential evolution strategies. Information Sciences. 2018; 430–431:46–64.

34. Wang Y, Li HX, Huang T, Li L. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Applied Soft Computing. 2014; 18:232–247.

35. Cai YQ, Wang JH. Differential evolution with hybrid linkage crossover. Information Sciences. 2015; 320:244–287.

36. Guo SM, Yang CC. Enhancing differential evolution utilizing eigenvector-based crossover operator. IEEE Transactions on Evolutionary Computation. 2015; 19(1):31–49.

37. Xu YL, Fang JA, Zhu W, Wang XP, Zhao LD. Differential evolution using a superior-inferior crossover scheme. Comput Optim Appl. 2015; 61:243–274.

38. Zhu QL, Lin QZ, Du ZH, Liang ZP, Wang WJ, Zhu ZX, et al. A novel adaptive hybrid crossover operator for multiobjective evolutionary algorithm. Information Sciences. 2016; 345:177–198.

39. Ghosh A, Das S, Mullick SS, Mallipeddi R, Das AK. A switched parameter differential evolution with optional blending crossover for scalable numerical optimization. Applied Soft Computing. 2017; 57:329–352.

40. Qiu X, Tan KC, Xu JX. Multiple exponential recombination for differential evolution. IEEE Transactions on Cybernetics. 2017; 47(4):995–1005. doi: 10.1109/TCYB.2016.2536167 28113880

41. Li X, Yin M. Hybrid differential evolution with artificial bee colony and its application for design of a reconfigurable antenna array with discrete phase shifters. Iet Microwaves Antennas & Propagation. 2012; 6(6):1573–1582.

42. Vaisakh K, Praveena P, Sujatah KN. Differential evolution and bacterial foraging optimization based dynamic economic dispatch with non-smooth fuel cost functions. Swarm, Evolutionary, and Memetic Computing. 2013; 583–594.

43. Ponsich A, Coello CAC. A hybrid differential evolution-Tabu search algorithm for the solution of job-shop scheduling problems. Applied Soft Computing. 2013; 13(1):462–474.

44. Gu XP, Li Y, Jia JH. Feature selection for transient stability assessment based on kernelized fuzzy rough sets and memetic algorithm. Electrical Power and Energy Systems. 2015; 64:664–670.

45. Le LD, Vo D, Nguyen TH, Le AD. A hybrid differential evolution and harmony search for non-convex economic dispatch problems. IEEE Conference on Power Engineering and Optimization. 2013; 238–243.

46. Nenavath H, Jatoth RK. Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Applied Soft Computing. 2018; 62:1049–1043.

47. Suganthan PN, Hansen N, Liang J, Deb K. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. 2005.

48. Liang JJ, Qu BY, Suganthan PN, Chen Q. Problem definition and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization. 2015.

49. Awad NH, Ali MZ, Suganthan PN, Liang JJ, Qu BY. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization. 2016.

50. Derrac J, García S, Molina D, Herrera F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evolutionary Computation. 2011; 1:3–18.

51. Cheng R, Jin YC. A social learning particle swarm optimization algorithm for scalable optimization. Information Sciences. 2015; 291: 43–60.

52. Mirjalili S, Mirjalili SM, Lewis A. Grey Wolf Optimizer. Advances in Engineering Software. 2014; 69:46–61.

53. Mirjalili S, Lewis A. The Whale Optimization Algorithm. Advances in Engineering Software. 2016; 91:51–67.

