ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.18 No.1 pp.35-51
DOI : https://doi.org/10.7232/iems.2019.18.1.035

An Efficient Modified Immigrant Population Search Algorithm for Optimization over Unconstrained Spaces

Hamid-Reza Kamali, Parisa Shahnazari-Shahrezaei*
Independent Researcher
Department of Industrial Engineering, Firoozkooh Branch, Islamic Azad University, Firoozkooh, Iran
Corresponding Author, E-mail: parisa_shahnazari@yahoo.com
June 5, 2017 November 14, 2019 October 26, 2018

ABSTRACT


The Immigrant Population Search Algorithm (IPSA) is a population-based combinatorial optimization method. In this paper, the structure of this algorithm is presented for unconstrained optimization. The steps of the algorithm include initialization, immigration of new population groups, removal of undesirable population groups, and local search. After the steps of the algorithm are presented, its performance is compared with that of other meta-heuristic algorithms on a number of mathematical optimization functions. The comparison and the accompanying statistical analysis show that the proposed algorithm outperforms the other algorithms studied.





    1. INTRODUCTION

    In solving many real scientific problems, such as those arising in engineering, physics, chemistry, and molecular biology (More and Wu, 1995), the need for mathematical optimization arises. Many of these problems fall into the category of NP-hard or NP-complete problems; solving them at large scales takes a very long time and is practically impossible (Cormen et al., 2001). Integer and nonlinear problems are examples. For such problems, the practical way forward is to use heuristic methods that provide an approximate solution. These methods do not guarantee an optimal solution, but they often obtain an optimal or near-optimal one. Some heuristic methods have a general structure, are applicable to different problems, and have proven their ability in practice; these are known as metaheuristic methods. Some metaheuristics are based on a population of solutions. Such algorithms are mostly inspired by natural phenomena and solve problems by applying collective-intelligence methods (Bonabeau et al., 1999).

    Optimization problems are divided into constrained and unconstrained problems (Bader, 2009). The aim of solving a constrained problem is to optimize the objective function over the feasible region defined by the problem's constraints, whereas the aim of solving an unconstrained problem, which has no such restrictions, is simply to optimize the objective function.

    The Genetic Algorithm (GA) is inspired by the genetic mechanisms of nature (D’Ambrosio et al., 2013; Chuang et al., 2016). In this algorithm each solution is treated as a chromosome, and the population of chromosomes is improved by genetic operators in each iteration. The chance of each member contributing to the new population is proportional to its quality. Iterations continue until all members of the population become similar or the population reaches a certain level of quality. The Particle Swarm Optimization algorithm (PSO) is inspired by the behavior of animals, such as fish and birds, that live in groups (Marini and Walczak, 2015; Meng et al., 2016). The direction and speed of the members of such a group, searching the same space for food, are based on the best personal and group experience. In each iteration, after determining the direction and speed of each particle, its new location is calculated. Iterations are carried out until the group members converge on a point or their quality reaches a certain level.

    The Differential Evolution algorithm (DE) searches for solutions probabilistically (Yi et al., 2016): based on distance and direction information, a new population is created from the previous one. The algorithm belongs to the family of genetic algorithms, evolutionary programming (Kota and Jarmai, 2015), and evolutionary strategies (Sun et al., 2009). In this algorithm every member has the same chance of taking part in the production of the new population, independent of solution quality. The Artificial Bee Colony algorithm (ABC) is inspired by the collective behavior of bees (Yurtkuran and Emel, 2015; Jadhav and Bamane, 2016). In this algorithm, bees are divided into three categories, worker, onlooker, and scout bees, which together intelligently search the solution space for areas with more nectar. They improve the solution step by step through a neighborhood-search mechanism, discarding poor results.

    The Ant Colony Optimization algorithm (ACO) is inspired by the natural behavior of ant colonies (Guan and Lin, 2016; Gilani and Sattarvand, 2016). In this algorithm the ants search for a shorter path to the food. The algorithm uses a factor called pheromone: each ant, after crossing a path, leaves an amount of pheromone on it, so the amount of pheromone indicates how frequently a path was chosen in earlier solutions. This factor, along with the visible quality of a path, influences the path selection of other ants. The Shuffled Frog Leaping Algorithm (SFLA) is inspired by the collective behavior of groups of frogs looking for food (Lei and Guo, 2015; Luo et al., 2015). The frogs are divided into several subgroups that carry out local search independently. Frogs in a subgroup can influence the other frogs in the same subgroup and thereby evolve. After this evolution, the subgroups are merged, so that solutions are optimized in the global scope. New subgroups are then created, and this combination of local and global search continues until convergence is met.

    The Imperialist Competitive Algorithm (ICA) is inspired by the social behavior of imperialist countries (Sharifi and Mojallali, 2015; Chen and Chen, 2016). In this algorithm each imperialist country has several colony countries, and each colony improves itself toward the imperialist country by learning. Each country is a solution, and a country's power equals the objective value of the corresponding solution. Imperialist countries compete with each other to capture more colonies, and the competition continues until just one imperialist remains. The Cuckoo Optimization Algorithm (COA) is inspired by the egg-laying process and population migration of cuckoos (Rajabioun, 2011; Mellal and Williams, 2015). Every egg or bird represents a solution, and its quality is the objective function value of that solution. The algorithm improves its solutions by selecting better eggs, removing weak cuckoos, grouping cuckoos, and migrating cuckoos to better living areas. The Immigrant Population Search Algorithm (IPSA) is inspired by the social behavior of human population groups seeking the best area to live in (Kamali et al., 2015). This method is based on a population of solutions and was originally developed for optimization over constrained spaces.

    This article presents the Immigrant Population Search Algorithm for unconstrained optimization and demonstrates its ability in this field for the first time. The algorithm's structure for unconstrained optimization differs somewhat from its structure for constrained optimization; both perform well in their own domains, but there are some major differences in their stages. Moreover, constrained problems were previously used to demonstrate the efficiency of the algorithm over constrained spaces, whereas here several mathematical functions and different problem instances are applied to demonstrate its performance over unconstrained spaces. Section 2 presents the general idea behind the proposed algorithm. Section 3 explains the steps of the proposed algorithm, and Section 4 compares its performance with that of some other metaheuristics. Finally, Section 5 describes the obtained results and suggests some topics for future research.

    2. THE IDEA BEHIND THE PROPOSED ALGORITHM'S DEVELOPMENT

    The land where humans live has geographically varied conditions such as temperature, rainfall, access to water, wind, intensity of sunlight, fertility of the land, access to food, natural defenses, and more. The combined effect of all living conditions in each area determines the quality of that region. Of course, these conditions do not contribute equally to the quality of a living area: some have minor impacts and some have greater ones. As can be observed across different regions of a country, the accumulation of human groups is greater in some places and smaller in others. This is because there is a direct relationship between the quality of a living area and the accumulation of the human population living in it: a larger human population lives in a neighborhood of higher quality than in areas of lower quality.

    The settlement of human populations in a land is the result of a process of searching, observing, and comparing. When a human population discovers a new land, having no previous knowledge of it and not knowing its finest areas, the groups distribute themselves randomly. Later, when new groups enter the territory, they use the experience of previous groups in choosing habitats: by assessing and comparing the quality of existing habitats, they choose one of the previous populations' habitats and settle in its neighborhood. This selection is partly random, but the quality of life in a zone is influential, meaning that higher-quality living areas are more likely to be chosen by new groups. All newly arrived human groups follow this process.

    Another factor that affects the accumulation of human groups in a territory is the process of death. This process randomly removes some of the populations that have settled in the land, but the probability of a population's death depends on the quality of its living area: a group of people living in an inferior area runs a greater risk of dying. On the other hand, populations that have settled in a living area try to improve their situation; they look for a better living area in their neighborhood and, if they find one, settle in it. This cycle of new human groups arriving and existing groups dying is repeated constantly, and eventually all human groups settle in the finest area.

    3. IMMIGRANT POPULATION SEARCH ALGORITHM (IPSA)

    In this algorithm, the solution space (the land) contains different solutions (living areas). Each solution (living area) has several dimensions (living conditions), and for each solution the objective function (the quality of the living area) is calculated. X_j^i denotes the jth dimension of the ith solution, where i = 1, 2, ..., n and j = 1, 2, ..., m; n is the size of the solution population and m is the number of problem dimensions. Every solution lies between the bounds of the solution space (the lower and upper bounds of the kth dimension are L_k and U_k, respectively), and FX^i = f(X^i) is the objective function value of solution X^i.

    The proposed algorithm has four parameters: n is the size of the initial population and of each new population of solutions, MI is the number of iterations of the algorithm, LI is the number of local search repetitions, and ε is the final ratio of the local search range. During the different stages of the algorithm, C is the loop counter, RN is the ratio of the neighborhood range, RL is the ratio of the local search range, XRL is the per-iteration reduction factor of the local search range, X is the current solution population, Y is the new solution population, FX holds the objective values of the current population, and FY holds the objective values of the new population. The algorithm's overall flowchart is shown in Figure 1, and its stages are described below.

    3.1 Initialization

    In the first phase of the algorithm, population groups are placed on the empty land. That is, given the lower and upper bounds of all the problem dimensions, the initial population members X^i are initialized randomly in the solution space. After that, the values of RN, RL, and XRL are initialized according to Algorithm 1, where U(0, 1) is a real random number between 0 and 1. XRL is set so that after RL is multiplied by it MI times, RL equals the chosen final value ε.

    Algorithm 1:

    1. For i = 1 to n do
       1.1 For k = 1 to m do
           1.1.1 λ = U(0, 1)
           1.1.2 X_k^i = L_k + λ(U_k - L_k)
       1.1.3 End for
       1.2 FX^i = f(X^i)
    1.3 End for
    2. RN = 1, RL = 1, XRL = ε^(1/MI)
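    As an illustration, Algorithm 1 might be rendered in Python as follows (a minimal sketch, not the authors' original Delphi implementation; the objective function f and the bound lists L and U are placeholders supplied by the caller):

```python
import random

def initialize(n, m, L, U, f, eps, MI):
    """Algorithm 1 (sketch): random initialization of n solutions in the box
    [L[k], U[k]] for each dimension k, plus the range coefficients."""
    X, FX = [], []
    for _ in range(n):
        # Step 1.1: each dimension k is drawn uniformly between its bounds.
        x = [L[k] + random.random() * (U[k] - L[k]) for k in range(m)]
        X.append(x)
        FX.append(f(x))          # Step 1.2: evaluate the objective
    RN, RL = 1.0, 1.0            # Step 2: initial range ratios
    XRL = eps ** (1.0 / MI)      # chosen so RL reaches eps after MI products
    return X, FX, RN, RL, XRL
```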

    3.2 Migration of New Population Groups

    At this stage of the algorithm, n new population groups enter the land. Each of them chooses where to live (initializing Y^i) by selecting one of the current population groups (living at X^t) according to the roulette-wheel rule and settling in its neighborhood. If FX^i is the objective value of solution X^i, then the selection probability P_i for neighboring is calculated with Equation (1) when maximizing the objective function, and with Equation (2) when minimizing it. An overview of the locating process and the establishment of a new population is shown in Figure 2.

    P_i = \frac{FX_i}{\sum_{j=1}^{n} FX_j}
    (1)

    MX = \max_i \{FX_i\}, \quad P_i = \frac{MX - FX_i}{\sum_{j=1}^{n} (MX - FX_j)}
    (2)
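    As a sketch, the selection probabilities of Equations (1) and (2) could be computed as follows (assuming nonnegative objective values in the maximization case; the uniform fallback for identical values is our addition, not part of the paper):

```python
def selection_probs(FX, maximize=True):
    """Roulette-wheel probabilities per Equations (1) and (2)."""
    if maximize:
        total = sum(FX)                          # Equation (1)
        return [fx / total for fx in FX]
    MX = max(FX)                                 # Equation (2)
    total = sum(MX - fx for fx in FX)
    if total == 0:                               # all values equal: pick uniformly
        return [1.0 / len(FX)] * len(FX)
    return [(MX - fx) / total for fx in FX]
```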

    To establish new population groups in the neighborhood of current population groups, Algorithm 2 is used. Here Random{1, 2, ..., m} is an integer random number between 1 and m, λ is a real random number between 0 and 1, and the random value (2λ - 1) therefore lies between -1 and 1. The algorithm selects a solution X^t from the current population by the roulette-wheel rule (row 1.1 of Algorithm 2) and then changes its gth dimension to generate a new solution Y^i. If Y_g^i falls outside the upper or lower limits of the problem, it is set to the relevant bound (rows 1.7 and 1.8). Then the objective value FY^i is calculated.

    Algorithm 2:

    1. For i = 1 to n do
       1.1 Select X^t from X by the roulette wheel
       1.2 Y^i = X^t
       1.3 g = Random{1, 2, ..., m}
       1.4 R = RN(U_g - L_g)
       1.5 λ = U(0, 1)
       1.6 Y_g^i = X_g^t + R(2λ - 1)
       1.7 If Y_g^i < L_g then Y_g^i = L_g
       1.8 If Y_g^i > U_g then Y_g^i = U_g
       1.9 FY^i = f(Y^i)
       1.10 End for
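    Algorithm 2 might be sketched as follows (a hypothetical Python rendering; the selection probabilities `probs` are assumed to come from Equation (1) or (2)):

```python
import random

def migrate(X, n, m, L, U, RN, f, probs):
    """Algorithm 2 (sketch): each of n new groups picks a current solution by
    roulette wheel, copies it, and perturbs one randomly chosen dimension
    within the current neighborhood range ratio RN."""
    Y, FY = [], []
    for _ in range(n):
        t = random.choices(range(len(X)), weights=probs)[0]  # roulette wheel
        y = X[t][:]                          # settle near the chosen habitat
        g = random.randrange(m)              # dimension to perturb
        R = RN * (U[g] - L[g])
        lam = random.random()
        y[g] = X[t][g] + R * (2 * lam - 1)
        y[g] = min(max(y[g], L[g]), U[g])    # clamp to the problem bounds
        Y.append(y)
        FY.append(f(y))
    return Y, FY
```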

    3.3 Removal of Weak Populations

    With the arrival of new populations, the number of settled groups increases, which increases the amount of calculation and slows the algorithm. To prevent this, a death process is introduced that removes a number of solutions so that the population size remains constant. The solution population Z is defined as the union of the populations X and Y, with 2n members, and FZ^i is the objective value of its ith member. First, the most desirable solution in Z (the one with the best FZ^i) is chosen, removed from Z, and added to the initially empty population W. Then, n - 1 times, a solution is chosen from Z according to the roulette-wheel rule, removed from Z, and added to W. Finally, the population X is set equal to W, and W is discarded. As a result, the size of the solution population X remains n.
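    The death process described above might be sketched as follows (a hypothetical Python rendering; the maximization branch assumes nonnegative objectives, and the uniform fallback for all-equal fitness is our addition):

```python
import random

def remove_weak(X, FX, Y, FY, n, maximize=False):
    """Death process (sketch): merge X and Y into Z (2n members), always keep
    the best member, then fill the other n-1 survivor slots by roulette wheel
    so that better solutions are more likely to survive."""
    Z = list(zip(X + Y, FX + FY))
    key = lambda i: Z[i][1]
    best = max(range(len(Z)), key=key) if maximize else min(range(len(Z)), key=key)
    W = [Z.pop(best)]                        # the best solution always survives
    for _ in range(n - 1):
        fs = [fz for _, fz in Z]
        if maximize:
            weights = fs
        else:
            MX = max(fs)
            weights = [MX - fz for fz in fs]
        if sum(weights) == 0:                # all remaining equal: uniform pick
            weights = [1.0] * len(fs)
        i = random.choices(range(len(Z)), weights=weights)[0]
        W.append(Z.pop(i))
    return [x for x, _ in W], [fz for _, fz in W]
```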

    3.4 Local Search to Find Better Places to Live

    At this stage, each population group searches around its area to find a higher-quality living area; in other words, each solution searches the local area around itself. For this purpose, the algorithm can follow one of two local search policies. The first policy is to perform the local search for every solution in the current population; the second is to perform it only for the finest solution of the current population. The first policy yields better solutions than the second, but consumes more computation time.

    The local search process is shown in Algorithm 3. In this algorithm, X^t is the current solution for which the local search is performed, Random{1, 2, ..., m} is an integer random number between 1 and m, and U(0, 1) is a real random number between 0 and 1. If the neighbor solution Y^j falls outside the upper or lower limits of the problem, the relevant bound is used as its value. The objective function is then evaluated for the solution. If the obtained solution Y^j is better than the current solution X^t, it replaces it; otherwise it is discarded (row 1.8 applies to maximization problems and row 1.9 to minimization problems). This process is repeated LI times.

    Algorithm 3:

    1. For j = 1 to LI do
       1.1 Y^j = X^t
       1.2 g = Random{1, 2, ..., m}
       1.3 R = RL(U_g - L_g)
       1.4 λ = U(0, 1)
       1.5 Y_g^j = X_g^t + R(2λ - 1)
       1.6 If Y_g^j < L_g then Y_g^j = L_g
       1.7 If Y_g^j > U_g then Y_g^j = U_g
       1.8 (Max) If FX^t < f(Y^j) then X^t = Y^j, FX^t = f(Y^j)
       1.9 (Min) If FX^t > f(Y^j) then X^t = Y^j, FX^t = f(Y^j)
       1.10 End for
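    Algorithm 3 might be sketched as follows (a minimal Python rendering under the same assumptions as the earlier sketches):

```python
import random

def local_search(xt, fxt, m, L, U, RL, LI, f, maximize=False):
    """Algorithm 3 (sketch): LI single-dimension perturbations around xt
    within the local search range ratio RL, keeping any improvement."""
    for _ in range(LI):
        y = xt[:]
        g = random.randrange(m)              # dimension to perturb
        R = RL * (U[g] - L[g])
        lam = random.random()
        y[g] = xt[g] + R * (2 * lam - 1)
        y[g] = min(max(y[g], L[g]), U[g])    # clamp to the problem bounds
        fy = f(y)
        if (maximize and fy > fxt) or (not maximize and fy < fxt):
            xt, fxt = y, fy                  # accept the improvement
    return xt, fxt
```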

    3.5 Updating Coefficients

    The solutions should converge as the algorithm proceeds. For this purpose, the ratio of the neighborhood range should be reduced as the algorithm progresses. To increase the focus on an area and find a more accurate solution, the local search range ratio should also be reduced. Test calculations show that reducing the neighborhood range linearly and the local search range geometrically gives better results. These coefficients are reduced according to Equation (3), where C is the iteration counter of the algorithm, starting from 1 and continuing up to MI.

    RN = \frac{MI - C + 1}{MI}, \quad RL = RL \times XRL
    (3)
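    Equation (3) amounts to a one-line update per coefficient; a sketch:

```python
def update_coefficients(C, MI, RL, XRL):
    """Equation (3): RN shrinks linearly with the iteration counter C,
    while RL shrinks geometrically by the fixed factor XRL."""
    RN = (MI - C + 1) / MI
    RL = RL * XRL
    return RN, RL
```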

    3.6 Convergence of the Algorithm

    Looking at the structure of this algorithm, three causes of solution dispersion can be identified. The algorithm works in such a way that the effect of each factor on the dispersion of solutions is reduced as the algorithm proceeds, as follows.

    • The initial population: although the initial population is scattered across the space, as the algorithm progresses the probability of placing new solutions around the best previous solutions increases, so they finally concentrate around the optimal solution.

    • Finding neighbors: the neighborhood range ratio (RN) declines as the algorithm progresses, so later in the run the neighbors of each solution do not stray far from it and remain concentrated around it.

    • Local search: the local search range ratio (RL) declines as the algorithm progresses, so later in the run the local search does not move far from the previous solution.

    Given these three factors and the algorithm's mechanisms for reducing their impact, it can be expected that as the algorithm progresses, the solutions accumulate around the best part of the population (the optimal solution).

    3.7 Explanations of Algorithm Stages

    • Initialization: In this stage, solutions are distributed over the entire solution space, and a larger population of solutions increases the chance of reaching the optimal solution or its neighborhood. The larger the population, the more likely it is that some solution lies near the optimum.

    • Migration of new population groups: In this stage, concentration around the points holding more desirable solutions increases, and the population is centralized around the better points (global or local optima). This raises the chance of reaching the optimal solution; in effect, the space close to the existing solutions is searched. If the search is performed in the vicinity of the current better solutions, the chance of reaching better solutions is increased. Without this stage (if new solutions were distributed completely at random over the solution space), the probability of finding better solutions would decrease, and the proposed algorithm would either discover better solutions later or never reach some of them.

    • Removal of weak populations: To keep the size of the solution population stable, a number of solutions must be removed at each stage. Removing the weaker solutions reduces the risk of eliminating near-optimal solutions, so this stage is not expected to harm the algorithm's ability to reach the optimal solution. It keeps the current population size constant and does not allow the problem to grow. If this stage were omitted, the population would grow as the algorithm proceeds, and after several stages the computational time would increase until continuing the algorithm became impossible. Moreover, when removing members of the population, desirable solutions are less likely to be removed than inferior ones; without this weighting, desirable solutions would be removed sooner and the efficiency of the algorithm would decrease.

    • Local search to find better places to live: In this stage, if a solution is near an optimum of the solution space, it is led toward that optimum until the distance between them becomes very small. This is the same kind of local search used in many metaheuristic algorithms to raise the quality of the solutions generated in each phase. In other words, if a better solution exists around a solution, the better solution is adopted. Without this stage, the algorithm would still move toward better solutions, but more slowly.

    • Updating coefficients: By decreasing the coefficients "ratio of neighborhood range" and "local search range ratio" as the algorithm develops, the algorithm tries to concentrate more on the desirable points already found. In other words, the diversification property of the algorithm is reduced and its intensification property is increased. In metaheuristic algorithms, two concerns are emphasized: intensification and diversification. Intensification tries to find better solutions within the confines of the current solution space, while diversification attempts to search unexplored parts of the solution space to reach better solutions. In the early steps of a metaheuristic, the emphasis is on diversification; as the algorithm proceeds, intensification is emphasized. This stage of the proposed algorithm does the same: at first it searches a larger space, and in the final steps it focuses on the existing space to find better solutions. If this stage were removed, less of the space would be explored in the early steps, and better solutions would not be found within the confines of the current space; in other words, the probability of reaching better solutions would decrease.

    4. THE RESULTS

    The efficiency of the proposed algorithm is evaluated as follows. First, an example optimization run of the algorithm is shown. Then, the algorithm's efficiency is compared with a few metaheuristic algorithms using a small number of local searches. Finally, its efficiency is compared with a few other metaheuristic algorithms using a large number of local searches. The programs in this section were implemented in Borland Delphi 7.

    4.1 Presentation of the Proposed Optimization Algorithm

    In this section a two-dimensional mathematical function, whose formula is shown in Equation (4), is minimized. The optimum point of this function is (9.038991, 8.668188), with an objective function value of -18.554721. The four parameters of the algorithm are set to MI = 40, n = 10, LI = 10, ε = 1E-5. The local search is performed only for the best point.

    f_1(X_1, X_2) = X_1 \sin(4X_1) + 1.1 X_2 \sin(2X_2), \quad 0 \le X_1, X_2 \le 10
    (4)
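    As a quick numerical check, Equation (4) can be coded directly; evaluating it at the reported optimum (9.038991, 8.668188) reproduces the stated value of about -18.554721:

```python
import math

def f1(x1, x2):
    """Equation (4): f1(X1, X2) = X1*sin(4*X1) + 1.1*X2*sin(2*X2),
    defined on 0 <= X1, X2 <= 10."""
    return x1 * math.sin(4 * x1) + 1.1 * x2 * math.sin(2 * x2)
```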

    Figures 3 to 6 show the solutions of the algorithm at different iterations. In these figures, the small circles are the solution population, the diamond is the best solution, and the large circle shows the optimum of the function. Figure 3 shows the start of the run, with the solution population randomly scattered over the solution space. After the first iteration (30 objective function evaluations), Figure 4 is obtained; here the best solution has moved close to the function's optimum. After the 14th iteration (290 objective function evaluations), as shown in Figure 5, the best solution has reached the optimal point. After the 40th iteration (810 objective function evaluations), all the solutions have converged at the optimal point.

    4.2 Algorithm’s Performance Measurement with Low Local Search

    To assess the ability of the proposed algorithm (IPSA) with a small number of local searches, its performance is compared with the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Artificial Bee Colony algorithm (ABC). For all algorithms, the population size and the total number of objective function evaluations were chosen to be equal, so that the algorithms compete under equal conditions.

    Since the quality of the results obtained by these algorithms depends heavily on their initial population, the same initial population was used for all algorithms in this comparison. The first function is two-dimensional, with the formula of Equation (4); the second function is the ten-dimensional Rastrigin function, with the formula of Equation (5). The Rastrigin function attains its minimum at the origin for any dimension, with a minimum value of zero. The total number of iterations is 50 for the first function and 1000 for the second.

    To select proper values for the algorithms' parameters, several values were considered for each parameter, and the objective function was evaluated for all possible combinations. The best combination was then chosen. The candidate values were as follows:

    • GA: crossover rate = 0.7, 0.8, 0.9; mutation rate = 0.1, 0.2, 0.3

    • PSO: cognitive = 1.5, 2.0, 2.5; social = 4 - cognitive; inertia ratio = 0.9, 1.0, 1.1; maximum rate = 10%* Change range, 20%* Change range, 30%* Change range

    • DE: scaling factor = 0.4, 0.5, 0.6; crossover constant = 0.7, 0.8, 0.9

    • ABC: number of scout bees = 1, 5, 10

    • IPSA: number of local search repetitions = 5, 10, 20; local search domain final ratio = 1E -1, 1E-5, 1E-10

    The best obtained combination is as follows:

    • GA: Population = 40, 4-bit binary coding with 6-bit integer, uniform crossover, crossover rate = 0.8, mutation rate = 0.3

    • PSO: Population = 40, cognitive and social parameters = 2, inertia ratio = 1.1, maximum rate = 10% of change range

    • DE: Population = 40, scaling factor = 0.5, crossover constant = 0.8

    • ABC: The number of worker bees = Number of onlooker bees = 20, number of scout bees = 1, the maximum number of solution being constant = number of worker bees multiplied by the dimension of the problem (Karaboga and Akay, 2009)

    • IPSA: Number of current solutions = number of new solutions = 20, number of local search repetitions = 20, local search domain final ratio = 1E-10, and local search for the best solution

    f_2(X_1, X_2, \ldots, X_{10}) = 100 + \sum_{i=1}^{10} \left( X_i^2 - 10 \cos(2\pi X_i) \right), \quad -5.12 \le X_i \le 5.12, \; i = 1, 2, \ldots, 10
    (5)
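    Equation (5) can likewise be coded directly; at the origin it evaluates to zero, the stated minimum:

```python
import math

def f2(x):
    """Equation (5): the 10-dimensional Rastrigin function,
    defined on -5.12 <= x_i <= 5.12, with minimum value 0 at the origin."""
    return 100 + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)
```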

    Table 1 reports the objective function value obtained by each algorithm and the iteration at which each algorithm first reached a good solution, where a good solution is one whose objective value is within 0.01 of the optimal value. As Table 1 shows, the proposed algorithm performs better than the other algorithms, both in the quality of the solution and in the speed of reaching it.

    In IPSA, the more local search is applied to the best solution, the more searches are performed to refine it, and the better the resulting solution is expected to be. To investigate the algorithm's sensitivity to the final ratio of the local search range, several values were considered for this parameter, and their effect on the solution obtained when optimizing Equation (5) is given in Table 2. As can be seen, the most suitable value for the final ratio of the local search range is 1E-10.

    The proposed algorithm is then compared with the other algorithms on the second function (the 10-dimensional Rastrigin function). The settings and parameters are the same as in the previous comparison, and the total number of iterations is 20. The progress of the best solution during the run of each algorithm is shown in Figure 7. As can be seen, the proposed algorithm reaches the final solution faster and gives a better end result.

    To evaluate the necessity of each stage of the IPSA algorithm and its effect on reaching the final solution, the fully executed algorithm is compared with versions lacking some stages. In this evaluation, the 10-dimensional Rastrigin function (with the same characteristics as before) is used. W0 is the case in which all stages of the algorithm are executed. W1 is the case in which all stages except "Migration of new population groups" and "Removal of weak populations" are executed. W2 is the case in which the stage "Local search to find better places to live" is omitted, and W3 is the case in which "Updating coefficients" is eliminated. These cases are displayed and compared in Figure 8. As shown, omitting any stage lessens the efficiency of the IPSA algorithm.

    4.3 Algorithm’s Performance Measurement with Large Local Search

    To evaluate the performance of the proposed algorithm with a high number of local searches, 50 functions that are difficult to optimize because of their many local optima are used. The functions (Karaboga and Akay, 2009), together with their ranges, dimensions, characteristics (U: Unimodal, M: Multimodal, S: Separable, N: Non-Separable), and minima, are shown in Table 3. The GA, PSO, DE, and ABC algorithms were run on these 50 functions (Karaboga and Akay, 2009), and their results are compared here with those of the proposed algorithm. The specifications and parameters of the proposed algorithm in this section are: current population size = new population size = 50, number of local search repetitions = 600, final ratio of the local search range = 1E-17, and local search performed for the entire population. The population size of this algorithm is set to match that of the other four algorithms. The total number of iterations is chosen so that the number of objective function evaluations is 25 million, the same as for the other four algorithms, so that the algorithm competes on equal terms. The proposed algorithm was run 30 times on each function for minimization, and the means and standard deviations, together with the results of the other four algorithms, are summarized in Table 3. In this table, for convenience, values less than 1E-12 are listed as zero.

    To determine whether the performance of the proposed algorithm differs significantly from that of the other four algorithms (GA, PSO, DE, and ABC), Student’s t-test for comparing means (Montgomery et al., 2009) is applied. Tables 4 to 7 contain the relevant statistical data.

    The columns of these tables are the name of the studied function, the t-statistic, the p value of the t-test, the rank of the p value, the reverse of the rank of the p value, the new value of α, and finally the test decision (which identifies the better algorithm). Since the algorithms are stochastic and rerunning them may produce different results, direct comparison of α with the p value is avoided in this section; instead, the Bonferroni correction (Bland and Altman, 1995) is used. In this method, the new α is obtained by dividing the original α (5%) by the reverse of the rank of the p value. If the new α is greater than the p value, the performance of the two algorithms is concluded to differ; otherwise it is assumed equal.
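    The correction described above can be sketched as follows; the p values are made up for illustration. (Dividing α by the reverse of the rank, so the smallest p value faces the strictest threshold, corresponds to the step-down Holm–Bonferroni variant of the correction.)

```python
def bonferroni_decisions(p_values, alpha=0.05):
    """Flag each test whose p value falls below alpha divided by the
    reverse of its rank (rank 1 = smallest p value), as described above."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    different = [False] * m
    for rank, i in enumerate(order, start=1):
        reverse_rank = m - rank + 1           # reverse of the rank of p
        new_alpha = alpha / reverse_rank      # corrected significance level
        different[i] = new_alpha > p_values[i]
    return different

# Hypothetical p values from four function-wise t-tests
print(bonferroni_decisions([0.001, 0.04, 0.30, 0.012]))
# [True, False, False, True]
```

    Note that with this correction, a raw p value of 0.04 is no longer declared significant, because its corrected threshold is 0.05/2 = 0.025.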

    Table 4 (GA versus IPSA) shows that, among the 50 examined functions, the two algorithms perform equally in 15 cases, GA performs better in 2 cases, and IPSA in 33 cases. Table 5 (PSO versus IPSA) shows that the two algorithms perform equally in 32 cases and IPSA performs better in 18 cases. Table 6 (DE versus IPSA) shows that the two algorithms perform equally in 38 cases, IPSA performs better in 9 cases, and DE in 3 cases. Table 7 (ABC versus IPSA) shows that the two algorithms perform equally in 36 cases, IPSA performs better in 11 cases, and ABC in 3 cases. Figure 9 compares these results. In general, it can be concluded that the performance of IPSA is either better than or equal to that of the other algorithms.

    For a final evaluation of the proposed algorithm, the mean absolute errors of the optimized functions are compared. Since the minimum of function (47) is not known, it is excluded from this calculation. Figure 10 compares the mean absolute error of the proposed algorithm with those of the other four algorithms; because of the large differences between the values, a logarithmic scale is used. The graph shows the superiority of the proposed algorithm over the other algorithms.
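    The mean absolute error used in this comparison can be sketched as below; the best-found values and known minima are made-up numbers, not results from Table 3.

```python
def mean_absolute_error(best_found, known_minima):
    """Average |obtained value - known minimum| over the test functions
    whose minimum is known (function (47) is excluded in the paper)."""
    errors = [abs(b, ) if False else abs(b - m) for b, m in zip(best_found, known_minima)]
    return sum(errors) / len(errors)

# Hypothetical best-found values on three functions vs. their known minima
print(mean_absolute_error([0.0, 1.5, -2.0], [0.0, 1.0, -2.5]))  # 0.3333...
```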

    5. CONCLUSION

    In this article, the Immigrant Population Search Algorithm, which is inspired by the pattern of human population migration, is applied to unconstrained optimization problems for the first time. The steps of the algorithm are initialization, migration of new population groups, removal of weak solutions, and local search to find better solutions. To evaluate the performance of the proposed algorithm, it is compared with four other metaheuristics: the Genetic Algorithm, Particle Swarm Optimization, Differential Evolution, and the Artificial Bee Colony algorithm. Two running modes are used in this comparison: one with a low number of local searches and one with a large number. Over 30 runs on 49 hard functions, the mean absolute errors of GA, PSO, DE, ABC, and the proposed algorithm (IPSA) are, respectively, 4251.25, 175.31, 63.71, 0.19, and 0.025.

    Acknowledgement

    This paper has been extracted from part of the dissertation of the first author (Dr. Hamid-Reza Kamali), supervised by Dr. Ahmad Sadegheih, with some modifications to the stages of the proposed algorithm and to the mathematical functions and problem instances used to demonstrate its efficiency. The authors thank Dr. Ahmad Sadegheih for his valuable guidance and comments on the dissertation.

    Figure

    IEMS-18-1-35_F1.gif

    The flowchart of the different steps of the proposed algorithm.

    IEMS-18-1-35_F2.gif

    Finding a location in the neighborhood of the current population by the new population.

    IEMS-18-1-35_F3.gif

    The position of obtained results in iteration 0.

    IEMS-18-1-35_F4.gif

    The position of obtained results in iteration 1.

    IEMS-18-1-35_F5.gif

    The position of obtained results in iteration 14.

    IEMS-18-1-35_F6.gif

    The position of obtained results in iteration 40.

    IEMS-18-1-35_F7.gif

    Trend of changes in the best solution of the solution population.

    IEMS-18-1-35_F8.gif

    The performance of the IPSA algorithm when some stages are omitted.

    IEMS-18-1-35_F9.gif

    Comparison of the number of wins of IPSA and the four other metaheuristics.

    IEMS-18-1-35_F10.gif

    Comparison of the mean absolute errors of IPSA and the four other metaheuristics.

    Table

    Comparing the performance of the proposed algorithm with 4 other metaheuristics in the case of a low number of local searches

    The results of the sensitivity analysis of IPSA with respect to the local search domain final ratio for equation (5)

    A comparison of the optimized results of the proposed algorithm and 4 other metaheuristics

    Comparison of the performance of IPSA with GA

    Comparison of the performance of IPSA with PSO

    Comparison of the performance of IPSA with DE

    Comparison of the performance of IPSA with ABC

    REFERENCES

    1. Atashpaz-Gargari, E. and Lucas, C. (2007), Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition, Proceedings of the IEEE Congress on Evolutionary Computation 2007, Singapore, 4661-4667.
    2. Bader, B. W. (2009), Constrained and unconstrained optimization, Comprehensive Chemometrics, 1, 507-545.
    3. Bland, J. M. and Altman, D. G. (1995), Multiple significance tests: The Bonferroni method, BMJ: British Medical Journal, 310, 170.
    4. Bonabeau, E., Dorigo, M., and Theraulaz, G. (1999), Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, New York, NY.
    5. Chen, C. H. and Chen, W. H. (2016), Bare-bones imperialist competitive algorithm for a compensatory neural fuzzy controller, Neurocomputing, 173, 1519-1528.
    6. Chuang, Y. C., Chen, C. T., and Hwang, C. (2016), A simple and efficient real-coded genetic algorithm for constrained optimization, Applied Soft Computing, 38, 87-105.
    7. Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. (2001), Introduction to Algorithms (2nd ed.), MIT Press, Cambridge, MA.
    8. D’Ambrosio, D., Spataro, W., Rongo, R., and Iovine, G. G. R. (2013), Genetic Algorithms, Optimization, and Evolutionary Modeling, Academic Press, San Diego, CA.
    9. Gilani, S. O. and Sattarvand, J. (2016), Integrating geological uncertainty in long-term open pit mine production planning by ant colony optimization, Computers & Geosciences, 87, 31-40.
    10. Guan, J. and Lin, G. (2016), Hybridizing variable neighborhood search with ant colony optimization for solving the single row facility layout problem, European Journal of Operational Research, 248(3), 899-909.
    11. Jadhav, H. T. and Bamane, P. D. (2016), Temperature dependent optimal power flow using g-best guided artificial bee colony algorithm, International Journal of Electrical Power & Energy Systems, 77, 77-90.
    12. Kamali, H. R., Sadegheih, A., Vahdat-Zad, M. A., and Khademi-Zare, H. (2015), Immigrant population search algorithm for solving constrained optimization problems, Applied Artificial Intelligence, 29, 243-258.
    13. Karaboga, D. and Akay, B. (2009), A comparative study of artificial bee colony algorithm, Applied Mathematics and Computation, 214(1), 108-132.
    14. Kota, L. and Jarmai, K. (2015), Mathematical modeling of multiple tour multiple traveling salesman problem using evolutionary programming, Applied Mathematical Modelling, 39(12), 3410-3433.
    15. Lei, D. and Guo, X. (2015), A shuffled frog-leaping algorithm for hybrid flow shop scheduling with two agents, Expert Systems with Applications, 42(23), 9333-9339.
    16. Luo, J., Li, X., Chen, M. R., and Liu, H. (2015), A novel hybrid shuffled frog leaping algorithm for vehicle routing problem with time windows, Information Sciences, 316, 266-292.
    17. Marini, F. and Walczak, B. (2015), Particle swarm optimization (PSO): A tutorial, Chemometrics and Intelligent Laboratory Systems, 149, 153-165.
    18. Mellal, M. A. and Williams, E. J. (2015), Cuckoo optimization algorithm with penalty function for combined heat and power economic dispatch problem, Energy, 93, 1711-1718.
    19. Meng, A., Li, Z., Yin, H., Chen, S., and Guo, Z. (2016), Accelerating particle swarm optimization using crisscross search, Information Sciences, 329, 52-72.
    20. Montgomery, D. C., Runger, G. C., and Hubele, N. F. (2009), Engineering Statistics, John Wiley & Sons.
    21. More, J. J. and Wu, Z. (1995), Global smoothing and continuation for large-scale molecular optimization, Argonne National Laboratory, Argonne, IL.
    22. Rajabioun, R. (2011), Cuckoo optimization algorithm, Applied Soft Computing, 11(8), 5508-5518.
    23. Sharifi, M. A. and Mojallali, H. (2015), A modified imperialist competitive algorithm for digital IIR filter design, Optik-International Journal for Light and Electron Optics, 126(21), 2979-2984.
    24. Sun, Y., Wierstra, D., Schaul, T., and Schmidhuber, J. (2009), Efficient natural evolution strategies, Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, Montreal, Québec, Canada, 539-546.
    25. Yi, W., Zhou, Y., Gao, L., Li, X., and Mou, J. (2016), An improved adaptive differential evolution algorithm for continuous optimization, Expert Systems with Applications, 44, 1-12.
    26. Yurtkuran, A. and Emel, E. (2015), An adaptive artificial bee colony algorithm for global optimization, Applied Mathematics and Computation, 271, 1004-1023.