ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.13 No.1 pp.43-51
DOI : https://doi.org/10.7232/iems.2014.13.1.043

# Application of Adaptive Particle Swarm Optimization to Bi-level Job-Shop Scheduling Problem

Chompoonoot Kasemset*
Department of Industrial Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai, Thailand
Corresponding Author, E-mail: chompoonoot.kasemset@cmu.ac.th
Received May 9, 2013; Revised November 1, 2013; Accepted January 15, 2014

## ABSTRACT

This study presents an application of adaptive particle swarm optimization (APSO) to the bi-level job-shop scheduling problem (JSP). The test problem is a 10×10 JSP (ten jobs and ten machines) with three bottleneck machines, formulated as a bi-level model. APSO is used to solve the test problem, and its results are compared with those of basic PSO. On the test problem, the results from APSO differ significantly from those of basic PSO in terms of the upper-level objective value and the iteration number at which the best solution is first identified, but there is no significant difference in the lower-level objective value. These results confirm that the quality of solutions from APSO is better than that of basic PSO. Moreover, APSO can be applied directly to a new problem instance without a parameter-selection exercise.

## 1. INTRODUCTION

The bi-level structure is the simplest class of multilevel formulation. Many planning and scheduling problems in supply chain management and production management can be modeled with this structure. A bi-level scheduling problem arises when the decision on how to schedule jobs considers two objective levels, upper and lower, in a hierarchical structure; the upper-level decision generally influences the lower-level decision. Bi-level scheduling problems appear in many settings, for example: multi-level production systems by Lin et al. (1997) and Semnani and Zamanifar (2010), cellular manufacturing by Logendran et al. (1995), and job-shop scheduling by Kasemset and Kachitvichyanukul (2010).

For multi-level structures, both exact and heuristic solution procedures have been developed by many researchers. Exact algorithms, however, struggle to reach a solution in reasonable computational time, so heuristic methods have been widely pursued. Well-known heuristic techniques include: genetic algorithms (GA), applied by Kimms (1999) and Pezzella et al. (2008); particle swarm optimization (PSO), applied by Zhang et al. (2009), Kuo and Huang (2009), and Chander et al. (2011); ant colony optimization, used by Semnani and Zamanifar (2010); and combined heuristics, used by Logendran et al. (1995) and Kuo and Han (2011).

Particle swarm optimization (PSO) is a population-based search method. Kachitvichyanukul (2012) compared the effectiveness of PSO with GA and differential evolution and noted that PSO's mechanism facilitates solution improvement within a shorter computational time than GA. PSO has been widely applied to scheduling problems, for example, flow-shop scheduling by Rahimi-Vahed and Mirghorbani (2007) and Pan et al. (2008), and job-shop scheduling (JSP) by Sha and Hsu (2006), Lei (2008), Lian et al. (2006), Xia and Wu (2005), Pongchairerks and Kachitvichyanukul (2009), Zhang et al. (2009), Pratchayaborirak and Kachitvichyanukul (2011), and Wisittipanich and Kachitvichyanukul (2013). However, most researchers in this field have used PSO for single- and multiple-objective optimization; only a few have worked on multi-level programming problems. In addition, the initial step of basic PSO is to set the PSO parameters, i.e., acceleration constants, number of iterations, number of particles, etc., at suitable values to accelerate solution improvement during the PSO process. Parameter optimization for PSO is a tedious job, and many research works have applied the concept of design of experiments (DOE) to find optimal PSO parameter values.

One way to avoid the parameter-optimization step is the multi-level PSO proposed by Pongchairerks and Kachitvichyanukul (2009), in which a top-level PSO fine-tunes and assigns the parameter values for a lower-level PSO developed to find the JSP solution. However, this approach has the disadvantage of long computational time.

Adaptive PSO (APSO) was developed to circumvent the parameter-optimization step: a self-adaptive PSO adjusts its parameters automatically within the PSO process. Many research works have addressed APSO with an adaptive inertia weight, as proposed by Shi and Eberhart (1998), Ueno et al. (2005), Gao and Ren (2007), Arumugam and Rao (2008), and Ai and Kachitvichyanukul (2008a), and with adaptive acceleration constants, as proposed by Ai and Kachitvichyanukul (2008b).

The main contribution of this research is the application of APSO to the bi-level job-shop scheduling problem. APSO is applied to the same problem previously solved by basic PSO, and comparisons are made on three measures: 1) the upper-level objective value, 2) the lower-level objective value, and 3) the iteration number at which the best solution is first identified. These three measures capture both the quality of the solution and the speed of finding a near-optimal solution.

The organization of this paper is as follows. Preliminaries, including the problem formulation, PSO, and the adaptive concept, are explained in Section 2. The numerical illustration and conclusion are presented in Sections 3 and 4, respectively.

## 2. PRELIMINARIES

### 2.1. The Bi-level Multi-objective Job-Shop Scheduling Formulation

The bi-level multi-objective job-shop scheduling formulation was first proposed by Kasemset and Kachitvichyanukul (2010). The mathematical model is formulated in bi-level form. The upper level aims to minimize the total idle time on the bottleneck machines (Bn), as in Eq. (1), based on the Theory of Constraints (TOC), which seeks to maximize the utilization of bottlenecks. The lower level aims to improve schedule performance, formulated as a multi-objective function that minimizes the weighted sum of the maximum completion time (Cmax), maximum tardiness (Tmax), and maximum earliness (Emax), as in Eqs. (2)–(5), where W1, W2, and W3 are the weights for Cmax, Tmax, and Emax, respectively.

Objective:

Minimize

$\sum_{b=1}^{B}\left[\max_{i}\left(X_{ib}+s_{ib}+U_{i}p_{ib}\right)-\sum_{i=1}^{n}\left(s_{ib}+U_{i}p_{ib}\right)\right]$
(1)
where $X_{ib}$ solves

Minimize

$W_{1}C_{\max}+W_{2}T_{\max}+W_{3}E_{\max}$
(2)
and
$C_{\max}=\max_{i,j}\left(X_{ij}+s_{ij}+U_{i}p_{ij}\right)$
(3)
$T_{\max}=\max_{i,j}\left(0,\ X_{ij}+s_{ij}+U_{i}p_{ij}-D_{i}\right)$
(4)
$E_{\max}=\max_{i,j}\left(0,\ D_{i}-\left(X_{ij}+s_{ij}+U_{i}p_{ij}\right)\right)$
(5)

Here i, j, and b index jobs, machines, and bottleneck machines, respectively. Xij are the decision variables representing the starting time of job i on machine j. Ui and Di are the demand and due date of job i, and sij and pij are the setup time and unit processing time of job i on machine j. These objectives are subject to precedence constraints, machine-conflict constraints, a job ready-time constraint, earliest-starting-time constraints calculated from the transfer lot size, non-negativity constraints, and binary constraints. The complete mathematical model and numerical examples are presented in detail in Kasemset and Kachitvichyanukul (2010).
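
Given a candidate set of starting times, both objective levels follow directly from Eqs. (1)–(5). The sketch below is illustrative (variable names and the NumPy array layout are assumptions, not the authors' code):

```python
import numpy as np

def bilevel_objectives(X, s, p, U, D, bottlenecks, W=(1.0, 1.0, 1.0)):
    """Evaluate both levels for a candidate schedule.  X[i, j] is the start
    time of job i on machine j; s, p, U, D are the setup times, unit
    processing times, demands, and due dates in the paper's notation."""
    finish = X + s + U[:, None] * p          # completion of job i on machine j

    # Upper level, Eq. (1): idle time summed over the bottleneck machines
    Bn = sum(finish[:, b].max() - (s[:, b] + U * p[:, b]).sum()
             for b in bottlenecks)

    # Lower level, Eqs. (2)-(5): weighted sum of Cmax, Tmax, Emax
    Cmax = finish.max()
    Tmax = max(0.0, (finish - D[:, None]).max())
    Emax = max(0.0, (D[:, None] - finish).max())
    Ft = W[0] * Cmax + W[1] * Tmax + W[2] * Emax
    return Bn, Ft
```

A schedule that keeps the bottleneck machines busy drives Bn toward zero even when Cmax, Tmax, and Emax are unchanged, which is why the two levels are kept as separate fitness values.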

For small JSP instances, the optimal solution can be derived with any optimizer program. However, as the problem size grows, optimizers fail to return a solution within acceptable computational time. Thus, a PSO-based method was proposed by Kasemset and Kachitvichyanukul (2012) to search for a near-optimal solution within reasonable computational time as the problem size increases.

### 2.2. PSO-Based Method for Solving Bi-level Multi-objective JSP

PSO is a population-based search method introduced by Kennedy and Eberhart (1995). A particle in the swarm is analogous to a chromosome in a GA population, but unlike GA, PSO applies no genetic operations, i.e., crossover and mutation, during its process.

The PSO process starts by randomly initializing the swarm. The dimension of each particle is designed to match the solution of the problem at hand. During the process, each particle moves through the solution space with its own velocity, accelerated toward its previous best position, called pbest (personal best), and the best position of the swarm, called gbest (global best). This exchange of experience allows the particles to move to better solutions.

In the basic PSO algorithm, each iteration mainly consists of two sets of updating equations: velocity, as in Eq. (6), and position, as in Eq. (7):

$v_{id} = w v_{id} + c_{p} u \left(p_{id} - x_{id}\right) + c_{g} u \left(p_{gd} - x_{id}\right)$
(6)
$x_{id} = x_{id} + v_{id}$
(7)

The velocity update in Eq. (6) consists of three elements: the current velocity (inertia), cognitive behavior, and social behavior (details can be found in Ai and Kachitvichyanukul, 2007).
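
Eqs. (6)–(7) can be sketched in a few lines; the array shapes, seed, and default parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the sketch is reproducible

def pso_step(x, v, pbest, gbest, w=0.9, cp=2.0, cg=2.0):
    """One basic PSO iteration following Eqs. (6)-(7): the inertia term keeps
    part of the old velocity, the cognitive term pulls toward pbest, and the
    social term pulls toward gbest.  u is redrawn uniformly per dimension."""
    u1 = rng.random(x.shape)
    u2 = rng.random(x.shape)
    v = w * v + cp * u1 * (pbest - x) + cg * u2 * (gbest - x)   # Eq. (6)
    x = x + v                                                    # Eq. (7)
    return x, v
```

With pbest and gbest at the same point, a step contracts the particle toward that point, which is the pull mechanism described above.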

The PSO-based method for the bi-level multi-objective JSP proposed by Kasemset and Kachitvichyanukul (2012) was developed from basic PSO, with two unique features for handling the bi-level model and the transfer-lot concept: 1) particles move by considering two fitness values, one for each level of the bi-level formulation; and 2) the schedule-representation step differs from other PSO methods for JSPs because the starting time, Xij, is derived using an additional parameter, the "Earliest Starting Time," calculated from the transfer-lot concept proposed in Kasemset and Kachitvichyanukul (2010).

The PSO-based method for the bi-level multi-objective JSP from Kasemset and Kachitvichyanukul (2012) is briefly described in three main steps.

1. Encoding and decoding scheme: This PSO applies the operation-based representation and random-keys representation described in Cheng et al. (1996) for the decoding process. The advantage of these methods is that every decoded chromosome always yields a feasible schedule.

For an n-job, m-machine JSP, a chromosome contains n×m genes. A solution is encoded using the random-keys representation: each gene holds a position number and a random number. All genes are sorted in ascending order of the random keys, then converted to the operation-based representation by repeating each job according to its number of operations, so that each job appears on the chromosome exactly m times. Finally, the genes are sorted back by position number to present a sequence of operations, ready for the schedule-representation step.
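
A minimal sketch of this decoding, assuming the common random-keys-to-operation-sequence mapping (the paper's exact gene bookkeeping may differ slightly):

```python
import numpy as np

def decode_random_keys(keys, n_jobs, n_machines):
    """Turn a particle's continuous position vector ("random keys") into an
    operation-based sequence in which every job appears exactly n_machines
    times; the k-th appearance of job i denotes job i's k-th operation.
    Any key vector decodes to a feasible sequence, which is the guarantee
    cited from Cheng et al. (1996)."""
    order = np.argsort(keys)        # ascending sort of the random keys
    # gene g is pre-assigned to job g // n_machines, so the sorted order
    # induces a permutation of the repeated job indices
    return [int(g) // n_machines for g in order]
```

Because every permutation of the keys maps to some repetition-valid job sequence, no repair step is ever needed after a particle moves.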

2. Schedule representation: The decoded particle is transformed into a schedule. In this step, the starting time of each job on each machine (Xij) is calculated. Once the Xij are known, the finish time of each job can be calculated, and the performance measures Cmax, Tmax, and Emax can be determined in the next step.

3. Fitness evaluation and updating: As previously mentioned, once the Xij are calculated, the finish time of each job can be derived, so Bn, Cmax, Tmax, and Emax can be computed following Eqs. (1) and (3)–(5). In this step, two fitness values, Bn and Ft (following Eq. (2)), govern particle movement. To select gbest, Bn is considered first because, as the upper-level objective, it has higher priority; when several particles share the minimum Bn, Ft breaks the tie. This is the unique feature of the PSO-based method for the bi-level formulation.
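
The bi-level selection just described amounts to a lexicographic comparison of (Bn, Ft) pairs; a minimal sketch with illustrative names:

```python
def better(a, b):
    """Lexicographic bi-level comparison for updating pbest/gbest:
    the upper-level idle time Bn dominates, and the aggregated
    lower-level value Ft only breaks ties.  a and b are (Bn, Ft) pairs."""
    if a[0] != b[0]:
        return a[0] < b[0]      # upper level decides first
    return a[1] < b[1]          # tie-break on the lower level

def select_gbest(fitnesses):
    """Pick the swarm-best (Bn, Ft) pair under the bi-level ordering."""
    best = fitnesses[0]
    for f in fitnesses[1:]:
        if better(f, best):
            best = f
    return best
```

This is the only change relative to single-objective gbest selection; the movement equations themselves are untouched.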

As addressed by Ai and Kachitvichyanukul (2008b), the difficulty in adopting basic PSO is setting its initial parameters, i.e., the inertia weight, the acceleration constants (cp and cg), the number of particles, and others, at suitable values. These parameters clearly affect the quality of the solution obtained from PSO, so many research works resort to DOE to find their optimal values.

One way to avoid the parameter-optimization step is the concept of self-adaptive parameters. PSO with self-adaptive parameters, called APSO, was developed to circumvent parameter optimization. The APSO used in this research is briefly explained in the next section.

### 2.3. Concept of Self-adaptive Parameter in PSO

The concept of APSO was developed to avoid PSO-parameter selection, because finding the optimal set of parameters is not easy to carry out, and the best set may change when the problem changes.

The key parameters in PSO are:

1. Inertia weight (w): Particles move with a new velocity formed from combined vectors. The inertia weight controls the contribution of the current velocity when the new velocity is updated; it plays an important role in the velocity update of Eq. (6) and controls the search behavior of the swarm as well.

2. Acceleration constants: In basic PSO, the acceleration constants cp and cg give the relative importance of the pbest and gbest positions when the velocity is updated. Each constant controls how far a particle may move from its current position toward the corresponding best position. Other acceleration constants (e.g., cl with respect to the local best, cn with respect to the near-neighbor best) can also be used, depending on the structure of the modified PSO.

3. Number of particles: The number of particles, or population size, affects both fitness value and computational time. Generally, a small population leads to poor convergence, while a large population yields good convergence but is time-consuming.

Besides the parameters above, other parameters, e.g., the number of iterations and re-initialization-related factors, also affect the performance of PSO. In this research, the APSO proposed by Ai and Kachitvichyanukul (2008a, b) is used to solve the bi-level JSP. This APSO applies the concept of self-adaptive parameters to the inertia weight and the acceleration constants (cp and cg), briefly explained as follows.

As proposed by Ai and Kachitvichyanukul (2008a), the inertia weight is kept within a minimum (wmin) and maximum (wmax) value. Whenever the swarm velocity index ($\bar{\omega}$), Eq. (8), is lower than the desired velocity index (ω*), Eq. (9), the inertia weight is increased; conversely, when the swarm velocity index exceeds the desired velocity index, the inertia weight is decreased. The size of the increase or decrease depends on the difference between the swarm's velocity index and the desired velocity index.

$\bar{\omega} = \dfrac{\sum_{l=1}^{L} \sum_{h=1}^{H} \omega_{lh}}{L \times H}$
(8)
$\omega^{*} = \begin{cases} \left(1 - 1.8\,\tau/T\right) \omega_{\max}, & 0 \le \tau \le T/2 \\ \left(0.2 - 0.2\,\tau/T\right) \omega_{\max}, & T/2 \le \tau \le T \end{cases}$
(9)
where l is the particle index (l = 1, 2, …, L), h is the dimension index (h = 1, 2, …, H), τ is the iteration index (τ = 1, 2, …, T), and ωmax is the maximum velocity index.

The inertia weight is updated following Eqs. (10)–(13).

$\Delta w = \dfrac{\omega^{*} - \bar{\omega}}{\omega_{\max}} \left(w_{\max} - w_{\min}\right)$
(10)
$w = w + \Delta w$
(11)
$w = w_{\max},\ \text{if } w > w_{\max}$
(12)
$w = w_{\min},\ \text{if } w < w_{\min}$
(13)
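
The inertia-weight adaptation can be sketched as follows; the bounds wmin = 0.4 and wmax = 0.9 are illustrative assumptions, and the final clamp simply keeps w inside [wmin, wmax]:

```python
def update_inertia(w, v_bar, tau, T, v_max, w_min=0.4, w_max=0.9):
    """Adaptive inertia weight in the spirit of Eqs. (8)-(13): compare the
    observed swarm velocity index v_bar against a target v_star that decays
    over the run, nudge w proportionally, and clamp it to [w_min, w_max]."""
    if tau <= T / 2:                                  # Eq. (9), first branch
        v_star = (1.0 - 1.8 * tau / T) * v_max
    else:                                             # Eq. (9), second branch
        v_star = (0.2 - 0.2 * tau / T) * v_max
    dw = (v_star - v_bar) / v_max * (w_max - w_min)   # Eq. (10)
    w = w + dw                                        # Eq. (11)
    return min(max(w, w_min), w_max)                  # keep w in bounds
```

A swarm moving slower than the target (v_bar below v_star) gets a larger w, encouraging exploration; one moving too fast gets a smaller w, encouraging convergence.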

The adaptive acceleration concept used in this research is modified from Ai and Kachitvichyanukul (2008b): here the adaptive concept is applied only to cp and cg, while the original work applied it to four acceleration constants. The approach starts by determining the difference between the objective value of a particle's position and the objective value of the respective term (pbest or gbest). In a minimization problem, a large difference means that term has high priority, so particles tend to move toward it.

The acceleration constants are determined as proportions of the respective degrees of importance relative to the constant c*, defined as the sum of the acceleration constants. The degree of importance over a swarm of L particles can be expressed as Eqs. (14)–(15).

$\Delta Z_{p} = \sum_{l=1}^{L} \max\left(Z_{l} - Z_{p},\ 0\right)$
(14)
$\Delta Z_{g} = \sum_{l=1}^{L} \max\left(Z_{l} - Z_{g},\ 0\right)$
(15)

For any iteration, Zl is the fitness value of particle l, and Zp and Zg are the fitness values of pbest and gbest at the current iteration. The acceleration constants are derived as proportions of these degrees of importance and updated with an exponentially weighted moving average, which avoids rapid parameter changes, following Eqs. (16)–(18).

$\Delta Z = \Delta Z_{p} + \Delta Z_{g}$
(16)
$c_{p} = \alpha c_{p} + \left(1 - \alpha\right) \dfrac{\Delta Z_{p}}{\Delta Z}\, c^{*}$
(17)
$c_{g} = \alpha c_{g} + \left(1 - \alpha\right) \dfrac{\Delta Z_{g}}{\Delta Z}\, c^{*}$
(18)
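
Eqs. (14)–(18) can be sketched as follows; treating Zp and Zg as scalars for the whole swarm follows the text above, while the guard against a fully converged swarm (ΔZ = 0) is an added assumption of this sketch:

```python
def update_accelerations(cp, cg, Z, Zp, Zg, c_star=4.0, alpha=0.8):
    """Adaptive cp/cg per Eqs. (14)-(18): the degree of importance of pbest
    (resp. gbest) is how much worse the swarm is than that incumbent, and
    each constant is smoothed toward its share of c_star with an
    exponentially weighted moving average (smoothing factor alpha)."""
    dZp = sum(max(z - Zp, 0.0) for z in Z)     # Eq. (14)
    dZg = sum(max(z - Zg, 0.0) for z in Z)     # Eq. (15)
    dZ = dZp + dZg                             # Eq. (16)
    if dZ == 0.0:                              # swarm already converged
        return cp, cg
    cp = alpha * cp + (1 - alpha) * (dZp / dZ) * c_star   # Eq. (17)
    cg = alpha * cg + (1 - alpha) * (dZg / dZ) * c_star   # Eq. (18)
    return cp, cg
```

Because gbest is at least as good as any pbest, dZg ≥ dZp, so the social pull tends to grow relative to the cognitive pull as the swarm spreads out, while the moving average keeps either constant from jumping in a single iteration.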

In this research, cp and cg are the adaptive parameters. The adaptation of cp and cg is presented as pseudo-code in Figure 1.

In Figure 1, starting from particle initialization (the encoding process), particles are improved from step 04 onward based on Eqs. (6) and (7), as in basic PSO. They are then decoded and evaluated with the two fitness functions, Eq. (1) for the Bn value and Eq. (2) for the Ft value. Bn is used as the first priority and Ft as the second priority to set pbest and gbest in steps 07–14. The inertia weight, cp, and cg are then updated in steps 15–18. This process continues until the iteration count is exhausted.
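
The overall loop might be sketched as below, collapsed to a single objective on a toy continuous function for brevity (the bi-level two-fitness ranking and the schedule decoding are omitted, and the inertia decay is a simple stand-in for the velocity-index rule, so this is a structural illustration only):

```python
import numpy as np

def apso_minimize(f, dim, n_particles=20, T=200, seed=1):
    """Sketch of the Figure 1 loop: initialize the swarm, move particles by
    Eqs. (6)-(7), update pbest/gbest, then adapt cp and cg by Eqs. (14)-(18)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pval = np.array([f(q) for q in x])
    g = int(pval.argmin())
    gbest, gval = pbest[g].copy(), float(pval[g])
    w, cp, cg, c_star, alpha = 0.9, 2.0, 2.0, 4.0, 0.8
    for _ in range(T):
        u1, u2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + cp * u1 * (pbest - x) + cg * u2 * (gbest - x)  # Eq. (6)
        x = x + v                                                  # Eq. (7)
        val = np.array([f(q) for q in x])
        improved = val < pval                     # update personal bests
        pbest[improved], pval[improved] = x[improved], val[improved]
        if pval.min() < gval:                     # update global best
            g = int(pval.argmin())
            gbest, gval = pbest[g].copy(), float(pval[g])
        dZp = np.maximum(val - pval, 0.0).sum()   # Eq. (14), per-particle pbest
        dZg = np.maximum(val - gval, 0.0).sum()   # Eq. (15)
        dZ = dZp + dZg                            # Eq. (16)
        if dZ > 0.0:
            cp = alpha * cp + (1 - alpha) * dZp / dZ * c_star   # Eq. (17)
            cg = alpha * cg + (1 - alpha) * dZg / dZ * c_star   # Eq. (18)
        w = max(0.4, w * 0.99)                    # stand-in inertia decay
    return gbest, gval
```

In the paper's version, the fitness comparison in the pbest/gbest updates would be the lexicographic (Bn, Ft) comparison rather than a single scalar.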

## 3. NUMERICAL ILLUSTRATION

The test problem from Kasemset (2009) is solved by applying both PSO and APSO. It is a modified 10×10 JSP (ten jobs and ten machines) considering monthly demand, job due dates, transfer lot size, and the setup time of each operation. The job sequences are provided in detail in the Appendix.

The bottleneck machines are machines 2, 7, and 9 (the bottleneck-identification procedure is detailed in Kasemset (2009) and Kasemset and Kachitvichyanukul (2007)). In the bi-level formulation, the upper-level objective is to minimize the total idle time of the three bottleneck machines, and the lower-level objective is to minimize the aggregated value of Cmax, Tmax, and Emax with W1 = W2 = W3 = 1.

### 3.1. Solving by PSO

This 10×10 JSP was previously solved by the PSO-based method proposed by Kasemset and Kachitvichyanukul (2012), with the parameter settings shown in Table 1.

The PSO results from Kasemset and Kachitvichyanukul (2012) are summarized in Table 2. Three measures are collected: 1) the first-level objective value (Bn), 2) the second-level objective value (Ft), and 3) the iteration number at which the best solution is first identified.

### 3.2. Solving by APSO

APSO is used to solve the test problem, and results are likewise collected over 30 replications. All parameters except cp and cg are the same as in the previous PSO test. Initially, cp and cg are both set to 1, c* is 4, and α is 0.8. The test results are shown in Table 3.

### 3.3. Results Comparison

#### 3.3.1. Bn comparison

The Bn results of PSO and APSO are compared by the difference-in-means test shown in Table 4.

From Table 4, a one-sided upper-tail comparison of mean Bn between PSO and APSO is tested. The hypotheses are set as follows:

H0: Mean Bn solved by PSO is equal to mean Bn solved by APSO.

H1: Mean Bn solved by PSO is greater than mean Bn solved by APSO

The p-value of the test is 0.0018, below the significance level (α = 0.05), so the null hypothesis is rejected. Thus, mean Bn solved by PSO differs from mean Bn solved by APSO; in fact, there is strong evidence that mean Bn solved by PSO exceeds mean Bn solved by APSO. For this test problem, APSO improves Bn.
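
The one-sided mean comparisons used in this section are standard two-sample tests; a small illustration with made-up data (Welch's statistic, pure Python, not the paper's actual replication values):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the one-sided test H1: mean(a) > mean(b),
    the form of comparison used for Bn (PSO vs. APSO, 30 replications
    each).  A statistic above the critical value (about 1.67 at
    alpha = 0.05 for roughly 58 degrees of freedom) rejects H0."""
    n_a, n_b = len(sample_a), len(sample_b)
    se = (variance(sample_a) / n_a + variance(sample_b) / n_b) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se
```

A large positive statistic, as for Bn here (p = 0.0018), supports the claim that the PSO mean exceeds the APSO mean; a statistic near zero, as for Ft, does not.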

#### 3.3.2. Ft comparison

The Ft results are compared by the difference-in-means test shown in Table 5.

From Table 5, a one-sided upper-tail comparison of mean Ft between PSO and APSO is tested. The hypotheses are set as follows:

H0: Mean Ft solved by PSO is equal to mean Ft solved by APSO.

H1: Mean Ft solved by PSO is greater than mean Ft solved by APSO.

The p-value is 0.2031, above the significance level (α = 0.05), so the null hypothesis cannot be rejected. It is concluded that there is no difference in mean Ft between the two methods, implying that APSO does not improve Ft for this test problem.

#### 3.3.3. Iteration number in which the best solution is first identified

The iteration number at which the best solution is first identified, for each of the 30 replications of PSO and APSO, is compared by the difference-in-means test shown in Table 6.

To confirm this, Table 6 shows a one-sided upper-tail comparison of this mean. The hypotheses are:

H0: Mean iteration number from PSO is equal to mean iteration number from APSO.

H1: Mean iteration number from PSO is greater than mean iteration number from APSO.

The p-value of the test is less than 0.0001; with such a small p-value, the null hypothesis is rejected. Thus, there is strong evidence that the mean iteration number from PSO exceeds that from APSO. For this test problem, it can be concluded that APSO converges faster than PSO.

The three comparisons addressed here show that APSO outperforms PSO in terms of the Bn value and the iteration number at which the best solution is first identified.

### 3.4. Solving by PSO with cp and cg Set from APSO

To confirm the performance of APSO, an additional test examines whether APSO can find good PSO parameters. From the APSO results in Table 3 (Section 3.2), the average cp and cg are 0.794 and 1.059, respectively. Table 7 shows the results of 30 replications solved by PSO with these adjusted cp and cg.

The mean comparisons between basic PSO and PSO with cp and cg values set from APSO are shown in Table 8; the hypotheses are set as before.


These results show that Bn and the iteration number at which the best solution is first identified are significantly different (at α = 0.05). For the Ft value, no difference in means can be concluded.

These test results lead to the conclusion that APSO is useful for finding good cp and cg values that improve the Bn value and the convergence speed of PSO.

## 4. CONCLUSION

This study presented the use of PSO and APSO for bi-level problem solving. PSO and APSO were used to solve the same 10×10 bi-level JSP. The results are compared on three measures: 1) the first-level objective value (Bn), 2) the second-level objective value (Ft), and 3) the iteration number at which the best solution is first identified.

The results show that the solutions from APSO are better than those from PSO in terms of Bn and the iteration number at which the best solution is first identified, with mean comparisons tested at α = 0.05. In contrast, the Ft values from the two methods are not significantly different. For Bn and the iteration number, the comparisons confirm a significant difference in solution quality, indicating that APSO improves the solution of the bi-level problem in this study.

In an additional test, when cp and cg for PSO were set from the average values of those parameters from APSO, the performance of PSO also improved in terms of Bn and convergence speed. These results demonstrate that APSO allows researchers to skip the parameter-optimization and parameter-setting steps required by basic PSO while obtaining equivalent solution quality and iteration counts.

## Figure

Pseudo-code of proposed adaptive particle swarm optimization.

## Table

Parameters used in basic particle swarm optimization

Adapted from Kasemset and Kachitvichyanukul (2012).

Results from particle swarm optimization (30 replications)

Results from adaptive particle swarm optimization

Bn mean comparison

PSO: particle swarm optimization, APSO: adaptive PSO.

Ft mean comparison


No. of iteration mean comparison


Results from PSO with cp and cg from APSO


3-Parameter mean comparisons


Test problem: 10×10 job-shop scheduling problem

## REFERENCES

1. Ai T. J , Kachitvichyanukul V (2007) Dispersion and velocity indices for observing dynamic behavior of particle swarm optimization. , pp.3264-3271
2. Ai T. J , Kachitvichyanukul V (2008a) A study on adaptive particle swarm optimization for solving vehicle routing problems ,
3. Ai T. J , Kachitvichyanukul V (2008b) Adaptive particle swarm optimization algorithms , Shanghai China, pp.460-469
4. Arumugam M. S , Rao M. V. C (2008) On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems , Applied Soft Computing, Vol.8 (1) ; pp.324- 336
5. Chander A , Chatterjee A , Siarry P (2011) A new social and momentum component adaptive PSO algorithm for image segmentation. , Expert Systems with Applications, Vol.38 (5) ; pp.4998-5004
6. Cheng R , Gen M , Tsujimura Y (1996) A tutorial survey of job-shop scheduling problems using genetic algorithms: I. Representation , Computers and Industrial Engineering, Vol.30 (4) ; pp.983-997
7. Gao Y , Ren Z (2007) Adaptive particle swarm optimization algorithm with genetic mutation operation , pp.211-215
8. Kachitvichyanukul V (2012) Comparison of three evolutionary algorithms: GA PSO DE , Industrial Engineering and Management Systems., Vol.11 (3) ; pp.215-223
9. Kasemset C (2009) TOC based job-shop scheduling, dissertation, Pathumthani, Thailand,
10. Kasemset C , Kachitvichyanukul V (2007) Simulation-based procedure for bottleneck identification , In: AsiaSim 2007, Springer, pp.47-55
11. Kasemset C , Kachitvichyanukul V (2010) Bi-level multi-objective mathematical model for job-shop scheduling: the application of Theory of Constraints , International Journal of Production Research, Vol.48 (20) ; pp.6137-6154
12. Kasemset C , Kachitvichyanukul V (2012) A PSObased procedure for a bi-level multi-objective TOCbased job-shop scheduling problem , International Journal of Operational Research, Vol.14 (1) ; pp.50-69
13. Kennedy J , Eberhart R (1995) Particle swarm optimization , pp.1942-1948
14. Kimms A (1999) A genetic algorithm for multi-level, multi-machine lot sizing and scheduling , Computers and Operations Research, Vol.26 (8) ; pp.829-848
15. Kuo R. J , Huang C. C (2009) Application of particle swarm optimization algorithm for solving bilevel linear programming problem , Computers and Mathematics with Applications, Vol.58 (4) ; pp.678-685
16. Kuo R. J , Han Y. S (2011) A hybrid of genetic algorithm and particle swarm optimization for solving bi-level linear programming problem: a case study on supply chain model , Applied Mathematical Modelling, Vol.35 (8) ; pp.6905-6917
17. Lei D (2008) A Pareto archive particle swarm optimization for multi-objective job shop scheduling , Computers and Industrial Engineering, Vol.54 (4) ; pp.960-971
18. Lian Z , Jiao B , Gu X (2006) A similar particle swarm optimization algorithm for job-shop scheduling to minimize makespan , Applied Mathematics and Computation, Vol.183 (2) ; pp.1008-1017
19. Lin F. R , Shaw M. J , Locascio A (1997) Scheduling printed circuit board production systems using the two-level scheduling approach , Journal of Manufacturing Systems, Vol.16 (2) ; pp.129-149
20. Logendran R , Mai L , Talkington D (1995) Combined heuristics for bi-level group scheduling problems , International Journal of Production Economic, Vol.38 (2/3) ; pp.133-145
21. Pan Q. K , Fatih Tasgetiren M , Liang Y. C (2008) A discrete particle swarm optimization algorithm for the no-wait flowshop scheduling problem , Computers and Operations Research, Vol.35 (9) ; pp.2807-2839
22. Pezzella F , Morganti G , Ciaschetti G (2008) A genetic algorithm for the flexible job-shop scheduling problem , Computers and Operations Research, Vol.35 (10) ; pp.3202-3212
23. Pongchairerks P , Kachitvichyanukul V (2009) A two-level particle swarm optimisation algorithm on job-shop scheduling problems , International Journal of Operational Research, Vol.4 (4) ; pp.390-411
24. Pratchayaborirak T , Kachitvichyanukul V (2011) A two-stage PSO algorithm for job shop scheduling problem , International Journal of Management Science and Engineering Management, Vol.6 (2) ; pp.83-92
25. Rahimi-Vahed A. R , Mirghorbani S. M (2007) A multi-objective particle swarm for a flow shop scheduling problem , Journal of Combinatorial Optimization, Vol.13 (1) ; pp.79-102
26. Semnani S. H , Zamanifar K (2010) New approach to multi-level processor scheduling , International Journal on Artificial Intelligence Tools, Vol.19 (3) ; pp.335-346
27. Sha D. Y , Hsu C. Y (2006) A hybrid particle swarm optimization for job shop scheduling problem , Computers and Industrial Engineering, Vol.51 (4) ; pp.791-808
28. Shi Y , Eberhart R (1998) A modified particle swarm optimizer, pp.69-73
29. Ueno G , Yasuda K , Iwasaki N (2005) Robust adaptive particle swarm optimization , pp.3915-3920
30. Wisittipanich W , Kachitvichyanukul V (2013) An efficient PSO algorithm for finding Pareto-frontier in multi-objective job shop scheduling problems , Industrial Engineering and Management Systems, Vol.12 (2) ; pp.151-160
31. Xia W , Wu Z (2005) An effective hybrid optimization approach for multi-objective flexible jobshop scheduling problems , Computers and Industrial Engineering, Vol.48 (2) ; pp.409-425
32. Zhang G , Shao X , Li P , Gao L (2009) An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem , Computers and Industrial Engineering, Vol.56 (4) ; pp.1309-1318