1. INTRODUCTION
Multiobjective optimization is an integral part of optimization practice and occupies a position of tremendous importance (Deb, 2014). It is a field of multiple criteria decision making (MCDM) focused on mathematical optimization problems in which more than one conflicting objective function is optimized simultaneously. The main purpose of multiobjective optimization is to help the decision maker choose the best alternative among the available feasible alternatives, Jankowski (1995). Multiobjective optimization has been adopted in several fields of science, including, but not limited to, engineering (cf. Marler and Arora, 2004; Triantaphyllou et al., 1998), logistics (cf. Najafi et al., 2013), finance (cf. Zhu et al., 2011), economics (cf. Penna et al., 2015), and scheduling (cf. Yagmahan and Yenisey, 2008; Akbari and Rashidi, 2016), where optimal decisions must be made by making tradeoffs between at least two conflicting objectives. Most industrial scheduling problems require multiobjective optimization to provide a set of optimal solutions, known as the Pareto optimal set, that meets the multiple conflicting criteria. Each solution of this set is not dominated by any other solution, where a nondominated solution is one with a strictly better objective function value for at least one objective function, and a better or equal objective function value for the other objective functions, compared to all other solutions.
In this article, a multiobjective optimization of single machine scheduling with total weighted tardiness and total setup costs under sequence dependent setup times and sequence dependent setup costs (MOSTWTSCSD) is considered, to minimize the total weighted tardiness and the total setup costs of all jobs simultaneously. The MOSTWTSCSD problem is described as follows. A set of n jobs N = {1, 2, 3, ..., n} is to be processed once, without interruption, on a machine that can handle no more than one job at a time. Each job j has a processing time (p_{j}); a due date (d_{j}); a weight (w_{j}), which represents the priority of job j relative to the other jobs; a setup time (s_{ij}), which is the time required to prepare the machine to perform job j when it immediately follows job i; and a setup cost (c_{ij}), which is the cost required to get the machine ready prior to the execution of job j when it immediately follows job i. It is assumed that all these parameters are nonnegative integers. It is worth mentioning that all of these parameters can be measured from historical data except the weights, which are estimated based on the importance of the customers, profit, and other factors. A feasible solution of the MOSTWTSCSD problem is to examine a set of solutions that satisfy the objectives at an acceptable level without being dominated by any other solution. The objectives of minimizing the sum of the weighted tardiness and the sum of the setup costs for the set of jobs to be processed can be expressed, respectively, as:
where (T_{j}) is the tardiness of job j and (x_{ij}) is a binary variable equal to one if job j is processed immediately after job i, and zero otherwise.
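As an illustration of these two objectives, the following sketch evaluates a given job sequence. The job data are illustrative values, not taken from the paper's instances, and it is assumed that no setup is incurred before the first job:

```python
# Sketch: evaluating the two MOSTWTSCSD objectives for a given job sequence.

def evaluate(seq, p, d, w, s, c):
    """Return (total weighted tardiness, total setup cost) of a sequence.

    p, d, w: processing times, due dates, weights per job.
    s, c: sequence-dependent setup time and setup cost matrices.
    Assumption: no setup before the first job in the sequence.
    """
    t = 0            # current machine time
    tardiness = 0
    setup_cost = 0
    prev = None
    for j in seq:
        if prev is not None:
            t += s[prev][j]           # sequence-dependent setup time
            setup_cost += c[prev][j]  # sequence-dependent setup cost
        t += p[j]                     # processing time; t is now C_j
        tardiness += w[j] * max(0, t - d[j])  # T_j = max(0, C_j - d_j)
        prev = j
    return tardiness, setup_cost
```

For example, with p = [2, 3], d = [2, 4], w = [1, 2], unit setup time and a setup cost of 5 between the two jobs, the sequence (1, 2) yields a weighted tardiness of 4 and a setup cost of 5.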
Scheduling problem with setup times and costs has a significant role in the manufacturing and the service sectors. Setup activities include, for example, changing machine tools or dies, testing initial output for ensuring that it matches the required specifications, cleaning up, setting up new equipment, etc.
Scheduling problems with setups can be categorized into sequence dependent and sequence independent, Allahverdi et al. (2008) and Allahverdi et al. (1999). The setup is called sequence dependent if the setup of a given job depends on both that job and the one immediately preceding it, Allahverdi et al. (1999). On the other hand, in the sequence-independent case, the setup depends only on the job currently being processed, Allahverdi and Soroush (2008).
Scheduling problems involving a sequence dependent setup cost are still not well understood. Most studies in single machine scheduling have focused only on sequence dependent setup times, as will be explained in the literature review. Because of the presence of these sequence dependent setups, which are considered non-value-added activities, the scheduling decisions become critical to fulfilling the desired objectives. In this problem, the relationship between setup time and setup cost is not clear; such a relationship holds on some machines, but certainly not all. This means that a long setup time between two jobs in sequence does not necessarily imply a high cost, or vice versa. The optimal solutions of the MOSTWTSCSD problem ensure the best combination of time and cost, but may not guarantee obtaining the minimum total setup time and the minimum total setup cost.
Recently, the metaheuristic particle swarm optimization (PSO) has become a very popular and widely used approach for multiobjective optimization. PSO can generate several elements of the Pareto optimal set through randomized initialization and stochastic search within its iterations, Sierra and Coello (2005) and Nebro et al. (2009). Moreover, it has a high speed of convergence, exploration, and exploitation due to the local and global search capabilities of the algorithm, Shi (2014). In this paper, a multiobjective discrete particle swarm optimization (MOPSO) model is used to find the approximate Pareto optimal solution set for the MOSTWTSCSD by using a weighted sum of the multiple objectives to transform the multiobjective problem into a single objective problem. Furthermore, the concept of Pareto dominance is incorporated into a discrete particle swarm optimization (DPSO) procedure to find the nondominated set of solutions required by the decision maker; hence, the DPSO model simplifies the difficulties of applying optimization and provides decision support for this important problem.
The rest of this paper is organized in the following manner. The literature is reviewed in the next section. The solution approach is presented in Section 3. Data instances and numerical results are given in Section 4. Lastly, the paper ends by summarizing the implemented work together with a look at future research directions.
2. LITERATURE REVIEW
The research on scheduling problems involving setup times or costs still constitutes less than 10% of the available scheduling research, Allahverdi et al. (2008) and Allahverdi (2015). However, Kopanos et al. (2009) pointed out that these setup activities appear in an increasing number of industrial and service applications. For example, Trovinger and Bohn (2005) reduced the setup time by more than 80% in a printed circuit board assembly plant, which resulted in a direct annual saving of $1.8 million. Loveland et al. (2007) developed a new methodology, including optimization and heuristic components, for a scheduling problem at Dell Inc. to minimize the total setup costs. As a result, the production volume of the company increased by about 35%, and the company realized a direct annual saving of over $1 million. Another methodology, the greedy randomized adaptive search procedure (GRASP), was proposed by Monkman et al. (2008) for a scheduling problem in electronics manufacturing, in which the cost was reduced by about 21% over the savings of Loveland et al. (2007). A more complete list of the applications and the importance of reducing setup times and setup costs in scheduling problems is provided by Allahverdi and Soroush (2008).
According to Zhu and Wilhelm (2006), the sequence dependent problem involving both setup times and costs is considered a complicated scheduling problem; however, it has been addressed in few studies. For example, Haase and Kimms (2000) proposed a mixed integer programming (MIP) model for lot sizing and scheduling of a single machine considering sequence dependent setup times and setup costs, to determine a schedule for items that may be produced in lots and thus reduce their holding costs and setup expenditures. They suggested small scale instances to validate the proposed model, and a fast enumeration method of the branch-and-bound type was devised to find an efficient and optimal solution for the problem instances. Sourd (2005) addressed the single machine scheduling problem involving setup times and setup costs with earliness and tardiness penalties, applying a branch-and-bound algorithm, an efficient heuristic algorithm, and mathematical programming formulations of the problem to solve some data instances. Furthermore, Sourd (2006) applied the dynasearch heuristic in order to improve the solution found by the simple descent algorithm for the same problem.
The problem of minimizing the total weighted tardiness in single machine scheduling using DPSO with only one type of sequence dependent setup, which could be time or cost, has been studied in only a few works in the scheduling literature. For example, Anghinolfi and Paolucci (2009) proposed a new DPSO to solve the single machine scheduling problem with sequence dependent setup times in order to minimize the total weighted tardiness. Additionally, Li and Tian (2015) proposed a mixed integer linear programming model with an improved PSO to solve selective single machine scheduling in the presence of sequence dependent setup costs and downstream demands, whereas Tasgetiren et al. (2004) used PSO to solve the same problem without setup considerations. On the other hand, Al Theeb and Alhwiti (2014) applied DPSO to solve a single machine scheduling problem considering both sequence dependent setup times and setup costs from a single objective point of view.
For additional details of the latest research on scheduling problems involving setup considerations, Allahverdi et al. (1999) provided the first comprehensive review, covering about 200 papers between 1960 and 1998. In addition, Allahverdi et al. (2008) surveyed about 300 different papers between 1998 and 2006. Moreover, a third comprehensive review by Allahverdi (2015) covered about 500 papers between 2006 and 2014, providing the most extensive review of such scheduling problems to date.
Most industrial scheduling problems have a large number of alternatives with several contradictory criteria, which must be optimized to choose the best tradeoffs among them. These problems are rarely single objective optimization problems, which identify a single maximum or minimum optimal solution and lump all the different alternatives into one, but are unable to achieve a tradeoff among different alternatives. On the contrary, multiobjective optimization provides a set of optimal alternative solutions that meets all conflicting objectives, known as the Pareto optimal set, Savic (2002). The most suitable and most used techniques for solving multiobjective optimization problems are metaheuristics, because they are less sensitive to the shape or continuity of the Pareto front, Jones et al. (2002) and Talbi et al. (2012); i.e., they can easily deal with concave or discontinuous Pareto fronts.
Among these metaheuristics, Suresh and Mohanasundaram (2006) studied a multiobjective job shop scheduling problem, minimizing the makespan and the mean flow time of jobs by using a metaheuristic procedure based on the simulated annealing algorithm to find nondominated solution sets. Prasad et al. (2006) considered multiobjective scheduling problems in a Kanban controlled flow shop with intermediate buffer, input buffer, and output buffer constraints, based on a genetic algorithm, to minimize the mean completion time of containers as well as the mean completion time and the standard deviation of completion times of part types. Additionally, Yulan et al. (2008) proposed the same algorithm to solve another conflicting multiobjective problem: they integrated preventive maintenance planning and production scheduling for a single machine to optimize the overall performance of the system, considering the objectives of maximizing machine availability and minimizing maintenance cost, makespan, total weighted completion time of jobs, and total weighted tardiness. Reddy and Nagesh Kumar (2007) proposed an elitist-mutated multiobjective particle swarm optimization (EMMOPSO) approach to solve a conflicting and competing multiobjective reservoir operation problem in India. The objectives involve minimizing the irrigation deficit, maximizing hydropower generation from the reservoir, and satisfying downstream water quality requirements. The proposed EMMOPSO of Reddy and Nagesh Kumar (2007) is an efficient approach that yields a diverse set of alternatives for multiobjective reservoir operation along the true Pareto optimal fronts, and it is easy to use and implement. Arroyo et al. (2011) solved single machine scheduling with sequence dependent setup times and distinct due windows using a multiobjective variable neighborhood search (MOVNS) algorithm, considering the minimization of the total earliness/tardiness penalties and the total flow time.
They proposed two new MOVNS algorithms, known as MOVNS2 and MOVNS3. Their new MOVNS algorithms were enhanced by using an intensification procedure which showed significant improvement on the solution quality and better performance than the original MOVNS. Wongwien and Nanthavanij (2017) proposed an integer model and heuristic to schedule workforce with multiple objectives; minimizing the number of workers, maximizing the productivity, and minimizing the changeover.
Among the metaheuristic techniques, and until recently, particle swarm optimization (PSO) had not been applied to multiobjective optimization for single machine scheduling with both sequence dependent setup times and setup costs treated separately. However, some articles have used PSO for multiobjective optimization in other fields related to scheduling, such as Liu and Kachitvichyanukul (2015), who used DPSO to provide Pareto optimal solutions for the location routing problem with two objective functions: maximizing the demand delivered to customers and minimizing the total cost. Kim and Lee (2017) proposed PSO to optimize the inventory policy of medicine supply chains. Dealing with this aspect provides the first contribution of this article. Because of the presence of these multiple objective considerations and sequence dependent setups, the single machine scheduling problem is strongly NP-hard. Therefore, it is unlikely that an efficient approach can be developed to solve it exactly, Du and Leung (1990). Thus, developing metaheuristic approaches that provide representative Pareto fronts with reasonable computational effort for such problems is considered in this paper and represents the second contribution.
For additional reviews and details of the latest research on MOPSO problems, Reyes-Sierra and Coello (2006) provided a comprehensive review of the various MOPSO approaches reported in the specialized literature. Lalwani et al. (2013) summarized the application areas of MOPSO, providing additional knowledge for researchers working in related fields, including but not limited to the environment, industry, flow shop and job shop scheduling, data mining, engineering, hospital operations, image processing, and others.
3. SOLUTION APPROACH
In this section, brief concepts of PSO and multiobjective optimization are first presented and then the proposed algorithm is explained.
3.1 Discrete Particle Swarm Optimization (DPSO)
PSO is a swarm intelligence method that was introduced and investigated by Kennedy and Eberhart (1995). In any swarm, many particles move to find better places for food. This concept is used in the coded (computerized) PSO, where each particle represents a candidate solution. Let (m) be the number of particles (solutions) in the swarm. Each job (j) in each particle (k) has a randomly initialized position (pos_{kj}) and velocity (velo_{kj}) in an n-dimensional search space, such that the position array has a size of m by n, as each job has a position value in each particle. Based on the fitness (objective) values, each particle stores its updated values of the current position (pos), the current velocity (velo), the local bests, which are the best visited positions among subgroups of the overall particles (LB), the global best, which is the best position visited by all particles (GB), and the personal best, which is the best position visited by the particle itself (PB).
During the iterations, the velocity of each job in each particle is updated using one of Equations 3, 4, and 5. In Equation 3, the global system is used, where each particle is guided by both its personal best and the global best positions. This means that the job at iteration t moves with a velocity velo_{kj}^{t} resulting from a combination of the best place it has found itself, the best place found by the whole swarm, and its velocity in the previous iteration, velo_{kj}^{t-1}. The weights of the global and personal bests in the velocity update are determined by the acceleration factors R_{1} and R_{2}.
The inertia weight factor (w_{t}) is used to control exploration and exploitation in the search space. At the beginning of the iterations, the value of w_{t} is taken to be high (=1); it is then reduced gradually by multiplying it by 0.99. Keeping w_{t} at a high value makes the velocity term in Equations 3-5 dominant and reduces the effect of the other terms: the global, personal, and local bests. On the opposite side, reducing the value of w_{t} quickly, by multiplying it by a smaller factor such as 0.9, eliminates the effect of the velocity term and gives the best-position terms the whole effect. The starting value of w_{t} and the value of the multiplication factor are selected through experimental analysis to provide the best results, and could be different for other kinds of problems.
Similarly, in Equation 4, the local system is used where each particle is guided by both its personal best and local best positions, i.e., the job moves in a velocity derived from its best place and local best place that has been found by a subgroup of particles. The last type of velocity updating is in Equation 5, which combines the previous two systems.
where
w_{t}: the inertia weight factor at iteration t.
R_{1}, R_{2}, and R_{3}: positive constant factors called acceleration coefficients.
A_{1}, A_{2}, and A_{3}: random numbers, uniformly distributed in [0, 1].
The positions are incremented by the velocity, as demonstrated by Equation 6. The selection of the velocity equation and the values of all parameters are discussed later in Section 3.3. As the search proceeds, all particles try to move in the direction of better solutions, following their personal best, the local best, and the global best solutions. At the end of the search, one or more of the particles will have found the best place, which represents the best solution that can be found.
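Since Equations 3 and 6 are referenced above but not reproduced here, the following is a minimal sketch of the global-system velocity update and the position update; the exact arrangement of terms is assumed from the standard PSO formulation, using the symbols defined in the list above:

```python
import random

def update_velocity_global(velo, pos, pb_pos, gb_pos, w_t, R1, R2):
    """Global-system velocity update (in the spirit of Equation 3):
    inertia term plus pulls toward the personal-best and global-best positions."""
    A1, A2 = random.random(), random.random()  # A_1, A_2 ~ Unif[0, 1]
    return (w_t * velo
            + R1 * A1 * (pb_pos - pos)    # cognitive (personal best) term
            + R2 * A2 * (gb_pos - pos))   # social (global best) term

def update_position(pos, velo):
    """Position update (Equation 6): increment position by velocity."""
    return pos + velo
```

The local system (Equation 4) would replace the global-best pull with a pull toward the local best of the particle's subgroup, and the combined system (Equation 5) would include all three terms weighted by R_1, R_2, and R_3.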
3.2 MultiObjective Optimization Fundamental and Pareto Optimality
A general multiobjective optimization problem (MOOP) is formulated as follows: minimize a function f(x), which includes several objective functions, subject to J constraints.
where
k: the number of objectives.
$x=\left\{{x}_{1},{x}_{2},\cdots ,{x}_{n}\right\}$: the set of n-dimensional decision variables.
X: the feasible search space.
R^{n}: the n-dimensional space.
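A standard statement of this formulation, reconstructed to be consistent with the definitions above (the constraint functions g_j are assumed, as the original equations are not reproduced here), is:

```latex
\begin{aligned}
\min_{x \in X} \quad & f(x) = \bigl(f_1(x), f_2(x), \ldots, f_k(x)\bigr) \\
\text{s.t.} \quad    & g_j(x) \le 0, \qquad j = 1, 2, \ldots, J, \\
                     & x = (x_1, x_2, \ldots, x_n) \in X \subseteq \mathbb{R}^n.
\end{aligned}
```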
MOOP problems involve several objectives to be optimized simultaneously. Such problems arise in many applications that require optimal decisions due to the tradeoffs between the conflicting criteria. Hence, there is no single solution that optimizes every objective simultaneously; instead, a solution to a multiobjective problem is a set of alternative optimal solutions, known as the Pareto optimal set, each of which satisfies the different objectives at an acceptable level without being dominated by any other, i.e., a solution x* ∈ X is considered nondominated by another solution x if and only if x* is equally good or better than x with respect to all objectives and strictly better in at least one objective function. Additionally, each such solution x* should meet the subjective preferences of the decision makers. The set of Pareto optimal solutions in objective space is often called the Pareto front.
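The dominance test described above can be written directly; this sketch assumes minimization of all objectives:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))
```

For example, in a two-objective minimization, (1, 2) dominates (2, 2), while (1, 2) and (2, 1) are mutually nondominated.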
3.3 Random Selected MultiObjective DPSO Approach (RSDPSO)
In this paper, the single machine scheduling problem is studied with the multiple objectives of simultaneously minimizing the total weighted tardiness and the total setup cost of a schedule for a set of jobs. To achieve that, the priorities (importance) of the jobs to be scheduled first are generated based on two factors. First, the weight (w_{j}): jobs with higher weights should be given higher priority to be processed. Second, the available time to be processed, which can be expressed as the difference between the due date (d_{j}) and the processing time (p_{j}): jobs with early due dates and long processing times should be given higher priority. Therefore, a new parameter called the importance value (IM_{j}) is generated to incorporate all of these parameters into one and is calculated for each job j in the queue according to the equation:
After that, these importance values IM_{j} are sorted in descending order, so that whenever the machine is released, the job with the highest importance is ready to begin processing. This prevents an important job from being pushed to the end of the schedule, where its completion time would be lengthened dramatically, so that the tardiness penalties are minimized. The position and velocity values of the jobs are initialized considering this parameter, as in Equations 11 and 12:
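The equation for IM_j is not reproduced above, so the following sketch is illustrative only: it assumes a hypothetical form IM_j = w_j / (d_j - p_j), which rewards high weight and small slack as the text describes, and sorts the jobs accordingly:

```python
def initial_order(p, d, w):
    """Order job indices by a hypothetical importance value, descending.
    The formula IM_j = w_j / (d_j - p_j) is an illustrative assumption,
    not the paper's actual equation."""
    im = [w[j] / (d[j] - p[j]) for j in range(len(p))]
    # Most important job first: it begins processing as soon as the machine is free.
    return sorted(range(len(p)), key=lambda j: im[j], reverse=True)
```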
The tuning parameters (A_{1}) and (A_{2}) are randomly generated in [0.8, 1.2] each time they are used. Setting the values of these parameters within an appropriate interval is crucial to ensure good performance. In other words, selecting a wide interval generates extremely randomized solutions and thus works against the role of the importance values, while a narrow interval means that all the initial values derived from the IM_{j} are very similar, which results in rapid stagnation. The best interval for these parameters is determined through design of experiments.
Through some experimental work, it is found that using the importance values to generate the initial position and velocity values improves the results, providing solutions with objective function values 10-15% better than those obtained without using importance values, given that the comparison is carried out with the same number of iterations. This was also shown by Al Theeb and Alhwiti (2014), who used almost identical importance values, called priorities, to generate the initial velocity and position values and achieved a 13% improvement. Additionally, Lee et al. (1997) used a priority index to solve the problem of minimizing the total weighted tardiness of single machine scheduling with sequence dependent setups.
An updated DPSO is proposed to find the approximate Pareto optimal (nondominated) set of solutions. A discrete procedure based on the smallest position value (SPV) rule is used to interpret the updated position and velocity values. The SPV rule arranges the jobs of a particle in ascending order of their position values, such that the job with the smallest position value is scheduled first. For example, given three jobs with positions of 2.2, 3.5, and 1.2, the permutation of these jobs based on SPV is 3, 1, and 2. If the current position of a job is close to its personal best position and the global best position, its place within the schedule will not be greatly affected under the SPV rule and the velocity updating equation. On the other hand, if the job is far from the personal best location and/or the global best location, the SPV rule will greatly change its place to help in finding a better solution.
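The SPV decoding amounts to an argsort of the position values, as in this sketch:

```python
def spv_permutation(positions):
    """Smallest-position-value rule: schedule jobs in ascending order
    of their continuous position values."""
    return sorted(range(len(positions)), key=lambda j: positions[j])
```

For the text's example, positions 2.2, 3.5, and 1.2 give the permutation job 3, job 1, job 2 (1-indexed), i.e. indices [2, 0, 1] in 0-indexed form.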
There are many factors used within the DPSO, including the inertia weight (W), the cognitive factor (R_{1}), the social factor (R_{2}), the number of particles (m), the number (4) used in the importance equation, and the number of iterations. An extensive design of experiments procedure is conducted on many selected datasets to determine the best values of these parameters.
It is found that the best starting value of the inertia weight is 1, updated as in line 31. The cognitive and social factors are determined to be 1 and 1.5, respectively. The number of particles is selected to be 50 in both phases, and the number of iterations is 150. The number (4) used in the importance equation serves to decrease the maximum position value. Additionally, the position values are restricted between some maximum and minimum values to avoid exploration over too wide a range of the search space.
The concept of the proposed method is to select an objective function randomly and then apply the DPSO using this objective function, while the values of the second objective function are calculated and saved. Then, the best particles produced by the first phase are taken, and PSO is applied again to these selected particles with the other objective function. In this way, an objective function is optimized over particles that are already good in the other objective, which will potentially produce solutions that are good in both objectives. The objective values of both functions are saved, and at the end, all the saved objective values are used to find the Pareto optimal set.
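A high-level sketch of this two-phase idea follows; `run_dpso` is a hypothetical stand-in for a full DPSO run (Algorithm 1), assumed to return the final particles sorted best-first under the given objective:

```python
import random

def rsdpso(swarm, objectives, run_dpso, n_best):
    """Two-phase outline of RSDPSO (illustrative sketch, not Algorithm 1 itself).

    objectives: a pair of objective functions; one is chosen at random for phase 1.
    run_dpso(swarm, obj): hypothetical DPSO run returning particles sorted
    best-first under obj.
    """
    f1 = random.choice(objectives)               # phase 1: random objective
    f2 = objectives[1 - objectives.index(f1)]    # the other objective
    phase1 = run_dpso(swarm, f1)
    archive = [(f1(x), f2(x)) for x in phase1]   # save both objective values
    elites = phase1[:n_best]                     # best particles from phase 1
    phase2 = run_dpso(elites, f2)                # phase 2: the other objective
    archive += [(f1(x), f2(x)) for x in phase2]
    return archive  # nondominance filtering is applied afterwards
```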
The RSDPSO algorithm can be summarized in the following steps. To evaluate the performance of this suggested algorithm, the same sets are solved through the traditional multiobjective DPSO, which is the linearly weighted multiobjective procedure, as described in the next section.
3.4 Linearly Weighted DPSO Approach
One of the most common techniques for solving multiobjective problems is linearly weighting the objectives (LWDPSO), where all objectives are summed into one objective with different weights. For the objectives in this research, the single weighted objective is written as in Equation 13, where C_{1} and C_{2} are the weights and C_{1} + C_{2} = 1.
This technique is used here to evaluate the performance of the suggested RSDPSO approach described above in Section 3.3. Algorithm 1 can be used to run the LWDPSO after updating line 10, where the objective used is the one shown in Equation 13 with randomly generated values of C_{1} and C_{2} = 1 - C_{1}, instead of a randomly selected objective function as in the previous section. Lines 33-36 are removed in this method and phase 2 is not performed. Instead, the total number of iterations of both phases in RSDPSO is used here for a single phase, and to avoid the effect of the objective weights, the objective value is measured for both objectives separately, without weights, in line 22 of Algorithm 1. This allows constructing the Pareto front over the two objectives: the weighted tardiness and the total setup cost.
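The weighted objective of Equation 13 and the final nondominance filtering can be sketched as follows (minimization assumed for both objectives):

```python
def weighted_objective(f1_val, f2_val, C1):
    """Equation 13: linearly weighted single objective, with C1 + C2 = 1."""
    return C1 * f1_val + (1 - C1) * f2_val

def pareto_front(points):
    """Keep only the nondominated objective pairs (minimization).
    A point is dropped if some other point is no worse in both
    objectives and different from it."""
    front = []
    for a in points:
        dominated = any(all(bi <= ai for bi, ai in zip(b, a)) and b != a
                        for b in points)
        if not dominated:
            front.append(a)
    return front
```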
4. DATA INSTANCES AND NUMERICAL RESULTS
4.1 Data Instances
Six different values of the sample size of the test problems are considered to estimate the performance of the proposed DPSO heuristic variants. These scales are n ∈ {30, 50, 80, 100, 150, 200}. Uniform distributions are used for data generation: due dates are generated by ~Unif(400, 2000), processing times by ~Unif(50, 150), setup costs by ~Unif(0, 10000), setup times by ~Unif(0, 10), and weights by ~Unif(0, 20). These distributions are selected to be in the same frame as those used in the literature, such as Cicirello (2003).
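The original instances were generated in MATLAB; a sketch of the same generation scheme follows (integer parameters assumed, per the problem description; the function and seed names are illustrative):

```python
import random

def generate_instance(n, seed=None):
    """Generate one MOSTWTSCSD instance with the stated uniform ranges."""
    rng = random.Random(seed)
    d = [rng.randint(400, 2000) for _ in range(n)]  # due dates ~ Unif(400, 2000)
    p = [rng.randint(50, 150) for _ in range(n)]    # processing times ~ Unif(50, 150)
    w = [rng.randint(0, 20) for _ in range(n)]      # weights ~ Unif(0, 20)
    # Sequence-dependent matrices: entry [i][j] applies when j follows i.
    s = [[rng.randint(0, 10) for _ in range(n)] for _ in range(n)]     # setup times
    c = [[rng.randint(0, 10000) for _ in range(n)] for _ in range(n)]  # setup costs
    return p, d, w, s, c
```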
4.2 Numerical Results
Each problem scale consists of 60 data sets that were generated randomly using MATLAB code written by the authors of this article. All data sets are solved by the proposed DPSO to find the Pareto fronts; then, a data set from each size is randomly selected to present its results here. The criteria used to evaluate the performance of any proposed MOPSO are: the ability of the solution approach to provide diverse solutions that cover most of the Pareto areas, Deb et al. (2005), the number of options (solutions), and the computational time.
4.2.1 Small Scale Data Sets Results
The results of the small scale problems using the random selected MOPSO with n=30 and n=50 show that the solutions for both sets cover a wide range of options with no huge gaps, as shown in Figures 1a and 1b, although there are some small gaps in parts of the Pareto fronts. Even so, the solutions in Figure 1 cover most of the areas of the Pareto front and provide a good number of options for decision makers.
The results produced by the linearly weighted MOPSO show a smaller number of solutions (10 using LWDPSO vs. 18 using RSDPSO for the set with n=30; 14 using LWDPSO vs. 20 using RSDPSO for the set with n=50). It can also be observed that the solutions produced by LWDPSO are clustered in approximately half of the area covered by the solutions of RSDPSO. However, one solution among the LWDPSO solutions in the set with n=30 competes with the solutions of RSDPSO and dominates two of them. Additionally, in the set with n=50, eight solutions of LWDPSO compete with the RSDPSO solutions but dominate only one of them, as these eight solutions are clustered in only a small area of the figure.
4.2.2 Medium Scale Data Sets Results
Similarly to the small scale problems, the solutions of the medium scale problems with n=80 and n=100 provide a wide range of solutions with a high number of options. In Figures 2a and 2b, 12 solutions were produced for the set with n=80 and 15 solutions for the set with n=100, covering a wide range of the Pareto fronts. It is observed that there are small gaps between solutions in the Pareto front of the set with n=80; however, these gaps are much smaller in the results of the set with n=100.
For the comparison between RSDPSO and LWDPSO, it is observed that the solutions produced by LWDPSO fail to compete with the solutions produced by RSDPSO for the set with n=80. However, for the set with n=100, a higher number of solutions is produced by LWDPSO, and one of them competes with the RSDPSO solutions and can be part of the Pareto front.
4.2.3 Large Scale Data Sets Results
In the large scale problems with n=150 and n=200, the same conclusions can be drawn: a good number of solutions covering a wide range of options. The numbers of solutions are 10 and 11 for the sets with n=150 and n=200, respectively. In the set with n=150, LWDPSO provides solutions covering a smaller range compared with the range covered by the solutions provided by RSDPSO, and one of the LWDPSO solutions can enter the Pareto front of RSDPSO. On the other hand, in the set with n=200, LWDPSO again provides solutions covering a smaller range, with no solution from LWDPSO able to compete with the RSDPSO solutions.
The number of solutions for all data sets is shown in Table 1, where it is found that the proposed solution approach is able to produce a good number of nondominated solutions. Finally, the last criterion is the computation time, which was very short, ranging between 3.6 seconds for the set with 30 jobs and 48 seconds for the sets with 200 jobs, using a desktop with an i7 processor.
5. CONCLUSIONS AND FUTURE WORK
This paper presents an optimization approach for the single machine total weighted tardiness and setup costs scheduling problem with sequence dependent setup times and sequence dependent setup costs. In this problem, there is a cost for setting up the machine to process each job, where the setup cost depends on the job that has just been processed. In addition, there is a setup time consumed to prepare the machine that also depends on the preceding job. It is assumed that there is no relation between setup time and setup cost, in the sense that costly setups do not necessarily require long times, and vice versa.
A discrete particle swarm optimization (DPSO) approach is proposed to solve several data instances. The proposed approach generates solutions with some dependency on job importance, where jobs with higher weights, earlier due dates, and longer processing times are given higher priority for processing. Then, PSO optimization is applied using one of the available objective functions, selected randomly. After that, a second phase is performed on the best solutions produced in the first phase with the other objective function. The results show that the proposed DPSO is able to produce Pareto fronts with a good number of solutions and to cover a wide range of options (Figures 3a and 3b).
To compare the proposed heuristic with another heuristic, the same data sets are solved again using DPSO with the same number of iterations but with a linearly weighted objective function. It is found that the multi-phase DPSO with a randomly selected objective function provides better Pareto fronts than the linearly weighted DPSO in terms of the number of solutions and the area covered by the nondominated solutions.
This research can be extended by using different algorithms and heuristics to solve the MOSTWTSCSD, such as ant colony optimization, simulated annealing, and genetic algorithms. One clear improvement opportunity is to use different improvement techniques to enhance the performance of the DPSO, after the second phase or within the phases. For example, swapping heuristics, as in Lin and Ying (2007), might be used to swap some jobs in the global best solution to slightly improve its objective value. Insertion heuristics might improve the best set of solutions by taking one job from its place and inserting it between two other jobs. Additionally, an intensification phase, similar to the one used by Anghinolfi and Paolucci (2009), might be used at the end of the approach to improve the global best solution.