ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.17 No.4 pp.805-818
DOI : https://doi.org/10.7232/iems.2018.17.4.805

Rescheduling a Single Machine with Various Objectives

Jaehwan Yang*
Department of Business Administration, The University of Seoul, Seoul, Republic of Korea
* Corresponding Author, E-mail: jyang@uos.ac.kr
Received December 29, 2017; Revised July 19, 2018; Accepted August 26, 2018

ABSTRACT

We consider a single machine rescheduling problem with two types of disruptions, order cancelations and new orders, occurring after the initial scheduling. The original objective is minimizing total completion time, and after the disruption, we reschedule the remaining (not cancelled) and new orders under one of three objectives, subject to the conditions of no idle time and no tardiness of remaining orders. The three objectives are minimizing total earliness of remaining orders with no change in the sequence of remaining orders, minimizing total completion time of new orders, and, finally, a composite objective, minimizing the sum of total completion time of new orders and total earliness of remaining orders. We first establish the complexity of the problems. Then, we develop an optimal solution procedure for one objective and four intuitive heuristics for the other two objectives. We theoretically analyze the performance of each heuristic. Finally, we empirically evaluate the heuristics; the results indicate that heuristic H2, which uses the LPT rule to select the new order to be scheduled first, performs better than H1 and H3, and in turn, H1, which uses the LS rule to select a new order, performs better than H3, which uses the SPT rule.

1. INTRODUCTION

Disruptions such as cancelations of orders, arrival of new orders, changes in order priority, processing delays, changes in release dates, machine breakdowns, and the unavailability of raw materials, personnel, or tools are commonly found in manufacturing facilities (Hall and Potts, 2004). However, most studies in the scheduling field assume scheduling situations without any of these disruptions. This paper considers the rescheduling problem where cancelations of orders and arrivals of new orders occur at the same time.

If, in a given period of time, say the last 24 hours, cancelations of orders and arrivals of new orders occur, then the operations manager (OM) has to update or reschedule the current schedule while considering both cancelled orders and new orders. It may be possible to ignore the current schedule and develop an entirely new one, but doing so causes other problems with respect to the reallocation of raw materials and resources, including labor, tools, and equipment, which had already been prepared for the current schedule. Moreover, the OM has to deal with an unhappy sales department or customers owing to substantial changes in the completion times of some orders. Hence, the OM may need to consider the trade-off between the efficiency-related cost and the disruption-related cost of scheduling when the new schedule is generated.

For example, consider a facility where orders are scheduled with a traditional objective such as minimizing total completion time. After an initial schedule is generated, each order receives an estimated completion time, or so-called quoted due date (QDD), which is used to notify customers when their order will be shipped from the factory. Then, whenever rescheduling is necessary due to order disruptions, these quoted completion times are often hard-pegged as fixed completion times. Obviously, the company does not want the already determined completion times to change, and by keeping them fixed it can minimize the effect of the disruptions.

Specifically, this paper considers a single machine rescheduling problem where the original (efficiency-related) objective is minimizing total completion time. We then consider the situation where disruptions such as order (job) cancelations and newly arrived orders occur after the initial scheduling, and we reschedule the disrupted schedule under one of three new objectives, subject to the conditions of no idle time and no tardiness of remaining jobs. We assume that idle time is not allowed mainly because the original objective is minimizing total completion time, which focuses on completing orders as quickly as possible on average. Regarding no tardiness of remaining jobs, real-world situations where tardiness of remaining orders is not allowed are easy to find. For instance, when a buyer places an order with a manufacturer, the manufacturer usually returns a promised delivery date to the buyer after creating or updating its schedule. This promised delivery date then becomes a due date which must always be met.

The disruption-related objective measures the impact of the disruptions as the difference in completion times of the remaining (not canceled) jobs before and after the disruptions. The artificial due dates of the remaining jobs are set to their completion times in the original schedule, while newly arrived orders do not have due dates. The first objective of the rescheduling is then minimizing total earliness with no change in the sequence of remaining orders. With this objective, we focus on the disruption-related measure without considering the quality of the schedule for newly arrived orders. The second objective is minimizing total completion time of newly arrived orders. With this objective, we do not consider the disruption-related measure explicitly, except that no tardiness is allowed for remaining orders. However, it turns out that there exists an optimal schedule with no change in the sequence of remaining orders. Finally, we consider a composite objective which includes both a disruption-related measure and an efficiency-related measure: minimizing the sum of total completion time of new orders and total earliness of remaining orders, with no change in the sequence of remaining orders. With this objective, we can examine the trade-off between the disruption-related objective and the efficiency-related objective.
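As a concrete illustration, all three objective values can be computed from a job sequence once the artificial due dates are fixed. The sketch below is our own, not code from the paper, and the function and variable names are hypothetical:

```python
# Hypothetical sketch: evaluating the three rescheduling objectives for a
# given no-idle job sequence. Due dates of remaining jobs are their
# completion times in the original schedule; new jobs have none.

def objective_values(order, proc, remaining, due):
    """order: job ids in processing sequence; proc: job -> processing time;
    remaining: ids of remaining jobs; due: remaining job -> original
    completion time (its artificial due date)."""
    t = 0
    total_earliness = 0   # disruption-related measure (remaining jobs)
    total_c_new = 0       # efficiency-related measure (new jobs)
    for j in order:
        t += proc[j]      # no idle time: completion time = running sum
        if j in remaining:
            assert t <= due[j], "no-tardiness condition violated"
            total_earliness += due[j] - t
        else:
            total_c_new += t
    return total_earliness, total_c_new, total_earliness + total_c_new
```

For instance, with remaining jobs 1 and 2 (p1 = 2, p2 = 3, due dates 2 and 9) and a new job 3 with p3 = 4 placed in the gap, the sequence (1, 3, 2) yields zero earliness and a total completion time of 6 for the new job.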

For general discussions on rescheduling, see Hall and Potts (2004). For the practical significance of this type of work, see Clausen et al. (2001) and Kopanos et al. (2008). Several studies are closely related to this paper. Wu et al. (1993) consider a single machine rescheduling problem where job ready times and tails exist. A machine failure is considered as the disruption, and the objective is to minimize the makespan and the disruption from the original schedule. The disruption is measured by the objective of minimizing total earliness and tardiness, with the start times of jobs in the original schedule used as due dates. Unal et al. (1997) study a single machine with newly arrived jobs that have setup times that depend on their part types. One objective is to minimize the total weighted completion time and the other is to minimize the makespan of the new schedule. Hall and Potts (2004) also consider a single machine scheduling problem with newly arrived jobs as disruptions. Two separate objective functions are used to measure disruption: minimizing the positional difference between the original schedule and the new schedule, and minimizing total earliness and tardiness with the start times of jobs in the original schedule as due dates. They provide several intractability results and heuristics with performance evaluations. Qi et al. (2006) study single machine and two parallel machine rescheduling problems where unexpected machine breakdowns or changes in processing times occur. The efficiency-related objective is to minimize total completion time and the disruption-related objective is to minimize total earliness and tardiness with the start times of jobs in the original schedule as due dates. A composite objective function is used to address both types of objectives.

For rescheduling problems in a flowshop environment, see Katragjini et al. (2013). Also, for rescheduling on identical parallel machines, see Yin et al. (2016). Finally, for rescheduling in an unrelated parallel shop where unexpected machine failure occurs, see Ozlen and Azizoğlu (2011). Their efficiency related objective is to minimize total completion time and the disruption related objective is to minimize the number of jobs which are assigned to a different machine or to minimize the cost associated with the number of jobs that are assigned to a different machine.

Yang and Posner (2012) consider a single machine rescheduling problem where the objective is to minimize the maximum deviation where both tardiness and earliness are allowed. They establish the complexity of the problem and develop a few heuristics. They also find that their results can be easily extended to a special case of the problem with the objective of minimizing total earliness and tardiness. Finally, Yang (2013) considers a single machine rescheduling problem where the objective is to minimize the maximum lateness of the remaining jobs. The types of analysis are similar to those in Yang and Posner (2012).

While this work is similar to Yang and Posner (2012) and Yang (2013), this paper considers rescheduling with a variety of objectives: minimization of total completion time of newly arrived orders, minimization of total earliness of remaining orders, and, finally, minimization of the sum of total completion time of newly arrived orders and total earliness of remaining orders. Notice that the last objective is a composite of two different measures, and hence the trade-off between the disruption-related objective and the efficiency-related objective can be examined. Finally, this work is differentiated from other studies that consider order disruptions because we consider both newly arrived and canceled orders, whereas the other studies, except for Yang and Posner (2012) and Yang (2013), consider only newly arrived orders.

In the next section, we introduce some notation and describe the problem. In Section 3, we present assumptions and preliminary results. Then, we prove the complexity of various versions of the problem and present an optimal procedure for one of the problems in Section 4. Four heuristics are examined in Section 5. We first introduce a heuristic which always generates the maximum value of total earliness and is used as an upper bound for the other three heuristics. Then, we develop three heuristics which schedule jobs either in index order, in longest processing time order, or in shortest processing time order at the first available idle times created by cancelled jobs. In Section 6, we analytically investigate each of our heuristics, and we prove some worst case bounds in Section 7. Finally, we empirically evaluate the heuristics in Section 8.

2. NOTATION

The parameters of the problem are

• n0 : number of original jobs

• nc : number of cancelled jobs

• nr = n0 − nc : number of remaining jobs

• nn : number of new jobs

• n = nr + nn : number of jobs to be scheduled

• Nr : set of remaining jobs = {1, 2, …, nr}

• Nn : set of new jobs = {nr + 1, nr + 2, …, nr + nn}

• N = Nr ∪ Nn = {1, 2, …, n}

• pj : processing time of job j for j ∈ N

• σd : disrupted schedule.

The decision variables are

• σ : schedule of all jobs

• Cj(σ) : completion time of job j in schedule σ for j ∈ N

• Ej(σ) : earliness cost of remaining job j in schedule σ for j ∈ Nr

• z(σ ) : value of schedule σ

• σ* : an optimal schedule

• z* : value of optimal schedule σ*.

When no confusion exists, we replace Cj(σ) and Ej(σ) with Cj and Ej, respectively. The standard classification scheme for scheduling problems (Graham et al., 1979) is α1 | α2 | α3, where α1 describes the machine structure, α2 gives the job characteristics and restrictions, and α3 defines the objective. Following this scheme, we refer to the problem of minimizing total earliness of remaining jobs with no change in the sequence of remaining orders as 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej, where dsrpt implies that the problem considers disruption-related rescheduling, no-idle means that there exists no idle time in a schedule, no-trdy indicates that tardiness is not allowed for remaining jobs, and πR denotes that remaining jobs maintain their original sequence. Note that only remaining jobs have due dates, which are set to their completion times in the original schedule. We assume that the remaining and new jobs are available at the start of the planning process. Also, preemptions of jobs are not allowed.

In this paper, a schedule defines a job order on the machine. Since no idle time is allowed, the job order determines the start and completion times of the jobs on the machine. An original schedule is the schedule which is created and established before disruptions occur. As noted, the execution of the original schedule is disrupted because of cancelled and newly arrived jobs, and rescheduling is therefore required. A disrupted schedule is the original schedule with the canceled jobs removed. We assume that the disrupted schedule starts at the beginning of the time horizon.

For notational convenience, each block of idle time created by canceled jobs is called a gap. Let G be the set of gaps, G = {G1, G2, …, Gq}, where q is the number of gaps in the disrupted schedule σd. Then, σd can be described as a sequence of jobs and gaps. For instance, σd = (1, G1, 2, G2, 3, 4) means that there are four remaining jobs 1, 2, 3, and 4, one gap between jobs 1 and 2, and another gap between jobs 2 and 3. We also let gj be the duration of gap Gj for j ∈ {1, 2, …, q}. Note that gj > 0 for all j ∈ {1, 2, …, q}. Also, we let Ri be the set of remaining jobs processed between gaps i and i + 1 for i = 0, 1, …, q, where R0 is the set processed before gap 1 and Rq the set processed after gap q. We also let ri be the number of jobs in Ri for i = 0, 1, …, q. Then, r0 ≥ 0 and ri > 0 for i = 1, 2, …, q.
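To make the notation concrete, a disrupted schedule and its gaps can be derived mechanically from the original job order and the set of cancelled jobs. The following is an illustrative sketch of ours (names hypothetical), not code from the paper:

```python
# Hypothetical sketch: build sigma_d as a sequence of jobs and gaps from
# the original schedule. Consecutive cancelled jobs merge into one gap,
# and each remaining job keeps its original completion time as d_j.

def build_disrupted(original_order, proc, cancelled):
    sigma_d, due, t, gap = [], {}, 0, 0
    for j in original_order:
        t += proc[j]                        # completion time in the original schedule
        if j in cancelled:
            gap += proc[j]                  # extend the current idle block
        else:
            if gap > 0:
                sigma_d.append(('G', gap))  # close the gap before job j
                gap = 0
            sigma_d.append(j)
            due[j] = t                      # d_j = C_j(sigma_d)
    return sigma_d, due
```

With original order (1, 5, 2, 6, 3, 4) and jobs 5 and 6 cancelled, this reproduces the example σd = (1, G1, 2, G2, 3, 4) above.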

3. PRELIMINARY RESULTS AND ASSUMPTIONS

In this section, we establish some results that provide the basis for our analysis. First, we prove a lemma which establishes relationships between σd and σ*. For notational convenience, we let dj = Cj(σd) for j = 1, 2, …, nr, and dj is treated as the due date of job j in the rescheduling problem. For the problem with the objective of minimizing total completion time of new jobs, we have the following result.

Lemma 1. For problem 1 | dsrpt, no-idle, no-trdy | ∑Nn Cj, there exists an optimal schedule in which the order of the remaining jobs is the same as that in σd.

Proof. Suppose there exist adjacent remaining jobs i, j ∈ Nr such that job i precedes job j in σd but job j precedes job i in σ*. Notice that some new jobs in Nn may be scheduled between these two remaining jobs in σ*. In this case, we can simply switch the order of the remaining jobs i and j in σ* and schedule all new jobs between the two jobs right before job i. This does not create any tardiness for the remaining jobs. Also, ∑Nn Cj either decreases, if there are new jobs between the two jobs, or does not change, if there are none. Therefore, the result holds. □

Hence, even though problem 1 | dsrpt, no-idle, no-trdy | ∑Nn Cj does not explicitly focus on reducing the disruption-related impact, it preserves the order of the remaining jobs during rescheduling, in addition to ensuring no tardiness of the remaining jobs.

For the problems which include an objective of minimizing total earliness of the remaining jobs, we assume that the remaining jobs maintain their original sequence in a new schedule in order to honor the sequence fixed in the original schedule. Hence, for problems 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej and 1 | dsrpt, no-idle, no-trdy, πR | ∑Nn Cj, it is assumed that the remaining jobs maintain their original sequence during the rescheduling.

As a result of Lemma 1 and the assumption above, we only consider schedules where the order of the remaining jobs is the same as in σd.

Lemma 2. There exists an optimal schedule in which any two remaining jobs processed consecutively in σd are still processed consecutively in σ*.

Proof. Suppose that there exists an optimal schedule where some new jobs are scheduled between two remaining jobs processed consecutively in σd. Let r ∈ Nn be the first such job, and suppose job r is scheduled between remaining jobs i and j, for i, j ∈ Nr, in σ*. If an earliness cost is incurred on both jobs i and j, or on job i only, in σ*, then we can reduce the earliness cost by scheduling job r before job i. Notice that tardiness cannot occur because job j does not generate any tardiness and jobs i and j are processed consecutively in σd. This contradicts the optimality of the schedule. The same argument applies to all new jobs scheduled between two consecutively processed remaining jobs in σd. A similar argument proves the case where the objective is minimizing total completion time of new jobs. □

As a result of Lemma 2, when we consider an optimal schedule, we assume that no new jobs are scheduled between two remaining jobs processed consecutively in σd.

4. COMPLEXITY RESULTS

We begin by introducing an algorithm which generates an optimal schedule for 1 | dsrpt, no-idle, no-trdy | ∑Nn Cj. Algorithm A1 uses the SPT (shortest processing time first) rule to select the next new job to be scheduled at each gap, and considers the gaps in index order.

Algorithm A1

• 0. Set i = 0.

• 1. If G = ∅ or Nn = ∅, then go to Step 6.

• 2. Set r = |Nn|, j = nr, and i = i + 1. Reindex the new jobs in Nn such that pj ≤ pj+1 for j = nr + 1, nr + 2, …, nr + r − 1.

• 3. Set j = j + 1. If pj ≤ gi, then schedule job j at Gi, set gi = gi − pj, and Nn = Nn \ {j}. Otherwise, go to Step 5.

• 4. If job j +1 exists in Nn and gi > 0 , then go to Step 3.

• 5. Schedule the jobs in Ri without idle time, set G = G \ {Gi} and gi+1 = gi+1 + gi, and go to Step 1.

• 6. Eliminate remaining gaps by expediting jobs.

• 7. All unscheduled new jobs are scheduled at the end of the current schedule.

• 8. Calculate solution cost and stop.
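The steps above can be sketched in code as follows. This is our own condensed rendering (names hypothetical), which reports only which new jobs land in gaps and which are appended at the end of the schedule:

```python
# Condensed sketch of Algorithm A1: new jobs are considered in SPT order
# and packed into gaps in index order; leftover gap capacity rolls forward
# (Step 5, g_{i+1} += g_i), and jobs that fit nowhere are appended at the
# end of the schedule, still in SPT order (Step 7).

def algorithm_a1(gaps, new_jobs):
    """gaps: durations g_1..g_q; new_jobs: {job id: p_j}.
    Returns (new jobs scheduled inside gaps, new jobs appended at the end)."""
    pending = sorted(new_jobs, key=lambda j: new_jobs[j])  # SPT reindexing (Step 2)
    in_gaps, carry = [], 0
    for g in gaps:
        g += carry                        # unused capacity carried into this gap
        taken = 0
        for j in pending:
            if new_jobs[j] <= g:          # Step 3: job j fits at gap G_i
                in_gaps.append(j)
                g -= new_jobs[j]
                taken += 1
            else:
                break                     # SPT order: no later job fits either
        pending, carry = pending[taken:], g
    return in_gaps, pending               # pending is scheduled last (Step 7)
```

For example, with gap durations (5, 4) and new jobs of lengths 2, 3, 6, and 1, the two shortest jobs fill the first gap, the length-3 job fits in the second gap (helped by the carried-over slack), and the length-6 job goes to the end of the schedule.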

Algorithm A1 runs in O(n log n) time, and the resulting schedule does not contain inserted idle time. Let σA1 be the schedule generated by algorithm A1, and let zA1 be the cost of σA1. The following result shows that algorithm A1 generates an optimal schedule for problem 1 | dsrpt, no-idle, no-trdy | ∑Nn Cj.

Theorem 1. For problem 1 | dsrpt, no-idle, no-trdy | ∑Nn Cj, algorithm A1 generates an optimal schedule.

Proof. If min_{j∈Nn} pj > ∑_{i=1}^{q} gi, then no new job can be scheduled at any of the gaps without causing tardiness of the remaining jobs. Hence, all new jobs should be scheduled after the remaining jobs. Since the SPT rule is optimal for problem 1 | | ∑Cj, algorithm A1 generates an optimal schedule for our problem. Alternatively, if min_{j∈Nn} pj ≤ ∑_{i=1}^{q} gi, then there exists at least one new job which should be scheduled at a gap.

Suppose that σA1 is not optimal. Further, suppose that the new jobs in σA1 are indexed in their SPT order. Then in σ*, there exists a job j scheduled in the i-th position with j > i and i, j ∈ Nn. Since pj > pi, we can simply switch the positions of jobs i and j in σ* without causing any tardiness of the remaining jobs. Let σ′ be this modified schedule. Then, Cj(σ*) > Ci(σ′) and Ci(σ*) = Cj(σ′), a contradiction. Therefore, σA1 is an optimal schedule. □

The next result establishes the complexity of the problem. We show that problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej is unary NP-complete using a reduction from 3-Partition, a known unary NP-complete problem.

3-Partition (Garey and Johnson, 1979): Given 3ℓ elements with integer sizes a1, a2, …, a3ℓ, where ∑_{j=1}^{3ℓ} aj = ℓy and y/4 < aj < y/2 for j = 1, 2, …, 3ℓ, does there exist a partition A1, A2, …, Aℓ of the index set {1, 2, …, 3ℓ} such that |Ai| = 3 and ∑_{j∈Ai} aj = y for i = 1, 2, …, ℓ?

Theorem 2. Problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej is unary NP-complete.

Proof. Given an instance of 3-Partition, we construct the following instance of 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej:

nr = ℓ, nn = 3ℓ,
pj = 1, j = 1, 2, …, ℓ,
pj = a_{j−ℓ}, j = ℓ + 1, ℓ + 2, …, 4ℓ,
gi = y, i = 1, 2, …, ℓ,
σd = (G1, 1, G2, 2, …, Gℓ, ℓ),
z = 0.

The decision version of problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej is in NP since we can calculate ∑Nr Ej in polynomial time. We assume without loss of generality that ∑_{j=1}^{3ℓ} aj = ℓy. We prove that there exists a solution to 3-Partition if and only if there exists a solution to problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej with ∑Nr Ej = 0.

(⇒) Suppose that there exists a solution to 3-Partition. Then, there exist disjoint sets A1, A2, …, Aℓ such that ∑_{j∈Ai} aj = y for i = 1, 2, …, ℓ. We assume without loss of generality that the elements are indexed such that a_{3i−2} + a_{3i−1} + a_{3i} = y for i = 1, 2, …, ℓ.

Consider schedule σ for problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej where

σ = (ℓ+1, ℓ+2, ℓ+3, 1, ℓ+4, ℓ+5, ℓ+6, 2, …, 4ℓ−2, 4ℓ−1, 4ℓ, ℓ). (1)

Note that p_{ℓ+3i−2} + p_{ℓ+3i−1} + p_{ℓ+3i} = y for i = 1, 2, …, ℓ. Thus, jobs ℓ+3i−2, ℓ+3i−1, and ℓ+3i are scheduled at Gi without generating earliness cost on job i, for i = 1, 2, …, ℓ. Therefore, ∑Nr Ej = 0 for problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej.

(⇐) Suppose that there exists a solution to problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej with ∑Nr Ej = 0. Since ∑Nr Ej = 0, new jobs must be scheduled at each gap without creating any idle time. Further, the new jobs must not generate any tardiness. This implies that the total processing time of the new jobs scheduled at each gap must be exactly y. Observe that y/4 < pj < y/2 for j = ℓ+1, ℓ+2, …, 4ℓ. Also, there are ℓ gaps and 3ℓ new jobs. Hence, achieving ∑Nr Ej = 0 is possible only if a set of three new jobs is scheduled at each gap and the sum of the processing times of each set is exactly y. This is possible only if there exists a solution to 3-Partition. □

The next result establishes the complexity of a special case of problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej where there exists one gap in the disrupted schedule. We use a reduction from the following binary NP-complete problem.

Partition (Garey and Johnson, 1979): Given a multiset A = {a1, a2, …, a2n} of 2n integers and a positive integer b, can A be partitioned into two disjoint subsets A1 and A2 such that ∑_{aj∈A1} aj = ∑_{aj∈A2} aj = b?

Theorem 3. Even if there exists only one gap in σd and g1 < ∑_{j∈Nn} pj, the recognition version of problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej with a fixed number of gaps in the disrupted schedule is at least binary NP-complete.

Proof. Given an instance of Partition, we construct the following instance of problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej with one gap and g1 < ∑_{j∈Nn} pj. We assume that there exist one remaining job, job 1, and 2n new jobs, and let p1 = 1 and pj = aj for j ∈ Nn. We also set g1 to the size of a partition, g1 = b, and σd = (G1, 1).

Notice that the decision version of problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej with one gap and g1 < ∑_{j∈Nn} pj is in NP since we can calculate ∑Nr Ej in polynomial time. We prove that there exists a solution to Partition if and only if there exists a solution to the problem with one gap, g1 < ∑_{j∈Nn} pj, and ∑Nr Ej = 0.

First, having a partition of size b implies that there exists a set of jobs in Nn whose total processing time equals b, which in turn equals g1. We then schedule those jobs at G1 and the rest of the jobs in Nn at the end of σd. Since the new jobs scheduled at the gap do not create any earliness cost on job 1, the solution value of this schedule is 0.

In the other direction, having a schedule for the problem with ∑Nr Ej = 0 implies that there exists a set of jobs in Nn whose total processing time equals b. Since pj = aj for j ∈ Nn, there exists a partition.

Therefore, a partition exists if and only if there exists an optimal schedule for problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej with one gap and g1 < ∑_{j∈Nn} pj such that ∑Nr Ej = 0. □
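Binary NP-completeness leaves room for a pseudo-polynomial algorithm in the single-gap case: deciding whether ∑Nr Ej = 0 amounts to asking whether some subset of the new jobs' processing times sums exactly to g1. A standard subset-sum dynamic program (our illustration, not from the paper) decides this in O(nn · g1) time:

```python
# Hypothetical sketch: the single-gap feasibility question of Theorem 3 is
# a subset-sum instance. The set 'reachable' holds every subset sum of the
# processing times seen so far, capped at g1.

def gap_fillable_exactly(proc_times, g1):
    reachable = {0}
    for p in proc_times:
        reachable |= {s + p for s in reachable if s + p <= g1}
    return g1 in reachable
```

For instance, new jobs with processing times 3, 5, and 7 can fill a gap of duration 8 exactly (3 + 5), but not one of duration 11.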

The next result establishes that problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej + ∑Nn Cj is unary NP-complete using a reduction from the following unary NP-complete problem.

Numerical 3-Dimensional Matching (Garey and Johnson, 1979): We are given disjoint sets W, X, and Y, each containing ℓ elements, integer sizes a1, a2, …, a3ℓ, where aj ∈ W ∪ X ∪ Y, and a positive integer bound b. The problem is to partition W ∪ X ∪ Y into ℓ disjoint sets A1, A2, …, Aℓ such that each Ai contains exactly one element from each of W, X, and Y and ∑_{aj∈Ai} aj = b for i = 1, 2, …, ℓ.

Theorem 4.The decision version of problem$1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ N r E j + ∑ N n C j$is unary NP-complete.

Proof. For notational convenience, let SW = ∑_{aj∈W} aj, SX = ∑_{aj∈X} aj, and SY = ∑_{aj∈Y} aj, where W = {3ℓ+1, 3ℓ+2, …, 4ℓ}, X = {4ℓ+1, 4ℓ+2, …, 5ℓ}, and Y = {5ℓ+1, 5ℓ+2, …, 6ℓ}. Given an instance of Numerical 3-Dimensional Matching, we construct the following instance of 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej + ∑Nn Cj:

nr = 4ℓ, nn = 3ℓ,
pj = 10L, j = 1, 2, …, 4ℓ,
pj = 10L + a_{j−4ℓ}, j = 4ℓ+1, 4ℓ+2, …, 5ℓ,
pj = 11L + a_{j−4ℓ}, j = 5ℓ+1, 5ℓ+2, …, 6ℓ,
pj = 12L + a_{j−4ℓ}, j = 6ℓ+1, 6ℓ+2, …, 7ℓ,
gi = 33L + b, i = 1, 2, …, ℓ,
σd = (G1, 1, 2, 3, 4, G2, 5, 6, 7, 8, G3, …, Gℓ, 4ℓ−2, 4ℓ−1, 4ℓ),
z = 109.5ℓ²L − 45.5ℓL + 1.5ℓ²b − 0.5ℓb + 2SW + SY,

where L = 10 ∑_{j=1}^{3ℓ} aj.

The decision version of problem $1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ N r E j + ∑ N n C j$ is in NP because we can calculate $∑ N r E j + ∑ N n C j$ in polynomial time.

We prove that there exists a solution to Numerical 3-Dimensional Matching if and only if there exists a solution to 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej + ∑Nn Cj with ∑Nr Ej + ∑Nn Cj ≤ z.

(⇒) We assume without loss of generality that if there exists a solution to Numerical 3-Dimensional Matching, then the elements are indexed such that $a j + a ℓ + j + a 2 ℓ + j = b$ for $j = 1 , 2 , … , ℓ .$ Consider the schedule where

$σ = ( 4 ℓ + 1 , 5 ℓ + 1 , 6 ℓ + 1 , 1 , 2 , 3 , 4 , 4 ℓ + 2 , 5 ℓ + 2 , 6 ℓ + 2 , 5 , 6 , 7 , 8 , … , 5 ℓ , 6 ℓ , 7 ℓ , 4 ℓ − 2 , 4 ℓ − 1 , 4 ℓ )$
(2)

and there exists no inserted idle time. Since there exists no idle time in σ , we have

∑_{j=4ℓ+1}^{7ℓ} pj = ∑_{j=1}^{ℓ} (p_{4ℓ+j} + p_{5ℓ+j} + p_{6ℓ+j}) = ∑_{j=1}^{ℓ} (10L + aj + 11L + a_{ℓ+j} + 12L + a_{2ℓ+j}) = ℓ(33L + b).

Note that gi = 33L + b for all i = 1, 2, …, ℓ. Hence, ∑Nr Ej = 0. Then, we calculate the sum of the total completion times as follows.

$∑ j = 3 ℓ + 1 6 ℓ C j = ∑ j = 3 ℓ + 1 4 ℓ C j + ∑ j = 4 ℓ + 1 5 ℓ C j + ∑ j = 5 ℓ + 1 6 ℓ C j = ∑ j = 1 ℓ { ( j − 1 ) 33 L + ( j − 1 ) 40 L + ( j − 1 ) b + 10 L + a j } + ∑ j = 1 ℓ { ( j − 1 ) 33 L + ( j − 1 ) 40 L + ( j − 1 ) b + 21 L + a j + a ℓ + j } + ∑ j = 1 ℓ { ( j − 1 ) 33 L + ( j − 1 ) 40 L + ( j − 1 ) b + 33 L + a j + a ℓ + j + a 2 ℓ + j } = ∑ j = 1 ℓ { 3 ( j − 1 ) ( 73 L + b ) + 64 L + 2 a j + a ℓ + j + b } = 0.5 ( 219 L + 3 b ) ( l 2 − l ) + 64 L l + 2 S W + S Y + l b = 109.5 ℓ 2 L − 45.5 ℓ L + 1.5 ℓ 2 b − 0.5 ℓ b + 2 S W + S Y = z .$
(3)

(⇐) Suppose that there exists a solution to problem 1 | dsrpt, no-idle, no-trdy, πR | ∑Nr Ej + ∑Nn Cj where ∑Nr Ej + ∑Nn Cj ≤ 109.5ℓ²L − 45.5ℓL + 1.5ℓ²b − 0.5ℓb + 2SW + SY. Let I1 = {1, 2, …, 3ℓ}, I2 = {3ℓ+1, 3ℓ+2, …, 4ℓ}, I3 = {4ℓ+1, 4ℓ+2, …, 5ℓ}, and I4 = {5ℓ+1, 5ℓ+2, …, 6ℓ}. Since all jobs in I1 are identical, we assume that in an optimal schedule, the jobs in I1 are processed in their index order.

We first consider the job order in an optimal solution. Since gi = 33L + b for i = 1, 2, …, ℓ, no more than three new jobs can be scheduled at one gap. Specifically, we will show that in an optimal schedule, one job from each of W, X, and Y is scheduled at each gap.

Notice that the jobs are scheduled in σ from (2) such that no earliness cost is incurred by the remaining jobs and all new jobs are scheduled at one of the gaps. An optimal schedule must not be worse than σ, and since ∑_{i=1}^{ℓ} gi = ℓ(33L + b), it can be seen that in an optimal schedule, all new jobs should be scheduled at one of the gaps.

Since gi = 33L + b for i = 1, 2, …, ℓ, five different combinations of new jobs can be scheduled at the first gap in an optimal schedule: three jobs from W; two jobs from W and one from X; one job from W and two from X; one job from each of W, X, and Y; and three jobs from X. For notational convenience, let js, jt, and jr be the first three new jobs scheduled at g1. Without loss of generality, it can be assumed that a smaller job is scheduled earlier than a bigger job. If three jobs from W are scheduled at g1, then the total completion time of the three jobs is 60L + 3p_{js} + 2p_{jt} and job jr finishes at 30L + p_{js} + p_{jt} + p_{jr}. Then, ∑_{j=1}^{3} Ej = 9L + 3(b − p_{js} − p_{jt} − p_{jr}). However, if one job from each of W, X, and Y is scheduled at g1, then the total completion time of the three jobs is 64L + 3p_{js} + 2p_{jt} and job jr finishes at 33L + p_{js} + p_{jt} + p_{jr}. Then, ∑_{j=1}^{3} Ej = 3(b − p_{js} − p_{jt} − p_{jr}). Hence, in terms of total cost, scheduling one job from each of W, X, and Y at g1 is much better than the first case. Similarly, if two jobs from W and one job from X are scheduled at g1, then the total completion time of the three jobs is 61L + 3p_{js} + 2p_{jt} + p_{jr} and job jr finishes at 31L + p_{js} + p_{jt} + p_{jr}. Then, ∑_{j=1}^{3} Ej = 6L + 3(b − p_{js} − p_{jt} − p_{jr}). Hence, in terms of total cost, scheduling one job from each of W, X, and Y at g1 is much better than scheduling two jobs from W and one job from X. Similar results hold for the remaining two cases.

Now, in an optimal schedule, let the jobs in I2, I3, and I4 be processed in the order v1, v2, …, v3ℓ. For notational convenience, let v1 = 3ℓ+1, v2 = 4ℓ+1, v3 = 5ℓ+1, …, v3ℓ−2 = 4ℓ, v3ℓ−1 = 5ℓ, v3ℓ = 6ℓ. First, we describe an optimal schedule. Then, we show that a schedule does not have a solution value ≤ z unless it is optimal.

First, consider the jobs scheduled in G1 and R1. Let dj = b − (aj + a_{ℓ+j} + a_{2ℓ+j}) for j = 1, 2, …, ℓ. Then,

$∑ j ∈ G 1 C j + ∑ j ∈ R 1 E j = 10 L + 3 a 1 + 11 L + 2 a ℓ + 1 + 12 L + a 2 ℓ + 1 + 4 d 1 = 64 L + 2 a 1 + a ℓ + 1 + b − d 1 + 4 d 1 = 64 L + 2 a 1 + a ℓ + 1 + b + 3 d 1 .$

Note that d1 ≥ 0 because there exists no tardiness for the remaining jobs. We can extend this calculation to the general case. Consider jobs scheduled in Gi and Ri for $i ∈ \{ 2, 3, …, \ell \}$. Then,

$∑ j ∈ G i C j + ∑ j ∈ R i E j = { 3 ( i − 1 ) ( 33 L + 40 L ) + 3 a i + 2 a ℓ + i + a 2 ℓ + i − 3 ∑ j = 1 i − 1 d j } + 4 ∑ j = 1 i d j = 3 ( i − 1 ) 73 L + 2 a i + a ℓ + i + b − d i − 3 ∑ j = 1 i − 1 d j + 4 ∑ j = 1 i d j = 3 ( i − 1 ) 73 L + 2 a i + a ℓ + i + b + ∑ j = 1 i − 1 d j + 3 d i .$

Then, from (3) the total cost for all jobs can be calculated as follows.

$∑ j = 4 ℓ + 1 7 ℓ C j + ∑ j = 1 4 ℓ E j = ∑ j = 1 ℓ { 3 ( j − 1 ) 73 L + 2 a j + a ℓ + j + b + ∑ i = 1 j − 1 d i + 3 d j } = ∑ j = 1 ℓ { 3 ( j − 1 ) 73 L + 2 a j + a ℓ + j + b } + ∑ j = 2 ℓ ∑ i = 1 j − 1 d i + 3 ∑ j = 1 ℓ d j = 109.5 ℓ 2 L − 45.5 ℓ L + 1.5 ℓ 2 b − 0.5 ℓ b + 2 S W + S Y + ∑ j = 2 ℓ ∑ i = 1 j − 1 d i + 3 ∑ j = 1 ℓ d j = 109.5 ℓ 2 L − 45.5 ℓ L + 1.5 ℓ 2 b − 0.5 ℓ b + 2 S W + S Y + ∑ j = 2 ℓ ∑ i = 1 j − 1 d i .$
(4)

Notice that $\sum_{j=1}^{\ell} d_j = 0$ as long as all new jobs are scheduled at one of the gaps without any tardiness cost. This is possible since $\sum_{i=1}^{\ell} g_i = \sum_{j \in N_n} p_j$. Also, $\sum_{i=1}^{j−1} d_i \ge 0$ for all $j \in \{2, 3, …, \ell\}$ since there exists no tardiness for all the remaining jobs. From (4), the solution value for any feasible schedule is at least $109.5 \ell^2 L − 45.5 \ell L + 1.5 \ell^2 b − 0.5 \ell b + 2 S_W + S_Y$. Moreover, the solution value is minimized if $d_i = 0$ for all $i = 1, 2, …, \ell$. This is only possible if $a_j + a_{\ell+j} + a_{2\ell+j} = b$ for $j = 1, 2, …, \ell$, that is, there exists a solution to Numerical 3-Dimensional Matching. □

5. HEURISTICS

The heuristics in this section are based on simple and intuitive heuristics for the bin-packing problem, which resembles our problem in that items (jobs) should be packed (scheduled) into multiple bins (gaps) as fully as possible. Similar to the first-fit (FF) and first-fit decreasing (FFD) heuristics in Johnson et al. (1974), we develop heuristics that schedule new jobs at gaps with different scheduling rules: the LS (list scheduling) rule, where jobs are scheduled in their index order, the LPT (longest processing time job first) rule, and the SPT (shortest processing time job first) rule. We now formally describe each of the heuristics.

We begin by describing Heuristic H0, which generates a schedule with maximum earliness cost for each remaining job. It simply processes all remaining jobs without idle time before any new job. Then, all new jobs are scheduled at the end of σd . Even though this does not seem to be a good heuristic, we suspect that many practitioners actually use this type of heuristic for their rescheduling situations. In our case, it also provides a good upper bound for the analysis of the other heuristics.

Heuristic H0

• 1. Process all remaining jobs without idle time before any new job.

• 2. All new jobs are scheduled at the end of the disrupted schedule.
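The earliness cost that H0 produces can be computed directly from the gap structure of the disrupted schedule: closing every gap expedites each job in Ri by the total capacity of the gaps that precede it. A minimal sketch follows; the `(gap, remaining jobs)` segment representation and the function name are our own illustration, not the paper's notation.

```python
def h0_earliness(segments):
    """Total earliness of the remaining jobs under H0.

    segments: list of (g_i, R_i) pairs, where g_i is the capacity of gap
    G_i and R_i is the list of processing times of the remaining jobs that
    follow it in the disrupted schedule.  Closing every gap moves each job
    in R_i earlier by g_1 + ... + g_i.
    """
    total, closed = 0, 0
    for gap, remaining in segments:
        closed += gap                      # capacity of all gaps closed so far
        total += closed * len(remaining)   # each job in R_i is early by `closed`
    return total
```

For instance, for a disrupted schedule σd = (G1, 1, G2, 2) with g1 = g2 = 1 and unit remaining jobs, `h0_earliness([(1, [1]), (1, [1])])` returns 3, the zH0 value that appears in the proof of Theorem 6 below.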

The following heuristic, H1, uses the LS rule to select a new job to be scheduled at each gap, and uses the index order to select the next gap to be considered. After new jobs are scheduled at each gap, they are reordered into an SPT order inside each gap. Also, all new jobs that are scheduled at the end of the remaining jobs are reordered into an SPT order.

Heuristic H1

• 0. Set i = 0.

• 1. If G = ∅ or Nn = ∅, then go to Step 5.

• 2. Set $r = | N n | , j = n r ,$ and i = i +1.

Reindex new jobs in Nn from nr + 1 to nr + r without changing the order of jobs.

• 3. Set j = j +1.

If pj ≤ gi, then schedule job j at Gi, set gi = gipj, and Nn = Nn\{j}.

• 4. If job j + 1 exists in Nn and gi > 0, then go to Step 3. Otherwise, schedule jobs in Ri without idle time, G = G\{Gi}, gi+1 = gi+1 + gi, and go to Step 1.

• 5. Eliminate remaining gaps by expediting jobs.

• 6. All scheduled new jobs at each gap are reordered so that they are in an SPT order inside each gap.

• 7. All unscheduled new jobs are scheduled at the end of the current schedule such that they are in an SPT order.

• 8. Calculate solution cost and stop.
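The steps above can be sketched as follows. The same routine also yields H2 and H3, introduced next, by swapping the job-selection rule. The segment representation and all names are illustrative assumptions (processing times are assumed positive), not the paper's notation; remaining jobs are carried along only to mirror the schedule structure.

```python
def fill_gaps(segments, new_jobs, rule="LS"):
    """Schedule new jobs (given as processing times) into gaps, as in H1.

    segments: list of (g_i, R_i) pairs from the disrupted schedule.
    rule: "LS" keeps index order (H1); "LPT"/"SPT" give H2/H3.
    Returns (jobs placed at each gap, jobs appended at the end), with SPT
    reordering inside each gap and at the end (Steps 6-7).
    """
    if rule == "LPT":
        new_jobs = sorted(new_jobs, reverse=True)
    elif rule == "SPT":
        new_jobs = sorted(new_jobs)
    unscheduled = list(new_jobs)
    placed, carry = [], 0                  # carry models g_{i+1} = g_{i+1} + g_i
    for gap, _remaining in segments:
        gap += carry
        in_gap, rest = [], []
        for p in unscheduled:              # scan jobs in the rule's order (Step 3)
            if p <= gap:
                gap -= p
                in_gap.append(p)
            else:
                rest.append(p)
        unscheduled = rest
        placed.append(sorted(in_gap))      # SPT inside the gap (Step 6)
        carry = gap                        # unused capacity moves to the next gap
    return placed, sorted(unscheduled)     # leftovers at the end, in SPT order
```

On the instance used later in the proof of Theorem 5 (g1 = 1, p2 = ε, p3 = 1), the LS rule places job 2 in the gap, while the LPT rule places job 3, matching the schedules σH1 and σH2 discussed in Section 7.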

Heuristic H1 runs in O(n log n) time, and the resulting schedule does not contain inserted idle time. Let σH1 be the schedule of new jobs, which is generated by algorithm H1 and zH1 be the cost of schedule σH1.

Next, we introduce Heuristic H2, which is identical to Heuristic H1 except that it uses the LPT rule instead of the LS rule to select a new job to be scheduled first. The LPT rule is used to improve the heuristic in terms of minimizing $∑_{N_r} E_j$. However, as in H1, scheduled new jobs are reordered into an SPT order inside each gap. Also, all new jobs that are scheduled at the end of the remaining jobs are reordered into an SPT order.

Heuristic H2

• 1. Follow H1, but use the LPT rule instead of the LS rule to schedule new jobs at each gap.

Finally, we introduce Heuristic H3, which is identical to H1 except that it uses the SPT rule instead of the LS rule to select a new job to be scheduled first. The SPT rule is used to improve the performance of the heuristic in terms of minimizing $∑_{N_n} C_j$. Also, H3 is basically identical to Algorithm A1, except that A1 does not calculate the earliness cost.

Heuristic H3

• 1. Follow H1, but use the SPT rule instead of the LS rule to schedule new jobs at each gap.

Heuristics H2 and H3 run in O(n log n) time, and the resulting schedules do not contain inserted idle time. Let σH2 and σH3 be the schedules of jobs generated by algorithms H2 and H3, respectively. Also, let zH2 and zH3 be the costs of schedules σH2 and σH3, respectively.

6. EVALUATION MEASURES FOR HEURISTICS

In this section, we establish the evaluation measures for the problems with different objectives. For the problem with the objective of minimizing $∑_{j ∈ N_r} E_j + ∑_{j ∈ N_n} C_j$, we use the relative error measure $z_r = z_H / z^*$ to analyze the heuristics, where $z_H$ is the solution value of a heuristic. However, for the problem with the objective of minimizing $∑_{j ∈ N_r} E_j$, it is possible that $z^* = 0$. Hence, it is unsuitable to use the relative error measure $z_H / z^*$ or $(z_H − z^*) / z^*$ to analyze the heuristics. As an alternative, we consider the z-approximation ratio of Hassin and Khuller (2001). An algorithm is an α z-approximation if it runs in polynomial time and produces a solution whose distance from the optimal one is at most α times the distance between the optimal solution and the worst possible solution. Then, the goal is to find a schedule σ with the property that

$z ( σ ) − z * z ( σ w ) − z * ≤ α$
(5)

where 0 ≤ α ≤ 1 and σw is the worst schedule, which generates a maximum value of earliness.

However, the regular z-approximation ratio described above may have z(σw) = z*. For instance, suppose σd = (G1, 1) with g1 = 1 and p1 = 1 under the objective of minimizing $∑_{j ∈ N_r} E_j$. Also suppose that there exists one new job, job 2, with p2 = 2. Schedule (1, 2) is then both an optimal and a worst case schedule, so the denominator of the z-approximation ratio becomes zero. This is a technical issue, which can be overcome by specifying a value for the denominator when this situation occurs. Before we define a new z-approximation ratio, we begin by establishing the heuristic which generates the worst case solution value.

The following lemma establishes that heuristic H0 generates the worst case solution value if the remaining job sequence is fixed as in σd .

Lemma 3. For problem $1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ j ∈ N r E j ,$ heuristic H0 always generates the worst case solution value.

Proof. Heuristic H0 schedules each job in Ri for i = 1, 2, …, q at the earliest possible time. □

As a result of Lemma 3, we replace z(σw) by zH0 in (5). Any heuristic should be no worse than heuristic H0 for problem $1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ j ∈ N r E j .$ Hence, we have the following formal definition for the modified z-approximation ratio.

$z_r' = \begin{cases} 0 & \text{if } z(\sigma_{H0}) = z^* = z(\sigma) \\ \dfrac{z(\sigma) - z^*}{z(\sigma_{H0}) - z^*} & \text{if } z(\sigma_{H0}) > z^* \end{cases}$

where σH0 is a schedule generated by H0 and σ is a schedule generated by the heuristic being evaluated. Notice that zr′ ≤ 1. Checking whether z(σH0) = z* can be done easily as follows. If $∑_{i=1}^{q} g_i \ge \min_{j ∈ N_n} \{ p_j \}$, then $z(σ_{H0}) > z^*$. Otherwise, $z(σ_{H0}) = z^*$.
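The definition above can be computed directly once the three solution values are known. A small sketch follows; the function and argument names are our own.

```python
def modified_z_ratio(z_heur, z_opt, z_h0):
    """Modified z-approximation z_r' of a heuristic solution value z_heur.

    z_opt is z*, z_h0 is z(sigma_H0).  When z(sigma_H0) = z*, any schedule
    value between them also attains z*, so the ratio is defined as 0.
    """
    if z_h0 == z_opt:
        return 0.0                         # degenerate case: denominator is zero
    return (z_heur - z_opt) / (z_h0 - z_opt)
```

For the instance used in the proof of Theorem 6 below with ε = 0.25, `modified_z_ratio(2.5, 1, 3)` returns 0.75 = 1 − ε.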

7. WORST CASE BOUND ANALYSIS

In this section, we theoretically analyze the heuristics by using the worst case bound analysis. The results show that in terms of the worst case analysis, the performance of all the heuristics is marginal. We present the results for each of the problems.

7.1. Problem $1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ j ∈ N r E j$

For problem $1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ j ∈ N r E j$ heuristic H0 always generates the worst case solution value (Lemma 3). Hence, zr′ =1 for heuristic H0.

The following result establishes the tight worst case bound on the modified z-approximation for H1.

Theorem 5. For problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j$, $(z_{H1} − z^*)/(z_{H0} − z^*) \le 1 − ε$ for some ε > 0, and this bound is tight.

Proof. If an optimal schedule has at least one job scheduled at any gap, then H1 also schedules at least one job at some gap. Hence, zr′ cannot be equal to 1 and should be less than 1. Thus, we establish the result by presenting an instance where zr′ can be as close to 1 as possible by controlling ε for some ε > 0.

Consider an instance of problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j$ where Nr = {1}, Nn = {2, 3}, and σd = (G1, 1). Also, g1 = 1, p1 = 1, p2 = ε, and p3 = 1 for some small ε > 0. Since H1 uses the LS rule to schedule a job, the first available job, job 2, is scheduled at G1 in σH1. Hence, σH1 = (2, 1, 3) with solution value zH1 = 1 − ε. However, the optimal schedule is σ* = (3, 1, 2) with solution value z* = 0. For this instance, σH0 = (1, 2, 3) with solution value zH0 = 1. Therefore, zr′ = (1 − ε)/1 = 1 − ε ≈ 1. Notice that zr′ can be as close to 1 as we want by reducing ε. Hence, this is the worst case example for H1. Also, since there exists such an instance of the problem, the bound is tight. □

A similar result can be established for heuristic H2.

Theorem 6. For problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j$, $(z_{H2} − z^*)/(z_{H0} − z^*) \le 1 − ε$ for some ε > 0, and this bound is tight.

Proof. If an optimal schedule has at least one job scheduled at any gap, then heuristic H2 also schedules at least one job at some gap. Hence, zr′ cannot be equal to 1 and should be less than 1. Thus, we establish the result by presenting an instance where zr′ can be as close to 1 as possible by controlling ε for some ε > 0.

Consider an instance of problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j$ where Nr = {1, 2}, Nn = {3, 4}, and $σ_d = (G_1, 1, G_2, 2)$. Also, g1 = g2 = 1, p1 = p2 = 1, p3 = ε, and p4 = 2 for some small ε > 0. Since p4 > g1, job 3 is scheduled first in σH2. Then, job 4 is considered for G2, but scheduling job 4 there would generate a schedule with tardiness. Hence, σH2 = (3, 1, 2, 4) with solution value $z_{H2} = 1 − ε + 2 − ε = 3 − 2ε$. However, the optimal schedule is σ* = (1, 4, 2, 3) with solution value z* = 1. For this instance, σH0 = (1, 2, 3, 4) with solution value zH0 = 1 + 2 = 3, and $z_r' = (3 − 2ε − 1)/(3 − 1) = (2 − 2ε)/2 = 1 − ε ≈ 1$. Notice that zr′ can be as close to 1 as we want by reducing ε. Hence, it is the worst case example of H2. Also, since there exists such an instance of the problem, the bound is tight. □

Finally, the same result can be established for H3.

Corollary 1. For problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j$, $(z_{H3} − z^*)/(z_{H0} − z^*) \le 1 − ε$ for some ε > 0, and this bound is tight.

Proof. Since H3 uses the SPT rule to schedule new jobs, the same instance in the proof of Theorem 5 can be used. □

7.2. Problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j + \sum_{j \in N_n} C_j$

The following result establishes that the worst case bound is not bounded from above for problem $1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ j ∈ N r E j + ∑ j ∈ N n C j$ for heuristic H0.

Theorem 7. For problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j + \sum_{j \in N_n} C_j$, $z_{H0}/z^*$ is not bounded from above.

Proof. Consider an instance where $N_r = \{1\}$, $N_n = \{2, 3, …, \ell + 2\}$, and $σ_d = (G_1, 1)$. Also, $g_1 = 0$, $p_1 = 1$, $p_2 = \ell^3$, and $p_j = 1$ for $j = 3, 4, …, \ell + 2$. Since H0 does not use any specific rule to schedule new jobs, we can assume without loss of generality that all new jobs are scheduled in their index order. Also, the new jobs are scheduled after job 1 with E1 = 0. However, an optimal solution schedules the new jobs in the SPT order after job 1. Then,

$z H 0 z * = ∑ j ∈ N r E j ( σ H 0 ) + ∑ j ∈ N n C j ( σ H 0 ) ∑ j ∈ N r E j ( σ * ) + ∑ j ∈ N n C j ( σ * ) = ( ℓ + 1 ) + ( ℓ + 1 ) p 2 + 0.5 ℓ ( ℓ + 1 ) ( ℓ + 1 ) + 0.5 ℓ ( ℓ + 1 ) + ℓ + p 2 = 2 ( ℓ + 1 ) p 2 + ℓ 2 + 2 ℓ + 1 2 p 2 + ℓ 2 + 4 ℓ + 1 = 2 ℓ 4 + 2 ℓ 3 + ℓ 2 + 2 ℓ + 1 2 ℓ 3 + ℓ 2 + 4 ℓ + 1 .$
(6)

Observe that equation (6) → ∞ as ℓ → ∞. □

A similar result can be established for H1 as follows.

Theorem 8. For problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j + \sum_{j \in N_n} C_j$, $z_{H1}/z^*$ is not bounded from above.

Proof. Consider an instance where $N_r = \{1\}$, $N_n = \{2, 3, …, \ell + 2\}$, and $σ_d = (G_1, 1)$. Also, $g_1 = \ell^3 + \ell$, $p_1 = 1$, $p_2 = \ell^3$, and $p_j = 1$ for $j = 3, 4, …, \ell + 2$. Since H1 uses the LS rule to schedule new jobs, all new jobs are scheduled in their index order. Also, the new jobs can be scheduled at G1 without generating any earliness for the remaining job because $g_1 = \sum_{j \in N_n} p_j$. However, an optimal solution schedules the new jobs in the SPT order. Then,

$z H 1 z * = ∑ j ∈ N r E j ( σ H 1 ) + ∑ j ∈ N n C j ( σ H 1 ) ∑ j ∈ N r E j ( σ * ) + ∑ j ∈ N n C j ( σ * ) = ( ℓ + 1 ) p 2 + 0.5 ℓ ( ℓ + 1 ) 0.5 ℓ ( ℓ + 1 ) + ℓ + p 2 = 2 ( ℓ + 1 ) p 2 + ℓ 2 + ℓ 2 p 2 + ℓ 2 + 3 ℓ = 2 ℓ 4 + 2 ℓ 3 + ℓ 2 + ℓ 2 ℓ 3 + ℓ 2 + 3 ℓ .$
(7)

Observe that equation (7) → ∞ as ℓ → ∞. □

Corollary 2.For problem $1 | d s r p t , n o − i d l e , n o − t r d y , π R | ∑ j ∈ N r E j + ∑ j ∈ N n C j , z H 2 / z *$is not bounded from above.

Proof. Since heuristic H2 uses the LPT rule to schedule new jobs, the same instance used in the proof of Theorem 8 can be used. □

Finally, the same result can be established for H3.

Theorem 9. For problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy, \pi_R | \sum_{j \in N_r} E_j + \sum_{j \in N_n} C_j$, $z_{H3}/z^*$ is not bounded from above.

Proof. Consider an instance where $N_r = \{1, 2, …, \ell\}$, $N_n = \{\ell + 1, \ell + 2\}$, and $σ_d = (G_1, 1, 2, …, \ell)$. Also, $g_1 = \ell$, $p_j = 1$ for $j = 1, 2, …, \ell$, $p_{\ell+1} = ε$, and $p_{\ell+2} = \ell$. Since H3 uses the SPT rule to schedule new jobs, jobs ℓ + 1 and ℓ + 2 are scheduled in their index order. In σH3, only job ℓ + 1 can be scheduled at gap 1, and job ℓ + 2 is scheduled after the last remaining job, which is job ℓ. As a result, each remaining job generates an earliness cost of ℓ − ε. However, an optimal solution schedules job ℓ + 2 at gap 1 and hence does not generate any earliness cost. Then,

$\frac{z_{H3}}{z^*} = \frac{\sum_{j \in N_r} E_j (\sigma_{H3}) + \sum_{j \in N_n} C_j (\sigma_{H3})}{\sum_{j \in N_r} E_j (\sigma^*) + \sum_{j \in N_n} C_j (\sigma^*)} = \frac{(\ell − \varepsilon)\ell + p_{\ell+1} + (p_{\ell+1} + \ell + p_{\ell+2})}{p_{\ell+2} + (p_{\ell+2} + \ell + \varepsilon)} = \frac{(\ell − \varepsilon)\ell + \varepsilon + \varepsilon + \ell + \ell}{\ell + \ell + \ell + \varepsilon} = \frac{\ell^2 + (2 − \varepsilon)\ell + 2\varepsilon}{3\ell + \varepsilon}.$
(8)

Observe that equation (8) → ∞ as ℓ → ∞. □
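The unboundedness claims can also be checked numerically by evaluating the closed-form ratios in (7) and (8) for growing ℓ. This is a quick sanity check, not part of the proofs; ε is fixed at a small value, and the function names are ours.

```python
def ratio_7(ell):
    # Closed form of equation (7): z_H1 / z* for the Theorem 8 instance.
    return (2*ell**4 + 2*ell**3 + ell**2 + ell) / (2*ell**3 + ell**2 + 3*ell)

def ratio_8(ell, eps=1e-3):
    # Closed form of equation (8): z_H3 / z* for the Theorem 9 instance.
    return (ell**2 + (2 - eps)*ell + 2*eps) / (3*ell + eps)

for ell in (10, 100, 1000):
    print(ell, round(ratio_7(ell), 1), round(ratio_8(ell), 1))
```

Both ratios grow roughly linearly in ℓ (ratio (7) like ℓ, ratio (8) like ℓ/3), so neither is bounded from above.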

8. COMPUTATIONAL STUDY

In this section, heuristics H1, H2, and H3 are empirically evaluated. The heuristics are tested on the problem with one of the two objectives, minimizing $∑ N r E j$ and minimizing $∑ N r E j + ∑ N n C j .$

Because finding z* is computationally intensive, we use the lower bound zL = 0 for the problems with the objective of minimizing $∑_{N_r} E_j$. Hence, the performance indicators for H1, H2, and H3 are the upper bounds on the modified z-approximation, $z_{H1}/z_{H0}$, $z_{H2}/z_{H0}$, and $z_{H3}/z_{H0}$, respectively.

Similarly, for the objective of minimizing $∑_{N_r} E_j + ∑_{N_n} C_j$, we use the lower bound zA1 instead of z*. Recall that zA1 is an optimal solution value for problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy | ∑_{N_n} C_j$. Hence, by assuming $∑_{N_r} E_j = 0$, zA1 is a lower bound on the minimum possible value for problem $1 | dsrpt, no\text{-}idle, no\text{-}trdy | ∑_{N_r} E_j + ∑_{N_n} C_j$. Then, the performance indicators for H1, H2, and H3 are the upper bounds on $z_r = z_H/z^*$, namely $z_{H1}/z_{A1}$, $z_{H2}/z_{A1}$, and $z_{H3}/z_{A1}$, respectively.

We compare the performances of H1, H2, and H3 under various conditions. Specifically, we observe the impact of different factors such as no, nc, and nn. For a given set of test problems, no, nc, and nn are fixed. We generate 810 test problems under 27 conditions for the problem.

To test the effects of varying the number of original jobs no, three different values of no are considered: 10, 50, and 100. To determine the effect of varying the number of canceled jobs nc, three different values of nc are considered for each of the three no values: 0.1no, 0.2no, and 0.3no. To observe the impact of different numbers of new jobs, we consider three different values of nn for each of the three nc values: nc, 2nc, and 3nc. It is assumed that pj ~ DU[1, 99] for j ∈ {1, 2, …, no + nn}, whose standard deviation is approximately 28.6, where DU[l, u] is a discrete random variable uniformly distributed between l and u.
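The design above can be reproduced with a small generator. The names and representation are our own, and the cancelled jobs are drawn uniformly at random since the text does not specify how cancellations are chosen.

```python
import random

def generate_instance(n_o, cancel_frac=0.2, new_mult=2, seed=None):
    """One test instance: p_j ~ DU[1, 99] for original and new jobs,
    n_c = cancel_frac * n_o cancelled jobs, n_n = new_mult * n_c new jobs."""
    rng = random.Random(seed)
    p_original = [rng.randint(1, 99) for _ in range(n_o)]
    n_c = int(cancel_frac * n_o)
    cancelled = sorted(rng.sample(range(n_o), n_c))  # indices of cancelled jobs
    p_new = [rng.randint(1, 99) for _ in range(new_mult * n_c)]
    return p_original, cancelled, p_new
```

For example, `generate_instance(100, 0.2, 2)` yields one instance of the cell with no = 100, nc = 20, and nn = 40.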

To avoid excessive testing, nn = 2nc is used as the default parameter. For instance, when we test the effects of varying the number of new jobs with no = 100 and nc = 20, only the instances with nn = 40 are used. The mean zr and mean zr′ are the arithmetic means of the relative error ratio and the modified z-approximation, respectively. Each mean is calculated over 30 instances of each problem type. The program is implemented in the C language and run on a PC with a 3.20 GHz Core i5-3470 processor and 4 GB RAM.

In Table 1, we present the mean modified z-approximation for H1, H2, and H3, where zL = 0 is used as a lower bound for z*. The results in the table are presented side by side for comparison of the heuristics. The results indicate that the performances of the heuristics are much better than that of H0, which represents the schedule where all new jobs are scheduled at the end of a disrupted schedule. The mean modified z-approximation of all the heuristics becomes smaller as no increases, and with the same no value, it becomes smaller as nc increases. Hence, we may conclude that the performance of H1, H2, and H3 improves as no increases, and with the same no value, the heuristics perform better as nc (and hence nn) increases.

The mean modified z-approximation of H2 is clearly the smallest among the three heuristics for each cell where no, nc, and nn are fixed. Consequently, H2 performs better than H1 and H3. As expected, the LPT rule performs better than the LS and SPT rules in filling the gaps with newly arrived jobs. Regarding the comparison between H1 and H3, H1 performs better than H3 in all cases. Hence, filling the gaps with new jobs by the SPT rule does not appear to be a good idea. Also, H1 and H2 are structured so that new jobs are reordered into an SPT order inside each gap. Moreover, all new jobs that are scheduled at the end of the remaining jobs are reordered into an SPT order. As a result, H1 and H2 take advantage of the SPT rule to some degree in terms of minimizing the total completion time of new jobs and hence perform better than H3.

For the same no and nc values, we vary nn to see the effect of different nn values. The results are presented in Tables 2, 3, and 4, and they indicate that for all the heuristics, the mean modified z-approximation becomes smaller as nn increases, except for the cases where nc = 1 and nn = 3 for H1 and H3, and where nc = 5 and nn = 15 for H1. In other words, if there exist more new jobs to schedule at gaps, then the performances of H1, H2, and H3 generally improve.

In Table 5, we present the mean relative error ratio for H1, H2, and H3, where zA1 is used as a lower bound for z*. The results in the table are presented as in Table 1. The mean relative error ratio of all the heuristics becomes smaller as no increases, and with the same no value, it becomes smaller as nc increases. Hence, we may conclude that the performance of H1, H2, and H3 improves as no increases, and with the same no value, the heuristics perform better as nc (and hence nn) increases.

As in the case where the objective is minimizing $∑_{N_r} E_j$, the mean relative error ratio of H2 is clearly the smallest among the three heuristics for each cell where no, nc, and nn are fixed. Consequently, H2 performs better than both H1 and H3. Regarding the comparison between H1 and H3, H1 performs better than H3 in all cases. The reasons should be similar to those for the case with the objective of minimizing $∑_{N_r} E_j$. It is worthwhile to note that even though A1, which is almost identical to H3, generates an optimal schedule for the problem with the objective of minimizing $∑_{N_n} C_j$, H3 is the worst performer for the problem of minimizing $∑_{N_r} E_j + ∑_{N_n} C_j$.

For the same no and nc values, we vary nn to see the effect of different nn values. The results are presented in Tables 6, 7, and 8, and they indicate that for all the heuristics, the mean relative error ratio becomes smaller as nn increases in all cases. In other words, if there exist more new jobs to schedule at gaps, then the performances of H1, H2, and H3 improve.

9. SUMMARY AND FURTHER RESEARCH

In this paper, we explored a single machine rescheduling problem with two types of disruptions: order cancelations and newly arrived orders. The original objective is minimizing total completion time, and after the disruptions occur, we reschedule the remaining and newly arrived orders with one of three new objectives under the conditions of no idle time and no tardiness of remaining orders.

First, we consider the problem where the objective of the rescheduling is minimizing the total earliness with no change in the sequence of remaining orders. Second, since the efficiency-related objective measure is still minimizing total completion time, we minimize the total completion time of new jobs. Third, we consider a composite objective, which is minimizing the sum of the total completion time of new jobs and the total earliness of remaining jobs with no change in the sequence of remaining orders.

We establish the complexity of several variations of the problem, and for the objective of minimizing the total completion time of new jobs, we develop an optimal solution procedure. For the other two objectives, we develop four simple but intuitive heuristics. For each of the heuristics, we analyze the performance by using worst case bound analysis. Finally, we empirically analyze the three heuristics other than H0, because H0 is used as an upper bound for the other heuristics. The results indicate that heuristic H2 performs better than H1 and H3 and, in turn, H1 performs better than H3 for both objectives. While A1, which is almost identical to H3, generates an optimal schedule for the problem with the objective of minimizing $∑_{N_n} C_j$, H3 is the worst performer for the problem of minimizing $∑_{N_r} E_j + ∑_{N_n} C_j$.

We believe that the major contributions of this paper include the identification of a new rescheduling problem, the proofs of complexity, and the development and analysis of the heuristics. There are several important possible extensions of this research. First, multiple machine cases with different original objectives would be worthwhile to consider. Also worth considering are situations with other types of order disruptions, such as changes in order priority and due dates.

ACKNOWLEDGMENT

This work was supported by the 2017 Research Fund of the University of Seoul.

Table

Table 1. Performances of the heuristics

Table 2. Performances of the heuristics with various nn when no = 10

Table 3. Performances of the heuristics with various nn when no = 50

Table 4. Performances of the heuristics with various nn when no = 100

Table 5. Performances of the heuristics

Table 6. Performances of the heuristics with various nn when no = 10

Table 7. Performances of the heuristics with various nn when no = 50

Table 8. Performances of the heuristics with various nn when no = 100

REFERENCES

1. Clausen, J., Larsen, J., Larsen, A., and Hansen, J. (2001), Disruption management: Operations research between planning and execution, OR/MS Today, 28, 40-43.
2. Garey, M. R. and Johnson, D. S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco.
3. Graham, R. L., Lawler, E. L., Lenstra, J. K., and Rinnooy Kan, A. H. G. (1979), Optimization and approximation in deterministic sequencing and scheduling: A survey, Annals of Discrete Mathematics, 5, 287-326.
4. Hall, N. G. and Potts, C. N. (2004), Rescheduling for new orders, Operations Research, 52(3), 440-453.
5. Hassin, R. and Khuller, S. (2001), z-Approximations, Journal of Algorithms, 41(2), 429-442.
6. Johnson, D. S., Demers, A., Ullman, J. D., Garey, M. R., and Graham, R. L. (1974), Worst-case performance bounds for simple one-dimensional packing algorithms, SIAM Journal on Computing, 3(4), 299-325.
7. Katragjini, K., Vallada, E., and Ruiz, R. (2013), Flow shop rescheduling under different types of disruption, International Journal of Production Research, 51(3), 780-797.
8. Kopanos, G. M., Capón-García, E., Espuña, A., and Puigjaner, L. (2008), Costs for rescheduling actions: A critical issue for reducing the gap between scheduling theory and practice, Industrial & Engineering Chemistry Research, 47(22), 8785-8795.
9. Ozlen, M. and Azizoğlu, M. (2011), Rescheduling unrelated parallel machines with total flow time and total disruption cost criteria, Journal of the Operational Research Society, 62(1), 152-164.
10. Qi, X., Bard, J. F., and Yu, G. (2006), Disruption management for machine scheduling: The case of SPT schedules, International Journal of Production Economics, 103(1), 166-184.
11. Unal, A. T., Uzsoy, R., and Kiran, A. S. (1997), Rescheduling on a single machine with part-type dependent setup times and deadlines, Annals of Operations Research, 70, 93-113.
12. Wu, S. D., Storer, R. H., and Chang, P. C. (1993), One-machine rescheduling heuristics with efficiency and stability as criteria, Computers and Operations Research, 20(1), 1-14.
13. Yang, J. (2013), No tardiness rescheduling with order disruptions, Industrial Engineering & Management Systems, 12(1), 51-62.
14. Yang, J. and Posner, M. E. (2012), Rescheduling with order disruptions, Working Paper, The Ohio State University, OH, USA.
15. Yin, Y., Cheng, T. C. E., and Wang, D. J. (2016), Rescheduling on identical parallel machines with machine disruptions to minimize total completion time, European Journal of Operational Research, 252(3), 737-749.