1. INTRODUCTION
Disruptions such as cancelations of orders, arrival of new orders, changes in order priority, processing delays, changes in release dates, machine breakdowns, and the unavailability of raw materials, personnel, or tools are commonly found in manufacturing facilities (Hall and Potts, 2004). However, most studies in the scheduling field assume scheduling situations without any of these disruptions. This paper considers the rescheduling problem in which cancelations of orders and arrivals of new orders occur at the same time.
In a given period of time, say the last 24 hours, if cancelations of orders and arrivals of new orders occur, then the operations manager (OM) has to update or reschedule the current schedule while considering both the canceled orders and the new orders. It may be possible to ignore the current schedule and develop an entirely new schedule, but doing so causes other problems with respect to reallocations of raw materials and resources, including labor, tools, and equipment, which had already been prepared for the current schedule. Moreover, the OM has to deal with an unhappy sales department or customers because of substantial changes in the completion times of some orders. Hence, the OM may need to consider the tradeoff between the efficiency related cost and the disruption related cost of scheduling when the new schedule is generated.
For example, consider a facility where orders are scheduled with a traditional objective such as minimizing total completion time. After an initial schedule is generated, each order receives an estimated completion time, or so-called quoted due date (QDD), which is used to notify customers when their order will be shipped from the factory. Then, whenever rescheduling is necessary due to order disruptions, these quoted completion times are often hard-pegged as fixed completion times. Obviously, the company does not want to see changes in the already determined completion times, and by keeping them fixed it can minimize the effect of the disruptions.
Specifically, this paper considers a single machine rescheduling problem where the original (efficiency related) objective is minimizing total completion time. Then, we consider the situation where disruptions such as order (job) cancelations and newly arrived orders occur after the initial scheduling, and we reschedule this disrupted schedule with one of three new objectives under the conditions of no idle time and no tardiness of the remaining jobs. We assume that idle time is not allowed mainly because the original objective is minimizing total completion time, which focuses on completing orders as quickly as possible on average. Regarding no tardiness of the remaining jobs, we can easily find real world situations where tardiness of remaining orders is not allowed. For instance, when a buyer places an order with a manufacturer, the manufacturer usually returns a promised delivery date to the buyer after creating or updating its schedule. This promised delivery date then becomes a due date which must always be met.
The disruption related objective measures the impact of the disruptions as the difference between the completion times of the remaining (not canceled) jobs before and after the disruptions. The artificial due dates for the remaining jobs are set to their completion times in the original schedule, while newly arrived orders do not have due dates. Then, our first rescheduling objective is minimizing the total earliness with no change in the sequence of the remaining orders. With this objective, we focus mainly on the disruption related measure without considering the quality of the schedule for the newly arrived orders. The second rescheduling objective is minimizing the total completion time of the newly arrived orders. With this objective, we do not consider the disruption related measures explicitly, except that no tardiness is allowed for the remaining orders; however, it turns out that there exists an optimal schedule with no change in the sequence of the remaining orders. Finally, we consider a composite objective which includes both a disruption related measure and an efficiency related measure: minimizing the sum of the total completion time of the new orders and the total earliness of the remaining orders, with no change in the sequence of the remaining orders. With this objective, we can see the tradeoff between the disruption related objective and the efficiency related objective.
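To make the three measures concrete, here is a small sketch in Python. The function names, dictionary layout, and the toy numbers are ours, not from the paper; due dates of remaining jobs are their completion times in the original schedule, as described above.

```python
# Sketch of the three rescheduling objectives (illustrative names and data).

def total_earliness(remaining, completion, due):
    """Disruption measure: sum of max(d_j - C_j, 0) over remaining jobs,
    where d_j is the job's completion time in the original schedule."""
    return sum(max(due[j] - completion[j], 0) for j in remaining)

def total_completion_time(new_jobs, completion):
    """Efficiency measure: sum of C_j over the newly arrived jobs."""
    return sum(completion[j] for j in new_jobs)

# Toy instance: two remaining jobs with artificial due dates 5 and 9,
# and one new job; completion times are those after rescheduling.
completion = {1: 4, 2: 9, 3: 12}
due = {1: 5, 2: 9}                 # d_j = C_j(sigma^d) for remaining jobs
remaining, new_jobs = [1, 2], [3]

e = total_earliness(remaining, completion, due)     # objective 1
c = total_completion_time(new_jobs, completion)     # objective 2
composite = e + c                                   # objective 3
print(e, c, composite)                              # 1 12 13
```

The composite value exhibits the tradeoff directly: pushing new jobs earlier lowers `c` but may push remaining jobs ahead of their due dates and raise `e`.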
For general discussions on rescheduling, see Hall and Potts (2004). For the practical significance of this type of work, see Clausen et al. (2001) and Kopanos et al. (2008). There are several studies which are closely related to this paper. Wu et al. (1993) consider a single machine rescheduling problem where job ready times and tails exist. Machine failure is considered as the disruption, and the objective is to minimize both the makespan and the disruption from the original schedule. The disruption is measured by minimizing total earliness and tardiness, where the start times of jobs in the original schedule are used as due dates. Unal et al. (1997) study a single machine with newly arrived jobs that have setup times that depend on their part types. One objective is to minimize the total weighted completion time and the other is to minimize the makespan of the new schedule. Hall and Potts (2004) also consider a single machine scheduling problem with newly arrived jobs as disruptions. Two separate objective functions are used to minimize disruptions: minimizing the positional difference between the original schedule and the new schedule, and minimizing total earliness and tardiness with the start times of jobs in the original schedule as due dates. They provide several intractability results and heuristics with performance evaluations. Qi et al. (2006) study single machine and two parallel machine rescheduling problems where unexpected machine breakdowns or changes in processing times occur. The efficiency related objective is to minimize total completion time and the disruption related objective is to minimize total earliness and tardiness with the start times of jobs in the original schedule as due dates. A composite objective function is used to address both types of objectives.
For rescheduling problems in a flowshop environment, see Katragjini et al. (2013). Also, for rescheduling on identical parallel machines, see Yin et al. (2016). Finally, for rescheduling in an unrelated parallel shop where unexpected machine failure occurs, see Ozlen and Azizoğlu (2011). Their efficiency related objective is to minimize total completion time and the disruption related objective is to minimize the number of jobs which are assigned to a different machine or to minimize the cost associated with the number of jobs that are assigned to a different machine.
Yang and Posner (2012) consider a single machine rescheduling problem where the objective is to minimize the maximum deviation where both tardiness and earliness are allowed. They establish the complexity of the problem and develop a few heuristics. They also find that their results can be easily extended to a special case of the problem with the objective of minimizing total earliness and tardiness. Finally, Yang (2013) considers a single machine rescheduling problem where the objective is to minimize the maximum lateness of the remaining jobs. The types of analysis are similar to those in Yang and Posner (2012).
While this work is similar to Yang and Posner (2012) and Yang (2013), this paper considers rescheduling with a variety of objectives: minimization of the total completion time of newly arrived orders, minimization of the total earliness of remaining orders, and minimization of the sum of the total completion time of newly arrived orders and the total earliness of remaining orders. Notice that the last objective is a composite one with two different measures, and hence the tradeoff between the disruption related objective and the efficiency related objective can be examined. Finally, this work is differentiated from other studies that consider order disruptions because we consider both newly arrived orders and canceled orders, while the other studies, except for Yang and Posner (2012) and Yang (2013), consider only newly arrived orders.
In the next section, we introduce some notation and describe the problem. In Section 3, we present assumptions and preliminary results. Then, we prove the complexity of various versions of the problem and present an optimal procedure for one of the problems in Section 4. Four heuristics are examined in Section 5. We first introduce a heuristic which always generates the maximum value of total earliness and is used as an upper bound for the other three heuristics. Then, we develop three heuristics which are based on scheduling jobs either in index order, in longest processing time order, or in shortest processing time order at the first available idle times created by canceled jobs. We analytically investigate each of the heuristics and prove some worst case bounds in Sections 6 and 7. Finally, we empirically evaluate the heuristics in Section 8.
2. NOTATION
The parameters of the problem are

n_{o} : number of original jobs

n_{c} : number of cancelled jobs

n_{r} = n_{o} − n_{c}: number of remaining jobs

n_{n} : number of new jobs

n = n_{r} + n_{n} : number of jobs to be scheduled

N_{r} : set of remaining jobs = {1, 2,…, n_{r}}

N_{n} : set of new jobs = {n_{r} + 1, n_{r} + 2, …, n_{r} + n_{n}}
$N = N_r \cup N_n = \{1, 2, \dots, n\}$

p_{j} : processing time of job j for j∈N

σ^{d} : disrupted schedule.
The decision variables are

σ : schedule of all jobs

C_{j} (σ) : completion time of job j in schedule σ for j∈N

E_{j} (σ) : earliness cost of remaining job j in schedule σ for j∈N_{r}

z(σ ) : value of schedule σ

σ^{*} : an optimal schedule

z^{*} : value of optimal schedule σ^{*}.
When no confusion exists, we replace C_{j}(σ) and E_{j}(σ) with C_{j} and E_{j}, respectively. The standard classification scheme for scheduling problems (Graham et al., 1979) is $\alpha_1 \,|\, \alpha_2 \,|\, \alpha_3$, where α_{1} describes the machine structure, α_{2} gives the job characteristics and restrictions, and α_{3} defines the objective. Following this scheme, we refer to the problem of minimizing the total earliness of the remaining jobs with no change in the sequence of the remaining orders as $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$, where dsrpt implies that the problem considers disruption related rescheduling, noidle means that no idle time exists in a schedule, notrdy indicates that tardiness is not allowed for the remaining jobs, and π_{R} denotes that the remaining jobs maintain their original sequence. Note that only the remaining jobs have due dates, which are set to their completion times in the original schedule. We assume that the remaining and new jobs are available at the start of the planning process. Also, preemptions of jobs are not allowed.
In this paper, a schedule defines a job order on the machine. Since no idle time is allowed, the job order determines the start and completion time of jobs on the machine. An original schedule is the schedule which is created and established before disruptions occur. As we noted, the execution of the original schedule is disrupted because of cancelled and newly arrived jobs. Thus, there is a rescheduling. A disrupted schedule is the original schedule with canceled jobs removed. We assume that the disrupted schedule starts at the beginning of time horizon.
For notational convenience, each block of idle time created by canceled jobs is called a gap. Let G be the set of gaps, $G = \{G_1, G_2, \dots, G_q\}$, where q is the number of gaps in disrupted schedule σ^{d}. Then, σ^{d} can be described as a sequence of jobs and gaps. For instance, $\sigma^d = (1, G_1, 2, G_2, 3, 4)$ means that there are four remaining jobs 1, 2, 3, and 4, with one gap between jobs 1 and 2 and another gap between jobs 2 and 3. We let g_{j} be the duration of gap G_{j} for j∈{1, 2, …, q}; note that g_{j} > 0 for all j∈{1, 2, …, q}. Also, we let R_{i} be the set of remaining jobs processed between gaps i and i + 1 for i = 0, 1, …, q, where R_{0} is the set of jobs before G_{1} and R_{q} is the set of jobs after G_{q}, and we let r_{i} be the number of jobs in R_{i} for i = 0, 1, …, q. Then, r_{0} ≥ 0 and r_{i} > 0 for i = 1, 2, …, q.
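The gap notation can be made concrete with a small sketch. The list encoding and the helper name are ours; the example reproduces the disrupted schedule $\sigma^d = (1, G_1, 2, G_2, 3, 4)$ above under assumed processing times.

```python
# A minimal encoding of a disrupted schedule: an entry ("gap", g) is a gap
# of duration g; any other entry is a remaining job id. Names are ours.

def walk_disrupted(schedule, p):
    """Return (due, gaps): due[j] = C_j(sigma^d), the artificial due date of
    remaining job j, and the gap durations g_1, ..., g_q in order."""
    t, due, gaps = 0, {}, []
    for entry in schedule:
        if isinstance(entry, tuple) and entry[0] == "gap":
            t += entry[1]
            gaps.append(entry[1])
        else:
            t += p[entry]
            due[entry] = t
    return due, gaps

# sigma^d = (1, G1, 2, G2, 3, 4) with assumed p = (2, 3, 1, 2), g = (4, 5)
p = {1: 2, 2: 3, 3: 1, 4: 2}
sigma_d = [1, ("gap", 4), 2, ("gap", 5), 3, 4]
due, gaps = walk_disrupted(sigma_d, p)
print(due, gaps)   # {1: 2, 2: 9, 3: 15, 4: 17} [4, 5]
```

Here R_0 = {1}, R_1 = {2}, and R_2 = {3, 4}, so r_0 = r_1 = 1 and r_2 = 2, matching the definitions above.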
3. PRELIMINARY RESULTS AND ASSUMPTIONS
In this section, we establish some results that provide the basis for our analysis. First, we prove the following lemma, which establishes relationships between σ^{d} and σ^{*}. For notational convenience, we let ${d}_{j}={C}_{j}({\sigma}^{d})$ for j = 1, 2, …, n_{r}, and d_{j} is treated as the due date of job j in the rescheduling problem. For the problem with the objective of minimizing the total completion time of new jobs, we have the following result.
Lemma 1. For problem $1 \,|\, dsrpt, noidle, notrdy \,|\, \sum_{N_n} C_j$, there exists an optimal schedule where the order of the remaining jobs is the same as that in σ^{d}.
Proof. Suppose there exist adjacent remaining jobs i and j ∈ N_{r} such that job i precedes job j in σ^{d} but job j precedes job i in σ^{*}. Notice that some new jobs in N_{n} can be scheduled between these two remaining jobs in σ^{*}. In this case, we can simply switch the order of the remaining jobs i and j in σ^{*} and schedule all new jobs between the two jobs right before job i. This does not create any tardiness for the remaining jobs. Also, $\sum_{N_n} C_j$ either decreases if there are some new jobs between the two jobs or does not change if there are none. Therefore, the result holds. □
Hence, even though problem $1 \,|\, dsrpt, noidle, notrdy \,|\, \sum_{N_n} C_j$ does not explicitly focus on reducing the disruption related impact, it preserves the order of the remaining jobs while rescheduling, in addition to ensuring no tardiness of the remaining jobs.
For the problems which include the objective of minimizing the total earliness of the remaining jobs, we assume that the remaining jobs maintain their original sequence in a new schedule in order to honor the sequence which was fixed in the original schedule. Hence, for problems $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ and $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j + \sum_{N_n} C_j$, it is assumed that the remaining jobs maintain their original sequence during the rescheduling.
As a result of Lemma 1 and the assumption above, we only consider schedules where the order of the remaining jobs is the same as in σ^{d}.
Lemma 2. There exists an optimal schedule where any two remaining jobs processed consecutively in σ^{d} are still processed consecutively in σ^{*}.
Proof. Suppose that there exists an optimal schedule where some new jobs are scheduled between two remaining jobs processed consecutively in σ^{d}. Suppose that r ∈ N_{n} is the first such job and that job r is scheduled between remaining jobs i and j, for i, j ∈ N_{r}, in σ^{*}. If an earliness cost is incurred either on both jobs i and j or on job i only in σ^{*}, then we can reduce the earliness cost by scheduling job r before job i. Notice that tardiness cannot occur because job j does not generate any tardiness and jobs i and j are processed consecutively in σ^{d}. This contradicts the optimality of the schedule. We use the same argument for all new jobs which are scheduled between two consecutively processed remaining jobs in σ^{d}. A similar argument can be used to prove the case where the objective is minimizing the total completion time of new jobs. □
As a result of Lemma 2, when we consider an optimal schedule, we assume that no new jobs are scheduled between two consecutively processed remaining jobs in σ^{d}.
4. COMPLEXITY RESULTS
We begin by introducing an algorithm which generates an optimal schedule for $1 \,|\, dsrpt, noidle, notrdy \,|\, \sum_{N_n} C_j$. Algorithm A1 uses the SPT (shortest processing time first) rule to select the next new job to be scheduled at each gap, and uses the index order to select the next gap to be considered.
Algorithm A1

0. Set i = 0.

1. If G = ∅ or N_{n} = ∅, then go to Step 6.

2. Set r = |N_{n}|, j = n_{r}, and i = i + 1. Reindex the new jobs in N_{n} such that p_{j} ≤ p_{j+1} for j = n_{r} + 1, n_{r} + 2, …, n_{r} + r − 1.

3. Set j = j +1 .
If p_{j} ≤ g_{i} , then schedule job j at G_{i}, g_{i} = g_{i} − p_{j}, and N_{n} = N_{n} \{j} . Otherwise, go to Step 5.

4. If job j +1 exists in N_{n} and g_{i} > 0 , then go to Step 3.

5. Schedule jobs in R_{i} without idle time, G = G \{G_{i}}, g_{i}_{+1} = g_{i}_{+1} + g_{i} , and go to Step 1.

6. Eliminate remaining gaps by expediting jobs.

7. All unscheduled new jobs are scheduled at the end of the current schedule.

8. Calculate solution cost and stop.
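The steps above can be sketched as follows. The data layout and function name are ours, not the paper's: `blocks[i]` lists the processing times of the remaining jobs in R_{i} (with `blocks[0]` possibly empty), and `gaps[i]` is the duration g_{i+1}; the sketch returns the total completion time of the new jobs rather than a full schedule.

```python
# A runnable sketch of Algorithm A1 under the assumed encoding above.

def algorithm_a1(blocks, gaps, p_new):
    gaps = list(gaps)              # local copy: gap durations shrink/merge
    new = sorted(p_new)            # reindex new jobs in SPT order (Step 2)
    k = 0                          # next unscheduled new job in SPT order
    t = sum(blocks[0])             # time after the jobs preceding gap 1
    total_new = 0
    for i, g in enumerate(gaps):
        # Steps 3-4: fill gap i with the shortest new jobs that still fit
        while k < len(new) and new[k] <= g:
            g -= new[k]
            t += new[k]
            total_new += t
            k += 1
        # Step 5: leftover gap time merges into the next gap; the jobs in
        # R_{i+1} are expedited, so no idle time remains
        if i + 1 < len(gaps):
            gaps[i + 1] += g
        t += sum(blocks[i + 1])
    # Steps 6-7: any still-unscheduled new jobs go at the end, in SPT order
    for j in range(k, len(new)):
        t += new[j]
        total_new += t
    return total_new

# sigma^d = (1, G1, 2, G2, 3, 4) with g = (4, 5) and new jobs (1, 4, 6)
print(algorithm_a1([[2], [3], [1, 2]], [4, 5], [1, 4, 6]))   # 32
```

In the example, the new job of length 1 fits in G_{1}, the leftover 3 units merge into G_{2} (which then holds the job of length 4), and the job of length 6 is appended at the end; no remaining job finishes after its original completion time.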
Algorithm A1 runs in O(n log n) time, and the resulting schedule does not contain inserted idle time. Let σ^{A1} be the schedule generated by algorithm A1 and z^{A1} be the cost of schedule σ^{A1}. The following result shows that algorithm A1 generates an optimal schedule for problem $1 \,|\, dsrpt, noidle, notrdy \,|\, \sum_{N_n} C_j$.
Theorem 1. For problem $1 \,|\, dsrpt, noidle, notrdy \,|\, \sum_{N_n} C_j$, algorithm A1 generates an optimal schedule.
Proof. If $\min_{j \in N_n}\{p_j\} > \sum_{i=1}^{q} g_i$, then no new job can be scheduled at any of the gaps without causing tardiness of the remaining jobs. Hence, all new jobs should be scheduled at the end of the remaining jobs. Since the SPT rule is optimal for problem $1 \,\|\, \sum C_j$, algorithm A1 generates an optimal schedule for our problem. Alternatively, if $\min_{j \in N_n}\{p_j\} \le \sum_{i=1}^{q} g_i$, then there exists at least one new job which should be scheduled at a gap.
Suppose that σ^{A1} is not optimal, and suppose that the new jobs in σ^{A1} are ordered in their index order. Then in σ^{*}, there exists a job j which is scheduled at the ith position for j > i and i, j ∈ N_{n}. Since p_{j} > p_{i}, we can simply switch the positions of jobs i and j in σ^{*} without causing any tardiness of the remaining jobs. Let σ′ be this modified schedule. Then, ${C}_{j}({\sigma}^{*}) > {C}_{i}({\sigma}^{\prime})$ and ${C}_{i}({\sigma}^{*}) = {C}_{j}({\sigma}^{\prime})$, which is a contradiction. Therefore, σ^{A1} is an optimal schedule. □
The next result establishes the complexity of the problem. We show that problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ is unary NP-complete by using a reduction from 3-Partition, which is a known unary NP-complete problem.
3-Partition (Garey and Johnson, 1979): Given 3ℓ elements with integer sizes a_{1}, a_{2},…, a_{3ℓ}, where $\sum_{j=1}^{3\ell} a_j = \ell y$ and $y/4 < a_j < y/2$ for j = 1, 2,…, 3ℓ, does there exist a partition A_{1}, A_{2},…, A_{ℓ} of the index set {1, 2,…, 3ℓ} such that $|A_i| = 3$ and $\sum_{j \in A_i} a_j = y$ for i = 1, 2,…, ℓ?
Theorem 2. Problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ is unary NP-complete.
Proof. Given an instance of 3-Partition, we construct the following instance of $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$:
$\begin{array}{l} n_r = \ell, \\ n_n = 3\ell, \\ p_j = 1, \quad j = 1, 2, \dots, \ell, \\ p_j = a_{j-\ell}, \quad j = \ell+1, \ell+2, \dots, 4\ell, \\ g_i = y, \quad i = 1, 2, \dots, \ell, \\ \sigma^d = (G_1, 1, G_2, 2, \dots, G_\ell, \ell), \\ z = 0. \end{array}$
The decision version of problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ is in NP since we can calculate $\sum_{N_r} E_j$ in polynomial time. We assume without loss of generality that $\sum_{j=1}^{3\ell} a_j = \ell y$. We prove that there exists a solution to 3-Partition if and only if there exists a solution to problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ where $\sum_{N_r} E_j = 0$.
(⇒) Suppose that there exists a solution to 3-Partition. Then, there exist ℓ disjoint sets A_{1}, A_{2},…, A_{ℓ} such that $\sum_{a_j \in A_i} a_j = y$ for i = 1, 2,…, ℓ. We assume without loss of generality that, if there exists a solution to 3-Partition, then the elements are indexed such that $a_{3i-2} + a_{3i-1} + a_{3i} = y$ for i = 1, 2,…, ℓ.
Consider schedule σ for problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ where
$\sigma = (\ell+1, \ell+2, \ell+3, 1, \ell+4, \ell+5, \ell+6, 2, \dots, 4\ell-2, 4\ell-1, 4\ell, \ell). \quad (1)$
Note that $p_{\ell+3i-2} + p_{\ell+3i-1} + p_{\ell+3i} = y$ for i = 1, 2,…, ℓ. Thus, jobs ℓ + 3i − 2, ℓ + 3i − 1, and ℓ + 3i are scheduled at G_{i} without generating earliness cost on job i for i = 1, 2,…, ℓ. Therefore, $\sum_{N_r} E_j = 0$ for problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$.
(⇐) Suppose that there exists a solution to problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ where $\sum_{N_r} E_j = 0$. Since $\sum_{N_r} E_j = 0$, new jobs must be scheduled at each gap without creating any idle time. Further, new jobs must not generate any tardiness. This implies that the total processing time of the new jobs scheduled at each gap must be exactly y. Observe that y/4 < p_{j} < y/2 for j = ℓ + 1, ℓ + 2,…, 4ℓ. Also, there are ℓ gaps and 3ℓ new jobs. Hence, achieving $\sum_{N_r} E_j = 0$ is possible only if a set of three new jobs is scheduled at each gap and the sum of the processing times of each set of three jobs is exactly y. This is possible only if there exists a solution to 3-Partition. □
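To illustrate the construction in the proof, the following sketch builds the rescheduling instance from a 3-Partition instance. The helper name is ours, and the toy numbers relax the y/4 < a_j < y/2 bounds for brevity.

```python
# Sketch of the Theorem 2 reduction: l unit-time remaining jobs, 3l new
# jobs with p_{l+j} = a_j, and l gaps of duration y (names are ours).

def theorem2_instance(a, y):
    l = len(a) // 3
    p_remaining = [1] * l              # p_j = 1 for j = 1, ..., l
    p_new = list(a)                    # p_{l+j} = a_j for j = 1, ..., 3l
    gaps = [y] * l                     # g_i = y for every gap
    # sigma^d = (G_1, 1, G_2, 2, ..., G_l, l): remaining job i completes
    # (and hence is due) at time i * (y + 1)
    due = [i * (y + 1) for i in range(1, l + 1)]
    return p_remaining, p_new, gaps, due

# Toy yes-instance: triples (1, 2, 5) and (2, 3, 3) each sum to y = 8,
# so each gap can be filled exactly and total earliness 0 is achievable.
p_r, p_n, gaps, due = theorem2_instance([1, 2, 5, 2, 3, 3], 8)
print(p_r, gaps, due)   # [1, 1] [8, 8] [9, 18]
```

Filling each gap with one triple leaves every remaining job exactly at its original completion time, which is the $\sum_{N_r} E_j = 0$ schedule used in the forward direction.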
The next result establishes the complexity of a special case of problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ where there exists one gap in a disrupted schedule. We use a reduction from the following binary NP-complete problem.
Partition (Garey and Johnson, 1979): Given a multiset A = {a_{1}, a_{2}, …, a_{2ℓ}} of 2ℓ integers and a positive integer b, can A be partitioned into two disjoint subsets A_{1} and A_{2} such that $\sum_{a_j \in A_1} a_j = \sum_{a_j \in A_2} a_j = b$?
Theorem 3. Even if there exists only one gap in σ^{d} and $g_1 < \sum_{j \in N_n} p_j$, the recognition version of problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ where there exists a fixed number of gaps in a disrupted schedule is at least binary NP-complete.
Proof. Given an instance of Partition, we construct the following instance of problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ with one gap and $g_1 < \sum_{j \in N_n} p_j$. We assume that there exist one remaining job, job 1, and 2ℓ new jobs, and let p_{1} = 1 and p_{j} = a_{j} for j∈N_{n}. We also set g_{1} to the size of a partition, such that g_{1} = b, and $\sigma^d = (G_1, 1)$.
Notice that the decision version of problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ with one gap and $g_1 < \sum_{j \in N_n} p_j$ is in NP since we can calculate $\sum_{N_r} E_j$ in polynomial time. We prove that there exists a solution to Partition if and only if there exists a solution to problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ with one gap and $g_1 < \sum_{j \in N_n} p_j$ where $\sum_{N_r} E_j = 0$.
First, having a partition of size b implies that there exists a set of jobs in N_{n} whose total processing time is equal to b, which is also equal to g_{1}. Then, we schedule those jobs at G_{1} and the rest of the jobs in N_{n} at the end of σ^{d}. Since the new jobs scheduled at the gap do not create any earliness cost on job 1, the solution value of this schedule is 0.
In the other direction, having a schedule for the problem with $\sum_{N_r} E_j = 0$ implies that there exists a set of jobs in N_{n} whose total processing time is equal to b. Since p_{j} = a_{j} for j ∈ N_{n}, there exists a partition.
Therefore, a partition exists if and only if there exists a schedule for problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j$ with one gap and $g_1 < \sum_{j \in N_n} p_j$ such that $\sum_{N_r} E_j = 0$. □
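As the proof shows, with a single gap the question of achieving $\sum_{N_r} E_j = 0$ is exactly a subset-sum question: it is possible if and only if some subset of the new processing times sums to g_{1}. Consistent with binary (rather than unary) NP-completeness, a pseudo-polynomial dynamic program decides this question; the sketch below and its function name are ours.

```python
# Pseudo-polynomial subset-sum check for the one-gap case (Theorem 3).

def zero_earliness_possible(p_new, g1):
    """True iff some subset of the new jobs' processing times sums exactly
    to g1, i.e., the single gap can be filled with no idle time and no
    earliness on the remaining job."""
    reachable = {0}                     # achievable subset sums so far
    for p in p_new:
        reachable |= {s + p for s in reachable if s + p <= g1}
    return g1 in reachable

# Partition instance a = (3, 1, 1, 2, 2, 1), b = 5: subset {3, 2} works
print(zero_earliness_possible([3, 1, 1, 2, 2, 1], 5))   # True
print(zero_earliness_possible([4, 4, 4], 5))            # False
```

The running time is O(n_{n} · g_{1}), polynomial in the numeric value of g_{1} but not in its encoding length, which is why the one-gap case is binary rather than unary NP-complete.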
The next result establishes that problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j + \sum_{N_n} C_j$ is unary NP-complete by using a reduction from the following unary NP-complete problem.
Numerical 3-Dimensional Matching (Garey and Johnson, 1979): We are given disjoint sets W, X, and Y, each containing ℓ elements, integer sizes a_{1}, a_{2}, …, a_{3ℓ}, where $a_j \in W \cup X \cup Y$, and a positive integer bound b. The problem is to partition $W \cup X \cup Y$ into ℓ disjoint sets $A_1, A_2, \dots, A_\ell$ such that each A_{i} contains exactly one element from each of W, X, and Y and $\sum_{a_j \in A_i} a_j = b$ for i = 1, 2, …, ℓ.
Theorem 4. The decision version of problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j + \sum_{N_n} C_j$ is unary NP-complete.
Proof. For notational convenience, let $S_W = \sum_{a_j \in W} a_j$, $S_X = \sum_{a_j \in X} a_j$, and $S_Y = \sum_{a_j \in Y} a_j$, where W = {3ℓ + 1, 3ℓ + 2,…, 4ℓ}, X = {4ℓ + 1, 4ℓ + 2,…, 5ℓ}, and Y = {5ℓ + 1, 5ℓ + 2,…, 6ℓ}. Given an instance of Numerical 3-Dimensional Matching, we construct the following instance of $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j + \sum_{N_n} C_j$:
$\begin{array}{l} n_r = 3\ell, \\ n_n = 3\ell, \\ p_j = 10L, \quad j = 1, 2, \dots, 3\ell, \\ p_j = 10L + a_{j-3\ell}, \quad j = 3\ell+1, 3\ell+2, \dots, 4\ell, \\ p_j = 11L + a_{j-3\ell}, \quad j = 4\ell+1, 4\ell+2, \dots, 5\ell, \\ p_j = 12L + a_{j-3\ell}, \quad j = 5\ell+1, 5\ell+2, \dots, 6\ell, \\ g_i = 33L + b, \quad i = 1, 2, \dots, \ell, \\ \sigma^d = (G_1, 1, 2, 3, G_2, 4, 5, 6, G_3, \dots, G_\ell, 3\ell-2, 3\ell-1, 3\ell), \\ z = 109.5\ell^2 L - 45.5\ell L + 1.5\ell^2 b - 0.5\ell b + 2S_W + S_Y, \end{array}$
where $L = 10 \sum_{j=1}^{3\ell} a_j$.
The decision version of problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j + \sum_{N_n} C_j$ is in NP because we can calculate $\sum_{N_r} E_j + \sum_{N_n} C_j$ in polynomial time.
We prove that there exists a solution to Numerical 3-Dimensional Matching if and only if there exists a solution to $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j + \sum_{N_n} C_j$ where $\sum_{N_r} E_j + \sum_{N_n} C_j \le z$.
(⇒) We assume without loss of generality that if there exists a solution to Numerical 3-Dimensional Matching, then the elements are indexed such that $a_j + a_{\ell+j} + a_{2\ell+j} = b$ for $j = 1, 2, \dots, \ell$. Consider the schedule
$\sigma = (3\ell+1, 4\ell+1, 5\ell+1, 1, 2, 3, 3\ell+2, 4\ell+2, 5\ell+2, 4, 5, 6, \dots, 4\ell, 5\ell, 6\ell, 3\ell-2, 3\ell-1, 3\ell), \quad (2)$
where there exists no inserted idle time. Since there exists no idle time in σ, we have
$p_{3\ell+j} + p_{4\ell+j} + p_{5\ell+j} = 10L + a_j + 11L + a_{\ell+j} + 12L + a_{2\ell+j} = 33L + b$ for $j = 1, 2, \dots, \ell$, and hence $\sum_{j=3\ell+1}^{6\ell} p_j = \ell(33L + b)$.
Note that g_{i} = 33L + b for all i = 1, 2,…, ℓ. Hence, ${\sum}_{{N}_{r}}{E}_{j}=0$. Then, we calculate the sum of the total completion time as follows.
(⇐) Suppose that there exists a solution to problem $1 \,|\, dsrpt, noidle, notrdy, \pi_R \,|\, \sum_{N_r} E_j + \sum_{N_n} C_j$ where $\sum_{N_r} E_j + \sum_{N_n} C_j \le 109.5\ell^2 L - 45.5\ell L + 1.5\ell^2 b - 0.5\ell b + 2S_W + S_Y$. Let $I_1 = \{1, 2, \dots, 3\ell\}$, $I_2 = \{3\ell+1, 3\ell+2, \dots, 4\ell\}$, $I_3 = \{4\ell+1, 4\ell+2, \dots, 5\ell\}$, and $I_4 = \{5\ell+1, 5\ell+2, \dots, 6\ell\}$. Since all jobs in I_{1} are identical, we assume that in an optimal schedule, the jobs in I_{1} are processed in their index order.
We first consider the job order in an optimal solution. Since g_{i} = 33L + b for i = 1, 2, …, ℓ, no more than three new jobs can be scheduled at one gap. Specifically, we will show that in an optimal schedule, one from each of W, X, and Y is scheduled at each gap.
Notice that jobs are scheduled in σ from (2) such that no earliness cost is incurred by the remaining jobs and all new jobs are scheduled at one of the gaps. An optimal schedule must be no worse than σ, and since $\sum_{i=1}^{\ell}g_i=\ell(33L+b)$, it can be seen that in an optimal schedule, all new jobs should be scheduled at one of the gaps.
Since $g_i=33L+b$ for $i=1,2,\dots,\ell$, five different combinations of new jobs can be scheduled at the first gap in an optimal schedule: three jobs in W; two jobs in W and one job in X; one job in W and two jobs in X; one job in each of W, X, and Y; and three jobs in X. For notational convenience, let $j_s$, $j_t$, and $j_r$ be the first three new jobs scheduled at $g_1$. Without loss of generality, it can be assumed that a smaller job is scheduled earlier than a bigger job. If three jobs in W are scheduled at $g_1$, then the total completion time of the three jobs is $60L+3p_{j_s}+2p_{j_t}$ and job $j_r$ finishes at $30L+p_{j_s}+p_{j_t}+p_{j_r}$. Then, $\sum_{j=1}^{3}E_j=9L+3(b-p_{j_s}-p_{j_t}-p_{j_r})$. However, if one job in each of W, X, and Y is scheduled at $g_1$, then the total completion time of the three jobs is $64L+3p_{j_s}+2p_{j_t}$ and job $j_r$ finishes at $33L+p_{j_s}+p_{j_t}+p_{j_r}$. Then, $\sum_{j=1}^{3}E_j=3(b-p_{j_s}-p_{j_t}-p_{j_r})$. Hence, in terms of total cost, scheduling one job in each of W, X, and Y at $g_1$ is much better. Similarly, if two jobs in W and one job in X are scheduled at $g_1$, then the total completion time of the three jobs is $61L+3p_{j_s}+2p_{j_t}+p_{j_r}$ and job $j_r$ finishes at $31L+p_{j_s}+p_{j_t}+p_{j_r}$. Then, $\sum_{j=1}^{3}E_j=6L+3(b-p_{j_s}-p_{j_t}-p_{j_r})$. Hence, scheduling one job in each of W, X, and Y at $g_1$ is much better than scheduling two jobs in W and one job in X. Similar results hold for the remaining two cases.
Now, in an optimal schedule, let the jobs in $I_2$, $I_3$, and $I_4$ be processed in the order $v_1,v_2,\dots,v_{3\ell}$. For notational convenience, let $v_1=3\ell+1$, $v_2=4\ell+1$, $v_3=5\ell+1$, …, $v_{3\ell-2}=4\ell$, $v_{3\ell-1}=5\ell$, $v_{3\ell}=6\ell$. First, we describe an optimal schedule. Then, we show that a schedule does not have a solution value $\le z$ unless it is optimal.
First, consider the jobs scheduled in $G_1$ and $R_1$. Let $d_j=b-(a_j+a_{\ell+j}+a_{2\ell+j})$ for $j=1,2,\dots,\ell$. Then,
$$\begin{aligned}\sum_{j\in G_1}C_j+\sum_{j\in R_1}E_j&=3(10L+a_1)+2(11L+a_{\ell+1})+(12L+a_{2\ell+1})+4d_1\\&=64L+2a_1+a_{\ell+1}+b-d_1+4d_1\\&=64L+2a_1+a_{\ell+1}+b+3d_1.\end{aligned}$$
Note that $d_1\ge 0$ because there exists no tardiness for the remaining jobs. We can extend this calculation to the general case. Consider the jobs scheduled in $G_i$ and $R_i$ for $i\in\{2,3,\dots,\ell\}$. Then,
$$\begin{aligned}\sum_{j\in G_i}C_j+\sum_{j\in R_i}E_j&=\Big\{3(i-1)(33L+40L)+3(10L+a_i)+2(11L+a_{\ell+i})\\&\qquad+(12L+a_{2\ell+i})-3\sum_{j=1}^{i-1}d_j\Big\}+4\sum_{j=1}^{i}d_j\\&=3(i-1)73L+64L+2a_i+a_{\ell+i}+b-d_i\\&\qquad-3\sum_{j=1}^{i-1}d_j+4\sum_{j=1}^{i}d_j\\&=3(i-1)73L+64L+2a_i+a_{\ell+i}+b\\&\qquad+\sum_{j=1}^{i-1}d_j+3d_i.\end{aligned}$$
Then, from (3) the total cost for all jobs can be calculated as follows.
Notice that $\sum_{j=1}^{\ell}d_j=0$ as long as all new jobs are scheduled at one of the gaps without any tardiness cost. This is possible since $\sum_{i=1}^{\ell}g_i=\sum_{j\in N_n}p_j$. Also, $\sum_{i=1}^{j-1}d_i\ge 0$ for all $j\in\{2,3,\dots,\ell\}$ since there exists no tardiness for any of the remaining jobs. From (4), the solution value of any feasible schedule is at least $109.5\ell^2L-45.5\ell L+1.5\ell^2b-0.5\ell b+2S_W+S_Y$. Moreover, the solution value is minimized if $d_i=0$ for all $i=1,2,\dots,\ell$. This is possible only if $a_j+a_{\ell+j}+a_{2\ell+j}=b$ for $j=1,2,\dots,\ell$, that is, if there exists a solution to Numerical 3-Dimensional Matching. □
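The forward direction of the argument rests on a simple identity: a matching triple fills its gap exactly, so no earliness is incurred. A quick numeric sanity check of this identity (our own script, not part of the paper; the helper name and the toy instance are ours):

```python
# Sanity check (a sketch): given a Numerical 3-Dimensional Matching solution
# indexed so that a_j + a_{l+j} + a_{2l+j} = b, the three new jobs assigned
# to gap i have processing times (10L + a_j) + (11L + a_{l+j}) + (12L + a_{2l+j})
# = 33L + b, so every gap g_i = 33L + b is filled exactly.

def check_reduction(a, b, L):
    """a: list of the 3*l elements, already indexed as in the proof."""
    l = len(a) // 3
    # the matching condition from the proof
    assert all(a[j] + a[l + j] + a[2 * l + j] == b for j in range(l))
    for j in range(l):
        fill = (10 * L + a[j]) + (11 * L + a[l + j]) + (12 * L + a[2 * l + j])
        if fill != 33 * L + b:
            return False
    return True

# Toy instance (ours): l = 2, b = 12, triples (1, 4, 7) and (2, 4, 6).
print(check_reduction([1, 2, 4, 4, 7, 6], b=12, L=100))  # True
```

Since each gap is filled exactly, the remaining jobs keep their original completion times and $\sum_{j\in N_r}E_j=0$, as claimed.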
5. HEURISTICS
The heuristics in this section are based on simple and intuitive heuristics for the bin-packing problem, which resembles our problem in that items (jobs) should be packed (scheduled) into multiple bins (gaps) as fully as possible. Following the first-fit (FF) and first-fit decreasing (FFD) heuristics of Johnson et al. (1974), we develop heuristics that schedule new jobs at gaps using different scheduling rules: the LS (list scheduling) rule, where jobs are scheduled in their index order, the LPT (longest processing time first) rule, and the SPT (shortest processing time first) rule. We now formally describe each heuristic.
We begin by describing Heuristic H0, which generates a schedule with the maximum earliness cost for each remaining job. It simply processes all remaining jobs without idle time before any new job, and then schedules all new jobs at the end of σ^{d}. Even though this does not seem to be a good heuristic, we suspect that many practitioners actually use this type of heuristic in rescheduling situations. In our case, it also provides an upper bound for the analysis of the other heuristics.
Heuristic H0

1. Process all remaining jobs without idle time before any new job.

2. Schedule all new jobs at the end of the disrupted schedule.
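The two steps above can be sketched as follows (a minimal sketch with our own data layout: remaining jobs carry their original completion times from σ^{d}, and the two cost components are the total earliness of remaining jobs and the total completion time of new jobs):

```python
# A sketch of Heuristic H0 (names and data layout are ours): remaining jobs
# keep their sequence and are compressed to the front with no idle time, so
# each incurs maximum earliness; all new jobs are appended at the end.

def h0(remaining, original_completion, new_jobs):
    """remaining / new_jobs: lists of processing times;
    original_completion: completion times of the remaining jobs in sigma^d."""
    t, earliness, completion_new = 0.0, 0.0, 0.0
    for p, c in zip(remaining, original_completion):
        t += p
        earliness += c - t          # job now finishes at t, originally at c
    for p in new_jobs:
        t += p
        completion_new += t
    return earliness, completion_new

# Disrupted schedule (G1, 1) with g1 = 1, p1 = 1: job 1 originally ends at 2.
print(h0([1], [2], [2]))  # (1.0, 3.0): E1 = 1, the new job completes at 3
```

This matches the role H0 plays later: it realizes the maximum earliness for every remaining job, which is what makes it a worst-case reference in Section 6.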
The following heuristic, H1, uses the LS rule to select a new job to be scheduled at each gap and the index order to select the next gap to be considered. After new jobs are scheduled at each gap, they are reordered into SPT order inside the gap. Also, all new jobs that are scheduled at the end of the remaining jobs are reordered into SPT order.
Heuristic H1

0. Set i = 0.

1. If G = ∅ or N_{n} = ∅, then go to Step 5.

2. Set $r=|N_n|$, $j=n_r$, and $i=i+1$.
Reindex new jobs in N_{n} from n_{r} + 1 to n_{r} + r without changing the order of jobs.

3. Set j = j +1.
If p_{j} ≤ g_{i}, then schedule job j at G_{i}, g_{i} = g_{i} − p_{j}, and N_{n} = N_{n}\{j}.

4. If job $j+1$ exists in $N_n$ and $g_i>0$, then go to Step 3. Otherwise, schedule the jobs in $R_i$ without idle time, set $G=G\setminus\{G_i\}$ and $g_{i+1}=g_{i+1}+g_i$, and go to Step 1.

5. Eliminate remaining gaps by expediting jobs.

6. All scheduled new jobs at each gap are reordered so that they are in an SPT order inside each gap.

7. All unscheduled new jobs are scheduled at the end of the current schedule such that they are in an SPT order.

8. Calculate solution cost and stop.
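Steps 0–8 can be sketched at the assignment level as follows (a sketch with our own data layout: `gaps[i]` is the length of gap $G_i$ and `new_jobs` holds processing times in index order; the sketch returns which new jobs go into which gap and which are appended at the end, and omits the cost calculation of Step 8):

```python
# A sketch of Heuristic H1: jobs are scanned in LS (index) order; jobs that
# fit are packed into the current gap; leftover gap length is carried forward
# to the next gap (Step 4); jobs inside each gap and the jobs left over at
# the end are reordered by SPT (Steps 6-7).

def h1(gaps, new_jobs):
    jobs = list(enumerate(new_jobs))           # (index, processing time), LS order
    assigned, slack = [], 0
    for g in gaps:
        g += slack                             # Step 4: leftover capacity carried over
        in_gap, rest = [], []
        for idx, p in jobs:                    # Steps 2-3: scan remaining new jobs
            if p <= g:
                in_gap.append((idx, p))
                g -= p
            else:
                rest.append((idx, p))
        in_gap.sort(key=lambda jp: jp[1])      # Step 6: SPT inside the gap
        assigned.append(in_gap)
        jobs, slack = rest, g
    tail = sorted(jobs, key=lambda jp: jp[1])  # Step 7: unscheduled jobs, SPT, at end
    return assigned, tail

print(h1([5, 4], [3, 3, 2, 6]))
# → ([[(2, 2), (0, 3)], [(1, 3)]], [(3, 6)])
```

Because a job is placed only when it fits inside the (carried-over) gap, remaining jobs are never pushed later, so the no-tardiness condition is preserved.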
Heuristic H1 runs in O(n log n) time, and the resulting schedule does not contain inserted idle time. Let σ^{H1} be the schedule generated by H1 and z^{H1} the cost of σ^{H1}.
Next, we introduce Heuristic H2, which is identical to H1 except that it uses the LPT rule instead of the LS rule to select the next new job to be scheduled. The LPT rule is intended to improve the heuristic in terms of minimizing $\sum_{N_r}E_j$. However, as in H1, scheduled new jobs are reordered into SPT order inside each gap, and all new jobs scheduled at the end of the remaining jobs are reordered into SPT order.
Heuristic H2
Finally, we introduce Heuristic H3, which is identical to H1 except that it uses the SPT rule instead of the LS rule to select the next new job to be scheduled. The SPT rule is intended to improve the performance of the heuristic in terms of minimizing $\sum_{N_n}C_j$. Also, H3 is basically identical to Algorithm A1 except that it does not calculate the earliness cost.
Heuristic H3
Heuristics H2 and H3 run in O(n log n) time, and the resulting schedules do not contain inserted idle time. Let σ^{H2} and σ^{H3} be the schedules generated by H2 and H3, respectively, and let z^{H2} and z^{H3} be the costs of σ^{H2} and σ^{H3}, respectively.
6. EVALUATION MEASURES FOR HEURISTICS
In this section, we establish evaluation measures for the problems with different objectives. For the problem with the objective of minimizing $\sum_{j\in N_r}E_j+\sum_{j\in N_n}C_j$, we use the relative error measure $zr=z^H/z^*$ to analyze the heuristics, where $z^H$ is the solution value of a heuristic. However, for the problem with the objective of minimizing $\sum_{j\in N_r}E_j$, it is possible that $z^*=0$. Hence, it is unsuitable to use the relative error measure $z^H/z^*$ or $(z^H-z^*)/z^*$ to analyze the heuristics. As an alternative, we consider the z-approximation ratio of Hassin and Khuller (2001). An algorithm is an α z-approximation if it runs in polynomial time and produces a solution whose distance from the optimal one is at most α times the distance between the optimal solution and the worst possible solution. Then, the goal is to find a schedule σ with the property that
where 0 ≤α ≤ 1 and σ^{w} is the worst schedule which generates a maximum value of earliness.
However, the regular z-approximation ratio described above may have $z(\sigma^w)=z^*$. For instance, suppose $\sigma^d=(G_1,1)$, $g_1=1$, and $p_1=1$ with the objective of minimizing $\sum_{j\in N_r}E_j$, and suppose that there exists one new job, job 2, with $p_2=2$. Schedule (1, 2) is then both an optimal and a worst-case schedule, so the denominator of the z-approximation ratio becomes zero. This is a technical issue which can be overcome by specifying a value for the denominator when this situation occurs. Before we define a new z-approximation ratio, we begin by establishing the heuristic that generates the worst-case solution value.
The following lemma establishes that heuristic H0 generates the worst case solution value if the remaining job sequence is fixed as in σ^{d} .
Lemma 3. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$, heuristic H0 always generates the worst-case solution value.
Proof. Heuristic H0 schedules each job in R_{i} for i = 1, 2, …, q at the earliest possible time. □
As a result of Lemma 3, we replace $z(\sigma^w)$ by $z^{H0}$ in (5). Any heuristic should be no worse than heuristic H0 for problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$. Hence, we have the following formal definition of the modified z-approximation ratio.
$$zr'=\begin{cases}0 & \text{if } z(\sigma^{H0})=z^*=z(\sigma)\\[4pt] \dfrac{z(\sigma)-z^*}{z(\sigma^{H0})-z^*} & \text{if } z(\sigma^{H0})\ne z^*\end{cases}$$
where σ^{H0} is the schedule generated by H0 and σ is the schedule generated by the heuristic being evaluated. Notice that zr′ ≤ 1. Checking whether $z(\sigma^{H0})=z^*$ can be done easily: if $\sum_{i=1}^{q}g_i>\min_{j\in N_n}\{p_j\}$, then $z(\sigma^{H0})>z^*$; otherwise, $z(\sigma^{H0})=z^*$.
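The modified z-approximation ratio can be computed directly from the three solution values (a small sketch with our own function name; since any reasonable heuristic value lies between $z^*$ and $z^{H0}$, when these two coincide the heuristic value equals both, and the ratio is defined as 0):

```python
# A sketch of the modified z-approximation ratio zr': H0's value replaces the
# worst-case value in the denominator, and the ratio is 0 when z(sigma^H0) = z*.

def modified_z_ratio(z_h, z_star, z_h0):
    if z_h0 == z_star:          # heuristic, optimum, and H0 all coincide
        return 0.0
    return (z_h - z_star) / (z_h0 - z_star)

print(modified_z_ratio(2.0, 1.0, 3.0))  # 0.5
print(modified_z_ratio(0.0, 0.0, 0.0))  # 0.0
```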
7. WORST CASE BOUND ANALYSIS
In this section, we theoretically analyze the heuristics using worst-case bound analysis. The results show that, in terms of the worst case, the performance of all the heuristics is poor. We present the results for each of the problems.
7.1. Problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$
For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$, heuristic H0 always generates the worst-case solution value (Lemma 3). Hence, zr′ = 1 for heuristic H0.
The following result establishes the tight worst-case bound on the modified z-approximation ratio for H1.
Theorem 5. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$, $(z^{H1}-z^*)/(z^{H0}-z^*)\le 1-\epsilon$ for some ε > 0, and this bound is tight.
Proof. If an optimal schedule has at least one job scheduled at any gap, then H1 also schedules at least one job at some gap. Hence, zr′ cannot be equal to 1 and should be less than 1. Thus, we establish the result by presenting an instance where zr′ can be as close to 1 as possible by controlling ε for some ε > 0.
Consider an instance of problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$ where $N_r=\{1\}$, $N_n=\{2,3\}$, and $\sigma^d=(G_1,1)$. Also, $g_1=1$, $p_1=1$, $p_2=\epsilon$, and $p_3=1$ for some small ε > 0. Since H1 uses the LS rule to schedule jobs, the first available job, job 2, is scheduled at $G_1$ in σ^{H1}. Hence, σ^{H1} = (2, 1, 3) with solution value $z^{H1}=1-\epsilon$. However, the optimal schedule is σ^{*} = (3, 1, 2) with solution value $z^*=0$. For this instance, σ^{H0} = (1, 2, 3) with solution value $z^{H0}=1$. Therefore, $zr'=(1-\epsilon)/1=1-\epsilon$, which can be made as close to 1 as we want by reducing ε. Since such an instance exists, the bound is tight. □
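The instance in this proof is small enough to check numerically (a sketch; the earliness helper and data layout are ours, with schedules written as job sequences):

```python
# Numeric check of the Theorem 5 instance: g1 = 1, p1 = 1, p2 = eps, p3 = 1;
# job 1 originally completes at 2 in sigma^d = (G1, 1).

def earliness(seq, p, orig_completion):
    t, e = 0.0, 0.0
    for j in seq:
        t += p[j]
        if j in orig_completion:          # only remaining jobs incur earliness
            e += orig_completion[j] - t
    return e

eps = 1e-3
p = {1: 1.0, 2: eps, 3: 1.0}
orig = {1: 2.0}                           # completion of job 1 in sigma^d
z_h1 = earliness([2, 1, 3], p, orig)      # H1 (LS) puts job 2 in the gap
z_opt = earliness([3, 1, 2], p, orig)     # optimum fills the gap with job 3
z_h0 = earliness([1, 2, 3], p, orig)      # H0: remaining job first
print(z_h1, z_opt, z_h0)                  # 1 - eps, 0, 1
print((z_h1 - z_opt) / (z_h0 - z_opt))    # 1 - eps, arbitrarily close to 1
```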
A similar result can be established for heuristic H2.
Theorem 6. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$, $(z^{H2}-z^*)/(z^{H0}-z^*)\le 1-\epsilon$ for some ε > 0, and this bound is tight.
Proof. If an optimal schedule has at least one job scheduled at any gap, then heuristic H2 also schedules at least one job at some gap. Hence, zr′ cannot be equal to 1 and should be less than 1. Thus, we establish the result by presenting an instance where zr′ can be as close to 1 as possible by controlling ε for some ε > 0.
Consider an instance of problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$ where $N_r=\{1,2\}$, $N_n=\{3,4\}$, and $\sigma^d=(G_1,1,G_2,2)$. Also, $g_1=g_2=1$, $p_1=p_2=1$, $p_3=\epsilon$, and $p_4=2$ for some small ε > 0. Since $p_4>g_1$, job 3 is scheduled first in σ^{H2}. Then, job 4 is considered for $G_2$, but scheduling job 4 there would generate a schedule with tardiness. Hence, σ^{H2} = (3, 1, 2, 4) with solution value $z^{H2}=(1-\epsilon)+(2-\epsilon)=3-2\epsilon$. However, the optimal schedule is σ^{*} = (1, 4, 2, 3) with solution value $z^*=1$. For this instance, σ^{H0} = (1, 2, 3, 4) with solution value $z^{H0}=1+2=3$, and $zr'=(3-2\epsilon-1)/(3-1)=(2-2\epsilon)/2=1-\epsilon$, which can be made as close to 1 as we want by reducing ε. Since such an instance exists, the bound is tight. □
Finally, the same result can be established for H3.
Corollary 1. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j$, $(z^{H3}-z^*)/(z^{H0}-z^*)\le 1-\epsilon$ for some ε > 0, and this bound is tight.
Proof. Since H3 uses the SPT rule to schedule new jobs, the same instance in the proof of Theorem 5 can be used. □
7.2. Problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j+\sum_{j\in N_n}C_j$
The following result establishes that the worst-case bound for heuristic H0 is not bounded from above for problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j+\sum_{j\in N_n}C_j$.
Theorem 7. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j+\sum_{j\in N_n}C_j$, $z^{H0}/z^*$ is not bounded from above.
Proof. Consider an instance where $N_r=\{1\}$, $N_n=\{2,3,\dots,\ell+2\}$, and $\sigma^d=(G_1,1)$. Also, $g_1=0$, $p_1=1$, $p_2=\ell^3$, and $p_j=1$ for $j=3,4,\dots,\ell+2$. Since H0 does not use any specific rule to schedule new jobs, we can assume without loss of generality that all new jobs are scheduled in their index order. Also, the new jobs are scheduled after job 1 with $E_1=0$. However, an optimal solution schedules the new jobs in SPT order after job 1. Then,
Observe that equation (6) → ∞ as ℓ →∞ . □
A similar result can be established for H1 as follows.
Theorem 8. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j+\sum_{j\in N_n}C_j$, $z^{H1}/z^*$ is not bounded from above.
Proof. Consider an instance where $N_r=\{1\}$, $N_n=\{2,3,\dots,\ell+2\}$, and $\sigma^d=(G_1,1)$. Also, $g_1=\ell^3+\ell$, $p_1=1$, $p_2=\ell^3$, and $p_j=1$ for $j=3,4,\dots,\ell+2$. Since H1 uses the LS rule to schedule new jobs, all new jobs are scheduled in their index order. Also, the new jobs can be scheduled at $G_1$ without generating any earliness for the remaining job because $g_1=\sum_{j\in N_n}p_j$. However, an optimal solution schedules the new jobs in SPT order. Then,
Observe that equation (7) →∞ as ℓ →∞ . □
Corollary 2. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j+\sum_{j\in N_n}C_j$, $z^{H2}/z^*$ is not bounded from above.
Proof. Since heuristic H2 uses the LPT rule to schedule new jobs, the same instance used in the proof of Theorem 8 can be used. □
Finally, the same result can be established for H3.
Theorem 9. For problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy,\, \pi_R|\sum_{j\in N_r}E_j+\sum_{j\in N_n}C_j$, $z^{H3}/z^*$ is not bounded from above.
Proof. Consider an instance where $N_r=\{1,2,\dots,\ell\}$, $N_n=\{\ell+1,\ell+2\}$, and $\sigma^d=(G_1,1,2,\dots,\ell)$. Also, $g_1=\ell$, $p_j=1$ for $j=1,2,\dots,\ell$, $p_{\ell+1}=\epsilon$, and $p_{\ell+2}=\ell$. Since H3 uses the SPT rule to schedule new jobs, jobs ℓ + 1 and ℓ + 2 are considered in their index order. In σ^{H3}, only job ℓ + 1 can be scheduled at gap 1, and job ℓ + 2 is scheduled after the last remaining job, job ℓ. As a result, each remaining job generates an earliness cost of ℓ − ε. However, an optimal solution schedules job ℓ + 2 at gap 1, which fills the gap exactly and hence generates no earliness cost. Then,
Observe that equation (8) →∞ as ℓ →∞ . □
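The growth claimed in Theorem 9 can be illustrated numerically (a sketch; the closed-form costs below are derived from the two schedules described in the proof, not copied from equation (8)):

```python
# Illustration for the Theorem 9 instance: g1 = l, p_j = 1 for the l remaining
# jobs, p_{l+1} = eps, p_{l+2} = l. The ratio z_H3 / z* grows roughly like l/3.

def ratio(l, eps=0.5):
    # H3 (SPT) puts only the eps-job in the gap, so every remaining job is
    # early by l - eps; the big new job completes at eps + l + l.
    z_h3 = l * (l - eps) + eps + (eps + l + l)
    # Optimum fills the gap exactly with job l+2, then appends job l+1.
    z_opt = l + (2 * l + eps)
    return z_h3 / z_opt

for l in (10, 100, 1000):
    print(l, ratio(l))       # the ratio keeps growing with l
```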
8. COMPUTATIONAL STUDY
In this section, heuristics H1, H2, and H3 are empirically evaluated. The heuristics are tested on the problem with one of the two objectives, minimizing $\sum_{N_r}E_j$ and minimizing $\sum_{N_r}E_j+\sum_{N_n}C_j$.
Because finding $z^*$ is computationally intensive, we use the lower bound $z^L=0$ for the problem with the objective of minimizing $\sum_{N_r}E_j$. Hence, the performance indicators for H1, H2, and H3 are the upper bounds on the modified z-approximation ratio: $z^{H1}/z^{H0}$, $z^{H2}/z^{H0}$, and $z^{H3}/z^{H0}$, respectively.
Similarly, for the objective of minimizing $\sum_{N_r}E_j+\sum_{N_n}C_j$, we use the lower bound $z^{A1}$ instead of $z^*$. Recall that $z^{A1}$ is the optimal solution value for problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy|\sum_{N_n}C_j$; that is, it assumes $\sum_{N_r}E_j=0$, the minimum possible value for problem $1|dsrpt,\, no\text{-}idle,\, no\text{-}trdy|\sum_{N_r}E_j+\sum_{N_n}C_j$. Then, the performance indicators for H1, H2, and H3 are the upper bounds on the relative error ratio $zr=z^H/z^*$: $z^{H1}/z^{A1}$, $z^{H2}/z^{A1}$, and $z^{H3}/z^{A1}$, respectively.
We compare the performances of H1, H2, and H3 under various conditions. Specifically, we observe the impact of different factors such as n_{o}, n_{c}, and n_{n}. For a given set of test problems, n_{o}, n_{c}, and n_{n} are fixed. We generate 810 test problems under 27 conditions for the problem.
To test the effect of varying the number of original jobs $n_o$, three values of $n_o$ are considered: 10, 50, and 100. To determine the effect of varying the number of canceled jobs $n_c$, three values of $n_c$ are considered for each of the three $n_o$ values: $0.1n_o$, $0.2n_o$, and $0.3n_o$. To observe the impact of different numbers of new jobs, we consider three values of $n_n$ for each of the three $n_c$ values: $n_c$, $2n_c$, and $3n_c$. It is assumed that $p_j\sim DU[1,99]$ for $j\in\{1,2,\dots,n_o+n_n\}$, where $DU[\ell,u]$ is a discrete random variable uniformly distributed between ℓ and u; the corresponding standard deviation is approximately 28.58.
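The instance generation described above can be sketched as follows (the function name and job encoding are ours, and the paper's actual generator may differ in details such as which jobs are canceled; here the canceled jobs are drawn at random):

```python
# A sketch of the instance generator: processing times are drawn from
# DU[1, 99]; n_c of the n_o original jobs are canceled, and n_n new jobs arrive.

import random

def make_instance(n_o, n_c, n_n, seed=0):
    rng = random.Random(seed)
    p = [rng.randint(1, 99) for _ in range(n_o + n_n)]   # DU[1, 99]
    canceled = set(rng.sample(range(n_o), n_c))          # assumption: random cancels
    remaining = [j for j in range(n_o) if j not in canceled]
    new_jobs = list(range(n_o, n_o + n_n))
    return p, remaining, new_jobs

p, remaining, new_jobs = make_instance(n_o=100, n_c=20, n_n=40)
print(len(remaining), len(new_jobs))  # 80 40
```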
To avoid excessive testing, $n_n=2n_c$ is used as the default. For instance, when we test the effect of varying the number of new jobs with $n_o=100$ and $n_c=20$, only instances with $n_n=40$ are used. The mean zr and mean zr′ are the arithmetic means of the relative error ratio and the modified z-approximation ratio, respectively. Each mean is calculated over 30 instances of each problem type. The program is implemented in C and run on a PC with an Intel Core i5-3470 processor (3.20 GHz) and 4 GB RAM.
In Table 1, we present the mean modified z-approximation ratio for H1, H2, and H3, where $z^L$ is used in place of $z^*=0$. The results in the table are presented side by side for comparison of the heuristics. The results indicate that the performance of the heuristics is much better than that of H0, which represents the schedule where all new jobs are scheduled at the end of the disrupted schedule. The mean modified z-approximation ratio of all heuristics becomes smaller as $n_o$ increases, and for the same $n_o$ value, it becomes smaller as $n_c$ increases. Hence, we may conclude that the performance of H1, H2, and H3 improves as $n_o$ increases, and for the same $n_o$ value, the heuristics perform better as $n_c$ (and hence $n_n$) increases.
The mean modified z-approximation ratio of H2 is clearly the smallest among the three heuristics in each cell where $n_o$, $n_c$, and $n_n$ are fixed. Consequently, H2 performs better than H1 and H3. As expected, the LPT rule appears to perform better than the LS and SPT rules in filling the gaps with newly arrived jobs. Regarding the comparison between H1 and H3, H1 performs better than H3 in all cases. Hence, filling the gaps with new jobs by the SPT rule does not appear to be a good idea. Also, H1 and H2 are structured so that new jobs are reordered into SPT order inside each gap, and all new jobs scheduled at the end of the remaining jobs are reordered into SPT order. As a result, H1 and H2 take advantage of the SPT rule to some degree in terms of minimizing the total completion time of new jobs and hence perform better than H3.
For the same $n_o$ and $n_c$ values, we vary $n_n$ to see the effect of different $n_n$ values. The results are presented in Tables 2, 3, and 4. They indicate that for all heuristics, the mean modified z-approximation ratio becomes smaller as $n_n$ increases, except for the cases $n_c=1$ and $n_n=3$ for H1 and H3, and $n_c=5$ and $n_n=15$ for H1. In other words, if there are more new jobs to schedule at the gaps, then the performance of H1, H2, and H3 generally improves.
In Table 5, we present the mean relative error ratio for H1, H2, and H3, where $z^{A1}$ is used as a lower bound for $z^*$. The results are presented as in Table 1. The mean relative error ratio of all heuristics becomes smaller as $n_o$ increases, and for the same $n_o$ value, it becomes smaller as $n_c$ increases. Hence, we may conclude that the performance of H1, H2, and H3 improves as $n_o$ increases, and for the same $n_o$ value, the heuristics perform better as $n_c$ (and hence $n_n$) increases.
As in the case where the objective is minimizing $\sum_{N_r}E_j$, the mean relative error ratio of H2 is clearly the smallest among the three heuristics in each cell where $n_o$, $n_c$, and $n_n$ are fixed. Consequently, H2 performs better than both H1 and H3. Regarding the comparison between H1 and H3, H1 performs better than H3 in all cases; the reasons should be similar to those for the objective of minimizing $\sum_{N_r}E_j$. It is worth noting that even though A1, which is almost identical to H3, generates an optimal schedule for the problem with the objective of minimizing $\sum_{N_n}C_j$, H3 is the worst performer for the problem of minimizing $\sum_{N_r}E_j+\sum_{N_n}C_j$.
For the same $n_o$ and $n_c$ values, we vary $n_n$ to see the effect of different $n_n$ values. The results are presented in Tables 6, 7, and 8. They indicate that for all heuristics, the mean relative error ratio becomes smaller as $n_n$ increases in all cases. In other words, if there are more new jobs to schedule at the gaps, then the performance of H1, H2, and H3 improves.
9. SUMMARY AND FURTHER RESEARCH
In this paper, we explored a single-machine rescheduling problem with two types of disruptions: order cancelations and newly arrived orders. The original objective is minimizing total completion time; after the disruptions occur, we reschedule the remaining and newly arrived orders under one of three new objectives, subject to no inserted idle time and no tardiness of the remaining orders.
First, we consider the problem where the rescheduling objective is minimizing the total earliness with no change in the sequence of the remaining orders. Second, since the efficiency-related objective is still minimizing total completion time, we minimize the total completion time of the new jobs. Third, we consider a composite objective: minimizing the sum of the total completion time of the new jobs and the total earliness of the remaining jobs, with no change in the sequence of the remaining orders.
We establish the complexity of several variations of the problem, and for the objective of minimizing the total completion time of new jobs, we develop an optimal solution procedure. For the other two objectives, we develop four simple but intuitive heuristics and analyze their performance using worst-case bound analysis. Finally, we empirically analyze three of the heuristics; H0 is excluded because it serves as an upper bound for the others. The results indicate that H2 performs better than H1 and H3 and, in turn, H1 performs better than H3 for both objectives. While A1, which is almost identical to H3, generates an optimal schedule for the problem with the objective of minimizing $\sum_{N_n}C_j$, H3 is the worst performer for the problem of minimizing $\sum_{N_r}E_j+\sum_{N_n}C_j$.
We believe the major contributions of this paper include the identification of a new rescheduling problem, the complexity proofs, and the development and analysis of the heuristics. There are several important possible extensions of this research. First, multiple-machine cases, possibly with different original objectives, would be worthwhile to consider. Also worth considering are situations with other types of order disruptions, such as changes in order priority and due dates.