ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.12 No.3 pp.244-253

# A Hybrid Method to Improve Forecasting Accuracy Utilizing Genetic Algorithm: An Application to the Data of Processed Cooked Rice

Hiromasa Takeyasu*, Yuki Higuchi, Kazuhiro Takeyasu
*Faculty of Life and Culture, Kagawa Junior College, Kagawa, Japan
Department of Economics, Osaka Prefecture University, Osaka, Japan
College of Business Administration, Fuji-Tokoha University, Shizuoka, Japan
(Received: October 17, 2012 / Revised: January 3, 2013; August 10, 2013 / Accepted: August 14, 2013)

### Abstract

In industries, improving the forecasting accuracy of shipping and sales is an important issue. This paper introduces a hybrid method, and plural methods are compared. Focusing on the fact that the equation of the exponential smoothing method (ESM) is equivalent to the (1,1) order autoregressive-moving-average (ARMA) model equation, a new method of estimating the smoothing constant in ESM, which satisfies minimum variance of the forecasting error, had been proposed previously by us. Generally, the smoothing constant is selected arbitrarily; this paper, however, utilizes the above-stated theoretical solution. Firstly, we estimate the ARMA model parameter and then estimate the smoothing constant. Thus, the theoretical solution is derived in a simple way and may be utilized in various fields. Furthermore, combining the trend removing method with this method, we aim to improve forecasting accuracy. The method is executed as follows. Trend removing by the combination of linear, 2nd order nonlinear, and 3rd order nonlinear functions is executed on the original production data of two kinds of processed cooked rice. A genetic algorithm is utilized to search the optimal weights for the weighting parameters of the linear and nonlinear functions. For comparison, the monthly trend is removed after that. The theoretical solution of the smoothing constant of ESM is calculated for both the monthly-trend-removed data and the non-monthly-trend-removed data, and forecasting is then executed on these data. The new method proves useful for time series that have various trend characteristics and a rather strong seasonal trend. The effectiveness of this method should be examined in various cases.


### 1. INTRODUCTION

Many methods for time series analysis have been presented, such as the autoregressive (AR) model, the autoregressive moving-average (ARMA) model, and the exponential smoothing method (ESM) (Box et al., 1994; Brown, 1963; Kobayashi, 1992; Tokumaru et al., 1982). Among these, ESM is said to be a practical, simple method.

For this method, various improvements have been presented, such as adding a compensating item for time lag, coping with time series with trend (Winters, 1960), utilizing the Kalman filter (Maeda, 1984), Bayes forecasting (West and Harrison, 1989), adaptive ESM (Ekern, 1982), exponentially weighted moving averages with irregular updating periods (Johnston, 1993), and making averages of forecasts using plural methods (Makridakis and Winkler, 1983). For example, Maeda (1984) calculated the smoothing constant in relation to the signal-to-noise ratio under the assumption that observation noise was added to the system. However, he had to calculate under an assumed noise level because the observation noise could not be identified; it can be said that the optimum solution is not pursued from the very data themselves, from which it should be derived. Ishii et al. (1981) pointed out that the optimal smoothing constant is the solution of an infinite-order equation, but did not show an analytical solution. There are some papers that utilize neural networks. Miki and Yamakawa (1996) generated time series data by a nonlinear system, but it is a stationary one and does not have a significant trend. Ogasahara and Inoue (2009) utilized a hybrid neural network for prediction. It searches near neighbors that have similarly typed data and utilizes them as a forecast, but it needs a huge amount of data; therefore, it cannot be used in our case of forecasting with monthly data. Takaho et al. (2002) used data pre-handling such as low/high-pass filtering, which may be regarded as a kind of trend removing. Based on these facts, we previously proposed a new method of estimating the smoothing constant in ESM (Takeyasu and Nagao, 2008). Focusing on the equation of ESM that is equivalent to the (1,1) order ARMA model equation, a new method of estimating the smoothing constant in ESM was derived.

In this paper, utilizing the above-stated method, a revised forecasting method is proposed. In forecasting production data, a trend removing method is devised. Trend removing by the combination of linear, 2nd order nonlinear, and 3rd order nonlinear functions is executed on the original production data of two kinds of processed cooked rice. A genetic algorithm (GA) is utilized to search the optimal weights for the weighting parameters of the linear and nonlinear functions. For comparison, the monthly trend is removed after that. The theoretical solution of the smoothing constant of ESM is calculated for both the monthly-trend-removed data and the non-monthly-trend-removed data. Then, forecasting is executed on these data. This is a revised forecasting method: trend removal and GA are not used in the paper of Takeyasu and Nagao (2008). The variance of the forecasting error of this newly proposed method is expected to be less than that of the previously proposed method. The rest of the paper is organized as follows. In Section 2, ESM is described by the ARMA model, and the estimation method of the smoothing constant is derived using ARMA model identification. The combination of linear and nonlinear functions is introduced for trend removing in Section 3. The monthly ratio is referred to in Section 4. The forecasting accuracy measure is given in Section 5, the GA search for the optimal weights is described in Section 6, and a numerical example is examined in Section 7.

### 2. DESCRIPTION OF ESM USING ARMA MODEL

In ESM, forecasting at time $t+1$ is stated by the following equations:

$$\hat{x}_{t+1} = \hat{x}_t + \alpha(x_t - \hat{x}_t) \tag{1}$$

$$\hat{x}_{t+1} = \alpha x_t + (1-\alpha)\hat{x}_t \tag{2}$$

Here,
$\hat{x}_{t+1}$ : forecast at $t+1$
$x_t$ : realized value at $t$
$\alpha$ : smoothing constant ($0 < \alpha < 1$)

(2) is re-stated as

$$\hat{x}_{t+1} = \sum_{l=0}^{\infty} \alpha(1-\alpha)^l x_{t-l} \tag{3}$$

By the way, we consider the following (1,1) order ARMA model:

$$x_t - x_{t-1} = e_t - \beta e_{t-1} \tag{4}$$

Generally, the $(p, q)$ order ARMA model is stated as

$$x_t + \sum_{i=1}^{p} a_i x_{t-i} = e_t + \sum_{j=1}^{q} b_j e_{t-j} \tag{5}$$

Here,
$\{x_t\}$ : sample process of a stationary ergodic Gaussian process $x(t)$, $t = 1, 2, \ldots, N, \ldots$
$\{e_t\}$ : Gaussian white noise with mean 0 and variance $\sigma_e^2$

The MA process in (5) is supposed to satisfy the invertibility condition. Utilizing the relation

$$E[x_{t+1} \mid x_t, x_{t-1}, \ldots] = \hat{x}_{t+1}$$

we get the following equation from (4):

$$\hat{x}_{t+1} = x_t - \beta e_t \tag{6}$$

Operating this scheme on $t+1$, we finally get

$$\hat{x}_{t+1} = \hat{x}_t + (1-\beta)(x_t - \hat{x}_t)$$

If we set $1-\beta = \alpha$, the above equation is the same as (1); i.e., the equation of ESM is equivalent to the (1,1) order ARMA model, or is said to be the (0,1,1) order ARIMA model because the 1st order AR parameter is $-1$. Comparing (4) with (5), we obtain

$$a_1 = -1, \qquad b_1 = -\beta \tag{7}$$

From (1) and (7), $\beta = 1 - \alpha$; therefore, we get

$$b_1 = \alpha - 1 \tag{8}$$

From the above, we can get the estimation of the smoothing constant after we identify the parameter of the MA part of the ARMA model. Generally, however, the MA part of the ARMA model leads to the nonlinear equations described below.

Let (5) be

$$x_t = e_t + \sum_{j=1}^{q} b_j e_{t-j} \tag{9}$$

We express the autocorrelation function of $x_t$ as

$$r_k = E[x_t x_{t-k}], \qquad \rho_k = \frac{r_k}{r_0} \tag{10}$$

and from (9) and (10), we get the following nonlinear equations, which are well known:

$$r_k = \sigma_e^2 \sum_{j=0}^{q-k} b_j b_{j+k} \quad (k \le q;\ b_0 = 1), \qquad r_k = 0 \quad (k > q) \tag{11}$$

For these equations, a recursive algorithm has been developed. In this paper, the only parameter to be estimated is $b_1$, so it can be solved in the following way.

From (4), (5), (8), and (11), we get

$$\rho_1 = \frac{r_1}{r_0} = \frac{-\beta}{1+\beta^2} = \frac{b_1}{1+b_1^2} \tag{12}$$

Rearranging (12), the following quadratic equation in $b_1$ is derived:

$$\rho_1 b_1^2 - b_1 + \rho_1 = 0 \tag{13}$$

We can get $b_1$ as follows:

$$b_1 = \frac{1 \pm \sqrt{1 - 4\rho_1^2}}{2\rho_1} \tag{14}$$

In order to have real roots, $\rho_1$ must satisfy

$$|\rho_1| \le \frac{1}{2} \tag{15}$$

From the invertibility condition, $b_1$ must satisfy

$$|b_1| < 1 \tag{16}$$

From (14), using the relations $(1-b_1)^2 \ge 0$ and $(1+b_1)^2 \ge 0$, (16) always holds.

As $b_1 = \alpha - 1$ and $0 < \alpha < 1$, $b_1$ is within the range $-1 < b_1 < 0$. Finally, we get

$$b_1 = \frac{1 - \sqrt{1 - 4\rho_1^2}}{2\rho_1}, \qquad \alpha = 1 + b_1 = \frac{2\rho_1 + 1 - \sqrt{1 - 4\rho_1^2}}{2\rho_1} \tag{17}$$

which satisfies the above conditions. Thus, we can obtain a theoretical solution in a simple way. Focusing on the idea that the equation of ESM is equivalent to the (1,1) order ARMA model equation, we can estimate the smoothing constant after estimating the ARMA model parameters. It can be estimated only by calculating the 0th- and 1st-order autocorrelation functions.
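As a minimal sketch of this estimation route (the function name and implementation details are ours, not the paper's): compute the first-order autocorrelation $\rho_1$ of the differenced series, take the root of (17) lying in $(-1, 0)$, and set $\alpha = 1 + b_1$:

```python
import numpy as np

def smoothing_constant(x):
    """Theoretical smoothing constant of minimum variance, Eq. (17).

    x : 1-D array of observations.  The (1,1) ARMA model (4) describes
    the differenced series, so rho_1 is computed on np.diff(x).
    """
    d = np.diff(x)                      # x_t - x_{t-1}, the MA(1) side of (4)
    d = d - d.mean()
    r0 = np.dot(d, d)                   # 0th-order autocovariance (up to 1/N)
    r1 = np.dot(d[:-1], d[1:])          # 1st-order autocovariance
    rho1 = r1 / r0                      # Eq. (12)
    if abs(rho1) > 0.5:                 # Eq. (15): no real root of (13)
        raise ValueError("rho_1 out of range; model (4) is inappropriate")
    b1 = (1 - np.sqrt(1 - 4 * rho1**2)) / (2 * rho1)   # Eq. (17), minus root
    return 1 + b1                       # alpha = 1 + b_1
```

For a series whose differences follow an MA(1) process with parameter $\beta$, the estimate should approach $\alpha = 1 - \beta$.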

### 3. TREND REMOVAL METHOD

As the trend removal method, we describe the combination of linear and nonlinear functions.

#### 3.1 Linear Function

We set

$$y = a_1 x + b_1 \tag{18}$$

as a linear function, where $(a_1, b_1)$ are parameters estimated by the least squares method.

#### 3.2 Nonlinear Function

We set

$$y = a_2 x^2 + b_2 x + c_2 \tag{19}$$

$$y = a_3 x^3 + b_3 x^2 + c_3 x + d_3 \tag{20}$$

as 2nd and 3rd order nonlinear functions. $(a_2, b_2, c_2)$ and $(a_3, b_3, c_3, d_3)$ are also parameters of the 2nd and 3rd order nonlinear functions, estimated by using the least squares method.

#### 3.3 The Combination of Linear and Nonlinear Function

We set

$$y = \alpha_1 (a_1 x + b_1) + \alpha_2 (a_2 x^2 + b_2 x + c_2) + \alpha_3 (a_3 x^3 + b_3 x^2 + c_3 x + d_3) \tag{21}$$

$$0 \le \alpha_1 \le 1,\quad 0 \le \alpha_2 \le 1,\quad 0 \le \alpha_3 \le 1,\qquad \alpha_1 + \alpha_2 + \alpha_3 = 1 \tag{22}$$

as the combination of the linear, 2nd order nonlinear, and 3rd order nonlinear functions. The trend is removed by dividing the original data by (21). The optimal weighting parameters $\alpha_1, \alpha_2, \alpha_3$ are determined by utilizing GA. The GA method is precisely described in Section 6.
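Under our reading of this subsection (helper names are ours), the three trend functions of Sections 3.1 and 3.2 are fitted once by least squares, then blended with the weights $\alpha_1, \alpha_2, \alpha_3$ of (21) subject to (22):

```python
import numpy as np

def fit_trends(x):
    """Least-squares fits of the linear, quadratic, and cubic trend
    functions to the series x, with t = 1, 2, ..., N."""
    t = np.arange(1, len(x) + 1)
    return [np.polyfit(t, x, deg) for deg in (1, 2, 3)]

def combined_trend(coeffs, n, w):
    """Weighted combination (21) with weights w = (a1, a2, a3), Eq. (22)."""
    assert abs(sum(w) - 1.0) < 1e-9 and all(0 <= wi <= 1 for wi in w)
    t = np.arange(1, n + 1)
    return sum(wi * np.polyval(c, t) for wi, c in zip(w, coeffs))
```

Usage: `detrended = x / combined_trend(fit_trends(x), len(x), (0.5, 0.3, 0.2))`, mirroring "the trend is removed by dividing the original data by (21)".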

### 4. MONTHLY RATIO

For example, suppose there are monthly data for $L$ years as stated below:

$$\{x_{ij}\} \qquad (i = 1, \ldots, L;\ j = 1, \ldots, 12),\quad x_{ij} \in \mathbf{R}$$

where $j$ means month, $i$ means year, and $x_{ij}$ is the production datum of the $i$-th year, $j$-th month. Then the monthly ratio $\tilde{x}_j$ $(j = 1, \ldots, 12)$ is calculated as follows:

$$\tilde{x}_j = \frac{\dfrac{1}{L}\displaystyle\sum_{i=1}^{L} x_{ij}}{\dfrac{1}{12L}\displaystyle\sum_{i=1}^{L}\sum_{j=1}^{12} x_{ij}} \tag{23}$$

The monthly trend is removed by dividing the data by (23). Numerical examples of both the monthly-trend-removal case and the non-removal case are discussed in Section 7.
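A direct sketch of (23), assuming the data are arranged as an $L \times 12$ array (year by month; the array layout and function names are our assumptions):

```python
import numpy as np

def monthly_ratio(x):
    """Eq. (23): the mean of each month across L years divided by the
    grand mean over all L*12 observations.  x has shape (L, 12)."""
    return x.mean(axis=0) / x.mean()

def remove_monthly_trend(x):
    """Divide each observation by its month's ratio (Section 4)."""
    return x / monthly_ratio(x)
```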

### 5. FORECASTING ACCURACY

Forecasting accuracy is measured by calculating the variance of the forecasting error. The variance of the forecasting error is calculated by:

$$\sigma_\varepsilon^2 = \frac{1}{N-1}\sum_{i=1}^{N} (\varepsilon_i - \bar{\varepsilon})^2 \tag{24}$$

where the forecasting error is expressed as:

$$\varepsilon_i = x_i - \hat{x}_i, \qquad \bar{\varepsilon} = \frac{1}{N}\sum_{i=1}^{N} \varepsilon_i \tag{25}$$

### 6. SEARCHING OPTIMAL WEIGHTS UTILIZING GA

#### 6.1 Definition of the Problem

We search the $\alpha_1, \alpha_2, \alpha_3$ of (21) which minimize (24) by utilizing GA. By (22), we only have to determine $\alpha_1$ and $\alpha_2$. As $\sigma_\varepsilon^2$ of (24) is then a function of $\alpha_1$ and $\alpha_2$, we express it as $\sigma_\varepsilon^2(\alpha_1, \alpha_2)$. Now, we pursue the following:

Minimize $\sigma_\varepsilon^2(\alpha_1, \alpha_2)$ subject to $0 \le \alpha_1 \le 1$, $0 \le \alpha_2 \le 1$, $\alpha_1 + \alpha_2 \le 1$.

We do not necessarily have to utilize GA for this problem, which has a small number of variables. However, considering the possibility that the number of variables will increase when we use a logistic curve, etc., in the near future, we want to ascertain the effectiveness of GA.

#### 6.2 The Structure of the Gene

The gene is expressed in the binary system, using {0, 1} bits. The domain of each variable is [0, 1] from (22). We suppose that variables take values down to the second decimal place. As the length of the domain of each variable is 1 − 0 = 1, seven bits are required to express a variable (2^7 = 128 covers the 101 levels 0.00, 0.01, …, 1.00). The decimal number, the binary number, and the corresponding real number in the case of 7 bits are expressed in Table 1. Table 1. Corresponding table of the decimal number, the binary number, and the real number

One variable is expressed by 7 bits; therefore, 2 variables need 14 bits. The gene structure is exhibited in Table 2. Table 2. The gene structure
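A sketch of this encoding (the exact integer-to-real mapping of Table 1 is not reproduced above, so our reading — integer value clipped at 100, divided by 100 — is an assumption, as is the function name):

```python
def decode_gene(bits):
    """Decode a 14-bit gene (Table 2) into the weights (a1, a2, a3).

    Each 7-bit half is read as an integer and mapped to a weight with two
    decimal places (our reading of Table 1; values above 100 are clipped
    to 1.00).  a3 follows from the constraint (22)."""
    assert len(bits) == 14 and set(bits) <= {0, 1}
    def real(half):
        n = int("".join(map(str, half)), 2)
        return min(n, 100) / 100.0
    a1, a2 = real(bits[:7]), real(bits[7:])
    if a1 + a2 > 1.0:            # infeasible under (22); caller rejects/repairs
        return None
    return a1, a2, round(1.0 - a1 - a2, 2)
```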

#### 6.3 The Flow of Algorithm

The flow of algorithm is exhibited in Figure 1. Figure 1. The flow of algorithm.

##### 6.3.1 Initial population

Generate M initial individuals. Here, M = 100. Each individual is generated so as to satisfy (22).

##### 6.3.2 Calculation of fitness

First, calculate the forecasting value. There are 36 monthly data for each case. We use 24 data (the 1st to 24th) and remove the trend by the method described in Section 3. Then we calculate the monthly ratio by the method described in Section 4. After removing the monthly trend, the method explained in Section 2 is applied, and the exponential smoothing constant with minimum variance of the forecasting error is estimated. Then a one-step forecast is executed. The data window is then shifted to the 2nd to 25th data and the forecast for the 26th datum is executed, and so on consecutively, finally reaching the forecast of the 36th datum. To examine the accuracy of forecasting, the variance of the forecasting error is calculated for the 25th to 36th data. The final forecast is obtained by multiplying the trend and the monthly ratio back in. The variance of the forecasting error is calculated by (24). The calculation of fitness is exhibited in Figure 2. Figure 2. The flow of calculation of fitness.
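The sliding-window evaluation above can be sketched as follows (a toy version: trend and monthly-ratio removal are omitted for brevity, the smoothing constant is passed in as a parameter rather than estimated, and the function names are ours):

```python
import numpy as np

def exp_smooth_forecast(x, alpha):
    """One-step ESM forecast from the history x, via the recursion (1)."""
    f = x[0]
    for v in x[1:]:
        f = f + alpha * (v - f)      # x_hat_{t+1} = x_hat_t + alpha(x_t - x_hat_t)
    return f

def rolling_eval(x, window=24, alpha=0.5):
    """Slide a `window`-long history over x, forecast one step ahead each
    time, and return the variance (24) of the errors on the held-out
    points (the 25th to 36th data in the paper's setting)."""
    errors = []
    for start in range(len(x) - window):
        hist = x[start:start + window]
        errors.append(x[start + window] - exp_smooth_forecast(hist, alpha))
    return np.var(errors, ddof=1)
```

A constant series is forecast exactly, so its error variance is zero; real series produce the $\sigma_\varepsilon^2(\alpha_1, \alpha_2)$ minimized by the GA.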

Scaling (Ogasahara and Inoue, 2009) is executed such that the fitness becomes large when the variance of the forecasting error becomes small. Fitness is defined as follows:

$$f(\alpha_1, \alpha_2) = U - \sigma_\varepsilon^2(\alpha_1, \alpha_2)$$

where $U$ is the maximum of $\sigma_\varepsilon^2(\alpha_1, \alpha_2)$ during the past $W$ generations. Here, $W$ is set to be 5.

##### 6.3.3 Selection

Selection is executed by the combination of the general elitist selection and tournament selection. Elitist selection is executed until the number of new elites reaches the predetermined number; after that, tournament selection is executed.
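A compact sketch of this elitist-plus-tournament scheme (the elite count and tournament size are our assumptions; the paper does not state them here):

```python
import random

def select(population, fitnesses, n_elites=2, tour_size=2):
    """Keep the n_elites fittest individuals unchanged, then fill the
    rest of the next generation by tournament selection."""
    ranked = sorted(range(len(population)),
                    key=lambda i: fitnesses[i], reverse=True)
    next_gen = [population[i] for i in ranked[:n_elites]]   # elitism
    while len(next_gen) < len(population):
        contenders = random.sample(range(len(population)), tour_size)
        winner = max(contenders, key=lambda i: fitnesses[i])
        next_gen.append(population[winner])
    return next_gen
```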

##### 6.3.4 Crossover

Crossover is executed by the uniform crossover, with crossover rate $P_c$.

##### 6.3.5 Mutation

Mutation is executed on each bit with mutation rate (probability) $P_m$; therefore, the expected number of mutated bits in a population of $M$ genes is $P_m \times M \times 14$.

We examined one-point crossover, two-point crossover, and uniform crossover, and found that uniform crossover was the best in convergence; therefore, we adopted uniform crossover in this case. We also varied the mutation rate and found that the adopted value was the best in performance (Takeyasu and Kainosho, 2012).
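Uniform crossover and bitwise mutation on the 14-bit genes can be sketched as below. The default rates are placeholders, not the paper's values, and the function names are ours:

```python
import random

def uniform_crossover(p1, p2, pc=0.7):
    """Uniform crossover (Section 6.3.4): with probability pc, each bit
    position of the children is drawn independently from either parent;
    otherwise the parents are copied unchanged."""
    if random.random() >= pc:
        return p1[:], p2[:]
    c1, c2 = [], []
    for b1, b2 in zip(p1, p2):
        if random.random() < 0.5:
            c1.append(b1); c2.append(b2)
        else:
            c1.append(b2); c2.append(b1)
    return c1, c2

def mutate(gene, pm=0.05):
    """Bitwise mutation (Section 6.3.5): flip each bit with probability
    pm, so about pm * M * 14 bits flip across a population of M genes."""
    return [b ^ 1 if random.random() < pm else b for b in gene]
```

Note that uniform crossover only redistributes the parents' bits position by position; it never introduces bit values absent from both parents, which is why mutation is still needed.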

### 7. NUMERICAL EXAMPLE

#### 7.1 Application to the Original Production Data of Processed Cooked Rice

We analyzed the original production data of processed cooked rice for 2 cases (data of chilled cooked rice and of frozen cooked rice: Annual Report of Statistical Research, Ministry of Agriculture, Forestry and Fisheries, Japan) from January 2008 to December 2010. Furthermore, the GA results are compared with the calculation results of all considerable cases in order to confirm the effectiveness of the GA approach. First, graphical charts of these time series data are exhibited in Figures 3 and 4. Figure 3. Data of chilled cooked rice. Figure 4. Data of frozen cooked rice. Figure 5. Convergence process in the case of chilled cooked rice (monthly ratio is not used). Figure 6. Convergence process in the case of chilled cooked rice (monthly ratio is used). Figure 7. Convergence process in the case of frozen cooked rice (monthly ratio is not used). Figure 8. Convergence process in the case of frozen cooked rice (monthly ratio is used).

#### 7.2 Execution Results

The GA execution conditions are exhibited in Table 3. Table 3. Genetic algorithm (GA) execution conditions.

We repeated the procedure 10 times; the maximum, average, and minimum of the variance of the forecasting error, together with the average convergence generation, are exhibited in Tables 4 and 5. Table 4. Genetic algorithm execution results (monthly ratio is not used) Table 5. Genetic algorithm execution results (monthly ratio is used)

For frozen cooked rice, the variance of the forecasting error in the case that the monthly ratio is not used is smaller than in the case that it is used. This may be because frozen cooked rice does not have a definite seasonal trend in general.

The minimum variance of forecasting error of GA coincides with those of the calculation of all considerable cases, and it shows the theoretical solution. Although it is a rather simple problem for GA, we can confirm the effectiveness of GA approach. Further study should examine the complex problems hereafter.

Next, optimal weights and their genes are exhibited in Tables 6 and 7. Table 6. Optimal weights and their genes (monthly ratio is not used) Table 7. Optimal weights and their genes (monthly ratio is used)

In both the case that the monthly ratio is not used and the case that it is used, the linear function model is best. Parameter estimation results for the trend of Eq. (21), using the least squares method on the 1st to 24th data, are exhibited in Table 8. Table 8. Parameter estimation results for the trend of Eq. (21)

Trend curves are exhibited in Figures 9 and 10. Figure 9. Trend of chilled cooked rice. Figure 10. Trend of frozen cooked rice.

Calculation results of the monthly ratio for the 1st to 24th data are exhibited in Table 9. Table 9. Parameter estimation result of monthly ratio

Estimation results of the smoothing constant of minimum variance for the 1st to 24th data are exhibited in Tables 10 and 11. Table 10. Smoothing constant of minimum variance of Eq. (17) (monthly ratio is not used) Table 11. Smoothing constant of minimum variance of Eq. (17) (monthly ratio is used)

Forecasting results are exhibited in Figures 11 and 12. Figure 11. Forecasting result of chilled cooked rice. Figure 12. Forecasting result of frozen cooked rice.

#### 7.3 Remarks

In the case of chilled cooked rice, forecasting accuracy was better when the monthly ratio was not used. On the other hand, frozen cooked rice had better forecasting accuracy when the monthly ratio was used. Both cases had a good result with the linear function model.


### 8. CONCLUSION

Focusing on the idea that the equation of ESM is equivalent to (1,1) order ARMA model equation, a new method of estimating smoothing constant in exponential smoothing method had been proposed previously by us which satisfied minimum variance of forecasting error. Generally, the smoothing constant was selected arbitrarily. But in this paper, we utilized the above stated theoretical solution. Firstly, we made estimation of the ARMA model parameter and then estimated the smoothing constant. Thus theoretical solution was derived in a simple way, and it may be utilized in various fields.

Furthermore, combining the trend removal method with this method, we aimed to improve forecasting accuracy. The method was executed as follows. Trend removal by the combination of linear and nonlinear functions was applied to the original production data of processed cooked rice. GA was utilized to search the optimal weights for the weighting parameters of the linear and nonlinear functions. For comparison, the monthly trend was removed after that. The theoretical solution of the smoothing constant of ESM was calculated for both the monthly-trend-removed data and the non-monthly-trend-removed data, and forecasting was then executed on these data. The new method proves useful for time series that have various trend characteristics. The effectiveness of this method should be examined in various cases.

### References

1. Box, G. E. P., Jenkins, G. M., and Reinsel, G. C. (1994), Time Series Analysis: Forecasting and Control (3rd ed.), Prentice-Hall, Englewood Cliffs, NJ.
2. Brown, R. G. (1963), Smoothing, Forecasting and Prediction of Discrete Time Series, Prentice-Hall, Englewood Cliffs, NJ.
3. Ekern, S. (1982), Adaptive exponential smoothing revisited, Journal of the Operational Research Society, 32(9), 775-782.
4. Ishii, N., Iwata, A., and Suzumura, N. (1981), Bilateral exponential smoothing of time series, International Journal of Systems Science, 12(8), 977-988.
5. Johnston, F. R. (1993), Exponentially weighted moving average (EWMA) with irregular updating periods, Journal of the Operational Research Society, 44(7), 711-716.
6. Kobayashi, K. (1992), Sales Forecasting for Budgeting, Chuokeizai-Sha Publishing, Tokyo, Japan.
7. Maeda, K. (1984), Smoothing constant of exponential smoothing method, Report Seikei University, Faculty of Engineering, 38, 2477-2484.
8. Makridakis, S. and Winkler, R. L. (1983), Averages of forecasts: some empirical results, Management Science, 29(9), 987-996.
9. Miki, T. and Yamakawa, T. (1999), Analog implementation of neo-fuzzy neuron and its on-board learning, Proceedings of the 3rd IMACS/IEEE International Multiconference on Circuits, Systems, Communications, and Computers, Athens, Greece.
10. Ogasahara, T. and Inoue, H. (2009), Efficient hybrid neural network for chaotic time series prediction, Proceedings of the Forum on Information Technology (FIT2009), Sendai, Japan, 597-598.
11. Takaho, H., Arai, T., Otake, T., and Tanaka, M. (2002), Prediction of the next stock price using neural network: extraction of the feature to predict next stock price by filtering, IEICE Technical Report: Nonlinear Problems, 102(432), 13-16.
12. Takeyasu, K. (1996), System of Production, Sales and Distribution, Chuokeizai-Sha Publishing, Tokyo, Japan.
13. Takeyasu, K. and Kainosho, M. (2012), Optimization technique by genetic algorithms for international logistics, presented at the 2012 International Symposium on Semiconductor Manufacturing Intelligence (ISMI 2012), Hsinchu, Taiwan.
14. Takeyasu, K. and Nagao, K. (2008), Estimation of smoothing constant of minimum variance and its application to industrial data, Industrial Engineering and Management Systems, 7(1), 44-50.
15. Tokumaru, H., Soeda, T., Nakamizo, T., and Akizuki, K. (1982), Analysis and Measurement: Theory and Application of Random Data Handling, Baifukan Publishing, Tokyo, Japan.
16. West, M. and Harrison, J. (1989), Bayesian Forecasting and Dynamic Models (1st ed.), Springer-Verlag, New York, NY.
17. Winters, P. R. (1960), Forecasting sales by exponentially weighted moving averages, Management Science, 6(3), 324-343.