ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.17 No.2 pp.294-301
DOI : https://doi.org/10.7232/iems.2018.17.2.294

Improving Sampling Using Fuzzy LHS in Healthcare Supply Chain

Saman Siadati, Mohammad Jafar Tarokh*, Rassoul Noorossana
Department of Industrial Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
Faculty of Industrial Engineering, K. N. Toosi University of Technology, Tehran, Iran
Industrial Engineering Department and Center of Excellence for the Optimization of Advanced Production and Service Systems, Iran University of Science and Technology, Tehran, Iran
Corresponding Author, E-mail: mjtarokh@kntu.ac.ir
Received February 26, 2017; Revised June 21, 2017; Accepted August 17, 2017

ABSTRACT


Considering the effects of risk on the supply chain in the healthcare industry, a risk-based mathematical model is needed to re-design the supply chain network; within the optimization module of such a model, random sampling methods are used. One objective in applying sampling methods is to determine the best method (by reducing variance and computational time) for different problem sizes. The large number of random parameters in the objective function leads to very high variance, which requires variance reduction methods. In this research, our approach to handling risk analysis problems in mean approximation builds on a traditional sampling method, namely Latin hypercube sampling (LHS). To reduce the error in correlations between variables, we propose applying a fuzzy method to the sampling intervals to reduce uncertainty in the statistical values. Limitations of Latin hypercube sampling are discussed, and numerical results for the proposed fuzzy LHS (FLHS) are presented and compared with Monte Carlo sampling, simple LHS, and other LHS variants. We show that the proposed method improves the precision of the mean and variance estimates.





    1. INTRODUCTION

    A supply chain is the connected network of individuals, organizations, resources, activities, and technologies involved in the manufacture and sale of a product or service. It starts with the delivery of raw material from a supplier to a manufacturer and ends with the delivery of the finished product or service to the end consumer. Supply chain management (SCM) oversees each touch point of a company's product or service, from initial creation to final sale. With so many places along the supply chain that can add value through efficiencies or lose value through increased expenses, proper SCM can increase revenues, decrease costs, and improve a company's bottom line.

    The healthcare supply chain, recently noted by researchers and practitioners as an area with a significant and increasing impact on GDP, carries great costs and therefore requires attention in order to improve performance efficiency (Kwon et al., 2016). However, this problem involves levels of risk, which is an important issue in the supply chain management literature (Ceryno et al., 2013).

    Most risk analysis simulation software products offer Latin hypercube sampling (LHS), a method for ensuring that each probability distribution in a model is evenly sampled, which at first glance seems very appealing. The technique dates back to 1980 (Iman et al., 1980), when computers were very slow and even a modest number of distributions in a simulation model took hours or days to run. It was attractive back then because it allowed one to obtain a stable output with far fewer samples than simple Monte Carlo simulation.

    Improving and handling uncertainty in sampling is a vital component of effective decision making. Uncertainty is insufficiently and explicitly communicated to random sampling methods (Nadjafi et al., 2014). The quantification and propagation of uncertainty become essential precisely in those situations where quantitative modeling cannot draw upon extensive historical, statistical, or measurement data (Kurowicka and Cooke, 2006). Fuzzy logic systems constitute a powerful tool for coping with ubiquitous uncertainty in many engineering applications (Linda and Manic, 2010). The most important point in assessing the uncertainty of a population obtained via sampling is to recognize that not all uncertainties are quantifiable, and therefore they should be separated from the sampling characteristics (Verdonck et al., 2007). In sampling methods, if we are willing to give up some features of random sampling, notably serial independence, then variance reduction techniques may be invoked (Kwakernaak, 1978). A suitable mathematical model for random variables that assume fuzzy values is the fuzzy random variable (Zadeh, 1978), which uses linguistic variables to handle fuzzy sets (Zadeh, 1975).

    However, desktop computers are now at least 1,000 times faster than in the early 1980s, and the value of LHS has largely disappeared as a result; arguably, LHS no longer deserves a place in modern simulation software. We are often asked why we do not implement LHS in our model risk software, since nearly all other Monte Carlo simulation applications do, so it is worthwhile to provide an explanation here (Vose, 2014).

    Design optimization usually requires a large number of potentially expensive simulations. The translational propagation algorithm is used to obtain optimal or near-optimal Latin hypercube designs without using formal optimization (Viana et al., 2010).

    Latin hypercube sampling is generalized in terms of a spectrum of stratified sampling (SS) designs referred to as partially stratified sample (PSS) designs. The variance of PSS estimates is derived along with some asymptotic properties. PSS designs are shown to reduce variance associated with variable interactions, whereas LHS reduces variance associated with the main effects. Several high-dimensional numerical examples highlight the strengths and limitations of the method (Shields and Zhang, 2016).

    2. DEFINITIONS

    Latin hypercube sampling is one type of stratified sampling. This type of random sample is generated based on a probability distribution, which can be described by its cumulative curve. If you want to take N samples from this distribution, you can split the probability scale into N equal-probability ranges:

    [0, (100/N)%], [(100/N)%, 2×(100/N)%], …, [(N−1)×(100/N)%, N×(100/N)%].

    Now take one random sample within each range and calculate the variable value that has that cumulative probability. In a model that contains just one variable, the distribution can be stratified into the same number of partitions as there are samples: if you want N samples you can use N stratifications, guaranteeing precisely one sample in each (100/N)% band of the cumulative probability range.
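The scheme above can be sketched in a few lines; this is an illustrative Python sketch (not the paper's code), using the inverse CDF of a normal distribution as an example:

```python
import random
from statistics import NormalDist

def lhs_1d(n, dist=NormalDist(mu=0.0, sigma=1.0), rng=None):
    """One-variable LHS: exactly one draw from each of the n
    equal-probability strata of the cumulative curve."""
    rng = rng or random.Random()
    samples = []
    for i in range(n):
        u = (i + rng.random()) / n       # uniform inside stratum [i/n, (i+1)/n)
        samples.append(dist.inv_cdf(u))  # value with that cumulative probability
    rng.shuffle(samples)                 # present the draws in random order
    return samples

xs = lhs_1d(10, rng=random.Random(42))
# each 10% band of cumulative probability contains exactly one sample
```

Sorting the samples by their cumulative probability shows one draw per stratum, which is the defining property of LHS for a single variable.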

    However, risk analysis models can use many types of distributions. LHS controls the sampling of each distribution separately to provide even coverage for each distribution individually, but it has no control over the numbers generated from combinations of distributions. This means that the extra precision offered by LHS over standard Monte Carlo sampling rapidly becomes imperceptible as the number of distributions increases. Nevertheless, we want to increase LHS performance both in the generation of numbers and in precision.

    3. COMPARING LHS WITH MONTE CARLO

    We chose the normal distribution; LHS offers the greatest improvement in precision over Monte Carlo sampling when the sample number is small. Otherwise there is essentially no difference between the LHS and Monte Carlo simulation results. The increase in precision offered by LHS is extremely modest even when applied to the simulations where it offers the greatest benefit (i.e., few distributions and few samples), and this increase is trivial in comparison to the imprecision that results from running few samples.
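This behaviour is easy to reproduce; the sketch below (our illustration, not the paper's experiment) compares how much the estimated mean fluctuates across repeated runs under plain Monte Carlo versus LHS, for a single normal distribution and a small sample size:

```python
import random
from statistics import NormalDist, mean, stdev

def spread_of_mean(sampler, n_samples, n_runs, rng):
    """Standard deviation of the estimated mean across repeated runs."""
    d = NormalDist(mu=100.0, sigma=15.0)
    run_means = [mean(d.inv_cdf(u) for u in sampler(n_samples, rng))
                 for _ in range(n_runs)]
    return stdev(run_means)

mc  = lambda n, rng: [rng.random() for _ in range(n)]            # plain Monte Carlo
lhs = lambda n, rng: [(i + rng.random()) / n for i in range(n)]  # one draw per stratum

rng = random.Random(1)
sd_mc  = spread_of_mean(mc,  30, 200, rng)
sd_lhs = spread_of_mean(lhs, 30, 200, rng)
# the mean stabilizes much faster under LHS than under Monte Carlo
```

Note that this stabilization concerns only the mean; as discussed below, the spread and tails of the output distribution do not improve nearly as much.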

    3.1. Precision of Results

    The more samples (iterations) one performs in a simulation model, the closer the results approach the theoretical distribution that would be given with an infinite number of samples. The proximity of the actual results to the theoretical is called the level of precision. There are statistical tests for determining the level of precision that one has achieved by running a Monte Carlo simulation. However, no such statistical tests are available if one uses LHS.

    LHS is useful in a couple of particular circumstances (Vose, 2014). First, when you have only one or two distributions in your model and you need the answers very fast: in this situation, for most models the mean will stabilize more quickly with LHS, though the spread and shape of the output distribution will not stabilize much more quickly than with Monte Carlo sampling, and it is the spread and the tails of the output distribution that we are most concerned about in risk analysis. Moreover, a model with two distributions can be simulated with Monte Carlo sampling 100,000 times in under 11 seconds, or 10,000 times in a second, by which time the precision of the results will be essentially indistinguishable from LHS. Second, when you are using Monte Carlo sampling to perform a numerical integration: this is a very specific technical situation that mostly applies in scientific and engineering work, but if you need to do a numerical integration there are better methods than simulation, which give far greater precision and perform the calculation in a fraction of a second (Table 1).
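To illustrate the second point, the toy sketch below (our example, not Vose's) integrates x² on [0, 1] by simulation and by composite Simpson's rule; the deterministic rule is exact here while Monte Carlo only approximates:

```python
import random

f = lambda x: x * x                  # integrand on [0, 1]; exact integral is 1/3

def mc_integral(n, rng):
    """Monte Carlo estimate: average of f at n uniform random points."""
    return sum(f(rng.random()) for _ in range(n)) / n

def simpson(n):
    """Composite Simpson's rule on [0, 1] with n (even) subintervals."""
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

approx_mc = mc_integral(10_000, random.Random(3))
approx_det = simpson(100)   # exact (to rounding) for a quadratic integrand
```

Even with 10,000 simulation samples, the Monte Carlo estimate carries sampling noise that the deterministic rule avoids entirely.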

    3.2. LHS Types and Comparisons with Other Important Works

    In related references (Cochran, 1977; Davis, 1987; Iman and Conover, 1982; Iman, 1992; McKay et al., 1979; Pebesma and Heuvelink, 1999; Stein, 1987; Wyss and Jorgensen, 1998; Zhang and Pinder, 2003), one can find improved algorithms that compute LHS results efficiently.

    There are many proposed algorithms for improving simple LHS. In the following sections, we introduce an algorithm that uses a fuzzy approach and shows good accuracy and precision. For comparison purposes we use the best precision of the available LHS algorithms to compare against the results obtained from the proposed algorithm (Table 2).

    4. FUZZY LHS SAMPLING

    In the literature on statistical sampling approaches, some methods, e.g., clustering and stratified sampling, are significant from the point of view of accuracy; but classification itself is a source of uncertainty, and the accuracy lost to it must be recovered, so we propose a fuzzy approach to this problem. Our method and formulas derive from the fuzzy treatment of statistical data applied to the LHS (a type of stratified sampling) method (Viertl, 2011). Statistical data exhibit variation: observations are not always precise numbers, vectors, categories, or objects. Such data are frequently called fuzzy. Examples where this fuzziness is obvious include healthcare quality data and environmental, biological, medical, sociological, and economics data. The results of measurements can often best be described using fuzzy numbers and fuzzy approaches, and statistical analysis methods have to be adapted for the analysis of fuzzy data. Viertl (2011) explains the foundations of the description of fuzzy data, including methods for obtaining the characterizing function of fuzzy measurement results, and then generalizes statistical methods to the analysis of fuzzy data and fuzzy a-priori information; that work provides basic methods for the mathematical description of fuzzy data as well as statistical methods for analyzing them, and is aimed at statisticians working with fuzzy logic.

    In the method explained there, each element belongs to just one category; in our approach, each element belongs to all groups, with different membership values. For example, consider the age ranges in Figure 1: a 30-year-old man belongs to several groups, each with its own membership value.

    When we categorize such samples with the LHS method, the precision of computed measures such as the mean and variance can improve. In fact, each element is a member of every category with a membership value between 0 and 1. Our new approach is a combination of fuzzy membership and LHS, in which linguistic variables comprise five parts modeled as trapezoids.
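A trapezoidal membership function of this kind can be written directly; the age-group breakpoints below are illustrative placeholders, not the values of Figure 1:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# hypothetical overlapping age groups (breakpoints are illustrative only)
groups = {
    "young":       (10, 15, 25, 32),
    "middle-aged": (28, 35, 45, 52),
}
memberships = {g: trapmf(30, *p) for g, p in groups.items()}
# a 30-year-old has non-zero membership in both overlapping groups
```

Because the trapezoids overlap, a single element carries positive membership in several neighboring categories at once, which is exactly what the fuzzy-LHS weighting exploits.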

    4.1. Proposed Algorithm

    As in the stratified and LHS methods, we choose some random elements from each stratum for sampling, and then identify the neighborhood of each LHS stratum to model it in a fuzzy context. Let n1, n2, …, nm be the numbers of elements in each of the m sampled LHS strata, and let X̄1, X̄2, …, X̄m be their means. The relative uncertainty itself can also be used without subsequent statistical testing, particularly if enough is known about the parameter being evaluated. Variance reduction techniques attempt to reduce the variance, i.e., the dispersion associated with variations of the parameter being evaluated. This can result in one of two outcomes: either the variance is reduced for the same number of samples, or the number of samples can be reduced for the same variance, the comparison in both cases being made against the case where no variance reduction technique is used. This either increases confidence in the results or reduces the computational burden. There are many forms of variance reduction techniques, and a specialized text should be consulted if full details of all techniques are needed. The proposed algorithm provides reduced variances and standard errors in comparison with other traditional methods. A schematic of the proposed method is shown in Figure 2.

    Steps that are used for programming with the use of fuzzy-LHS method are mentioned in the following algorithm (Figure 2):

    • Determine the number of dimensions of the problem

    • Select m random points from each LHS set

    • Determine the neighborhood of each selected point among the LHS sets

    • Calculate the membership values for each LHS set

    • Calculate the mean, variance, and maximum and minimum confidence values from the formulas

    • Calculate the values for each LHS set with the effect of neighboring LHS sets

    • Calculate the values with the effect of neighboring elements within each LHS set

    • Calculate the values with the effects of both neighboring elements and neighboring LHS sets
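The sampling steps above can be sketched as follows; the linear neighbor-membership model used here is our assumption for illustration, not necessarily the paper's exact membership function:

```python
import random

def fuzzy_lhs_sample(strata_bounds, m, rng):
    """One illustrative pass of the steps above: draw m points per LHS
    stratum and attach membership values for the two neighboring strata."""
    samples = []
    for i, (lo, hi) in enumerate(strata_bounds):
        width = hi - lo
        for _ in range(m):
            x = rng.uniform(lo, hi)
            # assumed model: membership in a neighboring stratum decays
            # linearly with distance from the shared boundary
            mu_prev = max(0.0, 1.0 - 2 * (x - lo) / width) if i > 0 else 0.0
            mu_next = max(0.0, 1.0 - 2 * (hi - x) / width) if i < len(strata_bounds) - 1 else 0.0
            samples.append((i, x, mu_prev, mu_next))
    return samples

rng = random.Random(7)
bounds = [(0, 10), (10, 20), (20, 30), (30, 40)]
s = fuzzy_lhs_sample(bounds, 5, rng)   # 4 strata x 5 points = 20 samples
```

Each sample thus carries, besides its own stratum index, the weights with which it also counts toward the neighboring strata; these weights are the μ values used in the formulas below.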

    The formulations used for computing parameters such as the mean, variance, and maximum and minimum confidence intervals (lower and upper bounds of the mean estimator) are the same as the LHS and stratified sampling formulations (Cochran, 1977), augmented with fuzzy membership functions. For the mean estimator of this method, we compute the overall sample mean from the sample means of each dimension. We use Viertl's definition (Viertl, 2011, chapter 2) for representing fuzzy numbers and then compute the mean value of the fuzzy numbers:

    $$\bar{X}_i = \frac{\sum_{j=1}^{n_i} x_{ij}}{n_i} \tag{1}$$

    Then

    $$\mathrm{Mean} = \frac{\sum_{i=1}^{m}\left(\bar{X}_i\, n_i + \mu_{i+1}\,\bar{X}_{i+1}\, n_{i+1} + \mu_{i-1}\,\bar{X}_{i-1}\, n_{i-1}\right)}{\sum_{i=1}^{m} n_i} \tag{2}$$

    where X̄i and ni are the mean and the number of elements of the i-th LHS stratum, respectively, and μi+1 and μi−1 are the membership values of the two neighboring strata. An estimator of the sample variance is given by

    $$\mathrm{Variance} = \frac{m-q}{m\,q\,\bar{n}^{2}} \cdot \frac{\sum_{i=1}^{m} n_i^{2}\left(\bar{X}_i - \mathrm{Mean}\right)^{2}}{q-1} \tag{3}$$

    where $\bar{n} = \sum_{i=1}^{m} n_i / q$ and $q = m\left(\mu_{i+1} + \mu_{i-1}\right)$. The standard error is

    $$SE = \frac{1}{N}\sqrt{\sum_{i=1}^{m} N_i^{2}\left(1 - \frac{n_i}{N_i}\right)\frac{\mathrm{Variance}}{n_i}} \tag{4}$$

    where N and Ni are the numbers of observations in the population overall and in the i-th LHS stratum, respectively. The margin of error (ME) is defined as

    $$ME = \text{Critical Value} \times SE = 1.96 \times SE \tag{5}$$

    The range of the confidence interval is defined by the sample statistic ± margin of error, with the uncertainty denoted by the confidence level. Using the preceding information, we construct a 95% confidence interval for the mean as

    $$\text{Confidence Interval} = \mathrm{Mean} \pm 1.96 \times \sqrt{\mathrm{Variance}} \tag{6}$$
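Since the fuzzy formulas build on the classical stratified-sampling estimators (Cochran, 1977), a minimal sketch of those underlying estimators, without the membership weights μ, looks like this (our illustration with made-up data):

```python
import math
from statistics import mean, variance

def stratified_estimates(strata, pop_sizes):
    """Classical stratified estimators (Cochran, 1977); the fuzzy-LHS
    versions add membership weights mu to these same formulas."""
    n = [len(s) for s in strata]
    xbar = [mean(s) for s in strata]      # per-stratum mean, cf. Eq. (1)
    N = sum(pop_sizes)
    est_mean = sum(Ni * xb for Ni, xb in zip(pop_sizes, xbar)) / N
    # standard error with finite-population correction, cf. Eq. (4)
    se = math.sqrt(sum(
        Ni ** 2 * (1 - ni / Ni) * variance(s) / ni
        for Ni, ni, s in zip(pop_sizes, n, strata))) / N
    me = 1.96 * se                        # margin of error, cf. Eq. (5)
    return est_mean, se, (est_mean - me, est_mean + me)  # CI, cf. Eq. (6)

strata = [[2.0, 3.0, 4.0], [10.0, 12.0, 14.0]]  # two strata of samples
m, se, ci = stratified_estimates(strata, pop_sizes=[100, 100])
```

The fuzzy method replaces the crisp per-stratum sums with membership-weighted contributions from neighboring strata, as in Eqs. (2) and (3).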

    Example 1:

    The steering committee decided to sample five grades of health care services in northern Iran, Mazandaran province. The study will thus include hospitals, health centers, home visits, ambulance services, etc. For the purpose of this document, "health facilities" refers to this full range of services.

    In order to provide the best chance of generalizability, health care facilities should be categorized. LHS allows the researcher to increase precision by grouping categories within the sample into more homogeneous sets. Standardized definitions of categories are important for the generalizability and comparability of results.

    Assume that 50,000 facilities are available this year, with 10,000 facilities in each of five sets. A proportionate Latin hypercube sample was used to select 500 facilities for testing; because the population has an equal number of facilities in every stratum, each stratum's sample consisted of 100 facilities. We modeled the problem using the proposed method, LHS, and the exhaustive method (real mean and variance) on one system (Windows 7, MATLAB R2010a, 2.30 GHz, 2 GB) and compared the results, given in Table 1.
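The proportionate allocation in this example is simple arithmetic; as a sketch:

```python
pop_sizes = [10_000] * 5    # five equal strata, 50,000 facilities in total
total_sample = 500
N = sum(pop_sizes)
# proportionate allocation: each stratum receives its population share
alloc = [total_sample * Ni // N for Ni in pop_sizes]
# with equal strata, every stratum is allocated 100 facilities
```

With unequal stratum sizes, the same formula would allocate the 500 tests in proportion to each stratum's population.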

    The results show that the error values for the fuzzy method are significantly smaller than for LHS, and the approximation of the mean value for the fuzzy method is close to the population mean (Figure 3). The fuzzy method, like LHS, changes the standard error; the statistical results show clearly that it is an efficient method for calculating statistical values with respect to accuracy and ME. The standard error deviation over various runs of the fuzzy method and LHS is shown in Figure 4.

    Also, Figure 5 shows the mean values for the exact, LHS, and fuzzy sampling. Evidently, the fuzzy-LHS method deviates less from the mean of the exact solution than the simple LHS method does. Since this method shows less error from the exact method, we can conclude that the accuracy of the fuzzy method is higher than that of LHS.

    Example 2:

    Assume the following information:

    Total iterations = 1000;

    Number of Samples = 100;

    Number of Variables = 6;

    Real Mean = [10 5 4 3 20 10];

    Standard Deviation = [0.1 1 0.1 1 1 1];

    $$\text{correlation matrix} = \begin{bmatrix} 1.0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1.0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1.0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1.0 & 0.75 & 0.70 \\ 0 & 0 & 0 & 0.75 & 1.0 & 0.95 \\ 0 & 0 & 0 & 0.70 & 0.95 & 1.0 \end{bmatrix}$$

    From the literature reviewed in section 4, we can conclude that the correlation matrix and the duplication factor play an important role in obtaining better results: the correlation matrix must be positive definite, because in these algorithms an important part of the solution procedure is decomposing it into a lower triangular matrix, and this restriction on finding a suitable correlation matrix can be challenging from a mathematical point of view. From another perspective, these approaches show that increasing the duplication factor yields better results. However, both items impose a heavy cost on simulations. The proposed FLHS algorithm needs neither a correlation matrix nor a duplication factor, which is an improvement over the other algorithms; in particular, consider an example with the following properties:

    Number of variables = 10,000, with duplication factor 100, without a correlation matrix, and with the other parameters introduced randomly. In the LHS procedure, the resulting matrix has dimension 1,000,000×100, which means that obtaining better results requires a large amount of space for intermediate computations. To obtain better and more precise results and to decrease uncertainty, we added a step to the overall algorithms, called total iterations; the averages of the results are shown in Table 2.
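The positive-definiteness restriction mentioned above can be checked directly with a Cholesky factorization, which produces the lower triangular matrix those algorithms rely on; here is a plain sketch applied to the correlation matrix of Example 2:

```python
def cholesky(a):
    """Plain Cholesky factorization; raises ValueError if the matrix
    is not positive definite (the restriction discussed above)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = d ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

corr = [
    [1.0, 0.0, 0.0, 0.0,  0.0,  0.0],
    [0.0, 1.0, 0.0, 0.0,  0.0,  0.0],
    [0.0, 0.0, 1.0, 0.0,  0.0,  0.0],
    [0.0, 0.0, 0.0, 1.0,  0.75, 0.70],
    [0.0, 0.0, 0.0, 0.75, 1.0,  0.95],
    [0.0, 0.0, 0.0, 0.70, 0.95, 1.0],
]
L = cholesky(corr)  # succeeds: this example matrix is positive definite
```

If the specified correlations are inconsistent (not positive definite), the decomposition fails; the proposed FLHS avoids this search for a feasible correlation matrix entirely.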

    5. PROVING OPTIMALITY

    We can argue the optimality of our approach from two different perspectives:

    5.1. Mathematical View

    For decreasing the variance, as we can see in formula (3), an important and effective factor is q, which is determined by the membership values. Looking at our fuzzy model, in the worst-case scenario a point is selected between two trapezoids such that the sum of the two neighboring membership values is greater than 1. This means our proposed algorithm always has a smaller variance than the simple one.

    5.2. Experimental Results View

    We placed a monitoring variable in our implemented code and measured the worst-case scenario; even in an example where our algorithm chooses points close to the edges of the sets, we obtain better results than the other mentioned algorithms.

    The results in Table 2 show that using the fuzzy approach leads to a significant improvement in comparison with the other methods.

    Figure 6 and Figure 7 show examples of output coordinates using FLHS for 3D and 4D results, respectively, for 10 points. You can enable the data cursor in the MATLAB figure view for a higher-dimensional example to check that the points are independent.

    The main contributions of the proposed method are the reduction of statistical variance and a generally more accurate approximation of the mean. Applied to the healthcare supply chain, it is used to reduce the risk in the underlying problems.

    6. CONCLUSIONS

    Given a set of numbers, elements, or objects, statistical approaches allow us to select elements at random to compute quantities such as the mean and variance, and several sampling methods for this purpose are discussed in many publications. Throughout this study, we assumed that the input data are precise, i.e., each data point belongs to one set. For example, in LHS we assumed that a population of N units may be divided into m non-overlapping groups, and samples are then selected from within each group. In the real world, however, elements may belong to any group, and even in simple random or systematic sampling we can see errors arising from the assumption of equal element probabilities. In this paper we introduced a new approach based on fuzzy theory combined with sampling methods. The process is to calculate the relative error or uncertainty as the simulation proceeds and, using appropriate statistical tests generally based on fuzzy theory, calculate the confidence interval after each sampling step; this is then compared with the pre-specified example.

    The case study results show that the proposed method provides a better measure of uncertainty than the existing traditional sampling methods. In this paper, in order to decrease the uncertainty of sampling, variance reduction techniques were used; these are methods that attempt to reduce the variance, i.e., the dispersion associated with the variation of the parameter being evaluated.

    Figure

    IEMS-17-294_F1.gif

    Demonstration of fuzzy membership function for age example.

    IEMS-17-294_F2.gif

    Flowchart of the proposed sampling method.

    IEMS-17-294_F3.gif

    Mean improvement using fuzzy sampling.

    IEMS-17-294_F4.gif

    Standard error deviations on different runs for fuzzy and LHS.

    IEMS-17-294_F5.gif

    Mean deviations of sampling.

    IEMS-17-294_F6.gif

    3D view from FLHS algorithm for 10 points.

    IEMS-17-294_F7.gif

    4D view results from running FLHS algorithm.

    Table

    Sampling parameter results

    Comparisons between our method and other important works

    REFERENCES

    1. P.S. Ceryno , L.F. Scavarda , K. Klingebiel , G. Yüzgülec (2013) Supply chain risk management: A content analysis approach., Int. J. Ind. Eng. Manag., Vol.4 (3) ; pp.141-150
    2. W.G. Cochran (1977) Sampling Techniques., John Wiley & Sons,
    3. M.W. Davis (1987) Production of conditional simulations via the LU triangular decomposition of the covariance matrix., Math. Geol., Vol.19 (2) ; pp.91-98
    4. R. Iman , T.A. Cruse (1992) Reliability Technology., The American Society of Mechanical Engineers, ; pp.153-168
    5. R.L. Iman , W.J. Conover (1982) A distributionfree approach to inducing rank correlation among input variables., Commun. Stat., Vol.11 (3) ; pp.311-334
    6. R.L. Iman , J.M. Davenport , D.K. Zeigler (1980) Latin hypercube sampling (program user's guide), Technical Report SAND79-1473, Sandia Laboratories, Albuquerque, NM.
    7. D. Kurowicka , R.M. Cooke (2006) Uncertainty Analysis with High Dimensional Dependence Modelling., John Wiley & Sons,
    8. H. Kwakernaak (1978) Fuzzy random variables-I. definitions and theorems., Inf. Sci., Vol.15 (1) ; pp.1-29
    9. I.W. Kwon , S.H. Kim , D.G. Martin (2016) Healthcare supply chain management: Strategic areas for quality and financial improvement., Technol. Forecast. Soc. Change, Vol.113 ; pp.422-428
    10. O. Linda , M.M. Manic (2010) Importance sampling based defuzzification for general type-2 fuzzy sets, Proceedings of the WCCI 2010 IEEE World Congress on Computational Intelligence,
    11. M.D. McKay , R.J. Beckman , W.J. Conover (1979) Comparison of three methods for selecting values of input variables in the analysis of output from a computer code., Technometrics, Vol.21 (2) ; pp.239-245
    12. M. Nadjafi , M.A. Farsi , A. Najafi (2014) Uncertainty improving in importance sampling: An integrated approach with Fuzzy-Cluster sampling, Proceedings of the 24th annual European Safety and Reliability(ESREL) Conference,
    13. E.J. Pebesma , G.B.M. Heuvelink (1999) Latin hypercube sampling of Gaussian random fields., Technometrics, Vol.41 (4) ; pp.303-312
    14. M.D. Shields , J. Zhang (2016) The generalization of Latin hypercube sampling., Reliab. Eng. Syst. Saf., Vol.148 ; pp.96-108
    15. M. Stein (1987) Large sample properties of simulations using Latin hypercube sampling., Technometrics, Vol.29 (2) ; pp.143-151
    16. F.A.M. Verdonck , A. Souren , M.B.A. van Asselt , P.A. Van Sprang , P.A. Vanrolleghem (2007) Improving uncertainty analysis in European Union risk assessment of chemicals., Integr. Environ. Assess. Manag., Vol.3 (3) ; pp.333-343
    17. F.A.C. Viana , G. Venter , V. Balabanov (2010) An algorithm for fast optimal Latin hypercube design of experiments., Int. J. Numer. Methods Eng., Vol.82 (2) ; pp.135-156
    18. R. Viertl (2011) Statistical methods for fuzzy data., Vienna University of Technology, Wiley,
    19. D. Vose (2014) The pros and cons of Latin hypercube sampling, http://liprof.com/blog/the-pros-and-cons-of-latin-hypercube-sampling
    20. G.D. Wyss , K.H. Jorgensen (1998) A user's guide to LHS: Sandia's Latin hypercube sampling software, Technical Report SAND98-0210, Sandia National Laboratories, Albuquerque, NM.
    21. L.A. Zadeh (1975) The concept of a linguistic variable and its application to approximate reasoning-I., Inf. Sci., Vol.8 (3) ; pp.199-249
    22. L.A. Zadeh (1978) Fuzzy sets as a basis for a theory of possibility., Fuzzy Sets Syst., Vol.1 ; pp.3-28
    23. Y. Zhang , G. Pinder (2003) Latin hypercube lattice sample selection strategy for correlated random hydraulic conductivity fields., Water Resour. Res., Vol.39 (8) ; pp.SBH11