
European Journal of Operational Research 161 (2005) 111–125
www.elsevier.com/locate/dsw

Reducing simulation models for scheduling manufacturing facilities


André Thomas *, Patrick Charpentier
CRAN, Faculté des sciences, BP 239, 54506 Vandoeuvre-lès-Nancy, France
Available online 3 December 2003

Abstract

Within the framework of their industrial management system, companies compile a Master Production Schedule (MPS). However, once the MPS is released, daily events may call it into question. The use of reduced models within the framework of dynamic flow simulation enables quick decision-making while maximizing the use of resources and minimizing risk. The article shows the advantage of model reduction and how we arrive at it. We then analyze the influence of the model factors by highlighting the differences between the simulation results and the MPS. Finally, we show the circumstances in which dynamic flow simulation with reduced models is relevant.
© 2003 Elsevier B.V. All rights reserved.
Keywords: Simulation; Modeling systems; Re-scheduling; MPS

1. Introduction

Increasing both productivity and reactivity has become a prime objective for managers of manufacturing systems. Productivity implies paying very close attention to external variables such as clients and sales as well as to resources (machines, labour, etc.). Reactivity calls for extreme flexibility in planning, an excellent awareness of the expectations expressed by external factors (customers, socio-economic context, etc.) and a thorough awareness of expectations and activities within the organisation. This raises increasingly pressing problems: knowing the state of the production system at any moment, measuring gaps and changes and understanding their causes, and knowing and measuring the changes in the factors involving employees. New elements must therefore be added to the production management system in order to increase productivity and reactivity, based upon improved knowledge of what happens on the shop floor.

Corresponding author. E-mail address: andre.thomas@cran.uhp-nancy.fr (A. Thomas).

0377-2217/$ - see front matter 2003 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2003.08.042


The system of planning and re-planning should thus take into account all the events related to production resources whilst at the same time preserving their meaning. Within the framework of their industrial management system, companies compile a daily, weekly or monthly Master Production Schedule (MPS), weekly being the general rule. The choice of a particular MPS gives rise to a predictive schedule for a group of Manufacturing Orders (MO), viewed statically at this level as a complete entity. Predictive scheduling is the function concerning the MPS initially established with the Manufacturing Planning and Control System (MPCS), as opposed to reactive scheduling, which produces the new MPS established after disruptions during the week in question. Each MO deals with the manufacturing of an article, has a manufacturing routing and hence relates to a list of Work Centers (WC). At this level of planning, load/capacity equilibrium is obtained via the critical capacity management function, or Rough-Cut Capacity Planning (RCCP), which essentially concerns bottlenecks [21]. Goldratt and Cox, in The Goal [6], put forward the Theory of Constraints (TOC). This approach seeks to identify capacity constraints, attempts to exploit them as well as possible and subordinates everything else to the exploitation of these constraints; the aim of the TOC is to maximise the bottleneck production. Once reached, equilibrium means that the MPS is validated and goes into operation for the period (P) in question. This validation is made after the establishment of different scenarios that must be evaluated by the master scheduler.

However, once the MPS is released, daily events may call it into question: this is the problem of rescheduling. The real-time systems performing manufacturing checks (production reporting) input information very rapidly into the management systems [13]. The decision-making process (Fig. 1) includes the establishment of scenarios, the evaluation of these scenarios and finally the decision. The ever-present problem today is the speed at which decisions can be made when faced with all this information, which may appear to have lost all meaning [15,16]. Indeed, when the decision takes even a few hours to make, the situation on the shop floor has already changed.

Following predictive scheduling, the MPCS proposes a placement of the MO that we have said to be static. Effectively, their load has been calculated using the cumulated mean processing time per MO taken as a fixed time entity (mean unit run time multiplied by the number of items per MO, plus the mean setup time). At this level, the manager will seek to saturate the bottlenecks (this is the essential role of Rough-Cut Capacity Planning). Following the release of this MO portfolio, the time variables relative to the arrival intervals (λ) and service durations (μ) on the work centers (queueing systems theory) will create temporary overloads and under-loads. In this context, the apparent balance emanating from the predictive schedule will, in fact, be an imbalance that the manager has to handle. Here it is useful to use dynamic (discrete-event) simulation of flows, not only to highlight possible overloads but also to locate and deal with possible residual capacity (temporary under-load). A rescheduling following a problem will inevitably lead to an even more critical situation on the bottlenecks.
It therefore becomes important to recover all the time slots that have been freed by the unpredictable phenomena influencing production flows. This is the main problem of this decision-making process: the master scheduler looks for the first schedule that attains these objectives.
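To make the static load calculation described above concrete, here is a minimal sketch, assuming a hypothetical routing data structure of our own (the work-center names, values and function name are illustrative, not taken from the case study): the load an MO places on a work center is its mean unit run time multiplied by the batch quantity, plus the mean setup time.

```python
# Minimal sketch of the static load calculation used in predictive scheduling:
# load on a work center = mean unit run time * batch quantity + mean setup time.
# The routing data below is hypothetical, not the case-study data.
from collections import defaultdict

# Each operation: (work_center, run_time_per_part_h, setup_time_h)
routing_mo1 = [("XTS", 0.10, 1.5), ("EB1", 0.05, 0.5)]
quantity_mo1 = 10

def static_load(routing, quantity):
    """Return the load (hours) placed on each work center by one MO."""
    load = defaultdict(float)
    for work_center, run_time, setup_time in routing:
        load[work_center] += run_time * quantity + setup_time
    return dict(load)

print(static_load(routing_mo1, quantity_mo1))
# e.g. {'XTS': 2.5, 'EB1': 1.0} -- summing these loads over all released MOs
# gives the per-work-center totals used for Rough-Cut Capacity Planning.
```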

Fig. 1. The decision-making process (scenarios → evaluation → decision).


2. The model reduction problem

2.1. The scenario evaluation

Experience has shown us that to meet these reactivity objectives efficiently, an extremely flexible, rapid and user-friendly system needs to be developed. As it is necessarily located in the very short term, the volume of the MO portfolio to consider for re-scheduling will always be low in comparison with that of the initial schedule, which in fact enables re-planning simulations to be rapid [18]. The usual approach is to represent reality by a model: classical simulation models represent all the work centers (WC) and all the characteristics of the real production system, and the simulation software generates in this model the events which reproduce the reality of the workshop. However, neither the time spent creating the machine model with the dynamic flow simulation software nor that spent performing the simulations should be penalising: the scenario evaluation must be done very quickly. We have therefore put forward the hypothesis that there is, in this context, a generic model (Fig. 2) representing all flows resulting from a manufacturing process, which saves time in construction, simulation and parameterization but which is sufficiently significant for the risk of error to be minimized [4].

2.2. Literature review on model reduction

Some research work has attempted to put forward model reduction techniques. Amongst various authors, Zeigler was the first to deal with this problem [22]. In his view, the complexity of a model is relative to the number of elements, connections and model calculations. He distinguished four ways of simplifying a discrete simulation model, including replacing part of the model by a random variable, coarsening the range of values taken by a variable and grouping parts of a model together. Belz and Mertens [2] developed a prototype decision support system for short-term rescheduling in manufacturing. They coupled expert systems and simulation, but they did not use reduced models in their simulation. Innis and Rexstad [9] first listed 17 simplification techniques for general modelling. Their approach comprised four steps: hypotheses (identifying the important parts of the system), formulation (specifying the model), coding (building the model) and experiments. Leachman [10] proposed a model that considers cycle times in production planning models, especially for the semi-conductor industry, which uses cycle time as an indicator. Brooks and Tobias [3] suggest a model simplification approach for those cases where the indicators to be followed are the average throughput rates. They suggest an eight-stage procedure. The reduced model can be very simple, so that an analytical solution becomes feasible and the dynamic simulation redundant. Their work is interesting but is valid in cases where the required results are averages and where the aim is to measure throughput; it is of no use for following the various events taking place in the WC. However, as our objectives were to maximize the bottleneck utilization rate and to reschedule the remaining MO to be performed on them, we had to leave in our reduced model the means to carry out these analyses. Hung and Leachman [7] propose a technique for model reduction applied to large wafer fabrication facilities. They use total cycle time and equipment utilization as decision-making indicators to do away with the WC.

Fig. 2. The primary theoretical model (MO → pre-block → bottleneck → post-block).


In their case, these WC have a low utilization rate and a fixed service level (they use the standard deviation of batch waiting time as a decision-making criterion). These are the most relevant indicators in the problematic we have presented. Indeed, as the group of MO to be re-scheduled and the time range are fixed, the manager might accept the first solution enabling him to perform the residual work. Our work has shown that the variance on μ (the service level on a WC, referred to later as the μ variance) has a significant effect on the decision-making risk, as the internal waiting queues at the aggregated blocks are no longer negligible in this case. We will show that the μ variance is an indicator for the decision to reduce the model (Fig. 10). Tseng et al. [20] compare the regression techniques applied to an aggregate (macro) model by using the flow time indicator. They suggest reducing the model by mixing macro and micro approaches so as to minimise errors in the case of complex models. Here again, for the macro view, they only deal with the estimation of flow time as a whole. For the micro approach, they construct an individual regression model for each stage of the operation to estimate its individual flow time; the cumulative order flow time estimate is then the sum of the individual operation flow time estimates. They then mix the macro and micro approaches. Li et al. [11] and Li and Shaw [12] proposed rescheduling expert simulation systems that integrate simulation techniques, artificial neural networks, expert knowledge and dispatching rules. Their systems face fuzzy and random disturbances such as incorrect work, machine breakdowns, rework due to quality problems and rush orders. Their work could be interesting for our research into rescheduling rules. In work on Petri nets we find some attempts to simplify network structures which use macro-places representing complex activities associated with function groups. Hwang et al. [8] use a principle similar to that adopted in the GRAI [5] method, developed in Visual C.

3. Our model reduction method

Unlike the above-mentioned authors, we have sought, first of all, to reduce the manufacturing routings of the MO in order to construct a reduced model enabling them to be simulated. In reality, we do not model a problem completely so as to reduce it later: we directly build a simple model based on a reduced routing. Our hypothesis has led us to put forward a reduced model (Fig. 3 explains its principle) in which we find the bottlenecks and the blocks, which are aggregates of the work centers required by the released MO. The reduced model will thus have fewer elements, connections and calculations. Hence, it is possible to put forward the following simple indicators:

Model Reduction Ratio = (number of elements needed for the complete model) / (number of elements in the reduced model),

Time Save Ratio = (time to parameterize and simulate the complete model) / (time to parameterize and simulate the reduced model).

Fig. 3. Reduced model principle: three WC are aggregated into one block, while work centers such as EB1, XTS and XT4 remain explicit.


The WC remaining in the model are either conjunctural and structural bottlenecks or WC which are vital to the synchronization of the MO. All other WC are aggregated into blocks upstream or downstream of the bottlenecks. By conjunctural bottleneck we mean a WC which, for the MPS and predictive schedule in question, is saturated, i.e. which uses all its available capacity. By structural bottleneck we mean a WC which has often been in such a condition in the past. Effectively, for one specific portfolio (one specific MPS) there is only one bottleneck, the most loaded WC, but this WC may be different from the traditional bottlenecks. Theoretically, only the conjunctural bottleneck is useful for our objectives, but to improve the possibilities of the decision-making process in rescheduling, we chose to model the most frequent bottleneck too. We call a synchronization work center one or several resources enabling the planning of MO with bottlenecks and those without to be synchronized. To minimize the number of these synchronization work centers, we need to find the WC having the most in common amongst all the MO of the portfolio not using bottlenecks and which figure in the routing of at least one MO using them. The reduction algorithm shown below highlights these so-called synchronization work centers. Indeed, the MO using structural or conjunctural bottlenecks may be synchronized and scheduled with respect to one another thanks to the scheduling of these bottlenecks, but for certain MO that do not use them, synchronization WC will need to be used.

Reduction algorithm: variables and parameters

Let R be the set of resources: R = {r_i | i ∈ [1, m]}, where m is the number of resources. We denote by r_{k,j} the resource used for MO k, operation j. r_bs ∈ R is defined as the structural bottleneck of the system and r_bc ∈ R as the conjunctural bottleneck of the system.

Let M be the set of manufacturing orders: M = {m_k | k ∈ [1, n]}, where n is the number of manufacturing orders in the considered Master Production Schedule. Each manufacturing order m_k is defined by a set of attributes:

G_k: set of ordered operations for m_k, G_k = {g_{k,j} | k ∈ [1, n], j ∈ [1, q_k]}, where j is the sequencing order and q_k is the number of operations in routing k;
r_{k,j}: the resource used for g_{k,j};
s_{k,j}: the setup time for operation j of m_k;
p_{k,j}: the processing time of one part of m_k for operation j;
μ_{k,j}: service level of one part of m_k for operation j (μ_{k,j} = 1/p_{k,j});
λ_k: arrival rate on the system for m_k.

Let O be the subset of manufacturing orders that do not use the bottlenecks: O = {o_l | ∀l ∈ [1, n], r_{l,j} ≠ r_bs and r_{l,j} ≠ r_bc}.


The objective of the algorithm is to provide:

R′ = {r′_i | i ∈ [1, m′]}, where m′ is the number of new resources to determine (we denote by r′_{k,j} the block used for MO k, operation j);
G′_k: set of ordered operations for m′_k.

R′ and G′_k represent, respectively, all the new resources to be integrated into the reduced model and the routings of each of the MO on these blocks. Each of these sets keeps the same attributes as the equivalent sets of the complete model; the notation is identical, the prime character (′) simply indicating that these attributes relate to the reduced model. All the variables not defined here are intermediate, temporary variables used in the algorithm.

Algorithm

(1) Initialisation: R′ is initialised with the bottleneck resources {r_bs, r_bc}; O′ = O; C′ = R \ R′.

(2) Search for synchronisation resources:
do
    obtain r_z ∈ C′, the resource most commonly used by the MO of O′
    X = set of MO of O′ using r_z
    if X ≠ ∅
        R′ = R′ ∪ {r_z}          (r_z is added to R′)
        O′ = O′ \ X              (the MO using r_z are removed from O′)
    else
        C′ = C′ \ {r_z}          (the resource which cannot be used for synchronisation is removed and the operation is repeated)
    endif
while O′ ≠ ∅ and C′ ≠ ∅

(3) Generation of the reduced MO and of the new routings:
for k = 1 to n                               (for all the manufacturing orders)
    flag = 0, j′ = 1
    for j = 1 to q_k                         (for each operation)
        (3.1) if r_{k,j} ∈ R′                (the operation is performed on one of the elements of R′)
            a new operation is defined for the reduced routing and its attributes updated:
            g′_{k,j′} = g_{k,j}; r′_{k,j′} = r_{k,j}; s′_{k,j′} = s_{k,j}; p′_{k,j′} = p_{k,j}; j′ = j′ + 1
        (3.2) else
            if flag = 0                      (block not yet created)
                flag = 1                     (the creation of the block is marked)
                g′_{k,j′} = g_{k,j}          (update of the new operation in the new routing)
                r′_{k,j′} = ∅
            endif
            r′_{k,j′} = r′_{k,j′} ∪ {r_{k,j}}; s′_{k,j′} = s′_{k,j′} + s_{k,j}; p′_{k,j′} = p′_{k,j′} + p_{k,j}
            if j = q_k or r_{k,j+1} ∈ R′     (the operation is the last one or the following one belongs to R′)
                R′ = R′ ∪ {r′_{k,j′}}        (the new block is added to R′)
                flag = 0, j′ = j′ + 1        (marking the end of the block)
            endif
        endif
    next j
next k


Explanations concerning the reduction algorithm:
(1) Initialization: the bottleneck resources are placed in R′.
(2) Search for synchronization resources. This phase consists in placing in R′ the resources enabling the MO not using bottlenecks to be synchronized with those using them. We seek the resource having the most in common amongst all the MO not using bottlenecks (the most common resource among their routings). This resource will become a synchronization work center if it is used by at least one MO using bottlenecks.
(3) Generation of the reduced MO and new routings. Here R′ is completed with aggregated blocks that can be used for several operations in the same routing. For the different operations of each MO, the various required aggregate blocks can then be added to R′. Two cases become apparent:
(3.1) The operation is performed on one of the already existing elements of R′: this block is updated (resource integrated, setup time, processing time) and a new operation in the routing in progress is defined.
(3.2) The operation is performed on a resource not belonging to R′. A new block is begun in this case, whilst its different attributes are gradually updated. This block is closed (and created) when the last operation of the MO has been reached or when the following operation is performed on a resource already belonging to R′.
We used normal laws for the arrival frequencies on the WC and exponential laws for the service levels. Summing different exponential laws yields an Erlang law, but we can only add processing times if we do not take into account the queueing times between the WC inside the blocks.
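As an illustration, the sketch below is one possible Python transcription of the reduction algorithm under simplifying assumptions of ours: routings are lists of (resource, setup time, processing time) operations, aggregated blocks are represented as frozensets of the resources they group, and all function and variable names are hypothetical. It follows steps (1)–(3), including the requirement from explanation (2) that a synchronization resource also appear in at least one MO using a bottleneck.

```python
# Hedged sketch of the routing-reduction algorithm described above.
# Assumed data model: routing = [(resource, setup_time, processing_time), ...];
# an aggregated block is represented by a frozenset of its resources.
from collections import Counter

def reduce_routings(routings, bottlenecks):
    """Return (reduced_resources, reduced_routings).

    routings    -- dict {mo_id: [(resource, setup, processing), ...]}
    bottlenecks -- set of structural/conjunctural bottleneck resources
    """
    r_prime = set(bottlenecks)                    # (1) initialisation with the bottlenecks

    # (2) search for synchronization resources among MO that avoid the bottlenecks
    o_prime = {k for k, ops in routings.items()
               if not any(r in bottlenecks for r, _, _ in ops)}
    candidates = {r for ops in routings.values() for r, _, _ in ops} - r_prime
    while o_prime and candidates:
        usage = Counter(r for k in o_prime for r, _, _ in routings[k]
                        if r in candidates)
        if not usage:
            break
        r_z = usage.most_common(1)[0][0]          # most commonly used candidate
        users = {k for k in o_prime if any(r == r_z for r, _, _ in routings[k])}
        # keep r_z only if it also appears in at least one MO that uses a bottleneck
        shared = any(r == r_z for k, ops in routings.items()
                     if k not in o_prime for r, _, _ in ops)
        if shared:
            r_prime.add(r_z)
            o_prime -= users
        else:
            candidates.discard(r_z)

    # (3) generation of the reduced routings: consecutive operations on resources
    # not kept in R' are merged into one block whose setup and processing times are summed
    reduced = {}
    for k, ops in routings.items():
        new_ops, block, setup, proc = [], [], 0.0, 0.0
        for resource, s, p in ops:
            if resource in r_prime:               # (3.1) operation kept as such
                if block:
                    blk = frozenset(block)
                    r_prime.add(blk)
                    new_ops.append((blk, setup, proc))
                    block, setup, proc = [], 0.0, 0.0
                new_ops.append((resource, s, p))
            else:                                 # (3.2) operation absorbed into a block
                block.append(resource)
                setup += s
                proc += p
        if block:                                 # close a block that ends the routing
            blk = frozenset(block)
            r_prime.add(blk)
            new_ops.append((blk, setup, proc))
        reduced[k] = new_ops
    return r_prime, reduced
```

For the case study, `routings` would map each of the ten MO to its operation list from Fig. 4 and `bottlenecks` would be {"XTS", "XT4"}.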

4. Illustration

4.1. The elements of the problem

The case described below concerns the manufacture of an assembled finished product whose parts are machine-made. This real case has been simplified for the sake of clarity. The routing file in Fig. 4 shows that this case study puts 17 work centers into operation, where Tu is the run time and Ts the setup time. The realization of the routing of the furs is subject to the production of the shirt ranges with their packing; the problem is the same for the distributor of the furs, the secondary bushel and the bushel. By hypothesis, the re-planning period considered concerns a group of ten MO for these six items, resulting from a predictive MPS (for example, it was not possible to complete four MO in the previous period).

4.2. Experimentation

First, the load in hours must be calculated for the period. Table 1 illustrates this. It highlights the conjunctural bottleneck XTS; the structural bottleneck XT4 is also heavily loaded. In this study the available maximum capacity is 120 h. First of all, we construct a finite-capacity schedule using MPCS software (we used the Prelude Production software). Secondly, we build a so-called complete model enabling us to dynamically simulate this schedule (using the ARENA software), so as to determine whether we obtain the same results for the utilization rates on the bottlenecks and the production time for the 10 MO in question. The same MO portfolio was used with ARENA, with the same schedule and the same priority rules. We then simulate successively using constant, then variable, parameters. This case is a reflection of what actually takes place in the shops.


Fig. 4. The distributor's simplified routing file (Tu = run time, Ts = setup time). Routings: SHIRT: TNB, EB1, XTS, LH3, RE3, RI1, EB1, CM0. WRAPPER: TNB, EB1, RC3, RI3, CM0. SECONDARY BUSHEL: TNB, LH3, RE3, EB1, XTS, LH3, RI1, LH3, EB1, XT4, CM0. BUSHEL: TNB, RE3, GC1, PS1, EB1, XTS, EB1, XT4, CM0. FURS: AJ0, LH3, RE3, EB1, EB2, EB1, EA6. DISTRIBUTOR: LH3, EB4, RE3, EB4, EB1, LH3, RE3, EB1, PE0.

Indeed, as maximizing the bottleneck utilization rate is the No. 1 criterion for decision making, the static view of the MO schedule proposed by the MPCS (Fig. 5) is no longer sufficient. On this software-generated Gantt chart we notice that the system proposes a schedule which apparently saturates the XTS bottleneck completely (120 h); Fig. 5 shows a smaller load (112 h) on XT4. If problems arise during a period separating two MPS, the manager's role will be to reschedule the tasks remaining to be completed, the main aim being to saturate the bottlenecks. This is the only way to maximize the work performed in the finite time at his disposal until the next MPS. The discrete-event simulation of flows enables the time variables to be visualized. We denote by μ variance the parameter characterizing the variability of the service level on the WC of the MO routings. In fact, simulation enables us to show that the distribution of the visible loads on these WC is totally theoretical: the time variables create temporary overloading and under-loading on the bottlenecks. The complete model is made up of 275 ARENA basic modules. This is explained by the fact that it is not sufficient to model resources: distinctions must be drawn in the model between manufacturing and setup, and the MO for which the machine has been set up must be carried out before another MO begins with its own setup at this WC. The machines used for this process represent 17 WC.

Table 1
Load table (load in hours per work center and MO; Qty = number of parts per MO)

       MO1    MO2    MO3    MO4    MO5    MO6    MO7    MO8    MO9    MO10   Σ
Qty    10     10     10     10     10     27     27     27     27     10
TNB    2.1    2.3    1.7    1.8    0      5.1    5.7    4      4.3    0      27
LH3    1      0      4.5    0      1      1.9    0      9.6    0      2.4    20.4
AJ0    0      0      0      0      1      0      0      0      0      0      1
RE3    3      0      2      1.5    8.5    5.6    0      3.7    2.4    3.7    30.4
EB1    2.9    1      1.8    1.7    1.2    7.2    2.1    4      3.7    2.2    27.8
EB2    0      0      0      0      1.1    0      0      0      0      0      1.1
EB4    0      0      0      0      0      0      0      0      0      0.7    0.7
XTS    11     0      11     11     0      29     0      29     29     0      120
RI1    1.8    0      4.5    0      0      2.3    0      9.6    0      0      18.2
XT4    0      0      15     15     0      0      0      41     41     0      112
EA6    0      0      0      0      2.6    0      0      0      0      0      2.6
CM0    0.1    0.1    0.1    0.1    0      0.3    0.3    0.3    0.3    0      1.6
RC3    0      2.7    0      0      0      0      3      0      0      0      5.7
RI3    0      3.1    0      0      0      0      5      0      0      0      8.1
GC1    0      0      0      2.6    0      0      0      0      4.5    0      7.1
PE0    0      0      0      0      0      0      0      0      0      13     13
PS1    0      0      0      1.2    0      0      0      0      2.2    0      3.4
Σ      21.9   9.2    40.6   34.9   15.4   51.4   16.1   101    87.4   22

Fig. 5. MPCS work center schedule.

The complete model is thus made up of 17 times the 14 ARENA basic modules necessary for parameterizing the events relative to a WC, plus a part representing the modeling of MO release composed of 30 ARENA basic modules. The mean completion time for the 10 MO using the complete model is 478 h as against 232 h according to the MPS resulting from the MPCS (here the overall lead time exhibits a large difference, 478 h rather than 232 h; this is due to the fact that we chose a large batch size and a high frequency of service). The variability indicator of the overall lead time is 10 h. We thus detect a considerable influence of the time data relative to the frequency parameters controlling the waiting queues in the system (Table 2), and we see the interest of dynamic simulation for optimizing production schedules. Simulation on a complete model with constant parameters obviously provides the same results as the MPCS; however, the model with variable parameters highlights waiting-queue phenomena.


Table 2
MPCS vs. complete model comparison

Criteria               MPCS    Complete model,       Complete model,
                               constant parameters   variable parameters
Utilisation rate XTS   89%     89%                   86%
Utilisation rate XT4   82%     82%                   70%
Global lead time       232 h   232 h                 478 h
XT4 load               112 h   112 h                 108.4 h
XTS load               120 h   120 h                 132.3 h
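The waiting-queue effect behind the constant/variable gap in Table 2 can be reproduced with a very small queueing sketch, independent of ARENA. All figures below are illustrative assumptions, not the case-study data: a single work center processes a stream of jobs, either with a constant processing time or with an exponentially distributed one of the same mean, and the mean flow time inflates in the variable case even though the utilisation is identical.

```python
# Minimal single-work-center sketch (illustrative data, not the case study):
# same mean processing time, constant vs. exponentially distributed,
# to show how service variability inflates flow time through queueing.
import random

def mean_flow_time(n_jobs, interarrival, mean_service, variable, seed=1):
    """Average time a job spends in the system (waiting + processing)."""
    rng = random.Random(seed)
    server_free_at = 0.0
    total_flow = 0.0
    for i in range(n_jobs):
        arrival = i * interarrival
        service = rng.expovariate(1.0 / mean_service) if variable else mean_service
        start = max(arrival, server_free_at)      # wait if the work center is busy
        server_free_at = start + service
        total_flow += server_free_at - arrival
    return total_flow / n_jobs

print("constant :", round(mean_flow_time(1000, 1.0, 0.9, variable=False), 2))
print("variable :", round(mean_flow_time(1000, 1.0, 0.9, variable=True), 2))
# At the same 90% utilisation, the exponential case typically shows a flow time
# several times larger, mirroring the 232 h vs. 478 h gap reported in Table 2.
```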

Table 3
MPCS, complete model and reduced model comparison

Criteria               MPCS    Complete model,       Reduced model,        Reduced model,
                               constant parameters   constant parameters   variable parameters
Utilisation rate XTS   89%     89%                   99%                   98%
Utilisation rate XT4   82%     82%                   91%                   91%
Global lead time       232 h   232 h                 478 h                 483 h
XT4 load               112 h   112 h                 120 h                 120.2 h
XTS load               120 h   120 h                 130 h                 129.3 h

4.3. Implementing the reduced model

The remaining difficulty in this problematic is the time required to:
1. create or modify the model,
2. perform simulations,
3. analyze and decide on re-planning.
Hence the interest of the reduced model (Table 3). The parameterization and simulation of the reduced model are quicker (time save ratio of 1.65) than those of the complete model (remember that the time save ratio captures the gain in both the time to simulate and the time to parameterize the model). In fact the reduced model only contains 79 modules (the reduction ratio is then 3.5). The average completion time of the MO portfolio is 483 h and the standard deviation is 12.06 h. It would thus appear that the reduced model leads us to slightly overestimate the MO completion time with respect to the complete model, but only minimally: the lead time of 483 h is indeed only slightly greater, and the level of error is very low (only a 5 h difference).

5. Analysis of the simulation parameters

The interest of such a model is the scope it provides to perform more simulations. If the organization allows it, the user is free to vary the size of the transfer batches and to see instantly the impact on the true dynamics of the parts. The model also enables the service level to be varied if required (stand-in operator or any production problem), and/or any other parameter reflecting the true dynamics of the workshop, in view of the information the user receives describing the events having taken place in this very short period. We have sought to highlight the effect of the factors that appear essential to us in the parameterization of the model, i.e. the variability in time frequencies, the number of work centers and the size of the batches.


For this we have experimented with complete and reduced models. We have used experimental designs, and in particular Taguchi's orthogonal designs [14,17]. Taguchi perfected a method that uses fractional, orthogonal tables to build experimental designs and study the effects of the factors of a system (Fig. 6). In cases where the system under study is subject to the effect of external, uncontrollable variables, he recommends the use of orthogonal or product plans, which can be optimized through the use of a signal-to-noise ratio (S/N). We have implemented designs having three, then four, modalities for the factors variability of the service level (noted μ variance), batch size and number of WC. Indeed, as the expected responses were imagined to be non-linear, we had to define more than two modalities per factor. We implemented four complete models and one reduced model: one model corresponding to the reduction criteria defined above does exist, so the different complete models were reduced to the same (reduced) model using the same method. The responses studied were the global lead time and the bottleneck load rate. Fig. 7a and b show the graphs of the effects of the factors on the mean lead time. In this paper we do not show the interactions between factors: we were only seeking parameter combinations that would lead to extremums, and in this case we wish to analyze the evolution of the indicators with regard to the evolution of the parameters, not to find the value of the extremums (lead time, load rate, etc.). So as to analyze the variances in repetition, each experiment was performed 20 times using the PlanExpert software. The use of repetition enabled us to highlight the fact that the reduced model performs differently from the MPS but similarly to the complete model (Table 4). The analysis of the responses and the construction of the models in both cases have enabled us to show that the batch size and μ variance factors are significant for the mean of the responses (Fig. 7a). Indeed, the graph relating to the factor number of WC exhibits variations of little significance compared to those of the other factors. This is valid for the four modalities. We also performed a variance analysis to test the factors' degree of significance (Table 5). However, all three factors have a considerable influence on the variation of the responses (Fig. 7b and Table 6), although the effect of the number of WC remains minimal with the reduced model. Finally, we were able to show that this factor (the number of WC) has little effect on the differences between the average indicators of global lead time and bottleneck load level obtained by dynamic simulation and those given by the predictive scheduling of the MPCS.
Fig. 6. Taguchi experimental designs: control factors and input data act on the studied system; varying the control factors according to the Taguchi method yields the factor effects on the system response.


Fig. 7. Effects of the factors (μ variance, number of WC, quantity per lot) on (a) the mean lead time and (b) the lead-time variability, for the reduced and the complete model.

Table 4
Studied factors summary

Factor         Modality 1   Modality 2   Modality 3   Modality 4
μ variance     0.5σ         1σ           2σ           3σ
Nbr of WC      10           12           17           22
Quantity/lot   5            10           30           50

The variance of these mean differences, however, is clearly influenced by all three factors.
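The factor analysis itself can be reproduced outside a dedicated package such as PlanExpert with a few lines of code. The sketch below is purely illustrative: it runs a plain full factorial design over the modalities of Table 4 (rather than Taguchi's fractional orthogonal array) with 20 replications against a toy response function that stands in for a simulation run; the function, names and response model are assumptions of ours, not the study's actual models.

```python
# Illustrative sketch of a factorial experiment on the three studied factors
# (mu variance, number of WC, quantity per lot) with replications, computing
# the main effect of each factor level on the mean response. The response
# below is a toy stand-in for a simulation run, not the paper's ARENA models.
import itertools
import random
import statistics

LEVELS = {                                   # modalities taken from Table 4
    "mu_variance": [0.5, 1.0, 2.0, 3.0],     # expressed in multiples of sigma
    "nbr_wc":      [10, 12, 17, 22],
    "qty_per_lot": [5, 10, 30, 50],
}
REPLICATIONS = 20

def toy_lead_time(mu_variance, nbr_wc, qty_per_lot, rng):
    """Hypothetical response: lead time grows with batch size and variability."""
    base = 5.0 * qty_per_lot + 2.0 * nbr_wc
    noise = rng.gauss(0.0, 10.0 * mu_variance * qty_per_lot ** 0.5)
    return base + abs(noise)

rng = random.Random(42)
results = []                                 # one record per run: (settings, response)
for combo in itertools.product(*LEVELS.values()):
    settings = dict(zip(LEVELS, combo))
    for _ in range(REPLICATIONS):
        results.append((settings, toy_lead_time(rng=rng, **settings)))

# Main effect of a factor level = mean response over all runs at that level.
for factor, levels in LEVELS.items():
    effects = {
        level: statistics.mean(r for s, r in results if s[factor] == level)
        for level in levels
    }
    print(factor, {k: round(v, 1) for k, v in effects.items()})
```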

Table 5
Variance analysis, effect on the mean

Actions      Square sum      DF   Variance       F        QF       Significance
μ Variance   736947.5342     3    245649.1781    6.8869   0.0227   S
Nbr WC       35351.0524      3    11783.6841     0.3304   0.8042
Q/lot        1036938.0830    3    345646.0276    9.6903   0.0102   S
Residuals    214014.6709     6    35669.1118
Total        2023251.3400    15

Table 6
Variance analysis, effect on the variance

Actions         Square sum      DF   Variance       F          QF       Significance
μ Variance      3684737.6710    3    1228245.890    287.0081   0.0000   S
Nbr WC          176755.2619     3    58918.4206     13.6777    0.0000   S
Q/lot           5184690.4140    3    1728230.138    403.8410   0.0000   S
Residuals       1343690.1810    70   19199.4312
Total           10390143.530    79
Used variance   273886.8267     64   4279.4817

This means that between the initial MPS resulting from the MPCS and the results of the simulation on a reduced model, the variability of the results can in some cases be considerable (Fig. 8). This, however, is not due to the model being reduced. In Fig. 8, corresponding to the case of 22 WC, a batch size of 50 (the biggest) and a μ variance equal to 3σ (the biggest), we note that there are no significant differences between the models that have been dynamically simulated; the difference lies between the dynamic simulations and the results predicted by the MPCS (which is normal). This type of model, reduced in this way, thus has the following advantages:
1. On the critical centers and on the synchronization centers it maintains the qualities of the complete model.
2. It enables simulations to be performed more quickly.
3. It reduces decision-making time.
Fig. 8. Comparison of the results: lead time (LT) for the MPS, the complete simulated model and the reduced simulated model over the 16 experimental runs.


Fig. 9. Advantages and risk of the reduced model simulation.

Fig. 10. Interest for the simulation: difference (in seconds) between the MPS schedule from the MPCS and the simulation results, as a function of the order quantity (1 to 1000) and of the service-time variability (from a fixed time of 1 m to normal(1 m, 4 s)).

Moreover, the major advantage appears when the released batches are large on the one hand, and when the MO relate to products from the same technological family on the other (and are thus highly similar in their manufacturing process). The reduced model then highlights the differences between the planned and the simulated reality. What is more, it is when these released batches are large and the variance of the service and arrival frequencies of the parts is high that the risk in decision making is highest (Fig. 9). The analysis of these different parameters has enabled us to highlight an area for which simulation has a definite interest (Fig. 10) compared to the scheduling decisions of the MPS [19]. The circled area shows the difference (expressed in seconds) between the scheduling result of the MPS obtained from the MPCS and the simulation, brought about by changes in the WC service-level parameters and in the batch size.

6. Conclusion

We have shown in this article that dynamic simulation of flows on reduced models can, in certain cases, be interesting and advantageous. Following the description of our method for constructing reduced models, we have demonstrated the effects of the significant factors on the scheduling results for an MPS. In particular, we have been able to reveal an area for which dynamic flow simulation was not appropriate.


We have observed a certain genericity in reduced models. We wish to work on this aspect of the problem in more detail so as to show that it may be possible to create data banks of standard reduced models. We believe that the way the parameters of the aggregated blocks are calculated may differ according to the case under study and that, consequently, they might act differently on the model. Indeed, according to the type of statistical law used to define the behavior of the WC, the calculation of the parameters of the aggregated blocks may be different. For example, if the service times of the WC are constant or follow symmetrical distributions, it is sufficient to sum them to obtain the service time of the block. Other investigations are required in this field. Finally, thought needs to be given to the relevant methods for the rapid re-planning of the MPS concerned. Indeed, when a problem arises in the shop, the planner should, with the aid of the dynamic simulation of flows performed on these reduced models, be able to propose a new schedule for the MO remaining to be made before the next MPS is calculated. This problem is similar to the shifting bottleneck heuristic put forward by Adams et al. [1], in which a single-machine scheduling problem with release and delivery times is solved. A schedule needs to be found on this bottleneck machine which complies with a certain number of constraints; this should be done by saturating the critical bottlenecks whilst meeting the due dates.

References
[1] J. Adams, E. Balas, D. Zawack, The shifting bottleneck procedure for job-shop scheduling, Management Science 34 (3) (1988).
[2] R. Belz, P. Mertens, Combining knowledge-based systems and simulation to solve rescheduling problems, Decision Support Systems 17 (1996) 141-157.
[3] R.J. Brooks, A.M. Tobias, Simplification in the simulation of manufacturing systems, International Journal of Production Research 38 (2000) 1009-1027.
[4] P. Charpentier, A. Thomas, Model reducing method for the scheduling decision-making, IEPM'2001, Québec, 2001.
[5] M. Doumeingts, Conception de systèmes flexibles de production, Laboratoire GRAI, 1989.
[6] E.M. Goldratt, J. Cox, Le but, Ed. AFNOR Gestion, 1986.
[7] Y.F. Hung, R.C. Leachman, Reduced simulation models of wafer fabrication facilities, International Journal of Production Research 37 (1999) 2685-2701.
[8] J.S. Hwang, S. Hsieh, H.C. Chou, A Petri-net based structure for AS/RS operation modelling, International Journal of Production Research 36 (1999) 3323-3346.
[9] G.S. Innis, E. Rexstad, Simulation model simplification techniques, Simulation 41 (1983) 7-15.
[10] R.C. Leachman, Preliminary design and development of a corporate-level production planning system for the semi-conductor industry, Optimization in Industry, Chichester, UK, 1986.
[11] H. Li, Z. Li, L.X. Li, B. Hu, A production rescheduling expert simulation system, European Journal of Operational Research 124 (2000) 283-293.
[12] Y.C.E. Li, W.H. Shaw, Simulation modeling of a dynamic job shop rescheduling with machine availability constraints, CIE 35 (1998) 117-120.
[13] M. Khouja, An aggregate production planning framework for the evaluation of volume flexibility, Production Planning and Control 9 (2) (1998) 27137.
[14] M. Pillet, Introduction aux plans d'expériences par la méthode Taguchi, Éditions d'Organisation Université, Paris, 1992.
[15] A. Pritsker, K. Snyder, Simulation for planning and scheduling, APICS (1994).
[16] P. Roder, Visibility is the key to scheduling success, APICS, Planning and Scheduling (1994) 53.
[17] G. Taguchi, Orthogonal Arrays and Linear Graphs, American Supplier Institute Press, 1986.
[18] A. Thomas, Pour un pilotage dynamique et intégré, Revue Logistique et Management 7 (1999) 43-55.
[19] A. Thomas, P. Charpentier, Pertinence de modèles réduits pour la prise de décision en réordonnancement, CPI'2001, Fès, 2001.
[20] T.Y. Tseng, T.F. Ho, R.K. Li, Mixing macro and micro flowtime estimation model: wafer fabrication, International Journal of Production Research 37 (1999) 2447-2461.
[21] Vollmann, Berry, Whybark, Manufacturing Planning and Control Systems, Business One Irwin, 1992.
[22] B.P. Zeigler, Theory of Modelling and Simulation, Wiley, New York, 1976.
