
Business Process Management Journal

Emerald Article: Analyzing process uncertainty through virtual process simulation
Joel H. Helquist, Amit Deokar, Jordan J. Cox, Alyssa Walker

Article information:
To cite this document: Joel H. Helquist, Amit Deokar, Jordan J. Cox, Alyssa Walker, (2012), "Analyzing process uncertainty through virtual process simulation", Business Process Management Journal, Vol. 18 Iss: 1, pp. 4-19. Permanent link to this document: http://dx.doi.org/10.1108/14637151211214984



BPMJ 18,1

Analyzing process uncertainty through virtual process simulation


Joel H. Helquist
Department of Accounting, Utah Valley University, Orem, Utah, USA

Amit Deokar
College of Business and Information Systems, Dakota State University, Madison, South Dakota, USA, and

Jordan J. Cox and Alyssa Walker


Department of Mechanical Engineering, Brigham Young University, Provo, Utah, USA
Abstract
Purpose – The purpose of this paper is to propose virtual process simulation as a technique for identifying and analyzing uncertainty in processes. Uncertainty is composed of both risks and opportunities.
Design/methodology/approach – Virtual process simulation involves the creation of graphical models representing the process of interest and its associated tasks. Graphical models representing the resources (e.g. people, facilities, tools, etc.) are also created. The members of the resource graphical models are assigned to process tasks in all possible combinations. Secondary calculi, representing uncertainty, are imposed upon these models to determine scores. From the scores, changes in process structure or resource allocation can be used to manage uncertainty.
Findings – The example illustrates the benefits of utilizing virtual process simulation in process pre-planning. Process pre-planning can be used as part of strategic or operational uncertainty management.
Practical implications – This paper presents an approach to clarify and assess uncertainty in new processes. This modeling technique enables the quantification of measures and metrics to assist in systematic uncertainty analysis. Virtual process simulation affords process designers the ability to more thoroughly examine uncertainty while planning processes.
Originality/value – This research contributes to the study of uncertainty management by promoting a systematic approach that quantifies metrics and measures according to the objectives of a given process.
Keywords Process analysis, Virtual process simulation, Process modeling, Risk management, Uncertainty management, Modelling, Opportunity management
Paper type Research paper

Business Process Management Journal Vol. 18 No. 1, 2012, pp. 4-19 © Emerald Group Publishing Limited 1463-7154 DOI 10.1108/14637151211214984

Introduction
Business processes today have become increasingly complex. This complexity arises from factors such as globalization, narrow market windows, and fluctuating economic environments. Organizations have adopted various business process management solutions in an effort to offset this complexity. Strategies for pre-planning business processes have become imperative for successful process or product launches. Part of the pre-planning process is the identification and management of the uncertainties that will be

encountered in the business process. These actions enable the organization to proactively identify potential issues with the process and develop appropriate responses or contingency plans. This paper proposes the use of virtual process simulation (VPS) as a systematic method for organizations to examine processes as part of a risk or uncertainty management strategy. This analysis aids business process management efforts in the analysis and design phases of the business process management lifecycle (Muehlen and Ho, 2006). VPS is an approach to business process modeling that uses network graphs to quantify the uncertainty within a business process. This analysis enables the organization to identify potential risks and opportunities before the process is implemented.

VPS differs from simulations such as Monte Carlo. The Monte Carlo method assumes the structure of the process remains fixed while the variables in the process (e.g. data, inputs, outputs, and artifacts) are perturbed to determine sensitivities. For example, time sensitivities can be analyzed by altering time durations. Similarly, the impact of variability in raw materials pricing can be analyzed. In VPS, the objective is to identify beneficial changes to the process structure (e.g. tasks, task sequence, actors, roles, etc.). This constitutes a different type of simulation. For example, a new product development process may contain several tasks that possess flexibility with regard to sequence. VPS enables modeling to examine the options that arise from this flexibility. Although outside the scope of this paper, certain processes may benefit from first executing VPS to examine process structure variability, followed by a Monte Carlo simulation to examine variable sensitivity.

Uncertainty management
Risk management is the process of identifying potential risks to the specific objectives of a process and addressing those risks systematically.
Risk management is typically associated with limiting exposure to negative events and their potential consequences. For example, in product manufacturing, these risks could include such things as a tumultuous labor supply or unpredictable production schedules. Similarly, the risks may include the formation and use of a new project team; the actors on the project may not have experience working together, potentially creating unknowns regarding the exchanges and dependencies between actors.

The term risk management has been argued to unnecessarily limit the scope of attention to uncertainties (Ward and Chapman, 2003). Oftentimes, the word risk is associated with events that are typically negative. Thus, risk management may encourage a threat perspective, focusing specifically on potential negative events that would impact the process. Instead, Ward and Chapman (2003) argue that the focus should be placed on uncertainty management rather than risk management. Uncertainty leads to both threats (risks) and opportunities (Hillson, 2002). This change from an event-based management approach to an uncertainty management approach fosters a more comprehensive analysis of potential opportunities as well as risks that may arise from uncertainty. Thus, uncertainties in a given process can be divided into two types: uncertainties that may create a negative impact on the business process (risks) and uncertainties that may create unforeseen benefits (opportunities) (Perminova et al., 2008). A comprehensive uncertainty management strategy should encompass both of these aspects.



The sources of uncertainty include variability and ambiguity (Ward and Chapman, 2003). Variability in such things as time requirements, quality, and efficiency can lead to risks from a project management standpoint (Atkinson et al., 2006). However, variability in manufacturing can sometimes produce robustness in the product, thereby representing an opportunity. Ambiguity may also exist with regard to such things as the desired outcome or the process by which the outcome is developed. This lack of clarity may lead to considerable risks when trying to model or manage a process (Keizer and Halman, 2007). Equally, it may lead to strategic market opportunities. Oftentimes, companies choose to implement two separate strategies for dealing with risks and opportunities even though the sources are the same.

Risk management strategies
Once risks have been identified as part of an uncertainty management program, the organization develops an appropriate response. The strategies for addressing risks can be categorized as one of the following (Hillson, 2002; Royer, 2000):
• avoidance;
• transfer;
• assumption; and
• mitigation.

Avoidance refers to the organization redesigning the process to avoid the specific risk. However, this approach is not always feasible. For example, new or emergent processes may possess an inherent amount of risk, due to their nascent status, that cannot be avoided. Risk transfer includes steps to place the risk on another party. Examples of risk transfer include outsourcing the specific process or insuring against a potential loss. As with avoidance, this approach is not always feasible or practical. A third type of risk management response is to accept, or assume, the consequences or outcomes of the risk. The organization assesses the risk and determines that either the probability of the risk occurring or the impact of the risk is low enough that a response is not required.
In these scenarios, the organization assumes the risk and the consequences that may come with it. Finally, organizations may choose to mitigate the risk. Mitigating the risk means specific measures are taken to develop appropriate responses, measures, or controls to reduce the risk. The objective of the response is to reduce the risk to an acceptable level, either by reducing the probability of the risk occurring or the impact of its consequences. Risk mitigation is an ongoing process whereby the risks are analyzed, specific measures designed and deployed, and the outcomes monitored. The results are closely examined to determine what changes need to be made during the next iteration of risk management. Overall, risk management requires that an analysis be made, starting in the pre-planning stages, to identify risks and the corresponding strategies. For example, in engineering, failure modes and effects analysis (FMEA) is a method of risk assessment and risk management focused specifically on a product or technology (McDermott et al., 2008; Stamatis, 2003). VPS can be viewed as part of an FMEA for processes.

Opportunity management strategies
In the same way that risks must be identified and managed, potential opportunities need to be identified and strategically exploited where feasible. VPS can aid in the identification of potential opportunities. Once these opportunities have been identified, the organization can develop an appropriate strategy for exploiting the possible benefits. These strategies include the following:
• exploit;
• trade;
• ignore; and
• protect.

These strategies are similar to previous literature addressing opportunity management strategies (Hillson, 2002, 2003). Exploit refers to instances where the organization fully engages the given opportunity. Management may alter strategic objectives or modify which processes are implemented in an attempt to maximize the benefits afforded by an unforeseen opportunity. Trade refers to the organization electing to negotiate the transfer of an opportunity to a different organization. For example, an organization may trade the rights to an opportunity to a trusted supplier or business partner. The organization may also elect to ignore the current opportunity; in this category, the costs may not be compatible with the benefits, or the opportunity may not fit within the overall strategy of the organization. Finally, the organization may elect to protect a given opportunity by filing patents or arranging contractual obligations to safeguard the opportunity. These strategies are not argued to be exhaustive or mutually exclusive.

VPS can be used to explore the risk and opportunity management strategies. The models created in VPS attempt to explore all of the possible process configurations and their respective teams or groups of actors for a given strategy. The impact of risks and opportunities can then be quantified.

Enterprise risk management framework
Frameworks exist to assist in deploying systematic uncertainty management activities.
One such framework is the Committee of Sponsoring Organizations (COSO) Enterprise Risk Management – Integrated Framework. This framework aids organizations in achieving objectives in four categories (Moeller, 2007):
(1) strategic – high-level goals of the organization;
(2) operations – effective and efficient use of resources;
(3) reporting – reliability of reporting; and
(4) compliance – compliance with laws and regulations.

These categories enable an organization to view risks and opportunities from different perspectives, promoting a more thorough review and examination of the uncertainties. In this paper, VPS is presented as a means to aid in the planning of strategic and operations objectives. Specifically, this modeling process enables management to assess strategic uncertainty by comparing competing processes or business alternatives. This strategic level of uncertainty management enables organizations to more systematically identify and seize opportunities. Likewise, the modeling process can be used to address



operational uncertainty, including how to design and deploy a given process. This operational analysis leads to a more systematic examination of the uncertainty associated with differing deployment alternatives and the selection of a deployment option that best fits organizational and process objectives.

VPS and uncertainty management
The process modeling approach presented here is a framework that can help identify and quantify uncertainty. The overarching goal of this framework is to enable managers to analyze the possible processes that could be used to deploy a venture prior to actual deployment. The managers examine the competing options, selecting the specific alternative that maximizes the goals of the process. This approach utilizes the following major steps. First, graphical models of the process tasks and the actors completing the tasks are created using graph-like models. Next, a library of autonomous agents is created, each agent representing a combination of a given actor and a specific task. From these agents, all of the potential process deployment options are enumerated. Finally, each deployment option is scored and evaluated.

The first step in process modeling is to create graphical models representing the tasks within the overall process and the actors that will be executing the tasks. These graphical models are constructed using workflow diagrams. Each node in the graph represents a specific task to be executed, and the edges represent the dependencies between the tasks. From this, a specific library of tasks is generated. The library includes each task along with the required inputs and outputs for that task. The goal of this step is to identify all the tasks in the process along with their respective inputs and outputs. For example, Figure 1 shows a portion of a new product development process in BPMN[1]. The specific modeling notation used to create the graphical representations is not consequential.
The graphical representation merely serves to identify the nodes and the dependencies between them. The three tasks illustrated represent the graph of the process. The first two tasks are completed in parallel prior to the final task being executed. Figure 1, as shown here for the sake of simplicity, utilizes a model where the task sequencing is predetermined. The VPS approach can also be used in situations where the task sequencing is flexible rather than rigid. In flexible scenarios, unknowns exist with regard to the sequencing of particular tasks within the overall process. Based on this process graph, Table I represents the library of tasks. This library lists each task, its required inputs, and its outputs.

Graph-like models are then created to graphically represent the organizations and individuals that are part of the process. Each node can represent an organizational role, employee, or contractor that may be performing a given task. All of the potential participants in the process are identified and a library of actors is created, including
Figure 1. Process graph: within the Technology Company pool, Market Analysis (Task 1) and Concept Development (Task 2) run in parallel, followed by Concept Selection (Task 3).

the specific skill or experience information used later in quantifying metrics. This information often resides within HR but typically has not been organized in this manner. Figure 2 shows a portion of the organization chart for this process. In this example, two different teams each have personnel that may participate in the process. Each of the actors is enumerated and the actor library, Table II, is populated. The actor library lists the actor, skill sets, and any other relevant metrics (e.g. quality level, workload information, availability). Once again, these metrics must be created from known information about the actors. Similar to the actor library, a resource library listing the tools and techniques that are used to carry out automatic or semi-automatic tasks is also created. For the sake of simplicity, the discussion in this article is restricted to the use of the actor library.

Once the tasks and actors are enumerated, autonomous agents are created. An agent is defined as an actor assigned to a specific task. Each of the tasks from the task library is matched with the actors that have the skills to complete the task. The agent definition may also include such things as a minimum level of matching between the task and skill sets, or even actor availability. This selective matching reduces the total number of agents created and, as a result, the total number of potential process options. Each combination of task and actor is identified, creating the agent library (see Table III). Additional types of resources can be incorporated into the agent definition, such as facilities and tools.

Once the agent library is compiled, all of the potential process alternatives or deployment options can be enumerated. This enumeration utilizes backwards mapping of the agents, starting with the agent that produces the desired output of the process. The required inputs for this agent are identified, and the agents whose outputs satisfy those inputs are then identified.
The process is repeated until all the possible combinations of agents are enumerated. However, there are often less desirable or infeasible agent combinations; these should be pruned before identifying potential process alternatives. Table IV presents the possible process deployment options for this example.

From the enumeration of process options, it is possible to extract the actors involved in each option. A resulting actor graph-like model can be constructed, representing the dependencies and exchanges between the actors. These actor models can be used to assess process alternatives. The exchanges between actors represent a source of unknowns and an opportunity for performance improvements (Rummler and Brache, 1995).

Once the possible process options are identified, as represented by the network graph-like models, accompanying analysis can be conducted. Each agent mapping in the agent library is linked to relevant metrics and characteristics. For example, the metrics identified in the actor library can be applied so that each deployment option can be scored based on a secondary calculus. The purpose is to compare the deployment options against each other to identify the specific alternative that produces the optimal results, based on the objectives and metrics of interest.
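The backwards mapping just described can be sketched in code. This is an illustrative sketch rather than the authors' implementation: the task, actor, and agent data mirror the example libraries, inputs A and B are assumed to be supplied from outside the process, and skill matching is simplified to a yes/no check.

```python
from itertools import product

# Task library: task -> required inputs and produced outputs
tasks = {
    "T1": {"inputs": {"A"}, "outputs": {"C"}},
    "T2": {"inputs": {"B"}, "outputs": {"D"}},
    "T3": {"inputs": {"C", "D"}, "outputs": {"E"}},
}
# Actor library skills: actor -> tasks the actor can perform
skills = {"A1": {"T1", "T2"}, "A2": {"T3"}, "A3": {"T1"}, "A4": {"T2", "T3"}}

# Agent library: every feasible (task, actor) pairing
agents = [(t, a) for t in tasks for a in skills if t in skills[a]]

external = {"A", "B"}  # assumed to be supplied from outside the process

def options_for(output):
    """Backwards mapping: enumerate the agent sets able to produce `output`."""
    if output in external:
        return [frozenset()]  # nothing to assign for an external input
    results = []
    for task, actor in agents:
        if output in tasks[task]["outputs"]:
            # one way to satisfy each required input, taken in combination
            ways = [options_for(i) for i in tasks[task]["inputs"]]
            for combo in product(*ways):
                results.append(frozenset().union(*combo) | {(task, actor)})
    return results

options = options_for("E")
print(len(agents), len(options))  # 6 agents, 8 deployment options
```

Pruning less desirable or infeasible combinations would then filter the returned list before scoring.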
Table I. Task library

Task    Inputs    Outputs
T1      A         C
T2      B         D
T3      C, D      E


Figure 3 shows a scoring for enumeration number four from Table IV. In this scoring, the metrics of time and cost were used. In this option, the shortest possible time to complete the process would be five days. Tasks one and two can be completed in parallel, with task two requiring the longest time at four days. Task three can then be completed in one day as soon as tasks one and two are complete. The total cost of this option would be $1,800, summing the costs of the three separate tasks.
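The scoring just described reduces to a short calculation: completion time is the longer of the two parallel tasks plus the final task, and cost is a plain sum. The cost and duration figures below are those reported for enumeration four; the parallel structure follows the process graph.

```python
# Cost ($) and duration (days) per agent for enumeration four
metrics = {
    "Agent 1 (Task 1)": (700, 3),
    "Agent 3 (Task 2)": (900, 4),
    "Agent 5 (Task 3)": (200, 1),
}

# Tasks 1 and 2 run in parallel; task 3 starts once both have finished
parallel_days = max(metrics["Agent 1 (Task 1)"][1],
                    metrics["Agent 3 (Task 2)"][1])
total_days = parallel_days + metrics["Agent 5 (Task 3)"][1]
total_cost = sum(cost for cost, _ in metrics.values())

print(total_days, total_cost)  # 5 days, $1,800 -- matching the text
```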

Figure 2. Organizational chart: the US Team comprises Actor 1 and Actor 2; the Indian Team comprises Actor 3 and Actor 4.

Table II. Actor library

Actor          Location    Skill set/skill level     Quality level
Actor 1 (A1)   USA         T1 high, T2 medium        T1 90%, T2 70%
Actor 2 (A2)   USA         T3 high                   T3 90%
Actor 3 (A3)   India       T1 medium                 T1 70%
Actor 4 (A4)   India       T2 high, T3 medium        T2 90%, T3 70%

Table III. Agent library

Agent     Task    Actor
Agent 1   T1      A1
Agent 2   T1      A3
Agent 3   T2      A1
Agent 4   T2      A4
Agent 5   T3      A2
Agent 6   T3      A4

Table IV. Enumeration of deployment options

Enumeration    Task 1     Task 2     Task 3
1              Agent 1    Agent 3    Agent 6
2              Agent 2    Agent 3    Agent 6
3              Agent 1    Agent 4    Agent 6
4              Agent 1    Agent 3    Agent 5
5              Agent 2    Agent 4    Agent 6
6              Agent 2    Agent 3    Agent 5
7              Agent 1    Agent 4    Agent 5
8              Agent 2    Agent 4    Agent 5

Figure 3. Enumeration four metrics: within the Technology Company pool, Agent 1 performs Market Analysis (Task 1) at $700 and 3 days, Agent 3 performs Concept Development (Task 2) at $900 and 4 days, and Agent 5 performs Concept Selection (Task 3) at $200 and 1 day.

Enumeration option five from Table IV is shown in Figure 4. This deployment option would require, at a minimum, eight days to complete, at a cost of $1,500. A variety of metrics may be used depending on the particular goals and objectives of the process. The time metric could represent calendar days to complete the task, or it could represent labor hours. Similarly, cost could be used to analyze labor expenses, or it could be a surrogate for measuring the efficiency and effectiveness of the actor executing a particular task. These values are aggregated to better evaluate each possible deployment option. The method of metric aggregation depends on the metrics used. For example, total length in calendar days requires the calculation of the minimum number of days required, whereas total labor hours requires the summation of all the hours needed to complete each of the tasks.

Following the initial scoring, the metrics may be weighted according to the pertinent process objectives to enable a more thorough comparative analysis of the possible process deployment options. Processes may have different objectives, including such things as time to market, quality of the product produced, or minimization of intellectual property theft. Each primary objective is related to multiple relevant criteria or metrics of interest. For example, if the primary objective is cost minimization, specific metrics may include such things as actor time requirements, actor salary, materials purchasing,
Figure 4. Enumeration five metrics: within the Technology Company pool, Agent 2 performs Market Analysis (Task 1) at $500 and 4 days, Agent 4 performs Concept Development (Task 2) at $800 and 6 days, and Agent 6 performs Concept Selection (Task 3) at $200 and 2 days.


and materials shipping. Alternatively, if the primary objective is optimal resource utilization, metrics such as actor workload information and actor availability may be relevant in addition to metrics such as time requirements and actor salary. The weighting of metrics affords analysis of different combinations of metrics or criteria with respect to multiple process objectives (Atkinson et al., 2006). Multiple criteria decision analysis (MCDA) techniques (Figueira et al., 2005), such as the analytic hierarchy process and analytic network process (Saaty, 2005), are suitable for performing such analysis on the process deployment alternatives, and form the secondary calculus. It is the overall objectives of the process that drive the modeling and the analysis of competing process alternatives (Jaafari, 2001).

The focus of VPS is to support comparison of different deployment options with respect to which resources are utilized, and the sequence in which they are used, as modeled by the agents. The uncertainty with respect to deployment is quantified and compared by way of the agents and the comparisons between the differing deployment options. This approach differs from other methods, such as critical path analysis (CPM), that modify such things as time estimates to analyze uncertainty. Arguably, the CPM approach can be used to complement the VPS approach.

One key element of the VPS technique is the creation of the graph-like models. When creating the graph of the processes and tasks to be completed, a hierarchy of graph-like models is created. At the lowest level, specific tasks are identified. Moving up a level, the tasks can be summarized into different sub-processes. By increasing the level of abstraction, it is possible to select the level of the hierarchy that provides the appropriate level of detail for the analysis. For example, the analysis may occur at a higher process level rather than the more detailed task level.
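The roll-up from the task level to the sub-process level of such a hierarchy can be sketched as follows. The grouping of tasks into sub-processes and the metric values here are hypothetical, chosen only to illustrate the abstraction step; costs sum upward, and durations are assumed sequential within a sub-process.

```python
# Hypothetical two-level hierarchy: sub-process -> constituent tasks
hierarchy = {
    "Analysis": ["Market Analysis", "Technology Assessment"],
    "Design": ["Concept Development", "Concept Selection"],
}
# Hypothetical task-level metrics: task -> (cost in $, duration in days)
task_metrics = {
    "Market Analysis": (700, 3), "Technology Assessment": (400, 2),
    "Concept Development": (900, 4), "Concept Selection": (200, 1),
}

# Aggregate task-level metrics up to the sub-process level of abstraction
rollup = {
    sp: (sum(task_metrics[t][0] for t in ts),   # costs sum upward
         sum(task_metrics[t][1] for t in ts))   # durations assumed sequential
    for sp, ts in hierarchy.items()
}
print(rollup)  # {'Analysis': (1100, 5), 'Design': (1100, 5)}
```

An analysis at the sub-process level would then compare these rolled-up scores rather than the individual task scores.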
By moving to a more abstract level, VPS enables managers to change the objectives of the analysis. Examination of the most detailed (task) level of the graph model hierarchy may be most suitable for analysis of operational risks and issues. Examination of a more abstract process view (a higher level of abstraction) may be more suitable for analyzing strategic issues, as the analysis compares differing processes rather than specific tasks. Higher levels of abstraction may also be necessary when insufficient data exist for a more detailed model.

A similar hierarchy of graph models is created to represent the individuals and groups responsible for executing the tasks and processes. The most detailed level represents specific individuals within an organization. Moving up a level, these individuals can be summarized into departments or other organizational units. Again, the hierarchy facilitates the selection of the appropriate level of detail for the analysis. Certain processes may benefit from identifying specific individuals to complete tasks. Other processes may benefit from using the department, location, or organization as the unit of analysis. See Helquist et al. (2009) and Walker and Cox (2008a, b) for a more thorough review of the modeling process steps, including more information relating to team analysis, objectives, and metrics.

Again, the focus of VPS in uncertainty management is the ability to perform mapping between tasks and resources. Creation of the agents allows for flexibility in analysis that encompasses a variety of actor, task, and resource compositions. Each of the agents possesses certain characteristics or metrics that can be tabulated when evaluating competing processes. Additionally, this approach enables a hierarchical approach, enabling the selection of the most appropriate level of abstraction. As such,

VPS differs from other approaches, such as PERT or CPM, that focus mainly on uncertainty with respect to time and its impact on process completion. For example, using estimates such as optimistic time, most likely time, and pessimistic time, a process designer can examine the impact of time uncertainty on the overall process. PERT and CPM analysis may still be executed on the models that are selected from the VPS process to further analyze and refine the process. VPS may be used as a first step to identify a single alternative, or a few of the top process alternatives, based on the objectives of interest. These top alternatives could then be run through these additional analysis techniques to further examine and select a given deployment option.

An example of using VPS
As discussed in the previous section, the development of VPS models to study business opportunities and assess risks involves the construction of process and organization graphs at different levels of abstraction. An example scenario to illustrate this approach is presented below. Assume that an engineering company is considering the launch of a new product development program involving a significant investment, such as a new aircraft or aircraft engine. Typically, the engineering company identifies several potential concepts, one of which may involve a new technology. The other concepts are considered fall-back concepts in case the development of the new technology fails. The concepts are developed, and at some point a decision must be made as to which product concept will be launched into detailed design and development. The decision is critical as the investment is of the order of a billion dollars. Uncertainty in the eventual design process and program launch represents a significant issue, bringing both opportunity and risk. Figure 5 shows a typical workflow for considering concepts and selecting a preliminary product concept before program launch.
This figure, termed Option 1, represents the development of the new product using existing technology. This network representation is at a high level of abstraction and does not represent all of the individual tasks. Often, in the process of considering new concepts, a new technology, such as a new metal alloy, is identified as a potential opportunity for improving product performance or achieving other efficiencies (e.g. cost and weight). However, the metal alloy or new technology must first be developed, yielding an inherent risk of failure. This risk leads to uncertainty in selecting the product concept that would use the new technology
Figure 5. Workflow graph of Option 1, development with existing technology: Marketing performs Market Analysis; Business performs Concept Development; the Technology Company performs Preliminary Design, Performance Prediction, Design Concept Selection, Technology Assessment, Technology Selection, Configuration Selection, and Material Selection, leading to Launch Detailed Design.


or metal alloy. The workflow graph changes to include the additional activities associated with the development of the new technology. Figure 6 shows the additional steps, termed Option 2. The selection of Option 1 (use current technology) vs Option 2 (develop new technology) requires analysis of more detailed process models at a lower level of abstraction. VPS aims to reduce uncertainty in making the final investment decision by systematically analyzing these two competing options and selecting the deployment option that is in concordance with the objectives.

Figure 6 shows that, compared to Option 1, Option 2 involves additional sub-processes and activities, performed by the research and development division. These include technology development, representing the new technology development process, and opportunity management, representing the decision process following the successful development of the new technology. If the decision is made to invest in the development of the new technology, it is assumed that the technology can be developed. Upon further detailing Option 2, it is evident that there are four different process deployment alternatives within the opportunity management sub-process shown in Figure 6. These are shown in Figure 7. First, the organization can build in-house capacity to produce the new technology; this option is termed exploiting the technology (Option 2a). Second, the organization may develop sub-contract capacity to produce it; this option is termed trading (Option 2b). Third, the organization may license the new technology and shelve it so no one else can use it; this option is called protecting it (Option 2c). Finally, the fourth option is to ignore this new technology development altogether (Option 2d). The decomposition of these options will result in additional
[Figure 6. Workflow graph of Option 2: development with new technology. In addition to the Marketing lane (Market Analysis), the Technology Company lane (Preliminary Design, Design Performance Prediction, Concept Selection, Technology Assessment, Technology Selection, Configuration Selection, Material Selection), and the Business lane (Concept Development, Launch Detailed Design), a Research & Development lane contains the Technology Development sub-process (New Material Specification, Technology Needs, New Material Development, Material Testing, Material Production Planning) and the Opportunity Management sub-process with four alternatives: In-house Production (Exploit), Sub-contract Production (Trade), Protect & Shelve (Protect), and Do Nothing (Ignore).]

[Figure 7. Four process models from opportunity management in Option 2. In-house Production (Exploit): Develop Capability, Develop Production Capacity, Provide Material. Sub-Contract Production (Trade): the in-house side Identifies a Sub-contractor, Develops a Partnership, and Discloses the Technology, while the sub-contractor Develops Capability, Develops a Customer Base, Develops Production Capacity, and Provides Material. Protect & Shelve (Protect): Apply for Patent, Shelve. Do Nothing (Ignore).]

Analyzing process uncertainty 15

process deployment alternatives. For example, with respect to Option 2b (trading), multiple versions of this sub-process (Option 2b(i), (ii), etc.) are available for different sub-contractors, and the metrics associated with each of these sub-contractor options differ from each other. Thus, it is clear that decomposition of process models reveals additional deployment alternatives. Selection of the best alternative requires creation and analysis of virtual process models for each of these options.

Following the identification of the different process deployment alternatives, virtual process models need to be constructed by associating metrics with each of the tasks and actors at each level of abstraction. This is a bottom-up process and starts at the most detailed level of abstraction. Table V shows a sample set of metrics for a specific deployment option, Option 2b(i), i.e. the trade deployment option with sub-contractor 1 within Option 2. These numbers are representative of the typical metrics that would be gathered from the pre-defined actor library. A similar set of metrics is collected for the other deployment alternatives at this level (i.e. Option 2b(ii), (iii), and so forth). As noted earlier, the choice of these metrics depends on the overall process-level analysis objectives, such as cost optimization, resource utilization, and time to market. The metrics are prioritized and weighted with respect to each other, and an MCDA technique is applied to select the best alternative at a given level of abstraction. In this example, the different sub-contractor options are evaluated by applying MCDA techniques to find the best sub-contractor option with respect to the process objective (say, cost optimization).

Once a specific sub-contractor trade deployment option is chosen, the corresponding metrics are aggregated and summarized for analysis at the next higher level. In this example, for the best sub-contractor option chosen, the overall cost is computed by summation of the constituent costs. Similarly, the overall process time is computed based on the task dependencies in the process, while the overall quality level is computed as a simple average of the quality levels for all relevant tasks. Upon aggregating the metrics, the virtual process models at the next higher level are analyzed. In this example, this primarily means comparing Option 2a (exploit) and Option 2b (trade), assuming that the other options are not preferred. Cascading this analysis, the two higher-level process deployment options, Options 1 and 2, are eventually scored. Given the hierarchical nature of VPS, the inherent risks and opportunities associated with specific process deployment alternatives are accounted for in comparing the higher-level Options 1 and 2. Using this information, a final score in terms of compatibility with overall goals is determined, and a specific deployment chosen.
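As a concrete sketch of the bottom-up aggregation and MCDA scoring described above, the fragment below rolls the Table V task metrics up to the sub-process level and picks the better of two sub-contractor options by a simple weighted sum. The Option 2b(ii) figures, the criterion weights, and the normalization bounds are hypothetical, and a simple additive model stands in for whatever MCDA technique (e.g. AHP) an organization would actually apply.

```python
# Illustrative sketch of the bottom-up VPS scoring described above.
# Task metrics for Option 2b(i) come from Table V; the Option 2b(ii)
# figures, criterion weights, and normalization bounds are hypothetical.

def aggregate(tasks):
    """Roll task metrics up to the sub-process level: cost is summed,
    time is summed here as if tasks were strictly sequential (a real
    model would follow the task dependencies), and quality is a simple
    average, as described in the text."""
    return {
        "cost": sum(t["cost"] for t in tasks),
        "time": sum(t["months"] for t in tasks),
        "quality": sum(t["quality"] for t in tasks) / len(tasks),
    }

option_2b_i = aggregate([
    {"cost": 5_000_000, "months": 8,  "quality": 0.90},  # develop capability
    {"cost": 500_000,   "months": 12, "quality": 0.90},  # develop customer base
    {"cost": 2_000_000, "months": 6,  "quality": 0.90},  # develop production capacity
])
option_2b_ii = aggregate([  # hypothetical second sub-contractor
    {"cost": 4_000_000, "months": 10, "quality": 0.85},
    {"cost": 750_000,   "months": 9,  "quality": 0.85},
    {"cost": 2_500_000, "months": 8,  "quality": 0.85},
])

WEIGHTS = {"cost": 0.5, "time": 0.3, "quality": 0.2}  # hypothetical priorities
WORST = {"cost": 10_000_000, "time": 36}              # assumed normalization bounds

def score(option):
    """Simple additive MCDA: cost and time are normalized so that lower
    is better; quality is already on a 0-1 scale."""
    return (WEIGHTS["cost"] * (1 - option["cost"] / WORST["cost"])
            + WEIGHTS["time"] * (1 - option["time"] / WORST["time"])
            + WEIGHTS["quality"] * option["quality"])

best = max([("2b(i)", option_2b_i), ("2b(ii)", option_2b_ii)],
           key=lambda kv: score(kv[1]))
print(best[0])
```

In a full VPS model the winning option's aggregated metrics would then feed the comparison of Option 2a vs Option 2b at the next level of abstraction.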
Table V. Metrics for scoring tasks within the trade option for sub-contractor 1 (Option 2b(i))

Task                          Cost       Time      Quality (%)
Develop capability            $5M        8 months  90
Develop customer base         $500,000   1 year    90
Develop production capacity   $2M        6 months  90

Managing residual uncertainty

VPS enables managers to analyze and select the best deployment option. However, residual uncertainty still exists with regard to the actual execution of the process. There may still be considerable variability issues (e.g. how consistent will the quality of the output be?) and ambiguity issues (e.g. how well will the new team function together?) that must be managed. Additional uncertainty management strategies can be considered to further mitigate this residual uncertainty. This paper proposes research into strategies such as just-in-time training, organizational changes, and process or support changes to identify effective methods for managing residual uncertainty.

Just-in-time training is the notion of introducing task-, actor-, and process-specific training to reduce uncertainty in a process. The content of the training depends on the context of the task, including who is completing the task, the location where the task is being completed, and the team assembled to complete the process. Research has shown that younger, less mature companies often lack resources to commit to training. This training gap may increase the level of uncertainty by failing to adequately control implemented processes. This lack of training leads to weaknesses in internal controls in certain situations (Doyle et al., 2007). The training may focus on a variety of skills, ranging from technical skills (e.g. how to use a specific software program) to interpersonal, team-related aspects (e.g. clear communication of expectations and results). The training must be customized to address the given actor, completing a given task, in a specific environment. As such, this training will change dramatically depending on the context. However, just-in-time training has its own workflow graph that must be integrated into the deployment option it is servicing.

Another means of reducing uncertainty is through organizational changes. A close relationship exists between the structure of an organization and its ability to control processes (Ouchi, 1977). Organizational changes refer to the organization restructuring either its business relationships or the internal structure of its business units to improve control over a given process. This change may include altering areas of responsibility as well as the chain of command. It may also include altering business relationships and supply chains (Miller, 1992). This restructuring would impact the organization graphical models and the assignment of actors to tasks.

Finally, the organization may choose to alter the process or its supporting infrastructure. Processes with high uncertainty may benefit from additional tasks, such as enabling additional communication, meetings, or interaction among team members and managers. The additional support of meetings may offset the uncertainty by enabling enhanced communication and involvement among the team members. This creates change in the workflow graphs of the VPS models.

Conclusion and future directions

This paper presents VPS as a mechanism for managers to quantify and evaluate competing process alternatives.
VPS enables organizations to more thoroughly examine and analyze the uncertainty that surrounds new and redesigned processes prior to actual launch. VPS enables uncertainty management to occur at different levels of abstraction. At the highest level, organizations can compare competing strategic alternatives. Quantification of the process metrics enables grounded decisions to be made and strategic opportunities to be embraced. At lower levels of abstraction, organizations can select a specific deployment option that minimizes risk exposure or meets other process objectives. Again, the VPS technique enables managers to investigate process alternatives more systematically and to make more educated decisions. The VPS process both requires and produces a more thorough understanding of the process as the models are constructed and analyzed.

One additional benefit of using VPS to address uncertainty is the ability to assess and evaluate team dynamics. VPS enables process designers to analyze differing processes based on the individuals who will execute the tasks. Each deployment option may involve different individuals, each bringing varying backgrounds and experience. VPS enables the enumeration of the individuals and analysis of the process team composition. This analysis supports a more thorough uncertainty management review.

VPS depends on the ability of organizations to assign metrics to actors and tasks. BPM support tools, such as process modelers and designers, can be used to facilitate various aspects of this technique. In particular, BPM tools, such as those from Software AG and Ultimus, can be used to map resources to process models, helping to create process model and resource libraries. Development of the metrics into a standardized repository becomes a critical component of this uncertainty management approach and also represents a potentially significant drawback. More research is needed to identify and compile uncertainty metrics that can be applied to VPS models. The overhead required to construct the models may seem daunting, but the benefits, particularly in a costly program, include providing managers a viable method for quantifying uncertainty before program launch.
Note
1. www.bpmn.org

References
Atkinson, R., Crawford, L. and Ward, S. (2006), "Fundamental uncertainties in projects and the scope of project management", International Journal of Project Management, Vol. 24 No. 8, pp. 687-98.
Doyle, J., Ge, W. and McVay, S. (2007), "Determinants of weaknesses in internal control over financial reporting", Journal of Accounting and Economics, Vol. 44 Nos 1/2, pp. 193-223.
Figueira, J., Greco, S. and Ehrgott, M. (Eds) (2005), Multiple Criteria Decision Analysis: State of the Art Surveys, Springer, Berlin.
Helquist, J.H., Cox, J.J. and Walker, A. (2009), "Exploring diverse process and team alternatives through virtual process simulation", Business Process Management Journal, Vol. 15 No. 5.
Hillson, D. (2002), "Extending the risk process to manage opportunities", International Journal of Project Management, Vol. 20 No. 3, pp. 235-40.
Hillson, D. (2003), Effective Opportunity Management for Projects: Exploiting Positive Risk, CRC Press, Boca Raton, FL.
Jaafari, A. (2001), "Management of risks, uncertainties and opportunities on projects: time for a fundamental shift", International Journal of Project Management, Vol. 19 No. 2, pp. 89-101.
Keizer, J.A. and Halman, J.I. (2007), "Diagnosing risk in radical innovation projects", Research-Technology Management, Vol. 50 No. 5, pp. 30-6.
McDermott, R.E., Mikulak, R.J. and Beauregard, M.R. (2008), The Basics of FMEA, Productivity Press, Cambridge, MA.
Miller, K.D. (1992), "A framework for integrated risk management in international business", Journal of International Business Studies, Vol. 23 No. 2, pp. 311-31.
Moeller, R.R. (2007), COSO Enterprise Risk Management: Understanding the New Integrated ERM Framework, Wiley, New York, NY.
Muehlen, M. and Ho, D. (2006), "Risk management in the BPM lifecycle", Business Process Management Workshops, pp. 454-66.
Ouchi, W.G. (1977), "The relationship between organizational structure and organizational control", Administrative Science Quarterly, Vol. 22 No. 1, pp. 95-113.
Perminova, O., Gustafsson, M. and Wikström, K. (2008), "Defining uncertainty in projects: a new perspective", International Journal of Project Management, Vol. 26 No. 1, pp. 73-9.
Royer, P.S. (2000), "Risk management: the undiscovered dimension of project management", Project Management Journal, Vol. 31 No. 1, pp. 6-13.
Rummler, G.A. and Brache, A.P. (1995), Improving Performance: How to Manage the White Space in the Organization Chart, 2nd ed., Jossey-Bass, San Francisco, CA.
Saaty, T.L. (2005), "The analytic hierarchy and analytic network processes for the measurement of intangible criteria and for decision-making", in Figueira, J., Greco, S. and Ehrgott, M. (Eds), Multiple Criteria Decision Analysis: State of the Art Surveys, Springer, Berlin, pp. 345-407.
Stamatis, D.H. (2003), Failure Mode and Effect Analysis: FMEA from Theory to Execution, ASQ Press, Milwaukee, WI.
Walker, A.J. and Cox, J.J. (2008a), "Incorporating global characteristic data into virtual development models", Journal of Computer-Aided Design and Application, Vol. 5 No. 6, pp. 900-20.
Walker, A.J. and Cox, J.J. (2008b), "Virtual product development models: characterization of global geographic issues", IFIP World Computer Congress, Milan, Italy.
Ward, S. and Chapman, C. (2003), "Transforming project risk management into project uncertainty management", International Journal of Project Management, Vol. 21 No. 2, pp. 97-105.

About the authors
Dr Joel H. Helquist graduated with a PhD in Business Management with an emphasis in Management Information Systems from the University of Arizona. Prior to completing his PhD, Dr Helquist worked as a Risk Management Consultant for Arthur Andersen, LLP and KPMG, LLP in Seattle, Washington. His research interests are primarily focused on risk management and on processes and technologies to support collaboration. Dr Joel H. Helquist is the corresponding author and can be contacted at: joel.helquist@uvu.edu
Amit Deokar is an Assistant Professor of Information Systems in the College of Business and Information Systems at Dakota State University. His recent research interests are in business process management, collaboration processes and technologies, decision support systems, knowledge management, and healthcare informatics. His work has appeared in journals including Journal of Management Information Systems, Communications of the AIS, Information Systems Frontiers, and IEEE Transactions. He has also presented at national and international conferences and authored book chapters in the field of Information Systems. He holds a BE in Mechanical Engineering from V.J. Technological Institute, Mumbai, an MS in Industrial Engineering from the University of Arizona, and a PhD in Management Information Systems from the University of Arizona. He is a member of AIS, ACM, and AAAI.
Dr Jordan J. Cox graduated with BS and MS degrees in Mechanical Engineering from BYU in 1983 and 1984, respectively, and with a PhD in Mechanical Engineering from Purdue University in 1991. He has worked as an Engineer for the Naval Weapons Center in China Lake, CA and Garrett Turbine Engine Company in Phoenix, AZ. He has consulted for Honeywell Corporation, Johnson & Johnson, CCI, and United Technologies Inc. Currently he is serving on the Technical Advisory Board for Pratt & Whitney.
Alyssa Walker graduated with her Bachelor's degree in Geography from Brigham Young University in April 2007. Currently she is pursuing a PhD in Instructional Psychology and Technology at Brigham Young University and working as a Research Assistant in the Advanced Product Development Laboratory.



Construction Innovation: Information, Process, Management


Emerald Article: Benchmarking a new design management system using process simulation approach Hemanta Doloi

Article information:
To cite this document: Hemanta Doloi, (2010), "Benchmarking a new design management system using process simulation approach", Construction Innovation: Information, Process, Management, Vol. 10 Iss: 1, pp. 42-59. Permanent link to this document: http://dx.doi.org/10.1108/14714171011017563




CI 10,1

Benchmarking a new design management system using process simulation approach


Hemanta Doloi
Faculty of Architecture, Building and Planning, The University of Melbourne, Melbourne, Australia
Abstract
Purpose – The purpose of the paper is to put forward a research-based argument on the benefits of a simulation approach in managing design at an early stage of a project. Having selected an optimal design configuration, the operational uncertainties can be removed and investment decisions fully justified over the lifecycle of projects.
Design/methodology/approach – A simulation-based methodology embodying the balanced scorecard (BSC) for measuring operational and business performance has been synthesised in the research. Multi-criteria decision analysis (MCDA) has been employed to evaluate the trade-offs between feasible design alternatives and to select the optimum design configuration.
Findings – The findings show that the integrated framework developed in this research by integrating simulation technology, BSC and MCDA adds significant contributions to the current body of knowledge in design management practices.
Research limitations/implications – The framework should be further tested by applying it to large engineering projects in order to realise the benefits in the decision-appraisal process. Access to data in large projects before implementation would be the greatest challenge from a commercial-in-confidence perspective.
Practical implications – The framework will help practitioners' understanding and management of the design configuration in highly complex modern projects. It will allow decision makers to manage the interdependency of complex processes and select optimal designs upfront. The resulting framework will significantly contribute to reducing scope creep and cost variations, and thereby to reducing contractual disputes in projects.
Originality/value – The original design of an integrated framework of this kind for managing design complexity in projects adds significant value to design management practices. The use of simulation embodying BSC and MCDA adds significant novelty in the theoretical advancement of contemporary knowledge in the design management profession.

Keywords: Benchmarking, Design and development, Project management, Simulation, Balanced scorecard, Construction industry
Paper type: Research paper

42
Received 1 December 2007 Accepted 24 November 2009

Construction Innovation, Vol. 10 No. 1, 2010, pp. 42-59. © Emerald Group Publishing Limited, 1471-4175. DOI 10.1108/14714171011017563

Introduction

The need for better design management in the architectural, engineering and construction industry has never been higher. This is due to emerging factors that reflect both changing market conditions and the advent of new procurement processes. To maintain competitiveness, the industry needs to focus on improving the design process, especially to cope with tougher accountability and tighter fee scales (Williams, 1999). Capital projects have necessitated design input from an increasing range of specialists. The increased emphasis on keeping construction projects on time

and within budget has required effective management of the project scope associated with the multifaceted stakeholder groups in the project (Goldschmidt, 1992). Thus, the definition of project scope in the concept phase vastly influences the project development and its overall business outcomes. Understanding the complexity of design in both functional and operational contexts at an early stage is important in defining the appropriate end facility of the project (Kohler, 2008).

Increasing complexity and sophistication in construction create new challenges in design management practices. Clients are interested not only in value for money in relation to the investment in project development but also in the costs associated with operation and maintenance over the project lifecycle. While the clients' interests may be profit driven in competitive markets, the architects or design professionals are responsible for balancing design innovation, sophistication and cost-effectiveness in the project. To cope with these challenges, design professionals require a full understanding of the wide variety of design parameters and the technical expertise of each party in order to deliver the project as per the expected project objectives (Nicholson and Naamani, 1992).

In order to address the aforementioned challenges in managing projects' business intents, this research aimed to develop a framework for holistic evaluation and analysis of projects upfront. The primary objective is to determine how to enhance a project's operational performance and increase strategic and business outcomes from an effective design management perspective. Following from this main aim, the underlying research questions are:

RQ1. How does design management impact setting a benchmark for appropriate project management practices?

RQ2. How can the operational performance of a project be linked in evaluating and managing design complexity upfront?

RQ3.
How does design management impact decision making and overall business outcomes in projects?

The following sections focus on the methodological approach used to devise an integrated framework addressing these research questions. The use of process simulation for assessing the soundness of the design and the overall project configuration from the operational perspective is discussed. Finally, a decision framework is demonstrated using a case study.

Research methodology

In order to address the above research questions, this research established a model incorporating the strategic objectives of projects in a proactive and explicit manner (Kohler, 2008; Montana et al., 2007). Process simulation is employed for evaluating the operational performance of complex processes at an early phase of the project. Process simulation allows for the evaluation of alternative operational processes and the selection of sub-optimal process configurations in the design management context. The process simulation provides output information on resource utilisation, cycle time, throughput and the overall efficiency of the operational processes for alternative design configurations, which then forms the input for the project-level decision analysis. A decision analysis framework based on the balanced scorecard (BSC) methodology

New design management system 43


(Kaplan and Norton, 1996) is established for comparing alternative project options in order to optimise the operational performance of the project facility over the project lifecycle (Artto et al., 2001). The BSC is based on four strategic business objectives, namely financial performance, environmental performance, project operating and business performance, and sustainability and risk. Amongst the four business objectives, financial performance is measured (in percentage form) based on typical cash flow analysis (Kirkham, 2005). Environmental performance is measured against predefined performance indicators, such as waste reduction and shorter cycles. Project operating and business performance is measured based on customer satisfaction surveys and is usually expressed in percentage points. An index scale (0-6) is used to measure the sustainability and risk indicators across the available design alternatives. The measurement details of these objectives are discussed in Doloi (2007). The multi-criteria decision analysis (MCDA) methodology is employed for assessing the trade-offs, and a priority ranking is then established between the feasible alternatives. MCDA allows decision makers to evaluate problems involving conflicting objectives by simultaneously taking into account several sources of judgement, even if measured in different scales or units (Xiang et al., 1992). The overall integrated approach provides a platform for real-time project definition vis-à-vis design optimisation at an early stage of project development by incorporating the technical, functional and operational performance from downstream phases of projects.

Proactive design management and project performance

Many organisations have found design to be one of the key performance indicators of project success. Growing pressure on design innovation and timely delivery is a fact of life for project managers, architects and design professionals (Heath et al., 1994).
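Returning to the methodology above, the four BSC objectives are measured on different scales (percentages and a 0-6 index), so the MCDA step must bring them onto a common footing before trade-offs can be ranked. The sketch below normalises each criterion to [0, 1] and takes a weighted sum; the alternative values, scale bounds and weights are all hypothetical, and the simple additive model is a stand-in for whatever MCDA technique is actually applied.

```python
# Sketch of the MCDA step over the four BSC objectives, which are
# measured on different scales. All values below are hypothetical.

ALTERNATIVES = {
    "Design A": {"financial": 12.0, "environmental": 65.0,
                 "operating": 78.0, "sustainability": 4.0},
    "Design B": {"financial": 15.0, "environmental": 55.0,
                 "operating": 82.0, "sustainability": 3.0},
}
SCALES = {  # (min, max) used to normalise each criterion to [0, 1]
    "financial": (0.0, 20.0),       # % return from cash flow analysis
    "environmental": (0.0, 100.0),  # % against predefined indicators
    "operating": (0.0, 100.0),      # % customer satisfaction
    "sustainability": (0.0, 6.0),   # the 0-6 index scale
}
WEIGHTS = {"financial": 0.35, "environmental": 0.2,
           "operating": 0.3, "sustainability": 0.15}

def mcda_score(values):
    """Normalise each criterion, then take the weighted sum (a simple
    additive model; all criteria here are treated as benefit-type)."""
    total = 0.0
    for name, v in values.items():
        lo, hi = SCALES[name]
        total += WEIGHTS[name] * (v - lo) / (hi - lo)
    return total

ranked = sorted(ALTERNATIVES, key=lambda a: mcda_score(ALTERNATIVES[a]),
                reverse=True)
print(ranked)
```

Because the weights encode the organisation's priorities, shifting weight between, say, financial performance and sustainability can change which design alternative ranks first.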
The design phase of a project alone offers the greatest scope for reducing overall project costs and adds maximum value to the project. The size and complexity of modern design, with increased uncertainty, require front-end planning throughout the life of projects. Design management is a continuous, iterative process; as the project moves on, it provides feedback points for new information and the flexibility to assimilate actions. Thus, initial design and planning must concentrate on creating viable project solutions for each principal design alternative in the context of the life-cycle planning of projects. In the case of strategic planning, such analysis is significantly hindered by constraints imposed by the project development and operational environments (Doloi and Jaafari, 2002). Modelling of the technical and operational functionalities of the project facility and analysis of end-users' acceptance must justify the strategic decisions in projects. Appropriate design and optimal scope definition over the entire lifecycle are the keys to achieving value for money in projects.

Management of design complexity and simulation

In recent years, the concept of modelling has become increasingly important in engineering design management practices. It is no longer sufficient to pay detailed attention to the design of the various elements of a project individually; rather, all elements must be considered in relation to the others in order to make the overall system effective. However, good project design is not restricted to detailed design coupled with attention to the interrelationships between physical parts and elements. Design must be analysed and evaluated at a deeper level and in relation to the project's operational environments

(Doloi, 2007; Dikmen et al., 2005). The design configuration and scope of projects must be re-assessed and readjusted to ensure that the objectives are met at the end. As a result, the overall process involving the design of parts and products to reach these goals becomes iterative, and this iteration constitutes the overall project. The simulation approach allows for building a model of the proposed system by capturing the salient features of the overall system. Digital computer models facilitate the analysis of the complex processes associated with projects. A simulation model is a means for collecting information about the likely performance of a system based upon user-defined conditions (Marmon, 1991). Simulation models can improve the planner's understanding of the real-life situation during conceptualisation and final design or actual construction (Luk, 1990). By using a simulation model, the effect of changes in process design can be justified and fine-tuned at an operational level of projects. Once the operational processes are analysed and the sub-optimal configurations of processes are established, an appropriate trade-off analysis is performed to select the optimal project solution at the project level. The decision framework for analysing project-level decisions is discussed briefly in the following sections.

Analysis of design and the decision framework

Information and decisions generated over the feasibility (or concept design) and planning phases of projects have a great impact on the downstream activities and consequently on the overall cost of projects (Artto et al., 2001). Understanding the project's end facility and its underlying operational processes, supported by relevant information and tools, will lead to better decisions in project development. Thus, the attempt to reduce design complexity and increase functionality, clarity and constructability at an early stage has now become the focus among researchers in the field (Montana et al., 2007; Bruce and Daly, 2007).
Selection of an appropriate design and configuration of the operational processes of the project facility is an important consideration in successful project outcomes. Life cycle project management (LCPM) is an approach for integrating the business and strategic objectives of projects throughout the project lifecycle (Doloi, 2008). The LCPM approach employs an integrated and concurrent project management principle to substitute the process-based and activity-driven project management approach (illustrated in the current practice) with an innovative strategy-based and outcome-driven project management paradigm. Much work has already been published on the LCPM methodology in project evaluation and management contexts (Doloi, 2008; Goldschmidt, 1992). Figure 1 shows the decision analysis framework in project design over the life of the project. The selection of design alternatives and the investment decision have a direct influence on the strategic project objectives and the overall performance of projects (Irani et al., 2000). The project design, and its underlying capability, needs to be defined by integrating the optimum project configuration and the inherent business intents. As shown in Figure 1, once the initial decision on a feasible design is made, the project and project processes are thoroughly analysed towards identifying realistic alternatives, selecting and allocating appropriate resources, and establishing the best project option for development. The process of selecting the best project option is facilitated by simulation technology (El Maraghy, 1982). Projects are broken down into smaller products, and process models are constructed incorporating operational


[Figure 1. Framework for analysing design management decisions. A process simulation framework links the project definition to the project and process models; design management evaluates the design functions against decision variables (e.g. design efficiency, design variation) and uncontrollable variables (e.g. design for OH&S, design flexibility, design to embrace sensitive environmental issues); feasible and suboptimal process configurations are then assessed against the measures of effectiveness (LCOFs) and accepted or rejected.]
scenarios for simulation analysis. The outcome of the simulation forms the basis for evaluating a suboptimal configuration against the target performance criteria (termed life-cycle objective functions, LCOFs) of the project. After the project is developed and commissioned, operation is monitored based on the performance of the LCOFs, the organisational strategy and the overall competition in the project. As mentioned earlier, the LCOFs are drawn by incorporating the BSC methodology based on the technical, functional and operational performance of the project (Kaplan and Norton, 1996). The BSC-based methodology has wide application in analysing trade-off decisions; it is capable of measuring project-level performance, enabling a response to global challenges and achievement of true value on investment in integrated project development (Kaplan and Norton, 1996; Kirkham, 2005).

Process simulation as a tool for design management
Simulation is a numerical technique for conducting experiments on digital computers, involving certain types of mathematical and logical models that describe the behaviour of a system over extended periods of real time (Pidd, 1992).
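This definition can be made concrete with a minimal event-list sketch. The single tram stop, exponential headways and dwell times below are illustrative assumptions, not data from the case project; the sketch only shows the mechanism of advancing simulated time event by event.

```python
import heapq
import random

def simulate_tram_stop(n_trams=50, headway_mean=300.0, dwell_mean=25.0, seed=1):
    """Minimal discrete-event sketch: trams arrive at one stop, wait if it
    is occupied, dwell while passengers board, then depart.  Returns the
    mean delay (seconds) caused by queuing at the stop."""
    rng = random.Random(seed)
    events = []                       # the event list: (time, kind, tram_id)
    t = 0.0
    for i in range(n_trams):          # schedule Poisson arrivals up front
        t += rng.expovariate(1.0 / headway_mean)
        heapq.heappush(events, (t, "arrive", i))
    free_at, delays = 0.0, []
    while events:                     # advance simulated time event by event
        now, kind, i = heapq.heappop(events)
        if kind == "arrive":
            start = max(now, free_at)         # queue behind an occupying tram
            delays.append(start - now)
            free_at = start + rng.expovariate(1.0 / dwell_mean)
            heapq.heappush(events, (free_at, "depart", i))
    return sum(delays) / len(delays)
```

Replacing the exponential dwell time with passenger boarding counts, or adding further stops and signals, extends the same event-list mechanism.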

During the last decade, discrete event simulation has gained a significant role in engineering planning and design (Doloi and Jaafari, 2002). Numerous examples reported in the literature provide evidence of how organisations can save millions of dollars and avoid major risks using process simulation (El Maraghy, 1982; Bell and O'Keefe, 1987; Williams and Orlando, 1998; Irani et al., 2000). Many researchers have reported investigations on the merits of using simulation to assist plant layout and factory prototyping (O'Kane, 2003). Eloranta and Raisanen (1987) discuss these issues by proposing a simulation-based planning tool to help benchmark plant capacity and buffer size requirements and to view the effects on throughput time as the plant design changes. Nymon (1987) suggested simulation as an enabler for driving the design process in projects. Arduino and Bollino (1987) used simulation to assess the cost implications of alternative solutions to project development. In early 1993, the IBM PC Company in Europe used process simulation to evaluate different manufacturing execution strategies and to identify lower-cost distribution policies, which reportedly resulted in an estimated $40 million per year savings in the company's distribution costs (Artto et al., 2001; Kirkham, 2005; Yeo and Tiong, 2000). Research on how discrete event simulation works is no longer embryonic, as the development of computer-aided process simulation techniques has accelerated in recent years. Whilst many discrete-event simulation studies have been conducted, these have tended to investigate one typical application in construction processes or manufacturing layout in isolation (O'Kane, 2003; Williams and Orlando, 1998). Its use for project definition, design management practices and life-cycle investment decisions is not widespread (Artto et al., 2001; Doloi, 2008).
The application and influence of the simulation approach in setting benchmarks for design management practices within a complex project management framework is a significant contribution to the field. Table I shows how simulation applies as a tool for appropriate front-end management of the respective objectives over the project lifecycle. The second and third columns demonstrate the influence of simulation outputs on decision-making across most of the project objectives listed in the first column.

Integrated framework
The framework focuses on the integration of the project's functionality and the business objectives at an early phase of the project. Design selection, the project investment decision and organisational business intents have a direct influence on the strategic planning and development of the project (Satter et al., 1998; Yeo, 1995). In order to analyse the design at an early stage of the project, hierarchical process models are built and simulated by linking the processes and allocating available resources across all disciplines over the lifecycle of the project. Alternative processes are identified and tested for optimal design and overall project configuration. Project-level decisions on operability, functionality, quality or performance issues are then analysed using the LCOFs, as mentioned in the previous section (Jaafari et al., 2004). Figure 2 shows an integrated framework for evaluating alternatives and the decision analysis process over the life of the project. As seen, the project concept, design alternatives and operational scenarios are identified, and resources and product specifications are defined for a number of feasible project solutions. Simulation is then applied to the respective process models, considering the operational environments upfront.


Table I. Project objectives and simulation input for design management

Project objectives | Needs for decision making | Usability of simulation tools in front-end management
Project concept development | Market need analysis; project option analysis | Supply-demand planning; optimum utilisation of resources
Project facility planning | Decision process for project development; product design | Capacity planning and scope definition
Project management functions:
Project implementation | Scope control and management; time management; cost management | Constructability analysis, change control and alternative planning
Project operation and maintenance | Market economics and changes; facility operation, user flexibility and meeting the demand | Project functionality and operability of the end project product
Sales and marketing | Market consumption; customer satisfaction and acceptance | Supply-demand analysis; evaluation of logistics
Research and development | Product design and redesign; product innovation and process reengineering | Simulation model for what-if analysis and process reengineering
IT/IS support | Process automation and optimum facility utilisation; waste reduction and cost minimisation | Simulation model for evaluating facility utilisation and activity-based costing
Project organisation | Resources and skills requirements and utilisation; self-managing teams and cross-cultural integration; key performance measures and controlling; risk-resilient and uncertainty management; change management | Simulation model for resource planning, resource levelling and optimisation

Figure 2. Framework for alternative evaluation and life-cycle decision analysis (flowchart: create a model of the base case and models of the proposed design alternatives 1 to n; develop, run and analyse the simulation model; if the infrastructure does not provide the required service, make adjustments to the infrastructure; if the LCOFs are satisfied, recommend the project for go-ahead, otherwise do not go ahead with the project)

The outcomes of simulation modelling of the project configuration, operational requirements and resource utilisation are used as input for analysing the required management capabilities and transformation for the project-specific environment. Continuous assessment of the functionality and operability of the project product at an early stage allows an appropriate feedback mechanism for dynamic interaction among the design professionals and evaluation of the optimal design configuration over the lifecycle (O'Kane, 2003; Ranasinghe, 1996).

Design management using simulation: a case example
In order to demonstrate the use and benefits of process simulation in design management practices, a case study on a proposed tram route design project is presented. In this project, the simulation model provided a key decision-making platform that quantified the effectiveness of varying levels of design and planning to support an optimum operational scenario. A significant project implementation challenge during the design and planning phases can be understood by following the processes shown in Figure 3. In road network design, as the analysis unfolds, there are often congested or problem areas within the network that require

Figure 3. Kerb access stop option (schematic showing Stops A and B on either side of the intersection; legend: lane mark, tram track)

some engineering changes and adjustments. The ability to quantify these impacts is a significant benefit of using a simulation model upfront. Once the design is altered to suit the required service requirements, the project's life-cycle objectives can be assessed and validated. The framework provides the functionality to make such changes and adjust related variables impacted by the alternative design at the project level.

Project background: client brief
During recent years, the tram network in the Melbourne (Victoria, Australia) metropolitan area has been expanding, with new and redesigned routes to cater for increasing demand and services. The tram network runs on electrified tracks and, most of the time, shares lanes with other mainstream road traffic. As vehicular traffic in the city is growing at a faster rate over time, the authority is under increasing pressure to improve infrastructure services and the performance of public transport. Amongst many ongoing initiatives for improving public transport performance, a project to improve an existing tram route was selected to demonstrate the framework in this research. While the total length of the proposed route is about 10 kilometres, only a critical intersection was selected as the study area for demonstration. The project aimed to deliver an appropriate solution by addressing three major benefits: reducing tram travel times, improving the reliability of services, and improving safety and access onto trams. In the proposed design, two options were considered for this project: the kerb access tram stop and the central platform tram stop. The kerb access tram stop option was considered the base case situation in the project.
Project target, budgets and LCOFs
In addition to the two design options considered in the initial design (the kerb access stop option and the central platform stop option), a few other variable design options were also considered at other locations along the project route. However, in this study, a comparative analysis of operational performance was drawn considering the first two options at the selected intersection. In the kerb access stop option, passengers are able to get on and off low-floor trams directly from the raised platform without having to cross a lane of traffic or negotiate steps. Traffic merges into one lane when passing the tram stops. As with existing kerbside stops, motorists are required to stop behind a stationary tram while passengers get on and off. Figure 3 shows the schematic design of the kerb access stop option. In the central platform stop option, the 2.8 metre wide central median is utilised for multiple purposes, including tram stop platforms, space for right-turning vehicles, relocation of tram poles and possible planning and streetscape improvements. This option is combined with a pedestrian crossing to provide a safe route for passengers between the tram stop platform and the footpath. Passengers get on and off from the right side of the tram. Two lanes of traffic in each direction remain open with this option, with the possibility of traffic in the kerbside lane overtaking a stationary tram. Figure 4 shows the schematic design of the central platform stop option. As seen, Route A has a new centre platform stop and a queue jump lane on either road. Route A is the design priority for performance enhancement over Route B. In this design, a queue jump lane has been added for southbound trams

(shaded lane), and the existing kerbside stop has been moved to the other side of the intersection for both trams. In addition, two dedicated lanes for vehicular traffic are part of the design near the queue jump lane, which was achieved by widening the road. Table II depicts the target LCOFs derived from the available financial data used for decision making at the project level. The target equity internal rate of return (EIRR) of 24 per cent is the focus of all decision making on this project.

Simulation model

Figure 4. Central platform stop option (schematic showing Stops A and B, the queue jump lane, lane marks and tram track)

Table II. Targeted LCOFs
Life-cycle objective function (financial) | Target
TLCC, A$ (million) in present value | 120
Equity internal rate of return (%) | 24
Net present value to capital investment ratio | 1.60
TLCC/Po | Confidential
Unit cost per service output | Confidential
Cost to worth ratio | 3.4
Environmental emission standard | N/A
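The EIRR target in Table II can be checked against any projected equity cash-flow series with a simple bisection solver. The cash flows below are purely hypothetical, since the project's financial data are confidential; only the solver mechanics are illustrated.

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows; element 0 is year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.9, hi=10.0, tol=1e-7):
    """Bisection on npv(rate) = 0; assumes one sign change over [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0.0:
            lo = mid                  # rate is still below the IRR
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical equity cash flows (A$ million): initial outlay, then inflows.
flows = [-40.0, 12.0, 14.0, 16.0, 18.0, 20.0]
eirr = irr(flows)                     # between 0.25 and 0.26 for these flows
```

A design alternative whose projected equity cash flows solve to a rate below the 24 per cent target would be rejected under the LCOF screening described above.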


A simulation model was constructed for both design options, and responses to variable demands were evaluated with a target of improving the performance of the facilities and resources for optimal operations. In both models, the time delay experienced by the trip volume, in both trams and vehicular traffic, was studied to assess performance in each situation. A total of three alternatives was studied with varied design parameters within the proposed designs, and an optimised operational scenario was devised using the optimiser in the model.

Results and analysis
Figure 5 shows the model outputs depicting the number of vehicles over time for the proposed design. It was found that the traffic flow and travel duration are entirely dependent on the movement of trams and the interval of traffic signals at the intersection. In order to optimise the proposed designs, an evolutionary optimisation approach was employed on three scenarios, and the impacts on the performance of the traffic flow were analysed (Khral, 2002; Pongcharoen et al., 2002). Figure 6 shows an output of the optimiser with approximately 99 per cent convergence for maximum traffic flow in the model. The genetic algorithm (GA) based optimiser produces significantly better operational performance and utilisation of infrastructure than the existing situation. The optimiser includes a number of parameters such as the probabilities of crossover and mutation, the population size and the number of generations (Khral, 2002). The details of GA and convergence are out of the scope of this manuscript; they are widely covered in Gen and Cheng (1997) and Pongcharoen et al. (2002).
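Since the study's simulation tool and its optimiser are proprietary, the GA mechanics named in the paragraph (crossover and mutation probabilities, population size, number of generations) can be illustrated with a small self-contained sketch. The `flow` function standing in for the simulation output, and the decision variables (signal cycle time and tram schedule offset), are invented for illustration only.

```python
import random

def flow(cycle, offset):
    """Hypothetical stand-in for the simulation output: traffic flow
    (vehicles/h) as a function of signal cycle time and tram offset,
    peaking at cycle = 90 s and offset = 30 s."""
    return -((cycle - 90.0) ** 2) / 10.0 - ((offset - 30.0) ** 2) / 5.0 + 1800.0

def ga(pop_size=30, generations=60, p_cross=0.8, p_mut=0.1, seed=7):
    """Elitist GA over (cycle, offset) pairs, maximising flow()."""
    rng = random.Random(seed)
    pop = [(rng.uniform(30, 150), rng.uniform(0, 60)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: flow(*g), reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1]) if rng.random() < p_cross else a  # crossover
            if rng.random() < p_mut:                               # mutation
                child = (child[0] + rng.gauss(0, 5.0),
                         child[1] + rng.gauss(0, 2.0))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda g: flow(*g))
```

In the study itself, evaluating a candidate would mean running the discrete-event traffic model rather than a closed-form function; the selection, crossover and mutation loop is unchanged.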

Figure 5. Simulation outputs

Figure 6. Optimisation outputs

A factorial experiment was performed to identify appropriate values for these factors that produce the best results within a given simulation time (Pidd, 1992). The overall objective was to achieve maximum total traffic flow with minimum total travel time. An optimised schedule for the tram service and cycle time for the signalised intersection was achieved, with about 80 per cent efficiency of the resources and overall infrastructure targeted in the model. The model outputs show the reduction of travel time in the proposed designs over the base case model. As seen, there is about a 25 per cent improvement in travel time in the new optimised design.

Impact of new process configurations
Figure 7 shows a comparative analysis of process utilisation between the kerb access, central platform and optimised central platform scenarios. The optimised process configuration for maximum output values in the proposed design over the base case scenario was achieved by increasing the capacity of four processes over the proposed scenario. As seen, there is a good balance, with about 95 per cent average utilisation of processes in the new optimised design. The introduction of additional processes, along with the alteration of flow sequences between processes, has a significant impact on the overall process performance of the project. It is evident that the capability of the facility could be enhanced by altering the current baseline operation; obviously, there is a limit to what can be achieved without significant investment in new project design and the resulting facilities after implementation. These decisions then need to be investigated in terms of the target LCOFs in the integrated framework, using the existing operations as the starting point (Table II). Management strategies and the required capability are then built to support the reengineered processes and project operation. As demonstrated in this example, process simulation is a powerful tool for achieving this objective.
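A full-factorial sweep of the optimiser's parameters can be sketched as below. The factor levels and the `run_trial` stand-in (which mimics an optimiser run returning a best traffic flow) are illustrative assumptions, not the study's experimental design.

```python
import itertools
import random

def run_trial(p_cross, p_mut, pop_size, seed):
    """Hypothetical stand-in for one optimiser run: returns the best
    traffic flow found, as a noisy score favouring p_cross = 0.8,
    p_mut = 0.1 and larger populations."""
    rng = random.Random(seed)
    base = 1800.0 - 200.0 * abs(p_cross - 0.8) - 300.0 * abs(p_mut - 0.1)
    return base + 50.0 * (pop_size / 100.0) + rng.gauss(0.0, 5.0)

# Illustrative factor levels for the 3 x 3 x 3 factorial design.
levels = {
    "p_cross": [0.6, 0.8, 1.0],
    "p_mut": [0.05, 0.1, 0.2],
    "pop_size": [20, 50, 100],
}

def factorial_sweep(replicates=3):
    """Run every factor combination with replicated seeds and return the
    combination with the best mean score."""
    results = []
    for combo in itertools.product(*levels.values()):
        scores = [run_trial(*combo, seed=r) for r in range(replicates)]
        results.append((dict(zip(levels, combo)), sum(scores) / replicates))
    return max(results, key=lambda pair: pair[1])
```

Replicated seeds average out run-to-run noise, which is what lets a factorial design separate genuine factor effects from stochastic variation in the optimiser.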
Life-cycle decision analysis
Life-cycle decision analysis is performed to evaluate the trade-offs between all conflicting objectives at the project level. Life-cycle decision making requires an assessment of the impacts and consequences of the underlying factors affecting the business and operational performance of projects. The BSC technique allows integration of the performance metrics and aligns business operations with the overall organisational strategy for project development (Kaplan and Norton, 1996; Satter et al., 1998).
Figure 7. Travel time comparison (maximum and mean travel times, in seconds, for the kerb access, central platform, central platform with tram optimiser and central platform with traffic signal optimiser scenarios)


However, it neither weights the relative importance nor acknowledges the issues of interaction and trade-offs between metrics. Thus, the BSC incorporating the MCDA hierarchy serves as a meaningful evaluation vehicle in developing optimal design management solutions. By using the analytical hierarchy process (AHP), prioritised numerical scales are generated representing the relative performance of the perceived solutions for improving overall design management (Saaty, 1980). Discussion of MCDA is not the main focus of this manuscript; details can be found in many sources, including Brans and Vincke (1985), Buchanan (1994) and Xiang et al. (1992). These trade-offs are then used to locate the optimal solution against the targets set for each criterion (Ranasinghe, 1996). Figure 8 shows an integrated hierarchy combining operational and business performance employed for the trade-off analysis of the case project. The feasibility of the operational scenarios for all four alternatives (Figure 7), in terms of efficiency, functionality, resource utilisation and operational complexity, is explored using process simulation. The feasible operational scenarios (those that can satisfy the respective operational constraints) are identified by applying a set of criteria derived from the organisation's strategic objectives. Table III shows the overall evaluation matrix for the optimal project-level decision in the given project. The numerical figures shown in the columns are incremental values across all four feasible solutions (i.e. alternatives) under consideration. While all four solutions in Column 1 are assumed feasible, each has a differing total life-cycle cost (TLCC) (Column 4) and a corresponding level of the various high-level criteria influencing strategic business objectives (Columns 5-8). The information on the utilisation of processes and underlying resources in Columns 2 and 3 is fed by the simulation outputs (Figure 7).
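The AHP step of turning pairwise judgements into criterion weights can be sketched as follows. The pairwise values below (Saaty's 1-9 scale, comparing three operational criteria) are invented for illustration, and the row geometric mean is used as a common approximation of the principal-eigenvector priorities rather than the full eigenvector computation.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority vector via the row geometric mean of a
    reciprocal pairwise-comparison matrix, normalised to sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative pairwise judgements: travel time efficiency vs resource
# utilisation vs TLCC (entry [i][j] is how much criterion i dominates j).
pairwise = [
    [1.0,       3.0,       5.0],
    [1.0 / 3.0, 1.0,       2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
]
weights = ahp_weights(pairwise)
```

With judgements of this shape, travel time efficiency receives the largest weight and TLCC the smallest; in the study, such weights are what combine the column scores of Table III into a single ranking of the alternatives.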
The TLCC in Column 4 is estimated for each feasible scenario over the project lifecycle (Doloi and Jaafari, 2002). In order to determine the optimal solution, the values in Columns (5)-(8) are used to see what trade-offs are available against the values in Columns (2)-(4). As mentioned above, such trade-off analysis is performed in a hierarchical
Figure 8. Hierarchy for MCDM analysis (goal: optimum design under the overall strategy; criteria level: operational performance (travel time efficiency, resource utilisation, TLCC) and strategic business objectives based on the BSC (financial, environmental, operational, sustainability and risks); sub-criteria level: ROI, unit cost, waste reduction, shorter cycle, service improvement, customer satisfaction, increased sustainability, reduced risks; leaf level: feasible alternative scenarios, Alternatives 1 to 4)

Table III. Life-cycle decision analysis

Columns: (1) feasible alternative scenarios (Alt. 1 to Alt. 4); (2) average % travel time efficiency; (3) average % resource utilisation; (4) TLCC (%); (5) financial performance: unit cost (%) and ROI (%); (6) environmental performance: waste reduction (%); (7) project operating and business performance: shorter cycle (%), service improvement (%) and customer satisfaction (%); (8) sustainability and risk: sustainability (0-6) and reduced risks (0-6); and the overall operational capability. The incremental values across Alternatives 1-4 yield overall operational capabilities of 0.69, 0.75, 0.81 and 0.85, respectively.

Notes: TLCC is the total life-cycle cost for the scenario under consideration; sustainability and reduced risks are measured on an index scale from 0 (no effect) to 6 (highly effective); overall operational capability is measured as the weighted average of all performance indicators
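The weighted-average scoring described in the notes to Table III can be sketched directly. The indicator scores and weights below are hypothetical stand-ins (the study's weights are not published in the table), chosen only to show the aggregation.

```python
def overall_capability(indicators, weights):
    """Weighted average of normalised performance indicators, as in the
    'overall operational capability' note to Table III."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w, )if False else sum(w * x for w, x in zip(weights, indicators))

# Hypothetical normalised scores (0-1) for one alternative across travel
# time, resource utilisation, financial, environmental, operational, and
# sustainability/risk criteria, with assumed criterion weights:
scores = [0.87, 0.91, 0.70, 0.80, 0.85, 0.90]
weights = [0.25, 0.20, 0.20, 0.10, 0.15, 0.10]
capability = overall_capability(scores, weights)
```

Ranking each alternative by this single scalar is what allows the four rows of Table III to be compared despite their mixed units (percentages, index scales and costs).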


structure using the AHP technique. One of the major advantages of the AHP is that the analysis does not always require a statistically significant sample size (Dias and Ioannou, 1996). AHP uses a number of pairwise comparisons between quantitative or qualitative criteria in order to assess the relative importance of each criterion (Dey, 2003). The input data in an AHP analysis are based on experts' perceived judgement, and a single input usually represents a group of representatives in the sample data (Golden et al., 1989; Schot and Fischer, 1993). Details of the AHP analysis have not been included in this manuscript and can be found elsewhere, including in Doloi (2008). Based on the trade-off analysis of all four alternatives, Alternative 4 (central platform with optimised traffic signals) was found to be the optimal design for the project in question. This research develops a new approach by integrating project development and operational performance for the holistic evaluation and management of complex projects. The simulation-based approach allows design professionals to analyse what-if scenarios of the operational challenges and to fine-tune the design over the project lifecycle. Much work needs to be done to better understand and apply a project-based approach by integrating processes and operations in front-end management practices. The framework assists in evaluating how the current design configuration balances the expected present or future demand patterns while still maintaining its business and environmental performance (Doloi, 2008). The resulting framework sets a benchmark for a new design management practice for managing complex projects. It has shown a way forward in the computational aspects of project management approaches for sustainable (value for money) project development and management practices.

Conclusion
In this research, simulation modelling has been introduced as a decision support tool for front-end planning and design analysis of projects.
An integrated approach has been discussed linking project scope, end product or project operational performance and the strategic project objectives at the early stage of projects. The case study example on the Melbourne tram network demonstrates that the application of simulation assists in assessing the performance of project operation and in making appropriate design management and investment decisions over the lifecycle of a project. In developing the prototype, the process simulation approach was used in the projects. The simulation-based framework facilitates evaluating the functionality and operability of feasible project configurations for strategic implementation. Research by the author reveals that there has been little attempt to assess the link between the physical project facility and the underlying business capability and ability to respond to market shifts in contemporary project management practices. The concept presented in this research has taken into consideration multiple views of the project facility within a business operating environment. Process reengineering or investment decisions on the existing facility depend on the target LCOFs of the project. Analysis of alternative project solutions (based on alternative process scope and configuration), rather than a focus on well-designed activities for project implementation, makes a significant contribution to supporting decision making and the management of future project outcomes. Optimised design and maintenance of the project's end facilities in a competitive business environment triggers the strategic positioning of the project organisations

over the lifecycle of the project. The preliminary research has identified the key roots of inefficient operations in terms of the capabilities and utilisation of the project facilities and resources. The simulation-based framework provides engineering assistance in optimising the project's configuration, planning, design and investment decisions in capital projects. By integrating the operational performance of project facilities with design development and implementation, this research assists in devising optimal project solutions based on the life-cycle objective functions of the project. The framework allows organisations to make efficient management decisions on project development in response to market dynamics, customer needs and organisational intents. The ability to quickly explore multiple scenarios, and the capability to assess the impacts of design and engineering processes in devising the best possible solution for complex projects, are the significant contributions of this research. One of the major shortfalls of this research is the limited access to case studies across future projects at their planning stages across all sectors. Another dimension for improving the integrated decision framework is the incorporation of realistic performance indicators of the project organisation for the BSC methodology, which are usually commercial-in-confidence. The author is constantly striving to gain appropriate access to projects at their concept and planning phases and to clearly add value in the decision-making process.
References
Arduino, M. and Bollino, A. (1987), "Modelling and simulation study of a real flexible assembly system", Proceedings of the 3rd International Conference on Simulation and Manufacturing, pp. 195-206.
Artto, K., Lehtonen, J.M. and Saranen, J. (2001), "Managing projects front-end: incorporating a strategic early view to project management with simulation", International Journal of Project Management, Vol. 19, pp. 255-64.
Bell, P.C. and O'Keefe, R.M. (1987), "Visual interactive simulation: history, recent developments and major issues", Simulation, Vol. 49 No. 3, pp. 109-16.
Brans, J.P. and Vincke, P. (1985), "A preference ranking organization method (the PROMETHEE method for multiple criteria decision making)", Management Science, Vol. 31 No. 6, pp. 647-56.
Bruce, M. and Daly, L. (2007), "Design and marketing connections: creating added value", Journal of Marketing Management, Vol. 3 Nos 9/10, pp. 929-53.
Buchanan, J.T. (1994), "An experimental evaluation of interactive MCDM methods and the decision making process", Journal of the Operational Research Society, Vol. 45 No. 9, pp. 1050-9.
Dey, P.K. (2003), "Analytical hierarchy process analyzes risk of operating cross-country petroleum pipelines in India", Natural Hazards Review, Vol. 4 No. 4, pp. 213-21.
Dias, A. Jr and Ioannou, G. (1996), "Company and project evaluation model for privately promoted infrastructure projects", Journal of Construction Engineering and Management, Vol. 122 No. 1, pp. 71-82.
Dikmen, I., Birgonul, M.T. and Artuk, S.U. (2005), "Integrated framework to investigate value innovation", Journal of Management in Engineering, Vol. 21 No. 2, pp. 81-90.
Doloi, H. (2007), "Developing an integrated management system for optimising project options", Journal of Enterprise Information Management, Vol. 20 No. 4, pp. 465-86.


Doloi, H. (2008), Life Cycle Project Management: A Systems Based Approach to Managing Complex Projects, VDM Verlag, Saarbrücken, p. 384.
Doloi, H. and Jaafari, A. (2002), "Towards a dynamic simulation model for strategic decision making in life cycle project management", Project Management Journal, Vol. 33 No. 4, pp. 23-38.
El Maraghy, H. (1982), "Simulation and graphical animation of advanced manufacturing systems", Journal of Manufacturing Systems, Vol. 1 No. 1, pp. 53-63.
Eloranta, E. and Raisanen, J. (1987), "Evaluation and design of plant layout by simulation", Proceedings of the 3rd International Conference on Simulation in Manufacturing, pp. 11-22.
Gen, M. and Cheng, R. (1997), Genetic Algorithms and Engineering Design, Wiley, New York, NY.
Golden, B., Wasil, E. and Harker, P. (Eds) (1989), The Analytical Hierarchy Process: Applications and Studies, Springer, New York, NY.
Goldschmidt, G. (1992), "Criteria for design evaluation: a process oriented paradigm", in Kalay, Y.E. (Ed.), Evaluating and Predicting Design Performance, Wiley, New York, NY, pp. 67-79.
Heath, T., Scott, D. and Boyland, M. (1994), "A prototype computer-based design management tool", Construction Management and Economics, Vol. 12, pp. 543-9.
Irani, Z., Hlupic, V., Baldwin, L.P. and Love, P. (2000), "Re-engineering manufacturing processes through simulation modelling", Logistics Information Management, Vol. 13 No. 1, pp. 7-13.
Jaafari, A., Doloi, H. and Gunaratnam, D. (2004), "Life cycle project management: a platform for strategic project management", in Slevin, D.P., Cleland, D.I. and Pinto, J.K. (Eds), Innovations: Project Management Research, Project Management Institute (PMI), Philadelphia, PA, pp. 141-59.
Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard, Harvard Business School Press, Boston, MA.
Khral, D. (2002), "The Extend simulation environment", in Yücesan, E., Chen, C.-H., Snowdon, J.L. and Charnes, J.M. (Eds), Proceedings of the 2002 Winter Simulation Conference, pp. 205-13.
Kirkham, R.J. (2005), "Re-engineering the whole life cycle costing process", Construction Management & Economics, Vol. 23 No. 1, pp. 9-14.
Kohler, N. (2008), "Long-term design, management and finance for the built environment", Building Research & Information, Vol. 36 No. 2, pp. 189-94.
Luk, M. (1990), "Hong Kong Air Cargo terminals to work in synch because of simulation applications", Industrial Engineering, Vol. 11, pp. 42-5.
Marmon, C. (1991), "Teledyne applies simulation to the design and justification of a new facility", Industrial Engineering, Vol. 3, pp. 29-32.
Montana, J., Guzman, F. and Parellada, F.S. (2007), "Market orientation and design orientation: a management model", Journal of Marketing Management, Vol. 23 Nos 9/10, pp. 861-76.
Nicholson, M.P. and Naamani, Z. (1992), "Managing architectural design: a recent survey", Construction Engineering and Economics, Vol. 10, pp. 479-87.
Nymon, J.G. (1987), "Using analytical and simulation modelling for early factory prototyping", Proceedings of the 1987 Winter Simulation Conference, pp. 721-3.
O'Kane, J.F. (2003), "Simulation as an enabler for organisational excellence", Measuring Business Excellence, Vol. 7 No. 4, pp. 12-19.
Pidd, M. (1992), Computer Simulation in Management Science, 3rd ed., Wiley, Chichester.


Pongcharoen, P., Hicks, C., Braiden, P.M. and Stewardson, D.J. (2002), "Determining optimum genetic algorithm parameters for scheduling the manufacturing and assembly of complex products", International Journal of Production Economics, Vol. 78, pp. 311-22.
Ranasinghe, M. (1996), "Total project cost: a simplified model for decision makers", Construction Management & Economics, Vol. 14 No. 6, pp. 497-505.
Saaty, T.L. (1980), The Analytical Hierarchy Process, McGraw-Hill, New York, NY.
Satter, A., Wood, L.R. and Ortiz, R. (1998), "Asset optimisation concepts and practice", Journal of Petroleum Technology, Vol. 50 No. 8, pp. 62-7.
Schot, J. and Fischer, K. (1993), "Introduction: the greening of the industrial firm", in Fischer, K. and Schot, J. (Eds), Environmental Strategies for Industry, Island Press, Washington, DC.
Williams, D.K. (1999), "Managing the megaproject", Civil Engineering, Vol. 69 No. 10.
Williams, E.J. and Orlando, D.D. (1998), "Simulation applied to final engine drop assembly", paper presented at the Winter Simulation Conference, Washington, DC, 13-16 December.
Xiang, W.N., Gross, M., Fabos, J.G. and MacDougall, E.B. (1992), "A fuzzy-group multicriteria decision-making model and its application to land use planning", Environment and Planning, Vol. 19, pp. 61-84.
Yeo, K.T. (1995), "Planning and learning in major infrastructure development: systems perspectives", International Journal of Project Management, Vol. 13 No. 5, pp. 287-93.
Yeo, K.T. and Tiong, L.K.R. (2000), "Positive management of differences for risk reduction in BOT projects", International Journal of Project Management, Vol. 18, pp. 257-65.


Journal of Manufacturing Technology Management


Emerald Article: Comparing functional and cellular layouts using simulation and Taguchi method Abdessalem Jerbi, Hédi Chtourou, Aref Y. Maalej

Article information:
To cite this document: Abdessalem Jerbi, Hédi Chtourou, Aref Y. Maalej, (2010),"Comparing functional and cellular layouts using simulation and Taguchi method", Journal of Manufacturing Technology Management, Vol. 21 Iss: 5 pp. 529 - 538 Permanent link to this document: http://dx.doi.org/10.1108/17410381011046940 Downloaded on: 26-08-2012 References: This document contains references to 25 other documents To copy this document: permissions@emeraldinsight.com This document has been downloaded 382 times since 2010.



The current issue and full text archive of this journal is available at www.emeraldinsight.com/1741-038X.htm

Comparing functional and cellular layouts using simulation and Taguchi method
Abdessalem Jerbi
Laboratoire des Systèmes Electro-Mécaniques, Ecole Nationale d'Ingénieurs de Sfax, Sfax, Tunisie

Received April 2009 Revised December 2009 Accepted January 2010

Hédi Chtourou
Laboratoire des Systèmes Electro-Mécaniques, Ecole Nationale d'Ingénieurs de Sfax, Sfax, Tunisie and Département de Technologie, Institut Préparatoire aux Études d'Ingénieurs de Sfax, Sfax, Tunisie, and

Aref Y. Maalej
Laboratoire des Systèmes Electro-Mécaniques, Ecole Nationale d'Ingénieurs de Sfax, Sfax, Tunisie
Abstract
Purpose – A number of simulation studies have been conducted by several researchers in order to compare the performances of cellular and functional layouts. The methodologies used in these studies either lack objectivity in several respects or are highly time-consuming. The purpose of this paper is to propose a novel and objective methodology based on the coupling of simulation and the Taguchi method.
Design/methodology/approach – Simulation models for both layouts are first developed. Simulations are then conducted following a standard Taguchi orthogonal array. Subsequently, the obtained results are analyzed using the analysis of variance technique. Finally, a mathematical model is built and validated by the confirmation test.
Findings – The proposed comparison method yielded a valid mathematical model used to predict the superiority rank of the two layouts within the scope of the paper.
Originality/value – This paper presents a novel objective methodology for comparing functional and cellular layouts.
Keywords Cellular manufacturing, Manufacturing systems, Simulation, Taguchi methods
Paper type Research paper

1. Introduction
There have been concerted efforts to improve the productivity of manufacturing systems (MS) by introducing new technologies. Cellular manufacturing is one such technology. Since its inception, the cellular layout (CL), an application of the group technology concept, has emerged as the best substitute for the traditional functional layout (FL). Unlike the FL, which groups functionally similar machines into separate departments, the CL clusters the machines required to manufacture each family of similar product types into independent cells. Analytical models and empirical research have often been used to compare the two MS layouts. However, the major part of the literature dedicated to FL-CL comparison is based on simulation modeling.

Journal of Manufacturing Technology Management Vol. 21 No. 5, 2010 pp. 529-538 © Emerald Group Publishing Limited 1741-038X DOI 10.1108/17410381011046940

JMTM 21,5


In the last decades, several simulation studies have focused on FL-CL comparison. The methodologies used by these studies vary widely but can be classified into three groups. In the first group, some authors have employed the one-factor-at-a-time method. In this method, the two layouts are first compared for one manufacturing context considered as a base model. Then, different experiments are carried out in order to test the robustness of the layout choice obtained in the base model. The testing procedure is based on the alteration of some operating factors, one factor at a time (Morris and Tersine, 1990, 1994). In the second group, some other authors have investigated the effects of the studied factors considering only some specific combinations of their settings, and the choice of these combinations was not well justified. This group includes comparison studies such as Suresh and Meredith (1994), Huq et al. (2001) and Li (2003). In the third and final group, researchers carried out full factorial designs including all the factors to study. The comparison studies of Shafer and Charnes (1992, 1995), Jensen et al. (1996), Farrington and Nazemetz (1998) and Pitchuka et al. (2006) belong to this group. Methodologies of the first two groups lack objectivity in the choice of the experimentation conditions. Hence, they do not allow any statistical level of confidence to be attached to their conclusions. In addition, no information about factor interactions can be obtained from such methods. On the other hand, the full factorial design methodology is highly time consuming and impracticable when the number of factors to study is high. Accordingly, none of these comparison studies can effectively help MS managers in their effort to reliably adopt one of the two layouts or to evaluate the pertinence of migrating from the existing layout to the other. Hence, this research focuses on the development of an objective FL-CL comparison methodology that could fulfill this imperative need of MS managers.
In fact, the presented methodology, based on the Taguchi method (TM) for design of experiments and discrete event simulation, can easily be applied to any manufacturing context. It also provides trustworthy results with a minimum of experimentation effort. The remainder of this paper is organized as follows. The next section gives a general presentation of the TM and its major steps. Section 3 depicts the main parameters defining the MS layouts to be compared. Section 4 deals with the application of the TM to layout comparison.
2. Taguchi method
The objective of the TM is to obtain more robust processes/products under varying environmental parameters. Unlike the full factorial design method that investigates every possible combination of process parameters, the TM studies the entire parameter space with a minimum number of experiments. Accordingly, the studied process should be characterized by a number of parameters: signal factors (SFs), control factors (CFs) and noise factors (NFs) (Figure 1).
Figure 1. MS parameters: signal factors, control factors and noise factors acting on the manufacturing system and its resulting performance

SFs are parameters that define the study context. They are kept constant. On the other hand, CFs are the factors to be investigated. They are varied throughout the experimentation plan. Finally, NFs are factors that are difficult, expensive, or impossible to control during the studied process. The process should be robust with regard to the effects of these factors (Phadke, 1989; Ross, 1996). Hence, this method is based upon the technique of orthogonal arrays (OA), which are specially designed experiment plans allowing the effects of several CFs to be captured simultaneously (Bagchi, 1993). Also, the TM normally includes the expression of the results using the signal to noise ratio (S/N). This ratio is an essential indicator of the ability of the system to perform robustly in the presence of some noise effect (Park, 1998). In fact, each experiment of the OA should be repeated several times for the sake of capturing the noise effect. The S/N ratio is used to consolidate the measures issued from these repetitions into a single value (Haldar and Mahadevan, 2000). In addition to the OA and to the S/N ratio, the TM makes use of the analysis of variance (ANOVA) technique (Montgomery, 2001; Miller, 1985). The ANOVA technique establishes the relative significance of parameters in terms of their percentage contribution to the process response using the statistical F-test. This is accomplished by subdividing the total variability of the S/N ratios into the sum of the contributions imputed to the parameters as well as the error (Phadke, 1989; Ross, 1996). The first step of the application of the TM is the identification of the MS parameters. The parameter levels are then selected. Next, the appropriate OA is selected and the MS parameters are assigned to the OA columns. Simulations are then run based on the arrangement of the OA. Next, results are analyzed using ANOVA. Finally, the mathematical model is developed and subjected to a confirmation test.
In the framework of the present study, the confirmed model could be exploited by the MS manager to predict the superiority rank of the two layouts within the scope of the study.
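The step sequence just described (choose an OA, repeat each run, convert the repetitions into S/N ratios, then compare mean S/N per level) can be illustrated with a toy Python sketch. The L4(2^3) array, the two-repetition responses and the factor assignment below are illustrative assumptions, not data from this study:

```python
import math

def sn_higher_the_better(reps):
    """Taguchi higher-the-better S/N ratio (dB) for one run's repetitions."""
    return -10 * math.log10(sum(1 / y**2 for y in reps) / len(reps))

# L4(2^3) orthogonal array: 4 runs, 3 two-level columns
# (columns 1 and 2 carry two control factors, column 3 their interaction).
L4 = [(1, 1, 1),
      (1, 2, 2),
      (2, 1, 2),
      (2, 2, 1)]

def main_effects(oa, sn):
    """Mean S/N per level for each column: the basis of the effect plots."""
    effects = []
    for col in range(len(oa[0])):
        by_level = {1: [], 2: []}
        for run, ratio in zip(oa, sn):
            by_level[run[col]].append(ratio)
        effects.append({lv: sum(v) / len(v) for lv, v in by_level.items()})
    return effects

# Hypothetical repetitions of a simulated response for the four runs.
reps = [(1.8, 1.9), (0.9, 1.0), (1.2, 1.1), (0.5, 0.6)]
sn = [sn_higher_the_better(r) for r in reps]
best_levels = [max(e, key=e.get) for e in main_effects(L4, sn)]
```

In a real study the responses would come from the simulation model, and the per-level means would then feed the ANOVA and effect plots.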


3. Manufacturing system parameters
3.1 Signal factors
According to the FL, the shop is composed of d departments Di (i = 1, ..., d), each of which includes Mn functionally equivalent machines. In this layout, parts move through departments according to their production routings, and machines are not dedicated to part types (Figure 2). In contrast, the CL is based on the group technology that capitalizes on similar and repetitive activities. Indeed, in this layout, the MS is composed of c independent manufacturing cells Cj (j = 1, ..., c). Each one of these cells is a cluster of Mf different machines dedicated to a number of similar part types, called a part family (Figure 2). Furthermore, the MSs are designed for a demand pattern
Figure 2. Functional and cellular layouts general structure (FL: departments D1-D3 of functionally equivalent machines; CL: cells C1-C2 of machines dedicated to part families; sample part routings shown)


comprising p parts of t types belonging to a number of families identical to the number of cells c. Each product type requires a number of manufacturing operations mopt.
3.2 Control factors
All the factors included in this category are controllable by the operators or the plant managers. CFs are varied during the simulations in order to investigate the superiority domains of the two studied layouts. Based on the findings of Chtourou et al. (2008), seven CFs were selected. The first CF is the ratio of the set up time (ST) over the processing time (PT). The second CF is TT/PT, TT being the transfer time corresponding to the interdepartmental travel time in the FL and to the durations of intra-cell moves in the CL. ST, PT and TT are generally modeled using adequate probabilistic laws. In addition, part types of a same family usually have very similar set ups on the machines. Hence, if a machine is set up for a part type and then must be set up for another type of the same family, the nominal ST for the second job should be weighted by the third CF, namely the ST reduction factor d. Also, jobs enter the MS in batches following an inter-arrival time (IAT) distribution, the fourth CF, generated by a common probabilistic distribution. The size of these batches (BS) is the fifth CF. Besides, in both layouts, every job may have to wait in a queue until the required machine becomes available. The scheduling rule (RULE) governing the different queues is the sixth CF in the present study. It could be First Come First Served (FCFS), Repetitive Lots (RL) or any other sequencing rule (Flynn, 1987; Suresh and Meredith, 1994; Huq et al., 2001). Finally, once processed, every job must be transferred to the next work station in its routing. In the FL, jobs are often transferred in batches in order to reduce transfer costs. Some studies used this transfer mode in the CL, whereas others exploited the proximity of the machines of a same cell to transfer jobs part by part.
The part by part transfer mode allows the simultaneous execution of several operations on the parts of a same batch, called operations overlapping. This is the seventh CF, named OVER.
3.3 Noise factors
The values of some parameters such as IAT or ST are not deterministic. They can be subject to variations due to uncontrollable factors related to the human or physical resources of the MS. These variations can influence the performances of the MS. Modeling these parameters using appropriate probabilistic laws accounts for their stochastic aspect.
3.4 Performance measures
The most popular measures to assess the performances of an MS are the mean flow time (MFT) and the work in process (Chtourou et al., 2008). These two measures characterize the fluidity of the material flow in the system. Hence, the ratio of the MFT of both layouts (MFTFL/MFTCL) is considered in this study for comparison purposes. Also, the throughput, which is usually considered as a productivity measure, is here utilized to characterize the attainment of the simulation steady state.
4. Application
4.1 Experiments planning
In this section, we illustrate the application of the comprehensive comparison methodology through the same example treated by Pitchuka et al. (2006). In this example, the MS is characterized by four part types grouped in two families and eight


machines divided into three process departments in the FL and into two cells in the CL. Every part family is composed of two part types. Each part type requires an average of four operations/part. Besides, no inter-cell moves are required. As for the CFs, they are here studied with two levels each, as depicted in Table I. In addition to the considered seven CFs, several factor interactions could also be investigated. The selected interactions are: (ST/PT) × (TT/PT), (ST/PT) × BS, IAT × RULE, RULE × OVER, BS × (TT/PT) and (ST/PT) × d. Every two-level factor has 1 degree of freedom (DOF) (number of levels − 1). Besides, every two-level factor interaction has 1 DOF [(number of levels of the first factor − 1) × (number of levels of the second factor − 1)]. Hence, the total DOF required for the studied seven factors and six interactions is 14 [= 7 × (2 − 1) + 6 × (2 − 1) × (2 − 1) + 1]. So, a two-level OA with at least 15 DOF was to be selected. The L16(2^15) OA was thus selected for the present analysis. This array, having 15 DOF, requires 16 experimental runs and has 15 columns. The factors were assigned to the L16(2^15) OA using the specified linear graph (Figure 3). A linear graph is a graphic representation of the relation between the studied factors and interactions (Bagchi, 1993; Taguchi et al., 1989). The obtained OA is shown in Table II.
4.2 Simulation
The FL and CL simulation models of the given MS developed by Jerbi et al. (2006, 2009) are here utilized. These models are developed using the ARENA commercial software (Rathmell and Sturrock, 2002; Kelton et al., 2002). Observations were collected for two performance measures: throughput and MFT. The simulation model is assumed to be a non-terminating system, so a steady-state analysis is done using the throughput. This analysis demonstrates that the warm-up period length is 200,000 min. The models are then run for 800,000 min.
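The DOF tally and array choice can be sketched as follows; the small catalogue of standard two-level arrays, and the counting of 1 extra DOF for the overall mean (which makes the bracketed arithmetic above sum to 14), are assumptions made for illustration:

```python
# Each two-level factor contributes (levels - 1) DOF and each two-level
# interaction (levels - 1) * (levels - 1) DOF; 1 more DOF is counted here
# for the overall mean.
factors, interactions, levels = 7, 6, 2
dof = factors * (levels - 1) + interactions * (levels - 1) ** 2 + 1  # 7 + 6 + 1

# Standard two-level orthogonal arrays: name -> (runs, available columns/DOF).
standard_oas = {"L4": (4, 3), "L8": (8, 7), "L12": (12, 11), "L16": (16, 15)}

# Smallest array offering at least the required DOF.
chosen = min((name for name, (_, cols) in standard_oas.items() if cols >= dof),
             key=lambda name: standard_oas[name][0])
```

With the figures above, `chosen` is the L16 array, matching the selection made in the text.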
Table I. Control factors

Factor           Level 1                                    Level 2
ST(a)/PT(a)      ST1/PT1                                    ST2/PT1
TT (min)/PT(a)   U(1,3)/PT1 (for CL), U(1,9)/PT1 (for FL)   U(3,9)/PT1 (for CL), U(3,27)/PT1 (for FL)
IAT (min)        15                                         25
BS               5                                          25
d                0.5                                        0.8
OVER             With overlapping                           Without overlapping
RULE             RL                                         FCFS

Notes: (a) a different distribution is used for each machine and each part; PT1, the low level of PT used by Pitchuka et al. (2006); ST1, the low level of ST used by Pitchuka et al. (2006); ST2, the high level of ST used by Pitchuka et al. (2006); U(Min, Max), uniform distribution between Min and Max

Figure 3. Linear graph for the L16(2^15) OA, assigning the CFs to the array columns (ST/PT to column 1, BS to 2, OVER to 4, IAT to 5, d to 6, TT/PT to 8, RULE to 11) and the six selected interactions to the corresponding interaction columns

Table II. OA and results

Exp  ST/PT(1)  BS(2)  OVER(4)  IAT(5)  d(6)  TT/PT(8)  RULE(11)  Rep1  Rep2    S/N
 1       1       1       1       1      1       1         1      1.82  1.82    5.20
 2       1       1       1       1      1       2         2      0.97  0.89   -0.69
 3       1       1       2       2      2       1         1      0.82  0.82   -1.74
 4       1       1       2       2      2       2         2      1.04  1.03    0.30
 5       1       2       1       1      2       1         2      0.06  0.06  -24.96
 6       1       2       1       1      2       2         1      1.04  1.04    0.34
 7       1       2       2       2      1       1         2      0.94  0.96   -0.45
 8       1       2       2       2      1       2         1      0.79  0.77   -2.19
 9       2       1       1       2      1       1         2      0.93  1.21    0.35
10       2       1       1       2      1       2         1      1.88  1.92    5.59
11       2       1       2       1      2       1         2      0.14  0.14  -17.12
12       2       1       2       1      2       2         1      0.77  0.77   -2.31
13       2       2       1       2      2       1         1      1.06  1.07    0.56
14       2       2       1       2      2       2         2      0.11  0.12  -19.10
15       2       2       2       1      1       1         1      0.83  0.82   -1.65
16       2       2       2       1      1       2         2      0.98  0.98   -0.19

Note: Rep1 and Rep2 are the MFTFL/MFTCL ratios of the two repetitions; S/N in dB

4.3 S/N-based ANOVA
There are three types of S/N ratios: lower-the-better, nominal-the-best, and higher-the-better (HB). Since the objective of this study is to determine under which conditions the CL is better than the FL, it is proposed to maximize the HB type S/N characterizing the MFT ratio MFTFL/MFTCL. This S/N ratio is given by (Ross, 1996; Park, 1998; Montgomery, 2001):

S/N = -10 log10[(1/n) Σ_{i=1}^{n} (1/y_i^2)]

where y_i is the ith MFT ratio value of the n trial conditions. In fact, every experiment suggested by the OA is run twice and the corresponding S/N ratio is computed. Results are shown in Table II. The ANOVA for the S/N ratios is then carried out. Using the pooling technique, the insignificant factors and interactions are pooled up with the error. The initial and pooled ANOVA results are presented in Table III. The analysis results indicate that only the CFs BS, d and RULE are statistically significant. The factors ST/PT, TT/PT, IAT and OVER are considered insignificant as their F-values and contributions are very low. On the other hand, only the interaction RULE × OVER is statistically significant. All the other interactions are insignificant. Figure 4 shows the main effects of the CFs and their interactions and graphically depicts these remarks. In this figure, the importance of a factor is expressed by its slope, whereas the importance of an interaction is expressed by the slope difference between the two curves of the interaction.
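As a quick check, the HB formula reproduces the Table II S/N values from the repeated MFT ratios; a minimal sketch:

```python
import math

def sn_hb(ratios):
    """Higher-the-better S/N = -10*log10((1/n) * sum(1/y_i**2)), in dB."""
    return -10 * math.log10(sum(1 / y**2 for y in ratios) / len(ratios))

# Experiment 1 of the OA: both repetitions gave MFTFL/MFTCL = 1.82.
print(f"{sn_hb([1.82, 1.82]):.2f}")  # 5.20, as in Table II
```

A ratio of exactly 1 (both layouts equal) gives 0 dB, and larger ratios give larger S/N values, which is why the HB form is maximized here.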

Table III. Results of the ANOVA

                        Initial ANOVA                          Pooled ANOVA
Parameter/interaction   SS        DOF  MS      C%     F        SS        DOF  MS      F
ST/PT                   5.87      1    5.87    0.51   0.07     Pooled
BS                      86.65     1    86.65   7.56   1.06     86.65     1    86.65   2.74
TT/PT                   29.08     1    29.08   2.54   0.36     Pooled
IAT                     38.11     1    38.11   3.33   0.47     Pooled
OVER                    3.40      1    3.40    0.30   0.04     Pooled
d                       306.25    1    306.25  26.73  3.75     306.25    1    306.25  9.68
RULE                    269.36    1    269.36  23.51  3.30     269.36    1    269.36  8.51
ST/PT × TT/PT           19.92     1    19.92   1.74   0.24     Pooled
IAT × RULE              34.24     1    34.24   2.99   0.42     Pooled
RULE × OVER             135.21    1    135.21  11.80  1.66     135.21    1    135.21  4.27
BS × TT/PT              7.37      1    7.37    0.64   0.09     Pooled
ST/PT × d               12.49     1    12.49   1.09   0.15     Pooled
ST/PT × BS              34.32     1    34.32   3.00   0.42     Pooled
Error                   163.36    2    81.68                   348.16    11   31.65
Total                   1,145.63  15                           1,145.63  15

Notes: SS, sum of squared deviations; DOF, degree of freedom; MS, mean square; C%, percentage contribution; F, F-value
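Two of the Table III columns can be re-derived from the listed sums of squares; in this sketch the total SS (1,145.63) and the pooled error mean square (31.65) are taken from the table itself:

```python
# C% = SS / SS_total * 100 for the initial ANOVA, and the pooled F-value
# = MS / MS_error with MS = SS / DOF (DOF = 1 for every two-level term).
ss_total = 1145.63
ms_error_pooled = 31.65  # pooled error: SS 348.16 over 11 DOF

ss = {"BS": 86.65, "d": 306.25, "RULE": 269.36, "RULExOVER": 135.21}

c_pct = {k: round(v / ss_total * 100, 2) for k, v in ss.items()}
f_val = {k: round(v / ms_error_pooled, 2) for k, v in ss.items()}
```

The recovered values (e.g. 7.56 percent and F = 2.74 for BS) match the significant rows of Table III.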

Figure 4. Parameters effects: main-effect plots of the seven CFs and interaction plots of the six selected interactions (S/N ratio against factor levels)

S/N = 48.95 + 2.00 (ST/PT) + 13.46 (TT/PT) - 5.69 IAT - 3.45 d - 16.52 OVER - 9.37 BS - 34.43 RULE - 4.46 (ST/PT)(TT/PT) + 5.85 IAT × RULE + 11.63 RULE × OVER - 2.72 (ST/PT) BS - 3.53 (ST/PT) d + 5.86 BS (TT/PT)
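A hypothetical evaluator of the model is sketched below. The coefficients and their signs follow the equation as reconstructed from the garbled extraction above and could not be verified against the original typesetting, so they should be treated as assumptions; the sign-to-layout rule, however, is exactly the one stated in the text:

```python
def predict_sn(st_pt, tt_pt, iat, bs, d, over, rule):
    """S/N (dB) predicted by the fitted model for the given CF settings."""
    return (48.95 + 2.00 * st_pt + 13.46 * tt_pt - 5.69 * iat - 3.45 * d
            - 16.52 * over - 9.37 * bs - 34.43 * rule
            - 4.46 * st_pt * tt_pt + 5.85 * iat * rule
            + 11.63 * rule * over - 2.72 * st_pt * bs
            - 3.53 * st_pt * d + 5.86 * bs * tt_pt)

def best_layout(sn, tol=0.5):
    """Positive S/N: CL outperforms FL; negative: FL; near zero: equivalent."""
    if abs(sn) < tol:
        return "equivalent"
    return "CL" if sn > 0 else "FL"

# Continuous CFs may take intermediate values (e.g. ST/PT = 1.2, d = 0.65);
# OVER and RULE remain restricted to the discrete levels 1 and 2.
```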


A confirmation experiment is carried out to validate the developed model. This experiment consists of adopting the recommended best levels of the CFs, as shown by Figure 4. The average result from the confirmation test should statistically correspond to the optimum performance estimated by the mathematical model. If the average of the results of the confirmation experiment is within the limits of the confidence interval (CI) of the predicted result, then the mathematical model is considered confirmed; otherwise, additional analysis and experimentation are needed (Ross, 1996). Considering the maximum value of the S/N ratio, the optimum levels of the CFs are as follows: ST/PT1, TT/PT2, IAT2, BS1, d1, OVER2 and RULE1. In this case, the expected result in terms of S/N ratio is 9.17 dB. The computed 95 percent CI is equal to ±11.16 dB. Therefore, the expected result should lie between -2.02 and 20.30 dB. In fact, the best response of -1.47 dB obtained by the confirmation experiment, which was repeated two times, is within the limits of the CI. The mathematical model is hence considered valid. Consequently, it can be used by the manufacturer to determine the best layout for the machines of its MS. The manufacturer can also investigate the effect of changing one or several CF levels on the performances of the two layouts. If the S/N ratio value computed by the model is negative, then the FL is better than the CL. On the contrary, if the predicted S/N ratio value is positive, then the CL outperforms the FL. Finally, the performances of the two layouts are equivalent if the S/N ratio value predicted by the mathematical model is close to zero. Table IV depicts the CL and FL superiority contexts expressed as combinations of the CFs. This table can be used by the manufacturer to determine the more effective layout for every one of the 128 possible CF level combinations. Indeed, the intersection between the row that represents the combination of the ST/PT, TT/PT, BS and OVER levels and the column corresponding to the IAT, RULE and d levels gives the outperforming layout.

Table IV. Level combinations giving layout superiority: for every combination of the ST/PT, TT/PT, BS and OVER levels (rows) and of the IAT, RULE and d levels (columns), the table indicates the outperforming layout (CL or FL; one cell reads "CL and FL", denoting equivalence). Notes: X1, first level of the CF X; X2, second level of the CF X

For example, the FL is the best layout for the following CF level combination: ST/PT2, TT/PT1, BS1, OVER1, IAT2, RULE1 and d2. The mathematical model can also be used to predict the best layout for intermediate levels of the CFs ST/PT, TT/PT, BS, IAT and d, which are continuous factors. Unlike these CFs, OVER and RULE are discrete and can be investigated only at the two levels 1 and 2. For example, for the setting combination ST/PT = 1.2, TT/PT = 1.1, BS = 1.3, OVER = 1, IAT = 1.6, RULE = 2 and d = 0.65, the FL outperforms the CL.

5. Conclusion
This paper presents an objective comparison methodology between functional and CLs. The main goal of this methodology is to help MS managers choose the layout most appropriate to their manufacturing context. The developed methodology integrates discrete event simulation and the TM for the design of experiments and the analysis of results. With a minimal experimental effort, this method makes it possible to reliably evaluate the effect of each MS parameter on the system performance and to reveal the possible interactions between these parameters. The main outcome of the proposed methodology is a mathematical model depicting the superiority trend of the two layouts. In fact, once developed and validated, the mathematical model can be used by the MS manager to predict the S/N ratios for any combination of the MS parameters within the scope of the experimental study. For every parameter combination, the sign of the S/N ratio indicates the best layout. The model can also be exploited to interpolate the results between the studied levels of continuous parameters such as the batch inter-arrival time or the batch size. The application of this methodology to an illustrative example showed its efficiency for the choice of the best layout for an MS. Many aspects of the comparison methodology are currently being developed.
The first task is the refinement of the performed study by considering three levels for each CF in order to capture non-linearity. The enlargement of the application scope to other domains is also projected. This should increase the chances of the proposed methodology being successfully applied and validated on real cases.
References
Bagchi, T.P. (1993), Taguchi Methods Explained, Prentice-Hall of India, New Delhi.
Chtourou, H., Jerbi, A. and Maalej, A.Y. (2008), "The cellular manufacturing paradox: a critical review of simulation studies", Journal of Manufacturing Technology Management, Vol. 19 No. 5, pp. 591-606.
Dobson, A.J. (2001), An Introduction to Generalized Linear Models, Chapman & Hall, Boca Raton, FL.
Farrington, Ph.A. and Nazemetz, J.W. (1998), "Evaluation of the performance domain of cellular and functional layouts", Computers & Industrial Engineering, Vol. 34 No. 1, pp. 91-101.
Flynn, B.B. (1987), "Repetitive lots: the use of a sequence-dependent set-up time scheduling procedure in group technology and traditional shops", Journal of Operations Management, Vol. 7 No. 2, pp. 203-16.
Haldar, A. and Mahadevan, S. (2000), Probability, Reliability and Statistical Methods in Engineering Design, Wiley, New York, NY.
Huq, F., Douglas, A.H. and Zubair, M.M. (2001), "A simulation analysis of factors influencing the flow time and through-put performance of functional and cellular layouts", Integrated Manufacturing Systems, Vol. 12 No. 4, pp. 285-95.


Jensen, J.B., Malhotra, M.K. and Philipoom, P.R. (1996), "Machine dedication and process flexibility in a group technology environment", Journal of Operations Management, Vol. 14 No. 1, pp. 19-39.
Jerbi, A., Chtourou, H. and Maalej, A.Y. (2006), "Functional VS cellular layout: using simulation as a comparison tool", paper presented at the Third International Conference on Advances in Mechanical Engineering and Mechanics, Hammamet.
Jerbi, A., Chtourou, H. and Maalej, A.Y. (2009), "Comparing functional and cellular layouts: simulation models", International Journal of Simulation Modelling, Vol. 8 No. 4, pp. 215-24.
Kelton, W.D., Sadowski, R.P. and Sadowski, D.A. (2002), Simulation with Arena, 2nd ed., McGraw-Hill, New York, NY.
Li, J. (2003), "Improving the performance of job shop manufacturing with demand-pull production control by reducing set-up/processing time variability", International Journal of Production Economics, Vol. 84 No. 3, pp. 255-70.
Miller, R.G. (1985), Beyond ANOVA: Basics of Applied Statistics, Wiley, New York, NY.
Montgomery, D.C. (2001), Design and Analysis of Experiments, 5th ed., Wiley, New York, NY.
Morris, J.S. and Tersine, R.J. (1990), "A simulation analysis of factors influencing the attractiveness of group technology cellular layouts", Management Science, Vol. 36 No. 12, pp. 1567-78.
Morris, J.S. and Tersine, R.J. (1994), "A simulation comparison of process and cellular layouts in a dual resource constrained environment", Computers and Industrial Engineering, Vol. 26 No. 4, pp. 733-41.
Park, S.H. (1998), Robust Design and Analysis for Quality Engineering, Chapman & Hall, London.
Phadke, M.S. (1989), Quality Engineering Using Robust Design, P.T.R. Prentice-Hall, Englewood Cliffs, NJ.
Pitchuka, L.N., Adil, G.K. and Ananthakumar, U. (2006), "Effect of the conversion of the functional layout to a cellular layout on the queue time performance: some new insights", International Journal of Advanced Manufacturing Technology, Vol. 31 Nos 5/6, pp. 594-601.
Rathmell, J. and Sturrock, D.T. (2002), "The ARENA product family: enterprise modeling solutions", Proceedings of the 34th Winter Simulation Conference, San Diego, CA, IEEE, Piscataway, NJ, pp. 165-72.
Ross, P.J. (1996), Taguchi Techniques for Quality Engineering, McGraw-Hill, New York, NY.
Shafer, S.M. and Charnes, J.M. (1992), "Cellular versus functional layouts under a variety of shop operating conditions", Decision Sciences, Vol. 24 No. 3, pp. 665-81.
Shafer, S.M. and Charnes, J.M. (1995), "A simulation analysis of factors influencing loading practices in cellular manufacturing", International Journal of Production Research, Vol. 33 No. 1, pp. 279-90.
Suresh, N.C. and Meredith, J.R. (1994), "Coping with the loss of pooling synergy in cellular manufacturing systems", Management Science, Vol. 40 No. 4, pp. 466-83.
Taguchi, G., Elsayed, E. and Hsiang, T. (1989), Quality Engineering in Production Systems, McGraw-Hill, New York, NY.
Corresponding author
Abdessalem Jerbi can be contacted at: jerbi_a@yahoo.fr

Journal of Manufacturing Technology Management


Emerald Article: Design of lean manufacturing systems using value stream mapping with simulation: A case study Anand Gurumurthy, Rambabu Kodali

Article information:
To cite this document: Anand Gurumurthy, Rambabu Kodali, (2011),"Design of lean manufacturing systems using value stream mapping with simulation: A case study", Journal of Manufacturing Technology Management, Vol. 22 Iss: 4 pp. 444 - 473 Permanent link to this document: http://dx.doi.org/10.1108/17410381111126409 Downloaded on: 26-08-2012 References: This document contains references to 71 other documents Citations: This document has been cited by 1 other documents To copy this document: permissions@emeraldinsight.com This document has been downloaded 2592 times since 2011. *


Access to this document was granted through an Emerald subscription provided by ALLAMEH TABATABA'I UNIVERSITY For Authors: If you would like to write for this, or any other Emerald publication, then please use our Emerald for Authors service. Information about how to choose which publication to write for and submission guidelines are available for all. Please visit www.emeraldinsight.com/authors for more information. About Emerald www.emeraldinsight.com With over forty years' experience, Emerald Group Publishing is a leading independent publisher of global research with impact in business, society, public policy and education. In total, Emerald publishes over 275 journals and more than 130 book series, as well as an extensive range of online products and services. Emerald is both COUNTER 3 and TRANSFER compliant. The organization is a partner of the Committee on Publication Ethics (COPE) and also works with Portico and the LOCKSS initiative for digital archive preservation.
*Related content and download information correct at time of download.


JMTM 22,4

Design of lean manufacturing systems using value stream mapping with simulation
A case study
Anand Gurumurthy
Mechanical Engineering Group, Birla Institute of Technology and Science, Pilani, India, and

Received June 2009; revised March 2010; accepted March 2010

Rambabu Kodali
Mechanical Engineering Group and Engineering Technology Group, Birla Institute of Technology and Science, Pilani, India
Abstract
Purpose: Generally, the implementation of lean manufacturing (LM) starts with the development of value stream maps. However, it has been found that value stream mapping (VSM) suffers from various shortcomings. Hence, researchers have suggested the use of simulation along with VSM. The purpose of this paper is to present an application of VSM with simulation during the design of lean manufacturing systems (LMS), using a case study of an organisation following a job shop production system to manufacture doors and windows.
Design/methodology/approach: Simulation models were developed using the QUEST (QUeuing Event Simulation Tool) software for the case organisation to demonstrate how the case organisation will change after implementing various LM elements, apart from analysing the impact of implementing these LM elements on the organisation's performance.
Findings: Simulation studies were carried out for different scenarios, such as before LM (current state VSM) and after LM (future state VSM). It was found that the case organisation can achieve significant improvement in performance and can meet the increasing demand without any additional resources.
Practical implications: It is believed that this paper will enable practitioners to appreciate the role of simulation in helping them understand how the operations department of the case organisation will be transformed during the design of LMS.
Originality/value: To the authors' knowledge, no case study exists in the literature that discusses the application of VSM with simulation in an organisation that manufactures doors and windows using a job shop production system. Furthermore, the paper simulates the impact on the performance measures of the case organisation of those LM elements which were not considered by other researchers.
Keywords: Lean production, Manufacturing systems, Computer software, Simulation, Performance measures
Paper type: Case study

Journal of Manufacturing Technology Management, Vol. 22 No. 4, 2011, pp. 444-473. © Emerald Group Publishing Limited, 1741-038X. DOI 10.1108/17410381111126409

The authors would like to thank Mr M.N. Sridhar, a student of the Distance Learning Programmes Division, BITS, Pilani, for sharing his knowledge during the viva-voce examination and for allowing the use of his dissertation, which was submitted in partial fulfilment of his Master's degree. Similarly, thanks are due to Mr Gursharanjit Singh, a final-year student of BE (Hons), Mechanical Engineering Group, BITS, Pilani, for his timely help in developing the simulation models using the QUEST software. Thanks are also due to Mrs M. Sowmiya for formatting this manuscript and Ms A. Gayathri for proofreading this manuscript.

1. Introduction
In recent years, many organisations, both in India and in other countries, have been implementing the principles and concepts of lean manufacturing (LM) with the objective of achieving superior competitive advantage over other organisations. A few companies have attained their objective, while many have not. For instance, Dunstan et al. (2006) examined the application of LM in a mining environment. They described the implementation of certain LM elements that are applicable in such organisations and noted that health- and safety-related incidents were reduced from 154 to 67 and absenteeism was reduced from 3.4 to 1.8 per cent, while about $2 million (Australian) was saved during the year 2006. On the other hand, Bamber and Dale (2000) discussed the application of lean production methods in a traditional aerospace manufacturing organisation. They found that there were two main stumbling blocks to the LM application: the redundancy programme and a lack of employee education in the concepts and principles of lean production. Mohanty et al. (2007) also supported this statement and noted that:
[. . .] many of the companies that report initial gains from lean implementation often find that improvements remain localized, and the companies are unable to have continuous improvements going on. One of the reasons, we believe, is that many companies or individual managers who adopted lean approach have incomplete understanding and, as a result, could not be able to gain all the benefits as Toyota enjoys.


Apart from these stumbling blocks, other reasons for failure include a lack of understanding by the managers of these organisations regarding the following:
- How to implement LM?
- What changes will happen in an organisation as it is transformed by the implementation of LM?
- How will LM affect the performance measures of an organisation?
To overcome the first issue (i.e. how to implement LM), researchers have proposed different methodologies and steps. For example, Womack and Jones (1996) enumerated the five tenets of LM and emphasized that value stream mapping (VSM) has to be carried out as the first step towards LM implementation. Recently, Grewal (2008) described the application of VSM in XYZ bicycle manufacturing company, a small manufacturing firm located in the northern part of India. He explained in detail the current state of activities within the firm, the opportunities for improvement and the improvement programmes required for achieving the future state, apart from enumerating the benefits obtained. It is evident from this case that VSM can also provide answers to both the second and third questions to some extent. But a literature review revealed that VSM suffers from several shortcomings (which are discussed later). To resolve these shortcomings, researchers have suggested that simulation can be utilised in conjunction with VSM. A few studies combining VSM and simulation are available in the literature (reviewed in the next section). However, to the authors' knowledge, no studies exist in the LM literature which describe the application of VSM with simulation during the design of lean manufacturing systems (LMS) in an organisation that manufactures doors and windows using a job shop production system. Hence, an attempt has been made in this paper to present the same. Furthermore, it will enable practitioners to understand:

- the feasibility of implementing LM tools/techniques/practices/procedures (in short, these can be called elements);
- how an organisation will function after the implementation of LM; and
- what the benefits or performance improvements due to LM implementation are.

The paper is arranged as follows: Section 2 provides a literature review, which reveals the research gaps, while Section 3 presents an overview of the case organisation. Section 4 enumerates the design of the LMS, describing the initial steps taken by the case organisation, and Section 5 demonstrates the development of simulation models for designing the LMS for the case organisation. Section 6 deals with results and discussion and, finally, Section 7 ends with conclusions.

2. Literature review
This section is divided into four subsections. The first provides a brief review of literature related to case studies describing the implementation of LM, while the second deals with literature related to VSM. The third reviews literature related to the application of VSM with simulation during the design of LMS, and the last highlights the various research gaps.

2.1 A brief review of case studies describing the implementation of LM
Many case studies exist that deal with LM implementation in a wide variety of industrial sectors other than manufacturing. For instance, Sreedharan and Liou (2007) elaborated a case study of implementing LM principles in a university rapid manufacturing laboratory. Although lean initiatives are undertaken in other sectors, the number of LM implementations in the manufacturing sector is much higher when compared to other sectors. Hence, this review focuses only on LM implementations in the manufacturing sector. Table I provides a list of case studies describing LM implementation in the manufacturing sector. From Table I, it can be found that LM has been implemented in a variety of manufacturing industries. A cursory review of these papers reveals that these industries have established different manufacturing systems, such as project shop, job shop, batch production, mass production and continuous production systems.
Hence, a classification scheme (taxonomy) is also established for the reviewed papers, based on the type of production system followed in each case organisation.

2.2 A brief review of literature related to VSM
Rother and Shook (1999) explained that a value stream comprises all the actions (both value added (VA) and non-value added (NVA)) that are required to bring a product or a group of products from raw materials to the arms of the customer. VSM, in turn, is a pencil-and-paper visualisation tool that shows the flow of material and information as a product makes its way through the stream. Many researchers have described the application of VSM; Table II shows a review of papers describing the application of VSM. From Table II, it can be found that VSM has been used in both manufacturing and service industries; however, its application is more predominant in manufacturing. It is used mostly for productivity improvements, but in recent times, it is also applied

Table I. A list of case studies describing the LM implementation in various manufacturing industries (classified by type of production system: project shop, job shop, batch production, mass production and continuous production, with SMEs noted separately):
1. Ship building: Storch and Lim (1999)
2. Traditional aerospace manufacturing: Bamber and Dale (2000)
3. Aerospace component suppliers: Crute et al. (2003)
4. Specialist machinery manufacturers and aerospace sector: Jina et al. (1997)
5. Capital equipment: Mottershead (2001)
6. Secondary wood products: Czabke (2007)
7. High-mix, low-volume manufacturing (aerospace component): Dudley (2005)
8. Rough mill: Gumbo et al. (2006)
9. Die casting industry (SME), precision machined components: Kumar et al. (2006)
10. Printing technologies: Scott (2007)
11. Forging (supplier for railways, oil and gas and the machine tool sector): Sahoo et al. (2008)
12. Automobile industry (motor compartment automated monorail system): Braiden and Morrison (1996)
13. Automotive components (chassis systems): Mabry and Morrison (1996)
14. Automotive components (windscreen wiper systems): Sohal (1996)
15. Auto component supplier (electro-mechanical components): Kasul and Motwani (1997)
16. Auto component supplier (technical fastening devices): Soderquist and Motwani (1999)
17. Automotive components (electro-mechanical components): Motwani (2003)
18. Truck manufacturing company: Wallace (2004)
19. Continuous product line of a tyre manufacturing plant: Mukhopadhyay and Shanker (2005)
20. Auto component supplier (motorcycle frames): Seth and Gupta (2005)
21. Truck production: Berg and Ohlsson (2005)
22. Robotic assembly cell in automotive component manufacturer: Abduelmula et al. (2005)
23. Automobile industries: Mohanty et al. (2007)
24. Car manufacturer: Lee and Jo (2007)
25. Automotive component assembly line (combustion injection valves): Domingo et al. (2007)
26. Paper industries: Lehtonen and Holmström (1998)
27. Steel manufacturing: Brunt (2000)
28. Metal forming: Lee and Allwood (2003)
29. Mining environment: Dunstan et al. (2006)
30. Textile: Goforth (2007)
31. Durable articles: Gupta and Brennan (1995)
32. Automotive components (automobile lamps): Gunasekaran and Lyu (1997)
33. Electronic office equipment manufacturer: Karlsson and Åhlström (1997)
34. Numerically controlled bagging machines: Abdul-Nour et al. (1998)
35. Automotive components (wiper systems): Gunasekaran et al. (2000)
36. Small bicycle manufacturing company: Grewal (2008)

Table II. A review of papers describing the application of VSM:
1. Hines and Rich (1997): introduced seven more tools, which can be used in conjunction with VSM.
2. Hines et al. (1999): discussed the application of VSM to the development of a supplier network.
3. Brunt (2000): demonstrated how VSM can be used to map the entire processes along the supply chain, from steelmaking (i.e. raw material) to steel component supplier.
4. Freire and Alarcón (2002): detailed the application of VSM in the design process of construction projects.
5. McManus and Millard (2002): explored the concept of value stream analysis and mapping as applied to product development (PD) efforts.
6. Dhandapani et al. (2004): constructed the current and future state VSMs of a steel company.
7. Emiliani and Stec (2004): enumerated the role of VSM in determining leadership beliefs, behaviours and competencies.
8. Özkan et al. (2005): explained how VSM and its associated tools can be used to design a desired future state aligned with LM principles on the shop floor of an automotive industry.
9. Schulte et al. (2005): documented how lean can be applied in a PD test laboratory.
10. Seth and Gupta (2005): described the application of VSM for lean operations and cycle time reduction in an auto component supplier company.
11. Taylor (2005): applied lean value chain improvement techniques (i.e. VSM) to a complete supply chain for a food product, from farm to consumer.
12. Braglia et al. (2006): discussed a new VSM approach for the design of complex production systems.
13. Endsley et al. (2006): introduced the application of VSM in a hospital, tracing the flow of a patient.
14. Lummus et al. (2006): reported on a VSM project in a small medical clinic.
15. Parry and Turner (2006): described the application of lean visual process management tools.
16. Grewal (2008): detailed the application of VSM in a small bicycle manufacturing company.
17. Lasa et al. (2008): presented a case study of a company in which the VSM was created by a team to improve the production system of a manufacturer of plastic casings for mobile phones.
18. Serrano et al. (2008): commented on the applicability of VSM to redesign disconnected flow lines in manufacturing environments with a diversity of logistical problems.
for other purposes, such as improving leadership, performing benchmarking and increasing value across the supply chain.

2.3 A brief review of papers describing the application of VSM with simulation
Chu and Shih (1992) commented that although several methodologies have been used in studying just-in-time (JIT) production, simulation has attracted the attention of many researchers and practitioners. In recent times too, many simulation studies have been reported in the field of LM. Table III shows a review of papers describing the application of simulation and VSM during the design of LMS.

Table III. A review of papers describing the application of VSM and simulation (S.no.; author(s) and year; LM elements used; remarks):
1. Czarnecki and Loyd (2000). LM elements: one-piece flow, takt time. Remarks: developed a simulation model to demonstrate the application of lean principles to a high-volume manufacturing facility, transforming the case company into a high-performing lean enterprise.
2. Detty and Yingling (2000). LM elements: pull system, standard containers, small lot production, 5S and pokayoke. Remarks: used simulation to assist the decision on implementing LM principles at an existing assembly operation for a consumer electronic product with a volume of about 500,000 units per year.
3. Dennis et al. (2000). LM elements: not discussed. Remarks: utilised simulation in conjunction with VSM to improve the performance of British Telecommunications PLC.
4. Lian and Van Landeghem (2002). LM elements: kanban, supermarkets, pull system and U-shaped manufacturing cells. Remarks: developed two simulation models for the VSM following two scenarios: push and pull (kanban) systems.
5. McDonald et al. (2002). LM elements: supermarket, kanban, heijunka box and setup reduction. Remarks: described an application of VSM and simulation to a dedicated product line in an engineer-to-order motion control products manufacturing plant; ARENA was used for the simulation.
6. Mittelhuber et al. (2002). LM elements: description not available. Remarks: noted that using conventional simulation systems to model complete door-to-door production is an expensive and time-consuming undertaking; hence, they presented a simulation method that suits the practical requirements of VSM.
7. Schroer (2004). LM elements: line balancing against takt time, pull versus push manufacturing and kanban inventory control. Remarks: presented the use of discrete event simulation to understand the concepts of LM.
8. Huang and Liu (2005). LM elements: setup time reduction using single minute exchange of dies, use of new machines, reduced distances between workstations. Remarks: developed a simulation in ARENA to model a factory of a Taiwan-funded enterprise in mainland China that produces oval-gear flow metres, to understand the effect of implementing lean control approaches in the factory.
9. Abdulmalek and Rajgopal (2007). LM elements: total productive maintenance and setup time reduction. Remarks: described a simulation model developed to contrast the before and after scenarios of the VSM constructed for a large integrated steel mill.
10. Duanmu and Taaffe (2007). LM elements: takt time analysis and line balancing. Remarks: improved throughput using a combination of takt time and simulation by understanding how each stage of the system interacts with other stages, in a company that manufactures two main types of customised products.
11. Kannan et al. (2007). LM elements: maintenance process improvement activities. Remarks: emphasized that the traditional VSM cannot be utilised as-is for maintenance activities; hence, they developed a VSM specifically for maintenance to evaluate the NVA activities and provided recommendations to reduce the mean maintenance lead time through simulation.
12. Lian and Van Landeghem (2007). LM elements: supermarkets, pull system and kanbans. Remarks: enumerated the application of a VSM-based simulation generator on the shop floor of a poultry and pig raising equipment manufacturer, covering feeding, drinking, feed storage and feed transportation systems.
13. Narasimhan et al. (2007). LM elements: not discussed, as the work applies to engine testing. Remarks: introduced a new approach known as simulation-aided value stream mapping (saVSM) and illustrated a case study showcasing the successful application of the saVSM approach at a global engine manufacturer's test environment.

From Table III, it can be inferred that some of the studies focused only on simulation. For example, Detty and Yingling (2000) demonstrated the application of simulation during the design of an LMS for a case organisation; however, they did not discuss the role of VSM and did not integrate VSM into their simulation. Some of the studies explored the simultaneous application of simulation and VSM in industries having different types of production system/process. Abdulmalek and Rajgopal (2007) and Comm and Mathaisel (2005) described the application of VSM with simulation in continuous process industries, namely a steel mill and a textile plant, respectively. Lian and Van Landeghem (2007) discussed the application of VSM-based simulation in a low-volume, high-variety component production shop of a poultry and pig raising equipment manufacturer. On the other hand, Dennis et al. (2000) demonstrated the application of VSM with simulation in a service industry. From Table III, it can also be concluded that the application of VSM with simulation is more prevalent in manufacturing than in service.

2.4 Research gaps
Although significant work has been carried out in the recent past in the areas of LM implementation, VSM and VSM with simulation, various research gaps were identified from these three reviews. A review of the case studies in Table I revealed that:
- LM can be applied in any type of industry, irrespective of the size and the type of production system/process involved. It can be applied even in a small die-casting unit or in a large aerospace manufacturing organisation. Although LM is being applied irrespective of the type of production system (project, job, batch, mass or continuous), the number of case studies in the project and continuous production categories is very small, while the number in the mass production category is larger. The number of case studies for the remaining two production systems (job shop and batch) lies in between.
- As mentioned by Karlsson and Åhlström (1997), most of the case studies are from the automotive sector, comprising component suppliers and automobile manufacturers. Nearly 45 per cent (i.e. about 16 out of 36) of the reviewed studies are from the automotive sector.
- The number of case studies dealing with LM implementation in small- and medium-sized enterprises (SMEs) is very small. Only six papers specifically mention the implementation of LM in SMEs. However, if some of the industries dealing in metal forming, die casting, etc. are included, this may increase to eight, which is still comparatively few.
- The number of papers describing LM implementation in Indian industries is also very small. Of the 36 papers reviewed, only six dealt with LM implementation in Indian industries. Similarly, a cursory review of these case studies reveals that, even in India, LM is predominantly applied only in the automobile sector.


Hence, in this paper, an attempt has been made to present a case study that overcomes most of these issues. This case study differs from the reported case studies in the following ways:
- It demonstrates that LM can be implemented in an organisation which manufactures doors and windows using a job shop production system. This case study of a doors and windows manufacturing organisation is unique in the category of job shop production, and to the authors' knowledge no such case studies exist in the realm of LM to date. Although a couple of wood products companies were identified in the review, those industries produce only the raw materials (i.e. properly cut and sawn wood) for furniture making, whereas the case organisation considered for this study uses different materials, such as poly-vinyl chloride (PVC), for manufacturing doors and windows.
- Second, this case study differs from the reviewed ones, as it is not from the automotive sector. Furthermore, it details the LM implementation and the application of VSM with simulation in an organisation in the furniture industry sector.
- Third, it reports on LM implementation in an Indian organisation from a non-automotive sector, to emphasize that LM implementations are gaining widespread attention among Indian industries.
- Finally, the case organisation falls under the category of SME, as it is a relatively small facility in terms of size, number of employees, capital invested, equipment, etc. when compared with an organisation in the automotive sector.
On the other hand, a review of the papers related to VSM in Table II revealed that:
- VSM is utilised in manufacturing organisations irrespective of the type of production system, but its application in a doors and windows manufacturing environment is yet to be documented. Although Czabke (2007) and Gumbo et al. (2006) have described the application of LM in the wood industry, they did not demonstrate the application of VSM with simulation in their studies.
Similarly, a review of the papers related to the application of simulation and VSM in Table III revealed that:
- Most of the simulation studies carried out from the early 1990s to the present address areas such as kanban, pull/push, mixed model assembly/production and inventory control (small lot production). Adequate importance is not given to other JIT/LM elements such as multi-machine activities, kaizen (continuous improvement), cycle time reduction, pokayoke, visual management, process improvements, automation and floor space reduction. Very few papers have attempted to address or incorporate these LM elements during simulation.
- Apart from this, most of the simulation studies focus on analysing one or a few issues, such as finding the optimal size of kanbans, developing an optimal schedule for mixed model assembly, or analysing the performance of push/pull systems. To the authors' knowledge, very few studies have considered a combined implementation of JIT/LM elements.
- No paper exists in the literature which demonstrates the application of VSM with simulation while simultaneously considering various LM elements such as layout change, multi-machine activities, kaizen (continuous improvement), takt time analysis, cycle time reduction, pokayoke, visual management, process improvements, automation and floor space reduction in developing the simulation models during the design of LMS, especially for a doors and windows manufacturing organisation.
Thus, this paper attempts to address some, if not all, of the above-mentioned research gaps by developing a simulation model for designing an LMS based on real-life data from the doors and windows manufacturing organisation.
This paper demonstrates how a simulation model can be constructed if a combination of the above-mentioned LM elements were implemented, and analyses their impact on the performance measures of the case organisation.

3. An overview of the case organisation
The company considered is named ABC Limited (ABCL) to maintain confidentiality. ABCL is a unit of XYZ Limited, which has an annual turnover of about Rs. 2,500 crores and 30 years of experience in managing large-scale process industries. The company launched the business of PVC door and window manufacturing systems in India in 2003, in technical collaboration with a UK plastics company, with its state-of-the-art PVC profile extrusion plant at its Rajasthan facility. The fabrication units are located in Bhiwadi, apart from other metros such as Hyderabad, Bangalore, Mumbai and Chennai. The total production capacity of all these fabrication units is about 100,000 windows per annum. The LM implementation is currently being carried out in the fabrication unit located in Hyderabad, which has a strength of about 80 people. Currently, the Hyderabad unit manufactures five types of products, namely:
(1) casement window;
(2) casement door;
(3) sliding window;


(4) sliding door; and
(5) ventilators,
as per the customer sizes and designs (Sridhar, 2007). Since the windows are custom designed and the volumes for individual designs are very low, this industry falls under the job shop production process.

3.1 Problems faced
In recent times, the construction sector in India has been booming. Naturally, the demand for windows and doors is also increasing. For instance, the demand on the Hyderabad plant is expected to increase from 40 windows per day to 60 windows per day. As the market grows, the case organisation has to compete not only with similar industries but also with local manufacturers who make wooden doors and windows. Analysing the production system, the team found many areas where significant improvement is required. For example, the production rate of the cell, measured in number of squares produced per shift, is 160 (i.e. 160 squares per 8-hour shift). This rate is sufficient only to meet the existing demand of 40 windows per day, and the cell suffers from under-capacity to meet the future demand. Another aspect of the case organisation is that the inventory level within the plant was found to be high. The current work in progress (WIP) for the entire fabrication unit is 1,000 squares per day, i.e. an average of 125 squares is held before each work station (Sridhar, 2007). Because of the lower production rate, lower capacity and higher inventory, the top management of ABCL was planning to implement the principles and concepts of LM to remain competitive and meet the ever-increasing demand without much increase in resources, by eliminating the wastes plaguing the operations.

4. Design of LM systems
The management of ABCL started their lean journey and named their production system the ABCL Production System, similar to the Toyota Production System (TPS). The reason is that the top managers had a strong belief that, since LMS originated in the automotive sector, it cannot simply be copied into their production system. They believed that the concepts of TPS have to be adapted, customised and suited to their production system.
As a starting point, to enable the employees of ABCL to understand the new principles and procedures of LM, necessary training sessions were arranged on the following tools and techniques:
- 5S;
- kaizen (continuous improvement);
- VSM; and
- muda (wastes), etc. (Sridhar, 2007).
After this initial training, the team started to collect details regarding the existing situation of the shop floor. The production process was analysed and the different stages involved in making a window/door were identified, as shown in Figure 1.

4.1 Value stream mapping
The next step is to draw the VSM, for which an understanding of the process sequence is a pre-requisite. Drawing the VSM involves two steps: step 1 is to draw the

Figure 1. Process sequence of making the window (Source: Sridhar, 2007): profile cutting; processing (drainage/v-groove/routing/single head welding); reinforcement cutting and fixing; fusion welding; assembly; bead cutting; glazing; packaging and dispatch.
current state map, while step 2 is to draw the future state map. Accordingly, the current state VSM for the case organisation has been developed, as shown in Figure 2. From Figure 2, it can be found that the VA time for the cell is just 1,476 seconds, while the production lead time is about 12.53 days, or 360,864 seconds. The process ratio is found to be just 0.0041, which clearly reveals that the manufacturing process involves a lot of NVA activities. The next step is to compute the takt time. Currently, the demand is only 40 windows, or 160 squares, per day. The plant works a single shift of 8 hours, which does not include the lunch break of 30 minutes and tea breaks of 15 minutes. Therefore, the available time is 8 × 60 × 60 = 28,800 seconds, and the takt time for the current state is 28,800/160 = 180 seconds/square. From Figure 2, it can be found that the stages of profile cutting, processing (i.e. drainage, V-groove, etc.), reinforcement assembly and fusion welding have cycle times less than the takt time, while the times taken by the remaining stages are greater than the takt time. This is one of the reasons for the excess inventory stored on the shop floor.
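This takt-time arithmetic and the cycle-time comparison can be sketched in a few lines of Python. The stage names and cycle times below are read from the current state VSM (Figure 2); the script itself is an illustrative sketch, not part of the original study:

```python
# Takt time for the current state: one 8-hour shift, demand of
# 40 windows/day = 160 squares/day.
available_time_s = 8 * 60 * 60            # 28,800 seconds per shift
daily_demand_squares = 160
takt_s = available_time_s / daily_demand_squares   # 180 s/square

# Stage cycle times in seconds, as reported in Figure 2.
cycle_times = {
    "Profile cutting": 85,
    "Processing": 152,
    "Reinforcement assembly": 58,
    "Fusion welding": 150,
    "Assembly": 320,
    "Bead cutting": 296,
    "Glazing": 205,
    "Packing": 210,
}

# Stages whose cycle time exceeds takt cannot keep pace with demand,
# so inventory accumulates in front of them.
over_takt = [stage for stage, ct in cycle_times.items() if ct > takt_s]
```

Running this flags assembly, bead cutting, glazing and packing as the stages running slower than takt, matching the observation in the text.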

LMS using value stream mapping

[Figure 2. Current state VSM for the doors and windows fabrication line (Source: Sridhar, 2007). Customer demand: 40 windows/day (160 squares/day); monthly forecasts flow to production control; profiles are delivered weekly; urgent orders are expedited through phone; finished goods are dispatched at the end of the shift. Stage data (available time AT = 28,800 s throughout): profile cutting - 3 operators, CT 85 s, CO 30 s; processing - 2 operators, CT 152 s, CO 60 s; reinforcement assembly - 2 operators, CT 58 s, CO 0; fusion welding - 3 operators, CT 150 s, CO 600 s; assembly - 5 operators, CT 320 s, CO 180 s; bead cutting - 4 operators, CT 296 s, CO 30 s; glazing - 2 operators, CT 205 s, CO 0; packing - 3 operators, CT 210 s, CO 0. Inventory held ahead of the eight stages: 1,120 squares (7 days), 150 (0.94), 85 (0.53), 120 (0.75), 250 (1.56), 125 (0.78), 100 (0.63) and 55 squares (0.34 days). Value added time: 1,476 sec. Production lead time: 12.53 days. Takt time: 180 sec. Process ratio: 0.0041 or 0.41%.]


However, if the future demand is considered, which is about 60 windows per day or 240 squares, the mismatch between the cycle times of the different processing stages and the takt time is very high. The takt time according to the future demand is (8 × 60 × 60)/240 = 120 seconds, in which case only the first stage can meet the future customer demand, as it has the lowest cycle time of 85 seconds. Hence, to reduce the problems revealed by the current state VSM, the team was contemplating implementing the following elements of LM:
- 5S for organising the work place;
- kaizens to simplify the process by combining/eliminating/simplifying operations;
- line balancing for achieving continuous flow processing;
- layout change to reduce people movement and unnecessary transportation of materials;
- establishing supermarkets at various places in the manufacturing line to reduce inventory; and
- working towards mixed production at the pacemaker assembly.
They also developed a future state VSM to visualise how the organisation would look after eliminating the wastes by applying the LM elements mentioned above. Figure 3 shows the future state VSM for the doors and windows fabrication line. From this figure, it can be found that the team estimated that the total inventory can be reduced to just 1.45 days of stock. Similarly, they also foresaw a reduction in processing times through process improvement techniques. Based on their estimates, they predicted that the process ratio can be increased to 0.018 from 0.004.
4.1.1 Shortcomings of VSM. Thus, by drawing the VSM, the practitioners were able to:
- visualise and clearly see the entire flow;
- identify the waste in the value stream;
- establish the linkage between the information flow and the material flow; and
- understand how the organisation would look in the future, if all the improvement activities were implemented properly and the identified wastes were eliminated or removed.
Although VSM has the above-mentioned advantages, it suffers from the following shortcomings:
- VSM as a tool is static in nature and can capture only a snapshot view of the shop floor on any particular day. For instance, on a given day production might be running smoothly without any problems, while on another day there might be various delays due to breakdowns of machines, late delivery by key vendors, quality problems, etc. In these circumstances, the VSM tends to vary according to the situation that prevails in the organisation.
- The future state map is drawn on the assumption that all the issues in the problematic areas will be completely resolved. However, in practice, the entire problem may not be completely resolved.

[Figure 3. Future state VSM for the doors and windows fabrication line (Source: Sridhar, 2007). Customer demand: 60 windows/day (240 squares/day); 6-month, monthly and weekly forecasts flow to production control; urgent orders are expedited through phone. Planned improvements annotated on the map: line balancing; combining profile cutting with processing; combining reinforcement assembly with welding; 5S and layout change; a supermarket for inventory reduction; process improvements at bead cutting; and kaizen activities to improve packing. Stage data (available time AT = 28,800 s throughout): profile cutting and processing - 3 operators, CT 120 s, CO 60 s; reinforcement assembly and welding - 4 operators, CT 146 s, CO 180 s; assembly - 3 operators, CT 147 s, CO 20 s; bead cutting - 3 operators, CT 132 s, CO 30 s; glazing - 2 operators, CT 110 s, CO 0; packing - 3 operators, CT 115 s, CO 0. Inventory: a supermarket of 240 squares (1 day) ahead of the line, 30 squares (0.13 days) at the second stage and 20 squares (0.083 days) at each of the remaining four stages. Value added time: 740 sec. Production lead time: 1.4 days. Takt time: 120 sec. Process ratio: 0.018 or 1.8%.]

- Similarly, the reduction in NVA, the increase in the process ratio and the benefits assumed to be obtained after carrying out the possible improvements are based on estimates. In practice, similar benefits may not be achieved.
- Drawing VSMs by hand, displaying them and making changes to them is a cumbersome process and takes a lot of time.

Similarly, other researchers have also identified the shortcomings of VSM. Lian and Van Landeghem (2002) commented that:
- VSM is composed by physically walking along the flow and recording what happens on the floor. Hence, the level of detail and the number of different versions that can be handled are very limited.
- In real-world situations, many companies are of a high-variety, low-volume type, which may result in composing many value streams for many industrial parts and products, which further adds a level of complication (and variability).
Finally, they noted that:
[. . .] revealing VSM as a map may hamper many people from seeing how it translates into reality. So, the VSM risks ending up as a nice poster, without much further use.

Similarly, McDonald et al. (2002) cautioned that VSM may not serve the purpose when it is used to map a production line that produces different product families having different processing times, different setup times for each processing step, and a different number of shifts. Hence, to overcome these shortcomings, researchers have suggested the use of simulation models in conjunction with VSM, as simulation is an effective tool for modelling both the current and the future state of the case organisation.

5. Development of simulation models for the design of LMS
It should be remembered that this simulation study is not meant for optimisation purposes. Rather, it is to provide the managers of the case organisation with a realistic perspective of how the organisation will look after being transformed through the LM elements, and of how the implementation of these LM elements will affect the performance measures of the organisation. The simulation models were developed using the Queuing Event Simulation Tool (QUEST), a simulation software package which can emulate a complete three-dimensional digital factory environment. It is possible to experiment with parameters such as facility layout, resource allocation, kaizen practices and alternative scheduling scenarios, which can help in quantifying the impact of decisions on production throughput and cost. The most commonly needed behaviour logic can be selected from comprehensive, parameter-driven logic menus. For handling unique problems, QUEST also provides a robust and flexible simulation language with distributed processing and access to all system variables. This high-level, structured language allows users to define custom behaviours and gain unlimited control over the simulation (www.delmia.com).

5.1 Simulation data for the current state map
The data collected during the development of the current state VSM were used for developing the simulation model.
Apart from this, additional data such as setup times, the number of operators, machine uptime, the space available, machine arrangements, etc.

were also collected. For instance, Table IV shows the details of the manpower requirement and the operations carried out in each stage. From Table IV, it is found that 24 people (12 technicians and 12 casuals) are required to meet the demand of 40 windows per day. The total available shop floor area of the Hyderabad plant is about 1,791 square metres, of which about 1,400 square metres are used by the manufacturing line.
5.1.1 Assumptions. To ensure that the model replicates the actual production happening in the organisation, the following assumptions were made:
- The inventory is taken entirely to be the initial inventory. Before the start of the simulation, this inventory is built up before the workstations. This is because a VSM captures a snapshot picture of the shop floor at a given point in time; hence, the simulation too starts with the current situation as obtained from the current state VSM.
- The setup time in seconds is included as per the current state VSM and it has been assumed that setups are performed at the start of production. The setup involves fixing the tool, cleaning, and ensuring that the materials required are ready for production.
Table IV. Details of manpower requirement and operations carried out in each stage (T = technicians; C = casual labourers; Source: Sridhar, 2007)

1. Profile cutting (T: 1, C: 2). Study the drawing and select the profile as per drawing; collect the profiles from the rack; set the machine and cut as per the length; write the location code on all profile cut pieces.
2. Processing (T: 2). Study the drawing, collect the profiles from the trolley and make the necessary holes to fit the hardware elements.
3. Reinforcement assembly (T: 1, C: 1). Insert the cut galvanised iron (GI) reinforcement into the PVC profile as per drawing and fix the screws; drill the fisher holes in the outer frame pieces.
4. Welding (T: 2, C: 1). Collect the profiles, clean at the corners and weld as per drawing.
5. Assembly (T: 3, C: 2). Clean all welding flashes and assemble the weather seal gasket; select the hardware such as handle, lock, etc. as per the specifications and drawing; assemble as per the drawing.
6. Bead cutting (T: 1, C: 3). Drill the fisher holes which are not possible in the reinforcement assembly stage; assemble the fir tree gasket, measure the bead length and cut in the machine.
7. Glazing (T: 1, C: 1). Collect the window panels and glasses as per the location codes available on the windows and glasses; then assemble the bead to the window.
8. Inspection and packing (T: 1, C: 2). Inspect the windows for sizes and visual defects in couplers, hardware, etc.; paste all six varieties of stickers on the windows; pack the window by keeping it on a bubble roll sheet on the floor.
Total: 12 technicians and 12 casuals.

- The labour for each stage is allocated as per Table IV.
- Each day consists of one shift of 8 production hours; the two 15-minute tea breaks and the 30-minute lunch break are separate and do not interfere with the production hours.
- The source is an active source, and the inter-arrival time of each part is made equal to the cycle time of the first machine in each line, because the organisation follows a push system of operation on the shop floor.
- If an operation has two similar machines performing the same operation, then the machining time of all such similar machines is assumed to be a constant.

5.2 Simulation model for the current state map
A snapshot of the simulation model for the current state VSM is shown in Figure 4. In this model, the templates of the different machines were identified from the software library and placed in the simulation world. The existing layout of the factory was replicated by placing the machines at the exact distances, and labour was allocated to each machine as per Table IV. For each machine, details such as cycle time and setup time were entered in its associated data boxes. The material flow logic was established based on the sequence of operations for making doors and windows shown in Figure 1. Before each workstation, input and output buffers in the form of wooden pallets were placed. The raw materials, i.e. profile pieces, glasses, etc., are supplied as per the sizes in the drawing. Ten different sources were placed at one corner of the layout, from which the necessary raw materials are supplied. With these actual data, the current state VSM was simulated.
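QUEST is a commercial package, so the model itself cannot be listed here. As a rough, self-contained illustration of the same push logic, the sketch below (plain Python; the names and the simplifications are ours, not QUEST's) runs the eight current-state stages as a deterministic flow shop with one machine per stage, no setups, breaks or initial inventory, and parts released at the first machine's cycle time, as per the assumptions above:

```python
# Minimal deterministic sketch of the push line (NOT the QUEST model):
# one machine per stage, FIFO flow, cycle times from the current state VSM.
CYCLE_TIMES = {                      # seconds per part, current state VSM
    "profile cutting": 85,
    "processing": 152,
    "reinforcement assembly": 58,
    "fusion welding": 150,
    "assembly": 320,
    "bead cutting": 296,
    "glazing": 205,
    "packing": 210,
}
SHIFT = 8 * 60 * 60                  # 28,800 s of production time

def simulate(release_interval, shift=SHIFT):
    """Classic flow-shop recurrence: a part starts a stage when both the
    part and the machine are free. Returns parts finished within one shift."""
    cts = list(CYCLE_TIMES.values())
    machine_free = [0.0] * len(cts)
    finished = 0
    t_release = 0.0
    while True:
        part_ready = t_release
        for s, ct in enumerate(cts):
            start = max(part_ready, machine_free[s])
            part_ready = machine_free[s] = start + ct
        if part_ready > shift:
            break
        finished += 1
        t_release += release_interval
    return finished

# Release a part every 85 s (the first machine's cycle time, as assumed above).
print(simulate(85))   # -> 86
```

With an 85 s release interval, the 320 s assembly stage governs throughput at roughly one part every 320 s, so output falls well short of the rate implied by the takt time; this is exactly the imbalance the future state map attacks.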

Figure 4. Snapshot of the simulation model for the current state VSM

5.3 Simulation data for the future state map
As mentioned earlier, the current state VSM revealed different types of wastes. Hence, to reduce these wastes and to meet the increase in demand, the team identified the following LM elements for improvement.
Layout change. Layout improvement was planned to reduce wastes such as unnecessary transportation and motion. The team estimated that the total area utilisation can be reduced to 12 × 60 = 720 square metres, compared with about 1,400 square metres in the existing layout.
Line balancing. From the current state VSM shown in Figure 2, it can be found that the operations were not balanced. For instance, the profile cutting operation takes only 85 seconds, while the fusion welding process takes 150 seconds and bead cutting consumes 296 seconds. Hence, the focus is to balance the line by ensuring that the processing time in each stage is equally distributed and more or less equal to the takt time. To accomplish this, the production engineers proposed combining different stages of manufacturing. For instance, they proposed combining the profile cutting machine, which had one technician and two casuals, with the processing machines, which had two technicians (in total, three technicians and two casuals). They identified that a manpower of two technicians and two casuals is sufficient for the merged stages, as both the profile cutting machine and processing were overproducing. However, to integrate these two stages, the layout has to be changed, which will naturally eliminate the inventory previously held between these workstations. In a similar manner, they proposed combining the reinforcement assembly and welding operations. Initially, they had one technician and one casual in reinforcement assembly and two technicians and one casual at the welding workstation (a total of three technicians and two casuals).
Again, by proposing a layout change to place these two stages nearer and fine-tuning the process through some kaizens (described below), they estimated that one technician and two casuals are sufficient to work both stations. Naturally, the inventory between these two stages will become zero. Similarly, they carried out various process improvements to balance the line and reduce the cycle time. Table V shows the revised manpower requirement for the improved layout.
Kaizens. The team also identified kaizen activities for other stages such as assembly, bead cutting and glazing to eliminate NVA activities, which will result in a reduction in process time apart from improving safety. A sample of the proposed kaizen activities is as follows:
- Use of a double bead block in the bead cutting machine. Earlier, a mono-block (work holding device) was used to hold the PVC and perform the bead cutting operation. The team planned to redesign the work holding device so that it can hold two PVCs of the same size at the same time, allowing the bead cutting to happen simultaneously in both PVCs, which can lead to productivity improvement.
- Packing area improvement. From the current state map, it can be found that packing and dispatching were taking more time. Hence, the team studied the process in detail and came up with a number of process improvements. Earlier, the bubble sheet used to pack the window/door was unrolled on the floor and cut manually by the operator according to the size of the window/door. The operator had to sit and bend to perform the cutting, which was unproductive due to the increased strain and fatigue for the worker. The production engineers suggested the use of a trolley, in which the bubble sheet roll can be mounted at the top


Table V. Revised manpower requirement for the improved layout (Source: Sridhar, 2007)

1. Profile cutting and processing (combined). Modification: the operational responsibility for both stages was combined (two technicians and two casuals for the merged stage), which also removes the inventory previously held between them. Operations involved: study the drawing and select the profile as per drawing; collect the profiles from the rack; set the machine and cut as per the length; write the location code on all profile cut pieces; collect the profiles from the trolley and make the necessary holes to fit the hardware elements.
2. Reinforcement assembly and welding (combined). Modification: the operations of reinforcement fixing and welding were combined, with one technician and two casuals, which also removes the inventory previously held between them. Operations involved: insert the cut GI reinforcement into the PVC profile as per drawing and fix the screws; drill the fisher holes in the outer frame pieces; collect the profiles, clean at the corners and weld as per drawing.
3. Assembly. Modification: can assemble 60 windows per shift on average, with the additional operations of sticker pasting and coupler attachment. Operations involved: clean all welding flashes and assemble the weather seal gasket; select the hardware such as handle, lock, etc. as per the specifications and drawing; assemble as per the drawing.
4. Bead cutting. Operations involved: drill the fisher holes which are not possible in the reinforcement assembly stage; assemble the fir tree gasket, measure the bead length and cut in the machine.
5. Glazing. Operations involved: collect the window panels and glasses as per the location codes available on the windows and glasses; then assemble the bead to the window.
6. Inspection and packing. Modification: also takes responsibility for the glazing operation and supports the other members as and when required. Operations involved: inspect the windows for sizes and visual defects in couplers, hardware, etc.; paste all six varieties of stickers on the windows; pack the window by keeping it on a bubble roll sheet on the packing fixture.
Total manpower: 10 technicians and 7 casuals.

on a roller, so that it can be easily unrolled by pulling it and cut without bending. The next step is to pack the windows using these cut bubble sheets. Previously, packing was performed by placing the window on the floor and covering it with the bubble sheet. Since this was taking too much time, the engineers were interested in developing a rotary packing table, on which the window can be placed and rotated according to the orientation required for packing. The bubble sheet is rolled around it and an adhesive tape is affixed over it. Thus, they believed that the bending of the operator can be completely avoided, thereby eliminating the productivity loss due to fatigue. These improvements can lead to a drastic reduction in cycle time, and the engineers attempted to reduce the cycle times of stages such as packing, bead cutting and assembly to half their existing values.

5.4 Simulation model for the future state map
Considering these improvements, the simulation model of the current state map shown in Figure 4 was modified to develop the simulation model for the future state map shown in Figure 5. A cursory look at Figure 5 reveals that various stages were combined and the layout was changed to accomplish this. Apart from this, the parameters associated with the simulation, such as initial inventory and cycle time, were modified for each stage as per Table V. Similarly, the number of workers, the distance travelled by a window, etc. were also reduced. However, the assumptions for the future state simulation model are the same as those of the current state simulation model, and the method of building the model is also the same.
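The stage-versus-takt comparison that drove the merging and rebalancing described above can be expressed compactly (illustrative Python; the helper name is ours, the cycle times are those of Figures 2 and 3):

```python
# Flag the stages whose cycle time exceeds takt, i.e. cannot keep pace with demand.
def over_takt(cycle_times, takt):
    return {stage: ct for stage, ct in cycle_times.items() if ct > takt}

current = {"profile cutting": 85, "processing": 152, "reinforcement assembly": 58,
           "fusion welding": 150, "assembly": 320, "bead cutting": 296,
           "glazing": 205, "packing": 210}                      # takt: 180 s
future = {"profile cutting and processing": 120,
          "reinforcement assembly and welding": 146, "assembly": 147,
          "bead cutting": 132, "glazing": 110, "packing": 115}  # takt: 120 s

print(over_takt(current, 180))  # assembly, bead cutting, glazing, packing
print(over_takt(future, 120))   # merged stages still slightly above future takt
```

Note that, per the figures, a few future-state cycle times still sit a little above the 120 s future takt, which is consistent with the paper's aim of making them "more or less equal" to the takt rather than strictly below it.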


Figure 5. Snapshot of the simulation model for the future state VSM


6. Results and discussion
The models for both the current state and the future state were simulated for 30 days to represent a month's production. To compare the two models, various performance measures identified in our earlier study (Anand and Kodali, 2008) were used to quantify the degree of improvement. Table VI shows the comparison of performance measures of the case organisation for the current state and future state VSMs. In Table VI, the number of units produced is measured in number of squares. Generally, in any industry, the production rate is measured in units/hour. As per this convention, the production rate for the case organisation should be measured as the number of windows/doors produced per hour or per 8-hour shift. However, in this case, the sizes of the windows/doors differ considerably, so this unit of measurement may not adequately reflect the daily production. For instance, if a window is larger, the complexity associated with it will affect manufacturing and handling, naturally leading to fewer windows/doors being produced on a particular day. Hence, to overcome this problem and to establish uniformity in computing the total production, the case organisation has a practice of counting the total production based on the number of squares in a window/door: if the window/door exceeds 1.5 metres in size, it is counted as two squares. Utilising this convention, the total production and productivity were calculated based on the number of squares produced in a shift of 8 hours. In addition, Table VI shows all inventories expressed in days. This is because, to calculate the total production lead time in a VSM, the inventory is considered as the number of days a part waits before it gets processed. Hence, based on the daily demand, the inventory is converted into a number of days by dividing the available inventory by the per-day requirement.
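This conversion amounts to one line of arithmetic; a worked sketch (illustrative Python, not from the paper's toolset):

```python
# Inventory in days = squares on hand / squares required per day.
CURRENT_DEMAND = 160    # squares/day (40 windows) in the current state
FUTURE_DEMAND = 240     # squares/day (60 windows) in the future state

def inventory_days(squares, demand):
    return squares / demand

print(inventory_days(1120, CURRENT_DEMAND))           # profile cutting -> 7.0 days
print(round(inventory_days(85, CURRENT_DEMAND), 2))   # reinforcement assembly -> 0.53
print(round(inventory_days(125, CURRENT_DEMAND), 2))  # bead cutting -> 0.78
print(round(inventory_days(240, FUTURE_DEMAND), 2))   # future supermarket -> 1.0 day
```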
For instance, in the current state VSM (Figure 2), profile cutting has an inventory of about seven days. Since the per-day requirement is 40 windows or 160 squares, this is equal to 7 × 160 = 1,120 squares. Other processes such as reinforcement assembly and bead cutting have inventories of 85 and 125 squares, respectively. Hence, dividing the inventory at the various stages by the per-day demand gives 85/160 = 0.53 days for reinforcement assembly and 125/160 = 0.78 days for bead cutting. In other words, the reinforcement assembly and bead cutting stages hold about 0.53 and 0.78 days of stock, respectively, which are yet to be processed. In a similar manner, the stock details for the other stages were calculated. In the case of the future state VSM, the demand per day should be taken as 240 squares instead of 160 to calculate the inventory details. Another important aspect in VSM is the calculation of the process ratio. As explained earlier, the process ratio is defined as the ratio of VA time to total production lead time. For instance, from the future state VSM (Figure 3), the sum of the VA time of all stages is found to be 740 seconds, while the total production lead time, which includes the waiting time of parts before the machines in the form of inventory, is found to be 1.45 days or 41,760 seconds. Hence, the process ratio of the future state VSM is 740/41,760 = 0.018. In a similar way, the process ratio for the current state VSM is calculated. Apart from this, the results obtained from the simulation models revealed that the case organisation can achieve the following benefits:
- The distance a part travels from raw material to finished product (windows/doors) is reduced. With the existing layout (Figure 4), the total distance travelled by a part is found to be 62 metres.

Table VI. Comparison of performance measures of the case organisation for the current state and future state VSMs (values given as current state / future state)

1. Demand per day: 40 windows (160 squares) / 60 windows (240 squares).
2. Initial inventory of squares at the beginning of the simulation, in squares (days of stock):
- profile cutting: 1,120 (7) / 240 (1);
- drainage, V-grooving and other profile machining operations: 150 (0.94) / merged with profile cutting;
- reinforcement assembly: 85 (0.53) / 30 (0.125);
- fusion welding: 120 (0.75) / merged with reinforcement assembly;
- assembly: 250 (1.56) / 20 (0.083);
- bead cutting: 125 (0.78) / 20 (0.083);
- glazing: 100 (0.63) / 20 (0.083);
- packing and dispatch: 55 (0.34) / 20 (0.083).
3. VA time (in minutes): 63.36 / 42.72.
4. Production lead time (in days): 37.87 / 12.68.
5. Process ratio (%): 0.17 / 0.39.
6. Takt time (in seconds): 180 / 120.
7. Cycle time (in seconds):
- profile cutting: 85 / 120 (merged with processing);
- drainage, V-grooving and other profile machining operations: 152 / merged with profile cutting;
- reinforcement assembly: 58 / 146 (merged with welding);
- fusion welding: 150 / merged with reinforcement assembly;
- assembly: 320 / 147;
- bead cutting: 296 / 132;
- glazing: 205 / 110;
- packing and dispatch: 210 / 115.
8. Total WIP inventory after the 30-day simulation: 4,992 squares (1,248 windows) / 1,196 squares (299 windows).
9. Parts produced after the 30-day simulation: 7,956 squares (1,989 windows) / 10,535 squares (2,561 windows).
10. Distance travelled by a single window (in metres): 62.5 / 54.
11. Manpower used: 24 (12 technicians and 12 casuals) / 17 (ten technicians and seven casuals).
12. Floor space used (in square metres): 1,587 / 720.

After the revised layout, the travel distance from profile storage to dispatch is found to be 54 metres, a reduction of about 8 metres per window.
- The inventory level at the various stages can be reduced drastically, by 76 per cent on average. For instance, the WIP of windows after 30 days of simulation was found to be 1,248, which can be reduced to just 299 windows in the future state.
- The introduction of kaizen and line balancing results in a reduction in cycle time at various stages of the manufacturing line. Hence, the total number of windows that can be produced may increase by 28.5 per cent.
- If further process improvements are undertaken, the entire shop can become more productive and can meet a future demand of 85 windows per day with the existing capacity itself.
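The quoted improvements follow directly from the Table VI window counts (a quick cross-check in Python; the variable names are ours):

```python
# WIP and throughput changes between the 30-day current- and future-state runs.
wip_current, wip_future = 1248, 299        # WIP in windows after 30 simulated days
made_current, made_future = 1989, 2561     # windows produced in 30 simulated days

wip_reduction = 1 - wip_future / wip_current      # ~0.76, the "76 per cent" above
throughput_gain = made_future / made_current - 1  # ~0.29, close to the quoted 28.5%

print(f"WIP cut by {wip_reduction:.0%}, output up by {throughput_gain:.1%}")
```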

To obtain these benefits, the engineers planned to implement the following elements: VSM, process simplification, line balancing, layout change, job enlargement, floor space reduction, etc. However, since the case organisation has only just started its LM implementation, other LM elements such as the kanban system, pull system, mixed model manufacturing/scheduling, load levelling and other supplier-related elements are not yet implemented. These may be taken up in the future.

6.1 Validation
The simulated values were verified by checking them with company personnel, and it was found that most of the simulated values match reality. For instance, due to the changed layout, the distance a part travels from raw material to finished product (windows/doors) was reduced. According to the simulation model, the total distance travelled by a part from profile storage to dispatch is 62 metres in the current state layout; in reality, it was around 66 metres on average. The travel distance after revising the layout is found to be 51 metres, a reduction of about 15 metres per window. In addition to the elements that were planned, the case organisation also implemented the following additional LM elements, as advised by its external consultants.
5S. This refers to five different stages of housekeeping. To give the workers confidence in the LM principles, the engineers actually started their lean journey with 5S implementation in various areas. They trained the shop floor employees on the 5S concepts, and the employees were made to identify unnecessary objects, which were then removed from the workplaces. Further, the employees were trained to keep the work environment clean and to clean their workplace before and after the shift. Similarly, they were encouraged to keep tools, fixtures and other accessories in clearly marked positions, and those who maintained them properly were rewarded every month, based on a 5S audit.
This improved the motivation of the employees, and numerous 5S activities were carried out at different stages of manufacturing, which ensured that the work environment remained clean. Similarly, all the production stages were properly identified based on the process carried out, the gangways were marked and the tools were kept in their proper positions. Naturally, these activities resulted in productivity improvement (Sridhar, 2007). Plate 1 shows a sample 5S implementation in the profile storage area, where the PVC squares are stored according to their sizes.
Suggestion schemes. The suggestion scheme, introduced as part of operator involvement, resulted in many improvement ideas, one of which


Source: Sridhar (2007)

Plate 1. A sample 5S implementation in the storage of PVC squares in the profile storage area

was provided by an operator from the reinforcement assembly stage. In this stage, one of the activities is to insert a wool pile into the PVC block. Previously, the wool pile, which comes in the form of a roll, was kept on the table, unrolled, cut according to the length of the PVC and then inserted, which took a significant amount of time. The operator suggested mounting the pile roll on a shaft on the table, which facilitated easy unrolling: the wool pile is inserted directly into the PVC block for the desired length and then cut. This simple idea eliminated the unnecessary measurement activity before cutting and thereby reduced the cycle time (Sridhar, 2007). Although various LM elements, as mentioned above, were implemented, the case organisation could not drastically reduce the inventory as predicted in the future state map. It could reduce inventory only to half of the current state level, as the supervisor and his team of employees were hesitant to reduce it further. Since the process variability and supplier variability had not yet been improved, the operations manager and supervisor still preferred holding some WIP. Nonetheless, the case organisation achieved a significant improvement, and with continuous effort from the LM team the inventory can be slowly reduced by implementing additional LM elements.

7. Conclusions
This paper started with the claim that one of the reasons for organisations' failure in their LM implementation efforts is that managers do not fully understand how an organisation will look after it is transformed by the principles of LM. Even though VSM can resolve this issue to some extent, the literature review revealed that it suffers from various shortcomings. Researchers have commented that simulation can be utilised along with VSM. However, most of the simulation studies in the literature focused on studying LM elements such as kanbans

JMTM 22,4

468

(finding the optimal number of kanbans), push and pull systems (comparison), mixed model assembly (sequencing and scheduling), etc. Other LM elements such as multi-machine activity (job enlargement), cycle time reduction and process improvements have not been given adequate importance. This paper attempted to overcome these issues by using simulation in conjunction with VSM to model the current state and future state of a door and window manufacturing organisation following a job shop production system. A literature review of LM implementation case studies also revealed that no case study exists describing the implementation of LM in such an organisation. Thus, utilising the simulation models, the impact of implementing some of the basic LM tools such as line balancing, multi-machine activity and 5S on the performance of the organisation was analysed by comparing the performance measures for the current and future state VSMs. It was found that there was a significant improvement in productivity, along with a significant reduction in inventory, cycle time, floor space, manpower, etc. These simulation models thus proved effective in letting the managers and engineers actually see and feel how their manufacturing system will behave in the future, before the actual design of the LMS. It should be noted that the case organisation has only just started its LM implementation efforts; hence, only a few LM elements such as line balancing, job enlargement, layout change, process improvements and 5S have been implemented, while advanced LM elements such as kanban, pull systems and load levelling are not yet implemented. However, it can be concluded that the organisation is on the right track of LM implementation, and if its managers and engineers implement the remaining LM elements properly, the case organisation ABCL is bound to achieve a superior competitive advantage over its competitors in the near future.

LMS using value stream mapping

469






About the authors
Anand Gurumurthy is an Assistant Professor in the Mechanical Engineering Group of Birla Institute of Technology and Science (BITS), Pilani, India. He completed his PhD in the area of LM and his ME degree in Manufacturing Systems Engineering at BITS, Pilani, India, and received his BE degree in Mechanical Engineering from the University of Madras, India. He has around seven years of teaching/research experience and two years of industrial experience as a Production Engineer with one of India's leading industrial houses, the TVS Group. He has published around 25 papers in peer-reviewed national and international journals and presented many papers

in various national/international conferences. His current research interests include LM, operations management, maintenance management and world-class manufacturing (WCM).
Rambabu Kodali is a Professor in the Mechanical Engineering Group of BITS, Pilani, India. He has also been the Group Leader of both the Mechanical Engineering Group and the Engineering Technology Group, since 1994 and 2004, respectively. To date, he has around 25 years of teaching/research experience and 15 years of administrative experience as a Group Leader. He has published around 200 papers in various national and international journals and has been an invited speaker at various national/international conferences. His research areas are: manufacturing excellence/WCM, LM systems, supply chain management, computer-integrated manufacturing systems (CIMS), flexible manufacturing systems (FMS), world-class maintenance systems and innovative product design and development. He has completed several research projects in CIMS, FMS and WCM. He has developed the curriculum of 16 integrated first-degree, higher-degree, work-integrated learning and collaborative learning programmes, apart from establishing the FMS Laboratory at BITS, Pilani. Rambabu Kodali is the corresponding author and can be contacted at: proframbabukodali@gmail.com



Education + Training
Emerald Article: Enhancing students' employability through business simulation Alex Avramenko

Article information:
To cite this document: Alex Avramenko, (2012), "Enhancing students' employability through business simulation", Education + Training, Vol. 54 Iss: 5, pp. 355-367. Permanent link to this document: http://dx.doi.org/10.1108/00400911211244669




Enhancing students' employability through business simulation


Alex Avramenko
Salford Business School, University of Salford, Salford, UK
Abstract
Purpose - The purpose of this paper is to introduce an approach to business simulation with less dependence on business simulation software, to provide innovative work experience within a programme of study and to boost students' confidence and employability.
Design/methodology/approach - The paper is based on an analysis of the existing business simulation literature, which is synthesised with contemporary pedagogic trends and the outputs of the author's longitudinal research on improving the effectiveness of business simulation as a teaching method.
Findings - The use of business simulation as a pedagogic tool can be considerably extended beyond built-in functionality to match the needs of various business-related disciplines. Learning from their own mistakes enabled students to appreciate the gap between theory and its application.
Research limitations/implications - Business simulation can provide an innovative provision of work experience for students if its design utilises continuous formative feedback and reflective practice, amongst other pedagogical elements, rather than relying on sophisticated business simulation software.
Practical implications - This paper offers a blueprint for the provision of business simulation exercises in higher education as a means of equipping participants with a work-like experience.
Originality/value - The article presents a fresh view on the use of business simulation in the educational process, while contributing to the long-standing debate on bridging the gap between theory and practice.
Keywords Business simulation, Employability, Higher education, Employment, Students
Paper type Research paper

Students' employability

355
Received 21 March 2011 Revised 4 September 2011 Accepted 10 October 2011

Introduction
The educational process in business schools is continually criticised for failing to equip students with employability skills (Bennis and O'Toole, 2005; Neubaum et al., 2009; Pfeffer and Fong, 2004). This failure can be subdivided into two broad categories: first, the irrelevance of the management theory being taught (Chia and Holt, 2008; Ghoshal, 2005; Starkey et al., 2004); and second, the outdated processes used to teach the students (Bennis and O'Toole, 2005; Mintzberg, 2004; Pfeffer and Fong, 2004). To address both weaknesses, this paper introduces a bespoke approach to business simulation aimed at bridging the gap between theory and practice by facilitating students' sense-making activities, which in turn has a positive effect on students' confidence and employability. As student employability is at the top of business schools' agendas (Bhanugopan and Fish, 2009; Hay, 2008; Mihail and Elefterie, 2006), especially in times of recession (Cox and King, 2006; D'Cruz and Soberman, 2009; Rae, 2008), bridging the gap between theory and practice is more critical than ever. One way to close this gap is to give
The author would like to thank Dr Aleksej Heinze for his comments and inspirational contribution to the earlier versions of this paper.

Education + Training, Vol. 54 No. 5, 2012, pp. 355-367. © Emerald Group Publishing Limited, 0040-0912. DOI 10.1108/00400911211244669

ET 54,5

356

students an opportunity to apply learnt theoretical frameworks (Jones, 2006; Pfeffer and Fong, 2002), enabling the reflective evaluation of the theory (Moon, 2004; Harvey, 2001). Learning in such a way is even more effective than learning within the real work environment or on the job, as the latter, while providing direct experience, is limited by the routine nature of tasks and the complexity of the work environment. According to Tomlinson (2009), graduates at work feel over-challenged physically and emotionally by a sudden increase in responsibility and by the constant demands of most working environments. Senior colleagues, who are best placed to provide constructive feedback, are often too busy to do so, as illustrated by studies of physicians (Elsey and Eskandari, 1999), salespersons (Makoto and Takashi, 2002) and military and police officers (Patterson, 2002), which compromises the needs of learners. In such circumstances, a viable alternative to learning within the real work environment is the use of business simulation in the educational process. Business simulation is not a new approach in education (Chin et al., 2009; Faria et al., 2009). It is often commented on positively (Gopinath and Sawyer, 1999; Musselwhite, 2006), although with some criticism about its flexibility (Herz and Merz, 1998; Morgan, 2009), over-simplicity (Mintzberg, 2004) or effectiveness in general (Anderson and Lawton, 2009; Chang et al., 2003; Faria and Wellington, 2004). The key difference between the approach commented on here and other similar uses of business simulation (Lean et al., 2006) is that our intent is to use business simulation software to set the scene for decision making rather than to gain discipline-specific knowledge. Consequently, there is a high degree of flexibility in adopting particular business simulation software, if any is used.
Drivers for business simulation
Business simulation is most often associated with business simulation games, which have grown due to advances in computer technology (Summers, 2004). However, board and war games were widely used by many ancient civilisations over 5,000 years ago, including the Chinese (Bell, 1979; Faria et al., 2009; Wolfe and Crookall, 1998), Persians (Bell, 1979; Wilkins, 2002), Indians (Bell, 1979; Wilkins, 2002) and others, and represent an ancient, respectable approach to learning. The main reported benefits of computer-based business simulations include a risk-free environment (Fripp, 1997), a simplified real world (Doyle and Brown, 2000), learning by comparison (Musselwhite, 2006; Whiteley and Faria, 1989) and the acquisition of time management skills (Doyle and Brown, 2000). A literature review of the benefits of business simulations is summarised in Table I.

Deterrents to business simulation
The evaluative analysis of literature on the use of computer-based business simulations indicates various concerns related to the use of such simulations in educational settings. The main limitations associated with the pedagogic use of computer-based business simulations emphasise gaming rather than learning (Doyle and Brown, 2000; Thorngate and Carroll, 1987), and highlight the superficiality of business simulations and, hence, their predisposition to be too demanding on the abstract thinking abilities of learners (Anderson and Lawton, 2009; Wolfe, 2004). There are also claims that the computer-based or gaming approach is an inefficient pedagogy for some subjects (Anderson and Lawton, 2009; King and Newman, 2009; Neuhauser, 1976). The analysis of the main limitations indicates that it is the computerised element of simulations that contributes to their low effectiveness, mainly due to the substitution

Table I. Summary of the benefits of business simulations
- Teamworking: helpful in developing teamworking skills (Fripp, 1997; King and Newman, 2009)
- Motivation: stimulates enjoyable learning (Fripp, 1997)
- Risk-free environment: provides an opportunity to experiment with decisions which could be too risky to implement in practice (Fripp, 1997)
- Variety: a useful addition to lectures or case study teaching, another way of engaging learners (Doyle and Brown, 2000; Whiteley and Faria, 1989)
- Experiential learning: ensures quick and detailed feedback to learners to reflect upon their actions (Adobor and Daneshfar, 2006; Fripp, 1997; Musselwhite, 2006)
- Value for money: a cost-effective alternative to real-life practice with relatively high effectiveness compared to alternative methods of teaching (Doyle and Brown, 2000; Musselwhite, 2006)
- Quantitative skills: effective in improving quantitative skills (Whiteley and Faria, 1989)
- Critical thinking: tends to assist in enhancing critical management thinking skills, if facilitated (Doyle and Brown, 2000; Lane, 1995; Martin and McEvoy, 2003; Sun, 1998)
- Simplified real world: focuses learners' attention on a specific element by simplifying the real world (Doyle and Brown, 2000; Low et al., 1994)
- Learning by comparison: allows players to compare their performance against each other and against the industry's real-life data (Musselwhite, 2006; Whiteley and Faria, 1989)
- Negotiation skills: supports augmentation of the potential to negotiate within a team and, where necessary, across teams of players (Doyle and Brown, 2000)
- Time management: provides practice in working towards deadlines (Doyle and Brown, 2000)
- Support for independent learning: helps learners to understand theories and encourages them to think (Sun, 1998)

of multifaceted real-world situations with a complex mathematical model (Marakas, 1999). As a result, group and interpersonal interactions, made up of the intricate human interrelationships so characteristic of work-related situations, are replaced by interactions with a computer screen. Therefore, the view supported here stresses the need to reinforce, if not substitute, the use of computer-based business simulations with ongoing contextual facilitation in order to engage learners' abstract thinking beyond the computer screen's constraints. To enable this and elaborate on its strengths, the following pedagogical framework has been developed.

Business simulation design
The design of our business simulation rests on constructivist pedagogy, allowing an individual to construct knowledge, drawing on the learner's existing knowledge, in an idiosyncratic way while working in a group. Tutors facilitate the classroom environment in which students are encouraged to "determine, challenge, change or add existing beliefs and understanding through engagement in provided tasks" (Richardson, 2003, p. 1626).


The design presented below was developed for a postgraduate management programme, aiming to provide students with a reasonably realistic experience of managing a business entity in a competitive environment. The key intent of the business simulation in the proposed framework (Figure 1) is to assist students in bridging the gap between the practice and theory of management studies, while developing critical thinking and complex reasoning skills. It is supported by a constructive alignment between learning outcomes, learning activities and assessment to encourage deep engagement of students (Biggs, 1999, p. 31). The intended learning outcomes, to facilitate active sense-making, are: to analyse various streams of information; to evaluate how discipline-specific theory can be applied to make use of that information; and to create specific solutions in exploiting opportunities or neutralising threats presented by the business environment. The learning activities comprise an iterative cycle of four activities: analysing the business environment, identifying opportunities and threats, applying theory to deal with the identified threats and opportunities, and analysing the results of the chosen interventions. The assessment has a particular emphasis on ongoing reflective practice in two elements: reflection-in-action and reflection-on-action (Schön, 1983). While reflection-in-action is associated with in-simulation activities and is facilitated by tutors, reflection-on-action takes the form of reflective commentaries (Moon, 2004), mainly during the after-simulation period. The software element of our business simulation was the software package Executive version 8 (April Training Executive, 2010). It was selected because of its extensive reporting functionality, which offers 20 different types of reports.
This functionality supported the iterative learning activities (Figure 1) and significantly helped learners to focus on analysis rather than on an interactive software interface or gaming experience. This "setting the scene" use of simulation software counteracted the common limitations which characterise computer-based business simulations (Table II) and, at the same time, severed the dependence on a particular software package. Altogether, the discussed configuration of our business simulation, comprising greatly diminished reliance on computer software, contextual facilitation and reflective practice, provides a framework for counteracting the reported deterrents to business simulation being a more effective pedagogic tool. Our conceptualisation of new business simulations reaching beyond the use of technology emphasises the importance of individuals building up their knowledge base by testing their own assumptions while applying theory to simulated practice. It assists the learners in
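The iterative cycle of four learning activities described above can be sketched as a simple decision-round loop. The toy market model, function names and numbers below are invented for illustration only; they do not represent the actual model or interface of the Executive software package:

```python
# Hypothetical sketch of one simulation round driving the learning cycle:
# analyse reports -> identify threats/opportunities -> intervene -> review results.
# The market response model and all numbers are invented for illustration.

def run_round(state, decide):
    """One round: the team reads reports, chooses an intervention, and
    receives the resulting reports to reflect on in the next round."""
    reports = {"demand": state["demand"], "price": state["price"]}   # 1. analyse
    intervention = decide(reports)                                   # 2-3. identify and apply theory
    state["price"] += intervention.get("price_change", 0)
    # Toy market response: a price cut lifts demand.
    state["demand"] = max(0, state["demand"] - 4 * intervention.get("price_change", 0))
    state["revenue"] = state["demand"] * state["price"]              # 4. results to analyse
    return state

state = {"demand": 100, "price": 10, "revenue": 1000}
for _ in range(3):  # three rounds of the cycle
    state = run_round(state, lambda reports: {"price_change": -0.5})
print(state)
```

The point of the sketch is structural rather than economic: each round closes the loop from analysis to intervention to observable results, which is where the tutors' facilitation of reflection-in-action attaches.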

Figure 1. Business simulation: the pedagogical framework. (The figure depicts subject-level theory, such as marketing and finance, depending on the particular software package, being applied to practice through the iterative cycle of four learning activities, constructively aligned with the learning outcomes and with assessment through reflection-in-action and reflection-on-action.)
Gaming: winning in a simulation can be attributed to luck and not skill (Thorngate and Carroll, 1987).
Not adequate for theory learning: simulation games are not suitable for gaining theoretical knowledge (Whiteley and Faria, 1989).
Need for combination: lessons learnt in simulation games might not be obvious to all learners; lectures are still needed (Doyle and Brown, 2000).
Need for purpose: without a specific focus or context, a game can easily be perceived as a time-wasting activity (Doyle and Brown, 2000).
Time and resource commitment: intensive teaching requires time and resources to facilitate an appropriate level of learning and learner engagement (Anderson and Lawton, 2009; Faria and Wellington, 2004; Lean et al., 2006).
Simulation model: the model used by simulation software is often quite limited in its application of the relevant theory (Faria and Wellington, 2004; Feinstein et al., 2002).
Too complex: the latest computer-based simulations offer a high number of variables and relationships for consideration (Anderson and Lawton, 2009; Low et al., 1994; Wolfe, 2004; Lean et al., 2006).
Cultural differences: business games do not always allow students of different cultures to learn effectively; there is the possibility of losing face (Chang et al., 2003; Moore, 1998).

Students employability


Table II. Summarised limitations of business simulations

being in charge of their learning and, therefore, aims to contribute to enhancing students' confidence and employability. Employability in this paper refers to the propensity of students to obtain a job (Harvey, 2001, p. 98); the development of skills, knowledge and attitudes augments this ability (Harvey, 2001; Hillage and Pollard, 1998). The contribution of our business simulation method to the development of graduates supports their transition to employment by providing lifelike experience, ensuring the formation of certain business skills and, by that, boosting the students' self-confidence.

Research settings
This research was longitudinal and undertaken between 2007 and 2010 in a UK-based business school. The staff, student and alumni accounts presented below belong to the first three years of the study, with around 250 management students having participated in the business simulation project during that period. The student profile includes predominantly learners with limited real-life employment experience, and some mature students returning to university driven mainly by career diversification. In terms of nationalities, all cohorts were diverse and included students from Europe, Asia, Africa and North America. The students' views were obtained using a focus group method (Krueger and Casey, 2000). Focus groups were conducted after each delivery of the business simulation project. The moderator of each focus group had not been involved in the delivery and, hence, was free to seek clarification of any focal claim or statement. This approach to conducting the focus group interviews diminished both potential anticipatory bias and favouritism. Additionally, the themes emerging from different focus groups were cross-examined for patterns. The student views were then

ET 54,5

corroborated by questionnaire responses obtained from graduates via the alumni network (Salford Alumni Association, 2010) within a year of graduation.

Context of business simulation
In this paper we refer to a business simulation delivered to management studies students at the conclusion of their course, providing them with opportunities to manage a business entity and to test the knowledge acquired during the programme of study. The participants in the business simulation worked in large boardroom-style rooms, organised in teams representing simulated companies which competed with each other. Each participant carried out a managerial role as the executive overseeing at least one business function, typically strategy, operations, human resource management, finance, research and development or marketing. The duties and responsibilities within the assigned role were dictated, in the main, by the specifics of the functional discipline, allowing for flexibility and enabling participants to experience role ambiguity and even conflict. The instructors' role was largely one of facilitation: helping participants to adopt an active role in learning and, particularly, in exploring the context within the simulation. Instructors also contributed to the richness of the simulated context by representing the roles of industry regulators, overseeing compliance with the respective industry regulations and policies. Additionally, instructors encouraged and maintained face-to-face discussions, involving reciprocal questioning and critical exploration methods (Duckworth, 2006), with an emphasis on the situational application of theory to practice. The simulated sessions were typically organised in an eight-hour work pattern with a one-hour lunch break, for a period of two weeks, in a real-world office-style working environment, occasionally supported by mini-lectures if instructors felt that students had gaps in their knowledge of a particular topic or issue.
After the two-week period, students were given a ten-week self-study period to work on the individual task of reflectively assessing the application of chosen theoretical frameworks, relevant to the individual role, to practice. Putatively at the initial stage, and then reportedly, such an approach assisted the development of analytical and critical thinking skills. During this period students were supported with an online learning environment and occasional on-request face-to-face discussions with instructors. This business simulation design enabled participants to get a reasonably realistic experience of managing a business entity in a competitive environment, while practising their leadership, communication and negotiation skills under pressure of time and tasks. Although two weeks may not be a sufficient time-span for different styles of learners, it certainly provides a taste of being in a managerial seat and echoes constructivist pedagogic beliefs, accentuating the significance of students being encouraged to apply and assess management theories taught within the programme of study.

Results and discussion
After the in-class simulation activities, it was commonly acknowledged that the discipline-level theory studied in courses preceding the simulation made much more sense than before, with some students concluding that they had developed ownership of theories, which enabled them to use theory driven by the situational context rather than following well-established prescriptions. The simulation's participants reportedly


recognised an increase in self-confidence due to practically gained awareness that theory is not perfect and that learning from making mistakes was more enlightening than following advice prescribed by theory.

Exploring student views
The views below have derived mainly from the analysis of the focus group material and are encapsulated to reflect the main themes which emerged. These include criticism of teamwork; appreciation of the opportunity to practise relevant theory and, related to this, team-based decision making, with the resurfacing of some cultural differences; and rethinking of the potential impact of the immediate business environment and the interplay between the main business functions. The facilitating involvement of instructors also received approval for the opportunity it provided to engage students in tentative dialogue and reciprocal questioning. These themes are concluded with an analysis of questionnaire responses obtained from graduates via the university's alumni network enquiry. The majority of students explicitly commended the opportunity to apply in practice the theoretical concepts learnt during their course of study. The very few unfavourable comments that were expressed related to the time-demanding nature of the business simulation and to teamwork. Given that the design of the business simulation requires participants' involvement for two consecutive weeks in order to immerse them in a routine of managing a business entity, the time commitment needed may attract some disapproval. On the other hand, as King and Newman (2009) assert, the participants had a real feel for decision making and teamwork, which is fairly arduous to arrange in a university setting.
There was a modicum of dissatisfaction with teamwork reported by some respondents, mainly due to differences in individual students' commitment to the team's tasks and a subsequent lack of differentiation in the case of uneven contributions to the team's outcomes (Karau and Williams, 1997). Respondents, though, noted that gaining practical managerial experience may not always be considered a gain, due to the prevailing desire to pass the module rather than to learn. This attitude has also been recognised by Knowles et al. (2005), who distinguished it as having two facets: the need to know and readiness to learn. According to the students' feedback, this attitude also corresponds to the immediate need to balance student life's current priorities against the need to demonstrate relevant experience, which will emerge much later during a job search or thereafter. Students were aware that a business simulation does not reflect all the complexity of real-life situations. Nonetheless, it was claimed that it made them aware of the potential impact of the external environment on the business as a whole. It also helped participants to be increasingly mindful of the interrelatedness of the main business functions: marketing, operations, personnel and finance. Arguably, applied knowledge of these essential elements of management theory and their interrelationships contributes towards enhancing students' employability. These assertions support the literature's claim that, in spite of the limitations of the controlled environment provided by simulations, they generate insights that deepen individual learning (Adobor and Daneshfar, 2006; Fripp, 1997; Low et al., 1994; Moon, 2004).
It was noted by respondents that the face-to-face and instructor-to-team exploratory discussions helped international students, especially those of North American and Chinese origin, to recognise cultural differences in dealing with their functional duties, which frequently led to active disagreement or even interpersonal conflict. Whilst




American students tended to make bolder decisions in terms of planning or the scale of initiatives, Chinese students were predominantly fixed on small, often reduction-driven changes, such as scale-reducing or price-cutting efforts, in optimising business performance. Timely and apt instructor-led facilitation overcame what Chang et al. (2003) identify as a barrier to enhancing the learning experience and cultural awareness of participants.

Students' views on business simulation after securing employment
To discern the impact of the business simulation on employability as perceived by graduates, they were contacted via the alumni register one year after their graduation. This method highlighted a few revealing findings. At the outset, it became apparent that one of the main benefits indicated by alumni was the presence of the business simulation as such in their curriculum. It was noted by many alumni that mentioning the business simulation during a pre-employment interview had a positive effect, since it raised employers' curiosity, particularly about what had been learnt within it, thereby steering the interviews in the direction of those candidates' strengths. Furthermore, the presence of this business simulation in the curriculum was deemed to give graduates with no previous formal work experience an opportunity to accentuate during employment interviews the non-formal work experience gained during the business simulation, which reportedly had enabled them to stand out from the pool of other candidates. In the words of an alumnus:
[I]t was specifically useful in convincing companies that although I did not have real world experience I was a little better prepared than most (of other candidates) (Respondent 2).


In terms of views on the impact of the business simulation on alumni's confidence in preparing for employment, it was outlined that the business simulation helped students to gain confidence in their understanding of a potential employer's company, guiding preparation for interviews and, as one of the respondents puts it:
[T]he module gave me a shot in the arm to the effect of being able to appreciate the larger picture of the business that is being done by the organisation that I was interviewing (sic) for (Respondent 3).

The above opinion corroborates Harvey's (2001) assertion that employers tend to be favourably disposed towards graduates with work experience, which includes, but is not limited to, formal, non-formal, short employment or placement with a company. In particular, graduate recruiters are increasingly attracted by new graduates with work experience as part of their programmes of study (Harvey, 2001, p. 103). In relation to skills development, or improvement, it was indicated that teamwork was useful in learning real-world communication and decision making, as well as in business planning and the subsequent tracking and rectification of individuals' and teams' oversights. The importance of critical thinking was succinctly summarised by one respondent:
The module gave me a hands on in the criticality of each and every department of a business be it marketing, operations, finance, HR or overall leadership. More importantly the biggest take away from the module was the importance of a fine balance across all departments in order to run a successful business (Respondent 3).

Additionally, it was indicated that the positioning of the business simulation as the last module of the management studies programme was beneficial, as it provided an

opportunity for the practical application of all the theoretical material learned within the programme curriculum.

Instructors' outlook on the effectiveness of business simulation
Using business simulation as a practice-based teaching method is not without its challenges. Drawing on the analysis of the four-year delivery of the business simulation to students of different programmes and levels, the biggest challenges to its effective delivery have been identified: creating an environment conducive to learning, neutralising the effect of students' uneven contribution to work and coordinating group dynamics. To ensure the authenticity of the simulated business environment, the execution of an individual role has to be closely interlinked with all business decisions via adequate communication and coordination of decision making, which is to be based on pertinent theory. Therefore, the significance of the computer-based element, i.e. the use of a software package, within the simulation has to be reduced to emulating the basic characteristics of an industry's environment in the form of reports. Instead, the objectives of communication and coordination should be reinforced through the facilitation of team discussions, with instructors challenging participants' decision making on the grounds of the choice and application of relevant theories. Two other main challenges considered during the planning and delivery stages of the business simulation are offsetting students' uneven contribution to work and issues of group dynamics. These challenges are not a constituent of business simulation as a teaching method but are generic attributes of students' team-based behaviour.
While the former is attributed to the pass vs learn attitude (Knowles et al., 2005), rather than social loafing (Karau and Williams, 1997), and may well be counterbalanced by peer assessment, the latter represents a constructive challenge as it implies practising such skills as leadership, communication, negotiation, problem solving and decision making, largely referred to as employability skills (Harvey, 2001).

Conclusions
The main initial focus of this study was on improving the effectiveness of business simulation as a teaching method. Through continuous analysis of students' feedback and observation notes, it was found that elements such as topical and enduring face-to-face formative feedback, team-to-tutor issue-based discussions and reflective practice have a greater impact on students' learning than engaging with sophisticated simulation software. These indications were subsequently validated by summative assessment and led to a business simulation design that only marginally utilises the software element. By and large, this study supports the view that business simulation is an invaluable tool to supplement such traditional methods of syllabus delivery as lectures, seminars and tutorials (Anderson and Lawton, 2009; King and Newman, 2009; Kumar and Lightner, 2007; Whiteley and Faria, 1989). At the same time, however, effective delivery of a business simulation requires a significant amount of time and effort for its preparation, organisation and implementation. This echoes the view that such an intensive teaching method requires a considerable commitment of tutoring time and other associated resources to facilitate appropriate levels of learning and learner engagement (Anderson and Lawton, 2009; Faria and Wellington, 2004; Lean et al., 2006) and, therefore, may not be a primary choice for universities today. On the other hand, the list of alternatives to business simulation, offering innovative provision of





work experience, hardly presents an overwhelming variety, comprising as it does sandwich courses, semester and shorter placements, visits and work shadowing (Harvey, 2001). Despite this, work experience is increasingly being seen as enhancing students' employability prospects. It has been found that, despite the limitations of the controlled environment inherent in business simulation, participants were able to apply learnt theory to practice, which reportedly compensated for their lack of work experience. Furthermore, it was noted that the practice of leadership, decision-making and negotiation skills gained through the business simulation, as well as exposure to certain standards and practices, contributed significantly to graduates' self-confidence. This boost of confidence, supported by the augmented ability to discern the gap between theory and practice, has been perceived by graduates as helping their employability.
References
Adobor, H. and Daneshfar, A. (2006), Management simulations: determining their effectiveness, Journal of Management Development, Vol. 25 No. 2, pp. 151-68.
Anderson, P.H. and Lawton, L. (2009), Business simulations and cognitive learning: developments, desires and future directions, Simulation & Gaming, Vol. 40 No. 2, pp. 193-216.
April Training Executive (2010), Business Simulation Executive, April Training Executive, Cheshire, available at: www.trainingsimulations.com/training_products/executive/executive.php (accessed 23 August 2010).
Bell, R.C. (1979), Board and Table Games from Many Civilizations, Dover Publications Inc, New York, NY.
Bennis, W. and O'Toole, J. (2005), How business schools lost their way, Harvard Business Review, Vol. 83 No. 5, pp. 96-104.
Bhanugopan, R. and Fish, A. (2009), Achieving graduate employability through consensus in the South Pacific island nation, Education + Training, Vol. 51 No. 2, pp. 108-23.
Biggs, J. (1999), Teaching for Quality Learning at University: What the Student Does, Open University Press, Buckingham.
Chang, J., Lee, M., Ng, K.L. and Moon, K.L. (2003), Business simulation games: the Hong Kong experience, Simulation & Gaming, Vol. 34 No. 3, pp. 367-76.
Chia, R. and Holt, R. (2008), The nature of knowledge in business schools, Academy of Management Learning & Education, Vol. 7 No. 4, pp. 471-86.
Chin, J., Dukes, R. and Gamson, W. (2009), Assessment in simulation and gaming: a review of the last 40 years, Simulation & Gaming, Vol. 40 No. 4, pp. 553-68.
Cox, S. and King, D. (2006), Skill sets: an approach to embed employability in course design, Education + Training, Vol. 48 No. 4, pp. 262-74.
D'Cruz, J. and Soberman, D. (2009), Managing business schools to weather economic change, INSEAD Working Paper No. 2009/27/MKT, INSEAD, Fontainebleau, available at: http://ssrn.com/abstract=1413794 (accessed 6 January 2010).
Doyle, D. and Brown, W.F. (2000), Using a business simulation to teach applied skills: the benefits and the challenges of using student teams from multiple countries, Journal of European Industrial Training, Vol. 24 No. 6, pp. 330-6.
Duckworth, E.R. (2006), The Having of Wonderful Ideas and Other Essays on Teaching and Learning, 3rd ed., Teachers College Press, New York, NY.
Elsey, B. and Eskandari, M. (1999), Identifying the management development needs of senior executives in Iran's teaching hospitals, Journal of Management in Medicine, Vol. 13 No. 6, pp. 421-35.

Faria, A.J. and Wellington, W.J. (2004), A survey of simulation game users, former-users and never-users, Simulation & Gaming, Vol. 35 No. 2, pp. 178-207.
Faria, A.J., Hutchinson, D., Wellington, W.J. and Gold, S. (2009), Developments in business gaming: a review of the past 40 years, Simulation & Gaming, Vol. 40 No. 4, pp. 464-87.
Feinstein, A.H., Mann, S. and Corsun, D.L. (2002), Charting the experiential territory: clarifying definitions and uses of computer simulation, games, and role play, Journal of Management Development, Vol. 21 No. 10, pp. 732-87.
Fripp, J. (1997), A future for business simulations?, Journal of European Industrial Training, Vol. 21 No. 4, pp. 138-42.
Ghoshal, S. (2005), Bad management theories are destroying good management practices, Academy of Management Learning & Education, Vol. 4 No. 1, pp. 75-91.
Gopinath, C. and Sawyer, J.E. (1999), Exploring the learning from an enterprise simulation, Journal of Management Development, Vol. 18 No. 5, pp. 477-89.
Harvey, L. (2001), Defining and measuring employability, Quality in Higher Education, Vol. 7 No. 2, pp. 97-109.
Hay, M. (2008), Business schools: a new sense of purpose, Journal of Management Development, Vol. 27 No. 4, pp. 371-8.
Herz, B. and Merz, W. (1998), Experiential learning and the effectiveness of economic simulation games, Simulation & Gaming, Vol. 29 No. 2, pp. 238-50.
Hillage, J. and Pollard, E. (1998), Research Report RR85: Employability: Developing a Framework for Policy Analysis, Department for Education and Employment, London.
Jones, C. (2006), Enterprise education: revisiting Whitehead to satisfy Gibbs, Education + Training, Vol. 48 No. 5, pp. 356-67.
Karau, S.J. and Williams, K.D. (1997), The effects of group cohesiveness on social loafing and social compensation, Group Dynamics: Theory, Research, and Practice, Vol. 1 No. 2, pp. 156-68.
King, M. and Newman, R. (2009), Evaluating business simulation software: approach, tools and pedagogy, On the Horizon, Vol. 17 No. 4, pp. 368-77.
Knowles, M.S., Holton, E.F. and Swanson, R.A. (2005), The Adult Learner, 6th ed., Elsevier, Amsterdam.
Krueger, R.A. and Casey, M.A. (2000), Focus Groups: A Practical Guide for Applied Research, 3rd ed., Sage Publications, Thousand Oaks, CA.
Kumar, R. and Lightner, R. (2007), Games as an interactive classroom technique: perceptions of corporate trainers, college instructors and students, International Journal of Teaching and Learning in Higher Education, Vol. 19 No. 1, pp. 53-63.
Lane, D.C. (1995), On the resurgence of management simulations and games, Journal of the Operational Research Society, Vol. 46 No. 5, pp. 604-25.
Lean, J., Moizer, J.D., Towler, M. and Abbey, C. (2006), Simulations and games: use and barriers in higher education, Active Learning in Higher Education, Vol. 8 No. 1, pp. 227-42.
Low, M., Venkataraman, S. and Srivatsan, V. (1994), Developing an entrepreneurship game for teaching and research, Simulation & Gaming, Vol. 25 No. 3, pp. 383-401.
Makoto, M. and Takashi, K. (2002), Salespersons' procedural knowledge, experience and performance: an empirical study in Japan, European Journal of Marketing, Vol. 36 Nos 7/8, pp. 840-54.
Marakas, G.M. (1999), Decision Support Systems in the Twenty-First Century, Prentice Hall, Upper Saddle River, NJ.





Martin, D. and McEvoy, B. (2003), Business simulations: a balanced approach to tourism education, International Journal of Contemporary Hospitality Management, Vol. 15 No. 6, pp. 336-9.
Mihail, D.M. and Elefterie, K.A. (2006), Perceived effects of an MBA degree on employability and career advancement: the case of Greece, Career Development International, Vol. 11 No. 4, pp. 352-61.
Mintzberg, H. (2004), Managers not MBAs: A Hard Look at the Soft Practice of Managing and Management Development, Financial Times Prentice Hall, London.
Moon, J.A. (2004), A Handbook of Reflective and Experiential Learning: Theory and Practice, Routledge Falmer, London.
Moore, E.G. (1998), Competitive judgments in a business simulation: a comparison between American and Chinese business students, Psychology and Marketing, Vol. 15 No. 6, pp. 547-62.
Morgan, G. (2009), Challenges of online game development: a review, Simulation & Gaming, Vol. 40 No. 5, pp. 688-710.
Musselwhite, C. (2006), University executive education gets real, American Society of Training and Development, Vol. 60 No. 5, pp. 57-8.
Neubaum, D.O., Pagell, M., Drexler, J.A., Mckee-Ryan, F.M. and Larson, E. (2009), Business education and its relationship to student personal moral philosophies and attitudes toward profits: an empirical response to critics, The Academy of Management Learning and Education, Vol. 8 No. 1, pp. 9-24.
Neuhauser, J.J. (1976), Business games have failed, The Academy of Management Review, Vol. 1 No. 4, pp. 124-9.
Patterson, G.T. (2002), Predicting the effects of military service experience on stressful occupational events in police officers, Policing: An International Journal of Police Strategies & Management, Vol. 25 No. 3, pp. 602-18.
Pfeffer, J. and Fong, C.T. (2002), The end of business schools? Less success than meets the eye, Academy of Management Learning & Education, Vol. 1 No. 1, pp. 78-95.
Pfeffer, J. and Fong, C.T. (2004), The business school business: some lessons from the US experience, Journal of Management Studies, Vol. 41 No. 8, pp. 1501-20.
Rae, D. (2008), Riding out the storm: graduates, enterprise and careers in turbulent economic times, Education + Training, Vol. 50 Nos 8/9, pp. 748-63.
Richardson, V. (2003), Constructivist pedagogy, Teachers College Record, Vol. 105 No. 9, pp. 1623-40.
Salford Alumni Association (2010), Welcome to the Salford Alumni Association!, available at: https://supporters.salford.ac.uk/NetCommunity/Page.aspx?pid=194 (accessed 13 August 2010).
Schön, D. (1983), The Reflective Practitioner: How Professionals Think in Action, Temple Smith, London.
Starkey, K., Hatchuel, A. and Tempest, S. (2004), Rethinking the business school, Journal of Management Studies, Vol. 41 No. 8, pp. 1521-31.
Summers, G.J. (2004), Today's business simulation industry, Simulation & Gaming, Vol. 35 No. 2, pp. 208-41.
Sun, H. (1998), A game for the education and training of production/operations management, Education + Training, Vol. 40 No. 9, pp. 411-6.
Thorngate, W. and Carroll, B. (1987), Why the best person rarely wins: some embarrassing facts about contests, Simulation & Gaming, Vol. 18 No. 3, pp. 299-320.
Tomlinson, M. (2009), The degree is not enough: students' perceptions of the role of higher education credentials for graduate work and employability, British Journal of Sociology of Education, Vol. 29 No. 1, pp. 49-61.

Whiteley, T.R. and Faria, A.J. (1989), A study of the relationship between student final exam performance and simulation game participation, Simulation & Gaming, Vol. 20 No. 1, pp. 44-64.
Wilkins, S.E.D. (2002), Sports and Games of Medieval Cultures, Greenwood Publishing Group, Westport, CT.
Wolfe, J. (2004), Two computer-based entrepreneurship experiences: an essay review, Academy of Management Learning & Education, Vol. 3 No. 3, pp. 333-9.
Wolfe, J. and Crookall, D. (1998), Developing a scientific knowledge of simulation/gaming, Simulation & Gaming, Vol. 29 No. 1, pp. 7-19.

Further reading
Brockbank, A. and McGill, I. (2007), Facilitating Reflective Learning in Higher Education, 2nd ed., McGraw-Hill International, Open University Press, Maidenhead.
Faria, A.J. (1987), A survey of the use of business games in academia and business, Simulation and Games, Vol. 18 No. 2, pp. 207-25.

Corresponding author
Alex Avramenko can be contacted at: a.avramenko@salford.ac.uk




Business Process Management Journal


Emerald Article: Simulation for emergency care process reengineering in hospitals Sung J. Shim, Arun Kumar

Article information:
To cite this document: Sung J. Shim, Arun Kumar, (2010), "Simulation for emergency care process reengineering in hospitals", Business Process Management Journal, Vol. 16 Iss: 5, pp. 795-805.
Permanent link to this document: http://dx.doi.org/10.1108/14637151011076476


The current issue and full text archive of this journal is available at www.emeraldinsight.com/1463-7154.htm

Simulation for emergency care process reengineering in hospitals


Sung J. Shim
Stillman School of Business, Seton Hall University, South Orange, New Jersey, USA, and


Arun Kumar
School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Melbourne, Australia
Abstract
Purpose – Using computer simulation, this paper seeks to model the emergency care process in a hospital and evaluate the effects of some proposed changes to improve patient wait times in the process.
Design/methodology/approach – The paper is based upon a case study conducted at the hospital and uses historical data provided by the hospital to simulate the emergency care process.
Findings – The simulation results demonstrate that the proposed changes, which involve adding another payment station and a new short-stay ward, can shorten patient wait times in the emergency care process. Based upon the results, the paper supports the implementation of the proposed changes.
Research limitations/implications – Two limitations are recognized in this paper. First, the simulation does not consider varying the capacity of resources and locations involved in the emergency care process. Second, the simulation does not consider patients by the clinical disciplines in which they are treated.
Practical implications – The simulation results show that computer simulation can be an effective decision support tool in modelling the emergency care process and evaluating the effects of changes in the process. The results would be helpful to those who are considering reengineering and improving emergency care or other similar processes in hospitals.
Originality/value – Based upon a case study using real-world data, this paper extends the line of studies on computer simulation in healthcare by considering not only patient wait times in the emergency care process but also some ways to improve them and their effects on the process.
Keywords Business process re-engineering, Simulation, Health services, Hospitals, Modelling
Paper type Research paper

Introduction
Organizations reengineer their business processes to contain costs, improve efficiency, and stay competitive in the marketplace. With escalating healthcare costs, hospitals also seek ways to contain costs and provide quality healthcare services. Hospitals have traditionally emphasized breakthroughs in healthcare procedures and technology to stay competitive. As competition among hospitals continues to intensify, however, patients may perceive little difference in the healthcare procedures and technology used by different hospitals. Consequently, hospitals have come to understand that process reengineering could be a better route to competitive advantage. Just as many businesses successfully

Business Process Management Journal, Vol. 16 No. 5, 2010, pp. 795-805. © Emerald Group Publishing Limited, 1463-7154. DOI 10.1108/14637151011076476


reduce costs and gain competitive advantage by reengineering their business processes, hospitals can reengineer the way certain healthcare processes are carried out to achieve efficiency and cost containment. Computer simulation, which has proven successful in improving various business processes, can also be an effective tool in the search for more efficient processes in hospitals. This paper describes a case study undertaken at Tan Tock Seng Hospital in Singapore (referred to as "the Hospital" hereafter for brevity). The Hospital management is considering implementing some changes to improve patient wait times in the emergency care process. Using computer simulation, the study models the emergency care process and assesses patient wait times in the process. Then, it evaluates the effects of the changes considered by the Hospital management on patient wait times in the process. Given the paucity of studies on methods and techniques for emergency care process reengineering in hospitals, the results of the study should prove helpful to those considering reengineering and improving emergency care or other similar processes in hospitals.

Computer simulation in healthcare
Computer simulation involves modelling processes. These models enable analysts to study how a system reacts to conditions that are not easily or safely applied in a real-world situation and how the working of an entire system can be altered by changing individual parts of the system (Proctor, 1996). The real power of simulation is fully realized when it is used to study complex systems (Kelton et al., 1998). Healthcare is a dynamic system with complex interactions among various components and processes. Furthermore, healthcare management operates in an environment of aggressive pricing, tough competition, demanding patients, and rapidly changing guidelines.
To meet these challenges, healthcare management must respond quickly to identify critical system processes, recognize all relevant resources, access real-time information, and analyze "what-if" cases (Stepanovich and Uhrig, 1999). While there are many applications of computer simulation to healthcare management and operations, we may classify these into two groups:
(1) applications to healthcare systems at various levels of communities, regions, or the nation; and
(2) applications to specific operations, processes or services in healthcare institutions.
The first group includes applications intended to study the provision of mental health, public health, health reform, or the healthcare workforce, often with policy implications. For example, Anderson et al. (2002), Jacobson and Sewell (2002), Levy et al. (2002), Rauner (2002) and Zaric (2002) illustrate the use of simulation for various health policy analyses. The second group, which is relevant to the case study of this paper, includes applications intended to improve facility design, staffing, and scheduling and to reduce patient wait times and operating costs (Anderson, 2002). It is worth noting that many applications of computer simulation to specific healthcare processes assess patient wait times. Everett (2002) describes a simulation model that provides a means for a central bureau to schedule the flow of elective surgery patients to appropriate hospitals in Australia to reduce wait times. van Merode et al. (2002) use simulation to determine the optimal production and inventory policies for each combination of patient type and cytostatic drug type to minimize patient wait

times and costs. Further, Blake et al. (1996) describe a simulation model of the emergency room to investigate issues contributing to patient wait times, and indicate that patient wait time is affected by the availability of staff physicians and the amount of time physicians are required to spend on the education of medical residents. Lane et al. (2000) also describe a simulation model to understand patient wait times in an accident and emergency department, and find that while some delays to patients are unavoidable, reductions can be achieved by selective augmentation of resources within, and relating to, the accident and emergency department. A common objective of these simulation models is to reduce patient wait times in the emergency department or other settings. The case study described in this paper attempts to extend this line of studies by considering not only patient wait times but also some ways to improve patient wait times in the emergency care process.

Methods
The Hospital is the second largest in Singapore, with 1,400 beds, and it provides healthcare services in 17 clinical disciplines. The Accident and Emergency (A&E) Department of the Hospital treats about 390 patients on a daily average, which accounts for about 28 percent of all emergency patients treated in the public hospitals in Singapore.

Modelling the emergency care process
Any patient coming to the A&E Department first stops at the screening station, where it is determined to which part of the A&E Department the patient should be routed next. After the screening, the patient registers at the registration station, and then, based upon the screening result, a nurse triages the patient at the triage station. The patient then waits to see a doctor, who determines and provides the appropriate treatment for the patient. When the patient is discharged after the treatment, the patient arranges payment at the payment station before leaving the A&E Department.
The A&E Department classifies emergency patients into four classes, PAC 1 through PAC 4, based upon the patient's acuity class (PAC). PAC 4 patients are those who should go to clinics instead of the A&E Department owing to their minor symptoms or diseases. In contrast, PAC 1 patients are seriously ill or injured, for example in car accidents or stroke/heart attack, and need life-saving treatments. Data from the Hospital show that many PAC 1 patients are treated in the specialty areas of cardiovascular and respiratory medicine, as they have difficulty breathing. The diseases or injuries of PAC 2 and PAC 3 patients are generally related to activities in the workplace. The top three specialty areas where PAC 2 and PAC 3 patients are treated are general medicine (e.g. food poisoning), general surgery (e.g. bleeding injury requiring minor operation), and orthopedics (e.g. broken bones). PAC 1 patients are the most urgent and so are treated immediately, followed by PAC 2, PAC 3, and PAC 4 patients, in that order.
Table I shows the profile of patients treated in the A&E Department from July 2005 to January 2006. Most emergency patients are either PAC 2 or PAC 3 patients, and PAC 3 patients are generally a little more frequent than PAC 2 patients. PAC 1 patients usually account for less than 10 percent of all emergency patients. PAC 1 patients do not experience long wait times; in fact, all PAC 1 patients should be treated immediately, and hence there should be no wait time for them. The median wait times recommended by the Singaporean Ministry of Health are 20 minutes for PAC 2 patients and 30 minutes for PAC 3 patients (Ministry of Health, 2003). However, the median wait times experienced in the A&E Department of the Hospital are 0.79 hours (or 47 minutes) for PAC 2 patients and 1.03 hours (or 61 minutes) for PAC 3 patients, as shown in Table I. Prolonged wait times make patients frustrated and dissatisfied, as reported in previous studies, e.g. Kyriacou et al. (1999). Also, prolonged wait times prevent doctors and nurses from doing their best work and leave them disillusioned and angry (Waldrop, 2009). Ultimately, patients may be diverted away from the Hospital. The Hospital management is aware of these problems and wants to explore some direct ways to improve patient wait times in the emergency care process, while still considering making more resources and space available in the A&E Department in the long term.
Figure 1, which was constructed in SIMUL8, shows the simulation model for the current emergency care process. The simulation model consists of four basic components:
(1) input (entrance);
(2) queues (waits);
(3) work stations (screening, registration, triage, consultation, and payment stations); and
(4) exits (discharged or hospitalized).
At each work station is a queue of patients waiting to be serviced. Patients are entities that the simulation model processes, and work stations are locations where entities are routed for processing. Also, each work station is attended and serviced by healthcare providers. Healthcare providers are resources that the simulation model uses for servicing entities. Healthcare providers in the A&E Department include doctors (further classified into senior doctors and medical officers, based upon their qualifications and experience) and nurses (further classified into senior staff nurses, staff nurses, assistant nurses, nurse officers, and healthcare assistants).
The path networks in the simulation model are from the entrance to the exits. The changes that the Hospital management considers in order to improve patient wait times in the emergency care process include:
. adding another payment station; and
. adding a new short-stay ward.
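The four-component structure described above (input, queues, work stations, exits) can be illustrated with a small queueing sketch. The following is a hypothetical, deterministic tandem-queue model written in Python for illustration only; the authors' actual model was built in SIMUL8, and the station service times and patient arrival times below are invented, not Hospital data.

```python
STATIONS = ["screening", "registration", "triage", "consultation", "payment"]
SERVICE_MIN = {"screening": 1.0, "registration": 3.0, "triage": 2.0,
               "consultation": 15.0, "payment": 2.0}  # assumed service times (minutes)

def simulate(arrival_times):
    """Run patients through single-server FIFO stations in sequence.

    Each station serves one patient at a time; a patient waits whenever the
    next station is still busy. Returns the total wait (minutes) per patient.
    """
    free_at = {s: 0.0 for s in STATIONS}   # time at which each station is next free
    waits = []
    for arrival in arrival_times:           # arrival times, sorted ascending
        clock, wait = arrival, 0.0
        for station in STATIONS:
            start = max(clock, free_at[station])  # queue if the station is busy
            wait += start - clock
            clock = start + SERVICE_MIN[station]
            free_at[station] = clock
        waits.append(wait)
    return waits

waits = simulate([0, 5, 10, 40])  # four patients arriving at minutes 0, 5, 10, 40
```

With these invented numbers the consultation station dominates the waits (waits == [0.0, 10.0, 20.0, 5.0]), mirroring the paper's observation that wait time concentrates at the busiest work stations.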

Table I. Profile of patients (July 2005 to January 2006)

                                                PAC 1   PAC 2   PAC 3   PAC 4   Total (c)
Average daily number of patients (a)               31     166     188       3        388
(a)/(c) (%)                                         8      43      48       1
Average of monthly medians of patient
  waiting times (hours)                          0.00    0.79    1.03    0.73       0.84
Average daily number of patients
  being hospitalized (b)                           23      79      25       0        127
(b)/(c) (%)                                        18      62      20       0
(b)/(a) (%)                                        74      47      14       0
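The percentage rows in Table I follow directly from the daily patient counts. The short Python sketch below recomputes them from the counts quoted above (published figures may differ by a point owing to rounding).

```python
# Daily counts from Table I: {class: (patients per day (a), hospitalized per day (b))}
daily = {"PAC 1": (31, 23), "PAC 2": (166, 79), "PAC 3": (188, 25), "PAC 4": (3, 0)}

total_patients = sum(a for a, _ in daily.values())       # total (c) for patients
total_hospitalized = sum(b for _, b in daily.values())   # total (c) for hospitalized

# Share of all patients, (a)/(c), and hospitalization rate, (b)/(a), in percent
share = {pac: round(100 * a / total_patients) for pac, (a, _) in daily.items()}
hosp_rate = {pac: round(100 * b / a) for pac, (a, b) in daily.items()}
```

For example, PAC 1 patients make up about 8 percent of arrivals but roughly 74 percent of them are hospitalized, consistent with their acuity.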

[Figure 1. Simulation model before the changes (SIMUL8 screenshot): entrance; queues for screening, registration, triage, and payment; screening booth; registration; triage; consultation and resuscitation rooms; payment; and exits to medication and discharge, bed request/ICU, and the observation room.]


Figure 2, which was also constructed in SIMUL8, shows the simulation model for the emergency care process with these two changes (circled in Figure 2). A payment station is already located at the end of the emergency care process. Starting with registration and ending with payment is common in any emergency care or similar healthcare process. There are two types of patients based upon their payments, i.e. those who pay only the standard fee (currently $70) for standard services and those who pay more than the standard fee for services in addition to standard services. The payment station currently handles both types of patients, but it takes more time to arrange payments for patients who pay more than the standard fee than for patients who pay only the standard fee. Therefore, the Hospital management considers setting up another payment station between the registration and triage stations. With two payment stations in the process, all PAC 2, PAC 3, and PAC 4 patients first arrange the standard payment at the new payment station located between the registration and triage stations; then, only those patients who have to pay more than the standard fee stop at the second payment station to arrange additional payments, while the other patients bypass it. In contrast, PAC 1 patients do not go to the triage station, and they pay at the second payment station as they pay more than the standard fee. The Hospital management expects that the new payment station can improve patient wait times not only at the existing payment station but also at other work stations in the process. There are some PAC 2, PAC 3, or PAC 4 patients whose medical conditions are still unstable and undetermined for the next step of treatment even after the consultation by doctors. These patients are routed to the observation room for further observation and monitoring by doctors and nurses.
They stay in the observation room until their medical conditions become stable and the next step of treatment can be determined. Some patients stay in the observation room for a few hours, while others stay for several days. Given the limited capacity of the observation room, the longer patients stay in the observation room, the longer other patients wait to get in. Thus, the Hospital management considers setting up a new short-stay ward (called the emergency diagnosis and therapy center in Figure 2) only for patients who need to be further observed and monitored for less than a day, while keeping the observation room for patients who need to be further observed and monitored for longer than a day. The Hospital management expects that the new short-stay ward can also improve patient wait times in the process.

Data
For the simulation in this study, we used historical data provided by the Hospital. The data entry in the A&E Department is computerized. The computer system keeps both healthcare data used mainly by healthcare providers and non-healthcare data used mainly by operation quality management. The Hospital also uses radio-frequency identification technology, which increases the accuracy of data on patients, particularly on their movements from one work station to another in the emergency care process. Patients in the A&E Department go through many steps from screening, registration, payment, triage, and consultation, with possible additional paths to laboratory and X-ray, to the point of observation before discharge. We were provided with computerized data for various factors associated with the emergency care process, such as:

[Figure 2. Simulation model after the changes (SIMUL8 screenshot): same layout as Figure 1, with the new counter payment station between registration and triage and the new short-stay ward (EDTC, emergency diagnosis and therapy center) circled.]


. number of patients treated;
. monthly, daily, and hourly patterns of patient arrivals;
. capacity of each work station; and
. number of healthcare providers available at each work station.


When data needed in the simulation model were not readily available, we used the best estimates provided by Hospital staff familiar with the emergency care process. We used SPSS software for statistical analysis of the data and obtained various parameters such as mean and median values of patient wait times, numbers of patients in the different acuity classes, inter-arrival times of patients, service times at work stations, probability distributions, and so on.

Running and validating the simulation model
The simulation model was run for 100 independent replications, with each replication using an additional warm-up period. The warm-up period is set for the simulation run to eliminate any bias at the early stages of the process (Law and Kelton, 2000). The run length and number of independent replications of the simulation were determined based upon the tests of normality and independence proposed by Law and Kelton (2000). The simulation results presented in the following section are based upon the average results of the 100 independent replications.
Validation is the process of determining whether the simulation model is a useful or reasonable representation of the real system (Pritsker et al., 1997). Absolute validation is usually impossible because the simulation is at best an approximation of the real system, and the most definitive method is to compare the output data from the simulation with the actual data from the existing system using formal statistical analyses such as confidence intervals (Son, 1993). In validating the simulation model of this study, we calculated the confidence intervals of the simulation outputs at the 95 percent (α = 0.05) confidence level and compared them to the actual values provided by the Hospital. We also verified the architecture of the simulation model with staff in the A&E Department before the simulation runs and showed the simulation results to the staff afterwards to ensure that the simulation results are reliable.
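The confidence-interval check described above can be sketched in a few lines. This is an illustrative Python reconstruction, not the authors' code: the replication outputs and the "actual" value are synthetic, and the Student-t quantile for 99 degrees of freedom (about 1.984) is hard-coded under the assumption of n = 100 replications.

```python
import math
import statistics

def ci95(samples, t_crit=1.984):
    """95% confidence interval for the mean of independent replication outputs.

    t_crit is t_{0.975, n-1}; the hard-coded default assumes n = 100 replications.
    """
    n = len(samples)
    mean = statistics.mean(samples)
    half_width = t_crit * statistics.stdev(samples) / math.sqrt(n)
    return mean - half_width, mean + half_width

# Synthetic stand-ins for the 100 replication outputs of one wait-time metric
replications = [63.0 + 0.1 * ((7 * i) % 11) for i in range(100)]
low, high = ci95(replications)

actual = 63.5  # hypothetical value observed at the hospital
metric_validated = low <= actual <= high  # model reproduces this metric if True
```

The model is judged capable of reproducing a metric when the actual value falls inside the interval, which is exactly the comparison reported in Table V.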
Results and discussion
We ran the simulation model in two different scenarios: before and after the changes in the emergency care process. Both scenarios are built on a common foundation or base model in which all variables are held constant. Table II shows the average numbers of
Table II. Number of patients in the simulation model

                            Before the changes              After the changes        Difference
PAC                      Average (a)  Low 95%  High 95%  Average (b)  Low 95%  High 95%  (b)-(a)
PAC 1 patients               149        144      154        147         145      149       -2
PAC 2 patients               778        769      786        775         770      779       -3
PAC 3 and PAC 4 patients     934        926      943        939         935      944        5
patients processed in each simulation run. The simulation runs in both scenarios use almost the same numbers of patients, in total as well as by PAC. Table III shows the simulation results on patient wait times at each work station in the emergency care process before and after the changes. Before the changes, patients experience the longest wait time at the registration station (3.29 minutes), followed by the triage station (2.29 minutes), the payment station (1.12 minutes), and the screening station (0.15 minutes). The actual average patient wait times at the registration and triage stations were 3.60 and 2.50 minutes, respectively, which are within the 95 percent confidence intervals of the simulated estimates. The new payment station and the short-stay ward shorten patient wait times most at the triage station, by 2.20 minutes, and then at the payment station, by 0.64 minutes, while the changes have little effect on patient wait times at the screening and registration stations. Throughout the emergency care process from the screening station to the payment station, the changes shorten patient wait times by 2.81 minutes, i.e. about 41 percent of the wait times experienced before the changes. Table IV shows the simulation results on patient wait times in the emergency care process before and after the changes, by PAC. Before the changes, PAC 3 and PAC 4 patients experience the longest total wait times (64.55 minutes), followed by PAC 2 patients (63.88 minutes) and PAC 1 patients (0.27 minutes). The changes shorten the wait time of PAC 2 patients by 7.80 minutes and the wait time of PAC 1 patients by 0.27 minutes. It is notable that there is no wait time for PAC 1 patients in the process after the changes. However, the changes lengthen the wait times of PAC 3 and PAC 4 patients by 6.01 minutes. It is worth noting that the changes significantly shorten the wait times of PAC 1 and PAC 2 patients, who need more immediate treatment than PAC 3
Table III. Patient wait times (in minutes) at work stations before and after the changes

                            Before the changes              After the changes        Difference
Work station             Average (a)  Low 95%  High 95%  Average (b)  Low 95%  High 95%  (b)-(a)
Screening                    0.15       0.14     0.16        0.15       0.15     0.15      0.00
Registration                 3.29       2.92     3.66        3.32       3.11     3.52      0.02
Standard payment (new)         -          -        -         0.97       0.93     1.00        -
Triage                       2.29       2.03     2.55        0.09       0.09     0.10     -2.20
Payment                      1.12       1.09     1.15        0.48       0.46     0.51     -0.64
Total wait times at
  work stations              6.86       6.19     7.52        5.01       4.74     5.28     -2.81

Table IV. Patient wait times (in minutes) by PAC before and after the changes

                            Before the changes              After the changes        Difference
PAC                      Average (a)  Low 95%  High 95%  Average (b)  Low 95%  High 95%  (b)-(a)
PAC 1 patients               0.27       0.18     0.37        0.00       0.00     0.00     -0.27
PAC 2 patients              63.88      60.24    67.52       56.08      44.73    67.43     -7.80
PAC 3 and PAC 4 patients    64.55      63.82    65.29       70.56      68.15    72.98      6.01
Total times in the system  133.93     131.78   136.08      123.33     123.01   135.53    -10.60


and PAC 4 patients. In fact, 74 percent of PAC 1 patients and 47 percent of PAC 2 patients are hospitalized, whereas only 14 percent of PAC 3 patients and few PAC 4 patients are hospitalized (Table I). On average, patients stay in the emergency care process for 133.93 minutes before the changes and for 123.33 minutes after the changes. Table V shows the 95 percent confidence intervals of the simulated estimates against the actual values of patient wait times by PAC. All confidence intervals include the actual values. Thus, the simulation model seems to be capable of reproducing patient wait times in the emergency care process in the Hospital.
A couple of limitations are recognized in this case study. First, the simulation does not consider varying the capacity of resources (healthcare providers, including doctors and nurses) and locations (work stations from the screening station to the payment station) involved in the emergency care process. Varying the capacity of resources and locations may have significant effects not only on patient wait times in the process but also on the flexibility and quality of the process. Utilizing more resources and locations, however, may incur increased costs. Second, the simulation does not consider patients by the clinical disciplines in which they are treated. Data from the Hospital suggest that PAC 2 and PAC 3 patients experience long wait times when they are treated in the disciplines of general medicine, general surgery, or orthopedics, and that many PAC 1 patients are treated in the disciplines of cardiovascular and respiratory medicine. Patients treated in different disciplines may experience different wait times in the process. These limitations are certainly not exhaustive, but they are important ones. Obviously, these limitations, in turn, suggest several possibilities for future study.

Conclusion
The Hospital management is considering some changes to the emergency care process in order to improve patient wait times in the process. Using computer simulation, this study models the emergency care process and evaluates the effects of these changes on patient wait times. The simulation results are validated against the actual values. More specifically, the simulation estimates of patient wait times by PAC are compared against the actual values obtained from the Hospital. The 95 percent confidence intervals of the simulation outputs include the actual values, indicating that the simulation model is capable of reproducing the emergency care process in the Hospital with respect to patient wait times. The simulation results demonstrate that the new payment station and the short-stay ward can shorten patient wait times in the emergency care process, and show that computer simulation can be an effective decision support tool in modelling the emergency care process and evaluating the effects of changes in the process. Based upon the simulation results, we support the implementation of the changes to improve patient wait times in the emergency care process in the Hospital.

Table V. Patient wait times (in minutes): simulation results versus actual values

PAC                       95% confidence interval    Actual value
PAC 1 patients                (0.18, 0.37)               0.35
PAC 2 patients               (60.24, 67.52)             61.51
PAC 3 and PAC 4 patients     (63.81, 65.29)             65.01

References
Anderson, J.G. (2002), "Preface: special issue of simulation in health care management", Health Care Management Science, Vol. 5 No. 2, p. 73.
Anderson, J.G., Harshbarger, W., Weng, H.C., Jay, S.J. and Anderson, M.M. (2002), "Modeling the costs and outcomes of cardiovascular surgery", Health Care Management Science, Vol. 5 No. 2, pp. 103-11.
Blake, J.T., Carter, M.W. and Richardson, S. (1996), "An analysis of emergency room wait time issues via computer simulation", INFOR, Vol. 34 No. 4, pp. 263-73.
Everett, J.E. (2002), "A decision support simulation model for the management of an elective surgery waiting system", Health Care Management Science, Vol. 5 No. 2, pp. 89-96.
Jacobson, S.H. and Sewell, E.C. (2002), "Using Monte Carlo simulation to determine combination vaccine price distributions for childhood diseases", Health Care Management Science, Vol. 5 No. 2, pp. 135-45.
Kelton, D.W., Sadowski, P.R. and Sadowski, A.D. (1998), Simulation with Arena, WCB/McGraw-Hill, New York, NY.
Kyriacou, D.N., Ricketts, V., Dyne, P.L., McCollough, M.D. and Talan, D.A. (1999), "A 5-year time study analysis of emergency department patient care efficiency", Annals of Emergency Medicine, Vol. 34 No. 3, pp. 326-35.
Lane, D.C., Monefeldt, C. and Rosenhead, J.V. (2000), "Looking in the wrong place for healthcare improvements: a system dynamics study of an accident and emergency department", Journal of the Operational Research Society, Vol. 51 No. 5, pp. 518-31.
Law, A.M. and Kelton, D.W. (2000), Simulation Modeling and Analysis, McGraw-Hill Higher Education, New York, NY.
Levy, D.T., Chaloupka, F., Gitchell, J., Mendez, D. and Warner, K.E. (2002), "The use of simulation models for the surveillance, justification and understanding of tobacco control policies", Health Care Management Science, Vol. 5 No. 2, pp. 113-20.
Ministry of Health (2003), web site on Singapore health care system, available at: www.moh.gov.sg
Pritsker, A.B., O'Reilly, J.J. and LaVal, O.K. (1997), Simulation with Visual SLAM and AWESIM, Wiley, New York, NY.
Proctor, T. (1996), "Simulation in healthcare", Health Manpower Management, Vol. 22 No. 5, pp. 40-4.
Rauner, M.S. (2002), "Using simulation for AIDS policy modeling: benefits for HIV/AIDS prevention policy makers in Vienna, Austria", Health Care Management Science, Vol. 5 No. 2, pp. 121-34.
Son, Y.K. (1993), "Simulation-based manufacturing accounting for modern management", Journal of Manufacturing Systems, Vol. 12 No. 5, pp. 417-27.
Stepanovich, P.L. and Uhrig, J.D. (1999), "Decision making in high-velocity environments: implications for healthcare", Quality Management in Health Care, Vol. 5 No. 3, pp. 72-9.
van Merode, G.G., Groothuis, S., Schoenmakers, M. and Boersma, H.H. (2002), "Simulation studies and the alignment of interests", Health Care Management Science, Vol. 5 No. 2, pp. 97-102.
Waldrop, R.D. (2009), "Don't be put out by throughput in the emergency department", Physician Executive, Vol. 35 No. 3, pp. 38-41.
Zaric, G.S. (2002), "Random vs. non-random mixing in network epidemic models", Health Care Management Science, Vol. 5 No. 2, pp. 147-55.

Corresponding author
Sung J. Shim can be contacted at: sung.shim@shu.edu


International Journal of Physical Distribution & Logistics Management


Emerald Article: Simulation of goods delivery process Jean-Marie Boussier, Tatiana Cucu, Luminita Ion, Dominique Breuil

Article information:
To cite this document: Jean-Marie Boussier, Tatiana Cucu, Luminita Ion, Dominique Breuil, (2011), "Simulation of goods delivery process", International Journal of Physical Distribution & Logistics Management, Vol. 41 Iss: 9, pp. 913-930. Permanent link to this document: http://dx.doi.org/10.1108/09600031111175852



Simulation of goods delivery process


Jean-Marie Boussier
L'École d'Ingénieurs en Génie des Systèmes Industriels (EIGSI), La Rochelle, France and L3i, Université de La Rochelle, La Rochelle, France


Tatiana Cucu
L'École d'Ingénieurs en Génie des Systèmes Industriels (EIGSI), La Rochelle, France and IMS LAPS, Université de Bordeaux, Bordeaux, France, and

Luminita Ion and Dominique Breuil


L'École d'Ingénieurs en Génie des Systèmes Industriels (EIGSI), La Rochelle, France
Abstract
Purpose - This paper claims that parking policy is one of the most obvious tools for reducing traffic congestion, pollutant emissions and conflicts between transportation network users. The purpose of this paper is to propose and implement a strategy, via a simulation tool, for the sharing of parking places between light cars and vans for goods delivery.
Design/methodology/approach - Temporal and spatial dynamic booking of on-street parking places is described using the multi-agent paradigm. The main agents concerned by the sharing of parking places, their rules and their interactions are implemented. Behavioral models and the learning process of cognitive agents, based on stated preferences collected from the network users, are designed to capture multi-agent interactions.
Findings - By coupling a 2D traffic simulation tool and the COPERT III methodology, it is possible to simulate the traffic and environmental consequences of several scenarios for different infrastructures, occupancy rates of the places reserved for goods delivery, and durations of the delivery process.
Research limitations/implications - Several points are under development: a 3D environment will capture the behavior of agents more realistically, on a larger spatial scale and in real time. The behavioral models will be designed from stated preferences obtained from surveys containing questions coupled with pictures of possible scenarios.
Practical implications - Applied in a real context, the parking place sharing strategy shows benefits for traffic and for the environment. A decision maker can use this strategy for simulating scenarios, in particular in the context of an urban area.
Originality/value - The paper demonstrates how a simulation tool based on a strategy of parking place sharing can satisfy the constraints of transportation network users.
Keywords Transportation, Freight forwarding, Goods delivery, Simulation, Agent-based simulation, Behavioral modeling, City logistics, Environmental impacts
Paper type Research paper

1. Introduction
Urban goods distribution is vital for the prosperity of inner cities, especially for the shopping areas that fulfill an important role for the city. However, goods transport induces noise, air pollution, physical hindrance and a decrease in traffic safety.

International Journal of Physical Distribution & Logistics Management, Vol. 41 No. 9, 2011, pp. 913-930. © Emerald Group Publishing Limited, 0960-0035. DOI 10.1108/09600031111175852


Freight transportation with electric-powered vans is interesting thanks to its benefits on noise, energy and emissions. But when the fleet management is not well optimized, problems remain. One problem, for example, is the congestion created by searching for an available parking place in order to complete a delivery. Conversely, the urban structure imposes accessibility constraints as well as logistical efficiency problems on the urban goods transport system. This results in increased trip delays, lower quality (reliability) and, in some cases, inefficient logistical systems that use more vehicles (vans) than necessary. No discussion can take place without an overview of the interactions between the urban spatial structure and the transportation network. To this end, authorities require tools for analyzing the problems associated with city logistics. The main objective of this work is to build a prototype tool for local authorities of urban transport for optimizing the sharing of parking places between light cars and freight vans. The consequences of the interactions between the different actors of the transportation system will be simulated and the environmental benefits will be pointed out. This paper is organized as follows. After the introduction, Section 2 discusses the interests and difficulties of managing a goods distribution system. In Section 3, the main principles of an agent-based simulation used for describing the interactions between the network users are described; the main tools used for the evaluation of the transportation and environmental impacts of new strategies are also presented. Section 4 presents a case study and Section 5 focuses on work in progress.

2. Coexistence of light vehicles and vans for goods delivery
2.1 State of the art
Generally, studies are focused on the generation of freight or vehicle trips.
Several researchers (List and Turnquist, 1994; He and Crainic, 1998; Gorys and Hausmanis, 1999; Harris and Liu, 1998) have proposed approaches based on the gravitational model or the four-step model. More recently, Muñuzuri et al. (2004) proposed a methodology based on entropy maximization in order to build an origin-destination matrix for freight transport, taking into account home deliveries and deliveries within several branches of industry. Thompson and Taniguchi (1999) tackled the problem of vehicle routing inside the city. For generating and controlling urban goods delivery, simulation tools based on different approaches have been developed. The main functional tools are Freturb (Routhier et al., 2001) (France), GoodTrip (Boerkamps and Binsbergen, 1999), Wiver (Meimbresse and Sonntag, 2000) and Distra (Fridén et al., 2003). Few works or tools have treated the congestion created by freight vans during goods delivery.

2.2 Our proposal: sharing the parking spaces
Small- and medium-sized cities, built around a historical center, are quite often rich with several types of shops as well as craftsmen and small industries. There are only limited opportunities to enhance the physical capacity of the road infrastructure at surface level. Parking policy (on-street or off-street) is one of the most obvious tools for alleviating traffic congestion. Our proposal is to analyze the impacts of a spatial and temporal dynamic booking of on-street stopping places (to park private cars or to deliver goods). The basic idea is to organize the parking places in several different kinds of units; the basic unit can be the street or just a part of the street:
. Each day, before the beginning of goods distribution, the on-street parking places are booked near the locations to be delivered, taking the length of the vans into account. This is the spatial booking.
. Reservations are made only for the periods of the deliveries; each period is computed according to the tour planning of each known supplier. This is the temporal booking.
When the stopping places are not reserved, they are available for parking individual cars under some conditions. Variable message panels (VMP) display information about the state of each place to avoid conflicts (i.e. reserved, available with an authorized stop duration, etc.). Car drivers can identify available places to park their individual cars and the corresponding conditions. For example, an individual car driver sees on the VMP that a space is available for 10 min. If he decides to park the car, he must consider that after 10 min the police will be called to issue a penalty notice (fine). If the private car is about to be parked on a booked parking place, the driver is informed before parking. But he is also free to choose a parking place available for a short delay fixed by the authorities (e.g. 30 min). Figure 1 shows the interactions and communications between the different parts (actors, objects). To improve car traffic fluidity, a reservation system can be established for passenger cars. Indeed, the authorized parking places must be available before the beginning of the booked time slice. If the place is not available for the delivery, the police office
[Figure 1 sequence diagram between the V.M.P., stopping space, control post, supply, car driver and police agents: booking of places (spatial, temporal) from the supply delivery planning; display of place states (reserved, and when); the driver sees the information and parks his car [cnd1]; a request to release the place [cnd2] follows an unauthorized occupation, after which the place is released or the police are alerted. cnd1: time slice finished OR reserved space; cnd2: vehicle present AND alert delay finished.]

Figure 1. Main events and actions when a driver tries to park his individual car


is informed by an alarm in order to evacuate the vehicle if possible. Similarly, freight drivers must identify the places reserved for delivering goods according to the parking time schedule. Our solution could be interesting for local authorities that want to keep parking incomes as well as to reduce congestion and also to decrease the pollution created by the search for available places. The optimization of goods delivery activities must take into account both sets of actors: freight van drivers and light vehicle drivers. For this, simulation seems the most appropriate approach to capture the interactions arising from the coexistence between them.

3. Simulation of the sharing of the parking places
3.1 Choice of agent-based simulation
Few models concern the search for a parking place (Arnott and Inci, 2006); in those, urban areas are simplified and spatially homogeneous. The interactions between vans and light cars cannot be captured. The multi-agent paradigm is able to capture realistically the behavior of road network users and the consequences of their interactions in a dynamic environment. A great number of individual agents can be treated in a consistent simulation framework. In past years, multi-agent systems have been successfully applied to building distributed intelligent systems in various domains, in particular for designing traffic simulators. We find few works on the simulation of the goods delivery process (Cavezzali et al., 2003; Wisetjindawat et al., 2007). Figure 2 shows the main agents and interactions concerned by the occupation of the street during the delivery process. The role of each agent and its interactions with other agents are presented below:
. The control post agent receives information from supply agents concerning the address and number of customers, the number of parcels, the itinerary, the hour of arrival and the average duration of a delivery.
. The control agent (or stopping space agent) books places for deliveries.
. The shop agents interact with stopping space agents to define the location and the duration of the delivery.
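The booking logic described above (spatial booking against the van length, temporal booking against the supplier's tour planning) can be sketched as follows. The class and method names are our own illustrative choices, not part of the authors' simulator:

```python
from dataclasses import dataclass, field

@dataclass
class Booking:
    """Temporal booking of one stopping place for a delivery."""
    start: int          # minutes from midnight
    end: int
    van_length: float   # metres, checked against the place length

@dataclass
class StoppingPlace:
    length: float
    bookings: list = field(default_factory=list)

    def is_free(self, start, end):
        # Free if no existing booking overlaps [start, end)
        return all(b.end <= start or b.start >= end for b in self.bookings)

    def book(self, start, end, van_length):
        """Spatial check (the van fits) + temporal check (the slot is free)."""
        if van_length <= self.length and self.is_free(start, end):
            self.bookings.append(Booking(start, end, van_length))
            return True
        return False

class ControlAgent:
    """Books places for the deliveries announced by the supply agents."""
    def __init__(self, places):
        self.places = places

    def book_delivery(self, arrival, duration, van_length):
        for i, place in enumerate(self.places):
            if place.book(arrival, arrival + duration, van_length):
                return i            # index of the reserved place
        return None                 # no place available: delivery deferred

# A street unit with two places; times are minutes from midnight.
street = ControlAgent([StoppingPlace(length=6.0), StoppingPlace(length=8.0)])
print(street.book_delivery(arrival=540, duration=15, van_length=7.0))  # 1 (only the 8 m place fits)
print(street.book_delivery(arrival=545, duration=15, van_length=7.0))  # None (slot overlaps)
```

When the slot is not reserved, the same `is_free` check could be offered to private car agents, which mirrors the conditional availability displayed on the VMP.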

Figure 2. Main agents concerned by the sharing of parking places (shop, supply, private car, goods van, control post and stopping space, the latter holding its unit, V.M.P. and vehicle identification)

A stopping space agent is autonomous: it adapts itself to environment changes and communicates with a control post agent. It communicates with the private car and goods van agents using the same principle as the traffic signal control (e.g. traffic lights) of intelligent traffic management.


Each control agent is associated with a VMP object. It indicates the status of the parking space for drivers and the duration of the delivery process. At the same time, this agent can recognize the type of vehicles (agents) and can alert the police. Its role is to manage and control the set of parking places of a street or a part of a street. Figure 3 shows the different states of an agent of type stopping space in the simulator.

3.2 Decision-maker entities
3.2.1 Agents simulating the drivers of goods delivery vans. Before the beginning of the delivery tours, each agent has a diary containing the destinations of the delivery process, the itineraries, the dedicated places for delivering in agreement with the length of the van, and the expected durations of the delivery process.
3.2.2 Agents simulating the drivers of light vehicles. Their behavior is more complex because a decision is based on individual rules. Experimental measurements support the notion that seeking a parking space is not a random procedure, but an activity undertaken by the driver for a specific purpose with a pre-set objective (Hess and Polak, 2004). In multi-agent simulation, the analysis of people
[Figure 3 diagram: states of a stopping space agent (available, reserved, parking, loading, no parking, out of service) with transitions such as planned delivery, detect a car, detect the goods van, loading end, cancel, parking time slice finished, out of delay (police alert), edit/delete, and the sub-behaviors "Interact with vehicles", "Manage the V.M.P." and "Communicate with C.P."]

Figure 3. State diagram of a stopping space agent managing a set of on-street parking places
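Read as a transition table, the state diagram of Figure 3 could be sketched roughly as below. The state and event names are paraphrased from the figure, and the exact transition set is our assumption:

```python
# States and events are taken from the Figure 3 diagram; this particular
# transition table is a simplified assumption for illustration.
TRANSITIONS = {
    ("available", "planned_delivery"): "reserved",
    ("available", "detect_car"):       "parking",
    ("reserved",  "detect_goods_van"): "loading",
    ("reserved",  "cancel"):           "available",
    ("loading",   "loading_end"):      "available",
    ("parking",   "car_leaves"):       "available",
    ("available", "edit_no_parking"):  "no_parking",
    ("no_parking", "out_of_service"):  "out_of_service",
}

class StoppingSpaceAgent:
    def __init__(self):
        self.state = "available"
        self.alerts = []

    def on_event(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            # e.g. a private car detected on a reserved place:
            # unauthorized occupation, so the police agent is alerted
            if self.state == "reserved" and event == "detect_car":
                self.alerts.append("police")
            return self.state
        self.state = nxt
        return self.state

agent = StoppingSpaceAgent()
agent.on_event("planned_delivery")   # -> reserved
agent.on_event("detect_car")         # unauthorized: police alerted, stays reserved
agent.on_event("detect_goods_van")   # -> loading
agent.on_event("loading_end")        # -> available
```

A dictionary-keyed transition table keeps the agent's behavior declarative, which matches how the state diagram is drawn.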


behavior is typically disaggregated, meaning that a decision-making model must represent the behavior of each traveler. The simulation is the result obtained from the concurrent execution of the various agents' behaviors, taking into account the feedback process. In this case, the multi-agent system becomes more complex to represent than the system itself. A solution is to make the hypothesis that the individual decision-maker entity depends on the particular application (for example, a household or an organization). This hypothesis allows us to ignore all internal interactions within the group and to consider only the decisions of the group as a whole (Ben-Akiva et al., 1998). We assume that there are two types of drivers of light cars: those having an urgent activity with temporal constraints (work, meeting, etc.) and those having a flexible activity (shopping, administrative procedures, recreational activities, etc.). There are two possibilities (Figure 4):
(1) If a place is available, the agent can park the car.
(2) If no place is available (or the stopping agent gives a duration for the place dedicated to goods delivery that is less than the duration of the agent's activity), the agent has four choices: to do loops while waiting for an available place, to search for another car park, to choose an illicit place, or to defer the activity.
3.2.3 Behavioral model of drivers of light cars searching for an available place. Our simulator will use data obtained from stated preference surveys. This kind of questionnaire is strongly indicated for identifying the significant criteria in a decision-making process and for testing new strategies. A criterion can be quantitative (e.g. parking duration) as well as qualitative (e.g. type of activity). The levels of a criterion are the values (or modalities) of this criterion. Placed in hypothetical scenarios, respondents (drivers of light cars) are asked to assign a preference to scenarios obtained from different levels of the criteria. In order to draw the maximum amount of information, a full matrix of questions with all possible combinations of levels is necessary. In this case, a questionnaire for testing the effects of seven input criteria, each one at two levels, necessitates 2^7 = 128 questions and
In order to draw the maximum amount of information, a full matrix of questions with all possible combinations of levels is necessary. In this case a questionnaire for testing the effects of seven input criteria, each one at two levels necessitates 128 questions and
Figure 4. Choices of driver agents (if a place is available, the agent parks its car and becomes a pedestrian; if no place is available to park the car, it can do loops, choose the 2nd car park, choose an illicit place, or defer the activity and choose the next one)

becomes strongly exhaustive. That is why we used orthogonal arrays as subsets of full designs. The questionnaire must contain particular combinations: each level of each criterion appears an equal number of times, and each combination between the levels of two distinct criteria appears an equal number of times. In each scenario, simultaneously varying several design input criteria may have interactive effects on the studied response. When the effect of one criterion depends on the level of another one, an interaction exists. Taguchi (1987) provides standard orthogonal arrays and linear graphs in order to assign columns to input criteria and interactions (Figure 5). Table I shows one application of this approach. Suppose four criteria are tested: walking distance on foot, with level 1 < 200 m and level 2 > 200 m (named A); price/hour, with level 1 < 1 Euro and level 2 > 1 Euro (named B); type of activity, with level 1 = urgent and level 2 = flexible (named D); and duration of activity, with level 1 < 20 min and level 2 > 20 min (named G). The linear graph also allows evaluating the interactions between these criteria: AB, AD and BD. The response expected from each respondent, for each alternative k and for each scenario j, is the choice frequency (captured by semantic modalities such
Actions: criteria or interactions
Question   A  B  C  D  E  F  G
1          1  1  1  1  1  1  1
2          1  1  1  2  2  2  2
3          1  2  2  1  1  2  2
4          1  2  2  2  2  1  1
5          2  1  2  1  2  1  2
6          2  1  2  2  1  2  1
7          2  2  1  1  2  2  1
8          2  2  1  2  1  1  2


Figure 5. L8(2^7) array and one of its linear graphs (nodes for criteria and arcs for interactions)

Scenario J   A: distance on foot to destination (m)   B: parking price/hour (Euro)   D: type of activity   G: activity duration (min)
1            < 200                                    < 1                            Urgent                < 20
2            < 200                                    < 1                            Flexible              > 20
3            < 200                                    > 1                            Urgent                > 20
4            < 200                                    > 1                            Flexible              < 20
5            > 200                                    < 1                            Urgent                > 20
6            > 200                                    < 1                            Flexible              < 20
7            > 200                                    > 1                            Urgent                < 20
8            > 200                                    > 1                            Flexible              > 20
Alternatives K: to do loops waiting for one available place; to choose the 2nd car park; to choose an illicit place; to defer the activity. Possible responses for each alternative K and each scenario J: very frequently (VF), frequently (F), sometimes (S), very rarely (VR).

Table I. Example: criteria to test for each alternative K and each scenario J
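The balance properties required of the questionnaire (each level appearing equally often in every column, and each pair of levels equally often in every pair of columns) can be checked mechanically. The array below is the L8(2^7) design as printed above:

```python
from itertools import combinations

# L8(2^7) orthogonal array (rows = questions, columns = criteria A..G
# at levels 1/2), transcribed from the printed array.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def is_orthogonal(array):
    """Each level appears equally often in every column, and every pair of
    levels appears equally often in every pair of columns."""
    ncols = len(array[0])
    for c in range(ncols):
        col = [row[c] for row in array]
        if col.count(1) != col.count(2):
            return False
    for c1, c2 in combinations(range(ncols), 2):
        pairs = [(row[c1], row[c2]) for row in array]
        counts = {p: pairs.count(p) for p in {(1, 1), (1, 2), (2, 1), (2, 2)}}
        if len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L8))   # True: 8 balanced questions instead of 2**7 = 128
```

This is why the eight questions suffice for estimating the main effects and the selected interactions.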


as very frequently, frequently, sometimes, rarely, etc.). For example, the responses given by a driver for scenario 5 are shown in Table II. After a statistical filtering of the responses collected by questionnaire, the fusion of the individual responses is done with the Dezert-Smarandache theory (DSmT) (Smarandache and Dezert, 2004). It is a recent extension of the Dempster-Shafer theory (Shafer, 1976) which efficiently takes into account the doubt during a decision-making process and the conflict between information sources. The fusion process (exhaustively detailed in our past works, Boussier et al., 2009) is briefly summarized below. Let Θ = {H_1, H_2, H_3, H_4} be the set of hypotheses which make up the frame of discernment, with H_i the potential alternatives. In our case:

Θ = {drive around; second car park; illicit place; defer activity}   (1)

One response is given for each alternative and can be: very frequently (VF), frequently (F), sometimes (S) or very rarely (VR). The assigned probability m_Θ(A) measures the belief exactly assigned to A and represents how strongly the evidence supports A. A basic probability assignment is a function called a mass function and satisfies:

D^Θ = {∅, H_1, ..., H_1 ∪ H_2, ..., Θ};  m_Θ : 2^Θ → [0, 1];  m_Θ(∅) = 0;  Σ_{A⊆Θ} m_Θ(A) = 1   (2)

where D^Θ is the power set of Θ, ∅ is the null set, A is any subset of Θ, and m_Θ(Θ) is the degree of ignorance. The mass assignment that we have built is in agreement with a linear utility function: m(VF) = 0.4; m(F) = 0.3; m(S) = 0.2; m(VR) = 0.1. People can give the same evaluation for different alternatives; in this case, the doubt of the respondents can be taken into account. A discounting operation, done before the fusion, consists in taking into account the reliability of the source (respondent) R_j. The discounted belief function is:

∀A ⊂ Θ:  m_Θj^{a_Rj}(A) = a_Rj · m_Θj(A);  m_Θj^{a_Rj}(Θ) = (1 − a_Rj) + a_Rj · m_Θj(Θ)   (3)

where a is a coefficient defined according to:

a = 1 − (n − 1)/N   (4)

In formula (4), n is the number of alternatives having the same response and N is the number of studied alternatives. For people giving different responses for all alternatives, we have a = 1.
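As a small illustration of formula (4) and the linear-utility mass assignment, one might compute the discounting coefficient like this (interpreting n as the largest number of alternatives sharing one response, which is our reading of the text):

```python
# Linear-utility mass assignment from the paper: VF=0.4, F=0.3, S=0.2, VR=0.1,
# and the discounting coefficient a = 1 - (n - 1)/N of formula (4).
MASS = {"VF": 0.4, "F": 0.3, "S": 0.2, "VR": 0.1}

def discount_coefficient(responses):
    """n = largest number of alternatives sharing a response, N = number of alternatives."""
    N = len(responses)
    n = max(responses.count(r) for r in set(responses))
    return 1 - (n - 1) / N

def discounted_masses(responses):
    """Discounted masses per alternative; the remainder goes to Theta (ignorance)."""
    a = discount_coefficient(responses)
    masses = {alt: a * MASS[r] for alt, r in enumerate(responses)}
    masses["Theta"] = 1 - sum(masses.values())
    return masses

# A respondent who ranks all four alternatives differently keeps a = 1.
print(discount_coefficient(["F", "S", "VF", "VR"]))   # 1.0
# Identical answers for all four alternatives give a = 1 - 3/4 = 0.25.
print(discount_coefficient(["S", "S", "S", "S"]))     # 0.25
```

With four identical answers, most of the mass moves to Θ, which is exactly the "strongly discounted without completely rejecting the opinion" behavior described below.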
Scenario 5: A: distance on foot to destination > 200 m; B: parking price/hour < 1 Euro; D: type of activity = urgent; G: activity duration > 20 min.
Responses: to do loops waiting for one available place: F; to choose the 2nd car park: S; to choose an illicit place: VF; to defer the activity: VR.

Table II. Responses given by one driver for scenario 5

For people who give the same answer for all alternatives, the mass is strongly discounted without completely rejecting their opinion (a = 0.25 if the four alternatives have the same response). After the successive application of the DSmT rules, a single belief function is obtained for all focal elements and subsets of the discernment framework. Then, for each scenario, the mass of the subsets, except the singletons, is redistributed by the pignistic transformation (Smets, 1990):

∀H_i ∈ Θ:  P_Θ(H_i) = Σ_{A ∈ 2^Θ, H_i ⊂ A} m_Θ(A) / |A|   (5)

where P_Θ(H_i) is the pignistic probability for H_i and |A| is the cardinality of A. In order to give a global evaluation for each scenario j and each alternative k, a score is defined:
S_k^j = λ · P_Θj(H_k)   (6)


where λ is an arbitrary crisp value and P_Θj(H_k) is the pignistic probability of the alternative k belonging to the scenario j. Finally, a score is obtained for each scenario of the array and for each alternative. The simulator must be able to simulate scenarios not tested by the respondents (there are only eight questions for the 128 possible combinations when the number of tested criteria is seven). We adopted a model based on the analysis of means in order to design the score function. It is an additive model with the mean effects of the criteria and of the studied interactions:

S = S_av + (a_1 a_2)·A + (b_1 b_2)·B + [a_1b_1 a_1b_2; a_2b_1 a_2b_2]·AB + ...   (7)

with a_i = S_av(A_i) − S_av and a_ib_j = S_av(A_i, B_j) − S_av − a_i − b_j, where S_av is the average of all scores; a_i are the matrix elements representing the effect of criterion A at level i; a_ib_j are the matrix elements for the mean effect of the AB interaction when A is at level i and B at level j; S_av(A_i) is the average of all scores where A is at level i; and S_av(A_i, B_j) is the average of all scores where A is at level i and B is at level j. For example, for a scenario with only two criteria (A at level 1, B at level 2), the deterministic part of the model gives:

S(A_1, B_2) = S_av + a_1 + b_2 + a_1b_2   (8)
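Equations (5) and (6) can be illustrated with a toy mass function over the alternatives of equation (1). The focal elements and their masses below are invented for the example, and λ is set arbitrarily:

```python
# Pignistic transformation (5): the mass of each non-singleton focal element
# is shared equally among the singletons it contains. Focal elements are
# frozensets of alternatives; the numbers are illustrative, not from the paper.
def pignistic(masses):
    singletons = set().union(*masses)
    p = {h: 0.0 for h in singletons}
    for focal, m in masses.items():
        for h in focal:
            p[h] += m / len(focal)
    return p

masses = {
    frozenset({"loops"}): 0.5,
    frozenset({"2nd car park"}): 0.2,
    frozenset({"loops", "illicit"}): 0.2,        # doubt between two alternatives
    frozenset({"loops", "2nd car park", "illicit", "defer"}): 0.1,  # ignorance
}
p = pignistic(masses)
print(round(p["loops"], 3))   # 0.5 + 0.2/2 + 0.1/4 = 0.625

# Score of equation (6) with an arbitrary crisp value lambda = 10.
scores = {h: 10 * v for h, v in p.items()}
```

The pignistic probabilities always sum to one, so the scores are directly comparable between alternatives within a scenario.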

With this type of model, the scores can be calculated for all scenarios and all alternatives that were not evaluated by questionnaire. The analysis of variance (ANOVA) retains only the significant criteria and interactions and strongly reduces the number of communications and the computing time. One of the interests of agent-based simulation is that the behavior of each agent can change thanks to a mechanism called learning. After choosing one alternative, an agent can change for the next trip: for example, if choosing an illicit place (dedicated for goods delivery) is sanctioned by a fine, the choice of the agent could be different for the


next simulation. For this, a joint study of the effects of input criteria (known before the beginning of the trip, e.g. the distance, the price, the duration of the delivery, etc.) and external criteria (resulting from the interrelations between agents, e.g. congestion, fines, etc.) must be done. The principle of the learning mechanism based on the stated preference analysis is shown in Figure 6. It has been presented exhaustively in our past works (Boussier et al., 2009). Based on this behavioral model, the end-user of the simulator is able to test several scenarios for different occupancy rates of the places dedicated to goods delivery and durations of the delivery process. He can capture the interaction between the actors of a car park (on-street or off-street) and is able to compute the consequences on traffic characteristics and on environmental quality.
3.2.4 Tools for evaluation of traffic characteristics and environmental impacts.
3.2.4.1 Traffic simulation. Our team developed a micro-simulator (Teng, 2008) based on the VIS-SIM prototype (Fotherby, 2002). The prototype uses the Model-View-Controller pattern. In agreement with real or virtual situations, the end-user is able to add or remove roads, junctions or car parks. The simulation is done with particular traffic characteristics (number of vehicles, percentage of vans, speed, number of places for goods delivery). The application also provides statistical results such as the average speed and the congestion level.
3.2.4.2 Environmental impacts. The traffic simulator is coupled with a software tool based on the COPERT III methodology (Ntziachristos and Samaras, 2000). The COPERT III methodology can be applied to the calculation of fuel consumption and traffic emissions (CO, NOx, VOC, PM) at a relatively high aggregation level, both temporally and spatially. For this, a traffic database must be exploited in order to quantify the number of vehicles and the average speed of the fleet.
When real data are absent (for example, for a virtual scenario), simulations with our VIS-SIM version are done.

4. The case study: Elcidis (at La Rochelle)
4.1 Interest of the electric van use
The Urban Community of La Rochelle launched the ELCIDIS experimental hub in February 2001 as part of the ELCIDIS European project. The objective was to optimize goods
Figure 6. Decisional and learning mechanisms (input criteria such as cost, distance and type of activity feed a decision-making model S(t0); its choice leads to an action; external criteria such as congestion feed an evaluation S(t) back into the model)
distribution in the historical city center with an environmentally friendly approach. This platform uses electric vans. Figure 7 (on the left-hand side) shows the number of deliveries and kilometers over 20 months. The average number of deliveries is about 320 a month; the trip distance of the small electric van fleet is about 2,300 km a month. Another expected result is the decrease of fuel consumption and of emissions (NOx, NO2, VOC, CO, CO2, PM2.5, PM10). For computing it, the hypothesis that the deliveries would otherwise be done by thermal vehicles has been considered (Figure 7, on the right-hand side). Because the average distance of a trip between the ELCIDIS platform and the customers is low (less than 10 km), the algorithm takes the cold emissions into account.

4.2 The case study, zone de la Coursive
Zone de la Coursive is an urban area with shops, companies and administrative centers. A car park located in the centre of this area has a capacity of about 50 places, with five places for goods delivery, generally booked all day (even though there are only ten deliveries a day on average, each lasting less than 15 min). The primary necessity in this area is to reduce the environmental impacts, which are very important because of the traffic congestion. Another objective is to reduce the illegal parking of private cars, especially on the places dedicated to goods delivery.
4.2.1 Present situation. The average values of flows per day, as well as the number of cars using the car park, have been obtained by experimental measurements and are presented below (Table III). For the location of inputs 1-3, see Figure 8 (on the left-hand side). For estimating the average value of the speed during these periods, simulations have been done.
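To make the coupling between the traffic simulator and the emission tool concrete, the COPERT-style aggregation can be sketched as follows. The emission-factor coefficients are placeholder values for illustration only, NOT the real COPERT III factors:

```python
# Hedged sketch of a COPERT-style aggregation: emissions = (vehicles * km)
# * EF(average speed), where EF is a speed-dependent emission factor.
# The quadratic coefficients below are invented placeholders.
EF_COEFFS = {  # pollutant -> (c0, c1, c2), giving g/km, hypothetical numbers
    "CO":  (2.0, -0.03, 0.0002),
    "NOx": (0.9, -0.01, 0.0001),
}

def emission_factor(pollutant, speed_kmh):
    """Speed-dependent emission factor in g/km (illustrative polynomial form)."""
    c0, c1, c2 = EF_COEFFS[pollutant]
    return c0 + c1 * speed_kmh + c2 * speed_kmh ** 2

def total_emissions(fleet, speed_kmh):
    """fleet: list of (number_of_vehicles, km_per_vehicle); result in kg."""
    vkm = sum(n * km for n, km in fleet)
    return {p: vkm * emission_factor(p, speed_kmh) / 1000 for p in EF_COEFFS}

# 150 cars doing 2 km each at the congested 15 km/h observed in the study area.
print(total_emissions([(150, 2.0)], speed_kmh=15))
```

Low average speeds increase the per-kilometer factors in such models, which is why the congestion created by searching for a place shows up directly in the Figure 9-style emission totals.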
Figure 7. (a) and (b) Graphs of parcel deliveries by small electric vans, and of the diesel and CO2 economy, per month


Figure 8 (on the right-hand side) shows the frame of the traffic simulation. The traffic model is a car-following model. The car park activities are represented by transferring vehicles out of or into the grid (in agreement with the studied period). Simulations are done even though all behavioral aspects of the light car driver agents are not implemented yet. The van driver agents have their destinations, itineraries and information about the places dedicated for delivering. Unfortunately, the normal period and the peak period are the same for the light cars and for the goods delivery vans. The car park is full and the places for goods delivery are booked (sometimes without reason). The light car driver agents must perform urgent activities or flexible activities. The method used to organize their planning has been detailed in our previous works (Boussier et al., 2005). For now, the simulations are done in a randomized manner. According to the behavioral model obtained by the method presented in the past sections, most of the drivers of light cars would prefer two alternatives:
(1) For a flexible activity (administrative procedures, recreational activities or shopping): agents prefer to do loops in the area (for about 8 min). As a consequence, the simulation gives a very low average speed during these periods (15 km/h) due to the congestion created by the search for an available place to park the light cars.
(2) For an urgent activity (work, meeting): agents prefer to park in an illegal manner within the zone. To illustrate the consequence of this choice, the Council
Periods for traffic level   Hours                                Flow/hour (inputs nos 1-3)   Number of parked cars
1 out of peak               8 pm-7 am                            50-10-5                      20
2 normal                    10 am-12 am, 6 pm-8 pm               150-35-20                    Full
3 peak                      7 am-10 am, 12 am-2 pm, 4 pm-6 pm    200-50-20                    Full

Table III. Characteristics of traffic level and occupancy rate of the car park

Figure 8. Map of the studied area and graphical user interface of VIS_SIM

Police provided us with data on the parking fines issued in Zone de la Coursive in 2007 (the results are rigorous because the Municipal Police station faces this car park). Table IV presents an extract of the data from June 2007 to December 2007. Because of the large number of cars running at low speed, other consequences of the car park activities are the fuel consumption and the emissions. The results obtained by the COPERT software (taking into account the length of the trips in this area) are shown in Figure 9.
4.2.2 Expected situation after the sharing of car parks. Now suppose that the booking of the places for goods delivery is shared between the van driver agents and the light car driver agents. We also consider that the delays (durations) are respected. Simulations have been done only for the normal and peak periods, using the same inputs (characteristics of traffic flows, occupancy rate of the car park, behaviors of agents). The number of cars using the available places when no delivery is in progress is not very large (only eight cars can park). What is important in this case is the reduction of the congestion level generated by searching for an available place and the decrease of the number of km traveled. Figure 10 shows the evolution of the fuel consumption and the corresponding emissions in this case.

5. Works in progress
5.1 A three-dimensional simulation
The 2D framework cannot capture the reactions of individuals to particular traffic conditions (congestion level, occupancy rate of a road). Laboratory L3i (La Rochelle University) provided a framework to easily build the model of an existing or future system (Augeraud et al., 2005). Figure 11 shows the development
Street name (rue de [...])   June   July   August   September   October   November   December
Verdière                     2      6      11       7           2         1          2
St Pères                     0      0      1        0           0         0          0
Cloche                       1      0      2        0           0         1          0
Total                        3      6      14       7           2         2          2


Table IV. Number of fines in the Coursive area between June and December 2007

Figure 9. (a) and (b) Fuel consumption and emissions in the Coursive area (average fuel consumption of diesel and gasoline, and CO, NOx and VOC emissions in kg, for the out-of-peak, normal and peak periods)

IJPDLM 41,9


environment for building a virtual city in which different types of vehicles (cars, goods vans) move according to their diaries or tour planning. The end-user of this tool can instantiate and assemble components in order to model a city and its traffic network. He can change infrastructures, create dedicated lanes for heavy vehicles, place signs and vertical signals such as traffic lights or road panels, and define driving rules and goods distribution rules for the simulated city. Buildings can be changed; we treat buildings as agents because they are intelligent: they have their own strategies to produce and consume people and goods. The design of the adaptive ground model has been presented exhaustively in previous work (Augeraud et al., 2005). The advantages of the simulation are:
. It is possible to simulate the traffic and environmental impacts of both scenarios (sharing the street or sharing the car park places) by changing infrastructure characteristics.
. The 3D framework is able to capture the reactions of individuals to particular traffic conditions (congestion level, period, state of use of the roads).
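The component-assembly idea described above (instantiating building and vehicle agents and wiring them into a city model) might be sketched as follows; all class and attribute names are illustrative and do not reflect the L3i framework's actual API:

```python
# Minimal sketch of the component-assembly idea: an end-user instantiates
# building and vehicle agents and assembles them into a city model.
class BuildingAgent:
    """A building that 'produces' and 'consumes' people and goods."""
    def __init__(self, name, produces_per_tick=0, consumes_per_tick=0):
        self.name = name
        self.produces = produces_per_tick
        self.consumes = consumes_per_tick

class VehicleAgent:
    """A car or goods van that moves according to its diary (tour plan)."""
    def __init__(self, kind, diary):
        self.kind = kind          # "car" or "van"
        self.diary = list(diary)  # ordered list of stops
        self.position = None

    def step(self):
        if self.diary:
            self.position = self.diary.pop(0)

class CityModel:
    def __init__(self):
        self.buildings, self.vehicles = [], []

    def add(self, component):
        (self.buildings if isinstance(component, BuildingAgent)
         else self.vehicles).append(component)

    def tick(self):
        for v in self.vehicles:
            v.step()

city = CityModel()
city.add(BuildingAgent("shop", produces_per_tick=3))
city.add(VehicleAgent("van", diary=["depot", "rue de la Cloche", "depot"]))
city.tick()
print(city.vehicles[0].position)  # -> depot (the van's first diary stop)
```

The interactions of many such agents, each following only its own diary and strategy, are what generate the road and car park activity observed in the simulation.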
Figure 10. (a) and (b) Expected values of fuel consumption and emissions in zone de la Coursive [charts: average fuel consumption (Diesel, Gasoline) and emissions in kg of CO, NOx and VOC, for normal, peak and out-of-peak periods]

Figure 11. (a) and (b) A block for the 3D representation and the development environment of the adaptive city model

5.2 A more realistic behaviour of agent drivers of light vehicles
In a 3D environment, the behaviour of the agents is more complicated to describe and depends on more criteria than in a 2D framework (e.g. the possibility of moving to another lane). The same method presented in the previous sections (stated preferences) is used for behavioural modelling. The questions are presented using a combination of linguistic criteria and criteria captured by pictures. One example is shown in Figure 12. The linguistic terms describe one particular situation (A, period; B, type of activity; C, number of lanes; etc.) and the picture gives spatial details about the 3D situation (Browne, 2009). For this example, the scenario is: period, peak hours; activity, work; number of lanes, one; light, green; dedicated place for delivering, absent; goods van partially blocks the way; pedestrian, absent; place to pass, yes; driver of the van, out of danger; visibility, good; position in the car-following list, first. Each respondent rated each scenario with a preference converted into a crisp value (for example, "I'll be very nervous" is translated into a score S = 1). For each scenario, the average of all responses is computed and considered representative of the behaviour of a statistically significant sample. The analysis of variance will allow us to identify the main parameters which affect the perception of a situation and govern the behaviour of the agent drivers, in order to reduce the amount of information transmitted to each agent. Once the behavioural models for agents of light cars, agents of vans and stopping agents are implemented, several scenarios can be simulated. Decision makers will then be able to select optimal delivery sites and/or periods to ensure a good coexistence between all actors of urban traffic.
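The aggregation just described (converting each response to a crisp score, averaging per scenario, then estimating each factor's main effect from the orthogonal array) can be sketched as follows; the factor labels, levels and averaged scores are invented for illustration:

```python
# Sketch of the stated-preference aggregation over a small two-level
# orthogonal array (an L4 with three factors A, B, C). Each row is one
# scenario; "score" is the average of the crisp respondent scores.
from statistics import mean

scenarios = [
    {"A": 1, "B": 1, "C": 1, "score": 1.8},
    {"A": 1, "B": 2, "C": 2, "score": 3.1},
    {"A": 2, "B": 1, "C": 2, "score": 2.4},
    {"A": 2, "B": 2, "C": 1, "score": 3.9},
]

def main_effect(factor):
    """Difference between mean score at level 2 and at level 1 of `factor`."""
    lvl1 = mean(s["score"] for s in scenarios if s[factor] == 1)
    lvl2 = mean(s["score"] for s in scenarios if s[factor] == 2)
    return lvl2 - lvl1

# Rank factors by the magnitude of their effect on the perceived score
effects = {f: main_effect(f) for f in "ABC"}
print(sorted(effects, key=lambda f: abs(effects[f]), reverse=True))
# -> ['B', 'A', 'C']
```

Factors with negligible main effects (here C) are candidates for removal from the information transmitted to each agent, which is exactly the reduction the analysis of variance aims at.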
6. Conclusions and perspectives
The purpose of this paper was to develop a framework, based on agent technology, for simulating the impacts of the coexistence of light cars and vans making goods deliveries. Because urban areas offer only limited opportunities to enhance the physical capacity of surface road infrastructure, the retained idea is to analyze the impacts


Figure 12. (a) and (b) Orthogonal array for collecting perceptions/picture illustrating one scenario [table: two-level (1/2) settings of factors A-H and J-L across 12 scenario rows]


of a spatial and temporal dynamic booking of on-street stopping places (to park light cars or to deliver goods). This paper focuses on the management process of parking places shared between car drivers and van drivers. To this end, we have provided a framework for easily building the model of an existing or future system. The simulation tool is based on the multi-agent paradigm. Several types of agents (private cars, freight vehicles, control, stopping) are built, and their interactions generate the activity of roads and car parks. The drivers of private cars are agents able to decide and to learn, in accordance with a realistic behavioural model. Their behaviour depends strongly on the period of the day (peak or out-of-peak) and on the type of activity to perform (work, shopping, etc.). The behavioural model is designed with the stated preferences methodology. One application is presented in order to propose an optimisation of the goods delivery process in the urban area of La Rochelle by sharing the parking places between drivers of private cars and drivers of vans making goods deliveries. Several aspects are under development: a 3D ground for simulation able to capture the agents' perception at spatial scale, and a more exhaustive behavioural model, for which scenarios are built using linguistic terms and pictures to capture the attributes of the road.
References
Arnott, R. and Inci, E. (2006), "An integrated model of downtown parking and traffic congestion", Journal of Urban Economics, Vol. 60, pp. 418-42.
Augeraud, M., Boussier, J.M., Colle, F., Estraillier, P. and Sarramia, D. (2005), "Simulation approach for urban traffic system: a multi-agent approach", Proceedings of the International Conference on Industrial Engineering and Systems Management, Marrakech, Morocco, 16-19 May.
Ben-Akiva, M., Bowman, J., Ramming, S. and Walker, J. (1998), "Behavioral realism in urban transportation planning models", Proceedings of Transportation Models in the Policy-making Process, A Symposium in Memory of Greig Harvey, Pacific Grove, CA, 4-6 March.
Boerkamps, J. and Binsbergen, A.V. (1999), "GoodTrip: a new approach for modelling and evaluation of urban goods distribution", paper presented at the Urban Transport Systems Conference, Lund, Sweden, 7-8 June.
Boussier, J.M., Sarramia, D. and Estraillier, P. (2009), "Decision support for optimization of car park activity in an urban area", Journal of Advanced Transportation, Vol. 43 No. 2, pp. 45-58.
Boussier, J.M., Estraillier, P., Augeraud, M. and Sarramia, D. (2005), "Agenda elaboration of driver agents in a virtual city", Proceedings of the Workshop on Multi-agents for Modelling Complex Systems, European Conference on Complex Systems (ECCS'05), Paris, France, p. 15.
Browne, M. (2009), "Supply chain for goods delivery", paper presented at the EIGSI Conference, La Rochelle, France, 1-2 December.
Cavezzali, A., Girotti, A. and Rabino, G. (2003), "Multi-agent systems and territory: concepts, methods and applications", European Regional Science Association (ERSA) Conference Papers.
Fotherby, T. (2002), "Visual traffic simulation", Final Report of MEng Computing Degree, Department of Computing, Imperial College, London, June.
Fridén, L., Engelson, L., Scheele, S., Tavasszy, L., Ruijgrok, C. and Sundell, L. (2003), DISTRA: Pre-Study on Modelling Local/Regional Distribution and Collection Traffic, INREGIA TNO Inro Statistics, Sweden (Swedish) (rev. 04-29).
Gorys, J. and Hausmanis, I. (1999), "A strategic overview of goods movement in the Greater Toronto Area", Transportation Quarterly, Vol. 53 No. 2, pp. 101-14.
Harris, R.I. and Liu, A. (1998), "Input-output modelling of the urban and regional economy: the importance of external trade", Regional Studies, Vol. 32 No. 9, pp. 851-62.
He, S. and Crainic, T.G. (1998), "Freight transportation in congested urban areas: issues and methodologies", Proceedings of the 8th World Conference on Transport Research, Antwerp, Belgium, 12-17 July.
Hess, S. and Polak, J.W. (2004), An Analysis of Parking Behaviour Using Discrete Choice Models Calibrated on SP Datasets, European Regional Science Association (ERSA), Vienna.
List, G. and Turnquist, M. (1994), "Estimating multi-class truck flow matrices in urban areas", Proceedings of the 73rd Annual Meeting of the Transportation Research Board, Washington, DC, January.
Meimbresse, B. and Sonntag, H. (2000), "Modelling urban commercial traffic with the model WIVER", Etudes et Recherches LET, Vol. 15, pp. 93-106.
Munuzuri, J., Larraneta, J., Onieva, L. and Cortes, P. (2004), "Estimation of an origin-destination matrix for urban freight transport: application to the city of Seville", in Taniguchi, E. and Thompson, R.G. (Eds), Logistics Systems for Sustainable Cities, Elsevier, Oxford, pp. 67-82.
Ntziachristos, L. and Samaras, Z. (2000), "COPERT III computer program to calculate emissions from road transport: methodology and emission factors", Technical Report No. 49, European Environment Agency, Copenhagen.
Routhier, J.L., Segalou, E. and Durand, S. (2001), Mesurer l'impact du transport de marchandises en ville: le modèle de simulation FRETURB, LET-DRAST-ADEME, Paris.
Shafer, G. (1976), A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ.
Smarandache, F. and Dezert, J. (2004), Advances and Applications of DSmT for Information Fusion, Vol. 1, American Research Press, Rehoboth.
Smets, P. (1990), "Constructing the pignistic probability function in a context of uncertainty", Uncertainty in Artificial Intelligence, Vol. 5, pp. 29-39.
Taguchi, G. (1987), System of Experimental Design, UniPub/Kraus International Publications, New York, NY.
Teng, F. (2008), "Transferability of a transport project by using the up-scaling principle", Final Report of Master Degree, EIGSI, La Rochelle, June.
Thompson, R. and Taniguchi, E. (1999), "Routing of commercial vehicles using stochastic programming", Proceedings of the 1st International Conference on City Logistics, Cairns, Australia, 12-14 July.
Wisetjindawat, W., Sano, K., Matsumoto, S. and Raothanachonkun, P. (2007), "Micro-simulation model for modeling freight agents interactions in urban freight movement", Proceedings of the 86th Annual Meeting of the Transportation Research Board, Washington, DC, 21-25 January.


About the authors
Jean-Marie Boussier is a Doctor in Computer Sciences of EIGSI. Since 2002, with a team of the L3i laboratory, his research work has concerned the design of realistic behaviour for agents representing city road network users in a dynamic context. For that, a methodical framework has


been proposed, integrating data mining and stated preference surveys. Jean-Marie Boussier is the corresponding author and can be contacted at: jean-marie.boussier@eigsi.fr
Tatiana Cucu obtained the Master's Degree in Industrial Economics, Micro-economics and Econometrics of Paris. She has been involved in research work in econometrics, such as the evolution of the water price or the choice of transportation modes. Her thesis work focuses on cost-benefit analysis in transportation projects.
Luminita Ion is a Doctor in Physical Sciences. Since 1997, she has been testing different approaches for modelling complex systems (design of experiments, data fusion, multi-criteria analysis) in battery tests and the urban transport field. She is a member of the CIVITAS project and studies impacts and new measures to improve people's mobility in sustainable cities.
Dominique Breuil obtained his Doctorat d'Etat ès Sciences at Bordeaux University in Production Management. A researcher in CNRS, he then joined engineering companies for ten years. Currently, he is Head of the Research Department of EIGSI, an engineering school at La Rochelle, in which his team works on urban mobility at the European level.


Journal of Enterprise Information Management


Emerald Article: Theory of deferred action: Agent-based simulation model for designing complex adaptive systems Nandish V. Patel, Tillal Eldabi, Tariq M. Khan

Article information:
To cite this document: Nandish V. Patel, Tillal Eldabi, Tariq M. Khan, (2010),"Theory of deferred action: Agent-based simulation model for designing complex adaptive systems", Journal of Enterprise Information Management, Vol. 23 Iss: 4 pp. 521 - 537 Permanent link to this document: http://dx.doi.org/10.1108/17410391011061780 Downloaded on: 26-08-2012 References: This document contains references to 48 other documents Citations: This document has been cited by 1 other documents To copy this document: permissions@emeraldinsight.com This document has been downloaded 280 times since 2010. *


The current issue and full text archive of this journal is available at www.emeraldinsight.com/1741-0398.htm

Theory of deferred action


Agent-based simulation model for designing complex adaptive systems
Nandish V. Patel, Tillal Eldabi and Tariq M. Khan
Brunel Business School, Brunel University, Uxbridge, UK
Abstract
Purpose – The purpose of this paper is to address the problem of designing artificial complex adaptive systems, like information systems and organisations, by developing a proof-of-concept conceptual proto-agent model.
Design/methodology/approach – The paper develops an exploratory proto-agent model and evaluates its suitability for implementation as agent-based simulation.
Findings – The paper focuses on understanding the effect of emergence when designing artificial complex adaptive systems and produces a proto-agent model that identifies agents and their behavioural rules for modelling.
Practical implications – In deferred action, agents act in emergent organisation to achieve predetermined goals. Since emergence cannot be predicted, information systems and organisation design approaches that cater for emergent organisation are required.
Originality/value – The deferred action construct is a synthesis of planned approaches and contingency approaches to designing information systems. It recognises the effect of emergence on information systems.
Keywords Design, Simulation, Information systems
Paper type Conceptual paper

Theory of deferred action

521
Received March 2009
Revised April 2009, December 2009
Accepted December 2009

Introduction
We consider what a general theory of complex systems, or complexity theory, can contribute to information systems theory and information systems (IS) design theory. A particular challenge for IS theory is to find principles that explain regulatory behaviour in emergent organisation. We need to understand the degree to which the ontogeny (pathway of development) of IS is unique or deterministic in order to better understand IS development and use in organisations, particularly in emergent organisations. We argue that business organisations, IS, and information technology (IT) systems can be better understood and designed if we characterise them as complex adaptive systems; each is a complex adaptive system. Hevner et al. (2004) define an information system as people, organisation and IT. We argue that such an information system is a complex adaptive system. A complex adaptive system is self-organising and has regular behaviour that distinguishes it from its environment. The environment of a complex adaptive system is more complex than the system itself, and the environment cannot be predicted; the system has to rely on sensing regularities in the environment to determine its behaviour (Jost, 2003). Complex adaptive systems have emergent behaviour, and are therefore difficult to predict. Consequently, it is problematical to design a system with unpredictable behaviour. However, we can define a set of primitive elements and define their

Journal of Enterprise Information Management
Vol. 23 No. 4, 2010
pp. 521-537
© Emerald Group Publishing Limited
1741-0398
DOI 10.1108/17410391011061780

JEIM 23,4


behaviour, determine how the primitives as a whole interact with the environment, and then study the emergent collective behaviour of the system. A specific design problem can then be solved by mapping it into the initial states of the complex system and defining the dynamics (Steels, 2000, in Schroenauer et al., 2000). The design of complex adaptive systems is problematical. Emergence is a practical design problem because predicting when it happens, and its effects, is impossible. To cope with emergence, designers need to become comfortable with uncertainty (emergence) and design artefacts with emergent properties.

IS theory
There are many theories used in IS (AIS, 2009) but few IS design theories exist. Explanatory frameworks, exemplar models, and design principles stemming from IS behavioural science are numerous but inappropriate. Recent examples of IS design theories are Walls et al.'s (1992) IS design theory, Markus et al.'s (2002) IS design theory for emergent knowledge processes, and Arnott's (2006) decision support systems design theory. Although IS design theories can benefit from knowledge established by IS behavioural science (March and Smith, 1995), the core of an IS design theory should be established by the IS design science epistemology because of its relevance for experimental IS. An IS design science research project is an applied research project and its outcomes are new theory and application (a product or IT artefact). There are no theories on the effect of emergence on IS research. There is explanatory research on emergence in IS (Baskerville et al., 1992; Truex et al., 1999, 2000). There is also research in the related areas of ad hoc IS development, evolutionary information systems, adaptive information systems and bricolage. Bricolage explains IT infrastructure design as "drift" from planned action and naively neglects "the need for management control" (Ciborra and Hanseth, 2000, p. 2).
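The strategy sketched above, defining primitive elements with local behavioural rules and then studying the collective behaviour of the whole, can be illustrated with a deliberately tiny example; the majority rule used here is an arbitrary stand-in for any local rule:

```python
# Minimal illustration of "define primitives, study emergent behaviour":
# cells on a ring each follow one local rule (copy the majority of their
# three-cell neighbourhood), yet globally the system self-organises into
# homogeneous blocks that no single cell controls.
def step(cells):
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2
            else 0 for i in range(n)]

state = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1]
for _ in range(5):          # iterate the local rule
    state = step(state)
print(state)                # -> [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
```

The stable block pattern is not written anywhere in the rule; it emerges from the interaction of the primitives, which is exactly the property that makes such systems hard to design directly.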
Research has drawn on complexity theory, whose central tenet is emergence, to explain IS (Kallinikos, 2006). Some of this research considers implications for IS design and development but does not address them sufficiently practically (Moser and Law, 2006). Patel (2002, 2003, 2005) suggests practical design principles for developing complex adaptive IS. There is similar explanatory research on emergence in organisation studies, with obvious implications for organisation design. Emergent organisation is a theory of social organisation that "does not assume that stable structures underpin organizations" (Truex et al., 1999, p. 117; Truex and Klein, 1991). Feldman's (2000) study reveals that even routines are a source of continuous change. Feldman (2004) states that organisational structure is emergent and that it affects the allocation of organisational resources. Emergent organisations experience sudden and unexpected change, resulting in structures, processes, and resources becoming unstable and difficult to predict. Emergent organisation affects actors' need for information (Truex et al., 1999; Patel, 2002) and results in emergent knowledge processes (Truex et al., 1999; Markus et al., 2002; Patel, 2005). Emergence also affects technology use. Ali and Brooks (2008) discuss a framework of situated IS culture, which reflects on how actors' interaction with IT systems is an emergent structure. They further develop a practice lens based on situated use of IT "that makes no assumption about stability, predictability, or relative completeness of technology" (Ali and Brooks, 2008, p. 557). This explanatory stream of research does not consider practical design issues and acknowledges that current design techniques are limited (Truex et al., 1999). Patel

(2006) addresses how to design organisations, IS, and IT systems to reflect emergence. He seeks to rationally design complex adaptive systems that exhibit emergent behaviour. Since the actual emergence cannot be predicted, the problem is giving design capability to agents (organisational actors) so they can design in emergent environments. Our research programme, rooted in design science, is to develop design theory and modelling techniques for designing artificial complex adaptive systems (ACAS) with emergent properties. We invoke the theory of deferred action because of its axiom that many IS, and aspects of IS, emerge in organisations. It provides the theoretical framework for designing capability for agents to act in emergent organisation. The importance of context and emergence in the related field of intelligent systems research is briefly described. Through agent modelling we aim to improve understanding of how to design ACAS. We develop an exploratory proto-agent model of emergent organisation and emergent IS, suitable for later implementation as a multi-agent based simulation, and we evaluate NetLogo as the implementation language.

Design science and complexity
Simon (1996) defines design science as:
Everyone designs who devises courses of action aimed at changing existing situations into preferred ones.


It aims to create knowledge for the purpose of teleological design (Banathy, 1996). In IS research, design research studies algorithms, human/computer interfaces, design methodologies (including process models), and computer languages as outcomes to improve IT artefacts (Hevner et al., 2004; Vaishnavi and Kuechler, 2004/2005). Complex systems that seek to adapt are termed complex adaptive systems (Axelrod and Cohen, 2000). Complex adaptive systems (CAS) are collections of elements that behave collectively, where no single element has complete control over the behaviour of the whole system. A CAS acquires information on its environment and on its own interaction with that environment. The information is structured into regularities and then condensed into a schema for acting. Adaptation of the system to its environment occurs when it changes itself and its schema when they are inappropriate. This is self-organisation in response to the information (Gell-Mann, 1994). Examples of CAS social systems are the scientific enterprise, the economy, populations, organisations, and IS. Self-organisation and emergence are characteristics of CAS. Emergence is the resulting behaviour of a collection of individual agents in which no one agent has complete control over the whole system. Complex adaptive systems regularly exhibit emergent behaviour (North and Macal, 2007). Self-organisation is the response of a CAS to random change that causes the system to become unstable. When existing order is disturbed by change, stability is restored by the system self-organising without any external causative factor. Restored stability is an emergent order and results in new structure that is intrinsic to the system, not caused by an external factor. A business organisation is self-organising if no external influence does the self-organising for it. The global IS, the World Wide Web, is a highly complex adaptive system designed by humans. Strategic IS and inter-organisational IS are also designed CAS.
These are open socio-technical systems because they are affected by the environment.
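Gell-Mann's sense-condense-act-adapt loop summarised above can be sketched schematically; the class below and its "most common value" schema are illustrative simplifications, not a model from the cited literature:

```python
# Sketch of the CAS loop: the system senses regularities in its
# environment, condenses them into a schema, acts on the schema, and
# revises the schema when it proves inappropriate (self-organisation).
from collections import Counter

class AdaptiveSystem:
    def __init__(self):
        self.schema = None            # condensed regularity

    def sense(self, observations):
        """Condense observed data into a schema (here: most common value)."""
        self.schema = Counter(observations).most_common(1)[0][0]

    def act(self):
        return self.schema            # behave as the schema predicts

    def adapt(self, outcome):
        """Replace the schema when the environment contradicts it."""
        if outcome != self.schema:
            self.schema = outcome

sys_ = AdaptiveSystem()
sys_.sense(["calm", "calm", "congested", "calm"])
print(sys_.act())        # -> calm (the sensed regularity)
sys_.adapt("congested")  # environment changed; schema revised
print(sys_.act())        # -> congested
```

The point of the sketch is structural: no external controller rewrites the schema; the system revises itself in response to the information it senses.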


Intelligent systems and emergence
The goal of intelligent systems research has for many years been to embody human-like intellectual capabilities within artificial machines. The purpose was to emulate the exceptionally complex and creative reasoning abilities people have: originally as a simulation of the hypothesised cognitive mechanisms thought to be employed by humans but later, following the realisation that much of the knowledge brought to bear during decision making is inexplicable and tacit, the purpose realigned closer to a composition of cognitive, social and behavioural elements. Context plays as important a role in forging behaviour as logical reasoning. Traditionally, the view has been that goal-based reasoning was sufficient to explain how people make decisions in real life. It is now clearer that plans, for instance, created a priori and out of the context in which they will be executed, cannot be guaranteed to play out as expected. This is primarily because execution must take place within the context of unpredictable social systems, i.e. with the involvement of people who are governed not just by their cognitive processing of the situation but also by their psychological and physiological needs (often without cognisance). The influence of situational factors can significantly draw individuals away from their neat predetermined course of action. Therefore, for a realistic account of human behaviour in social systems, the notion that nothing goes according to plan rings true. Internal factors (cognitive, psychological, emotional, and physiological) and external factors (environmental and social) collude to bring about an unexpected and unpredictable context within which an information system (i.e. a human decision maker) must process incoming data and information, bring to bear their knowledge and apply their skills to exhibit what an observer would consider rational behaviour.
The constructivist view would have it that from this new context emerges new knowledge and, correspondingly, a new information system with different (usually better) capabilities. One significant research programme that promotes this view of socially oriented systems has sought to model social elements of behaviour beyond the cognitive, to understand the motivational factors driving apparent behaviour. Clancey et al. (2005) propose the BRAHMS simulation system for modelling realistic behaviour within the context of activities. The premise underlying this work is that information is selected to be processed according to the role and activity relevant at the time, and such information might not be readily included in an idealised task model or abstract representation of the procedure. Processing of information is interruptible when the focus of activities changes, and emergent quantities may arise, such as improvised conversations or other unplanned interactions. The BRAHMS project aligns well with deferred systems work and confirms the connections between IS research and developing ideas in the area of cognitive systems.

Designing IS for emergent organisation
Currently, data and information are understood to have stable properties that are predictable and structurable. A redefinition of information ontology is needed to design IS for emergent organisation. It is problematical to predetermine and structure information in emergent organisation. Emergence affects data less than information: names of customers, addresses or products manufactured do not change often, but the effect on information is significant.

Information is currently defined as processed data:

Data + algorithm = information. (1)


For emergent organisation, we redefine information as processed data in the context of emergence:

Data + emergence + contextual algorithm = information. (2)
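The contrast between definitions (1) and (2) can be made concrete: with a fixed algorithm the same data always yields the same information, while under emergence the algorithm applied is selected by an unpredictable context. The contexts and processing rules below are illustrative:

```python
# Definition (1): a fixed algorithm over stable data.
def fixed_algorithm(data):
    return sum(data) / len(data)

# Definition (2): the algorithm applied depends on the emergent context,
# so the same data yields different information in different contexts.
def contextual_algorithm(data, context):
    if context == "routine":         # stable, pre-specifiable processing
        return sum(data) / len(data)
    if context == "disruption":      # emergent situation needs extremes
        return max(data)
    raise ValueError("unanticipated context -- cannot be pre-specified")

demand = [10, 12, 11, 30]
print(fixed_algorithm(demand))                    # -> 15.75
print(contextual_algorithm(demand, "routine"))    # -> 15.75
print(contextual_algorithm(demand, "disruption")) # -> 30
```

The raised error in the last branch mirrors the design problem the paper identifies: contexts that emerge suddenly cannot be enumerated at design time.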


In emergent organisation, information is dependent on context. Its qualities are suddenness, change, uncertainty, and unpredictability. Emergent (unpredictable) organisational situations arise in the course of organisational life for which information is needed, which makes information dependent on emergence. Such information cannot be pre-specified for design purposes. Therefore, in emergent organisation, information ontology is both static and emergent, as defined above. Static elements are knowable, predictable, and specifiable for design purposes; transaction processing systems are examples. The information required to manage a motorcycle production process can be predetermined and specified in order to design and develop the appropriate module of an ERP system. Emergent elements are not knowable in advance because emergent events occur suddenly and unpredictably, so the information cannot be specified in advance. IS affected by emergence include strategic IS, decision support systems and the web.

The theory of deferred action
Theory-building requires principles. IS design theory should adhere to two central principles for developing theory: the principle of complementarity and the principle of consistency. The principle of complementarity is that a complete description of IS requires both IS behavioural science and design science knowledge. Since the interest is in IS as a rationally designed artefact, it can only be completely explained as the complement of behavioural science and design science. For the principle of complementarity to hold, knowledge accepted as valid in IS behavioural science should be such that it informs IS design. Similarly, knowledge accepted in IS design science should be such that it could be the basis for theorisation in IS behavioural science. IS knowledge established by behavioural science can be utilised for IS design and contribute to the development of IS design theory.
IS knowledge established by design science can be utilised to develop better explanations of the IS phenomenon in IS behavioural science, and ultimately for IS design practice. The principle of complementarity is significant. Behavioural science theory developed to explain IS should be usable to understand the design of IS in design science. Such complementarity, or duality of theories, is common in physics (Randell, 2005). So an explanation of information as phenomenological in behavioural science theories should be such that it can contribute to IS design theory and be utilised in IS design practice. At present, phenomenologically and sociologically derived explanations of IS are not transferable to IS design science and IS design practice, especially software engineering for IS. Critical philosophy explanations of IS are similarly confined. Thus the central power construct in critical theory is significant for understanding IS in organisation, but it should satisfy the principle of complementarity such that it is transferable to IS design science.


The principle of consistency is that the results of IS behavioural science should be transferable to IS design science's body of usable knowledge, and that designed artefacts in turn are observable as IS phenomena by IS behavioural scientists. For the principle of consistency to hold, behavioural science and design science explanations should, at a minimum, address the set of things of interest to IS designers. IS behavioural science may address things beyond this set that constitute the wider IS phenomenon, but explanations should not contradict the practice of IS development by IS developers and IS usage by organisational actors. Explanations that are some orders removed from practice, such as emancipation (Hirschheim and Klein, 1994), are nevertheless valuable, but they are not transferable as usable knowledge. Similarly, design science explanations of the IT artefact should not contradict empirical observations of IS, which are used by IS behavioural science to develop theory and analytical frameworks. These two principles encompass the IT artefact and IS design theories and permit the use of the IS design science epistemology as a kind of experimental domain of the IS field. IS behavioural science and design science explanations and techniques should extend to the practice of design. An IS design theory should link behavioural science, design science and practice by adhering to these principles. Valid knowledge from the IS behavioural and design sciences should be such that it can inform actual IS design. This is necessary because IS is an applied discipline. Although rational planning is necessary, in the context of emergence it is insufficient as the sole design dimension. Its scope is limited because agents modify their behaviour in the environment, resulting in emergent organisation. The theory of deferred action provides an understanding of systemic emergence for designing complex adaptive systems (Patel, 2006).
It is a generic artefact design theory for emergent organisations. In Gregors (2006) terms, it is a theory for action and design and therefore it informs design practice. Nomothetically, it explains and suggests effective models for organisation design, IT systems design, IS and KMS design, where emergence is a critical design factor. Here we invoke it to improve emergent IS and IT artefacts design. Three design dimensions are postulated in the theory of deferred action: planned action, emergence and deferred action, and their interrelationship constitute rational design of complex adaptive systems, as depicted in Figure 1. The theory assumes business organisations rationally determine goals and rationally plan to realise them. This is bounded rationality and acknowledges the sense-making of organisational actors (Weick, 2004). A plan is any artefact whose purpose is to construct the future such as strategic business plans or new systems design. However, the theory assumes actual organisational behaviour results in emergent organisation. Therefore, since rational behaviour is tempered by emergent behaviour the latter needs to be catered actively in the rational plan. A further assumption is that actuality is emergent and takes precedence over central plans but agents actions are constrained by the plan. Therefore, plans accommodate actuality and the teleological purpose of the system should not be deected by the emergence. Planned action Planned action, boundedly rational design, looks at future states of systems, designing new systems and enhancing existing systems. It develops new systems futures

Theory of deferred action

527

Figure 1. Deferred action design dimensions for designing articial complex adaptive systems

drawing on existing knowledge bases. The innovation of a new information system draws on existing knowledge bases for developing IT systems, such as IS methodologies and design languages like UML. When planned action is not affected by emergence, systems can be specified, as depicted at point B in Figure 1. These are called specified systems. Planned action is undertaken centrally. It may take the form of IS plans or management strategies. It is action prescribed by design and enacted regardless of actuality: for example, a three-year strategic plan or a formal systems design for ERP systems. Planned action characterises organised action exclusively as rational action. It is useful for design problems that can be predetermined and well structured, and for solutions that can be predetermined, requiring explicit and declarative knowledge. It assumes stable organisation structure and processes and negates emergence. Planned action is necessary but not sufficient for designing CAS.


Emergence
Emergence is the patterns that arise through interactions of agents, interactions between agents and IT artefacts, and agents' responses to the environment. Emergence is a becoming aspect of design. It affects design processes and the designed systems. Agents act locally in emergent situations. So, emergence requires present, contextual, and situational aspects to be factored into design based on past histories. To design CAS, planned action prescriptions need to cater for emergence. When planned action is affected by emergence, systems cannot be completely specified. It is necessary to relate by synthesis planned action and emergence to design emergent IT artefacts, as depicted by points A and D in Figure 1. Planned action and emergence are related design dimensions when designing for emergent organisation.

Deferred action
Deferred action is the synthetic outcome of relating planned action and emergence for designing CAS. Agents undertake deferred action, within planned action, but their action is determined by and enacted in the emergent context. Thus adaptableness and self-organisation, characteristic of CAS, are facilitated as deferred action to operationalise CAS. Deferred action is necessary to design successful CAS. Deferred action reflects emergence, space (location), and time in planned action. It contextualises planned actions in emergent situations. Since emergence is unpredictable, agents should be enabled to respond to it in particular organisational situations. Deferred action enables agents to modify an IS within the context of its use. So, systems at points A and D in Figure 1 should provide actors with deferred action capability. The IS product is conceptualised as a continuous design and development process, rather than a time-bound, predetermined product. The interrelationships among these design dimensions are detailed in Table I, and they model designed systems in emergent actuality. Actuality is never sympathetic to plans. Plans are subject to systemic emergence and require an adequate embodied and situational response. In rationally designed CAS this response is deferred action.

Table I. Design dimensions for designing complex adaptive systems
Planned action: Rational planning is necessary to set and achieve organisational goals, and to build goal-oriented structures and processes.
Emergence: Agents' local responses to the environment create emergent situations. Emergence requires systems design and organisation design to be continuous.
Deferred action: Deferred action takes place within planned action in response to the emergent locale. It synthesises planned action and emergence.
Note: Synthesis of these constructs results in four system types: deferred systems (point A), specified systems (point B), autonomous systems (point C), and real systems (point D) in Figure 1. These types are also generic design types, systems types and organisation types.

To illustrate, Google's organisation has the three design dimensions. Google has an IT infrastructure (planned action) that is built to build, providing the flexibility needed in an emergent context. It is designed to enable further building by expansion and adaptation to market needs (emergence). Google executives realise that they are not best placed to know the emergence, so they actively enable employees to take action when they consider it appropriate (deferred action). Employees are given 10 per cent of their time for creative work. Thus a Google employee blogger reveals how easy it was for him to write software code and have it implemented in Google's Gmail application because he disliked a certain aspect of it. Google's organisation is a deferred organisation and its IT systems are deferred systems, as depicted at point A in Figure 1. The deferred action construct is used by researchers and practitioners (Sotiropoulou and Theotokis, 2005; Stamoulis et al., 2001). Dron (2005) invokes deferred systems to design systems that have changing functionality. Elliman and Eatock (2005) developed the online E-Arbitration-T system, capable of handling workflow for any legal arbitration case, thus meeting the emergence criteria. They applied the deferred design decisions principle to manage the open and changing system requirements. This enabled users (agents) to make design choices rather than the system developer.

Research methodology
We use the epistemological methodology of complexity to understand emergence, develop design constructs and design processes for designing CAS, and also to develop the proto-agent model. Researchers have applied complexity to design (Johnson et al., 2005). Zamenopoulos and Alexiou (2005) suggest that complexity can be used to construct better theories of design. The Engineering and Physical Sciences Research Council funds research that seeks to embrace complexity in design. The agent-based simulation method is closely related to complexity studies.
Whereas why, how and what kind of research questions are addressed by other research methods that collect data for causal inferences, agent-based model simulation addresses what-if questions and generates data to understand complex interrelationships between agents whose interactions result in emergent structures and processes. Other research methods, relying on causal inferences, address phenomena that have already happened. Non-linear research methods, like agent modelling, generate simulations of target phenomena by running them into the future (simulated time) to observe what happens under differing initial conditions and variations in the environment. We develop a conceptual design of a proto-agent model, not a full agent model, to limit the problem description to manageable levels in order to evaluate the model. The empirical observations of our target phenomenon were described in the introduction. The initial phase of the research project involved the identification of agents, agent attributes, rules of agent behaviour, and the agents' environment. This was done through documentary and web-based search informed by the theory. The rule set for agent behaviour was derived from the bounded rationality and emergence design dimensions of the theory. Identifying agents and representing agents and rules realistically is agent modelling (North and Macal, 2007). This search is better achieved by basing agent models on good theory. North and Macal (2007) suggest that modellers answer two questions: what theory has been selected, and why was this theory selected for the modelling? Our model is based on the theory of deferred action as espoused above. It is


selected because it centrally recognises emergence as an ontological feature of business organisations and IT systems, it synthesises planned action with emergence, and because it predicts deferred action as the response of agents to systemic emergence. The deeper rationale was elaborated above. Basing the model on the theory has two important benefits. Theoretical inferences enable us to make general statements on designing CAS that are applicable to cases other than the one under investigation. This is powerful because it means that each case does not have to be separately investigated before we are capable of acting. The other benefit is that the obverse of generalisation is prediction. Ironically, by making generalisations, prediction is made possible on the design of CAS. This is the major benefit for designing, since we want to be able to tell whether our rational designs will work in situations that have not yet been empirically tested.

From theory to model
Gilbert and Troitzsch (2005) suggest that simulation research questions should stem from theory and should be stated in terms of its theoretical concepts, which should form the main element of agent models. We observe this to be an operationalisation of verbal theory, such as the theory of deferred action. Our research question is: In the context of planned action, how do agents behave in emergent organisations? A sub-question is: How do agents fulfil informational needs in emergent organisation? The theory predicts that some aspects of organisational work emerge and that agents will respond to emergence as deferred action. It is predicted that agents design and develop their own IS using IT. The objectives of the modelling are to understand agent behaviour and the satisfaction of information requirements in emergent organisation. Agent modelling will improve the theory because agent-based model simulations require precise definitions that are used for computer programme coding. This formalisation enhances the theory itself and provides inductive evidence of its veracity (Axelrod, 1997).

Deferred action proto-agent network model
Our modelling philosophy follows Einstein: "Everything should be made as simple as possible, but no simpler." The model is deliberately simple because we believe parsimony results in greater understanding. We discuss the model design decisions and modelling approximations. A model is a simplification of the target to be modelled (Gilbert and Troitzsch, 2005). Our model is stochastic because we want to understand emergent behaviour and leverage the emergent behaviour to inform the design of CAS. We used informal agent modelling techniques, like hand-drawn diagrams and simple text descriptions, and the formal technique UML. The modelling aim is to investigate how organisation and IT systems emerge and to understand agents' behaviours and informational needs. The theory assumes organisations and IS to be socio-technical CAS, so we need to identify and define the behaviours of social agents, technical agents and other agents. Agents are things that make choices or decisions, including managers, executives, organisations and complex computer systems (North and Macal, 2007). Agents possess attributes and behaviours. The behaviour of agents follows rules and is composed of evaluating the current situation, executing the chosen action, and evaluating the

results of actions and adjusting the rules based on results. Agents have goals to focus the decision-making. Other agents should be able to identify the agent. Drawing on the theory, we have identified key agents, goals, behavioural rules, the agents' environment, and random occurrences characteristic of probability systems, detailed in Table II. We define only the goals, behavioural rules and brief descriptions. Agent attributes are to be defined later for all the agents, as shown in the example in Figure 2. We have defined only simple or proto-agents, which exhibit minimally adaptive behaviour (North and Macal, 2007). Proto-agents have a rule base, whereas full agents have a rule base and rules to change rules. We call the latter meta-rules; they provide adaptation by allowing routines to change over time (North and Macal, 2007). We will incrementally develop the proto-agents into full agents as the research progresses (see the section Discussion and further development). The main elements of the model are bounded rationality (plans), emergence, and deferred action. Diversity among agents arises from differing behaviours, capabilities, resources, positioning, and knowledge, and leads to emerging self-organisation and system structure. The diffused management agent is the logical consequence of emergent organisation. Since emergence is organisation-wide, its management in relation to knowledge needs to be diffused in the organisation among agents (as in the case of Google employees' decision making above). The agents will be represented as a small world network model. Networks will provide understanding of how networks are structured and grow and how information is communicated through networks in stable and emergent contexts. Small world networks have a few nodes that are highly connected and others that are less connected. Networks can aid understanding of connectivity, tipping points, and the flow of information propagated through the network. Networks of agents exhibit emergent behaviours and become self-sustaining. This will improve our understanding of how particular human organisation is established and becomes self-sustaining (Padgett, 2000). Following North and Macal (2007), in the network model we will address:
. the appropriate type of connectivity for the network of agent relationships;
. internal and external influences on the network links and relationships; and
. the effect of network connectivity on agent and system behaviour.
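The behaviour cycle and small-world network described above can be sketched in outline. The fragment below is an illustrative Python sketch (Python rather than the NetLogo the model targets, purely for exposition), and every class, function and rule name in it is ours, not part of the authors' model: proto-agents with a fixed rule base pass information over a Watts-Strogatz-style ring network.

```python
import random

class ProtoAgent:
    """Minimal proto-agent: identifiable, goal-driven, with a fixed rule
    base of (condition, action) pairs and no meta-rules."""
    def __init__(self, name, goal, rules):
        self.name = name      # other agents can identify the agent
        self.goal = goal      # goals focus the decision-making
        self.rules = rules    # fixed rule base
        self.known = set()    # information the agent currently holds

    def step(self, neighbours):
        # Behaviour cycle: evaluate the situation and execute a chosen action
        for condition, action in self.rules:
            if condition(self, neighbours):
                action(self, neighbours)

def small_world(n, k, p, rng):
    """Watts-Strogatz-style network: ring lattice with k nearest
    neighbours per node, each link rewired with probability p."""
    links = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            links[i].add((i + d) % n)
            links[(i + d) % n].add(i)
    for i in range(n):
        for j in list(links[i]):
            if rng.random() < p:
                target = rng.randrange(n)
                if target != i and target not in links[i]:
                    links[i].discard(j)
                    links[j].discard(i)
                    links[i].add(target)
                    links[target].add(i)
    return links

# One illustrative rule: an informed agent tells its linked agents.
def has_info(agent, neighbours):
    return bool(agent.known)

def tell_neighbours(agent, neighbours):
    for other in neighbours:
        other.known |= agent.known

rng = random.Random(7)
net = small_world(12, 4, 0.0, rng)  # p = 0 keeps this run deterministic
agents = [ProtoAgent(i, "complete process task", [(has_info, tell_neighbours)])
          for i in range(12)]
agents[0].known.add("new process information")
for _ in range(6):  # six activation rounds: information reaches every agent
    for agent in agents:
        agent.step([agents[j] for j in net[agent.name]])
```

Raising p above zero introduces the long-range shortcuts that give small-world networks their short path lengths; the rewiring branch above is only exercised in that case.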


NetLogo
In terms of computing, simulations are self-contained programs that can control their own actions based on their perceptions of their operating environment (Huhns and Singh, 1998). The proto-agent model will be implemented in NetLogo. Here we discuss implementation issues and evaluate NetLogo. The code will require definitions of global and patches-own variables, specification of set-up procedures, and procedures to update patches. To design the simulation we will use UML, because objects in UML are akin to agents in agent modelling. We will define agents using the class template and specify their attributes and operations (agent rules), as depicted in Figure 2. Sequence diagrams and activity diagrams will be used to depict system dynamics.
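As an illustration of the UML class-template approach, the manager agent's attributes and operations (the behaviour rules listed in Table II) might be rendered as follows. This is an illustrative Python sketch, not the authors' NetLogo implementation; the class and method names are ours.

```python
class PlanAgent:
    """Hypothetical plan agent: a goal plus detailed prescribed actions
    to achieve it (see the plan row of Table II)."""
    def __init__(self, goal, prescribed_actions):
        self.goal = goal
        self.prescribed_actions = prescribed_actions

    def direction(self):
        return self.prescribed_actions

class ManagerAgent:
    """Manager agent as a UML-style class: attributes first, then
    operations corresponding to the behaviour rules of Table II."""
    def __init__(self, name, plan_agent):
        self.name = name
        self.plan_agent = plan_agent
        self.information = []   # information received or locally designed
        self.contacts = []      # linked managers and process owners

    def receive(self, item):
        # Rule: manager acts on information
        self.information.append(item)

    def check_plan(self):
        # Rule: manager checks the plan agent for direction
        return self.plan_agent.direction()

    def design_information(self, item):
        # Rule: manager locally designs information (deferred action)
        self.information.append(item)

    def communicate(self):
        # Rule: manager communicates information to other managers
        # and process owners
        for contact in self.contacts:
            for item in self.information:
                contact.receive(item)
```

The same class layout (attribute compartment above, operation compartment below) is what the UML class template of Figure 2 expresses diagrammatically.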

Table II. Proto-agent model: agents, goals, rules and environment

Computational agents are driven by goals and subgoals. Behaviour rules should be such that they create emergence, and each agent is related to the plan agent in the rule set. Rules should be based on ToDA explanations of social action in emergent organisation (what does ToDA predict about how agents are expected to behave?).

Plan
Goal: To create a strategically beneficial future for the organisation.
Rules: ?? (Yet to be formulated.)
Description: (Like shop agent.) A plan has a goal and detailed prescribed actions to achieve it.

Organisation
Goals: To add value for customers; to maximise profit for shareholders.
Rules: Make a plan for survival of the organisation. If events in the environment change from planned action, the organisation responds.
Description: Organisation is composed of people, structure, processes, IT systems, IS and KMS. Organisation is emergent. Organising is normal.

Manager
Goal: To improve the efficiency of business processes.
Rules: Manager acts on information. Manager uses IS in context. Manager locally designs information (systems). Manager communicates information to other managers and process owners. Manager checks the plan agent for direction. Manager monitors the environment for opportunities and threats.
Description: A manager is someone who makes resource allocation decisions and manipulates information to improve business process efficiency.

Process owner
Goal: To complete process tasks.
Rules: Process owners work on tasks. Process owners act on information from other process owners, managers and the environment. Process owners communicate information to other process owners and managers.
Description: Employees, or process owners, complete tasks on business processes.

Information system
Goal: To provide information on business processes. (What does it do? Like shop agent.)
Rules: An IS exists in relation to manager or process owner agents. An IS responds to its environment, including other agents. If manager agents lack information, the IS creates it. If process owner agents lack information, the IS creates it.
Description: IS is any information or knowledge artefact created with the use of IT. IS design depends on known, specified information requirements and unknown, emergent information requirements. IS design is affected by emergence, meaning that IS exhibits emergent design. IS design requires deferred action.

Diffused management
Goal: To enable agents to exercise local control.
Rules: ?? (Yet to be formulated.)
Description: Diffused management of local situations is necessary because of emergence. It caters for self-organisation and adaptive behaviour. Centralised management of local situations is ineffective in emergent organisation.

Environment
Agents are located in the environment and connected in a network. The environment consists of business processes and of predefined and processed information. Agent activation is done in the environment. The random occurrences in the environment consist of innovation by agents, innovation by competitor agents, agents leaving, new agents with new knowledge appearing, existing agents acquiring new knowledge, previously unconnected agents connecting, and so on. (What are the random events in the environment? Define ToDA-based random events: what does ToDA predict to be random, which is not the same as emergent? Random occurrences define random agents and events.) "Emergent behaviour is the result of the agent behaviours and interactions within the model that are not directly specified as part of the behaviours of the agents in the model" (North and Macal, 2007, p. 276). Emergence is the internally generated structure or patterns generated by agents. Structure is defined as the form of the relationships among agents.
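The information system agent's rules in Table II (an IS that creates information when manager or process owner agents lack it) can be sketched as follows. This is an illustrative Python fragment with hypothetical names, not part of the authors' model.

```python
class InformationSystemAgent:
    """Illustrative IS agent for the Table II rules: the IS exists in
    relation to manager and process owner agents and responds to them."""
    def __init__(self):
        # Known, specified information requirements
        self.store = {"standard report": "predefined process information"}

    def request(self, requester, topic):
        # Rule: if the requesting agent lacks information, the IS creates it
        # (an unknown, emergent information requirement surfacing in use)
        if topic not in self.store:
            self.store[topic] = f"information on {topic} created for {requester}"
        return self.store[topic]
```

The point of the sketch is the second branch: information requirements not specified at design time are created in the context of use, which is what the table calls emergent IS design.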

Discussion and further development
Our aim is to develop theoretical and practical knowledge for designing CAS. We expect to use understanding of emergence to intervene by design in CAS like business organisations and IS. We are specifically interested in the kinds of design decisions we need to make at the systems level to reflect emergence in designed CAS at the local level. Agents are autonomous, sociable, reactive and proactive, and they are capable of inferring and possessing knowledge and belief (Gilbert and Troitzsch, 2005). Our proto-agent model will be extended incrementally to encompass these attributes as we

pursue our research agenda. As managers interpret information, we will code agents with knowledge and information (knowledge representation). The meaning that managers attach to information, stemming from subjectivist research, will be encompassed in agents. The result will be a more sophisticated model representative of our target phenomenon. The full agent model will be used to code agents for simulation. A complex agent is adaptive, has the capability to learn and modify behaviour, and is autonomous and heterogeneous. The internal processing of agents is more sophisticated in full agent models than we have defined in the proto-agent model in Table II. To model the planning dimension of the theory we will develop agents that reflect organisational plans. Agents will be designed to act within the context of complex plans and be able themselves to develop complex plans. We will draw lessons from the Evolution of Societies simulation (Doran and Gilbert, 1994), which has agents capable of complex planning. In the full-agent model simulation, we expect to observe the following emergent properties: emergent organisational structure, emergent organisational processes, emergent organisational resources, emergent information requirements and emergent organisational knowledge. These expected results are reported in extant empirical findings and expressed as the verbal theory discussed earlier. If the results of the simulation, as a formalisation of the verbal theory, support extant empirical findings, this improves the veracity of the verbal theory (Melerba et al., 1999). When we implement the full-agent model as a computer simulation, we expect the results to show emergence and deferred actions of agents. We expect to be able to use these results to enhance our theoretical understanding of designing for emergent organisation and to inform the development of design techniques for designing CAS. We will compare our simulation data with the target actual observations from emergent organisations (Doran and Gilbert, 1994). Detailed scenarios will be identified to test the simulation model. One such scenario will require agents interacting in the context of a complete unknown. The other scenario will consist of a partial unknown, where some aspects will be known and other aspects of the situation will be uncertain. We expect some interesting emergent behaviour of the agents to result in deferred action, as predicted by the theory. These simulation run results will then be compared to actual events in our target organisation.
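The step from proto-agents to full agents, adding meta-rules (rules for changing rules) so that routines can change over time, can be sketched as follows. This is our illustrative reading of the North and Macal (2007) distinction, written in Python with hypothetical names; it is not the authors' implementation.

```python
import random

class FullAgent:
    """Illustrative full agent: a rule base plus a meta-rule (a rule for
    changing rules) that reweights the rule base from observed payoffs."""
    def __init__(self, rules, rng):
        self.rules = rules                             # name -> action
        self.weights = {name: 1.0 for name in rules}   # selection weights
        self.rng = rng

    def step(self):
        # Evaluate the situation: choose a rule in proportion to its weight
        names = list(self.rules)
        r = self.rng.random() * sum(self.weights[n] for n in names)
        for name in names:
            r -= self.weights[name]
            if r <= 0:
                break
        payoff = self.rules[name]()   # execute the chosen action
        self.meta_rule(name, payoff)  # adjust the rule base from the result
        return name, payoff

    def meta_rule(self, name, payoff):
        # Meta-rule: reinforce rules that produced good results, so the
        # agent's routine changes over time
        self.weights[name] = max(0.1, self.weights[name] + payoff)

rng = random.Random(0)
agent = FullAgent({"follow plan": lambda: 0.2,
                   "defer to context": lambda: 1.0}, rng)
for _ in range(200):
    agent.step()
# Over repeated adjustment the selection weight tends to concentrate on
# the higher-payoff rule.
```

A proto-agent, by contrast, would run the same `step` loop with `meta_rule` as a no-op: its rule base stays fixed, which is the minimally adaptive behaviour described above.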

Figure 2. Manager agent attributes and behaviours modelled in UML as an object

Conclusion
The value of simulations lies in their experimental capability. Hales et al. (2003) assert that multi-agent based simulations are closer to an experimental science than a formal one. Their comment is aimed at the discipline of simulation itself. We will undertake experiments on the actual simulation model to understand the effect of various magnitudes of emergence on information needs and IT systems. For example, in terms of organisational knowledge, we will be interested in the effect of losing key knowledge workers to competitors.
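One such experiment, the effect of losing key knowledge workers, can be sketched as removing agents from the network and measuring how far information still reaches. The fragment below is an illustrative Python sketch with hypothetical names, not the authors' experimental design.

```python
def ring_network(n, k):
    """Simple ring lattice standing in for the agent network (illustrative)."""
    return {i: {(i + d) % n for d in range(1, k // 2 + 1)} |
               {(i - d) % n for d in range(1, k // 2 + 1)}
            for i in range(n)}

def reachable(links, start, removed=frozenset()):
    """Agents that information from `start` can still reach once the
    `removed` agents (e.g. knowledge workers lost to competitors) are gone."""
    if start in removed:
        return set()
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in links[node] - seen:
            if nxt not in removed:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

links = ring_network(10, 2)                      # each agent linked to two neighbours
full = reachable(links, 0)                       # information reach, no losses
degraded = reachable(links, 0, removed={3, 7})   # two key agents leave
```

Comparing `full` with `degraded` across different removal sets and network magnitudes is the kind of what-if experiment the simulation model supports.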
References
AIS (2009), Association of Information Systems, available at: www.fsc.yorku.ca/york/istheory/wiki/index.php/Main_Page (accessed 9 November 2009).
Ali, M. and Brooks, L. (2008), "A situated cultural approach for cross-cultural studies in IS", Journal of Enterprise Information Management, Vol. 22 No. 5, pp. 548-63.
Arnott, D. (2006), "Cognitive biases and decision support systems development: a design science approach", Information Systems Journal, Vol. 16 No. 1, pp. 55-78.
Axelrod, R. (1997), "Advancing the art of simulation in social sciences", Paper No. 97-05-048, Santa Fe Institute, Santa Fe, NM.
Axelrod, R. and Cohen, M. (2000), Harnessing Complexity, Basic Books, New York, NY.
Banathy, B.H. (1996), Designing Social Systems in a Changing World, Plenum, New York, NY.
Baskerville, R., Travis, J. and Truex, D.P. (1992), "Systems without method: the impact of new technologies on information systems development projects", in Kendell, K.E., Lyytinen, K. and DeGross, J.I. (Eds), Transactions on the Impact of Computer-Supported Technologies in Information Systems Development, Elsevier, Amsterdam.
Ciborra, C. and Hanseth, O. (2000), "Introduction to From Control to Drift", in Ciborra, C., Braa, K., Cordella, A., Dahlbom, B., Failla, A., Hanseth, O., Hepsø, V., Ljungberg, J., Monteiro, E. and Simon, K. (Eds), From Control to Drift: The Dynamics of Corporate Information Infrastructures, Oxford University Press, Oxford, pp. 1-14.
Clancey, W.J., Sierhuis, M., Damer, B. and Brodsky, B. (2005), "Cognitive modeling of social behaviors", in Sun, R. (Ed.), Cognition and Multi-agent Interaction: From Cognitive Modeling to Social Simulation, Cambridge University Press, New York, NY, pp. 151-84.
Doran, J.E. and Gilbert, N. (1994), "Simulating societies: an introduction", in Gilbert, N. and Doran, J.E. (Eds), Simulating Societies: The Computer Simulation of Social Phenomena, UCL Press, London, pp. 1-18.
Dron, J. (2005), "Epimethean information systems: harnessing the power of the collective in e-learning", International Journal of Information Technology and Management, Vol. 4 No. 4, pp. 392-404.
Elliman, T. and Eatock, J. (2005), "Online support for arbitration: designing software for a flexible business process", International Journal of Information Technology and Management, Vol. 4 No. 4, pp. 443-60.
Feldman, M.S. (2000), "Organizational routines as a source of continuous change", Organization Science, Vol. 11 No. 6, pp. 611-29.
Feldman, M.S. (2004), "Resources in emerging structures and processes of change", Organization Science, Vol. 15 No. 3, pp. 295-309.
Gell-Mann, M. (1994), The Quark and the Jaguar, W.H. Freeman and Company, New York, NY.


Gilbert, G.N. and Troitzsch, K.G. (2005), Simulation for the Social Scientist, Open University Press/McGraw-Hill, Maidenhead.
Gregor, S. (2006), "The nature of theory in information systems", MIS Quarterly, Vol. 30 No. 3, pp. 611-42.
Hales, D., Rouchier, J. and Edmonds, B. (2003), "Model-to-model analysis", Journal of Artificial Societies and Social Simulation, Vol. 6, available at: www.soc.surrey.ac.uk/JASS/6/4/5.html
Hevner, A.R., March, S.T., Park, J. and Ram, S. (2004), "Design science in information systems research", MIS Quarterly, Vol. 28, pp. 75-106.
Hirschheim, R. and Klein, H.K. (1994), "Realizing emancipatory principles in information systems development: the case for ETHICS", MIS Quarterly, Vol. 18 No. 1, pp. 83-109.
Huhns, M.N. and Singh, M.P. (1998), "Multiagent systems in information-rich environments", Cooperative Information Agents II, Vol. 1435 of LNAI, Springer, Berlin.
Johnson, J., Zamenopoulos, T. and Alexiou, K. (Eds) (2005), Proceedings of the ECCS 2005 Satellite Workshop: Embracing Complexity in Design, Paris, 17 November 2005.
Jost, J. (2003), "External and internal complexity of complex adaptive systems", Santa Fe Institute, Santa Fe, NM, available at: www.santafe.edu/
Kallinikos, J. (2006), "Information out of information: on the self-referential dynamics of information growth", Information Technology & People, Vol. 19 No. 1, pp. 98-115.
March, S. and Smith, G.F. (1995), "Design and natural science research on information technology", Decision Support Systems, Vol. 15 No. 4, pp. 251-66.
Markus, M.L., Majchrzak, A. and Gasser, L. (2002), "A design theory for systems that support emergent knowledge processes", MIS Quarterly, Vol. 26 No. 3, pp. 179-212.
Melerba, F., Nelson, R., Orsenigo, L. and Winter, S. (1999), "History-friendly models of industry evolution: the computer industry", Industrial and Corporate Change, Vol. 8 No. 1, pp. 3-40.
Moser, I. and Law, J. (2006), "Fluids or flows? Information and qualculation in medical practice", Information Technology & People, Vol. 19 No. 1, pp. 55-73.
North, M.J. and Macal, C.M. (2007), Managing Business Complexity, OUP, Oxford.
Padgett, J.F. (2000), "Modelling Florentine republicanism", Working Paper 01-02-008, Santa Fe Institute, Santa Fe, NM.
Patel, N.V. (2002), "Emergent forms of IT governance to support global ebusiness models", Journal of Information Technology Theory and Application, Vol. 4 No. 2, pp. 33-48.
Patel, N.V. (2003), Adaptive Evolutionary Information Systems, Idea Publishing, Hershey, PA.
Patel, N.V. (2005), "Sustainable systems: strengthening knowledge management systems with deferred action", International Journal of Information Technology and Management, Vol. 4 No. 4, pp. 344-65.
Patel, N.V. (2006), Organization and Systems Design: Theory of Deferred Action, Palgrave Macmillan, Basingstoke.
Randell, L. (2005), Warped Passages: Unravelling the Universe's Hidden Dimensions, Allen Lane, London.
Schoenauer, M., Deb, K., Rudolph, G., Yao, X., Lutton, E., Merelo, J.J. and Schwefel, H.-P. (Eds) (2000), Parallel Problem Solving from Nature PPSN VI, Springer, Berlin.
Simon, H.A. (1996), The Sciences of the Artificial, The MIT Press, Cambridge, MA.
Sotiropoulou, A. and Theotokis, D. (2005), "Tailoring information systems: an approach based on services and service composition", International Journal of Information Technology and Management, Vol. 4 No. 4, pp. 366-91.

Stamoulis, D., Kanellis, P. and Martakos, D. (2001), "Tailorable information systems: resolving the deadlock of changing user requirements", Journal of Applied System Studies, Vol. 2 No. 2.
Steels, L. (2000), "Language as a complex adaptive system", Proceedings of the 6th International Conference, Paris, France, September 18-20, 2000.
Truex, D.P. and Klein, H.K. (1991), "A rejection of structure as a basis for information systems development", Collaborative Work, Social Communications, and Information Systems, Elsevier, Amsterdam.
Truex, D.P., Baskerville, R. and Klein, H. (1999), "Growing systems in emergent organisations", Communications of the ACM, Vol. 42 No. 8, pp. 117-23.
Truex, D.P., Baskerville, R. and Travis, J. (2000), "Amethodological systems development: the deferred meaning of systems development methods", Accounting Management and Information Systems, Vol. 10, pp. 53-79.
Vaishnavi, V. and Kuechler, W. (2004/2005), "Design research in information systems", 20 January, available at: www.isworld.org/Researchdesign/drisISworld.htm
Walls, J., Widmeyer, G.R. and El Sawy, O.A. (1992), "Building an information system design theory for vigilant EIS", Information Systems Journal, Vol. 3 No. 1, pp. 36-59.
Weick, K.E. (2004), "Rethinking organisational design: managing as designing", in Boland, R.J. and Collopy, F.S. (Eds), Managing as Designing, Stanford University Press, Stanford, CA, pp. 36-53.
Zamenopoulos, T. and Alexiou, K. (2005), "Linking design and complexity: a review", in Johnson, J., Zamenopoulos, T. and Alexiou, K. (Eds), Proceedings of the ECCS 2005 Satellite Workshop on Embracing Complexity in Design, Paris, 17 November 2005, The Open University, Milton Keynes, pp. 91-102.

Further reading
Doran, J.E. and Palmer, M. (1995), "The EOS project: integrating two models of Palaeolithic social change", in Gilbert, N. and Conte, R. (Eds), Artificial Societies: The Computer Simulation of Social Life, UCL Press, London, pp. 103-25.

Corresponding author
Nandish V. Patel can be contacted at: nandish.patel@brunel.ac.uk


International Journal of Physical Distribution & Logistics Management


Emerald Article: Using Monte Carlo simulation to refine emergency logistics response models: a case study Ruth Banomyong, Apichat Sopadang

Article information:
To cite this document: Ruth Banomyong, Apichat Sopadang, (2010), "Using Monte Carlo simulation to refine emergency logistics response models: a case study", International Journal of Physical Distribution & Logistics Management, Vol. 40 Iss: 8, pp. 709-721. Permanent link to this document: http://dx.doi.org/10.1108/09600031011079346



Using Monte Carlo simulation to refine emergency logistics response models: a case study
Ruth Banomyong
Department of International Business, Logistics and Transport, Thammasat Business School, Thammasat University, Bangkok, Thailand, and


Apichat Sopadang
Department of Industrial Engineering, Engineering Faculty, Chiangmai University, Chiangmai, Thailand
Abstract
Purpose – The purpose of this paper is to provide a framework for the development of emergency logistics response models. The proposition of a conceptual framework is in itself not sufficient; simulation models are further needed in order to help emergency logistics decision makers refine their preparedness planning process.
Design/methodology/approach – The paper presents a framework proposition with an illustrative case study.
Findings – The use of simulation modelling can help enhance the reliability and validity of a developed emergency response model.
Research limitations/implications – The emergency response model outcomes are still based on simulated outputs and would need to be validated in a real-life environment. Proposing a new or revised emergency logistics response model is not sufficient; developed logistics response models need to be further validated, and simulation modelling can help enhance validity.
Practical implications – Emergency logistics decision makers can make better informed decisions based on simulation model output and can further refine their decision-making capability.
Originality/value – The paper posits the contribution of simulation modelling as part of the framework for developing and refining emergency logistics response.
Keywords Modelling, Monte Carlo simulation, Emergency measures, Response time
Paper type Conceptual paper

1. Introduction

Over the past few years, the literature related to emergency logistics has greatly expanded. A number of emergency or emergency logistics plans and response frameworks have been developed by numerous agencies and governments around the world. However, many of these seem to be purely theoretical and relatively ineffective in their initial response, or subject to unforeseen constraints. It is, therefore, important to develop a more comprehensive approach and provide a holistic emergency logistics planning framework that key related stakeholders can adhere to.

The purpose of this paper is to provide a framework for the development of an emergency logistics response model. The proposition of a theoretical or conceptual model is in itself not sufficient; what is needed is to validate the proposed emergency response model through the use of simulation models. The proposed planning framework focuses on the preparedness phase of emergency operations with a focus on responsiveness capability. Responsiveness is a key issue for

International Journal of Physical Distribution & Logistics Management, Vol. 40 No. 8/9, 2010, pp. 709-721. © Emerald Group Publishing Limited, 0960-0035. DOI 10.1108/09600031011079346


emergency logistics, as aid has to arrive as quickly as possible, in the right place and in the right condition to help disaster victims. In order to illustrate the proposed emergency logistics planning framework and its application, an illustrative case study is presented to help emergency logistics decision makers better understand how to improve their emergency logistics planning process.

This manuscript is separated into four main sections. The first section introduces the paper and its objectives. The second section discusses key concepts related to the development of emergency logistics response models. The third section introduces the role of simulation models in emergency logistics. The fourth section describes the illustrative case study of a Thai emergency logistics response model and its simulated outcome. The summary further discusses lessons learned from the simulation outcome of the proposed Thai logistics response model and its impact on emergency logistics planning.

2. Basic principles of emergency logistics response modelling

According to Beresford and Pettit (2009), the aim of emergency logistics is to establish a tailored supply pipeline that fits a particular crisis or natural disaster. The principal leg of the pipeline is usually transport, and freight transport is generally a key driver of the emergency supply chain in most cases. A variety of transport modes, from road to air (Barbarosoglu et al., 2002), is likely to be used in order for aid to reach a crisis area rapidly. Over the past few years, the literature related to emergency logistics has greatly expanded. A number of models have been identified which incorporate many key stages of the emergency cycle and are discussed in detail by Pettit and Beresford (2005). Relevant models include the disaster management cycle (Carter, 1999) and the recovery model of Haas et al. (1977). The latter identifies overlaps that occur between each of the phases of the full emergency cycle.
Military involvement in the early stages of an emergency is usually greater due to the capability of military organisations to respond rapidly to severe needs. The work of Jennings et al. (2000) detailed some of the basic principles surrounding the movement of food and commodities into areas where assistance is required. The authors developed a response model expressed in terms of the selection of transport modes and networks required for effective delivery of assistance to refugees. Pettit and Beresford (2005) expanded the earlier Jennings model with the purpose of developing a better understanding of emergency logistics needs by splitting a specific emergency into different stages or phases. In their model, the focus was on the participation of military and non-governmental organisations (NGOs) in emergency situations. During the initial stages following any disaster, the body playing a pivotal role is the relevant government, often initially activating military resources; but as soon as the situation stabilises, the importance of military assistance declines and NGOs take over, commonly leading specific aspects of the emergency operations. Other situational factors that could either facilitate or hinder emergency operations were also accounted for in the proposed model, such as the underlying political situation or physical geography/accessibility. Although each crisis is unique in its characteristics, most crises exhibit similar logistical elements. These elements allow the logistician involved in emergency operations to follow a structured response pattern when dealing with the majority of crises. This response pattern is shown in Figure 1, which represents a basic disaster response model.

[Figure 1. Basic emergency response model: a decision flowchart covering establishment of the crisis location and affected population, diversion of aid by land or sea within 72 hours, use of air transport (including possible airdrop), assessment of ports, infrastructure, political situation, topology and season, selection of routes and modes, charging of the emergency logistics pipeline, establishment of storage facilities at the distribution site, and cost reduction through alternative routes/modes. Source: adapted from Jennings et al. (2000)]

Crisis situations share similar logistical elements. Wherever the crisis occurs, the need for first aid and food is immediate and ongoing. According to the World Food Program, it can take on average approximately four months for food aid to reach recipients in a crisis area through a fully charged logistics pipeline. Therefore, the emergency agency either has to divert freight that is already afloat, borrow or buy food from a neighbouring country or geographical area, or even transport first aid and food by air. This method of supplying first aid and food can continue until aid from various donors arrives.

In order to establish such an emergency pipeline, the emergency logistics planner must first assess various attributes of the crisis-affected location and, in the case of landlocked nations, the neighbouring countries as well. As a starting point, a suitable port will need to be chosen. This decision will depend largely on the handling, storage and efficiency of the port in question. The infrastructure from the port to the crisis-stricken area needs to be considered and assessed, as well as the political situation, topology and seasonal fluctuations of the weather. All these factors influence the choice of emergency route and mode by which aid will be transported. Last but not least, a suitable storage site for aid must be established at the distribution site, so as to ensure a constant supply in reserve at all times. Once the port, route and storage facilities have been established and the emergency pipeline is sufficiently charged, alternative routes can then be ascertained in case of bottlenecks and complications in the established emergency pipeline. It is of paramount importance throughout the crisis to always deliver sufficient quantities of aid at regular intervals, as the lives of crisis-affected people depend on it.
As the emergency matures and a constant supply of aid is arriving at key distribution sites, the emergency logistics planner's attention can then switch to minimizing emergency aid logistics cost. The extent to which emergency logistics costs can be reduced depends largely on the expected duration of the crisis; however, the main reductions in cost can occur through the selection of alternative modes and routes, and by reducing handling and storage. The principal objective of emergency logistics is to get the logistics response up and running as quickly as possible and to establish a pipeline as robust as circumstances dictate, in order that interruptions in supply do not occur. While satisfying these objectives, the emergency logistics system must also be able to keep the costs of operating the logistics pipeline to a minimum.

A major advantage for any aid agency in a crisis situation is the gift of forethought. If an area is prone to natural disasters and/or civil conflict, then it is beneficial to have the equipment needed at the outset of an emergency nearby; for example, the Strategic Logistics Stock for Asia's tsunami areas is based in Kuala Lumpur, Malaysia. However, the storage of aid in large quantities is often not very cost-effective. The ability to borrow or buy food at short notice from a nearby location is useful at the outset of a crisis. Aid agencies can pre-arrange these agreements with countries and firms in disaster-prone areas prior to any emergency occurring, thus reducing the time taken to move the initial supplies. These pre-arranged agreements can result in a reduction in costs, as aid that is borrowed or bought from a neighbouring area may have previously been flown in and stocked. It must, however, be stressed that crisis situations are not static, as the crisis is constantly mutating and changing. This can result in past emergency logistics

operations and cost-effective routes, modes, or both, no longer being the best option under the present conditions. Therefore, it is of paramount importance that emergency logistics planners embrace a flexible planning approach to allow for different scenarios. The generic disaster model presented in this paper tries to clarify some of the options available in logistics planning during crisis conditions. The logistics of moving aid to an actual crisis area is obviously far more complex than any model can portray. However, the basic emergency logistics response model, as shown in Figure 1, does illustrate key difficulties that can arise in a crisis and the possible thought process that an emergency logistics planner may use. Logistics pipeline charging, and if necessary re-charging, is the key on-going task once supply lines are established, and managing the logistics pipeline in constantly varying conditions is an on-going process. The ultimate aim in emergency logistics response planning is to establish a logistics pipeline that is tailored to fit that particular crisis, and which must be able to minimize costs in the long run. The rationale behind emergency logistics cost reduction is to make more resources available for the acquisition of necessary aid. Meeting customers' requirements is another core logistics principle, but in the case of emergency logistics timeliness and responsiveness are the critical performance dimensions. While the price of aid can sometimes be considered a secondary issue in emergency circumstances, cost should not be as high as via the ad hoc channels established by a variety of aid agencies in a post-disaster scenario. Clear time-definite key performance indicators, as well as best estimates of the types of goods needed, would need to be clearly defined beforehand (Pettit and Beresford, 2009).

3. Simulation modelling for emergency logistics response

The use of simulation modelling is relatively common in management plan formulation in order to consider the impact of decisions on the management of a given system. Simulation model output can be assessed and interpreted for further refinement or other scenario planning. This avoids the risk of actually implementing a decision without understanding the possible consequences. Figure 2 shows the simulation model development framework. The input of the simulation model can include data from interviews with related respondents, a collection of previous emergency logistics response data and other published information. The data of interest are the time needed for each relief activity, during and after the disaster, from both an information and a physical perspective.


[Figure 2. Simulation model development framework: gathered data (interviews with related personnel, previous relief logistics response data and published information) undergo linguistic transformation of fuzzy information (triangular distribution) and input analysis before feeding the simulation model, which produces information for decision making in emergency logistics planning.]

In the case of information derived from interview logs or news clippings, these details may sometimes be qualitative and ambiguous. On the other hand, quantitative information, such as response time data from previous emergency logistics operations, can be mathematically analysed directly. These qualitative and quantitative data can then be used as input for the developed emergency response simulation model. The output of the simulation model can provide key information for decision making, as the simulated emergency relief logistics model can be further improved to better meet emergency operations requirements. Figure 3 shows the developed simulation model. When the necessary data, such as the time needed for each activity, are collected, a distribution analysis can be conducted. The known distribution of inputs is used to generate data for the calculation and simulation of the time taken by relief logistics activities. The total time required for emergency logistics activities is then revealed. As the simulation model requires a series of iterations, this increases the confidence level of the output. The collection of response times is then analysed and finally incorporated in any necessary decision making related to emergency relief logistics planning. In developing simulation models for emergency logistics, Monte Carlo simulation can be selected to simulate the output of any logistics response model. Monte Carlo simulation is a computerised mathematical technique that allows people to take risks into account in quantitative analysis and decision making (Mooney, 1997).
[Figure 3. Simulation model sequence: distribution analysis, generation of input data based on the fitted distributions, calculation of relief logistics response time, and a check on the number of simulation runs (repeating until sufficient) before producing information for decision making.]

The technique is used by professionals in such widely disparate fields as finance, project management, energy, manufacturing, engineering, research and development, insurance, oil and gas, transportation and the environment. To put it simply, Monte Carlo simulation is a method that iteratively evaluates a deterministic model using sets of random numbers as inputs. Monte Carlo simulation provides decision makers with a range of possible outcomes and the probabilities with which they will occur for any choice of action. This is of great importance to emergency logistics decision makers and planners. It shows the extreme possibilities, the outcomes of going for broke and of the most conservative decision, along with all possible consequences of middle-of-the-road decisions.

Monte Carlo simulation performs risk analysis by building models of possible results, substituting a range of values (a probability distribution) for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values from the probability functions. Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation can involve as many as thousands or tens of thousands of re-calculations before it is complete. Monte Carlo simulation produces distributions of possible outcome values. In using probability distributions, variables can have different probabilities of different outcomes occurring. Probability distributions are a much more realistic way of describing uncertainty in risk analysis variables. During a Monte Carlo simulation, values are sampled at random from the input probability distributions. Each set of samples is referred to as an iteration, and the resulting outcome from that sample is recorded. Monte Carlo simulation does this hundreds or thousands of times, and the result is a probability distribution of possible outcomes.
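The mechanics just described (sample random inputs, evaluate a deterministic model, record the outcome, repeat) can be sketched in a few lines of Python. The model and the input ranges below are invented for illustration only; they are not the paper's response model.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical deterministic model: total response time is the sum of
# two activity times (hours). Only the inputs are random.
def response_time(info_flow, physical_flow):
    return info_flow + physical_flow

outcomes = []
for _ in range(10_000):             # each pass is one iteration
    info = random.uniform(1, 3)     # assumed input range, hours
    phys = random.uniform(10, 20)   # assumed input range, hours
    outcomes.append(response_time(info, phys))

# The result is not a single number but a distribution of outcomes.
mean = sum(outcomes) / len(outcomes)
print(f"mean {mean:.1f} h, range {min(outcomes):.1f}-{max(outcomes):.1f} h")
```

Collecting all iterations before summarising is what turns a point estimate into a distribution of possible response times.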
In this way, Monte Carlo simulation provides a much more comprehensive view of what may happen. It tells the emergency logistics decision maker not only what could happen, but how likely it is to happen. Emergency logistics planners need not only to conceptualise and propose emergency logistics response models but also to simulate the potential outcomes of their proposed response model. This will enable them to better understand the impact of their planning decisions. Preparedness is a key condition for successful emergency operations. The use of simulation models can refine a proposed emergency logistics response model for further improvement before any field trials are conducted.

4. From theory to practice: insights from Thailand

4.1 The Thai context

The Southern coastline of Thailand was affected by a tsunami on December 26, 2004, and the emergency response was considered inadequate, especially during the first 72 hours of the disaster. After the event, the Thai Government revised their emergency plan to respond to the potential threat of another tsunami, but shortcomings were identified (Beresford and Pettit, 2009). However, these issues were not addressed, as the threat of another tsunami was considered extremely remote by Thai authorities. In Thailand, no detailed emergency logistics response plan has been put in place even though the Thai Government has developed a basic protocol for emergency operations. As an example, a drill was conducted by Thai authorities in the previously tsunami-affected provinces during August 2009 to test the readiness of the existing emergency protocol. The drill began at 10.20 a.m. when the authorities simulated



an earthquake in the Andaman Sea. After being notified of the quake by the Thai Meteorological Department, the National Disaster Warning Centre then sent short messages to department executives, to the governors of the six provinces and to a variety of officials to monitor the earthquake situation. The disaster warning proceeded both in Thai and in English, instructing local residents, as well as Thai and international holidaymakers, to evacuate from shoreline areas to safer places on higher ground. However, quite a few areas faced glitches with the low volume of the siren sound. In fact, it was the second tsunami drill to occur within a month. A seabed earthquake in the Indian Ocean led to an unplanned tsunami practice in the early morning hours of August 11. Residents of the Phang Nga village of Nam Khem, where more than 800 people perished in 2004, decided to take no chances and evacuated their homes for higher ground. Elsewhere along the coast, most people continued sleeping, unaware that an earthquake had taken place (level one alert) or that local officials were on standby to begin evacuation (level two). The order to move people out (level three) never came because the earthquake did not generate a big wave. Drills are relatively easy to practice, but there is no sign of any genuine preparedness for a tsunami that may come while the whole of the Andaman sleeps.

The key to effective emergency response is to be prepared as well as to be able to control the output of the emergency logistics system in place. Emergency logistics planners need to be able to make an assessment of the risk area and the population at risk in order to estimate the types of goods needed if a disaster should happen. What is often forgotten is that each disaster victim is a customer whose requirements need to be met within a time limit by the emergency logistics system put in place (Oloruntoba and Gray, 2009). This research uses the emergency logistics response model by Banomyong et al.
(2009) as the core structure for the developed simulation. The emergency logistics response model is specific to the six Southern Thai provinces that were affected by the tsunami in 2004. The main thrust of the emergency response model is related to rapid deployment of resources and aid within the first 72 hours. This phase is often viewed as a period of necessary chaos which rapidly filters out bad practice based on the learning-by-doing principle. According to Banomyong et al. (2009), the rationale behind the 72-hour timeline for this emergency logistics response model was derived from the tsunami-related literature in Thailand. After the December 2004 tsunami it became clear that during the first three days (72 hours), victims' requirements were solely focused on food, medicine, clothing and shelter. Nonetheless, equipment to help repair buildings and to build temporary accommodation was also required to support the sheltering process. This emergency response model process comes under constant re-evaluation as the crisis develops; nonetheless, progress re-evaluation by taking a step back remains important. If goods delivered do not match requirements, then it is the duty of the responsible unit to further procure and respond to the needs of the affected area. Table I is a summary of the proposed logistics response model parameters. Based on this emergency logistics response model, there are two critical activities that need to be controlled for optimal implementation of the emergency plan. These two activities are the information flows and the physical flows between the affected areas and the Coordination and Control Department at the Thai Ministry of Interior. An indicator of an organisation's efficiency and effectiveness is the accuracy and timeliness of information within its system. This is even truer when the logistics system is responsible for handling emergency goods.

4.2 The developed Thai simulation model

In this emergency logistics response model, a Monte Carlo simulation was utilised to predict the output of the proposed model. The probability distribution for the Monte Carlo simulation was based on a triangular distribution. Fuzzy information had to be transformed into a triangular distribution, since conventional quantitative transformation techniques are not well suited to decision problems involving fuzziness. As an example, the transportation time to prone areas can range from 1 to 10 hours depending on the clearance capability of related support units. The basis for this contention is what Zadeh (1965) calls the principle of incomparability. Zadeh developed fuzzy sets theory for solving problems in which the description of activities and observations is imprecise, vague and uncertain. The term fuzzy refers to situations in which there are no well-defined boundaries for the set of activities. The fuzzy set transformation proposed by Chen and Hwang (1992) can be used to avoid the aforementioned difficulty, so that the emergency logistics response problem can be meaningfully and efficiently solved in a fuzzy environment. The fuzzy data can be in linguistic terms, fuzzy sets or fuzzy numbers. If the fuzzy data are in linguistic terms, there is a need to transform such data into a quantitative format. Besides the normal method of assigning a linguistic value through the use of an interval method, there also exists a more reliable quantification technique based on probability theory and fuzzy set theory (Kao et al., 1994). The transformation of linguistic values and fuzzy numbers is normally done through the use of a triangular distribution (Bass and Kwakernaak, 1977; Bonisson, 1982). In a triangular distribution, the simulation planner defines the minimum, most likely and maximum values, with values around the most likely value being more likely to occur.
Variables that can be described by a triangular distribution include past sales history per unit of time and inventory levels. Figure 4 shows an example of the triangular distribution utilised in the simulation model for the emergency activity. The rationale for selecting the triangular distribution for input probability derived from the fact that most of the public and private sector stakeholders involved in the tsunami emergency operations had only a fuzzy recollection of the time taken for each specific emergency activity, as the tsunami occurred in 2004 and the request for information was made in 2009. None of the key stakeholders previously involved in the 2004 emergency operations had kept reference data. It was therefore decided that the best approach was to ask for the cycle time range of each emergency activity. The requested time data were then compiled to describe the minimum, the maximum and the mode for each emergency-related activity. This reduced data ambiguity. The values were then sampled at random from the input probability distribution.
Table I. Response model cycle times

Activity                                  Timeline
Information flow to trigger response      Within 3 hours
Coordination mechanism                    Within 13 hours
Physical flow                             Within 22 hours
Clearance activities                      Within 37 hours
Total response time                       Within 72 hours (a)

Note: (a) These activities do not need to be conducted in a sequential manner
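The linguistic-to-triangular transformation described above can be sketched in Python. Only the "1 to 10 hours" transportation range comes from the text; the linguistic categories and the other parameter values are placeholders, not the authors' actual coding scheme.

```python
import random

random.seed(3)

# Hypothetical mapping from linguistic estimates to triangular
# (minimum, most likely, maximum) parameters, in hours.
linguistic_to_triangular = {
    "very fast": (0.5, 1, 2),
    "typical":   (1, 3, 6),
    "slow":      (1, 5, 10),  # e.g. transportation time to prone areas
}

def sample(term):
    lo, mode, hi = linguistic_to_triangular[term]
    # Note: random.triangular takes (low, high, mode).
    return random.triangular(lo, hi, mode)

draws = [sample("slow") for _ in range(10_000)]
print(round(min(draws), 2), round(max(draws), 2))
```

Every draw stays inside the elicited range, while values near the stakeholders' "most likely" estimate occur most often.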

[Figure 4. Example of triangular distribution: transportation time from discount store to prone area, with a minimum of 30, a most likely value of 60 and a maximum of 120 minutes.]
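Figure 4's input can be reproduced with Python's standard-library triangular sampler. This is only a sketch of the input distribution, not the authors' Arena model.

```python
import random

random.seed(1)

# Transportation time from discount store to prone area (minutes):
# minimum 30, most likely 60, maximum 120, as in Figure 4.
# random.triangular takes the mode as its third argument.
times = [random.triangular(30, 120, 60) for _ in range(50_000)]

# For a triangular distribution the mean is (min + mode + max) / 3,
# i.e. (30 + 60 + 120) / 3 = 70 minutes here.
mean = sum(times) / len(times)
print(round(mean, 1))
```

The sample mean converges on 70 minutes even though the distribution is skewed toward longer trips.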

Each set of samples was referred to as an iteration, and the resulting outcome from that sample was recorded. In this way, the developed Monte Carlo simulation was able to provide a much more comprehensive view of what could happen. It informs the emergency logistics planner not only of what may happen, but of how likely it is to happen.

The software used for this Monte Carlo simulation was Arena. In Arena, the user builds an experimental model by placing modules (boxes of different shapes) that represent processes or logic (Altiok and Melamed, 2007). Connector lines are then used to join these modules together and specify the flow of entities. While modules have specific actions relative to entities, flow and timing, the precise representation of each module and entity relative to real-life objects is left to the modeller. Statistical data, such as cycle time and work-in-process levels, can be recorded and output as reports. Arena also provides the opportunity to test the proposed logistics response model under different types of constraints. Table II illustrates the types of constraints used in the simulation model.

The simulated emergency logistics output showed that the average time to help victims is 97.4 hours, while the maximum time is 213.1 hours and the minimum time to help victims is 27.5 hours. These numbers are based on over 30,000 simulation runs. Table III illustrates the proposed logistics response model's simulated output over a number of simulation runs. From the simulated output, it was observed that the average time to help victims still exceeded the 72-hour limit on emergency logistics operation time. The simulation also identified that the communication activity in the emergency logistics operation system is a critical problem.
Other identified constraints in the simulated emergency logistics response model relate to the communication activity between retailers' headquarters and their local branches near the crisis areas in the south of Thailand, the road clearing activity by the Thai army, and the goods distribution activity in the tsunami-affected area. After some model refinement, controlling the time variance of these activities within the simulation model, the average time to help victims was reduced to 17.8 hours, with a maximum information flow time of 35.1 hours and a minimum of 6.7 hours. Table IV describes the simulated output of the proposed logistics response model under specific constraints related to the transport time from retail stores to prone areas.

The results in Table IV clearly show that even though the proposed logistics response model was able to meet the 72-hour deadline, emergency operations will still be subject to a number of constraints, foreseen or unforeseen, which can hinder them. Key bottlenecks observed in the simulated emergency logistics response model are the discount stores' communication lead time between headquarters and local branches, road clearance time, preparation of supplies for the prone area, and aid distribution in the prone area.

5. Summary
Proposing a new emergency or disaster logistics response model is in itself challenging enough (Kovacs and Spens, 2009), but not sufficient. What is needed is the ability to predict the model's behaviour if it is to be implemented. This can be done through the development of a Monte Carlo simulation model. This option is much less risky
Table II. Possible types of activity constraints (non-exhaustive list) used in the emergency logistics response simulation model

Activity                                                   Low        Medium      High
Information flow from disaster area to trigger response    <1 hour    1 hour      3 hours
Clearance equipment preparation                            2 hours    2 hours     3 hours
Transport of clearance equipment                           2 hours    15 hours    18 hours
Clearance activity                                         1 hour     24 hours    7 days
Transportation time from store to prone area               <1 hour    1-5 hours   5-10 hours
Vehicle speed (km/hour)                                    60         45          20
Aid distribution in prone area                             <1 hour    2 hours     2.5 hours

Table III. Proposed logistics response model simulated output

Number of simulation runs   Minimum (hours)   Average (hours)   Maximum (hours)
100                         29.5              92.0              188.6
1,000                       29.6              95.9              198.8
10,000                      27.5              97.2              206.9
20,000                      27.5              97.4              212.5
30,000                      27.5              97.4              213.1
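Table III shows a pattern typical of Monte Carlo output: the sample average stabilises after a few thousand runs, while the sample maximum keeps creeping upward as rarer tail draws appear. A small experiment illustrates this; the triangular(27, 90, 220) parameters are purely illustrative stand-ins for the response-time distribution, not the paper's model.

```python
import random

random.seed(42)

def sample_stats(runs):
    """Average and maximum over `runs` draws from an assumed
    triangular response-time distribution (hours)."""
    times = [random.triangular(27, 220, 90) for _ in range(runs)]
    return sum(times) / runs, max(times)

# The average converges quickly; the maximum, driven by extreme
# order statistics, continues to grow with the number of runs.
for runs in (100, 1_000, 10_000, 30_000):
    avg, mx = sample_stats(runs)
    print(f"{runs:>6} runs: average {avg:.1f}, max {mx:.1f}")
```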

Table IV. Simulated emergency logistics response output under specific constraints

                                        Activities time (hours)
Type of simulation                      Min     Average   Max
Integrated model
  Basic model (a)                       27.5    97.4      213.1
  Additional deliveries required        51.1    127.2     250.9
  Low constraint in transport time      28.2    97.9      214.8
  Medium constraint in transport time   31.0    100.4     215.8
  High constraint in transport time     35.1    104.9     219.8
Separated model
  Information flow simulation           6.7     17.8      35.1
  Physical flow simulation              18.6    86.4      197.9

Note: (a) The emergency logistics response model by Banomyong et al. (2009)


than actually waiting for another tsunami or other natural disaster to happen and testing the developed emergency logistics response model in a real-life situation. A number of constraints affected the simulation model outputs, and these need to be considered in terms of their respective impact on emergency operations. The biggest constraint in the Thai logistics response model was identified as the road clearance activities that enable access to prone areas. The longer it takes to clear access, the longer it will take for supplies to reach their destination; the alternative, of course, would be to use different transport modes to deliver aid, such as air drops or even coastal shipping services. Emphasis can therefore be targeted at this specific activity as a key success factor for completing the emergency response within the determined time.

The simulation model results enable decision makers to re-think proposed logistics response models and test their performance in a controlled environment. The re-engineering of information, physical and control flows within the logistics response model can then be conducted and tested to further improve the model's output. However, it must not be forgotten that the revised and simulated logistics response model will never be perfect, as there are still other factors and constraints that have not been, and can never be, included in a simulation model. A simulation model is only a tool that can help emergency logistics decision makers better understand the dynamics within an emergency logistics response plan. Simulation results are always subject to limitations, but they are a good starting point in any planning process.

References
Altiok, T. and Melamed, B. (2007), Simulation Modeling and Analysis with Arena, Elsevier, Burlington, MA, p. 456.
Banomyong, R., Beresford, A.K.C. and Pettit, S. (2009), "Logistics emergency response model: the case of Thailand's tsunami affected area", International Journal of Services Technology and Management, Vol. 12 No. 4, pp. 414-29.
Barbarosoglu, G., Ozdamar, L. and Cevik, A. (2002), "An interactive approach for hierarchical analysis of helicopter logistics in disaster relief operations", European Journal of Operational Research, Vol. 140, pp. 118-33.
Baas, S.M. and Kwakernaak, H. (1977), "Rating and ranking of multiple aspect alternatives using fuzzy sets", Automatica, Vol. 13, pp. 47-58.
Beresford, A.K.C. and Pettit, S. (2009), "Emergency logistics and risk mitigation in Thailand following the Asian tsunami", International Journal of Risk Assessment and Management, Vol. 13 No. 1, pp. 7-21.
Bonissone, P.P. (1982), "A fuzzy sets based linguistic approach: theory and applications", in Gupta, M.M. and Sanchez, E. (Eds), Approximate Reasoning in Decision Analysis, North-Holland, Amsterdam, pp. 329-39.
Carter, W.N. (1999), Disaster Management: A Disaster Management Handbook, Asian Development Bank, Manila, p. 416.
Chen, S.J. and Hwang, C.L. (1992), Fuzzy Multiple Attribute Decision Making: Methods and Applications, Springer, New York, NY.
Haas, J.E., Kates, R.W. and Bowden, M. (1977), Reconstruction Following Disaster, MIT Press, Cambridge, MA, p. 366.

Jennings, E., Beresford, A.K.C. and Banomyong, R. (2000), "Emergency relief logistics: a disaster response model", Occasional Paper No. 64, Department of Maritime Studies and International Transport, Cardiff University, Cardiff, p. 35.
Kao, H.P., Kimbler, D.L. and Juang, C.H. (1994), "Fuzzy modeling for designing products of consumer-perceived quality", in Ayyub, B.M. and Gupta, M.M. (Eds), Uncertainty Modeling and Analysis: Theory and Applications, Elsevier Science, New York, NY, pp. 441-58.
Kovacs, G. and Spens, K. (2009), "Identifying challenges in humanitarian logistics", International Journal of Physical Distribution & Logistics Management, Vol. 39 No. 6, pp. 506-28.
Mooney, C.Z. (1997), Monte Carlo Simulation, Sage University Paper Series on Quantitative Applications in the Social Sciences No. 07-116, Sage, Thousand Oaks, CA.
Oloruntoba, R. and Gray, R. (2009), "Customer service in emergency relief chains", International Journal of Physical Distribution & Logistics Management, Vol. 39 No. 6, pp. 486-505.
Pettit, S. and Beresford, A.K.C. (2005), "Emergency relief logistics: an evaluation of military, non-military and composite response models", International Journal of Logistics: Research and Applications, Vol. 8 No. 4, pp. 313-31.
Pettit, S. and Beresford, A.K.C. (2009), "Critical success factors in the context of humanitarian aid supply chains", International Journal of Physical Distribution & Logistics Management, Vol. 39 No. 6, pp. 450-68.
Zadeh, L.A. (1965), "Fuzzy sets", Information and Control, Vol. 8, pp. 338-52.

Further reading
Beresford, A.K.C. and Pettit, S. (2007), "Disaster management and risk mitigation in Thailand following the Asian tsunami", Proceedings of the International Conference on Supply Chain Management, Bangkok, July.
Pan American Health Organization (2001), Humanitarian Supply Management and Logistics in the Health Sector, Emergency Preparedness and Disaster Relief Program, Department of Emergency and Humanitarian Action, Sustainable Development and Healthy Environments, World Health Organization, Washington, DC.
Stephenson, R.S. (1993), Logistics, Disaster Management Programme, United Nations Development Program (UNDP), Geneva.
Yin, R.K. (1994), Case Study Research: Design and Methods, Sage, Newbury Park, CA.

About the authors
Ruth Banomyong is an Associate Professor in the Department of International Business, Logistics and Transport Management at the Faculty of Commerce and Accountancy, Thammasat University, Thailand. He has co-authored four books in the areas of logistics, supply chain management and international trade. His research interests include multimodal transport, international logistics, logistics policy development and supply chain performance measurement. Ruth Banomyong is the corresponding author and can be contacted at: ruth@banomyong.com
Apichat Sopadang is an Assistant Professor in the Department of Industrial Engineering, Faculty of Engineering, Chiangmai University, Thailand. He received a doctoral degree from Clemson University, South Carolina, USA. His research interests include logistics and supply chain management.
