Volume 35 Number 2
Spring 2004
Printed in the U.S.A.
How Software Project Risk Affects
Project Performance: An Investigation
of the Dimensions of Risk and
an Exploratory Model
Linda Wallace
Department of Accounting and Information Systems, Virginia Polytechnic Institute and State
University, 3007 Pamplin Hall (0101), Blacksburg, VA 24061, e-mail: wallacel@vt.edu
Mark Keil
Department of Computer Information Systems, J. Mack Robinson College of Business,
Georgia State University, Atlanta, GA 30303, e-mail: mkeil@gsu.edu
Arun Rai
Center for Process Innovation and Department of Computer Information Systems,
J. Mack Robinson College of Business, Georgia State University, Atlanta, GA 30303,
e-mail: arunrai@gsu.edu
ABSTRACT
To reduce the high failure rate of software projects, managers need better tools to assess
and manage software project risk. In order to create such tools, however, information sys-
tems researchers must first develop a better understanding of the dimensions of software
project risk and how they can affect project performance. Progress in this area has been
hindered by: (1) a lack of validated instruments for measuring software project risk that
tap into the dimensions of risk that are seen as important by software project managers,
and (2) a lack of theory to explain the linkages between various dimensions of software
project risk and project performance. In this study, six dimensions of software project
risk were identified and reliable and valid measures were developed for each. Guided by
sociotechnical systems theory, an exploratory model was developed and tested. The re-
sults show that social subsystem risk influences technical subsystem risk, which, in turn,
influences the level of project management risk, and ultimately, project performance.
The implications of these findings for research and practice are discussed.
Subject Areas: Sociotechnical Systems Theory, Software Project Risk,
and Structural Equation Modeling.
The authors would like to thank the Project Management Institute's Information Systems Special Interest
Group (PMI-ISSIG) for supporting this research. We would also like to thank Georgia State University for
their financial support through the PhD research grant program. The authors gratefully acknowledge Al
Segars and Ed Rigdon for their insightful comments and assistance at various stages of this project.
Corresponding author.
290 How Software Project Risk Affects Project Performance
INTRODUCTION
With information technology playing an increasing role in the economy, compa-
nies have grown more heavily dependent on the successful delivery of information
systems (IS). Yet, many software projects result in systems that do not function
as intended, are not used, or are never delivered (Gordon, 1999; Johnson, 1999).
As organizations continue to invest time and resources into strategically important
software projects, managing the risk associated with such projects becomes a crit-
ical concern. Failure to understand, identify, and manage risk is often cited as a
major cause of IS project problems such as cost and schedule overruns, unmet user
requirements, and the production of systems that do not provide business value
(Alter & Ginzberg, 1978; Barki, Rivard, & Talbot, 1993; Boehm, 1991; Charette,
1989; McFarlan, 1981).
Advocates of IS risk management claim that by identifying and analyzing
threats to success, managers can take actions to reduce the chance of project failure.
Researchers have often stressed the importance of empirically categorizing the
sources and types of risks associated with software development projects to better
prioritize and assess the possible exposure and losses that may result (e.g., Boehm,
1991; McFarlan, 1981). Unfortunately, relatively few tools are available for iden-
tifying software project risk factors. Moreover, there is a lack of theory to explain
the linkages between various dimensions of software project risk and project per-
formance. While various risk checklists (e.g., Boehm, 1991) and frameworks (e.g.,
Keil, Cule, Lyytinen, & Schmidt, 1998) have been proposed, the software project
risk construct, its underlying dimensions, and the relationship of these dimen-
sions to project performance remain largely unexplored. Until there is a better
understanding of risk and its effects on software projects, project managers will
face difficulties in developing the appropriate risk mitigation strategies that enable
them to produce successful information systems.
The purpose of this research was twofold: (1) to identify underlying dimensions of
software project risk and to develop and validate an instrument for measuring
software project risk, and (2) to build and test a model guided by theory that relates
the risk dimensions to project performance.
The remainder of the paper is organized into five sections. First, we present a
brief review of the literature, highlighting six dimensions of software project risk.
Second, guided by the project management literature and sociotechnical systems
theory, we introduce a model of risk and project performance that builds on the six
dimensions. Third, we describe the development and validation of a risk instrument
designed to measure the six dimensions. Fourth, the model of risk and performance
is tested using structural equation modeling and the results are discussed. Finally,
we summarize the implications of our work for both research and practice.
LITERATURE REVIEW
In decision theory a risk may lead to either positive or negative consequences, and
the concept of risk reflects the variation in the distribution of possible outcomes
(Arrow, 1970). Thus, a risky alternative is one for which the variance is large. The
view of risk used in decision theory, however, is not consistent with the findings
from empirical studies of how managers dene risk (March & Shapira, 1987). For
managers, the uncertainty associated with positive outcomes is not considered to
be a risk at all; only the threat of negative outcomes is considered a risk. Thus,
managers seek to influence their environment so as to bring risk (i.e., possible
negative outcomes) under control (MacCrimmon & Wehrung, 1984). Consistent
with the views of March and Shapira (1987), we define software project risk factors
as conditions that can pose a serious threat to the successful completion of a software
development project.
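The contrast between the decision-theoretic and managerial views of risk can be sketched numerically. In the hypothetical example below, two outcome distributions have identical variance, but only one carries any possibility of loss; the variance measure treats them as equally risky, while a downside-only measure (closer to the managers' view) does not. The distributions and the target of zero are illustrative assumptions, not data from the study.

```python
def variance(outcomes):
    """Decision-theoretic risk: spread of the outcome distribution."""
    mean = sum(p * x for x, p in outcomes)
    return sum(p * (x - mean) ** 2 for x, p in outcomes)

def downside_risk(outcomes, target=0.0):
    """Manager-style risk: only outcomes below the target count."""
    total = 0.0
    for x, p in outcomes:
        if x < target:
            total += p * (target - x) ** 2
    return total

# Two hypothetical projects as (outcome, probability) pairs.
symmetric = [(-10.0, 0.5), (10.0, 0.5)]  # loss is possible
upside    = [(0.0, 0.5), (20.0, 0.5)]    # no loss is possible

print(variance(symmetric), variance(upside))          # 100.0 100.0
print(downside_risk(symmetric), downside_risk(upside))  # 50.0 0.0
```

Under the variance view both projects are equally risky; under the downside view only the first poses a threat.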
Advocates of software project risk management suggest that project managers
should identify and control risk factors to reduce the chance of project failure. If
the key elements to be controlled are the project risk factors (Karolak, 1996), then
the process of risk assessment must begin with the identification of these factors.
Though several lists of risk factors have been published in the literature (e.g.,
Alter & Ginzberg, 1978; Boehm, 1991; Heemstra & Kusters, 1996; McFarlan,
1981; Moynihan, 1997; Schmidt, Lyytinen, Keil, & Cule, 2001), there has been
little attempt to move beyond checklists to establish the underlying dimensions of
the software project risk construct or to develop good instruments for assessing
software project risk.
Unfortunately, there is no general agreement as to the dimensionality of the
software project risk construct. For example, in his widely cited article on managing
software project risk, McFarlan (1981) identifies three dimensions of risk: project
size, project structure, and experience with the technology. The derivation of these
particular dimensions is not specified, and McFarlan does not provide an instrument
specifically designed to measure these dimensions of risk. As an example of how
a manager might measure risk, McFarlan presents a sample 54-item risk
assessment questionnaire used by a company for measuring software project risk.
While the Dallas Tire case (HBS case no. 9-180-006) is cited as the source for the
questionnaire items presented in the article, the exact origin of the questionnaire
and its psychometric properties remain unknown. Even McFarlan (1981, p. 144)
himself points out that "no analytic framework lies behind these questions."
There has been only one other previous attempt to provide a measure of
software project risk that we are aware of (Barki et al., 1993). Barki et al. (1993)
reviewed the IS literature and compiled a list of 35 risk variables, which formed the
basis for a questionnaire consisting of 144 items. They collected data, compared
several factor solutions, and found the most interpretable solution to consist of
five factors, which they labeled: technological newness, application size, lack of
expertise, application complexity, and organizational environment. In refining their
instrument, they retained 23 variables related to uncertainty, measured by 83 items.
While the instrument developed by Barki et al. (1993) represents a significant
advance in software risk measurement, there was no attempt to involve practicing
project managers in the identification or validation of risk items or the factors that
emerged from their analysis. In their initial factor analysis, a nine-factor solution
emerged, but was judged to be uninterpretable. Therefore, a five-factor solution
was imposed because it was easiest to interpret. However, a recent study based
on a subset of the same variables suggested a six-factor solution (Jiang & Klein,
2001). Thus, the dimensionality of the software project risk construct remains open
to question.
The variables on the Barki et al. (1993) instrument were measured with a
range of single-item and multi-item binary scales, ratio scales, interval scales,
and semantic differential scales. The large number of single-item measures makes
it impossible to assess the measurement reliability for many of the variables in
the instrument, and the broad range of different measurement scales makes it
cumbersome to calculate an overall measure of project risk. In order to assess
overall risk, each of the variables can be converted to a binary scale and averaged;
however, this reduces the precision of the instrument.
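The precision loss can be illustrated with a toy calculation. Assuming two hypothetical projects rated on four variables using 7-point scales (the ratings are invented for illustration), dichotomizing each rating before averaging erases a difference that the raw scales still capture:

```python
def overall_risk_binary(ratings, threshold=4):
    """Overall risk as the mean of dichotomized ratings (1 if >= threshold)."""
    flags = [1 if r >= threshold else 0 for r in ratings]
    return sum(flags) / len(flags)

def overall_risk_scaled(ratings, lo=1, hi=7):
    """Overall risk as the mean of ratings rescaled to [0, 1]."""
    return sum((r - lo) / (hi - lo) for r in ratings) / len(ratings)

project_a = [7, 7, 1, 1]  # extreme ratings
project_b = [4, 4, 3, 3]  # moderate ratings

# Binary averaging sees the two projects as identical...
print(overall_risk_binary(project_a), overall_risk_binary(project_b))  # 0.5 0.5
# ...while the full scales still distinguish them.
print(overall_risk_scaled(project_a), overall_risk_scaled(project_b))
```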
Given the limitations of both the checklist approach and existing instrumen-
tation for measuring software project risk, a need remains for a rigorously validated
instrument that is grounded in the IS literature and validated with practicing soft-
ware project managers. Therefore, our first step was to develop a comprehensive
picture of the underlying dimensions of software project risk (Smith, Milberg, &
Burke, 1996). First, an extensive literature review was performed to gain more
insight into the factors that may increase the riskiness of a software project. In
addition to examining the IS academic literature, articles written by practition-
ers detailing their experiences with software development projects were reviewed
(e.g., Casher, 1984; Keider, 1984). Both types of literature were used to produce
an extensive list of factors that have been identified as potentially contributing to
IS failure. Similar threats were grouped together to get a clearer conceptualization
of the general types of software project risk factors. Commonly cited problems in
software development projects were sorted in an iterative and interpretive manner,
and categories were combined and modified intuitively to suggest underlying
dimensions of software project risk. This approach is consistent with how
other researchers have gone about the process of defining the domain of a construct
(e.g., Moore & Benbasat, 1991; Smith et al., 1996). Several iterations were per-
formed to develop a classication scheme in which the categories or dimensions of
risk were as distinct as possible. This procedure resulted in the identication of six
dimensions of risk, which are described briey in Table 1 along with representative
references.
A MODEL OF RISK AND PERFORMANCE
In this section, we present and describe a model (see Figure 1) that uses the six
dimensions of software project risk as building blocks for creating higher-level
latent constructs, which are then used to examine the relationship between risk
and project performance. The model's development was guided by the project
management literature and sociotechnical systems theory.
Central to our model is the notion that project management has an impact on
project performance, or outcome. Our conceptualization of project performance
includes both product and process performance (Nidumolu, 1996). Product per-
formance refers to the successfulness of the system that was developed, whereas
process performance refers to the successfulness of the development process itself
(i.e., extent to which the project was delivered on schedule and within budget).
Project management includes a number of key processes, such as planning and
control (Project Management Institute, 2000), and requires an effective project
manager to ensure that these processes are in place. To be effective, the project
Table 1: Six dimensions of risk.

Organizational Environment Risk: The risk or uncertainty surrounding the organizational environment in which a software project takes place was identified as a major area of project risk. Factors such as organizational politics, the stability of the organizational environment, and organizational support for a project have been shown to impact project performance. (Ewusi-Mensah & Przasnyski, 1991; Jarvenpaa & Ives, 1991; Jones, 1994; O'Toole & O'Toole, 1966)

User Risk: The lack of user involvement during system development is one of the most often cited risk factors in the literature. If the attitudes of users toward a new system are unfavorable, then it is likely that they will not cooperate during a development effort, leading to an increased risk of project failure. (Davis, 1982; Keider, 1984; Robey & Farrow, 1982; Tait & Vessey, 1988)

Requirements Risk: Uncertainty surrounding system requirements is another major factor that can impact project performance. Frequently changing requirements are not the only possible requirements-related problem associated with system development projects. Incorrect, unclear, inadequate, ambiguous, or unusable requirements may also increase the problems, or risks, associated with a software development project. (Boehm, 1991; Charette, 1989; Schmidt et al., 2001; Thayer, Pyster, & Wood, 1980)

Project Complexity Risk: The inherent complexity of a software project, in terms of the difficulty of the project being undertaken, represents another dimension of software project risk. There are several attributes of a project that can indicate how complex it is, such as whether new technology is used, if the processes being automated are complex, and if there are a large number of required links to existing systems and external entities. (Barki et al., 1993; Kemerer & Sosa, 1988; McFarlan, 1981; Schmidt et al., 2001)

Planning and Control Risk: The planning and control of the software development process adds another dimension to the riskiness of a project. Poor planning and control often leads to unrealistic schedules and budgets and a lack of visible milestones to assess whether the project is producing the intended deliverables. Without accurate duration estimates, managers do not know what resources to commit to a development effort. The net result is often excessive schedule pressure or unrealistic schedules that can increase project risk. (Jones & McLean, 1970; Keider, 1984; Metzger, 1981; Moore, 1979; Thayer et al., 1980)

Team Risk: Team risk refers to issues associated with the project team members that can increase the uncertainty of a project's outcome, such as team member turnover, insufficient knowledge among team members, cooperation, motivation, and team communication issues. (Abdel-Hamid, 1989; Alter & Ginzberg, 1978; Brooks, 1987; Jiang, Klein, & Means, 2000; Schmidt et al., 2001)
Figure 1: A model of risk and performance.
[Path diagram: Social Subsystem Risk (Organizational Environment Risk, User Risk) → Technical Subsystem Risk (Requirements Risk, Project Complexity Risk), H6 (+); Social Subsystem Risk → Project Management Risk, H4 (+); Technical Subsystem Risk → Project Management Risk, H5 (+); Project Management Risk (Team Risk, Planning & Control Risk) → Process Performance, H1 (−), and → Product Performance, H2 (−); Process Performance → Product Performance, H3 (+).]
manager must coordinate the activities of a diverse project team and he or she must
ensure that the team members have the skills to carry out the project. Thus, the
Project Management Risk construct can be modeled as a construct measured by
two key underlying risk dimensions: Planning/Control and Team.
One would expect that project management influences both product and process
performance. The linkage to process performance is readily apparent in that
poorly managed projects are more likely to exhibit cost and schedule overruns. The
linkage to product performance is subtler. Nevertheless, it is reasonable to expect
that poorly managed projects are more likely to yield systems that fail to satisfy
user needs or are otherwise judged to be less successful on dimensions such as sat-
isfaction or quality. Furthermore, we posit a linkage between process and product
performance; projects that incur cost and schedule overruns are simply less likely
to deliver a successful product. Thus, we state the following hypotheses:
H1: Project Management Risk will have a significant negative impact
on Process Performance.
H2: Project Management Risk will have a significant negative impact
on Product Performance.
H3: Process Performance will have a significant positive impact on
Product Performance.
To guide the development of our model further, we turned to concepts from
sociotechnical systems theory, which emphasizes the fit between technical and
social subsystems (Pava, 1983; Trist, 1981). Sociotechnical systems theory (SST)
arose in the early 1950s from some field research on the British coal mining industry.
Researchers from the Tavistock Institute discovered a South Yorkshire coalfield
where new technology had allowed the formation of relatively autonomous groups
of miners with interchanging roles. This work arrangement contrasted sharply with
the atmosphere that surrounded other coal mines where miners were assigned to
one-man-one-task roles in accordance with the principles of scientific management
(Trist, 1981).
Trist and others at the Tavistock Institute were fascinated with what they
had found. The autonomous work groups seemed to enjoy a more positive work
environment. Moreover, they experienced higher productivity with lower absen-
teeism and fewer accidents than workers at conventional mines. What intrigued
Trist and his colleagues was that management and labor had made a conscious
choice to modify the work design to accommodate the shift in technology. This
was interesting to the Tavistock researchers because the mining company might
just as easily have instituted the new technology without examining or changing the
work practices.
Several IS researchers have embraced the sociotechnical approach, applying
the concepts to the design and implementation of information systems (Bostrom &
Heinen, 1977; Eason, 1988; Mumford, 1981; Pava, 1983). Sociotechnical design
requires a high degree of participation from users, an idea that has been discussed
heavily in the IS literature. Consistent with sociotechnical systems theory, we posit
that there are two risk elements that must be simultaneously managed in order to
achieve favorable project outcomes. We label these two elements Social Subsystem
Risk and Technical Subsystem Risk.
Social Subsystem Risk captures the notion that a software project is embedded
within a social context that may be unstable or highly politicized, causing reductions
in commitment and resources needed to successfully complete the project. The
social context in which the project is situated may also be characterized by users
that are resistant to change, or not committed to the project. Thus, Social Subsystem
Risk can be modeled as a construct consisting of two underlying risk dimensions:
Organizational Environment and User.
Technical Subsystem Risk captures the notion that a software project in-
volves the creation of a technical artifact of some complexity that is described by
a set of requirements. The technical artifact may be more difficult to create when
the requirements are unclear or highly volatile. The technical artifact can also be
characterized by its level of complexity. The use of new technology or unfamiliar
technology can have an effect on the overall complexity of the technical subsystem.
Thus, Technical Subsystem Risk can be modeled as a construct represented by two
underlying risk dimensions: Requirements and Project Complexity.
Prior literature has acknowledged a contingent relationship between the na-
ture of the project and how it should be managed. McFarlan (1981), for example,
has suggested that the selection and use of formal planning and control techniques,
as well as internal and external integration tools, should be tailored to the type
of project. Extrapolating this notion to the domain of software project risk, we
posit a contingent relationship between the type of project (i.e., the social and
technical subsystems within which the project is situated) and the level of project
management risk. Thus, we state the following hypotheses:
H4: Social Subsystem Risk will have a significant positive impact on
Project Management Risk.
H5: Technical Subsystem Risk will have a significant positive impact
on Project Management Risk.
Moreover, since the technical subsystem is situated within a particular social
context, we posit that the social subsystem can affect the technical subsystem. As
noted earlier, the social subsystem consists of Organizational Environment Risk and
User Risk. If the organizational environment is unstable or the users are not
cooperative, this will likely make it more difficult to produce an accurate and stable
set of requirements,
thus making it more challenging to create the technical artifact. Thus, we state the
following hypothesis:
H6: Social Subsystem Risk will have a significant positive impact on
Technical Subsystem Risk (i.e., more unstable environments will
be associated with more complex and uncertain requirements,
thus leading to greater risks associated with creation of the artifact
itself).
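The hypothesized path structure (H1–H6) can be made concrete with a small simulation. The sketch below generates synthetic data in which each construct is driven by its hypothesized antecedents with the signs stated above. The path coefficients and sample size are arbitrary choices for illustration, not estimates from the study, and the actual analysis used structural equation modeling rather than simple correlations.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # large synthetic sample so the sign pattern is clearly visible

# Synthetic constructs wired according to H1-H6 (coefficients are arbitrary).
social    = rng.standard_normal(n)
technical = 0.5 * social + rng.standard_normal(n)                    # H6 (+)
pm_risk   = 0.4 * social + 0.4 * technical + rng.standard_normal(n)  # H4 (+), H5 (+)
process   = -0.5 * pm_risk + rng.standard_normal(n)                  # H1 (-)
product   = -0.3 * pm_risk + 0.4 * process + rng.standard_normal(n)  # H2 (-), H3 (+)

def corr(a, b):
    """Pairwise Pearson correlation between two construct scores."""
    return float(np.corrcoef(a, b)[0, 1])

# The pairwise correlations recover the hypothesized signs.
print(corr(social, technical) > 0)  # H6: positive
print(corr(pm_risk, process) < 0)   # H1: negative
print(corr(process, product) > 0)   # H3: positive
```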
Before testing the model and the above hypotheses, we turn to the process that was
used to develop and validate measures of the modeled constructs.
INSTRUMENT DEVELOPMENT AND VALIDATION
Figure 2 provides an overview of the methodology used to develop and validate
an instrument to measure software project risk, and Appendix 1 provides more
details about the phases (based on previous work by Churchill [1979] and Smith
et al. [1996]). As seen in Phase One of Figure 2, the purpose of the initial phase
was to specify the domain and dimensionality of the construct. After culling the six
dimensions from the literature (as described in the literature review), we conducted
interviews with practicing software project managers to determine if they were
consistent with how practitioners regard the software project risk construct.
The focus of Phase Two was item creation and scale development. A pool of
possible items was generated to measure software project risk. These items were
subjected to various sorting techniques to validate the six underlying dimensions
of software project risk that were proposed in Phase One and to refine the scale
items intended to tap into the dimensions. In Phase Three, the items were pretested
through the administration of a preliminary version of the instrument to a small
sample of practicing project managers. The analysis of the pretest data was used to
add, delete, or modify items. The survey instrument consisted of 53 items designed
to tap into six different dimensions of software project risk. In Phase Four, the
survey instrument was administered to a large sample (n = 507) of software project
managers and tested for reliability and validity. The sample covered a wide variety
of projects, ranging from very small to very large, thereby strengthening
the generalizability of the findings.
Phase Five focused on scale refinement using statistical analysis. Each scale
model was examined in isolation, in pairs, and in a full-measurement model con-
taining all indicators. The analysis was performed with AMOS 4.01 (Arbuckle,
Figure 2: Overview of methodology used to develop and validate the instrument.
1999). Maximum Likelihood Estimation (MLE) was selected as the most appro-
priate estimation procedure given the large sample size (Bentler & Bonett, 1980;
Hatcher, 1994; Jöreskog & Sörbom, 1989).
There has been some debate in the structural equation modeling community
regarding whether a measurement model should be analyzed separately, before
the analysis of a structural model. Recommendations from researchers in the area
have ranged from a single process combining the analysis of the measurement and
structural model (Hayduk, 1987; Hayduk & Glaser, 2000) to a two-step process that
separates the measurement model and the structural model (Anderson & Gerbing,
1988; Herting & Costner, 2001), to a four-step process consisting of a factor model,
a confirmatory factor model, the proposed structural model, and a more constrained
model (James, Mulaik, & Brett, 1982; Mulaik & Millsap, 2000; Mulaik, 1998).
For this research we chose to separate the measurement model from the
structural model. This approach is consistent with prior research (Carr, 2002; Chong
& Chong, 2002; Frohlich, 2002; Rai, Lang, & Welker, 2002) and was appropriate
here because it allowed us to refine our measures before testing the structural
model. Following a suggestion by Bollen (2000), we applied some of the concepts
behind his jigsaw piecewise technique, in which pieces of the model are fitted
individually and then together until the whole model is complete. We did this by
adding a preliminary step where each latent variable was modeled separately to
identify problems with model fit. This helped to identify psychometric issues so
that our measures could be refined before a full confirmatory factor analysis was
performed. Each of the steps we followed is discussed below.
Analysis of Isolated Models
An analysis of an isolated model of each dimension was performed to assess the
ability of the set of items to tap into their associated dimension of risk. After
assessing the fit of each model, steps were taken to identify and consider areas
of possible model improvement. Specifically, items with loadings less than .60
were considered as candidates for deletion. All deletion decisions were assessed
to ensure that content validity was preserved to the maximum extent possible. The
scales were refined through repeated model fittings based upon an examination of
the standardized loadings, modification indices, overall fit indices, interpretability,
and content validity implications (Frohlich, 2002). Table A1-1 indicates the items
that were deleted as a result of this analysis.
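As a concrete sketch of the screening rule, the snippet below flags any item whose standardized loading falls below .60. The item names and loading values are hypothetical, and in the actual analysis deletion decisions also weighed content validity rather than applying the cutoff mechanically.

```python
# Hypothetical standardized loadings for one risk dimension's items.
loadings = {
    "item1": 0.82,
    "item2": 0.74,
    "item3": 0.55,  # below the .60 cutoff -> candidate for deletion
    "item4": 0.68,
}

CUTOFF = 0.60
candidates_for_deletion = [item for item, lam in loadings.items() if lam < CUTOFF]
retained = {item: lam for item, lam in loadings.items() if lam >= CUTOFF}

print(candidates_for_deletion)  # ['item3']
```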
In the process of refining the instrument, we encountered some instances
in which construct measures exhibited less than optimal psychometric properties
because they tapped into multiple facets of a construct. In these cases, we found
that the only way to improve the psychometric properties of our measures was to
sharpen the focus of a construct. Accordingly, we decided to narrow the domain
of coverage of the complexity and team constructs, so as to achieve reasonable
measurement properties.
Initially, the measurement items pertaining to the complexity construct at-
tempted to capture information concerning technological complexity, as well as
other elements that might contribute to overall project complexity (e.g., complex
requirements). However, the model analysis showed that only the four items dealing
with technical complexity were loading strongly on the complexity construct. Thus,
a decision was made to eliminate the other items, thereby focusing the measure of
complexity around issues involving technical complexity.
Similarly, the initial measurement items pertaining to the team construct
attempted to capture information concerning whether the team members had
sufficient training and experience, as well as other elements that might contribute to
team risk (e.g., conflicts or turnover within the team). The isolated model analysis
revealed, however, that only the three items dealing with the expertise of the
Table 2: Final survey items.
Organizational Environment Risk
Org1 Change in organizational management during the project
Org2 Corporate politics with negative effect on project
Org3 Unstable organizational environment
Org4 Organization undergoing restructuring during the project
User Risk
User1 Users resistant to change
User2 Conflict between users
User3 Users with negative attitudes toward the project
User4 Users not committed to the project
User5 Lack of cooperation from users
Requirements Risk
Req1 Continually changing system requirements
Req2 System requirements not adequately identified
Req3 Unclear system requirements
Req4 Incorrect system requirements
Project Complexity Risk
Comp1 Project involved the use of new technology
Comp2 High level of technical complexity
Comp3 Immature technology
Comp4 Project involves use of technology that has not been used in prior projects
Planning & Control Risk
P&C1 Lack of an effective project management methodology
P&C2 Project progress not monitored closely enough
P&C3 Inadequate estimation of required resources
P&C4 Poor project planning
P&C5 Project milestones not clearly defined
P&C6 Inexperienced project manager
P&C7 Ineffective communication
Team Risk
Team1 Inadequately trained development team members
Team2 Inexperienced team members
Team3 Team members lack specialized skills required by the project
team were loading strongly onto the team construct. Thus, a decision was made
to eliminate the other items, thereby focusing the measure of team risk around
skill-related issues. Table 2 shows the items that were ultimately retained for each
risk dimension.
Overall, the fit measures for the isolated models based on the refined scales
indicated that the proposed models were a good fit with the observed data. The
goodness of fit indices (GFIs) for all the models were .96 or above and the adjusted
goodness of fit indices (AGFIs) were all .86 or above, both of which indicate very
good fit (Bollen, 1989). The lowest comparative fit index (CFI) and the lowest
normed fit index (NFI) were .96 and .95, respectively, and the lowest nonnormed fit
index (NNFI) was .91, all of which are above their targets of .90 (Bentler, 1990;
Bentler & Bonett, 1980). The highest standardized root mean square residual (RMR)
was .05, which is an acceptable level (Bagozzi & Yi, 1988). All six of the calculated
composite scale reliabilities were .78 or above, higher than the recommended
minimum level of .70 (Fornell & Larcker, 1981). In general, the fit statistics
suggested that the scales provided an adequate and reliable measure of the six
dimensions of software project risk.
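The composite scale reliability cited here (Fornell & Larcker, 1981) is straightforward to reproduce from standardized loadings. The sketch below uses illustrative loadings, not the study's actual estimates:

```python
def composite_reliability(loadings):
    """Fornell & Larcker (1981) composite reliability computed from
    standardized factor loadings; each item's error variance is
    taken as 1 - loading**2."""
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

# Illustrative three-item scale (hypothetical loadings, not the study's):
print(round(composite_reliability([0.80, 0.75, 0.70]), 3))  # 0.795
```

A value of .70 or above is conventionally treated as acceptable, which is the threshold the study applies.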
In addition to measures for the six risk dimensions, the nal instrument in-
cluded ve items designed to measure Product Performance and two items designed
to measure Process Performance (as shown in Appendix 2). The measures for Prod-
uct Performance and Process Performance were adapted from Rai and Al-Hindi
(2000) and Nidumolu (1996) and the calculated composite reliabilities for both
measures were above .80. A model of the two performance constructs and their
items was also analyzed and the t statistics were all well above the levels described
above.
Discriminant and Convergent Validity
In order to assess the discriminant validity of the six risk scales, models of each
pair of constructs were estimated. With six constructs of interest, there are fifteen
possible combinations of construct pairs. Two models, a constrained model and
an unconstrained model, were estimated for each possible pair of constructs. A
significant value in a χ² difference test suggests that the unconstrained model is a
better fit to the data than the constrained model (where the constructs are proposed
to correlate perfectly) (Bagozzi & Phillips, 1982; Gerbing & Anderson, 1988).
The χ² difference tests for every pair of constructs were significant at p < .001,
indicating that each scale was measuring a construct that was significantly different
from all other constructs, thus establishing discriminant validity.
To further establish discriminant validity, a confidence interval test was per-
formed. A confidence interval of plus or minus two standard errors was calculated
around the correlations between each pair of constructs. None of the confidence in-
tervals included 1.0, thereby providing additional evidence of discriminant validity
(Anderson & Gerbing, 1988; Hatcher, 1994).
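The confidence interval check can be expressed in a few lines. This is a sketch with hypothetical numbers, since the paper does not report the individual correlations and standard errors:

```python
def interval_excludes_unity(r, se):
    """Discriminant validity check: the interval of plus or minus two
    standard errors around an inter-construct correlation should not
    contain 1.0 (Anderson & Gerbing, 1988)."""
    low, high = r - 2 * se, r + 2 * se
    return not (low <= 1.0 <= high)

# Hypothetical estimates: r = .65, SE = .05 gives the interval (.55, .75),
# which excludes 1.0, so the two constructs are empirically distinguishable.
print(interval_excludes_unity(0.65, 0.05))  # True
```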
The variance-extracted test can be used to establish both discriminant and
convergent validity (Fornell & Larcker, 1981; Netemeyer, Johnston, & Burton,
1990). This test compares the variance-extracted estimates for two constructs of
interest with the square of the correlation of the two constructs. Validity is demon-
strated if both variance-extracted estimates are greater than this squared correlation
(Hatcher, 1994). Table 3 shows the results of this test for the six dimensions. The
diagonal elements in Table 3 are the variance-extracted estimates for each dimension,
and the off-diagonal elements are the squared correlations between constructs. The
diagonal elements are all larger than their corresponding correlation coefficients,
which indicates that the constructs exhibit both convergent and discriminant validity
(Gefen, Straub, & Boudreau, 2000).
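The logic of the test can be illustrated directly with the Table 3 entries for the pair with the largest squared correlation, Requirements (.675) and Planning & Control (.589):

```python
# Variance-extracted test applied to one pair from Table 3: both
# variance-extracted (AVE) estimates must exceed the squared
# inter-construct correlation.
ave_requirements = 0.675
ave_planning_control = 0.589
squared_correlation = 0.419  # largest off-diagonal value in Table 3

passes = (ave_requirements > squared_correlation
          and ave_planning_control > squared_correlation)
print(passes)  # True: validity holds even for the most correlated pair
```

Because the test passes for the pair with the highest squared correlation, it necessarily passes for every other pair in Table 3.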
Adequacy of Model Fit
Confirmatory factor analysis (CFA) helps to test that indicator variables load highly
on predetermined factors, but do not load highly on unrelated factors (Hatcher,
1994). AMOS 4.01 was used to perform a CFA to assess the measurement model
of software project risk (Arbuckle, 1999). The measurement model (see Figure 3)
consisted of six latent variables: Organizational Environment risk (ξ1), User risk
(ξ2), Requirements risk (ξ3), Project Complexity risk (ξ4), Planning/Control risk
(ξ5), and Team risk (ξ6). The covariance matrix of the items comprising the six
dimensions of risk was used as the input for the analysis, and a reference variable
for each latent construct was set to one in order to set the scale for the analysis.
Each latent variable had between three and seven indicator variables (see Table 2).
Each indicator variable represented a questionnaire item that was expected to load
on only one factor.

Table 3: Variance-extracted test.
                              Organizational                        Project      Planning
                              Environment    User   Requirements  Complexity   & Control   Team
Organizational Environment        0.478
User                              0.189      0.617
Requirements                      0.143      0.175      0.675
Project Complexity                0.006      0.007      0.061       0.524
Planning & Control                0.125      0.102      0.419       0.036       0.589
Team                              0.072      0.059      0.203       0.070       0.283      0.606

Figure 3: Measurement model of software project risk.
The six-factor measurement model was estimated, and Figure 3 contains the
fit statistics and scale reliabilities that were obtained. The GFI, CFI, NFI, and NNFI
are all higher than the recommended minimum values of .90 (Bentler, 1990; Bentler
& Bonett, 1980; Hair, Anderson, Tatham, & Black, 1998). The AGFI is also well
above the recommended minimum value of .80, suggesting good model fit. The
standardized RMR is .05, which was judged to be an acceptable value (Bagozzi &
Yi, 1988). The root mean square error of approximation (RMSEA) is well under
the threshold of .08, indicating good model fit after adjusting for the large sample
size (Browne & Cudeck, 1993).
The standardized factor loadings were all significant, with values ranging
from .62 to .91. The t-values, representing the significance of the loadings of the
indicators on the constructs, indicated that all paths were significant at the p < .001
level. This provides evidence to support the convergent validity and unidimen-
sionality of the indicators (Anderson & Gerbing, 1988). Furthermore, the loadings
between the indicators and their underlying constructs were almost identical to the
loadings that were obtained during the paired comparisons, thereby indicating a
high degree of stability of the estimates. Finally, the composite reliability index
(see Fornell & Larcker, 1981) of each scale was also acceptable, with all values
in excess of .78 (Nunnally & Bernstein, 1994). Overall, the fit measures and item
loadings we obtained were found to meet accepted thresholds, thus supporting the
reliability and validity of the refined instrument (Hair, Anderson, Tatham, & Black,
1992; Henry & Stone, 1994).
The measurement model is analogous to a first-order model, where the six
dimensions of risk are allowed to covary. This first-order model can serve as a
baseline model against which a theoretical model, such as a second-order factor
model, can be tested. A chi-square difference test can be used to determine whether
there is a significant difference between the fit provided by the measurement model
and the fit provided by a theoretical model. If the theoretical model is successful in
accounting for the observed relationships between the variables in the measurement
model, then there will not be a significant difference between the chi-square for
the theoretical model and the chi-square for the measurement model (Anderson
& Gerbing, 1988; Hatcher, 1994). The second-order factor (theoretical) model
in this case includes the three higher-level latent constructs (social subsystem
risk, technical subsystem risk, and project management risk) that are proposed
to govern the relationship among the six first-order constructs. The second-order
factor model is shown in Figure 4.
As shown in Figure 4, the six dimensions of risk are not allowed to correlate;
rather, their covariation is explained by the second-order constructs, making
it a more parsimonious model (i.e., fewer paths) than the measurement model.
With the exception of the chi-square statistic (χ²(315) = 737.30), the measures
of fit from the second-order model were identical to the first-order model (see
Figure 3 for the fit statistics), and the loadings of the variables onto their respective
constructs were the same, thereby demonstrating the stability of the second-order
model. To statistically compare the two models, a chi-square difference test was
performed, resulting in a chi-square difference value of 11.88 with 6 degrees of
freedom. With 6 degrees of freedom, the critical value of the chi-square is 22.458
at p < .001. The chi-square difference value of 11.88 is less than the critical value,
meaning that there is no significant deterioration in the fit of the theoretical model
as compared to the measurement model, given the additional constraints that were
imposed.
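The critical value and the resulting p-value can be checked with a short routine. For even degrees of freedom the chi-square tail probability has a closed form, so no statistics library is needed:

```python
import math

def chi2_sf(x, df):
    """Right-tail (survival) probability of the chi-square distribution
    for even df, using the closed form exp(-x/2) * sum of Poisson terms."""
    assert df % 2 == 0, "closed form shown here requires even df"
    term, total = 1.0, 1.0
    for i in range(1, df // 2):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

print(round(chi2_sf(22.458, 6), 4))  # 0.001: the p < .001 critical value
print(chi2_sf(11.88, 6) > 0.05)      # True: the difference of 11.88 is not significant
```

The observed difference of 11.88 corresponds to a p-value of roughly .065, well above the .001 threshold, which is why the additional second-order constraints are judged not to degrade fit.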
Figure 4: Second-order model of software project risk.
It has been suggested that if two models have similar fit, the more parsi-
monious model should be tentatively selected as the superior model (Anderson &
Gerbing, 1988; Hatcher, 1994). However, the choice of a model based solely on
parsimony considerations can lead to the selection of an incorrect model (Marsh
& Hau, 1996; Raykov & Marcoulides, 1999), so other considerations must ac-
company the selection of the model in a situation where multiple models produce
similar fit. Our preference for the theoretical model shown in Figure 4 over the mea-
surement model is based on the empirical confirmation of the expected theoretical
relationships between the first- and second-order constructs, which were derived
from sociotechnical systems theory. The theoretical model is more parsimonious
than the measurement model; it does not show any significant deterioration of fit
when compared to the measurement model; and it is grounded in an established
theory. Furthermore, other second-order factor models were tested using different
combinations of the first-order factors, and all of them showed significant deteriora-
tion of fit when compared to the measurement model, thereby providing additional
validation of the theoretical model shown in Figure 4.
TESTING THE MODEL OF RISK AND PERFORMANCE
Structural equation modeling using AMOS 4.01 was selected to evaluate the
relationships between indicators and latent constructs, as well as the structural
Figure 5: SEM analysis results.
All paths were significant at p < .001 unless noted.
Table A1.1: Summary of item changes (continued).
Original Item | Changes (S1, S2, S3, P, I) | Scale: Final Item

5 | Conflicts between project team members | M, D
6 | Inadequate support from supplier(s) | D
7 | Use of inadequate development methodology | D
8 | Project requires a change in currently utilized technology | M | Comp4: Project involves use of technology that has not been used in prior projects
9 | Responsibilities for project task assignments not clearly defined | D
10 | Project affects a large number of departments or organizational units | M, M, D
11 | Team members continually added to the project after it began | D
12 | Low level of staff motivation | D
13 | Team members not familiar with the type of application being developed | D
14 | Project progress not monitored closely enough | P&C2: Project progress not monitored closely enough
15 | System intended to service a large number of users | D
16 | Large number of links to other systems required by the project | M, D
17 | Lack of top management support for the project | D
18 | Lack of a full-time project manager | D
19 | Project milestones not clearly defined | P&C5: Project milestones not clearly defined
20 | Users resistant to change | User1: Users resistant to change
21 | Inexperienced project manager | P&C6: Inexperienced project manager
22 | Change in organizational management during the project | Org1: Change in organizational management during the project
23 | Project staff not familiar with technology required for the project | D
24 | Project with high level of technical complexity | M | Comp2: High level of technical complexity
25 | Incorrect system requirements | Req4: Incorrect system requirements
26 | One of the largest projects attempted by the organization | D
27 | Inadequately trained project team members | M | Team1: Inadequately trained development team members
28 | Lack of effective project management methodology | M | P&C1: Lack of an effective project management methodology
29 | Project involves the use of new technology | M | Comp1: Project involved the use of new technology
30 | Project goals not agreed upon | M, D
31 | Unrealistic user expectations for the system under development | M, M, D
32 | Failure to gain user commitment | M | User4: Users not committed to the project
33 | Frequent team member turnover | M, M, D
34 | Team members lack specialized skills required by the project | Team3: Team members lack specialized skills required by the project
35 | Insufficient resources provided for the project | M, D
36 | Undefined project success criteria | D
37 | Use of new development methodology | D
38 | Many external suppliers involved in the development project | D
39 | Organization undergoing layoffs or downsizing during the project | M | Org4: Organization undergoing restructuring during the project
40 | Conflicting requirements provided by users | M, M, D
41 | Conflict between user departments | M | User2: Conflict between users
42 | Project goals are not identified | M, D
43 | Gold plating (i.e., the addition of unneeded features to the system specifications) | D
44 | Incorrect assumptions made by project team about user needs | D
45 | Poor project planning | P&C4: Poor project planning
46 | Users not involved in development process | M, M, D
47 | Inexperienced team members | Team2: Inexperienced team members
48 | Lack of team commitment to the project | M, D
49 | Unstable organizational environment | Org3: Unstable organizational environment
50 | Inappropriate technology selected for the project | M, M, M | Comp3: Immature technology
51 | Continually changing scope/objectives | M, D
52 | Project involving the automation of unstructured task(s) | M, D
53 | Team members not familiar with the task(s) being automated | D
54 | System will result in a large number of changes to the organization | D
55 | Negative attitudes by project staff | M, D
56 | Lack of people skills in project leadership | D
57 | Lack or loss of organizational commitment to the project | D
58 | Unclear system requirements | Req3: Unclear system requirements
59 | Shortfalls in externally furnished components | D
60 | No change management practices in place | D
61 | Users not able to identify system requirements | M | Req2: System requirements not adequately identified
62 | Resources shifted away from the project because of changes in organizational priorities | D
63 | Unrealistic budget | M, D
64 | Continually changing system requirements | Req1: Continually changing system requirements
65 | Large number of people working on the project | D
66 | Highly complex task being automated | D
67 | Inexperienced users | D
68 | Poor risk management practices | D
69 | Poor reward structure for team member performance | D
70 | Organization involved in a merger during the project | D
71 | Ineffective project team communication | M | P&C7: Ineffective communication
72 | Inadequate estimation of project budget, schedule and required resources | M | P&C3: Inadequate estimation of required resources
73 | Ineffective project manager | D
74 | Excessive schedule pressure | M, D
75 | Corporate politics with negative effect on project | Org2: Corporate politics with negative effect on project
76 | Users with negative attitudes toward the project | User3: Users with negative attitudes toward the project
New1 | Use of relatively new development tools that have not been widely used before | A, M, M, D
New2 | Complicated contractual arrangements with outside suppliers | A, M, D

Legend: S1 = changes made after first sorting exercise; S2 = changes made after second sorting exercise; S3 = changes made after third sorting exercise; P = changes made after the pretest; I = changes made after the isolated model analysis. A = new item added; M = item modified; D = item deleted. Scale = item retained to measure one of the following scales: Org = Organizational Environment; User = User; Req = Requirements; Comp = Project Complexity; P&C = Planning/Control; Team = Development Team.
grouping together into the six dimensions proposed in Phase One. Table A1.1
identifies the items modified in this first sorting exercise.
The next part of Phase Two involved a different type of sorting exercise to
further refine the item measures. A sorting instrument was developed that contained
a short paragraph description of each of the hypothesized dimensions of software
project risk as well as a random listing of the 61 items remaining from the earlier
sort.
Three IS faculty, three PhD students, and two practicing project managers were
asked to evaluate and critique the dimension descriptions to ensure that they were
accurate representations of each dimension. Changes were made to the definitions
based on the feedback from these eight individuals. Then, the sorting instrument
was administered to a convenience sample of 46 practicing project managers. The
respondents were asked to assign each of the 61 remaining items to the dimension
that they felt was best represented by that item.
The results were first analyzed to identify those items that were assigned
to the same category by at least 32 of the 46 respondents (70%). This frequency
measure revealed 30 items that were placed into the same category by at least 70%
of the subjects. These 30 items were retained without any modifications for the
next phase of the research.
However, there were 31 items that were not placed into the same category by
at least 70% of the subjects. Modifications to the 31 items were considered. Fifteen
of the 31 items were not modified, either because it was not clear how best to reword
them or because, although they were not placed into the same category by 70% of
the respondents, they were placed into a category by a relatively high percentage
of the respondents (i.e., 60%). The remaining sixteen items were reworded in an
attempt to improve their classification rate and to provide a better fit with the other
items in their measurement scale. No items were deleted as a result of this analysis.
Table A1.1 identifies the items that were altered after this second sort.
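The frequency analysis used in these sorting rounds amounts to computing, for every item, the share of judges who placed it in its modal category and retaining items at or above the 70% threshold. A minimal sketch, with hypothetical items and placements:

```python
from collections import Counter

def retained_items(placements, threshold=0.70):
    """Given each item's list of category assignments (one per judge),
    keep items whose modal category reaches the agreement threshold."""
    keep = []
    for item, cats in placements.items():
        modal_count = Counter(cats).most_common(1)[0][1]
        if modal_count / len(cats) >= threshold:
            keep.append(item)
    return keep

# Hypothetical sort by 46 judges; an item is kept only if at least
# 32 of 46 (70%) agree on its category.
placements = {
    "Unclear system requirements": ["Req"] * 40 + ["P&C"] * 6,
    "Unrealistic budget": ["P&C"] * 25 + ["Org"] * 21,
}
print(retained_items(placements))  # ['Unclear system requirements']
```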
After the changes described above were made, the sorting exercise was re-
peated to validate the modifications. The subjects who performed the sort were
a convenience sample of 20 PhD students majoring in IS. The responses to this
exercise were analyzed in a similar manner as the previous exercise. Because item
refinement had taken place, the results were much improved. This time, 53 of the
61 items were consistently placed into the same category by at least 70% of the
respondents. Of the remaining eight items, two items were deleted, five items were
reworded, and one remained unmodified. The output of Phase Two was a compre-
hensive list of 59 risk items designed to measure the six dimensions of the software
project risk construct (see Table A1.1).
Phase three: Pre-testing the instrument and assessing its reliability
The goal of Phase Three was to produce an instrument containing a set of reliable
and internally consistent scales to measure each of the dimensions of software
project risk and to further validate the measure of software project risk. A pretest
was performed to obtain a general assessment of the instrument's appearance, to
further eliminate items that did not contribute significantly to the value of the
instrument, and to initially assess the reliability of the measurement scales. The 59
remaining items from Phase Two were placed on an instrument, and the respondents
were asked to read each statement and indicate the extent to which it characterized
their most recently completed project. The response format for each item was a
seven-point Likert-type scale ranging from "strongly disagree" to "strongly agree."
The pretest was administered to a convenience sample of 43 software project
managers. The survey and a cover letter were distributed to 90 participants of a soft-
ware project management conference along with their registration packets. Four-
teen of these individuals completed the survey and returned it to the registration
desk. The remainder of the sample of 43 in this phase came from personal contacts
as well as surveys that were distributed to graduate IS students and completed by
software project managers in the organizations where they were employed.
The pretest analysis consisted of calculating coefficient alpha and item-total
correlations for each scale. The coefficient alphas and item-total correlations were
used to identify problematic items that were either reworded or deleted from the
scale, resulting in a set of six internally consistent scales. Table A1.1 identifies the
items that were deleted or modified as a result of the pretest analysis.
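Coefficient alpha itself is simple to compute from raw scale data. The sketch below uses synthetic responses, not the pretest data:

```python
import statistics as st

def cronbach_alpha(items):
    """Coefficient alpha for one scale. `items` is a list of per-item
    score lists, with respondents in the same order in every list."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_variance = sum(st.pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variance / st.pvariance(totals))

# Synthetic three-item scale answered by five respondents
# (items that move together yield a high alpha):
scale = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
print(round(cronbach_alpha(scale), 3))  # 0.954
```

Items whose removal raises alpha, or whose item-total correlation is low, are the "problematic items" flagged for rewording or deletion.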
Phase four: Large-scale administration of the instrument
The purpose of the fourth phase of the instrument development process was to
further examine the reliability and validity of the instrument, which now consisted
of 53 items, and to test a measurement model of software project risk. The final
instrument was administered via a web-based survey. As with the pretest, the
subjects were asked to complete the survey as it related to their most recently
completed project. The 53 items were randomly placed onto the final survey, and
respondents were again asked to indicate on a seven-point Likert-type scale the
extent to which each statement characterized their most recently completed project.
The subjects were members of the Information Systems Special Interest
Group (ISSIG) of the Project Management Institute (PMI). At the time of the sur-
vey's administration, there were approximately 7,200 members of the Information
Systems Special Interest Group (PMI-ISSIG). The members of this organization
regularly receive an electronic newsletter or a paper newsletter that updates them
on the activities of the PMI-ISSIG. The electronic newsletter is sent to those indi-
viduals who have shared their e-mail addresses with PMI-ISSIG (approximately
3,800 of the 7,200 members), and the paper newsletter is sent to all 7,200 members
of the PMI-ISSIG. PMI-ISSIG agreed to include information about this research
effort in both the electronic and paper versions of their newsletter. The newsletters
asked project managers to fill out the web survey. In return, the respondents were
told that their names would be entered into a drawing to win a polo-style shirt
with the new PMI-ISSIG logo on it. The e-mail version of the newsletter had an
electronic link directly to the survey web site.
The electronic newsletter was sent on July 23, 1998, and the paper newsletter
was mailed out a few weeks later. As of July 28, 1998, there were approximately
85 responses received from the PMI-ISSIG. At this time, a follow-up e-mail was
sent to the approximately 3,800 individuals with known e-mail addresses in an
attempt to increase the number of responses. It was felt that the request for
participation might have been overlooked in the first newsletter, as it focused on
several other issues in addition to the request for participation in the survey. The
second request elicited a much higher response rate than the first request. The
web survey was accessible until August 31, 1998, and at that time there were 507
responses.
There was no practical way to determine whether the respondents completed
the web-based survey as a result of the request in the paper newsletter they re-
ceived or if they were responding to the electronic newsletter. However, there was
a significant delay in the mailing of the paper newsletter to the 7,200 PMI-ISSIG
members, and the majority of the 507 responses were obtained before the paper
newsletter was mailed out. Therefore, most of the respondents are assumed to be
from the group of 3,800 members who received an electronic request for partic-
ipation. Given that there were 507 respondents, this indicates a response rate of
approximately 13.34%.
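The reported rate follows directly from treating the roughly 3,800 e-mail recipients as the effective sampling frame:

```python
# Response-rate arithmetic for the large-scale administration:
responses = 507
email_frame = 3800  # approximate number of members reachable by e-mail
print(round(100 * responses / email_frame, 2))  # 13.34 (percent)
```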
The 507 project managers who responded to the final survey ranged in age
from 24 to 62, with an average age of 40.5. There were 371 males (73.2%), 127
females (25.0%), and 9 missing values (1.8%). Their previous software project
management experience ranged from 1 to 38 years, with an average of 6.6 years.
The average amount spent on a project was approximately 11 million dollars. The
distribution of the amount spent on the projects was bimodal, with the largest
number of respondents stating that they had spent either 500 thousand dollars or
1 million dollars on their most recently completed project.
Almost three-fourths of the respondents were the project manager of their
most recently completed project. Another 10% classified themselves as a member
of the project team, and 17% classified themselves as something other than the
project manager or a member of the project team. The diversity of project roles
should improve the generalizability of the results of this study, as project risk was
assessed from the viewpoint of individuals with many different responsibilities
during a software development effort.
APPENDIX 2
Product and Process Performance Measures
Product performance measure
(assessed on a 7-point Likert scale with "strongly disagree" and "strongly agree" as
anchors)
PROD1 The application developed is reliable.
PROD2 The application is easy to maintain.
PROD3 The users perceive that the system meets intended functional
requirements.
PROD4 The system meets user expectations with respect to response time.
PROD5 The overall quality of the developed application is high.
Process performance measures
(assessed on a 7-point Likert scale with "strongly disagree" and "strongly agree" as
anchors)
PROC1 The system was completed within budget.
PROC2 The system was completed within schedule.
Linda Wallace is an assistant professor in the Department of Accounting and
Information Systems at Virginia Polytechnic Institute and State University. She
obtained her PhD in computer information systems from Georgia State University
in 1999. Her research interests include software project risk, project management,
and agile software development. Her research has been accepted for publication in
Communications of the ACM, Journal of Systems and Software, and the Proceedings
of the Academy of Management.
Mark Keil is a professor in the Department of Computer Information Systems at
Georgia State University. His research focuses on software project management,
with particular emphasis on understanding and preventing software project esca-
lation. His research is also aimed at providing better tools for assessing software
project risk and removing barriers to software use. His research has been pub-
lished in MIS Quarterly, Sloan Management Review, Communications of the ACM,
Journal of Management Information Systems, Information & Management, IEEE
Transactions on Engineering Management, Decision Support Systems, and other
journals. He has served as an associate editor for the MIS Quarterly and as co-editor
of the DATA BASE for Advances in Information Systems. He currently serves on
the editorial board of IEEE Transactions on Engineering Management.
Arun Rai is the Harkins Professor in the Center for Process Innovation and Depart-
ment of Computer Information Systems at Georgia State University. His research
interests include digitally enabled supply chain management, diffusion and impacts
of information technology, and management of systems delivery. His research has
been published in Accounting, Management and Information Technologies, Annals
of Operations Research, Communications of the ACM, Decision Sciences, Decision
Support Systems, European Journal of Operations Research, IEEE Transactions on
Engineering Management, Information Systems Research, Journal of Management
Information Systems, MIS Quarterly, Omega, and other journals. He has served, or
serves, on the editorial boards of IEEE Transactions on Engineering Management,
Information Systems Research, MIS Quarterly, and other journals. Leading corpo-
rations, including A. T. Kearney, Bozell Worldwide, Daimler-Chrysler, Comdisco,
SAP, and IBM, have sponsored his research work.