
Measuring the Performance of Information Systems: A Functional Scorecard

JERRY CHA-JAN CHANG AND WILLIAM R. KING


JERRY CHA-JAN CHANG is an Assistant Professor in the Department of MIS in the College of Business, University of Nevada, Las Vegas. He has a B.S. in Oceanography from National Ocean University, Taiwan, an M.S. in Computer Science from Central Michigan University, an MBA from Texas A&M University, and an M.S. in MoIS and a Ph.D. in MIS from the University of Pittsburgh. His research interests include performance measurement, IS strategy, management of IS, group support systems, human-computer interaction, organizational learning, and strategic planning. His work has appeared in Information & Management, Decision Support Systems, DATABASE, Communications of the ACM, and Journal of Computer Information Systems, as well as several major IS conference proceedings.
WILLIAM R. KING holds the title University Professor in the Katz Graduate School of Business at the University of Pittsburgh. He has published more than 300 papers and 15 books in the areas of Information Systems, Management Science, and Strategic Planning. He has served as Founding President of the Association for Information Systems (AIS), President of TIMS (now INFORMS), and Editor-in-Chief of MIS Quarterly. He was instrumental in the creation of INFORMS and of the Information Systems Research journal. He recently received the LEO Lifetime Exceptional Achievement Award from AIS.
ABSTRACT: This study develops an instrument that may be used as an information systems (IS) functional scorecard (ISFS). It is based on a theoretical input-output model of the IS function's role in supporting business process effectiveness and organizational performance. The research model consists of three system output dimensions: systems performance, information effectiveness, and service performance. The updated paradigm for instrument development was followed to develop and validate the ISFS instrument. Construct validation of the instrument was conducted using responses from 346 systems users in 149 organizations by a combination of exploratory factor analysis and structural equation modeling using LISREL. The process resulted in an instrument that measures 18 unidimensional factors within the three ISFS dimensions. Moreover, a sample of 120 matched-pair responses of separate CIO and user responses was used for nomological validation. The results showed that the ISFS measure reflected by the instrument was positively related to improvements in business process effectiveness and organizational performance. Consequently, the instrument may be used for assessing IS performance, for guiding information technology investment and sourcing decisions, and as a basis for further research and instrument development.
KEY WORDS AND PHRASES: functional scorecard, information systems performance
measurement, instrument development, structural equation modeling.
Journal of Management Information Systems / Summer 2005, Vol. 22, No. 1, pp. 85-115.
© 2005 M.E. Sharpe, Inc.
0742-1222 / 2005 $9.50 + 0.00.


ASSESSING THE INFORMATION SYSTEM (IS) function's performance has long been an important issue to IS executives. This interest is evident from the prominence of this issue in the various IS issue studies [12, 13, 34, 49, 72] as well as the popularity of annual publications such as ComputerWorld Premier 100 and InformationWeek 500, which involve the use of surrogate metrics to assess overall IS functional performance (ISFP). Executives routinely seek evidence of returns on information technology (IT) investments and sourcing decisions, both types of choices that have become more substantial and a competitive necessity. As the unit that has major responsibilities for these decisions, the IS function is usually believed to be an integral part of achieving organizational success. Yet the overall performance of the IS function has proved to be difficult to conceptualize and to measure.
As the outsourcing of IS subfunctional areas such as data centers and help desks has grown into the outsourcing of the entire IS function, there is an ever-growing need for formal performance assessment [61]. This will permit the establishment of baseline measures to use in judging outsourcing success. So, the issue of an overall IS functional metric, which is, and has been, high on IS executives' priorities, is becoming even more important.
Although there has been a good deal of research on IS efficiency, effectiveness, and success at various levels of analysis, overall functional-level performance is one of the least discussed and studied. According to Seddon et al. [88], only 24 out of 186 studies between 1988 and 1996 can be classified as focusing on the IS functional level. Nelson and Cooprider's [70] work epitomizes this need.
Moreover, while there exist metrics and instruments to assess specific IS subfunctions and specific IS subareas, such as data center performance, productivity, and data quality, typically these measures cannot be aggregated in any meaningful way. This limits their usefulness as the bases for identifying the sources of overall performance improvements or degradations. As an anonymous reviewer of an earlier version of this paper said, "The critical issue is that the performance of the IS function is now under the microscope and decisions to insource/outsource and spend/not spend must be made in a structured context."
The objective of this research is to develop such an instrument: a scorecard for evaluating overall ISFP.

The Theoretical Bases for the Study


THE DEFINITION OF THE IS FUNCTION that is used here includes all IS groups and
departments within the organization [84]. This definition is broad enough to include
various structures for the IS function, from centralized to distributed, yet specific
enough to include only the formal IS function that can be readily identified.
Figure 1 shows the modified input-output (I/O) model that is the theoretical basis for the study. The model in Figure 1 has been utilized as a basis for other IS research studies [63, 113]. It incorporates a simple input-output structure wherein the IS function uses resources to produce IS performance, which in turn influences both business process effectiveness and organizational performance.

Figure 1. Theoretical Input-Output Performance Model
The resources utilized by the IS function are shown in Figure 1 to be hardware,
software, human resources, and integrated managerial and technical capabilities [14,
15, 36]. The IS function is shown to produce systems, information, and services [56,
92], which collectively affect the organization in a fashion that is termed IS functional
performance (ISFP), which is to be assessed through an IS functional scorecard (ISFS),
the development of which is the objective of this study.
In the theoretical model, IS outputs are also shown as significant enablers and drivers of business process effectiveness, since IS are often the basis for business process
operations and redesign [94, 113]. ISFP also is shown to influence business process
effectiveness, and both influence overall organizational performance [113].
Although it is not the primary purpose of this study to directly address the business
process effectiveness and organizational performance elements of Figure 1, data were
collected on these elements of the model for purposes of nomological validation of
the scorecard that is being developed.
The model of Figure 1 is based on streams of research in IS capabilities, IS effectiveness/success, IS service quality, IS functional evaluation, and IS subfunctional
assessment.

IS Capabilities
IS capabilities are integrated sets of hardware, software, human skills, and management processes that serve to translate financial investments in IS into IS performance
[17, 23, 42, 83, 99, 111, 113]. For instance, an IS strategic planning capability might
consist of well-trained planners, computer-based planning models, knowledgeable technical people, adequate planning budgets, and a well-formulated and specified planning process.

IS Effectiveness/Success
DeLone and McLean [30] categorized over 100 IS dependent variables into six
categories and developed an IS success model to describe the relationships between
the categories. They concluded that IS success should be a multidimensional measure
and recommended additional research to validate the model. Other researchers have
since tested and expanded their model [7, 46, 79]. DeLone and McLean [31] have
updated the model based on a review of research stemming from their original work.
They concluded that their original model was valid and suggested that service quality be incorporated as an important dimension of IS success.

IS Service Quality
Recognizing the importance of the services provided by the IS function, the
SERVQUAL measure, originally developed in marketing [74], has been adapted to
measure IS service quality [75, 110]. However, the controversy over SERVQUAL in
marketing [27] has carried over into IS [52, 104], suggesting that more research needs
to be conducted to measure IS service quality. Proponents of this measure sometimes advocate its use as a proxy for ISFP. However, as depicted in Figure 1, it is directly applicable only to one of the three major outputs of the IS function.

IS Functional Evaluation
Only a few studies directly address the comprehensive evaluation of the performance
of the IS function. No one has developed a validated metric. Wells [112] studied
existing and recommended performance measures for the IS function and identified
six important goals/issues. Saunders and Jones [84] developed and validated 11 IS
function performance dimensions through a three-round Delphi study. They proposed
an IS function performance evaluation model to help organizations select and prioritize IS performance dimensions and to determine assessments for each dimension.
Both studies focused on top management's perspective of ISFP and did not offer any specific measures.

IS Subfunctional Assessment
Measuring IS subfunctional performance has been important to IS practitioners and
academics, and such measures have been developed at a variety of levels using a
number of different perspectives. For instance, measurements have been made of the
effects of IS on users (e.g., [105]), learning outcomes (e.g., [1]), service (e.g., [52]),
e-business (e.g., [96]), and other contexts using economic approaches (e.g., [16]), a financial perspective (e.g., [10]), a social science perspective (e.g., [80]), an IT value approach [24], a business process viewpoint (e.g., [98]), and probably others.

Table 1. Implementation of Cameron and Whetton's [19] Guidelines

1. From whose perspective is effectiveness being assessed? Organizational users of IS services and systems.
2. On what domain of activity is the assessment focused? Products and services provided by the IS function.
3. What level of analysis is being used? The IS function [84].
4. What is the purpose for judging effectiveness? Identify strengths and weaknesses; track overall effectiveness.
5. What time frame is being employed? Periodically, ranging from quarterly to annually.
6. What type of data are being used for judgments of effectiveness? Subjective; perceptual data from individuals.
7. What is the referent against which effectiveness is judged? Past performance measures.
So, there has been no paucity of interest in IS assessment, or in the development of measures. However, there is a great need for a comprehensive measure of IS performance that will provide a configural, or Gestalt [41], view of an organization's formal IS activities and facilitate decision making and functional improvement.

The Methodological Basis for the Study


TO ENSURE THE APPROPRIATENESS OF THE STUDY at the IS functional level, it was designed according to guidelines from the organizational effectiveness literature. These guidelines were developed in response to problems plaguing organizational effectiveness research as described by Steers [95]. Cameron and Whetton [19] developed seven basic guidelines, which are listed in the left-hand column of Table 1. Cameron [18] later demonstrated the usefulness of these guidelines in a study of 29 organizations. These guidelines have also been adopted by IS researchers to clarify conceptual developments in examining IS functional effectiveness [69, 88].
The implementations of Cameron and Whetton's [19] guidelines for this study are shown in the right-hand column of Table 1. Thus, the ISFS developed here is defined as organizational IS users' perception of the performance of all of the aspects of the IS function that they have personally experienced. Organizational users of IS services and systems are the primary stakeholders of the IS function [92]. Although there are many other stakeholders for the IS function, users represent the largest group, and their efficacy in utilizing IS products and services directly affects the organization's bottom line. Therefore, the aggregated evaluation of individual users' assessments forms a quite comprehensive picture of the ISFP.


Despite its focus on users, this approach is different from the popular user satisfaction measures [6, 8, 35], because it is designed to assess people's perceptions of the overall IS function rather than to capture users' attitudes toward a specific system.


The Domain and Operationalization of the IS Performance Construct
USERS' PERCEPTIONS OF IS ACTIVITIES derive from their use of the IS products and the services provided by the IS function. IS research has traditionally separated the effect of systems and information as two distinct constructs [30]. However, "system and information quality are attributes of applications, not of IS departments" [87, p. 244]. Therefore, they are not sufficient to reflect the effectiveness of the entire IS function.

Domain of the ISFS Construct


The domain of ISFP used in this study reflects the theory of Figure 1 and the models suggested by Pitt et al. [75] and DeLone and McLean [31]. The definitions of the three basic output-related dimensions are given below. A model of the ISFS construct, using LISREL notation, is presented in Figure 2.
Systems performance: Assesses the quality aspects of systems, such as reliability, response time, ease of use, and so on, and the various impacts that systems have on the user's work. Systems encompass all IS applications that the user regularly uses.
Information effectiveness: Assesses the quality of information in terms of the design, operation, use, and value [108] provided by information as well as the effects of the information on the user's job. The information can be generated from any of the systems that the user makes use of.
Service performance: Assesses the user's experience with services provided by the IS function in terms of quality and flexibility [38]. The services provided by the IS function include activities ranging from systems development to help desk to consulting.

Figure 2. Three-Dimensional Model of ISFS
In order to develop a measurement instrument with good psychometric properties, the updated paradigm that emphasizes establishing the unidimensionality of measurement scales [40, 89] was followed. A cross-sectional mail survey is appropriate to obtain a large sample for analysis and to ensure the generalizability of the resulting instrument.

Operationalization of Constructs
Two sets of constructs were operationalized in this study: the three-dimensional ISFS construct and the constructs related to the consequences of ISFP. These consequences constructs (business process effectiveness and organizational performance), as shown in Figure 1, were used to assess nomological validity. Whenever possible, previously
developed items that had been empirically tested were used or adopted to enhance the
validity and reliability of the instrument under development. Some new measures
were also developed from reviews of both practitioner and research literatures to
reflect developments that have occurred subsequent to the development of the measures from which most items were obtained (e.g., e-commerce, enterprise resource
planning [ERP], etc.).
The three output dimensions of Figure 1 are the basis for three ISFS dimensions.
Systems Performance
Measures of systems performance assess the quality aspects of systems and the various effects that IS have on the user's work. Empirical studies listed under the categories "system quality" and "individual impact" in DeLone and McLean's [30] IS Success Model were reviewed to collect the measures used in those studies. In addition, instruments developed by Baroudi and Orlikowski [8], Doll and Torkzadeh [35], Davis [29], Kraemer et al. [59], Mirani and King [67], Goodhue and Thompson [43], Ryker and Nath [81], Saarinen [82], and Torkzadeh and Doll [103] were also reviewed and included for more updated measures published subsequent to DeLone and McLean's original review.


Information Effectiveness
Measures of information effectiveness assess the quality of the information provided by IS as well as the effects of the information on the user's job. Although DeLone and McLean's [30] information quality category provided a good source for existing measures, Wang and Strong [109] developed a more comprehensive instrument that encompasses all measures mentioned in DeLone and McLean's review. Therefore, the 118 measures developed by Wang and Strong make up the majority of items in this dimension. However, since the focus of their instrument is on quality of information, some new items were developed in order to ensure coverage of measures on the effects of information on the user's job.
Service Performance
Measures of service performance assess each user's experience with the services provided by the IS function in terms of the quality and flexibility of the services. The entire IS-SERVQUAL instrument is included in this dimension for comprehensiveness. New measures were also incorporated to augment the IS-SERVQUAL items based on the more comprehensive view of service performance proposed by Fitzgerald et al. [38]. In addition, literature on three areas of IS functional services that were not explicitly covered by the service quality literature (training [60, 65, 71], information centers [11, 44, 47, 66], and help desks [20]) was also reviewed and included to ensure the comprehensiveness of measures for this dimension.
In addition to utilizing existing items to measure these constructs, the emergence of innovations that have come into use since most of the prior instruments were developed prompted the inclusion of new items to measure the IS function's performance in seven new areas: ERP [51], knowledge management [45, 64, 97], electronic business [9, 55], customer relationship management [39], supply chain management [37], electronic commerce [22, 33, 102], and organizational learning [86, 100]. In total, 31 new items gleaned from the practitioner and research literatures to reflect potential user assessments of the IS function's contribution to those areas in terms of systems, information, and services were incorporated to expand the item pools for each dimension.

Instrument Development
A total of 378 items were initially generated. Multiple rounds of Q-sorting and item
categorization were conducted [29, 68] to reduce the number of items and to ensure
the content validity of the ISFS instrument. The last round of Q-sort resulted in the
identification of subconstructs with multiple items for each of the three dimensions.
Table 2 shows the subconstructs for each dimension that resulted from the Q-sort
process.

Table 2. Sub-ISFS Constructs from Q-Sort

Systems performance: Effect on job; Effect on external constituencies; Effect on internal processes; Effect on knowledge and learning; Systems features; Ease of use.

Information effectiveness: Intrinsic quality of information; Contextual quality of information; Presentational quality of information; Accessibility of information; Reliability of information; Flexibility of information; Usefulness of information.

Service performance: Responsiveness; Reliability; Service provider quality; Empathy; Training; Flexibility of services; Cost/benefit of services.

The Q-sorts resulted in an ISFS instrument that consists of 42, 36, and 32 items for
the three dimensions, respectively. All items were measured using a Likert-type scale
ranging from 1 ("hardly at all") to 5 ("to a great extent"), with 0 denoting "not applicable."
The final version of the instrument is in the Appendix.
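As a minimal illustration of how such responses can be scored, the sketch below averages items into subconstruct scores while treating the 0 ("not applicable") responses as missing. The DataFrame and the item-to-subconstruct mapping are hypothetical placeholders, not the paper's actual item codes.

```python
# Scoring sketch: ISFS items are 1-5 Likert responses with 0 = "not applicable".
import numpy as np
import pandas as pd

# Hypothetical mapping of questionnaire columns to one subconstruct.
SUBCONSTRUCTS = {"ease_of_use": ["sys_q1", "sys_q2", "sys_q3"]}

def score(responses: pd.DataFrame) -> pd.DataFrame:
    rated = responses.replace(0, np.nan)  # exclude "not applicable" answers
    return pd.DataFrame({
        name: rated[items].mean(axis=1)   # per-respondent subconstruct mean
        for name, items in SUBCONSTRUCTS.items()
    })
```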

Survey Design and Execution


A sample of 2,100 medium-to-large companies with annual sales over $250 million was randomly selected from Hoover's Online (www.hoovers.com) and the InformationWeek 500.
To avoid common-source bias, data were collected from two types of respondents in each of the sampled organizations. Data for the ISFS instrument were collected from IS users, and organizational CIOs were asked to respond to a "Consequences of ISFP" survey, which was used as a basis for establishing nomological validity.
A packet consisting of one "Consequences of ISFP" instrument and three ISFS instruments was sent to the CIOs of these companies. The CIO was asked to respond to the "Consequences of ISFP" survey and to forward the ISFS instruments to three IS users. The characteristics of desirable user respondents, in terms of various functional areas, familiarity with IS, and so on, were specified.
The CIO is deemed to be suitable for receiving the packet because the topic of this research would be of great interest to him or her, therefore increasing the potential for participation. The CIO is also an appropriate respondent to the "Consequences of ISFP" survey because he or she is at a high-enough position to provide meaningful responses concerning consequences. Although it is possible that the CIO might distribute the ISFS survey to "friendly" users and potentially bias the responses, it is unlikely that the users would be able to consciously bias the results due to the focus of the analysis (variance explanation) and the length and complexity of the ISFS instrument.



Two rounds of reminders were sent to initial nonrespondents to improve the response rate. In addition, where appropriate, letters were sent to the CIOs who returned the CIO survey soliciting additional user participation.
At the conclusion of data collection in 2001, 346 usable ISFS instruments and 130 "Consequences of ISFP" surveys were received, with 120 companies having responses from at least one IS user and the CIO. This resulted in a response rate of 7.2 percent for the CIO survey, 5.6 percent for the ISFS questionnaire, and 6.1 percent for matched-pair responses.
Two analyses were conducted to assess possible nonresponse bias. t-tests of company size in terms of revenue, net income, and number of employees between responding and nonresponding companies showed no significant differences. t-tests of
30 items randomly selected from the three ISFS dimensions (10 items each) between
the early (first third) and late (last third) respondents [4, 62] also showed no significant differences. Therefore, it can be concluded that there was no nonresponse bias in
the sample and that the relatively low percentage response rate does not degrade the
generalizability of the ISFS instrument [57].
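A sketch of the second of these checks is shown below, under the assumption that responses are held in a pandas DataFrame with a receipt-order column; the column names are illustrative, and scipy's standard two-sample t-test stands in for whatever package the authors used.

```python
# Nonresponse-bias check: compare early (first third) and late (last third)
# respondents on randomly selected items (extrapolation method of [4, 62]).
import pandas as pd
from scipy import stats

def early_late_ttests(df: pd.DataFrame, items: list[str]) -> pd.DataFrame:
    third = len(df) // 3
    ordered = df.sort_values("return_order")   # assumed receipt-order column
    early, late = ordered.head(third), ordered.tail(third)
    rows = []
    for item in items:
        t, p = stats.ttest_ind(early[item].dropna(), late[item].dropna())
        rows.append({"item": item, "t": t, "p": p})
    return pd.DataFrame(rows)  # nonsignificant p-values suggest no bias
```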

Sample Demographics
The participating companies represent more than 20 industries with nearly a quarter
of the companies in manufacturing (24.6 percent), followed by wholesale/retail (13.8
percent), banking/finance (10.8 percent), and medicine/health (7.7 percent). The range
of annual sales was between $253 million and $45.352 billion, with an average of $4
billion for the sample. For the "Consequences of ISFP" surveys, 46.9 percent of the respondents held the title of CIO. More than 80 percent of the respondents had titles at the upper-management level, indicating that the returned surveys were completed by individuals at the desired level. For the ISFS instrument, 47.7 percent of the respondents were at the upper-management level and 39.9 percent at the middle-management level. The respondents are distributed across all functional areas, with accounting and finance, sales and marketing, and manufacturing and operations being the top three.

Instrument Validation
INSTRUMENT VALIDATION REQUIRES THE EVALUATION of content validity, reliability, construct validity, and nomological validity. Following Segars's [89] process for instrument validation, we first use exploratory factor analysis to determine the number of factors, then use confirmatory factor analysis iteratively to eliminate items that loaded on multiple factors, in order to establish unidimensionality.

Content Validity
Content validity refers to the extent to which the measurement items represent and
cover the domain of the construct [54]. It is established by showing that the items are "a sample of a universe" of the investigator's interest and by "defining a universe of items and sampling systematically within this universe" [26, p. 58]. Churchill [25]
recommended specifying the domain of the construct followed by generating a sample
of items as the first two steps in instrument development to ensure content validity.
Domain development should be based on existing theories, and sample items should
come from existing instruments, with the development of new items when necessary.
In this study, domain development was guided by theories in both organizational
effectiveness and IS research. Items from existing instruments formed the overwhelming majority of the item pool. The initial items were refined through a series of Q-sorts
and a pilot test. These development procedures ensured the content validity of the
instruments.

Unidimensionality and Convergent Validity


Unidimensionality requires that "only a single trait or construct is being measured by a set of measures" and is "the most critical and basic assumption of measurement theory" [50, p. 49]. Gerbing and Anderson suggest that confirmatory factor analysis affords "a stricter interpretation of unidimensionality" [40, p. 186] than other commonly used methods. Although the subconstructs of the three basic dimensions described earlier were identified during the Q-sort, those factors needed to be empirically tested. Therefore, exploratory factor analyses were first conducted for items within each dimension to determine the factors. This is acceptable, since the items for each dimension were clearly separated in the instrument into sections with opening statements describing the nature of the items in the sections.
Three separate exploratory factor analyses were conducted using principal components with varimax rotation as the extraction method. There were seven, seven, and
five factors with eigenvalues greater than 1.0 that explained 70.8 percent, 68.6 percent, and 69.6 percent of variance for systems performance, information effectiveness, and service performance, respectively. Review of the items showed that most
factors loaded very closely to the subconstructs identified by the Q-sort.
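A sketch of this extraction step is shown below, using the open-source factor_analyzer package in place of the original statistical tooling; `items` is a hypothetical DataFrame holding the responses for one ISFS dimension.

```python
# Exploratory factor analysis: principal components extraction with varimax
# rotation, retaining factors whose eigenvalues exceed 1.0 (Kaiser criterion).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def explore_dimension(items: pd.DataFrame) -> FactorAnalyzer:
    eigenvalues = np.linalg.eigvalsh(items.corr().values)
    n_factors = int((eigenvalues > 1.0).sum())

    fa = FactorAnalyzer(n_factors=n_factors, method="principal",
                        rotation="varimax")
    fa.fit(items.values)

    cumulative = fa.get_factor_variance()[2][-1]  # cumulative variance share
    print(f"{n_factors} factors explain {cumulative:.1%} of the variance")
    return fa
```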
To establish unidimensionality, the items that loaded on the same factor were then analyzed with confirmatory factor analysis using LISREL, with two exceptions. One factor in systems performance had only one item. Since it is one of the original ease-of-use items from Davis [29], it was included in the factor that contains the rest of the ease-of-use items. Another factor in information effectiveness had only three items. It would be just-identified for confirmatory factor analysis and was therefore only analyzed in conjunction with other factors in the same dimension. This process resulted in six factors for systems performance, six for information effectiveness, and five for service performance.
Segars and Grover suggest that measured factors be modeled "in isolation, then in pairs, and then as a collective network" [91, p. 148]. This method of analysis provides the fullest evidence of measurement efficiency and avoids problems caused by excessive error in measurement [2, 3, 53, 90]. In total, 17 measurement models were analyzed. Each model went through an iterative modification process to improve its model fit. First, items with standardized factor loading below 0.45 were eliminated [78] one
at a time. Second, error terms between pairs of items were allowed to correlate based
on a modification index. However, this modification was only implemented when
theories suggested that the two items should be correlated. This process was conducted iteratively by making one modification at a time until either good model fit
was achieved or no modification was suggested.
Following Segars and Grover's [90] procedure, after every measurement model completed its modification process, pairs of models within each dimension were tested iteratively to identify and eliminate items with cross-loadings. With all cross-loading items eliminated, all factors within the same dimension were tested in a full measurement model. Again, items with cross-loadings in the full model were dropped. After the full measurement models were purified, second-order models that reflect the subconstructs within each ISFS dimension were tested. The final, second-order measurement models for the three ISFS dimensions are presented in Figures 3, 4, and 5.

Figure 3. Full Second-Order Measurement Model for Systems Performance (χ² = 618.62; d.f. = 411; p = 0.00; RMSEA [root mean square error of approximation] = 0.038; GFI [goodness-of-fit index] = 0.90; AGFI [adjusted goodness-of-fit index] = 0.87)

Figure 4. Full Second-Order Measurement Model for Information Effectiveness (χ² = 216.20; d.f. = 156; p = 0.00; RMSEA = 0.033; GFI = 0.94; AGFI = 0.92)

Figure 5. Full Second-Order Measurement Model for Service Performance (χ² = 139.09; d.f. = 94; p = 0.00; RMSEA = 0.037; GFI = 0.95; AGFI = 0.93)
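For readers who want to reproduce this kind of second-order model without LISREL, the sketch below specifies a structure analogous to Figure 3 in semopy, an open-source Python SEM library; the factor labels and item names (sp1, sp2, ...) are illustrative placeholders rather than the paper's actual item codes.

```python
# Second-order CFA sketch (cf. Figure 3): six first-order factors loading on
# a single second-order systems performance factor.
import pandas as pd
import semopy

MODEL_DESC = """
ImpactJob =~ sp1 + sp2 + sp3
ImpactExternal =~ sp4 + sp5 + sp6
ImpactInternal =~ sp7 + sp8 + sp9
ImpactKnowledge =~ sp10 + sp11 + sp12
UsageCharacteristics =~ sp13 + sp14 + sp15
IntrinsicQuality =~ sp16 + sp17 + sp18
SystemsPerformance =~ ImpactJob + ImpactExternal + ImpactInternal + ImpactKnowledge + UsageCharacteristics + IntrinsicQuality
"""

def fit_second_order(data: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                   # maximum-likelihood estimation
    stats = semopy.calc_stats(model)  # includes chi-square, RMSEA, GFI, AGFI
    return model, stats
```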
The chi-square and significant factor loadings provide direct statistical evidence of both convergent validity and unidimensionality [91]. With each of the three ISFS dimensions properly tested independently, all three dimensions were combined and tested for model fit. The complete ISFS model (Figure 6: χ² = 3,164.90; d.f. = 2,094; p = 0.00; RMSEA = 0.039; GFI = 0.79; AGFI = 0.77) showed remarkably good fit for such high complexity.

Reliability
In assessing measures using confirmatory factor analysis, a composite reliability for each factor can be calculated [5, 93]. This composite reliability is a measure of internal consistency of the construct indicators, "depicting the degree to which they indicate the common latent (unobserved) construct" [48, p. 612]. Another measure of reliability is the average variance extracted (AVE), which reflects the overall amount of variance that is captured by the construct in relation to the amount of variance due to measurement error [48, 89]. The value of AVE should exceed 0.5 to indicate that the variance explained by the construct is larger than measurement error. The construct reliability and AVE of all dimensions and subconstructs are presented in Table 3.
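For reference, the conventional formulas for these two statistics, as commonly given in the SEM literature cited above [48, 89], are shown below; here $\lambda_i$ is the standardized loading of indicator $i$ on the construct and $\theta_i = 1 - \lambda_i^2$ is its error variance:

$$\mathrm{CR} = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_i}, \qquad \mathrm{AVE} = \frac{\sum_i \lambda_i^2}{\sum_i \lambda_i^2 + \sum_i \theta_i}.$$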
Table 3 indicates that all subconstructs showed good composite reliability except
the IS training scale. However, there are some scales with an AVE below 0.50. This
suggests that even though all scales (except one) were reliable in measuring their
respective constructs, some of them were less capable of providing good measures of
their own construct. Despite the low AVE, those scales were retained to ensure the
comprehensiveness of the ISFS instrument.

Discriminant Validity
Discriminant validity refers to the ability of the items in a factor to differentiate themselves from items that are measuring other factors. In structural equation modeling (SEM), discriminant validity can be established by comparing the model fit of an unconstrained model that estimates the correlation between a pair of constructs and a constrained model that fixes the correlation between the constructs to unity. Discriminant validity is demonstrated when the unconstrained model has a significantly better fit than the constrained model. The difference in model fit is evaluated by the chi-square difference (with one degree of freedom) between the models. Tests of all possible pairs of subconstructs within each dimension were conducted; the results are presented in Table 4. As shown, all chi-square differences are significant at p < 0.001,
indicating that each scale captures a construct that is significantly unique and independent of other constructs. This provides evidence of discriminant validity.
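As a small illustration of the decision rule, the following sketch checks a chi-square difference against the one-degree-of-freedom critical value using scipy; the example input is simply the smallest entry in Table 4.

```python
# Chi-square difference test for discriminant validity: the difference between
# the constrained (correlation fixed to 1) and unconstrained models has 1 d.f.
from scipy.stats import chi2

def supports_discriminant_validity(delta_chi2: float,
                                   alpha: float = 0.001) -> bool:
    critical = chi2.ppf(1.0 - alpha, df=1)  # about 10.83 at alpha = 0.001
    return delta_chi2 > critical

# Even the smallest difference reported in Table 4 clears the threshold:
print(supports_discriminant_validity(23.84))  # True
```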

Nomological Validity
A nomological network that specifies "probable (hypothetical) linkages between the construct of interest and measures of other constructs" [85, p. 14] further clarifies the ISFS construct and provides an additional basis for construct validation. An operationalization of the theoretical model of Figure 1 that considers the two important consequences of ISFP was used. This model consists of the rightmost portions of Figure 1 that relate ISFP to business process effectiveness and to organizational performance.
Organizational Performance
Although a positive relationship between IS effectiveness and business performance has been suggested, the evidence of such an effect has proved to be elusive [58]. Using "user information satisfaction" and "strategic impact of IS" as surrogate IS effectiveness measures and several perceptual measures of business performance, Chan et al. [21] empirically showed a significant positive relationship between IS and business performance. Since the ISFS is posited as a more comprehensive measure of IS performance, the positive relationship should hold in this study.


This construct captures the IS function's contribution to the overall performance of the organization. The literature has focused on assessing the extent to which IS improves the organization's return on investment (ROI), market share, operational efficiency, sales revenue, customer satisfaction, competitiveness, and customer relations [21, 77, 101, 106, 113]. Since subjective measures of those variables have been considered to be acceptable in the literature [32, 107], seven items that assess the CIO's perception of IS's contribution to improving the organization's performance in those areas were used.

Table 3. Reliability of Measurement Factors in ISFS Dimensions

Factor names                                   Reliability   AVE
Systems performance                            0.92          0.66
  Impact on job                                0.95          0.68
  Impact on external constituencies            0.88          0.56
  Impact on internal processes                 0.89          0.80
  Impact on knowledge and learning             0.89          0.62
  Systems usage characteristics                0.85          0.49
  Intrinsic systems quality                    0.79          0.56
Information effectiveness                      0.92          0.63
  Intrinsic quality of information             0.73          0.48
  Reliability of information                   0.79          0.66
  Contextual quality of information            0.85          0.75
  Presentational quality of information        0.87          0.77
  Accessibility of information                 0.80          0.57
  Flexibility of information                   0.81          0.58
  Usefulness of information                    0.91          0.66
Service performance                            0.89          0.63
  Responsiveness of services                   0.88          0.79
  Intrinsic quality of service provider        0.84          0.56
  Interpersonal quality of service provider    0.93          0.82
  IS training                                  0.59          0.33
  Flexibility of services                      0.69          0.37
Business Processes Effectiveness
Aside from directly affecting organizational performance, the IS function should also
have an effect on organizational performance through its impact on the effectiveness
of business processes, as shown in Figure 1. IS have traditionally been implemented
to improve the efficiencies of internal operations. This use of IT has more recently
been applied in redesigning both intra- and interfunctional business processes [94].
Improvements to the value-chain activities through IT are captured in this construct. Based on Porter and Millar [76] and Davenport [28], Xia [113] developed a 39-item instrument to assess executives' perceptions of the extent to which IT improved the effectiveness of six value-chain activities. Data analysis resulted in six factors: production operations, product development, supplier relations, marketing services, management processes, and customer relations. Items representing those six factors were generated for this construct.
Table 4. Chi-Square Differences Between Factors

Systems performance
            Factor1     Factor2     Factor3     Factor4     Factor5
Factor2     39.61***
Factor3     31.33***    37.00***
Factor4     26.31***    38.67***    28.55***
Factor5     66.45***    78.42***    59.97***    65.97***
Factor6     49.47***    64.66***    55.23***    58.68***    70.76***

Information effectiveness
            Factor1     Factor2     Factor3     Factor4     Factor5     Factor6
Factor2     63.61***
Factor3     80.96***    73.22***
Factor4     64.16***    48.21***    75.03***
Factor5     65.65***    40.60***    65.60***    47.34***
Factor6     62.68***    52.31***    78.87***    60.04***    48.66***
Factor7     67.31***    47.84***    67.80***    57.82***    47.27***    47.36***

Service performance
            Factor1     Factor2     Factor3     Factor4
Factor2     23.84***
Factor3     51.52***    52.93***
Factor4     46.31***    48.52***    73.42***
Factor5     30.69***    46.89***    75.73***    53.93***

*** p < 0.001.

Validity and Reliability of the Measures Used in Nomological Analysis. Although all scales in the "Consequences of ISFP" survey were from previously tested instruments, since a different sample was used, tests were conducted to ensure the reliability and
validity of those constructs. Reliability was evaluated using Cronbachs alpha. Items
with low corrected item-total correlation, indicating low internal consistency for the
items, were dropped. Construct validation was assessed by exploratory factor analysis
using principal components with oblique rotation as the extraction method. Table 5
presents the results of the analyses.
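A sketch of this purification step is shown below, computing Cronbach's alpha and the corrected item-total correlations directly from an item-level DataFrame; `scale` is a hypothetical set of columns for one construct.

```python
# Scale purification sketch: Cronbach's alpha and corrected item-total
# correlations (each item vs. the sum of the remaining items).
import pandas as pd

def cronbach_alpha(scale: pd.DataFrame) -> float:
    k = scale.shape[1]
    item_variances = scale.var(axis=0, ddof=1).sum()
    total_variance = scale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

def corrected_item_total(scale: pd.DataFrame) -> pd.Series:
    return pd.Series({
        col: scale[col].corr(scale.drop(columns=col).sum(axis=1))
        for col in scale.columns
    })
```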
Although both constructs had two factors extracted, the two factors were significantly correlated in both cases. Therefore, all items within each construct were retained and used to create an overall score for the construct. Items in the final
measurement models were used to create an overall score for the ISFS construct. The
average of all items for each construct was used to avoid problems that may occur due
to differences in measurement scales. Table 6 shows the correlation among the constructs.
As shown in Table 6, there were significant positive correlations between the ISFS
construct and the two consequence constructs. There was also significant positive
correlation between business processes effectiveness and organizational performance.
Although correlation is not sufficient to establish causal relationships, the purpose
here is to demonstrate the expected association between the constructs. Therefore, as
shown in Tables 5 and 6, the nomological network was supported.


Table 5. Reliability and Validity of Nomological Constructs

Constructs                         Number of factors   Number of items   Variance explained   Cronbach's
                                   extracted           retained          (percent)            alpha
Business processes effectiveness   2                                     63.48                0.759
Organizational performance         2                                     70.04                0.860

Table 6. Correlation of Constructs in the Nomological Network

Constructs                   Business processes   Organizational performance
Organizational performance   0.750**
ISFS                         0.214**              0.205**

** Correlation is significant at the 0.01 level (two-tailed).

Results, Limitations, and Managerial Uses


THE ISFS INSTRUMENT IS A COMPREHENSIVE ONE that has been designed to measure
the performance of the entire IS function. The instrument consists of three major
dimensions: systems performance, information effectiveness, and service performance.
Each dimension contains several unidimensional subconstructs, each of which is
measured by at least two items. All scales have high reliability. Evidence from discriminant validity analyses showed that each scale is measuring a construct that is
different from the other constructs.
Of course, some limitations of the instrument need to be pointed out. The sample size, while large, especially for matched-pair survey studies, is borderline for the number of variables relative to the number of observations that are involved in the SEM analysis. Thus, some caution should be taken until it is revalidated. The nonresponse bias analysis was conducted by comparing early responders to late responders and in terms of organizational size-related variables for responders and nonresponders. Although this is common practice, other variables might have been analyzed [57]. We also note that two subconstructs, IS training and flexibility of services, were borderline with respect to reliability. Despite this, these items were retained for comprehensiveness or theoretical soundness. Further studies will need to explore and improve these items. The ISFS may therefore be thought of as a preliminary step that can guide future research and enhance practice in a significant, but limited, way.
The ISFS integrates aspects of various philosophical approaches that have been
taken to developing IT metrics (e.g., [10, 16, 24, 80, 98]) as well as various subfunctional levels that have previously been measured (e.g., [31, 75, 84]). The comprehensiveness of the ISFS instrument was demonstrated by its consideration of all


IS activities as reflected in the hierarchical structure of 18 unidimensional subconstructs within the three dimensions. This allows IS managers to use this instrument to assess
their strengths and weaknesses in those subconstructs and dimensions. Of course,
since the dimensions are not independent, the fact that one dimension may be low
does not tell the whole story. Thus, such an indication must be further assessed in
light of the overall Gestalt of dimensions. In this sense, the ISFS also allows the IS
function to pinpoint specific areas that need improvements and to track both these
areas and overall performance over time, thus providing the basis for continuous
improvement [73].
When used in large organizations with decentralized IS functions, the ISFS instrument can offer valuable insights for internal benchmarking. Comparing the results of
ISFS instruments from different divisions or areas would help identify areas of IS
excellence and facilitate the transfer of knowledge to other areas. It has already been
used in each of these ways in a number of organizations that participated in the study.
In addition, with the intensifying scrutiny on IT investment, analysis, and outsourcing,
the ISFS instrument can be very useful to establish a baseline on the current status of
ISFP. Comparing post-investment or post-outsourcing IS performance to the baseline would
provide a more objective evaluation of the efficacy of the actions taken. This, in turn,
will allow the organization to develop follow-up actions to maximize IS performance
and, ultimately, to improve organizational performance.
Thus, the instrument can be used in various ways: as an overall evaluative tool, as a Gestalt of areas that may be tracked over time, or in evaluating specific subareas. At this latter level, it also provides a means of identifying the specific performance areas, as represented by the subconstructs, that may need improvement.
Because of the use of data from a cross-sectional field survey for validation, the
ISFS instrument is applicable to a variety of industries. When used within an organization, the instrument should be administered to a range of systems users, in terms of
both functional areas and organizational levels. This would ensure appropriate representation of the diverse users in the organization. The average scores for each subconstruct or dimension are the indicators of the IS function's performance for the
specific subarea or dimension. To be effective, the ISFS instrument should be administered repeatedly at a fixed interval between quarterly and annually. The result of
later assessments should be compared to earlier evaluations to detect changes that
would indicate improvements or degradation in the IS function's performance and in
specific performance areas. One additional caveat may be useful. The nomological
validation was performed in terms of the overall ISFS score. As a result, there is no
assurance that the subscores have the same degree of nomological validity. Since
such an analysis is beyond the scope of this study, we leave it to others who may wish
to concern themselves with this issue.
Overall, the goal of developing a measure to assess the performance of the IS function was successfully achieved in this study. The resulting instrument is not only
comprehensive enough to cover all aspects of ISFP but also sensitive enough to pinpoint specific areas that need attention. The ISFS instrument should be a useful tool
for organizations to use in continuously monitoring the performance of their IS function and for researchers to use in studies that require ISFP as a dependent or independent construct, as well as in studies that seek to complement the ISFS through other
analyses.


REFERENCES
1. Alavi, M.; Marakas, G.M.; and Yoo, Y. A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 4 (December 2002), 404-415.
2. Anderson, J.C. An approach for confirmatory measurement and structural equation modeling of organizational properties. Management Science, 33, 4 (April 1987), 525-541.
3. Anderson, J.C., and Gerbing, D.W. Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 3 (May 1988), 411-423.
4. Armstrong, J.S., and Overton, T.S. Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14, 3 (August 1977), 396-402.
5. Bagozzi, R.P. An examination of the validity of two models of attitude. Multivariate Behavioral Research, 16, 3 (July 1981), 323-359.
6. Bailey, J.E., and Pearson, S.W. Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29, 5 (May 1983), 530-545.
7. Ballantine, J.; Bonner, M.; Levy, M.; Martin, A.; Monro, I.; and Powell, P.L. Developing a 3-D model of information systems success. In E.J. Garrity and G.L. Sanders (eds.), Information Systems Success Measurement. Hershey, PA: Idea Group, 1998, pp. 46-59.
8. Baroudi, J.J., and Orlikowski, W.J. A short-form measure of user information satisfaction: A psychometric evaluation and notes on use. Journal of Management Information Systems, 4, 4 (Spring 1988), 44-59.
9. Basu, A., and Kumar, A. Workflow management issues in e-business. Information Systems Research, 13, 1 (March 2002), 1-14.
10. Benaroch, M. Managing information technology investment risk: A real options perspective. Journal of Management Information Systems, 19, 2 (Fall 2002), 43-84.
11. Bergeron, F.; Rivard, S.; and De Serre, L. Investigating the support role of the information center. MIS Quarterly, 14, 3 (September 1990), 247-260.
12. Brancheau, J.C., and Wetherbe, J.C. Key issues in information systems management. MIS Quarterly, 11, 1 (March 1987), 23-45.
13. Brancheau, J.C.; Janz, B.D.; and Wetherbe, J.C. Key issues in information systems management: 1994-95 SIM Delphi results. MIS Quarterly, 20, 2 (June 1996), 225-242.
14. Broadbent, M., and Weill, P. Management by Maxim: How business and IT managers can create IT infrastructures. Sloan Management Review, 38, 3 (Spring 1997), 77-92.
15. Broadbent, M.; Weill, P.; O'Brien, T.; and Neo, B.N. Firm context and patterns of IT infrastructure capability. In J.I. DeGross, S.L. Jarvenpaa, and A. Srinivasan (eds.), Proceedings of the Seventeenth International Conference on Information Systems. Atlanta: Association for Information Systems, 1996, pp. 174-194.
16. Brynjolfsson, E. The productivity paradox of information technology. Communications of the ACM, 36, 12 (December 1993), 67-77.
17. Byrd, T.A., and Turner, D.E. Measuring the flexibility of information technology infrastructure: Exploratory analysis of a construct. Journal of Management Information Systems, 17, 1 (Summer 2000), 167-208.
18. Cameron, K.S. A study of organizational effectiveness and its predictors. Management Science, 32, 1 (January 1986), 87-112.
19. Cameron, K.S., and Whetton, D.A. Some conclusions about organizational effectiveness. In K.S. Cameron and D.A. Whetton (eds.), Organizational Effectiveness: A Comparison of Multiple Models. New York: Academic Press, 1983, pp. 261-277.
20. Carr, C.L. Managing service quality at the IS help desk: Toward the development and testing of TECH-QUAL, a model of IS technical support service quality. Ph.D. dissertation, University of Minnesota, Minneapolis, 1999.

21. Chan, Y.E.; Huff, S.L.; Barclay, D.W.; and Copeland, D.G. Business strategic orientation, information systems strategic orientation, and strategic alignment. Information Systems Research, 8, 2 (June 1997), 125-150.
22. Chatterjee, D.; Grewal, R.; and Sambamurthy, V. Shaping up for e-commerce: Institutional enablers of the organizational assimilation of Web technologies. MIS Quarterly, 26, 2 (June 2002), 65-90.
23. Chatterjee, D.; Pacini, C.; and Sambamurthy, V. The shareholder-wealth and trading-volume effects of information technology infrastructure investments. Journal of Management Information Systems, 19, 2 (Fall 2002), 7-42.
24. Chircu, A.M., and Kauffman, R.J. Limits to value in electronic commerce-related IT investments. Journal of Management Information Systems, 17, 2 (Fall 2000), 59-80.
25. Churchill, G.A. A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 1 (February 1979), 64-73.
26. Cronbach, L.J., and Meehl, P.E. Construct validity in psychological tests. In D.M. Jackson and S. Messick (eds.), Problems in Human Assessment. New York: McGraw-Hill, 1967, pp. 57-77.
27. Cronin, J.J.J., and Taylor, S.A. SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58, 1 (January 1994), 125-131.
28. Davenport, T.H. Process Innovation: Reengineering Work Through Information Technology. Boston: Harvard Business School Press, 1993.
29. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13, 3 (September 1989), 319-340.
30. DeLone, W.H., and McLean, E.R. Information systems success: The quest for the dependent variable. Information Systems Research, 3, 1 (March 1992), 60-95.
31. DeLone, W.H., and McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19, 4 (Spring 2003), 9-30.
32. Dess, G.G., and Robinson, R.B.J. Measuring organizational performance in the absence of objective measures: The case of the privately-held firm and conglomerate business unit. Strategic Management Journal, 5, 3 (July-September 1984), 265-273.
33. Devaraj, S.; Fan, M.; and Kohli, R. Antecedents of B2C channel satisfaction and preference: Validating e-commerce metrics. Information Systems Research, 13, 3 (September 2002), 316-333.
34. Dickson, G.W.; Leitheiser, R.L.; Nechis, M.; and Wetherbe, J.C. Key information systems issues for the 1980s. MIS Quarterly, 8, 3 (September 1984), 135-148.
35. Doll, W.J., and Torkzadeh, G. The measurement of end-user computing satisfaction. MIS Quarterly, 12, 2 (June 1988), 259-274.
36. Duncan, N.B. Capturing flexibility of information technology infrastructure: A study of resource characteristics and their measure. Journal of Management Information Systems, 12, 2 (Fall 1995), 37-57.
37. Fan, M.; Stallaert, J.; and Whinston, A.B. Decentralized mechanism design for supply chain organizations using an auction market. Information Systems Research, 14, 1 (March 2003), 1-22.
38. Fitzgerald, L.; Johnston, R.; Brignall, S.; Silvestro, R.; and Voss, C. Performance Measurement in Service Businesses. London: Chartered Institute of Management Accountants, 1993.
39. Gefen, D., and Ridings, C.M. Implementation team responsiveness and user evaluation of customer relationship management: A quasi-experimental design study of social exchange theory. Journal of Management Information Systems, 19, 1 (Summer 2002), 47-70.
40. Gerbing, D.W., and Anderson, J.C. An updated paradigm for scale development incorporating unidimensionality and its assessment. Journal of Marketing Research, 25, 2 (May 1988), 186-192.
41. Glazer, R. Measuring the knower: Towards a theory of knowledge equity. California Management Review, 40, 3 (Spring 1998), 175-194.
42. Gold, A.H.; Malhotra, A.; and Segars, A.H. Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18, 1 (Summer 2001), 185-214.

43. Goodhue, D.L., and Thompson, R.L. Task-technology fit and individual performance. MIS Quarterly, 19, 2 (June 1995), 213-236.
44. Govindarajulu, C., and Reithel, B.J. Beyond the information center: An instrument to measure end-user computing support from multiple sources. Information & Management, 33, 5 (May 1998), 241-250.
45. Grover, V., and Davenport, T.H. General perspectives on knowledge management: Fostering a research agenda. Journal of Management Information Systems, 18, 1 (Summer 2001), 5-22.
46. Grover, V.; Jeong, S.R.; and Segars, A.H. Information systems effectiveness: The construct space and patterns of application. Information & Management, 31, 4 (December 1996), 177-191.
47. Guimaraes, T., and Igbaria, M. Exploring the relationship between IC success and company performance. Information & Management, 26, 3 (March 1994), 133-141.
48. Hair, J.F.J.; Anderson, R.E.; Tatham, R.L.; and Black, W.C. Multivariate Data Analysis, 5th ed. Upper Saddle River, NJ: Prentice Hall, 1998.
49. Hartog, C., and Herbert, M. 1985 opinion survey of MIS managers: Key issues. MIS Quarterly, 10, 4 (December 1986), 351-361.
50. Hattie, J. Methodology review: Assessing unidimensionality of tests and items. Applied Psychological Measurement, 9, 2 (June 1985), 139-164.
51. Hitt, L.M.; Wu, D.J.; and Zhou, X. Investment in enterprise resource planning: Business impact and productivity measures. Journal of Management Information Systems, 19, 1 (Summer 2002), 71-98.
52. Jiang, J.J.; Klein, G.; and Carr, C.L. Measuring information system service quality: SERVQUAL from the other side. MIS Quarterly, 26, 2 (June 2002), 145-166.
53. Jöreskog, K.G. Testing structural equation models. In K.A. Bollen and J.S. Long (eds.), Testing Structural Equation Models. Newbury Park, CA: Sage, 1993, pp. 294-316.
54. Kerlinger, F.N. Foundations of Behavioral Research. New York: McGraw-Hill, 1978.
55. Kim, J.; Lee, J.; Han, K.; and Lee, M. Businesses as buildings: Metrics for the architectural quality of Internet businesses. Information Systems Research, 13, 3 (September 2002), 239-254.
56. King, W.R. Management information systems. In H. Bidgoli (ed.), Encyclopedia of Management Information Systems, vol. 3. New York: Academic Press, 2003.
57. King, W.R., and He, J. External validity, coverage and nonresponse errors in IS survey research. Katz Graduate School of Business, University of Pittsburgh, 2004.
58. Kohli, R., and Devaraj, S. Measuring information technology payoff: A meta-analysis of structural variables in firm-level empirical research. Information Systems Research, 14, 2 (June 2003), 127-145.
59. Kraemer, K.L.; Danziger, J.N.; Dunkle, D.E.; and King, J.L. The usefulness of computer-based information to public managers. MIS Quarterly, 17, 2 (June 1993), 129-148.
60. Kraut, R.; Dumais, S.; and Koch, S. Computerization, productivity, and quality of work-life. Communications of the ACM, 32, 2 (February 1989), 220-238.
61. Lacity, M., and Willcocks, L. Global Information Technology Outsourcing. Chichester, UK: Wiley, 2001.
62. Lambert, D.M., and Harrington, T.C. Measuring nonresponse bias in customer service mail surveys. Journal of Business Logistics, 11, 2 (1990), 5-25.
63. Larsen, K.R.T. A taxonomy of antecedents of information systems success: Variable analysis studies. Journal of Management Information Systems, 20, 2 (Fall 2003), 169-246.
64. Lee, H. Knowledge management enablers, processes, and organizational performance: An integrative view and empirical examination. Journal of Management Information Systems, 20, 1 (Summer 2003), 179-228.
65. Lee, H.; Kwak, W.; and Han, I. Developing a business performance evaluation system: An analytical hierarchical model. Engineering Economist, 40, 4 (Summer 1995), 343-357.
66. Magal, S.R.; Carr, H.H.; and Watson, H.J. Critical success factors for information center managers. MIS Quarterly, 12, 3 (September 1988), 413-425.
67. Mirani, R., and King, W.R. The development of a measure for end-user computing support. Decision Sciences, 25, 4 (July-August 1994), 481-498.
68. Moore, G.C., and Benbasat, I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2, 3 (September 1991), 192-222.
69. Myers, B.L.; Kappelman, L.A.; and Prybutok, V.R. A comprehensive model for assessing the quality and productivity of the information systems function: Toward a theory for
information systems assessment. In E.J. Garrity and G.L. Sanders (eds.), Information Systems
Success Measurement. Hershey, PA: Idea Group, 1998, pp. 94121.
70. Nelson, K.M., and Cooprider, J.G. The contribution of shared knowledge to IS group
performance. MIS Quarterly, 20, 4 (December 1996), 409432.
71. Nelson, R.R., and Cheney, P.H. Training end users: An exploratory study. MIS Quarterly,
11, 4 (December 1987), 547559.
72. Niederman, F.; Brancheau, J.C.; and Wetherbe, J.C. Information systems management
issues for the 1990s. MIS Quarterly, 15, 4 (December 1991), 475500.
73. Olian, J.D., and Rynes, S.L. Making total quality work: Aligning organizational processes, performance measures, and stakeholders. Human Resource Management, 30, 3 (Fall
1991), 303333.
74. Parasuraman, A.; Zeithaml, V.A.; and Berry, L.L. Refinement and reassessment of the
SERVQUAL scale. Journal of Retailing, 64, 4 (Winter 1991), 420450.
75. Pitt, L.F.; Watson, R.T.; and Kavan, C.B. Service quality: A measure of information
systems effectiveness. MIS Quarterly, 19, 2 (June 1995), 173185.
76. Porter, M.E., and Millar, V.E. How information gives you competitive advantage. Harvard
Business Review, 63, 4 (JulyAugust 1985), 149160.
77. Premkumar, G. Evaluation of Strategic Information Systems Planning: Empirical Validation of a Conceptual Model. Pittsburgh: University of Pittsburgh, 1989.
78. Raghunathan, B.; Raghunathan, T.S.; and Tu, Q. Dimensionality of the strategic grid
framework: The construct and its measurement. Information Systems Research, 10, 4 (December 1999), 343355.
79. Rai, A.; Lang, S.S.; and Welker, R.B. Assessing the validity of IS success models: An
empirical test and theoretical analysis. Information Systems Research, 13, 1 (March 2002), 5069.
80. Ryan, S.D., and Harrison, D.A. Considering social subsystem costs and benefit in information technology investment decisions: A view from the field on anticipated payoffs. Journal
of Management Information Systems, 16, 4 (Spring 2000), 1140.
81. Ryker, R., and Nath, R. An empirical examination of the impact of computer information
systems on users. Information & Management, 29, 4 (1995), 207214.
82. Saarinen, T. An expanded instrument for evaluating information system success. Information & Management, 31, 2 (1996), 103118.
83. Santhanam, R., and Hartono, E. Issues in linking information technology capability to
firm performance. MIS Quarterly, 27, 1 (March 2003), 125154.
84. Saunders, C.S., and Jones, J.W. Measuring performance of the information systems function. Journal of Management Information Systems, 8, 4 (Spring 1992), 6382.
85. Schwab, D.P. Construct validation in organizational behavior. In B.M. Staw and L.L.
Cummings (eds.), Research in Organizational Behavior, vol. 2. Greenwich, CT: JAI Press,
1980, pp. 343.
86. Scott, J.E. Facilitating interorganizational learning with information technology. Journal
of Management Information Systems, 17, 2 (Fall 2000), 81114.
87. Seddon, P.B. A respecification and extension of the DeLone and McLean model of IS
success. Information Systems Research, 8, 3 (September 1997), 240253.
88. Seddon, P.B.; Staples, S.; Patnayauni, R.; and Bowtell, M. Dimensions of information
systems success. Communications of the AIS, 2 (November 1999), 239.
89. Segars, A.H. Assessing the unidimensionality of measurement: A paradigm and illustration within the context of information systems research. Omega, 25, 1 (February 1997), 107121.
90. Segars, A.H., and Grover, V. Re-examining perceived ease of use and usefulness: A
confirmatory factor analysis. MIS Quarterly, 17, 4 (December 1993), 517525.
91. Segars, A.H., and Grover, V. Strategic information systems planning success: An investigation of the construct and its measurement. MIS Quarterly, 22, 2 (June 1998), 139163.
92. Segars, A.H., and Hendrickson, A.R. Value, knowledge, and the human equation: Evolution of the information technology function in modern organizations. Journal of Labor Research, 21, 3 (Summer 2000), 431445.

Downloaded by [University of Kelaniya] at 02:31 21 September 2015

MEASURING THE PERFORMANCE OF INFORMATION SYSTEMS

109

93. Sethi, V., and King, W.R. Development of measures to assess the extent to which an
information technology application provides competitive advantage. Management Science, 40,
12 (December 1994), 16011627.
94. Sethi, V., and King, W.R. Organizational Transformation Through Business Process
Re-Engineering. Upper Saddle River, NJ: Prentice Hall, 1998.
95. Steers, R.M. Problems in the measurement of organizational effectiveness. Administrative Science Quarterly, 20, 4 (December 1975), 546558.
96. Straub, D.W.; Hoffman, D.L.; Weber, B.W.; and Steinfield, C. Measuring e-commerce
in Net-enabled organizations: An introduction to the special issue. Information Systems Research, 13, 2 (June 2002), 115124.
97. Sussman, S.W., and Siegal, W.S. Informational influence in organizations: An integrated approach to knowledge adoption. Information Systems Research, 14, 1 (March 2003),
24765.
98. Tallon, P.P.; Kraemer, K.L.; and Gurbaxani, V. Executives perceptions of the business
value of information technology: A process-oriented approach. Journal of Management Information Systems, 16, 4 (Spring 2000), 145173.
99. Tam, K.Y. The impact of information technology investment on firm performance and
evaluation: Evidence from newly industrialized economies. Information Systems Research, 9,
1 (March 1998), 8598.
100. Templeton, G.F.; Lewis, B.R.; and Snyder, C.A. Development of a measure for the
organizational learning construct. Journal of Management Information Systems, 19, 2 (Fall
2002), 175218.
101. Teo, T.S.H. Integration between business planning and information systems planning:
Evolutionary-contingency perspectives. Ph.D. dissertation, University of Pittsburgh, 1994.
102. Torkzadeh, G., and Dhillon, G. Measuring factors that influence the success of Internet
commerce. Information Systems Research, 13, 2 (June 2002), 187204.
103. Torkzadeh, G., and Doll, W.J. The development of a tool for measuring the perceived
impact of information technology on work. Omega, International Journal of Management
Science, 27, 3 (June 1999), 327339.
104. Van Dyke, T.P.; Kappelman, L.A.; and Prybutok, V.R. Measuring information systems
service quality: Concerns on the use of the SERVQUAL questionnaire. MIS Quarterly, 21, 2
(June 1997), 195208.
105. Venkatesh, V.; Morris, M.G.; Davis, G.B.; and Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 3 (September 2003), 425478.
106. Venkatraman, N. Strategic orientation of business enterprises: The construct, dimensionality, and measurement. Management Science, 35, 8 (August 1989), 942962.
107. Venkatraman, N., and Ramanujam, V. Measurement of business economic performance:
An examination of method convergence. Journal of Management, 13, 1 (1987), 109122.
108. Wand, Y., and Wang, R.Y. Anchoring data quality dimensions in ontological foundations. Communications of the ACM, 39, 11 (November 1996), 8695.
109. Wang, R.Y., and Strong, D.M. Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12, 4 (Spring 1996), 534.
110. Watson, R.T.; Pitt, L.F.; and Kavan, C.B. Measuring information systems service quality: Lessons from two longitudinal case studies. MIS Quarterly, 22, 1 (March 1998), 6179.
111. Weill, P. The relationship between investment in information technology and firm performance: A study of the valve manufacturing sector. Information Systems Research, 3, 4 (December 1992), 307333.
112. Wells, C.E. Evaluation of the MIS function in an organization: An exploration of the
measurement problem. Ph.D. dissertation, University of Minnesota, Minneapolis, 1987.
113. Xia, W. Dynamic capabilities and organizational impact of IT infrastructure: A research
framework and empirical investigation. Ph.D. dissertation, University of Pittsburgh, 1998.

110

JERRY CHA-JAN CHANG AND WILLIAM R. KING

Appendix. ISFS Instrument

What Is the IS Function?

This questionnaire is designed to assess the performance of the information systems (IS) function in your organization. The IS function includes all IS individuals, groups, and departments within the organization with whom you interact regularly. As a user of some information systems/technology, you have your own definition of what the IS function means to you, and it is the performance of your IS function that should be addressed here. Unless noted otherwise, each statement is rated on a five-point scale from 1 ("hardly at all") to 5 ("to a great extent"); circling 0 indicates that the statement is not applicable (N/A).
Effectiveness of Information

The following statements ask you to assess the general characteristics of the information that IS provides to you. Please try to focus on the data and information itself in giving the response that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.
The extent that the information is:
Interpretable
Understandable
Complete
Clear
Concise
Accurate
Secure
Important
Relevant
Usable
Well organized
Well defined
Available
Accessible
Up-to-date
Received in a timely manner
Reliable
Verifiable
Believable
Unbiased

The extent that the information:
Can be easily compared to past information.
Can be easily maintained.
Can be easily changed.
Can be easily integrated.
Can be easily updated.
Can be used for multiple purposes.
Meets all your requirements.


The following statements ask you to assess the outcomes of using the information that IS provides to you.
The extent that:
The amount of information is adequate.
It is easy to identify errors in information.
It helps you discover new opportunities to serve customers.
It is useful for defining problems.
It is useful for making decisions.
It improves your efficiency.
It improves your effectiveness.
It gives your company a competitive edge.
It is useful for identifying problems.


IS Service Performance
The following statements ask you to assess the performance of services provided by the IS department or function. Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.
The extent that the:
Training programs offered by the IS function are useful.
Variety of training programs offered by the IS function is sufficient.
IS function's services are cost-effective.
Training programs offered by the IS function are cost-effective.
IS function's services are valuable.
IS function's services are helpful.

The extent that the IS function:
Responds to your service requests in a timely manner.
Completes its services in a timely manner.
Is dependable in providing services.
Has your best interest at heart.
Gives you individual attention.
Has sufficient capacity to serve all its users.
Can provide emergency services.
Provides a sufficient variety of services.
Has sufficient people to provide services.
Extends its systems/services to your customers/suppliers.

The extent that IS people:
Provide services for you promptly.
Are dependable.
Are efficient in performing their services.
Are effective in performing their services.
Have the knowledge and skill to do their job well.
Are reliable.
Are polite.
Are sincere.
Show respect to you.
Are pleasant to work with.
Instill confidence in you.
Are helpful to you.
Solve your problems as if they were their own.
Understand your specific needs.
Are willing to help you.
Help to make you a more knowledgeable computer user.

Systems Performance

The following statements ask you to assess the extent that systems produce various outcomes for you. The term "systems" does not refer to the information itself. Rather, it refers to the capability to access, produce, manipulate, and present information to you (e.g., to access databases or to develop a spreadsheet). Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.
The extent that systems:
Make it easier to do your job.
Improve your job performance.
Improve your decisions.
Give you confidence to accomplish your job.
Increase your productivity.
Increase your participation in decisions.
Increase your awareness of job-related information.
Improve the quality of your work product.
Enhance your problem-solving ability.
Help you manage relationships with external business partners.
Improve customer satisfaction.
Improve customer service.
Enhance information sharing with your customers/suppliers.
Help retain valued customers.
Help you select and qualify desired suppliers.
Speed product delivery.
Help you manage inbound logistics.
Improve management control.
Streamline work processes.
Reduce process costs.
Reduce cycle times.
Provide you information from other areas in the organization.
Facilitate collaborative problem solving.
Facilitate collective group decision making.
Facilitate your learning.
Facilitate collective group learning.
Facilitate knowledge transfer.
Contribute to innovation.
Facilitate knowledge utilization.


The following statements ask you to assess general characteristics of the information systems that you use regularly. Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.
The extent that:
Systems have fast response time.
System downtime is minimal.
Systems are well integrated.
Systems are reliable.
Systems are accessible.
Systems meet your expectations.
Systems are cost-effective.
Systems are responsive to meet your changing needs.
Systems are flexible.
Systems are easy to use.
System use is easy to learn.
Your company's intranet is easy to navigate.
It is easy to become skillful in using systems.
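For readers who wish to tabulate responses, the minimal Python sketch below shows one plausible scoring convention, not a procedure prescribed by the paper: ratings of 1-5 are averaged within each factor, and a 0 (N/A) response is treated as missing rather than as a low rating. The factor names and item groupings in the example are hypothetical placeholders; the paper's own factor structure is established through its validation analysis.

# Minimal scoring sketch (illustrative, not from the original instrument).
# Assumes items are rated 1-5 and that 0 means "not applicable" (N/A).

def factor_score(ratings):
    """Average the applicable ratings for one factor; 0 (N/A) is excluded."""
    valid = [r for r in ratings if r != 0]
    return sum(valid) / len(valid) if valid else None  # None if every item is N/A

# Hypothetical respondent: item groupings here are placeholders only.
answers = {
    "information_effectiveness": [4, 5, 0],  # third item marked N/A
    "systems_performance": [3, 4],
}

scores = {name: factor_score(items) for name, items in answers.items()}
print(scores)  # {'information_effectiveness': 4.5, 'systems_performance': 3.5}

Treating 0 as missing rather than as the bottom of the scale matches the questionnaire's instruction to circle 0 only when a statement does not apply, so N/A responses do not drag down a factor's average.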
