
STRUCTURAL EQUATION MODELING IN MARKETING:

SOME PRACTICAL REMINDERS


Wynne W. Chin, Robert A. Peterson, and Steven P. Brown
The authors review issues related to the application of structural equation modeling (SEM) in marketing.
The discussion begins by considering issues related to the process of applying SEM in empirical research,
including model specification, identification, estimation, evaluation, and respecification, and reporting
of results. In addition to these process issues, a number of other issues, such as formulation of multiple
theoretical models, model error versus sampling error, and relating study objectives to the capabilities of
SEM, are considered, and suggestions offered regarding ways that SEM applications might be improved.

Structural equation modeling (SEM) first appeared in the marketing literature some three decades ago. Legitimized in part by a 1982 special issue of the Journal of Marketing Research and initially propagated and promoted by marketing researchers, including Anderson (e.g., Anderson and Gerbing 1988; Gerbing and Anderson 1988), Bagozzi (e.g., Bagozzi 1980; Bagozzi and Yi 1988), and Fornell (e.g., Fornell 1983; Fornell and Larcker 1981), among others, SEM has now become ubiquitous. Because it simultaneously reflects both a theoretical network of manifest (observed) variables and latent (unobserved) variables (constructs) and a general statistical technique, SEM represents a versatile and powerful tool for addressing a variety of substantive and methodological issues.
Recently, a number of methodological advances, useful extensions, and interesting applications have been reported, primarily in the psychology literature or the journal Structural Equation Modeling. For example, multilevel or hierarchical modeling is concerned with situations in which study objects are nested (e.g., students within classes) or observations are nested (e.g., repeated measures over time within an individual) and traditionally has been treated as a distinct statistical technique. However, Bauer (2003), Butner, Amazeen, and Mulvey (2005), Curran (2003), and Mehta and Neale (2005) have demonstrated the applicability of SEM to a variety of these situations. Long-standing issues in SEM, such as appropriate sample sizes (Lee and Song 2004b; Nevitt and Hancock 2004) and missing data (Allison 2003; Lee and Song 2004a; Nevitt and Hancock 2004), are being addressed. Maydeu-Olivares and Böckenholt (2005) showed how classic paired-comparison models can be embedded within structural equation models, and Marsh, Wen, and Hau (2004) suggested ways to incorporate latent variable interactions in SEM models. Cheung and Chan (2005) and Furlow and Beretvas (2005) suggested approaches for combining meta-analysis techniques and SEM. In brief, knowledge regarding SEM is evolving at an increasing rate.
At the risk of oversimplifying, a prototypical application
of SEM in marketing consists of specifying a hypothesized
model and evaluating the model using cross-sectional data.
Subsequent to model-fitting, the hypothesized model is
modified by adding or deleting paths until the best model
is identified. Model fit is then recorded and the results interpreted and reported. Such applications frequently violate
several best practices in SEM.

THE SEM PROCESS


Wynne W. Chin (Ph.D., University of Michigan), Professor and
Bauer Faculty Fellow, Decision and Information Sciences Department, C.T. Bauer College of Business, University of Houston,
Houston, TX, wchin@uh.edu.
Robert A. Peterson (Ph.D., University of Minnesota), John T.
Stuart III Chair in Business Administration, McCombs School of
Business, and Associate Vice President for Research, University of
Texas at Austin, Austin, TX, rap@mail.utexas.edu.
Steven P. Brown (Ph.D., University of Texas at Austin), Bauer
Professor of Marketing, C.T. Bauer College of Business, University
of Houston, Houston, TX, steve.brown@mail.uh.edu.

The authors appreciate the financial support of the IC² Institute, University of Texas at Austin.

Journal of Marketing Theory and Practice, vol. 16, no. 4 (Fall 2008), pp. 287-298. © 2008 M.E. Sharpe, Inc. All rights reserved. ISSN 1069-6679 / 2008 $9.50 + 0.00. DOI 10.2753/MTP1069-6679160402.

To facilitate and elucidate practical issues and decisions related to the application of SEM and highlight certain issues that researchers should consider more seriously, we present a modified decision-making framework. Specifically, SEM applications typically follow a five-step process (Bollen and Long 1993; McDonald and Moon-Ho 2002). In abbreviated form, these steps are

Step 1: model specification,
Step 2: model identification,
Step 3: model estimation,
Step 4: model evaluation,
Step 5: model respecification.

Each step involves decisions that have implications for subsequent steps. Problems in an earlier step can render
decisions in subsequent steps moot. In addition to these
five steps, we posit a sixth step that incorporates reporting
of SEM results. After discussing these steps, we also consider
a number of other issues relevant to the appropriate use of
SEM techniques, such as formulation of multiple theoretical models, model error versus sampling error, and relating
study objectives to the capabilities of SEM to determine
whether it represents the best analytical approach for a
given research problem.
When discussing each step, we briefly review selected
best practices as reminders when applying SEM. We also
discuss best practices with the objective of assisting novice
structural equation modelers. Advances in SEM software
have facilitated its adoption by new generations of users,
many of whom are not statistically adroit. Although these
advances have improved both the conceptual and analytical aspects of conducting research, they may also have
unintentionally derogated the analysis process by removing the need to confront critical research decisions. Even
though some of the reminders we offer might seem obvious, experience suggests that even sophisticated modelers
need to be refreshed from time to time. Our objective is
to offer a set of guidelines to improve the practice of SEM
in marketing, although we do not claim that the guidelines
are comprehensive.

Step 1: Model Specification


When specifying a structural equation model, researchers
should recognize that their model not only specifies a set
of conjectured relations among the manifest variables and
constructs of interest, but also that missing paths imply
their own hypotheses. In particular, the absence of a path
implies the absence of a direct effect between two constructs, whereas the absence of a nondirected arc implies
that there are no omitted factors explaining a relationship
(McDonald and Moon-Ho 2002). Omitted nondirected arcs,
for example, can represent potentially biasing latent method
factors that need to be considered (Podsakoff et al. 2003).
Thus, it has been suggested that researchers need sound theoretical and substantive justifications not only for the indicated structural relationships but also for those that
are not indicated (Boomsma 2000; Hoyle and Panter 1995;
McDonald and Moon-Ho 2002). Otherwise, because SEM
represents a full-information statistical approach, estimates
resulting from a model that omits key relationships will
be biased in Step 3.
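To make the point concrete, the sketch below (our illustration, not part of the original discussion; the two-construct, six-item design and all numeric values are hypothetical) shows how a specification translates into a model-implied covariance matrix. The zeros in the structural and error matrices are hypotheses in their own right: each omitted path or omitted nondirected arc is a parameter fixed to zero.

```python
import numpy as np

# Hypothetical specification: two latent constructs, three reflective items each.
lam = np.array([[1.0, 0.0],   # loadings (Lambda); first loading per construct fixed to 1
                [0.8, 0.0],
                [0.7, 0.0],
                [0.0, 1.0],
                [0.0, 0.9],
                [0.0, 0.75]])
B = np.array([[0.0, 0.0],     # structural paths; the zero above the diagonal asserts
              [0.6, 0.0]])    # "no direct effect" of construct 2 on construct 1
psi = np.diag([1.0, 0.64])    # latent disturbance variances
theta = np.diag([0.3, 0.4, 0.5, 0.3, 0.35, 0.45])  # error variances; zero off-diagonals
                                                   # assert "no omitted common factors"

I = np.eye(2)
inv = np.linalg.inv(I - B)
sigma = lam @ inv @ psi @ inv.T @ lam.T + theta    # model-implied covariance matrix
print(np.round(sigma, 3))
```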

Formative Versus Reflective Indicators


In SEM, the relationship between a manifest variable and
a construct is expressed as being either formative or reflective. If the relationship is formative, the manifest variable
is deemed to produce or cause the construct, whereas if the
relationship is reflective, the construct is deemed to produce
or cause the manifest variable. Although most structural
equation models specify the manifest variable–construct
relationship as reflective, recent research has suggested a
need for increasing use of formative measurement indicators (e.g., Diamantopoulos and Winklhofer 2001; Jarvis,
MacKenzie, and Podsakoff 2003; MacKenzie, Podsakoff,
and Jarvis 2005). The formative indicator versus reflective
indicator issue is sufficiently complex that it warrants some
reminders.
It has long been recognized that in many modeling
situations, reflective indicators should more appropriately be treated as formative (e.g., Hauser 1972; Jöreskog
and Goldberger 1975). Recently, Jarvis, MacKenzie, and
Podsakoff (2003) pointed out the threats to statistical
conclusion validity posed by misrepresenting what should
be formative measurement indicators as reflective indicators. Following an extensive review of structural equation
models in marketing research, they suggested that such
misrepresentations have been common, amounting to 28
percent of all instances in which measurement indicators
were improperly modeled as reflective. Although drawing
attention to the distinctions between these different types
of measurement models is undoubtedly beneficial, it is
possible that Jarvis, MacKenzie, and Podsakoff (2003) might
have overstated the frequency of misrepresentations that
have occurred in the literature and the threat to validity
that confusion over measurement models poses to findings
reported in the literature.
Jarvis, MacKenzie, and Podsakoff (2003) classified measurement indicators as formative when they can be construed as (1) defining characteristics of the latent construct;
(2) causing changes in the construct, rather than vice versa;
(3) not sharing a common theme among themselves; (4) being fundamental to the definition of the construct, rather than parallel and substitutable reflections of it; (5) not necessarily covarying; and (6) not having the same antecedents and consequences. When measurement indicators
appear to match this set of criteria, Jarvis, MacKenzie, and
Podsakoff (2003) would classify them as being properly
formative. An example would be using manifest variables
such as income, education, and occupation as causes of the
socioeconomic status construct.
Although their criteria are intuitively reasonable, judgments regarding whether attitudinal and behavioral scales
in particular should be considered formative or reflective
indicators should depend on more than just the form of
measurement items. Stated somewhat differently, it is
difficult to meaningfully categorize measurement scales
unequivocally as being formative or reflective based on
the measurement items alone. As Pedhazur and Schmelkin
noted, "determination of the type of indicator depends on the theoretical formulations about the construct" (1991, p. 54). From this perspective, some of the measures that Jarvis, MacKenzie, and Podsakoff (2003) classified as formative
(even though they were mistakenly specified as reflective
by some researchers) are not so easily classified as such;
it is not clear from a simple inspection of scale items that
they should necessarily be treated as formative.
For example, when measuring overall satisfaction with
life using three items assessing satisfaction with work life,
home life, and extracurricular activities, one might classify
these items as formative. A thought experiment often used is
to consider whether a decline in satisfaction at work necessarily implies a corresponding decline in satisfaction with
the home life or extracurricular activities. If one believes
the answer is no (e.g., because these facets of satisfaction
are believed to be relatively independent and, more importantly, because they are critical inputs to overall evaluation
of general life satisfaction), then it would be appropriate
to treat them as formative. But simple item examination is
not enough.
Discriminating between formative and reflective indicators implies a cognitive process by which responses to
measurement items are constructed. Modeling measurement items as formative assumes a temporal ordering in the
sense that the respondent must first experience a genuine
change in one or more of these facets of life satisfaction
and then incorporate these evaluations into a subsequent
judgment of overall life satisfaction. It also assumes that
respondents answer these items by retrieving perceptions
stored in memory rather than by constructing responses in
real time as they complete the survey instrument (Peterson
2005). Conversely, if we believe that responses to satisfaction items were conditioned by other material included in the questionnaire (e.g., the researcher's introduction,
survey instructions, other questions, etc.) that caused them
to reflect on their overall life satisfaction, then subsequent
responses to items about work, home, and extracurricular
activities would best be judged as reflective of this overall
level of life satisfaction.
The adequacy of SEM measurement models in behavioral
research depends on their ability to accurately represent the
responses or answers of study participants to measurement
items. This is because study participants' responses to the
measurement items, not the measurement items per se, are
modeled. Thus, the manner in which study participants
process information in generating responses to measurement items is as relevant to judgments regarding whether
construct measurement should be treated as formative or
reflective as is the nature of the measurement items themselves. If, in generating responses to specific measurement
items, study participants retrieve material from cognitive
structures corresponding roughly to the latent constructs
indicated by these items, a basis exists for considering the
measurement items as being to some degree reflective.
Moreover, there is an extensive literature demonstrating
that survey instructions and other measurement items prime
and activate such cognitive structures such that it would be
unlikely that study participants would not refer to them in
the process of retrieving or generating survey responses. In
brief, although certain formative-versus-reflective indicator
issues have been acknowledged or addressed (e.g., Bollen
and Ting 2000; Edwards and Bagozzi 2000; MacCallum and
Browne 1993), additional research is required.

Step 2: Model Identification


The model specified in Step 1 has important implications
for Step 2, which concerns the determination of whether
model estimates are unique. Ideally, the identifiability of
a model should be determined prior to estimation (e.g.,
Bekker, Merckens, and Wansbeek 1994; Pearl 2000). Identification problems can be related either to the measurement
or structural portions of the model. When identification
problems exist, subsequent steps are rendered meaningless. Three measurement items per construct would be
enough for identification if each item is connected only to
its respective construct and error terms are not correlated.
When there are only two items per construct, and each
item is connected only to its construct and error terms are
not correlated, identification can be achieved if there are
structural connections among all the constructs.
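As a reminder of the counting side of identification, the sketch below (ours; the model is a hypothetical two-construct design with three reflective items per construct and uncorrelated errors) applies the t-rule: the number of free parameters may not exceed the number of unique variances and covariances. Meeting the t-rule is necessary, but not sufficient, for identification.

```python
# t-rule check for a hypothetical two-factor, six-item CFA with uncorrelated errors.
p = 6                          # number of manifest variables
moments = p * (p + 1) // 2     # unique variances and covariances available: 21

free_loadings = 4              # one loading per construct fixed to 1 for scaling
free_errors = 6                # one error variance per item
free_latent = 3                # two latent variances plus one covariance (or structural path)
t = free_loadings + free_errors + free_latent

df = moments - t
print(f"degrees of freedom = {df}")   # 21 - 13 = 8; df >= 0 is necessary, not sufficient
```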


Step 3: Model Estimation


Key model decisions relate to the statistical method used to
estimate model parameters. A variety of methods, such as
maximum likelihood (ML), generalized least squares (GLS),
weighted least squares (WLS) or asymptotically distribution-free
(ADF), and ordinary least squares (OLS) methods are available. The choice depends, in part, on data conditions, such as
sample size, data distribution (e.g., degree of univariate and
multivariate normality), and the type of data matrix used
as input (i.e., covariance versus correlation). ADF entails
minimal distributional assumptions, but it also requires a
sample size of several thousand if the number of measures
exceeds 15, and, in general, the Satorra and Bentler rescaled
statistics may work better (e.g., Curran, West, and Finch
1996; Hoogland and Boomsma 1998; Hu, Bentler, and Kano
1992; West, Finch, and Curran 1995). ML assumes normally
distributed data and requires the data matrix to be positive
definite. In general, the structural paths and factor loading
estimates using the ML estimator are quite robust as a default option, but statistical tests can be positively biased,
resulting in a tendency toward Type I errors. One approach
is to use the Satorra–Bentler procedure (Satorra and Bentler
1994) that downwardly adjusts a test statistic according to
the degree of observed kurtosis. Another option is to use
bootstrapping, which has been shown to be generally less
biased for a confirmatory three-factor model at sample sizes
of 200 or more (Nevitt and Hancock 2001).
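For readers who want to see what estimation is actually minimizing, the sketch below (our illustration; the sample and implied matrices are hypothetical two-variable placeholders) computes the ML discrepancy, which, multiplied by N − 1, yields the familiar chi-square test statistic.

```python
import numpy as np

def f_ml(S, sigma):
    """Maximum likelihood discrepancy between sample and model-implied covariances."""
    p = S.shape[0]
    _, logdet_sigma = np.linalg.slogdet(sigma)   # sigma must be positive definite
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_sigma - logdet_S + np.trace(S @ np.linalg.inv(sigma)) - p

# Hypothetical sample matrix S and model-implied matrix sigma_hat:
S = np.array([[1.00, 0.45], [0.45, 1.00]])
sigma_hat = np.array([[1.00, 0.40], [0.40, 1.00]])
N = 250
chi_square = (N - 1) * f_ml(S, sigma_hat)        # chi-square test statistic
print(round(chi_square, 2))
```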
Most SEM software programs, such as LISREL, AMOS, and
EQS, are designed for the analysis of covariances (Steiger
2001). If correlations are used, standard error estimates
are often wrong by an order of magnitude (Cudeck 1989).
Constrained estimation methods (Browne and DuToit 1992),
as implemented in SEPATH and RAMONA, provide solutions to this problem. Categorical indicator variables may
also cause nonnormality, and Muthén (1989; Muthén and Kaplan 1992) suggests that his Mplus program estimator may
outperform Satorra and Bentler and ADF when the number
of categories is small (i.e., less than five).

Step 4: Model Evaluation


Once estimates are obtained, the next decision (Step 4) involves evaluating model fit. This decision involves selecting
fit indices as well as choosing which aspects of the model
to assess. Many absolute and incremental fit indices exist,
and to date a consensus has not been reached regarding
which should be reported or what normative threshold
standards should be (Hu and Bentler 1999; Marsh, Hau,
and Wen 2004). Many indices, such as the goodness-of-fit index (GFI) and adjusted goodness-of-fit index (AGFI), are significantly influenced by sample size. These indices have also been shown to be insensitive to model misspecification (Hu and Bentler 1998). Implementing a representative set of fit indices, including chi-square, degrees of freedom, root mean square residual (RMR), and root mean square error of approximation (RMSEA), is recommended when evaluating model fit.
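As an illustration of how such an index follows from the reported quantities, the sketch below (ours; the chi-square, degrees of freedom, and sample size are hypothetical) computes RMSEA using one common formula; some programs divide by N rather than N − 1.

```python
import math

def rmsea(chi_square, df, N):
    """Root mean square error of approximation from the chi-square, df, and sample size."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (N - 1)))

print(round(rmsea(chi_square=85.3, df=48, N=250), 3))  # hypothetical values
```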
In terms of model evaluation, the traditional approach
involves assessing only overall model fit (as opposed to
separate assessments of the measurement and structural
models). The fit of the structural model can be assessed as
the difference in model fit discrepancy between the full
model and the measurement model. Under ML or GLS estimation, the discrepancy function for the full model can
be decomposed into independent, additive noncentral chi-squares (Steiger, Shapiro, and Browne 1985); their respective
degrees of freedom are also additive. Such decomposition
can provide important information.
For example, a good overall fit for the measurement
model may conceal an unsatisfactory structural portion
when fit indices are reported only for the overall model.
Likewise, in the event of poor fit, one cannot attribute the
poor overall fit specifically to misspecified measures, misspecified structural paths, or possibly both. According to
McDonald and Moon-Ho (2002), their analysis of 14 studies
that provided the necessary information for model decomposition revealed that the majority of studies, contrary to
the published conclusions, actually showed support for
unacceptable path models. Future studies in marketing
should examine the fits of the structural and measurement
models independently as well as the overall model.
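A hedged sketch of this decomposition follows (our illustration; all fit values are hypothetical): the structural portion's discrepancy is the difference between the full model and the measurement model, and its degrees of freedom are likewise the difference.

```python
from scipy import stats

chi2_full, df_full = 112.4, 62          # hypothetical full (measurement + structural) model
chi2_meas, df_meas = 71.8, 55           # hypothetical measurement (CFA) model alone

chi2_struct = chi2_full - chi2_meas     # discrepancy attributable to the structural paths
df_struct = df_full - df_meas
p_value = stats.chi2.sf(chi2_struct, df_struct)
print(chi2_struct, df_struct, round(p_value, 4))
```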

Step 5: Model Respecification


The fifth step in applying SEM presents an opportunity for
reflection on and reconsideration of the original nomological network and the theory underlying the model. With a
reasonably good model fit, one might consider whether
simplification is warranted. Conversely, a poor fit begs
the question of whether one should modify the model to
improve fit. (The process of making such modifications
likewise merits consideration.) Before attributing a poor fit
solely to model misspecification, however, factors such as
sample size, data distribution, multilevel data, and so forth
should be considered. Estimated t-values can suggest simplifications, whereas modification indices identify potential
expansions. Kaplan (1990; 1991) recommended considering
both the expected parameter change statistic introduced by
Saris, Satorra, and Sörbom (1987) and modification indices in this process. Nonetheless, purely data-driven approaches,
such as allowing correlated error terms without an appropriate theoretical rationale (e.g., representing one or more
underlying trait or method effects) should be avoided
(Hoyle and Panter 1995). Moreover, as Kaplan noted, "External specification errors, in the sense of omitted variables, are not addressed by this method" (1990, p. 153, emphasis in original). Clearly, wholesale sets of modifications, as
opposed to careful, deliberate incremental changes, will
likely produce a fallible final model.
Relatively few of the initially estimated (hypothesized)
structural equation models in marketing studies remain
intact. Most initial models are modified; paths are added,
deleted, or both on the basis of global model fit or modification indices. Subsequent models invariably outperform the
models initially estimated, and inferences and conclusions
are then based on the modified models without regard to
how the final models were parameterized.
Chief among the drawbacks of this modification approach
is that, by definition, model modifications capitalize on
prior knowledge of path values and chance characteristics
in the data, especially if a maximum likelihood estimation
procedure is employed (e.g., MacCallum 1986; MacCallum,
Roznowski, and Necowitz 1992). Indeed, by changing paths
or permitting manifest variable or construct error variances
to covary, virtually any theoretical model can be sufficiently distorted to fit existing data (e.g., McQuitty 2004).
At a minimum, modifying an initially estimated structural
equation model reduces its generality and requires that it
be validated with an independent sample.


Step 6: Clear and Informative Reporting


After making the decisions involved in the preceding five
steps, it is necessary to decide what to report. Several articles (e.g., Boomsma 2000; Chin 1998a; Hoyle and Panter
1995; McDonald and Moon-Ho 2002; Steiger 1988, 2001)
have emphasized adequate reporting as a foundation for
constructing a tradition that provides exemplars for students to learn SEM and researchers to move the knowledge
base forward. Beyond encouraging researchers to read these
articles, we would also emphasize the need for all SEM
papers, at a minimum, to provide sufficient information
for researchers to replicate the analyses, estimates, and fits
that are reported. The most critical information that needs
to be reported includes
• the covariance matrix used for each model tested and the associated sample size
• the model setup describing the parameters in each model that are estimated versus fixed and the initial start values if different from the default used by the software
• the specific software (brand name including version number used) and discrepancy function employed to estimate model parameters (e.g., ML or GLS)
• the software-reported discrepancy between the empirical covariance matrix, S, and the covariance matrix implied by the model being tested
• the software-reported discrepancy between the empirical covariance matrix, S, and the covariance matrix implied by a baseline independence model (necessary to calculate many fit indices)
• the reported chi-square and degrees of freedom for each model tested.

An advantage of the SEM approach is that any researcher can replicate the analysis reported in a paper as long as the necessary input matrices (typically a covariance matrix or correlation matrix with item standard deviations) and information on the specific models that were analyzed (e.g., model specification, software version, estimation algorithm, and starting values) are reported. Replication can prove extremely useful for initial screening by editors or reviewers and also provides an opportunity for reviewers to detect and possibly offer solutions to problems that may exist in the analysis. All statistical models tested can easily be described through graphical representation and simple language that explicitly present the model paths and indicate which parameters are being estimated and which are fixed or constrained. For any given model, certain linkages (structural or loadings) and variances (for manifest or latent variables) are estimated, whereas others are fixed to equal a specific number (typically 1) or constrained to equal another parameter. All such specifications should be explicitly reported.
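To illustrate why this minimum set matters, the sketch below (ours; the file name, sample size, and degrees of freedom are hypothetical placeholders) shows the kind of screening a reviewer or editor could run on a reported covariance matrix before attempting a full replication.

```python
import numpy as np

S = np.loadtxt("reported_covariance.txt")   # hypothetical file reconstructed from the paper
N = 312                                     # reported sample size
reported_df = 41                            # reported degrees of freedom

assert S.shape[0] == S.shape[1] and np.allclose(S, S.T), "matrix must be square and symmetric"
eigvals = np.linalg.eigvalsh(S)
print("positive definite:", bool(eigvals.min() > 0))   # ML estimation requires this

p = S.shape[0]
moments = p * (p + 1) // 2
print("free parameters implied by reported df:", moments - reported_df)
```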
OTHER CONSIDERATIONS WHEN APPLYING SEM
The foregoing discussion touches on issues and decisions
that arise in the process of applying SEM. A number of other
issues are important to consider during and even before
undertaking an SEM application. Researchers should not
assume that SEM is the best approach for every empirical application. Issues to consider include formulation of
multiple theoretical models, model error versus sampling
error, and relating study objectives to the capabilities of
SEM. Each of these issues is briefly addressed in the following sections.

Multiple Theoretical Models


Although an oversimplification, we noted that SEM
was initially conceptualized as an approach for making quantitative comparisons among multiple hypothesized networks of manifest variables and latent constructs. As
such, the approach was consistent with the proposals of
Chamberlin (1965) and Platt (1964), that multiple working
hypotheses should be the basis of scientific exploration and
discovery.1 Chamberlin argued against the null, single, or
dominant hypothesis approach to science:
In following a single hypothesis, the mind is presumably led to a single explanatory conception. But an
adequate explanation often involves the co-ordination
of several agencies, which enter into the combined
result in varying proportions. The true explanation is
therefore necessarily complex. Such complex explanations of phenomena are specially encouraged by the
method of multiple hypotheses, and constitute one of
its chief merits. . . . [A] single working hypothesis may
lead investigation along a given line to the neglect of
others equally important. (1965, p. 756)
Platt echoed Chamberlin's argument and described the
use of multiple hypotheses to produce a form of strong
inductive inference.
With few exceptions (e.g., Armstrong, Brodie, and Parsons 2001), the contributions of Chamberlin and Platt seem
to be lost in the mists of time. Even so, there is wide support
for the notion of multiple hypothetical networks in the
context of SEM, especially among experienced modelers.
This is so because of a well-known limitation of SEM: in
every SEM application there are a number of theoretically
plausible models that cannot be distinguished empirically
from each other on the basis of global model fit (e.g., MacCallum et al. 1993; Quintana and Maxwell 1999). These
alternative structures might be equivalent models that
produce identical measures of fit or nonequivalent models
that fit the data as well as or better than a theoretical target
model (Tomarken and Waller 2005). Consequently, positing
only a single hypothetical model can lead to acceptance of
an improper model, in part due to the presence of what
is termed a "confirmation bias," a partiality favoring the
model being evaluated, coupled with a lack of motivation
to examine alternative models (MacCallum and Austin
2000).
In the relatively small proportion of studies in which
multiple model formulations are evaluated, testing generally takes the form of assessing incremental improvements
in model fit in a series of nested models, which can extend
from a null model to a saturated one. Although this practice
clearly has merit in identifying the most parsimonious model in a series of models that can potentially represent the
data, it does not assure that alternative theoretical models

outside of the nested structure would not represent the data


more accurately. Obviously, testing all possible theoretical
models is generally not feasible and would amount to pure
"dust bowl empiricism," but when theoretical rationales suggest multiple plausible ways of evaluating relationships, comparing multiple conceptual models is desirable, even when the models are not nested. Metrics such as Akaike's
information criterion provide some basis for meaningful
comparisons among such models.
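The sketch below (our illustration; the fit values are hypothetical, and the chi-square-plus-2q form of AIC is only one of several conventions used by SEM programs) shows the basic comparison for two non-nested models: the model with the smaller criterion is preferred.

```python
def aic(chi_square, free_params):
    """One common SEM form of Akaike's information criterion: chi-square + 2 * q."""
    return chi_square + 2 * free_params

model_a = aic(chi_square=96.2, free_params=19)   # hypothetical theoretical model A
model_b = aic(chi_square=88.7, free_params=24)   # hypothetical, non-nested model B
print("prefer:", "A" if model_a < model_b else "B")
```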

Model Error Versus Sampling Error


Clearly, the benefits of SEM are predicated in large part on
the accuracy of the model being tested. Because SEM is a full-information procedure, model fits and estimates can be influenced
by many sources of error, including simply having one or
two poor measures that do not belong with the other measures for a particular construct. A model positing that only
a single factor can explain the covariances among a group
of downstream constructs is likely incorrect. Subsets of
items may be correlated because they are mutually affected
by other underlying trait or method factors. Nonlinear
relationships may also exist between the construct and its
measures. The key question becomes how robust are the
estimates that are obtained from models that are imperfect
representations of the underlying real world.
One framework to make sense of this issue was presented by Cudeck and Henly (1991), based on the work of Linhart and Zucchini (1986). Within the covariance structure paradigm of SEM, we can consider four different matrices. One, Σ, represents the true population covariance matrix, which is unknown in practice. But if Σ were available and we tested our model using it, we would obtain a vector γ of parameter values. These values imply a covariance matrix Σ(γ). Because a model, by definition, is a simplified representation of the actual covariances, we would expect Σ to differ from Σ(γ), representing an approximation error similar to MacCallum and Tucker's (1991) notion of model error. Unfortunately, other than through a Monte Carlo simulation, there is no access to the population covariance and researchers must use the sample covariance matrix S. When we fit a model to this matrix, we obtain the vector γ̂ of parameter estimates and the implied covariance matrix Σ(γ̂). The difference between S and Σ(γ̂) represents the sample discrepancy. By implication, the difference between the two implied covariances, Σ(γ) and Σ(γ̂), reflects the estimation discrepancy due to sampling error. Finally, the difference between the sample-based implied covariance matrix, Σ(γ̂), and the unknown population covariance matrix Σ is the overall discrepancy, influenced by both the error in the model as well as error in the data. Figure 1 provides a summary of these discrepancies.
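In symbols, and under the usual reading of this framework (our summary of the notation reconstructed above, not a reproduction of Figure 1), the discrepancies can be written as:

```latex
\begin{align*}
\text{approximation discrepancy (model error):} \quad & F\bigl(\Sigma,\ \Sigma(\gamma)\bigr) \\
\text{estimation discrepancy (sampling error):} \quad & F\bigl(\Sigma(\gamma),\ \Sigma(\hat{\gamma})\bigr) \\
\text{overall discrepancy:} \quad & F\bigl(\Sigma,\ \Sigma(\hat{\gamma})\bigr) \\
\text{sample discrepancy (minimized in estimation):} \quad & F\bigl(S,\ \Sigma(\hat{\gamma})\bigr)
\end{align*}
```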
The takeaway from this view of model fit is the recognition that the model parameters researchers estimate are
not fixed but are dependent on the discrepancy function.
Estimates obtained using different discrepancy functions
are not estimating the same things. The parameter values depend on the discrepancy function being used, and the parameter estimates in γ̂ estimate values in γ. For example, fitting a model using ML would yield an approximation discrepancy based on the vector γ_ML of parameter values. If we use a different estimation routine, such as OLS, the resulting γ_OLS values will be different. This, in turn, implies
that the covariance matrices and corresponding approximation discrepancies will be different. Thus, the ML and
OLS estimates for a given sample should not be viewed as
attempts to estimate the same fixed set of parameter values
(which is the conventional perspective). A fixed parameter
perspective is based on the notion that a model is correct
in the population. Instead, researchers should begin an
analysis with the viewpoint that the model being evaluated
will be wrong in varying degrees, and the parameter values
being estimated from the sample will differ depending on
the discrepancy function used.
To highlight this point, MacCallum, Tucker, and Briggs
(2001) simulated data where both model error and sampling error were present, with model error introduced by
including a large number of minor factors. Factor analysis
showed that OLS was a better estimation procedure than
ML. In part, this resulted from the fact that the underlying
model generating the data was inconsistent with the ML
assumptions that the model was correct and that all error
was sampling error under normal theory conditions. OLS, in
contrast, makes no distributional assumptions. MacCallum
noted that ML is used in the majority of SEM studies, but "depending on the nature of error in data, one discrepancy function may be better than another because of better correspondence between assumptions about error and the nature of error . . . a discrepancy function such as ML that assumes all error is sampling error under a particular distribution theory might not be the optimal choice in many situations" (2003, p. 124). This assumption leads to the corresponding assumption that larger correlations among items
in a matrix are subject to less error and vice versa for smaller
correlations, resulting in MacCallum's conjecture that ML
estimation works harder at fitting larger correlations than
smaller ones, as opposed to OLS, which works approximately
the same regardless of correlation magnitudes.

Figure 1. Cudeck–Henly (1991) Representation of the Relationship Between a Model and the Real World

If one accepts this perspective, there are also implications regarding the value of existing SEM-based Monte Carlo
studies. MacCallum believes they are of limited value and
can be faulted for lack of realism inasmuch as they tend
to ignore the notion that all models are wrong to some
degree. Instead, investigators typically specify a "true" factor structure with varying numbers of factors, loadings, and
intercorrelations. Sample data are then generated and model
fits obtained. This approach examines only sampling issues
and evaluates estimation method performance when the
model in question matches the population model. Thus,
new studies should be performed that vary both model
error and sampling error.
The perspective that the model tested may be a poor
approximation of population covariances can be extended
even further. What if the underlying population model
is not covariance based? For example, it has been shown
that a components-based procedure known as partial least
squares (PLS) often provides component-based loadings
and structural paths similar to SEM without requiring the
distributional assumptions (e.g., Fornell and Bookstein
1982). Moreover, PLS estimates can be obtained with
smaller sample sizes relative to model complexity. Even
so, some researchers have noted that PLS estimates can be
biased. Essentially, the case values for the constructs are
inconsistent relative to an SEM model analysis because PLS components are aggregates of the observed variables and include measurement error (Chin 1998b). This bias
tends to manifest itself in somewhat higher estimates for
loadings and lower structural path estimates. The estimates
will approach the true parameter values when both the
number of indicators per construct and sample size increase.
This limiting case has been termed "consistency at large" (Wold 1982, p. 25).
Other researchers, however, have suggested these estimated biases were calculated relative to the covariance-based
ML estimation, which presupposes that the underlying
model is true and the generated data are covariance
based. Schneeweiss suggested that the consistency-at-large notion is really "a justification for using PLS as an estimation method to estimate LISREL parameters in cases where the number of manifest variables is large" (1990, p. 38). He
argued that PLS can be seen as a consistent estimator of
parameters and latent variables as long as we ask the question of which population parameters we are attempting to
estimate. If estimating the parameters for the population
model as defined by PLS, then we have the advantage of
treating PLS as "a method to define parameters and latent variables that are useful for describing the relations that may exist between blocks of observable (manifest) variables" (Schneeweiss 1990, p. 38), even if the data cannot
be regarded as stemming from a covariance model. In this
situation, PLS will estimate the parameters consistently. If,
on the other hand, the data are generated from a covariancebased model, PLS will produce inconsistent estimates.

Scope and Objectives Consistent with SEM Capabilities
The prior discussion leads to what we might argue is a key
aspect of SEM decision making: the objectives and requirements of the modeling process. Meehl (1990, p. 114), in
borrowing the concept of verisimilitude (i.e., truth-likeness)
from philosophy, noted that models are always imperfect
and vary in the degree to which they approximate reality in
two ways. One way deals with incompleteness: how well the complexities of the real world are represented in the model. The second way deals with falseness: how well contradictions between the model and the world are represented.
Rozeboom (2005) noted that Meehl's notion of verisimilitude urged us to recognize that scientific theories are never
impeccably veridical in all respects, and practical theory
adjudication needs to ask not whether a model is true but
how a model is true and to what degree it is true.
Most SEM studies seem to focus on the falsity of a
model as opposed to its completeness. In part because of algorithmic constraints, few SEM models are very complex (i.e., have a large number of latent variables). Emphasis on
model fit tends to restrict researchers to testing relatively
elementary models representing either a simplistic theory
or a narrow slice of a more complex theoretical domain.
As an example, Shah and Goldstein (2006), in their review
of 93 SEM-based articles in operations management, found
an average of 4.4 latent variables per model with a range
of 1 to 12 latent variables, and between 3 and 80 manifest
variables, with a mean of 14. MacCallum concluded that "the empirical phenomena that yield the population variances and covariances in Σ are far too complex to be fully captured by a linear common factor model with a relatively small number of common factors" (2003, p. 118).
In his presidential address dealing with measurement
and conceptualization problems in sociology more than
a quarter of a century ago, Blalock opined that reality is sufficiently complex that "we will need theories that contain upwards of fifty variables if we wish to disentangle the effects of numerous exogenous and endogenous variables on the diversity of dependent variables that interest us" (1979, p. 881). A few years later, he further suggested (Blalock 1986)
that in formulating theories and models, there is a natural
incompatibility between generalizability, parsimony, and
precision, and that one of these desired characteristics must
be sacrificed when conducting research. Blalock argued
for excluding the criterion of parsimony in order to allow
models to describe more diverse settings and populations
by replacing constants reflecting such settings with explanatory variables.
Thus, the question becomes whether the goal is to explain the covariances of a relatively small set of measured
items based on a few underlying latent constructs or to focus
on the complex interrelationships among a large set of factors that more closely mirrors the study context. The former
may work well in experimental settings, whereas more
complex models capturing many factors related to attitudes,
opinions, and behaviors over time could be difficult to fully
capture using SEM. In these instances, component-based
methods such as PLS or path analysis may be very useful,
especially if one places greater emphasis on the completeness portion of Meehl's notion of verisimilitude.
The sample size requirements when using PLS for complex models are likely smaller (Chin and Newsted 1999) and
can be ascertained by determining that part of the model
that has the largest number of predictors of a particular dependent variable and applying Cohens power tables (1988)
relative to the effect sizes one wishes to detect. Moreover,
if latent scores are desired, the PLS approach explicitly
calculates such scores as part of its underlying algorithm (Chin 1998b). Finally, because SEM is a full-information procedure, a poorly developed construct with weak measures can impact
all other estimates throughout an SEM model. PLS, being
a limited-information, component-based least squares alternative, is less affected.
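A sketch of this sizing logic follows (ours, not a verbatim reproduction of Chin and Newsted's procedure; the predictor count and the assumed "medium" effect size f² = 0.15 are hypothetical): find the smallest sample that gives 80 percent power for the regression block with the most predictors of a single dependent variable.

```python
from scipy import stats

def power(n, k, f2, alpha=0.05):
    """Power of the overall F test in a regression with k predictors and effect size f^2."""
    df1, df2 = k, n - k - 1
    ncp = f2 * (df1 + df2 + 1)                # Cohen's noncentrality: lambda = f^2 * n
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return stats.ncf.sf(f_crit, df1, df2, ncp)

k, f2 = 7, 0.15                               # largest predictor block; "medium" effect size
n = k + 2
while power(n, k, f2) < 0.80:                 # smallest n giving at least 80 percent power
    n += 1
print(n)
```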
Overall, decisions regarding the use of PLS instead of SEM
depend in part on whether the researcher favors
• ease in specifying models (e.g., PLS requires input only on how to group measures into construct blocks and specification of path relationships, not on model identification, measurement scale adequacy for the discrepancy estimator, and other SEM constraints)
• ease in interpretation of results (consistent with a regression perspective of variances explained versus model fit indices)
• ability to test more complex models (i.e., large numbers of indicators or latent variables) for a given sample size
• ability to model formative indicators.

Causal Inferences
Appropriate use of SEM also entails a priori consideration
of the models one plans to test, the nature of the manifest
variables, and the sampling design. Far too often, it is
only after data have been collected and the first attempt at
model estimation conducted that researchers consider the
adequacy and compatibility of the research problem, theoretical framework, population sampled, manifest variables,
and plausible models.
Marketing researchers often construct theoretical cause-and-effect statements involving networks of latent variables
and evaluate the networks using cross-sectional data in
structural equation models. In fact, in its earlier days,
researchers often referred to SEM as "causal modeling," and
the path-analytic structures of structural equation models
clearly imply causal flows from exogenous to endogenous
constructs. However, deriving causal inferences from
cross-sectional data can be fraught with risk, especially
when there is an implicit (nonexplicated) assumption that
changes in one construct cause changes in another construct
(e.g., Cliff 1983). Doing so requires the assumption that
the causal relationship is instantaneous, that homeostasis
has occurred, or, in the case of self-report data, that study
participants can reasonably be asked to simultaneously
provide information on previous, present, or future behaviors. Such an assumption would appear unwarranted in
most cross-sectional SEM applications. Indeed, researchers
using SEM techniques should keep in mind that causality
can never be conclusively established from correlational
(including longitudinal) data.

Closely related to the use of temporally lagged dependent variables to facilitate causal inferences is the use of
multiple data sources for the same purpose. For example,
it is common in studies of individual performance in sales
management or services marketing contexts to employ
manager ratings or objective measures of performance in
order to avoid common-method bias and facilitate causal
inferences, which are generally implicit in such models.
Even though it is clearly commendable to incorporate different data sources in this manner, causal inferences are
still subject to the same caveat as noted above. Further, this
type of model generally specifies a path-analytic (causal)
flow linking other constructs, which are represented as
antecedents of performance and are operationalized using
data taken from the same source (e.g., individual salespeople
or service providers). The causal relationships implied to
exist among such constructs are subject to stronger caveats
than those that link the antecedents to performance, but
researchers tend to accept such inferences as long as the
ultimate dependent variable is measured using data from
a different source.
The issue of variable timing in causal structural equation
models, regardless of whether cross-sectional or longitudinal data are being analyzed, is one that requires significant
research. Although cross-lagged relationships can lead to
meaningful causal inferences, it is important to recall
that a temporal sequence of data is not by itself necessarily sufficient to infer causality. There may be intervening
constructs or constructs acting on both the hypothesized
cause-and-effect constructs. One is reminded of the research
and counterresearch that appeared in the sales forecasting
literature over the proper time lag to use when predicting
sales responses to marketing efforts (and the demonstrations
that the time lag employed had a significant effect on sales
response parameter estimates).

SUMMARY AND CONCLUSION


Although the use of SEM in marketing has contributed greatly to conceptual, empirical, and methodological advances
in the discipline, its increasingly widespread application
has resulted in some misuses and mistaken assumptions.
Therefore, it is useful to occasionally review common assumptions and best practices. Our review of SEM practices
in marketing encompassed steps that researchers must typically implement in the execution of an SEM study and noted
selected issues that need to be addressed or reconsidered.
We stressed the importance of considering the relationship
between SEM capabilities and assumptions with respect to
the research objectives to be accomplished in a particular study. Specifically, we noted important limitations regarding the use of SEM in light of certain research objectives and
conditions. Although SEM possesses the capability to facilitate the evaluation of complex structures of relationships
and explicitly model measurement error, it is important to
recognize the limitations inherent within this capability.
In this regard, we hope that our review serves as a useful
reminder of best practices when applying SEM.

NOTE
1. The Chamberlin (1965) reference cited here is a reprint of
an article published in 1890 in Science.

REFERENCES
Allison, Paul D. (2003), "Missing Data Techniques for Structural Equation Modeling," Journal of Abnormal Psychology, 112 (November), 545-557.
Anderson, James C., and David W. Gerbing (1988), "Structural Equation Modeling in Practice: A Review and Recommended Two-Step Approach," Psychological Bulletin, 103 (May), 411-423.
Armstrong, J. Scott, Roderick J. Brodie, and Andrew G. Parsons (2001), "Hypotheses in Marketing Science: Literature Review and Publication Audit," Marketing Letters, 12 (May), 171-187.
Bagozzi, Richard P. (1980), Causal Models in Marketing, New York: John Wiley & Sons.
———, and Youjae Yi (1988), "On the Evaluation of Structural Equation Models," Journal of the Academy of Marketing Science, 16 (Spring), 74-94.
Bauer, Daniel J. (2003), "Estimating Multilevel Linear Models as Structural Equation Models," Journal of Educational and Behavioral Statistics, 28 (Summer), 135-167.
Bekker, Paul A., Arien Merckens, and Tom J. Wansbeek (1994), Identification, Equivalent Models and Computer Algebra, Boston: Academic Press.
Blalock, Hubert M., Jr. (1979), "The Presidential Address: Measurement and Conceptualization Problems: The Major Obstacle to Integrating Theory and Research," American Sociological Review, 44 (December), 881-894.
——— (1986), "Multiple Causation, Indirect Measurement and Generalizability in the Social Sciences," Synthese, 68 (July), 13-36.
Bollen, Kenneth A., and J. Scott Long, eds. (1993), Testing Structural Equation Models, Newbury Park, CA: Sage.
———, and Kwok-fai Ting (2000), "A Tetrad Test for Causal Indicators," Psychological Methods, 5 (March), 3-22.
Boomsma, Anne (2000), "Reporting Analyses of Covariance Structures," Structural Equation Modeling, 7 (3), 461-483.
Browne, Michael W., and Steven H.C. DuToit (1992), "Automated Fitting of Nonstandard Models," Multivariate Behavioral Research, 27 (April), 269-300.
Butner, Jonathan, Polemnia G. Amazeen, and Genna M. Mulvey (2005), "Multilevel Modeling of Two Cyclical Processes: Extending Differential Structural Equation Modeling to Nonlinear Coupled Systems," Psychological Methods, 10 (June), 159-177.
Chamberlin, T.C. (1965), "The Method of Multiple Working Hypotheses," Science, 148 (May 7), 754-759.
Cheung, Mike W.-L., and Wai Chan (2005), "Meta-Analytic Structural Equation Modeling: A Two-Stage Approach," Psychological Methods, 10 (March), 40-64.
Chin, Wynne W. (1998a), "Commentary: Issues and Opinion on Structural Equation Modeling," MIS Quarterly, 22 (March), vii-xvi.
——— (1998b), "The Partial Least Squares Approach for Structural Equation Modeling," in Modern Methods for Business Research, George A. Marcoulides, ed., Mahwah, NJ: Lawrence Erlbaum, 295-336.
———, and Peter R. Newsted (1999), "Structural Equation Modeling Analysis with Small Samples Using Partial Least Squares," in Statistical Strategies for Small Sample Research, Rick Hoyle, ed., Thousand Oaks, CA: Sage, 307-341.
Cliff, Norman (1983), "Some Cautions Concerning the Application of Causal Modeling Methods," Multivariate Behavioral Research, 18 (January), 115-126.
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, Hillside, NJ: Lawrence Erlbaum.
Cudeck, Robert (1989), "Analysis of Correlation Matrices Using Covariance Structure Models," Psychological Bulletin, 105 (March), 317-327.
———, and Susan J. Henly (1991), "Model Selection in Covariance Structures Analysis and the Problem of Sample Size: A Clarification," Psychological Bulletin, 109 (May), 512-519.
Curran, Patrick J. (2003), "Have Multilevel Models Been Structural Equation Models All Along?" Multivariate Behavioral Research, 38 (October), 529-569.
———, Stephen G. West, and John F. Finch (1996), "The Robustness of Test Statistics to Nonnormality and Specification Error in Confirmatory Factor Analysis," Psychological Methods, 1 (March), 16-29.
Diamantopoulos, Adamantios, and Heidi M. Winklhofer (2001), "Index Construction with Formative Indicators: An Alternative to Scale Development," Journal of Marketing Research, 38 (May), 269-277.
Edwards, Jeffrey R., and Richard P. Bagozzi (2000), "On the Nature and Direction of Relationships Between Constructs and Measures," Psychological Methods, 5 (June), 155-174.
Fornell, Claes (1983), "Issues in the Application of Covariance Structure Analysis," Journal of Consumer Research, 9 (March), 443-448.
———, and Fred L. Bookstein (1982), "Two Structural Equation Models: LISREL and PLS Applied to Consumer Exit-Voice Theory," Journal of Marketing Research, 19 (November), 440-452.
———, and David F. Larcker (1981), "Evaluating Structural Equation Models with Unobserved Variables and Measurement Error," Journal of Marketing Research, 18 (February), 39-50.
Furlow, Carolyn F., and S. Natasha Beretvas (2005), "Meta-Analytic Methods of Pooling Correlation Matrices for Structural Equation Modeling Under Different Patterns of Missing Data," Psychological Methods, 10 (June), 227-254.
Gerbing, David W., and James C. Anderson (1988), "An Updated Paradigm for Scale Development Incorporating Unidimensionality and Its Assessment," Journal of Marketing Research, 25 (May), 186-192.



Hauser, Robert M. (1972), "Disaggregating a Social-Psychological Model of Educational Attainment," Social Science Research, 1 (June), 159-188.
Hoogland, Jeffrey J., and Anne Boomsma (1998), "Robustness Studies in Covariance Structure Modeling: An Overview and Meta-Analysis," Sociological Methods & Research, 26 (February), 329-367.
Hoyle, Rick H., and Abigail T. Panter (1995), "Writing About Structural Equation Models," in Structural Equation Modeling: Concepts, Issues, and Applications, Rick H. Hoyle, ed., Thousand Oaks, CA: Sage, 158-176.
Hu, Li-tze T., and Peter M. Bentler (1998), "Fit Indices in Covariance Structure Modeling: Sensitivity to Under-Parameterized Model Misspecification," Psychological Methods, 3 (December), 424-453.
———, and ——— (1999), "Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives," Structural Equation Modeling, 6 (1), 1-55.
———, ———, and Yutaka Kano (1992), "Can Test Statistics in Covariance Structure Analysis Be Trusted?" Psychological Bulletin, 112 (September), 351-362.
Jarvis, Cheryl Burke, Scott B. MacKenzie, and Philip M. Podsakoff (2003), "A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research," Journal of Consumer Research, 30 (September), 199-218.
Jöreskog, Karl G., and Arthur S. Goldberger (1975), "Estimation of a Model with Multiple Indicators and Multiple Causes of a Single Latent Variable," Journal of the American Statistical Association, 70 (September), 631-639.
Kaplan, David (1990), "Evaluating and Modifying Covariance Structure Models: A Review and Recommendation (with Discussion)," Multivariate Behavioral Research, 25 (April), 137-155.
——— (1991), "On the Modification and Predictive Validity of Covariance Structure Models," Quality and Quantity, 25 (August), 307-314.
Lee, Sik-Yum, and Xin-Yuan Song (2004a), "Bayesian Model Comparison of Nonlinear Structural Equation Models with Missing Continuous and Ordinal Categorical Data," British Journal of Mathematical and Statistical Psychology, 57 (May), 131-150.
———, and ——— (2004b), "Evaluation of the Bayesian and Maximum Likelihood Approaches in Analyzing Structural Equation Models with Small Sample Sizes," Multivariate Behavioral Research, 39 (October), 653-686.
Linhart, H., and Walter Zucchini (1986), Model Selection, New York: John Wiley & Sons.
MacCallum, Robert C. (1986), "Specification Searches in Covariance Structure Modeling," Psychological Bulletin, 100 (January), 107-120.
——— (2003), "Working with Imperfect Models," Multivariate Behavioral Research, 38 (January), 113-139.
———, and James T. Austin (2000), "Applications of Structural Equation Modeling in Psychological Research," Annual Review of Psychology, 51, 201-226.
———, and Michael W. Browne (1993), "The Use of Causal Indicators in Covariance Structure Models: Some Practical Issues," Psychological Bulletin, 114 (November), 533-541.
———, and Ledyard R. Tucker (1991), "Representing Sources of Error in the Common Factor Model: Implications for Theory and Practice," Psychological Bulletin, 109 (May), 502-511.
———, Mary Roznowski, and Lawrence B. Necowitz (1992), "Model Modifications in Covariance Structure Analysis: The Problem of Capitalization on Chance," Psychological Bulletin, 111 (May), 490-504.
———, Ledyard R. Tucker, and Nancy E. Briggs (2001), "An Alternative Perspective on Parameter Estimation in Factor Analysis and Related Methods," in Structural Equation Modeling: Present and Future, Stephen DuToit, Robert Cudeck, and Dag Sörbom, eds., Lincolnwood, IL: Scientific Software International, 39-57.
———, Duane T. Wegener, Bert N. Uchino, and Leandre R. Fabrigar (1993), "The Problem of Equivalent Models in Applications of Covariance Structure Analysis," Psychological Bulletin, 114 (July), 185-199.
MacKenzie, Scott B., Philip M. Podsakoff, and Cheryl Burke Jarvis (2005), "The Problem of Measurement Model Misspecification in Behavioral and Organizational Research and Some Recommended Solutions," Journal of Applied Psychology, 90 (July), 710-730.
Marsh, Herbert W., Kit-Tai Hau, and Zhonglin Wen (2004), "In Search of Golden Rules: Comment on Hypothesis-Testing Approaches to Setting Cutoff Values for Fit Indexes and Dangers in Overgeneralizing Hu and Bentler's (1999) Findings," Structural Equation Modeling, 11 (3), 320-341.
———, Zhonglin Wen, and Kit-Tai Hau (2004), "Structural Equation Models of Latent Interactions: Evaluation of Alternative Estimation Strategies and Indicator Construction," Psychological Methods, 9 (September), 275-300.
Maydeu-Olivares, Albert, and Ulf Böckenholt (2005), "Structural Equation Modeling of Paired Comparison and Ranking Data," Psychological Methods, 10 (September), 285-304.
McDonald, Roderick P., and Ringo Ho Moon-Ho (2002), "Principles and Practice in Reporting Structural Equation Analyses," Psychological Methods, 7 (March), 64-82.
McQuitty, Shaun (2004), "Statistical Power and Structural Equation Models in Business Research," Journal of Business Research, 57 (February), 175-183.
Meehl, Paul E. (1990), "Appraising and Amending Theories: The Strategy of Lakatosian Defense and Two Principles That Warrant It," Psychological Inquiry, 1 (2), 108-141.
Mehta, Paras D., and Michael C. Neale (2005), "People Are Variables Too: Multilevel Structural Equations Modeling," Psychological Methods, 10 (September), 259-284.
Muthén, Bengt (1989), "Multiple Group Structural Equation Modeling with Non-Normal Continuous Variables," British Journal of Mathematical and Statistical Psychology, 42 (May), 55-62.
———, and David Kaplan (1992), "A Comparison of Some Methodologies for the Factor Analysis of Non-Normal Likert Variables: A Note on the Size of the Model," British Journal of Mathematical and Statistical Psychology, 45 (May), 19-30.
Nevitt, Jonathan, and Gregory R. Hancock (2001), "Performance of Bootstrapping Approaches to Model Test Statistics and Parameter Standard Error Estimation in Structural Equation Modeling," Structural Equation Modeling, 8 (3), 353-377.
———, and ——— (2004), "Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling," Multivariate Behavioral Research, 39 (July), 439-478.
Pearl, Judea (2000), Causality: Models, Reasoning and Inference, Cambridge: Cambridge University Press.


Pedhazur, Elazar J., and Liora Pedhazur Schmelkin (1991), Measurement, Design, and Analysis: An Integrated Approach, Hillsdale, NJ: Lawrence Erlbaum.
Peterson, Robert A. (2005), "Response Construction in Consumer Behavior Research," Journal of Business Research, 58 (March), 348-353.
Platt, John R. (1964), "Strong Inference," Science, 146 (October 16), 347-353.
Podsakoff, Philip M., Scott B. MacKenzie, Jeong-Yeon Lee, and Nathan P. Podsakoff (2003), "Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies," Journal of Applied Psychology, 88 (September), 879-903.
Quintana, Stephen M., and Scott E. Maxwell (1999), "Implications of Recent Developments in Structural Equation Modeling for Counseling Psychology," Counseling Psychologist, 27 (July), 485-527.
Rozeboom, William W. (2005), "Meehl on Metatheory," Journal of Clinical Psychology, 61 (October), 1317-1354.
Saris, Willem E., Albert Satorra, and Dag Sörbom (1987), "The Detection and Correction of Specification Errors in Structural Equation Models," in Sociological Methodology, Clifford C. Clogg, ed., Washington, DC: American Sociological Association, 105-129.
Satorra, Albert, and Peter M. Bentler (1994), "Corrections to Test Statistics and Standard Errors in Covariance Structure Analysis," in Latent Variables Analysis: Applications for Developmental Research, Alexander von Eye and Clifford C. Clogg, eds., Thousand Oaks, CA: Sage, 399-419.
Schneeweiss, Hans (1990), "Models with Latent Variables: LISREL Versus PLS," Contemporary Mathematics, 112 (1), 33-40.
Shah, Rachna, and Susan Meyer Goldstein (2006), "Use of Structural Equation Modeling in Operations Management Research: Looking Back and Forward," Journal of Operations Management, 24 (January), 148-169.
Steiger, James H. (1988), "Aspects of Person–Machine Communication in Structural Modeling of Correlations and Covariances," Multivariate Behavioral Research, 23 (April), 281-290.
——— (2001), "Driving Fast in Reverse: The Relationship Between Software Development, Theory, and Education in Structural Equation Modeling," Journal of the American Statistical Association, 96 (March), 331-338.
———, Alexander Shapiro, and Michael W. Browne (1985), "On the Multivariate Asymptotic Distribution of Sequential Chi-Square Statistics," Psychometrika, 50 (September), 253-264.
Tomarken, Andrew J., and Niels G. Waller (2005), "Structural Equation Modeling: Strengths, Limitations, and Misconceptions," Annual Review of Clinical Psychology, 1, 31-65.
West, Stephen G., John F. Finch, and Patrick J. Curran (1995), "Structural Equation Modeling with Nonnormal Variables: Problems and Remedies," in Structural Equation Modeling: Concepts, Issues, and Applications, Rick H. Hoyle, ed., Thousand Oaks, CA: Sage, 56-75.
Wold, Herman O.A. (1982), "Soft Modeling: The Basic Design and Some Extensions," in Systems Under Indirect Observations: Causality, Structure, Prediction, part 2, Karl G. Jöreskog and Herman O.A. Wold, eds., Amsterdam: North-Holland, 1-54.
