where $SS_A$ is the sum of squares for the row factor. Recall that the model components $\tau_i$, $\beta_j$, and $(\tau\beta)_{ij}$ are normally and independently distributed with mean zero and variances $\sigma_\tau^2$, $\sigma_\beta^2$, and $\sigma_{\tau\beta}^2$, respectively. The sum of squares and its expectation are defined as

$$SS_A = \frac{1}{bn}\sum_{i=1}^{a} y_{i..}^2 - \frac{y_{...}^2}{abn}$$

$$E(SS_A) = \frac{1}{bn}E\left(\sum_{i=1}^{a} y_{i..}^2\right) - E\left(\frac{y_{...}^2}{abn}\right)$$
Now
$$y_{i..} = \sum_{j=1}^{b}\sum_{k=1}^{n} y_{ijk} = bn\mu + bn\tau_i + n\beta_. + n(\tau\beta)_{i.} + \varepsilon_{i..}$$
and
$$\frac{1}{bn}E\left(\sum_{i=1}^{a} y_{i..}^2\right) = \frac{1}{bn}E\sum_{i=1}^{a}\left[(bn\mu)^2 + (bn)^2\tau_i^2 + n^2\beta_.^2 + n^2(\tau\beta)_{i.}^2 + \varepsilon_{i..}^2 + \text{cross products}\right]$$

Because the random effects have mean zero and are mutually independent, every cross product has expectation zero, and

$$\begin{aligned}
\frac{1}{bn}E\left(\sum_{i=1}^{a} y_{i..}^2\right) &= \frac{1}{bn}\left[a(bn\mu)^2 + a(bn)^2\sigma_\tau^2 + abn^2\sigma_\beta^2 + abn^2\sigma_{\tau\beta}^2 + abn\sigma^2\right] \\
&= abn\mu^2 + abn\sigma_\tau^2 + an\sigma_\beta^2 + an\sigma_{\tau\beta}^2 + a\sigma^2
\end{aligned}$$
We can now collect the components of the expected value of the sum of squares for
factor A and find the expected mean square as follows:
$$E(MS_A) = E\left(\frac{SS_A}{a-1}\right) = \frac{1}{a-1}\left[\frac{1}{bn}E\left(\sum_{i=1}^{a} y_{i..}^2\right) - E\left(\frac{y_{...}^2}{abn}\right)\right]$$

A similar calculation, starting from $y_{...} = abn\mu + bn\tau_. + an\beta_. + n(\tau\beta)_{..} + \varepsilon_{...}$, gives $E\left(y_{...}^2/(abn)\right) = abn\mu^2 + bn\sigma_\tau^2 + an\sigma_\beta^2 + n\sigma_{\tau\beta}^2 + \sigma^2$. Therefore

$$\begin{aligned}
E(MS_A) &= \frac{1}{a-1}\left[\sigma^2(a-1) + n(a-1)\sigma_{\tau\beta}^2 + bn(a-1)\sigma_\tau^2\right] \\
&= \sigma^2 + n\sigma_{\tau\beta}^2 + bn\sigma_\tau^2
\end{aligned}$$
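As a numerical sanity check on this result (an addition here, not part of the original text), the short simulation below draws data from the two-factor random model with all effects normal and independent, and compares the long-run average of $MS_A$ with $\sigma^2 + n\sigma_{\tau\beta}^2 + bn\sigma_\tau^2$. The factor sizes and variance components are arbitrary illustration values.

```python
# Monte Carlo sketch: average MS_A over many simulated experiments from the
# two-factor random model and compare with sigma^2 + n*sigma_tb2 + b*n*sigma_t2.
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 4, 3, 2                                   # levels of A and B, replicates
s_t2, s_b2, s_tb2, s_e2 = 4.0, 2.0, 1.0, 1.0        # variance components (illustrative)
mu, reps = 10.0, 20000

ms_a = np.empty(reps)
for r in range(reps):
    tau  = rng.normal(0.0, np.sqrt(s_t2), a)
    beta = rng.normal(0.0, np.sqrt(s_b2), b)
    tb   = rng.normal(0.0, np.sqrt(s_tb2), (a, b))
    eps  = rng.normal(0.0, np.sqrt(s_e2), (a, b, n))
    y = mu + tau[:, None, None] + beta[None, :, None] + tb[:, :, None] + eps
    yi_dd = y.sum(axis=(1, 2))                      # y_{i..}
    y_ddd = y.sum()                                 # y_{...}
    ss_a = (yi_dd ** 2).sum() / (b * n) - y_ddd ** 2 / (a * b * n)
    ms_a[r] = ss_a / (a - 1)

print("simulated E(MS_A):", ms_a.mean())
print("theoretical value:", s_e2 + n * s_tb2 + b * n * s_t2)
```

With these settings the theoretical value is $1 + 2(1) + 6(4) = 27$, and the simulated average should agree with it to within simulation error.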
We will find the expected mean square for the random factor, B. Now
$$E(MS_B) = E\left(\frac{SS_B}{b-1}\right) = \frac{1}{b-1}E(SS_B)$$
and
$$E(SS_B) = \frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) - \frac{1}{abn}E(y_{...}^2)$$
Using the restrictions on the model parameters, we can easily show that
$$y_{.j.} = an\mu + an\beta_j + \varepsilon_{.j.}$$
and
$$\frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) = \frac{1}{an}\left[b(an\mu)^2 + b(an)^2\sigma_\beta^2 + abn\sigma^2\right] = abn\mu^2 + abn\sigma_\beta^2 + b\sigma^2$$
Since
$$y_{...} = abn\mu + an\beta_. + \varepsilon_{...}$$
we can easily show that
$$\frac{1}{abn}E(y_{...}^2) = \frac{1}{abn}\left[(abn\mu)^2 + b(an)^2\sigma_\beta^2 + abn\sigma^2\right] = abn\mu^2 + an\sigma_\beta^2 + \sigma^2$$
Therefore the expected value of the mean square for the random effect is
$$\begin{aligned}
E(MS_B) &= \frac{1}{b-1}E(SS_B) \\
&= \frac{1}{b-1}\left(abn\mu^2 + abn\sigma_\beta^2 + b\sigma^2 - abn\mu^2 - an\sigma_\beta^2 - \sigma^2\right) \\
&= \frac{1}{b-1}\left[\sigma^2(b-1) + an(b-1)\sigma_\beta^2\right] \\
&= \sigma^2 + an\sigma_\beta^2
\end{aligned}$$
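One immediate consequence worth recording (an addition here, not part of the original derivation, but standard): because $E(MS_E) = \sigma^2$, equating observed mean squares to their expectations gives the ANOVA (method-of-moments) estimator of the variance component for the random factor under the restricted model,

$$\hat\sigma_\beta^2 = \frac{MS_B - MS_E}{an}$$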
In the unrestricted form of the mixed model, the fixed factor effects are denoted $\alpha_i$ (with $\alpha_. = \sum_i \alpha_i = 0$), the random factor effects $\gamma_j$ and the interaction effects $(\alpha\gamma)_{ij}$ are normally and independently distributed with mean zero and variances $\sigma_\gamma^2$ and $\sigma_{\alpha\gamma}^2$, respectively, and all random effects are uncorrelated random variables. Notice that there is no assumption concerning the interaction effects summed over the levels of the fixed factor, as is customarily made for the restricted model. Recall that the restricted model is actually a more general model than the unrestricted model, but some modern computer programs give the user a choice of models (and some computer programs only use the unrestricted model), so there is increasing interest in both versions of the mixed model.
We will derive the expected value of the mean square for the random factor, B, given in Equation (13-26), since it differs from the corresponding expected mean square in the restricted model case. As we will see, the assumptions regarding the interaction effects account for the difference between the two expected mean squares.
The expected mean square for the random factor, B, is defined as
$$E(MS_B) = E\left(\frac{SS_B}{b-1}\right) = \frac{1}{b-1}E(SS_B)$$

where, as before,

$$E(SS_B) = \frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) - \frac{1}{abn}E(y_{...}^2)$$

In the unrestricted model the totals for the levels of B are

$$y_{.j.} = an\mu + n\alpha_. + an\gamma_j + n(\alpha\gamma)_{.j} + \varepsilon_{.j.} = an\mu + an\gamma_j + n(\alpha\gamma)_{.j} + \varepsilon_{.j.}$$

because $\alpha_. = 0$. Notice, however, that the interaction term in this expression is not zero
as it would be in the case of the restricted model. Now the expected value of the first part
of the expression for E(SSB) is
$$\frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) = \frac{1}{an}\left[b(an\mu)^2 + b(an)^2\sigma_\gamma^2 + abn^2\sigma_{\alpha\gamma}^2 + abn\sigma^2\right] = abn\mu^2 + abn\sigma_\gamma^2 + bn\sigma_{\alpha\gamma}^2 + b\sigma^2$$
Now we can show that
$$y_{...} = abn\mu + bn\alpha_. + an\gamma_. + n(\alpha\gamma)_{..} + \varepsilon_{...} = abn\mu + an\gamma_. + n(\alpha\gamma)_{..} + \varepsilon_{...}$$
Therefore
$$\frac{1}{abn}E(y_{...}^2) = \frac{1}{abn}\left[(abn\mu)^2 + b(an)^2\sigma_\gamma^2 + abn^2\sigma_{\alpha\gamma}^2 + abn\sigma^2\right] = abn\mu^2 + an\sigma_\gamma^2 + n\sigma_{\alpha\gamma}^2 + \sigma^2$$
We may now assemble the components of the expected value of the sum of squares for
factor B and find the expected value of MSB as follows:
$$\begin{aligned}
E(MS_B) &= \frac{1}{b-1}E(SS_B) \\
&= \frac{1}{b-1}\left[\frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) - \frac{1}{abn}E(y_{...}^2)\right] \\
&= \frac{1}{b-1}\left[abn\mu^2 + abn\sigma_\gamma^2 + bn\sigma_{\alpha\gamma}^2 + b\sigma^2 - \left(abn\mu^2 + an\sigma_\gamma^2 + n\sigma_{\alpha\gamma}^2 + \sigma^2\right)\right] \\
&= \frac{1}{b-1}\left[\sigma^2(b-1) + n(b-1)\sigma_{\alpha\gamma}^2 + an(b-1)\sigma_\gamma^2\right] \\
&= \sigma^2 + n\sigma_{\alpha\gamma}^2 + an\sigma_\gamma^2
\end{aligned}$$
This last expression is in agreement with the result given in Equation (13-26).
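Placing the two results side by side (the restricted-model result derived earlier and the unrestricted-model result just obtained) shows exactly where the interaction assumption enters:

$$E(MS_B)_{\text{restricted}} = \sigma^2 + an\sigma_\beta^2, \qquad E(MS_B)_{\text{unrestricted}} = \sigma^2 + n\sigma_{\alpha\gamma}^2 + an\sigma_\gamma^2$$

The extra $n\sigma_{\alpha\gamma}^2$ term is why, in the unrestricted model, the interaction mean square rather than the error mean square is the appropriate denominator for testing the random factor.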
Deriving expected mean squares by the direct application of the expectation operator (the
“brute force” method) is tedious, and the rules given in the text are a great labor-saving
convenience. There are other rules and techniques for deriving expected mean squares,
including algorithms that will work for unbalanced designs. See Milliken and Johnson
(1984) for a good discussion of some of these procedures.
Here $\theta_i$ is the linear combination of variance components estimated by the $i$th mean square, and $f_i$ is the number of degrees of freedom for $MS_i$. Consequently, the $100(1-\alpha)$ percent large-sample two-sided confidence interval for $\sigma_0^2$ is

$$\hat\sigma_0^2 - z_{\alpha/2}\sqrt{\hat V(\hat\sigma_0^2)} \leq \sigma_0^2 \leq \hat\sigma_0^2 + z_{\alpha/2}\sqrt{\hat V(\hat\sigma_0^2)}$$

Operationally, we would replace $\theta_i$ by $MS_i$ when actually computing the confidence interval. This is the same basis used for construction of the confidence intervals by SAS PROC MIXED that we presented in Section 13-7.3 (refer to the discussion of Tables 13-17 and 13-18 in the textbook).
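A minimal computational sketch of this interval is given below. It assumes the standard large-sample variance estimate $\hat V(\hat\sigma_0^2) = 2\sum_i c_i^2 MS_i^2/f_i$, obtained from $\mathrm{Var}(MS_i) = 2\theta_i^2/f_i$ with $\theta_i$ replaced by $MS_i$ as described above; the function name and the numbers in the example call are hypothetical.

```python
# Sketch: large-sample two-sided CI for sigma_0^2 = sum_i c_i * theta_i,
# estimated by sum_i c_i * MS_i, using the normal approximation.
from statistics import NormalDist
import numpy as np

def large_sample_ci(c, ms, f, alpha=0.05):
    """c: coefficients, ms: mean squares, f: degrees of freedom (array-likes)."""
    c, ms, f = (np.asarray(x, dtype=float) for x in (c, ms, f))
    est = float(np.sum(c * ms))                  # point estimate of sigma_0^2
    var = float(2.0 * np.sum(c**2 * ms**2 / f))  # large-sample variance estimate
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    half = z * np.sqrt(var)
    return est - half, est + half

# hypothetical example: sigma_beta^2 estimated by (MS_B - MS_E)/(a*n) with a = 3, n = 2
a, n = 3, 2
print(large_sample_ci(c=[1/(a*n), -1/(a*n)], ms=[82.5, 6.1], f=[4, 24]))
```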
These large-sample intervals work well when the degrees of freedom are large, but when the $f_i$ are small they may be unreliable. However, the performance may be
improved by applying suitable modifications to the procedure. Welch (1956) suggested a
modification to the large-sample method that resulted in some improvement, but Graybill
and Wang (1980) proposed a technique that makes the confidence interval exact for
certain special cases. It turns out that it is also a very good approximate procedure for the
cases where it is not an exact confidence interval. Their result is given in the textbook as
Equation (13-42).
$$\frac{\hat\sigma_1^2}{\hat\sigma_2^2} = \frac{\displaystyle\sum_{i=1}^{P} c_i MS_i}{\displaystyle\sum_{j=P+1}^{Q} c_j MS_j}$$

$$L = \frac{\hat\sigma_1^2}{\hat\sigma_2^2}\left[\frac{2 + k_4/(k_1 k_2) - \sqrt{V_L}}{2\left(1 - k_5/k_2^2\right)}\right]$$

where

$$V_L = \left[2 + k_4/(k_1 k_2)\right]^2 - 4\left(1 - k_5/k_2^2\right)\left(1 - k_3/k_1^2\right)$$

$$k_1 = \sum_{i=1}^{P} c_i MS_i, \qquad k_2 = \sum_{j=P+1}^{Q} c_j MS_j$$

$$k_3 = \sum_{i=1}^{P} G_i^2 c_i^2 MS_i^2 + \sum_{i=1}^{P-1}\sum_{t>i}^{P} G_{it}^{*} c_i c_t MS_i MS_t$$

$$k_4 = \sum_{i=1}^{P}\sum_{j=P+1}^{Q} G_{ij} c_i c_j MS_i MS_j$$

$$k_5 = \sum_{j=P+1}^{Q} H_j^2 c_j^2 MS_j^2$$

and $G_i$, $H_j$, $G_{ij}$, and $G_{it}^{*}$ are as previously defined. For more details, see the book by Burdick and Graybill (1992).
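The bookkeeping in these expressions is easy to get wrong by hand, so the sketch below codes $k_1$ through $k_5$, $V_L$, and $L$ directly from the formulas above. The function name and argument layout are my own; the constants $G_i$, $H_j$, $G_{ij}$, and $G_{it}^{*}$ are supplied by the user exactly as defined in Burdick and Graybill (1992), not computed here.

```python
# Sketch: modified large-sample (Graybill-Wang) lower bound L on the ratio of
# two nonnegative linear combinations of expected mean squares.
import numpy as np

def mls_lower_bound(c_num, ms_num, c_den, ms_den, G, H, Gij, Gstar):
    """c_num, ms_num: the P coefficients and mean squares in the numerator;
    c_den, ms_den: the Q-P coefficients and mean squares in the denominator;
    G (length P), H (length Q-P), Gij (P x (Q-P)), Gstar (P x P, upper
    triangle used) are the constants 'previously defined' in the text."""
    c1, m1 = np.asarray(c_num, float), np.asarray(ms_num, float)
    c2, m2 = np.asarray(c_den, float), np.asarray(ms_den, float)
    G, H = np.asarray(G, float), np.asarray(H, float)
    Gij, Gstar = np.asarray(Gij, float), np.asarray(Gstar, float)

    k1 = np.sum(c1 * m1)                         # numerator point estimate
    k2 = np.sum(c2 * m2)                         # denominator point estimate

    k3 = np.sum(G**2 * c1**2 * m1**2)
    P = len(c1)
    for i in range(P - 1):                       # cross terms G*_{it}, t > i
        for t in range(i + 1, P):
            k3 += Gstar[i, t] * c1[i] * c1[t] * m1[i] * m1[t]

    k4 = np.sum(Gij * np.outer(c1 * m1, c2 * m2))
    k5 = np.sum(H**2 * c2**2 * m2**2)

    B = 2.0 + k4 / (k1 * k2)
    VL = B**2 - 4.0 * (1.0 - k5 / k2**2) * (1.0 - k3 / k1**2)
    return (k1 / k2) * (B - np.sqrt(VL)) / (2.0 * (1.0 - k5 / k2**2))
```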
Supplemental References
Graybill, F. A. and C. M. Wang (1980). “Confidence Intervals on Nonnegative Linear Combinations of Variances”. Journal of the American Statistical Association, Vol. 75, pp. 869-873.
Welch, B. L. (1956). “On Linear Combinations of Several Variances”. Journal of the
American Statistical Association, Vol. 51, pp. 132-148.