
Solutions to selected problems in

Brockwell and Davis


Anna Carlsund

Henrik Hult

Spring 2003

This document contains solutions to selected problems in


Peter J. Brockwell and Richard A. Davis, Introduction to Time Series and Forecasting, 2nd Edition, Springer New York, 2002.
We provide solutions to most of the problems in the book that are not computer
exercises. That is, you will not need a computer to solve these problems. We encourage students to come up with suggestions to improve the solutions and to report
any misprints that may be found.

Contents

Chapter 1: 1.1, 1.4, 1.5, 1.8, 1.11, 1.15
Chapter 2: 2.1, 2.4, 2.8, 2.11, 2.15
Chapter 3: 3.1, 3.4, 3.6, 3.7, 3.11
Chapter 4: 4.4, 4.5, 4.6, 4.9, 4.10
Chapter 5: 5.1, 5.3, 5.4, 5.11
Chapter 6: 6.5, 6.6
Chapter 7: 7.1, 7.3, 7.5
Chapter 8: 8.7, 8.9, 8.13, 8.14, 8.15
Chapter 10: 10.5

Notation: We will use the following notation.

The indicator function

    1_A(h) = 1 if h ∈ A,   1_A(h) = 0 if h ∉ A.

Dirac's delta function

    δ(t) = +∞ if t = 0,   δ(t) = 0 if t ≠ 0,

and

    ∫ f(t)δ(t) dt = f(0).

Chapter 1
Problem 1.1. a) First note that

E[(Y − c)²] = E[Y² − 2Yc + c²] = E[Y²] − 2cE[Y] + c² = E[Y²] − 2cμ + c².

Find the extreme point by differentiating,

d/dc (E[Y²] − 2cμ + c²) = −2μ + 2c = 0  ⟺  c = μ.

Since d²/dc² (E[Y²] − 2cμ + c²) = 2 > 0 this is a minimum point.
b) We have

E[(Y − f(X))² | X] = E[Y² − 2Y f(X) + f²(X) | X] = E[Y² | X] − 2f(X)E[Y | X] + f²(X),

which is minimized by f(X) = E[Y | X] (take c = f(X) and μ = E[Y | X] in a).
c) We have

E[(Y − f(X))²] = E[ E[(Y − f(X))² | X] ],

so the result follows from b).
Problem 1.4. a) For the mean we have

μ_X(t) = E[a + bZ_t + cZ_{t−2}] = a,

and for the autocovariance

γ_X(t + h, t) = Cov(X_{t+h}, X_t) = Cov(a + bZ_{t+h} + cZ_{t+h−2}, a + bZ_t + cZ_{t−2})
  = b² Cov(Z_{t+h}, Z_t) + bc Cov(Z_{t+h}, Z_{t−2}) + cb Cov(Z_{t+h−2}, Z_t) + c² Cov(Z_{t+h−2}, Z_{t−2})
  = σ²b² 1_{0}(h) + σ²bc 1_{2}(h) + σ²cb 1_{−2}(h) + σ²c² 1_{0}(h)
  = { (b² + c²)σ² if h = 0;  bcσ² if |h| = 2;  0 otherwise }.

Since μ_X(t) and γ_X(t + h, t) do not depend on t, {X_t : t ∈ Z} is (weakly) stationary.
b) For the mean we have

μ_X(t) = E[Z_1] cos(ct) + E[Z_2] sin(ct) = 0,

and for the autocovariance

γ_X(t + h, t) = Cov(X_{t+h}, X_t)
  = Cov(Z_1 cos(c(t + h)) + Z_2 sin(c(t + h)), Z_1 cos(ct) + Z_2 sin(ct))
  = cos(c(t + h)) cos(ct) Cov(Z_1, Z_1) + cos(c(t + h)) sin(ct) Cov(Z_1, Z_2)
    + sin(c(t + h)) cos(ct) Cov(Z_2, Z_1) + sin(c(t + h)) sin(ct) Cov(Z_2, Z_2)
  = σ² ( cos(c(t + h)) cos(ct) + sin(c(t + h)) sin(ct) )
  = σ² cos(ch),

where the last equality follows since cos(α − β) = cos α cos β + sin α sin β. Since μ_X(t) and γ_X(t + h, t) do not depend on t, {X_t : t ∈ Z} is (weakly) stationary.
c) For the mean we have

μ_X(t) = E[Z_t] cos(ct) + E[Z_{t−1}] sin(ct) = 0,

and for the autocovariance

γ_X(t + h, t) = Cov(X_{t+h}, X_t)
  = Cov(Z_{t+h} cos(c(t + h)) + Z_{t+h−1} sin(c(t + h)), Z_t cos(ct) + Z_{t−1} sin(ct))
  = cos(c(t + h)) cos(ct) Cov(Z_{t+h}, Z_t) + cos(c(t + h)) sin(ct) Cov(Z_{t+h}, Z_{t−1})
    + sin(c(t + h)) cos(ct) Cov(Z_{t+h−1}, Z_t) + sin(c(t + h)) sin(ct) Cov(Z_{t+h−1}, Z_{t−1})
  = σ² cos²(ct) 1_{0}(h) + σ² cos(c(t − 1)) sin(ct) 1_{−1}(h)
    + σ² sin(c(t + 1)) cos(ct) 1_{1}(h) + σ² sin²(ct) 1_{0}(h)
  = { σ² cos²(ct) + σ² sin²(ct) = σ² if h = 0;
      σ² cos(c(t − 1)) sin(ct) if h = −1;
      σ² cos(ct) sin(c(t + 1)) if h = 1;
      0 otherwise }.

We have that {X_t : t ∈ Z} is (weakly) stationary for c = kπ, k ∈ Z, since then γ_X(t + h, t) = σ² 1_{0}(h). For c ≠ kπ, k ∈ Z, {X_t : t ∈ Z} is not (weakly) stationary since γ_X(t + h, t) depends on t.
d) For the mean we have

μ_X(t) = E[a + bZ_0] = a,

and for the autocovariance

γ_X(t + h, t) = Cov(X_{t+h}, X_t) = Cov(a + bZ_0, a + bZ_0) = b² Cov(Z_0, Z_0) = σ²b².

Since μ_X(t) and γ_X(t + h, t) do not depend on t, {X_t : t ∈ Z} is (weakly) stationary.
e) If c = kπ, k ∈ Z, then X_t = (−1)^{kt} Z_0, which implies that X_t is weakly stationary when c = kπ. For c ≠ kπ we have

μ_X(t) = E[Z_0] cos(ct) = 0,

and for the autocovariance

γ_X(t + h, t) = Cov(X_{t+h}, X_t) = Cov(Z_0 cos(c(t + h)), Z_0 cos(ct)) = cos(c(t + h)) cos(ct) σ².

The process {X_t : t ∈ Z} is (weakly) stationary when c = kπ, k ∈ Z, and not (weakly) stationary when c ≠ kπ, k ∈ Z; cf. 1.4 c).
f) For the mean we have

μ_X(t) = E[Z_t Z_{t−1}] = 0,

and

γ_X(t + h, t) = Cov(X_{t+h}, X_t) = Cov(Z_{t+h} Z_{t+h−1}, Z_t Z_{t−1})
  = { E[Z_{t+h} Z_{t+h−1} Z_t Z_{t−1}] = σ⁴ if h = 0;  0 otherwise }.

Since μ_X(t) and γ_X(t + h, t) do not depend on t, {X_t : t ∈ Z} is (weakly) stationary.
Problem 1.5. a) We have

γ_X(t + h, t) = Cov(X_{t+h}, X_t) = Cov(Z_{t+h} + θZ_{t+h−2}, Z_t + θZ_{t−2})
  = Cov(Z_{t+h}, Z_t) + θ Cov(Z_{t+h}, Z_{t−2}) + θ Cov(Z_{t+h−2}, Z_t) + θ² Cov(Z_{t+h−2}, Z_{t−2})
  = 1_{0}(h) + θ 1_{2}(h) + θ 1_{−2}(h) + θ² 1_{0}(h)
  = { 1 + θ² if h = 0;  θ if |h| = 2;  0 otherwise }
  = { 1.64 if h = 0;  0.8 if |h| = 2;  0 otherwise }.

Hence the ACVF depends only on h and we write γ_X(h) = γ_X(t + h, t). The ACF is then

ρ_X(h) = γ_X(h)/γ_X(0) = { 1 if h = 0;  0.8/1.64 ≈ 0.49 if |h| = 2;  0 otherwise }.
b) We have

Var( (1/4)(X_1 + X_2 + X_3 + X_4) ) = (1/16) Var(X_1 + X_2 + X_3 + X_4)
  = (1/16) ( Var(X_1) + Var(X_2) + Var(X_3) + Var(X_4) + 2 Cov(X_1, X_3) + 2 Cov(X_2, X_4) )
  = (1/16) ( 4γ_X(0) + 4γ_X(2) ) = (1/4)( γ_X(0) + γ_X(2) ) = (1.64 + 0.8)/4 = 0.61.
c) θ = −0.8 implies γ_X(h) = −0.8 for |h| = 2, so

Var( (1/4)(X_1 + X_2 + X_3 + X_4) ) = (1.64 − 0.8)/4 = 0.21.

Because of the negative covariance at lag 2 the variance in c) is considerably smaller.
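As an optional numerical sanity check (not part of the pen-and-paper solution), one can simulate the process and compare the empirical variance of the four-term sample mean with the values 0.61 and 0.21 computed above; the sketch below assumes Gaussian white noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def var_of_mean4(theta, n_sims=200_000):
        # Simulate X_t = Z_t + theta * Z_{t-2}, {Z_t} ~ WN(0, 1), and return the
        # empirical variance of (X_1 + X_2 + X_3 + X_4) / 4.
        Z = rng.standard_normal((n_sims, 6))      # columns: Z_{-1}, Z_0, ..., Z_4
        X = Z[:, 2:] + theta * Z[:, :-2]          # columns: X_1, ..., X_4
        return X.mean(axis=1).var()

    print(var_of_mean4(0.8))    # close to 0.61
    print(var_of_mean4(-0.8))   # close to 0.21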
Problem 1.8. a) First we show that {X_t : t ∈ Z} is WN(0, 1). For t even we have E[X_t] = E[Z_t] = 0 and for t odd

E[X_t] = E[ (Z_{t−1}² − 1)/√2 ] = (1/√2) E[Z_{t−1}² − 1] = 0.

Next we compute the ACVF. If t is even we have γ_X(t, t) = E[Z_t²] = 1 and if t is odd

γ_X(t, t) = E[ ( (Z_{t−1}² − 1)/√2 )² ] = (1/2) E[Z_{t−1}⁴ − 2Z_{t−1}² + 1] = (1/2)(3 − 2 + 1) = 1.

If t is even we have

γ_X(t + 1, t) = E[ ( (Z_t² − 1)/√2 ) Z_t ] = (1/√2) E[Z_t³ − Z_t] = 0,

and if t is odd

γ_X(t + 1, t) = E[ Z_{t+1} (Z_{t−1}² − 1)/√2 ] = E[Z_{t+1}] E[ (Z_{t−1}² − 1)/√2 ] = 0.

Clearly γ_X(t + h, t) = 0 for |h| ≥ 2. Hence

γ_X(t + h, t) = { 1 if h = 0;  0 otherwise }.

Thus {X_t : t ∈ Z} is WN(0, 1). If t is odd, X_t and X_{t−1} are obviously dependent, so {X_t : t ∈ Z} is not IID(0, 1).
b) If n is odd,

E[X_{n+1} | X_1, ..., X_n] = E[Z_{n+1} | Z_0, Z_2, Z_4, ..., Z_{n−1}] = E[Z_{n+1}] = 0.

If n is even,

E[X_{n+1} | X_1, ..., X_n] = E[ (Z_n² − 1)/√2 | Z_0, Z_2, Z_4, ..., Z_n ] = (Z_n² − 1)/√2 = (X_n² − 1)/√2.

This again shows that {X_t : t ∈ Z} is not IID(0, 1).

Problem 1.11. a) Since a_j = (2q + 1)^{−1}, −q ≤ j ≤ q, we have

Σ_{j=−q}^{q} a_j m_{t−j} = (1/(2q + 1)) Σ_{j=−q}^{q} ( c_0 + c_1(t − j) )
  = (1/(2q + 1)) ( c_0(2q + 1) + c_1 Σ_{j=−q}^{q} (t − j) )
  = c_0 + c_1 t (2q + 1)/(2q + 1) − (c_1/(2q + 1)) Σ_{j=−q}^{q} j
  = c_0 + c_1 t − (c_1/(2q + 1)) ( Σ_{j=1}^{q} j − Σ_{j=1}^{q} j )
  = c_0 + c_1 t = m_t.

b) We have

E[A_t] = E[ Σ_{j=−q}^{q} a_j Z_{t−j} ] = Σ_{j=−q}^{q} a_j E[Z_{t−j}] = 0

and

Var(A_t) = Var( Σ_{j=−q}^{q} a_j Z_{t−j} ) = Σ_{j=−q}^{q} a_j² Var(Z_{t−j}) = Σ_{j=−q}^{q} σ²/(2q + 1)² = σ²/(2q + 1).

We see that the variance Var(A_t) is small for large q. Hence, the process A_t will be close to its mean (which is zero) for large q.
Problem 1.15. a) Put

Z_t = ∇∇_{12}X_t = (1 − B)(1 − B^{12})X_t = (1 − B)(X_t − X_{t−12}) = X_t − X_{t−12} − X_{t−1} + X_{t−13}
  = a + bt + s_t + Y_t − a − b(t − 12) − s_{t−12} − Y_{t−12} − a − b(t − 1) − s_{t−1} − Y_{t−1}
    + a + b(t − 13) + s_{t−13} + Y_{t−13}
  = Y_t − Y_{t−1} − Y_{t−12} + Y_{t−13},

using that s_t has period 12. We have μ_Z(t) = E[Z_t] = 0 and, expanding the covariance term by term,

γ_Z(t + h, t) = Cov(Z_{t+h}, Z_t)
  = Cov(Y_{t+h} − Y_{t+h−1} − Y_{t+h−12} + Y_{t+h−13}, Y_t − Y_{t−1} − Y_{t−12} + Y_{t−13})
  = 4γ_Y(h) − 2γ_Y(h + 1) − 2γ_Y(h − 1) + γ_Y(h + 11) + γ_Y(h − 11)
    − 2γ_Y(h + 12) − 2γ_Y(h − 12) + γ_Y(h + 13) + γ_Y(h − 13).

Since μ_Z(t) and γ_Z(t + h, t) do not depend on t, {Z_t : t ∈ Z} is (weakly) stationary.
b) We have X_t = (a + bt)s_t + Y_t. Hence,

Z_t = ∇²_{12}X_t = (1 − B^{12})(1 − B^{12})X_t = (1 − B^{12})(X_t − X_{t−12})
  = X_t − X_{t−12} − X_{t−12} + X_{t−24} = X_t − 2X_{t−12} + X_{t−24}
  = (a + bt)s_t + Y_t − 2( (a + b(t − 12))s_{t−12} + Y_{t−12} ) + (a + b(t − 24))s_{t−24} + Y_{t−24}
  = a(s_t − 2s_{t−12} + s_{t−24}) + b( ts_t − 2(t − 12)s_{t−12} + (t − 24)s_{t−24} ) + Y_t − 2Y_{t−12} + Y_{t−24}
  = Y_t − 2Y_{t−12} + Y_{t−24},

since s_t has period 12. Now we have μ_Z(t) = E[Z_t] = 0 and

γ_Z(t + h, t) = Cov(Z_{t+h}, Z_t) = Cov(Y_{t+h} − 2Y_{t+h−12} + Y_{t+h−24}, Y_t − 2Y_{t−12} + Y_{t−24})
  = 6γ_Y(h) − 4γ_Y(h + 12) − 4γ_Y(h − 12) + γ_Y(h + 24) + γ_Y(h − 24).

Since μ_Z(t) and γ_Z(t + h, t) do not depend on t, {Z_t : t ∈ Z} is (weakly) stationary.

Chapter 2
Problem 2.1. We find the best linear predictor X̂_{n+h} = aX_n + b of X_{n+h} by finding a and b such that E[X_{n+h} − X̂_{n+h}] = 0 and E[(X_{n+h} − X̂_{n+h})X_n] = 0. We have

E[X_{n+h} − X̂_{n+h}] = E[X_{n+h} − aX_n − b] = E[X_{n+h}] − aE[X_n] − b = μ(1 − a) − b

and

E[(X_{n+h} − X̂_{n+h})X_n] = E[(X_{n+h} − aX_n − b)X_n]
  = E[X_{n+h}X_n] − aE[X_n²] − bE[X_n]
  = E[X_{n+h}X_n] − E[X_{n+h}]E[X_n] + E[X_{n+h}]E[X_n] − a( E[X_n²] − E[X_n]² + E[X_n]² ) − bE[X_n]
  = Cov(X_{n+h}, X_n) + μ² − a( Cov(X_n, X_n) + μ² ) − bμ
  = γ(h) + μ² − a( γ(0) + μ² ) − bμ,

which implies that

b = μ(1 − a),   a = ( γ(h) + μ² − μb ) / ( γ(0) + μ² ).

Solving this system of equations we get a = γ(h)/γ(0) = ρ(h) and b = μ(1 − ρ(h)), i.e.

X̂_{n+h} = ρ(h)X_n + μ(1 − ρ(h)).
Problem 2.4. a) Put X_t = (−1)^t Z, where Z is a random variable with E[Z] = 0 and Var(Z) = 1. Then

γ_X(t + h, t) = Cov((−1)^{t+h} Z, (−1)^t Z) = (−1)^{2t+h} Cov(Z, Z) = (−1)^h = cos(πh).

b) Recall Problem 1.4 b), where X_t = Z_1 cos(ct) + Z_2 sin(ct) implies that γ_X(h) = σ² cos(ch). If we let Z_1, Z_2, Z_3, Z_4, W be independent random variables with zero mean and unit variance and put

X_t = Z_1 cos(πt/2) + Z_2 sin(πt/2) + Z_3 cos(πt/4) + Z_4 sin(πt/4) + W,

then we see that γ_X(h) = κ(h).
c) Let {Z_t : t ∈ Z} be WN(0, σ²) and put X_t = Z_t + θZ_{t−1}. Then E[X_t] = 0 and

γ_X(t + h, t) = Cov(Z_{t+h} + θZ_{t+h−1}, Z_t + θZ_{t−1})
  = Cov(Z_{t+h}, Z_t) + θ Cov(Z_{t+h}, Z_{t−1}) + θ Cov(Z_{t+h−1}, Z_t) + θ² Cov(Z_{t+h−1}, Z_{t−1})
  = { σ²(1 + θ²) if h = 0;  σ²θ if |h| = 1;  0 otherwise }.

If we let σ² = 1/(1 + θ²) and choose θ such that σ²θ = 0.4, then we get γ_X(h) = κ(h). Hence, we choose θ so that θ/(1 + θ²) = 0.4, which implies that θ = 1/2 or θ = 2.
Problem 2.8. Assume that there exists a stationary solution {X_t : t ∈ Z} to

X_t = φX_{t−1} + Z_t,   t = 0, ±1, ...,

where {Z_t : t ∈ Z} ~ WN(0, σ²) and |φ| = 1. Use the recursions

X_t = φX_{t−1} + Z_t = φ²X_{t−2} + φZ_{t−1} + Z_t = ... = φ^{n+1}X_{t−(n+1)} + Σ_{i=0}^{n} φ^i Z_{t−i},

which yields that

X_t − φ^{n+1}X_{t−(n+1)} = Σ_{i=0}^{n} φ^i Z_{t−i}.

We have that

Var( Σ_{i=0}^{n} φ^i Z_{t−i} ) = Σ_{i=0}^{n} φ^{2i} Var(Z_{t−i}) = Σ_{i=0}^{n} σ² = (n + 1)σ².

On the other side we have that

Var( X_t − φ^{n+1}X_{t−(n+1)} ) = 2γ(0) − 2φ^{n+1}γ(n + 1) ≤ 2γ(0) + 2|γ(n + 1)| ≤ 4γ(0).

This means that (n + 1)σ² ≤ 4γ(0) for all n. Letting n → ∞ implies that γ(0) = ∞, which is a contradiction, i.e. there exists no stationary solution.
Problem 2.11. We have that {X_t : t ∈ Z} is an AR(1) process with mean μ, so {X_t : t ∈ Z} satisfies

X_t − μ = φ(X_{t−1} − μ) + Z_t,   {Z_t : t ∈ Z} ~ WN(0, σ²),

with φ = 0.6 and σ² = 2. Since {X_t : t ∈ Z} is AR(1) we have that γ_X(h) = σ²φ^{|h|}/(1 − φ²).
We estimate μ by X̄_n = n^{−1} Σ_{k=1}^{n} X_k. For large values of n, X̄_n is approximately normally distributed with mean μ and variance n^{−1} Σ_{|h|<∞} γ(h) (see Section 2.4 in Brockwell and Davis). In our case the variance is

(1/n) ( σ²/(1 − φ²) ) ( 1 + 2 Σ_{h=1}^{∞} φ^h ) = (1/n) ( σ²/(1 − φ²) ) ( 1 + 2φ/(1 − φ) )
  = (1/n) ( σ²/(1 − φ²) ) (1 + φ)/(1 − φ) = σ²/( n(1 − φ)² ).

Hence, X̄_n is approximately N(μ, σ²/(n(1 − φ)²)). A 95% confidence interval for μ is given by

I_μ = ( x̄_n − λ_{0.025} σ/(√n (1 − φ)), x̄_n + λ_{0.025} σ/(√n (1 − φ)) ).

Putting in the numeric values gives I_μ = 0.271 ± 0.69. Since 0 ∈ I_μ the hypothesis that μ = 0 can not be rejected.
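As a quick arithmetic check (not part of the original solution), the half-width of the interval can be recomputed numerically; the sketch assumes n = 100 observations, which is consistent with the ±0.69 quoted above.

    import numpy as np

    # Half-width of the 95% interval for the mean of an AR(1) with phi = 0.6,
    # sigma^2 = 2, based on n = 100 observations (assumed sample size).
    phi, sigma2, n = 0.6, 2.0, 100
    half_width = 1.96 * np.sqrt(sigma2) / (np.sqrt(n) * (1 - phi))
    print(round(half_width, 2))   # 0.69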
Problem 2.15. Let X̂_{n+1} = P_nX_{n+1} = a_0 + a_1X_n + ... + a_nX_1. We may assume that μ_X(t) = 0; otherwise we can consider Y_t = X_t − μ. Let S(a_0, a_1, ..., a_n) = E[(X_{n+1} − X̂_{n+1})²] and minimize this w.r.t. a_0, a_1, ..., a_n:

S(a_0, a_1, ..., a_n) = E[(X_{n+1} − X̂_{n+1})²]
  = E[(X_{n+1} − a_0 − a_1X_n − ... − a_nX_1)²]
  = a_0² − 2a_0 E[X_{n+1} − a_1X_n − ... − a_nX_1] + E[(X_{n+1} − a_1X_n − ... − a_nX_1)²]
  = a_0² + E[(X_{n+1} − a_1X_n − ... − a_nX_1)²].

Differentiation with respect to a_i gives

∂S/∂a_0 = 2a_0,
∂S/∂a_i = −2E[(X_{n+1} − a_1X_n − ... − a_nX_1)X_{n+1−i}],   i = 1, ..., n.

Putting the partial derivatives equal to zero we get that S(a_0, a_1, ..., a_n) is minimized if

a_0 = 0   and   E[(X_{n+1} − X̂_{n+1})X_k] = 0 for each k = 1, ..., n.

Plugging in the expression for X_{n+1} we get that for k = 1, ..., n,

0 = E[(X_{n+1} − X̂_{n+1})X_k]
  = E[(φ_1X_n + ... + φ_pX_{n−p+1} + Z_{n+1} − a_1X_n − ... − a_nX_1)X_k].

This is clearly satisfied if we let

a_i = φ_i   if 1 ≤ i ≤ p,
a_i = 0     if i > p.

Since the best linear predictor is unique, this is the one. The mean square error is

E[(X_{n+1} − X̂_{n+1})²] = E[Z_{n+1}²] = σ².

Chapter 3
Problem 3.1. We write the ARMA processes as φ(B)X_t = θ(B)Z_t. The process {X_t : t ∈ Z} is causal if and only if φ(z) ≠ 0 for each |z| ≤ 1 and invertible if and only if θ(z) ≠ 0 for each |z| ≤ 1.
a) φ(z) = 1 + 0.2z − 0.48z² = 0 is solved by z_1 = 5/3 and z_2 = −5/4. Hence {X_t : t ∈ Z} is causal.
θ(z) = 1. Hence {X_t : t ∈ Z} is invertible.
b) φ(z) = 1 + 1.9z + 0.88z² = 0 is solved by z_1 = −10/11 and z_2 = −5/4. Hence {X_t : t ∈ Z} is not causal.
θ(z) = 1 + 0.2z + 0.7z² = 0 is solved by z_1 = (−1 − i√69)/7 and z_2 = (−1 + i√69)/7. Since |z_1| = |z_2| = √70/7 > 1, {X_t : t ∈ Z} is invertible.
c) φ(z) = 1 + 0.6z = 0 is solved by z = −5/3. Hence {X_t : t ∈ Z} is causal.
θ(z) = 1 + 1.2z = 0 is solved by z = −5/6. Hence {X_t : t ∈ Z} is not invertible.
d) φ(z) = 1 + 1.8z + 0.81z² = 0 is solved by z_1 = z_2 = −10/9. Hence {X_t : t ∈ Z} is causal.
θ(z) = 1. Hence {X_t : t ∈ Z} is invertible.
e) φ(z) = 1 + 1.6z = 0 is solved by z = −5/8. Hence {X_t : t ∈ Z} is not causal.
θ(z) = 1 − 0.4z + 0.04z² = 0 is solved by z_1 = z_2 = 5. Hence {X_t : t ∈ Z} is invertible.
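The roots can also be checked numerically; the following sketch (not part of the original solution) flags a polynomial as having all roots outside the unit circle when the smallest root modulus exceeds 1.

    import numpy as np

    def min_root_modulus(coeffs):
        # coeffs = [c0, c1, c2, ...] for c0 + c1*z + c2*z^2 + ...;
        # numpy.roots expects the highest-degree coefficient first.
        return np.abs(np.roots(coeffs[::-1])).min()

    # a) phi(z) = 1 + 0.2 z - 0.48 z^2, theta(z) = 1
    print(min_root_modulus([1.0, 0.2, -0.48]) > 1)   # True  -> causal

    # b) phi(z) = 1 + 1.9 z + 0.88 z^2, theta(z) = 1 + 0.2 z + 0.7 z^2
    print(min_root_modulus([1.0, 1.9, 0.88]) > 1)    # False -> not causal
    print(min_root_modulus([1.0, 0.2, 0.7]) > 1)     # True  -> invertible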

Problem 3.4. We have X_t = 0.8X_{t−2} + Z_t, where {Z_t : t ∈ Z} ~ WN(0, σ²). To obtain the Yule-Walker equations we multiply each side by X_{t−k} and take expected values. Then we get

E[X_tX_{t−k}] = 0.8E[X_{t−2}X_{t−k}] + E[Z_tX_{t−k}],

which gives us

γ(0) = 0.8γ(2) + σ²,
γ(k) = 0.8γ(k − 2),   k ≥ 1.

We use that γ(−k) = γ(k). Thus, we need to solve

γ(0) − 0.8γ(2) = σ²
γ(1) − 0.8γ(1) = 0
γ(2) − 0.8γ(0) = 0.

First we see that γ(1) = 0 and therefore γ(h) = 0 if h is odd. Next we solve for γ(0) and we get γ(0) = σ²(1 − 0.8²)^{−1}. It follows that γ(2k) = γ(0) 0.8^{|k|} and hence the ACF is

ρ(h) = { 1 if h = 0;  0.8^{|h|/2} if h = 2k, k = ±1, ±2, ...;  0 otherwise }.

The PACF can be computed as α(0) = 1, α(h) = φ_{hh}, where φ_{hh} comes from the best linear predictor of X_{h+1}, which has the form

X̂_{h+1} = Σ_{i=1}^{h} φ_{hi}X_{h+1−i}.

For an AR(2) process we have X̂_{h+1} = φ_1X_h + φ_2X_{h−1} for h ≥ 2, from which we can identify

α(0) = 1,   α(1) = 0,   α(2) = 0.8   and   α(h) = 0 for h ≥ 3.
Problem 3.6. The ACVF for {X_t : t ∈ Z} is

γ_X(t + h, t) = Cov(X_{t+h}, X_t) = Cov(Z_{t+h} + θZ_{t+h−1}, Z_t + θZ_{t−1})
  = γ_Z(h) + θγ_Z(h + 1) + θγ_Z(h − 1) + θ²γ_Z(h)
  = { σ²(1 + θ²), h = 0;  σ²θ, |h| = 1;  0 otherwise }.

On the other hand, the ACVF for {Y_t : t ∈ Z}, where Y_t = Z̃_t + θ^{−1}Z̃_{t−1} and {Z̃_t} ~ WN(0, σ²θ²), is

γ_Y(t + h, t) = Cov(Y_{t+h}, Y_t) = Cov(Z̃_{t+h} + θ^{−1}Z̃_{t+h−1}, Z̃_t + θ^{−1}Z̃_{t−1})
  = γ_{Z̃}(h) + θ^{−1}γ_{Z̃}(h + 1) + θ^{−1}γ_{Z̃}(h − 1) + θ^{−2}γ_{Z̃}(h)
  = { σ²θ²(1 + θ^{−2}) = σ²(1 + θ²), h = 0;  σ²θ²θ^{−1} = σ²θ, |h| = 1;  0 otherwise }.

Hence they are equal.

Problem 3.7. First we show that {W_t : t ∈ Z} is WN(0, σ_w²). We have

E[W_t] = E[ Σ_{j=0}^{∞} (−θ)^{−j} X_{t−j} ] = Σ_{j=0}^{∞} (−θ)^{−j} E[X_{t−j}] = 0,

since E[X_{t−j}] = 0 for each j. Next we compute the ACVF of {W_t : t ∈ Z} for h ≥ 0:

γ_W(t + h, t) = E[W_{t+h}W_t] = E[ Σ_{j=0}^{∞} (−θ)^{−j} X_{t+h−j} Σ_{k=0}^{∞} (−θ)^{−k} X_{t−k} ]
  = Σ_{j=0}^{∞} Σ_{k=0}^{∞} (−θ)^{−(j+k)} E[X_{t+h−j}X_{t−k}]
  = Σ_{j=0}^{∞} Σ_{k=0}^{∞} (−θ)^{−(j+k)} γ_X(h − j + k).

Since γ_X(r) = σ²(1 + θ²)1_{0}(r) + σ²θ1_{1}(|r|), only the terms with j = k + h, j = k + h − 1 and j = k + h + 1 contribute:

γ_W(t + h, t) = σ²(1 + θ²) Σ_{k≥0} (−θ)^{−(2k+h)} + σ²θ Σ_{k≥0, k+h−1≥0} (−θ)^{−(2k+h−1)} + σ²θ Σ_{k≥0} (−θ)^{−(2k+h+1)}.

For h ≥ 1 all three sums run over k ≥ 0 and, summing the geometric series Σ_{k=0}^{∞} θ^{−2k} = θ²/(θ² − 1),

γ_W(t + h, t) = σ² (−θ)^{−h} ( θ²/(θ² − 1) ) ( (1 + θ²) + θ(−θ) + θ(−θ)^{−1} )
  = σ² (−θ)^{−h} ( θ²/(θ² − 1) ) ( 1 + θ² − θ² − 1 ) = 0.

For h = 0 the middle sum starts at k = 1 (so that j = k − 1 ≥ 0), and a direct computation gives

γ_W(t, t) = ( σ²/(θ² − 1) ) ( (1 + θ²)θ² − θ² − θ² ) = σ²θ².

Hence

γ_W(t + h, t) = σ²θ² 1_{0}(h),

so {W_t : t ∈ Z} is WN(0, σ_w²) with σ_w² = θ²σ². To continue we have that

W_t = Σ_{j=0}^{∞} (−θ)^{−j} X_{t−j} = Σ_{j=0}^{∞} π_j X_{t−j},

with π_j = (−θ)^{−j} and Σ_{j=0}^{∞} |π_j| = Σ_{j=0}^{∞} |θ|^{−j} < ∞, so {X_t : t ∈ Z} is invertible and solves φ(B)X_t = θ(B)W_t with π(z) = Σ_{j=0}^{∞} π_j z^j = φ(z)/θ(z). This implies that we must have

Σ_{j=0}^{∞} π_j z^j = Σ_{j=0}^{∞} (−z/θ)^j = 1/(1 + z/θ) = φ(z)/θ(z).

Hence φ(z) = 1 and θ(z) = 1 + z/θ, i.e. {X_t : t ∈ Z} satisfies X_t = W_t + (1/θ)W_{t−1}.


Problem 3.11. The PACF can be computed as α(0) = 1, α(h) = φ_{hh}, where φ_{hh} comes from the best linear predictor of X_{h+1}, which has the form

X̂_{h+1} = Σ_{i=1}^{h} φ_{hi}X_{h+1−i}.

In particular α(2) = φ_{22} in the expression

X̂_3 = φ_{21}X_2 + φ_{22}X_1.

The best linear predictor satisfies

Cov(X_3 − X̂_3, X_i) = 0,   i = 1, 2.

This gives us

Cov(X_3 − X̂_3, X_1) = Cov(X_3 − φ_{21}X_2 − φ_{22}X_1, X_1) = γ(2) − φ_{21}γ(1) − φ_{22}γ(0) = 0

and

Cov(X_3 − X̂_3, X_2) = Cov(X_3 − φ_{21}X_2 − φ_{22}X_1, X_2) = γ(1) − φ_{21}γ(0) − φ_{22}γ(1) = 0.

Since we have an MA(1) process it has ACVF

γ(h) = { σ²(1 + θ²), h = 0;  σ²θ, |h| = 1;  0 otherwise }.

Thus, we have to solve the equations

φ_{21}γ(1) + φ_{22}γ(0) = 0,
(1 − φ_{22})γ(1) − φ_{21}γ(0) = 0.

Solving this system of equations we find

α(2) = φ_{22} = −θ²/(1 + θ² + θ⁴).

Chapter 4
Problem 4.4. By Corollary 4.1.1 we know that a function κ(h) with Σ_{|h|<∞} |κ(h)| < ∞ is the ACVF of some stationary process if and only if it is an even function and

f(λ) = (1/2π) Σ_{h=−∞}^{∞} e^{−iλh} κ(h) ≥ 0   for all λ ∈ (−π, π].

We have that κ(h) is even, κ(h) = κ(−h), and

f(λ) = (1/2π) Σ_{h=−3}^{3} e^{−iλh} κ(h)
  = (1/2π) ( −0.25e^{−i3λ} − 0.5e^{−i2λ} + 1 − 0.5e^{i2λ} − 0.25e^{i3λ} )
  = (1/2π) ( 1 − 0.25(e^{i3λ} + e^{−i3λ}) − 0.5(e^{i2λ} + e^{−i2λ}) )
  = (1/2π) ( 1 − 0.5 cos(3λ) − cos(2λ) ).

Do we have f(λ) ≥ 0 on (−π, π]? The answer is NO; for instance f(0) = −1/(4π) < 0. Hence, κ(h) is NOT an ACVF for a stationary time series.
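A quick numerical check (not part of the original solution) confirms that this candidate spectral density dips below zero:

    import numpy as np

    lam = np.linspace(-np.pi, np.pi, 2001)
    f = (1 - 0.5 * np.cos(3 * lam) - np.cos(2 * lam)) / (2 * np.pi)
    print(f.min() < 0)                   # True: f takes negative values
    print((1 - 0.5 - 1) / (2 * np.pi))   # f(0) = -1/(4*pi)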
Problem 4.5. Let Z_t = X_t + Y_t. First we show that γ_Z(h) = γ_X(h) + γ_Y(h):

γ_Z(t + h, t) = Cov(Z_{t+h}, Z_t) = Cov(X_{t+h} + Y_{t+h}, X_t + Y_t)
  = Cov(X_{t+h}, X_t) + Cov(X_{t+h}, Y_t) + Cov(Y_{t+h}, X_t) + Cov(Y_{t+h}, Y_t)
  = Cov(X_{t+h}, X_t) + Cov(Y_{t+h}, Y_t)   (since {X_t} and {Y_t} are uncorrelated)
  = γ_X(t + h, t) + γ_Y(t + h, t).

We have that

γ_Z(h) = ∫_{(−π,π]} e^{iλh} dF_Z(λ),

but we also know that

γ_Z(h) = γ_X(h) + γ_Y(h) = ∫_{(−π,π]} e^{iλh} dF_X(λ) + ∫_{(−π,π]} e^{iλh} dF_Y(λ)
  = ∫_{(−π,π]} e^{iλh} ( dF_X(λ) + dF_Y(λ) ).

Hence we have that dF_Z(λ) = dF_X(λ) + dF_Y(λ), which implies that

F_Z(λ) = ∫_{(−π,λ]} dF_Z(ω) = ∫_{(−π,λ]} ( dF_X(ω) + dF_Y(ω) ) = F_X(λ) + F_Y(λ).

Problem 4.6. Since {Y_t : t ∈ Z} is an MA(1) process we have

γ_Y(h) = { σ²(1 + θ²), h = 0;  σ²θ, |h| = 1;  0 otherwise }.

By Problem 2.2 the process S_t = A cos(πt/3) + B sin(πt/3) has ACVF γ_S(h) = ν² cos(πh/3). Since the processes are uncorrelated, Problem 4.5 gives that γ_X(h) = γ_S(h) + γ_Y(h). Moreover,

ν² cos(πh/3) = (ν²/2)( e^{iπh/3} + e^{−iπh/3} ) = ∫_{(−π,π]} e^{iλh} dF_S(λ),

where

dF_S(λ) = (ν²/2) δ(λ + π/3) dλ + (ν²/2) δ(λ − π/3) dλ.

This implies

F_S(λ) = { 0, λ < −π/3;  ν²/2, −π/3 ≤ λ < π/3;  ν², λ ≥ π/3 }.

Furthermore we have that

f_Y(λ) = (1/2π) Σ_{h=−∞}^{∞} e^{−iλh} γ_Y(h) = (1/2π) ( e^{iλ}γ_Y(−1) + γ_Y(0) + e^{−iλ}γ_Y(1) )
  = (σ²/2π) ( 1 + 2.5² + 2.5(e^{iλ} + e^{−iλ}) ) = (σ²/2π) ( 7.25 + 5 cos(λ) ),

with θ = 2.5. This implies that

F_Y(λ) = ∫_{−π}^{λ} f_Y(ω) dω = (σ²/2π) [ 7.25ω + 5 sin(ω) ]_{−π}^{λ} = (σ²/2π) ( 7.25(λ + π) + 5 sin(λ) ).

Finally we have F_X(λ) = F_S(λ) + F_Y(λ).


Problem 4.9. a) We start with γ_X(0),

γ_X(0) = ∫_{−π}^{π} e^{i0λ} f_X(λ) dλ = 100 ∫_{−π/6−0.01}^{−π/6+0.01} dλ + 100 ∫_{π/6−0.01}^{π/6+0.01} dλ = 100 · 0.04 = 4.

For γ_X(1) we have

γ_X(1) = ∫_{−π}^{π} e^{iλ} f_X(λ) dλ = 100 ∫_{−π/6−0.01}^{−π/6+0.01} e^{iλ} dλ + 100 ∫_{π/6−0.01}^{π/6+0.01} e^{iλ} dλ
  = 100 [ e^{iλ}/i ]_{−π/6−0.01}^{−π/6+0.01} + 100 [ e^{iλ}/i ]_{π/6−0.01}^{π/6+0.01}
  = (100/i) ( e^{i(−π/6+0.01)} − e^{i(−π/6−0.01)} + e^{i(π/6+0.01)} − e^{i(π/6−0.01)} )
  = 200 ( sin(π/6 + 0.01) − sin(π/6 − 0.01) )
  = 400 cos(π/6) sin(0.01) = 200√3 sin(0.01) ≈ 3.46.

The spectral density f_X(λ) is plotted in Figure 1(a).
b) Let

Y_t = ∇_{12}X_t = X_t − X_{t−12} = Σ_{k=−∞}^{∞} ψ_k X_{t−k},

with ψ_0 = 1, ψ_{12} = −1 and ψ_j = 0 otherwise. Then we have the spectral density f_Y(λ) = |Ψ(e^{−iλ})|² f_X(λ), where

Ψ(e^{−iλ}) = Σ_{k=−∞}^{∞} ψ_k e^{−ikλ} = 1 − e^{−i12λ}.

Hence,

f_Y(λ) = |1 − e^{−12iλ}|² f_X(λ) = (1 − e^{−12iλ})(1 − e^{12iλ}) f_X(λ) = 2(1 − cos(12λ)) f_X(λ).

The power transfer function |Ψ(e^{−iλ})|² is plotted in Figure 1(b) and the resulting spectral density f_Y(λ) is plotted in Figure 1(c).
c) The variance of Y_t is γ_Y(0), which is computed by

γ_Y(0) = ∫_{−π}^{π} f_Y(λ) dλ
  = 200 ∫_{−π/6−0.01}^{−π/6+0.01} (1 − cos(12λ)) dλ + 200 ∫_{π/6−0.01}^{π/6+0.01} (1 − cos(12λ)) dλ
  = 200 ( [ λ − sin(12λ)/12 ]_{−π/6−0.01}^{−π/6+0.01} + [ λ − sin(12λ)/12 ]_{π/6−0.01}^{π/6+0.01} )
  = 200 ( 0.04 − ( sin(2π + 0.12) − sin(2π − 0.12) )/6 )
  = 200 ( 0.04 − (1/3) sin(0.12) ) ≈ 0.0192,

where we used that the two band integrals are equal by symmetry and that 12(π/6 ± 0.01) = 2π ± 0.12.

[Figure 1: Exercise 4.9. (a) the spectral density f_X(λ); (b) the power transfer function |Ψ(e^{−iλ})|²; (c) the resulting spectral density f_Y(λ), concentrated near λ = π/6 ± 0.01.]
Problem 4.10. a) Let φ(z) = 1 − φz and θ(z) = 1 − θz. Then X_t = (θ(B)/φ(B)) Z_t and

f_X(λ) = ( |θ(e^{−iλ})|² / |φ(e^{−iλ})|² ) f_Z(λ) = ( |θ(e^{−iλ})|² / |φ(e^{−iλ})|² ) σ²/(2π).

For {W_t : t ∈ Z} we get

f_W(λ) = ( |φ̃(e^{−iλ})|² / |θ̃(e^{−iλ})|² ) f_X(λ)
  = ( |1 − φ^{−1}e^{−iλ}|² / |1 − θ^{−1}e^{−iλ}|² ) ( |1 − θe^{−iλ}|² / |1 − φe^{−iλ}|² ) σ²/(2π).

Now note that we can write

|1 − θ^{−1}e^{−iλ}|² = |e^{−iλ}|² |e^{iλ} − θ^{−1}|² = θ^{−2} |θe^{iλ} − 1|² = θ^{−2} |1 − θe^{−iλ}|²,

since a complex number and its conjugate have the same modulus. Inserting this and the corresponding expression with θ replaced by φ in the computation above we get

f_W(λ) = ( φ^{−2}|1 − φe^{−iλ}|² / ( θ^{−2}|1 − θe^{−iλ}|² ) ) ( |1 − θe^{−iλ}|² / |1 − φe^{−iλ}|² ) σ²/(2π) = (θ²/φ²) σ²/(2π),

which is constant.
b) Since {W_t : t ∈ Z} has constant spectral density it is white noise, and

σ_w² = γ_W(0) = ∫_{−π}^{π} f_W(λ) dλ = 2π (θ²/φ²) σ²/(2π) = θ²σ²/φ².

c) From the definition of {W_t : t ∈ Z} we get that φ̃(B)X_t = θ̃(B)W_t, which is a causal and invertible representation.

Chapter 5
Problem 5.1. We begin by writing the Yule-Walker equations. {Y_t : t ∈ Z} satisfies

Y_t − φ_1Y_{t−1} − φ_2Y_{t−2} = Z_t,   {Z_t : t ∈ Z} ~ WN(0, σ²).

Multiplying this equation with Y_{t−k} and taking expectations gives

γ(k) − φ_1γ(k − 1) − φ_2γ(k − 2) = { σ², k = 0;  0, k ≥ 1 }.

We rewrite the first three equations as

φ_1γ(k − 1) + φ_2γ(k − 2) = γ(k),   k = 1, 2,
γ(0) − φ_1γ(1) − φ_2γ(2) = σ²,      k = 0.

Introducing the notation

φ = (φ_1, φ_2)^T,   γ_2 = (γ(1), γ(2))^T,   Γ_2 = [[γ(0), γ(1)], [γ(1), γ(0)]],

we have Γ_2φ = γ_2 and σ² = γ(0) − φ^Tγ_2. We replace γ_2 by γ̂_2 and Γ_2 by Γ̂_2 and solve to get an estimate φ̂ of φ. That is, we solve

φ̂ = Γ̂_2^{−1}γ̂_2,   σ̂² = γ̂(0) − φ̂^Tγ̂_2.

Hence

φ̂ = ( 1/(γ̂(0)² − γ̂(1)²) ) [[γ̂(0), −γ̂(1)], [−γ̂(1), γ̂(0)]] (γ̂(1), γ̂(2))^T.

We get that

φ̂_1 = γ̂(1)( γ̂(0) − γ̂(2) ) / ( γ̂(0)² − γ̂(1)² ) = 1.32,
φ̂_2 = ( γ̂(0)γ̂(2) − γ̂(1)² ) / ( γ̂(0)² − γ̂(1)² ) = −0.634,
σ̂² = γ̂(0) − φ̂_1γ̂(1) − φ̂_2γ̂(2) = 289.18.

We also have that φ̂ is AN(φ, σ²Γ_2^{−1}/n) and, approximately, AN(φ, σ̂²Γ̂_2^{−1}/n). Here

σ̂²Γ̂_2^{−1}/n = (289.18/100) [[0.0021, −0.0017], [−0.0017, 0.0021]] = [[0.0060, −0.0048], [−0.0048, 0.0060]].

So we have approximately φ̂_1 ~ N(φ_1, 0.0060) and φ̂_2 ~ N(φ_2, 0.0060), and the confidence intervals are

I_{φ_1} = φ̂_1 ± λ_{0.025}√0.006 = 1.32 ± 0.15,
I_{φ_2} = φ̂_2 ± λ_{0.025}√0.006 = −0.634 ± 0.15.
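For readers who want to reproduce the arithmetic, the following sketch carries out the same Yule-Walker computation. The sample autocovariances gamma_hat below are placeholders chosen to be consistent with the estimates quoted above; they are not taken from the book.

    import numpy as np

    # Yule-Walker estimates for an AR(2) from sample autocovariances
    # gamma_hat = (gamma(0), gamma(1), gamma(2)); placeholder values.
    gamma_hat = np.array([1382.2, 1115.0, 593.0])
    n = 100

    Gamma2 = np.array([[gamma_hat[0], gamma_hat[1]],
                       [gamma_hat[1], gamma_hat[0]]])
    gamma2 = gamma_hat[1:3]

    phi_hat = np.linalg.solve(Gamma2, gamma2)          # (phi_1, phi_2)
    sigma2_hat = gamma_hat[0] - phi_hat @ gamma2       # innovation variance
    cov_phi = sigma2_hat * np.linalg.inv(Gamma2) / n   # approx. covariance of phi_hat
    half_width = 1.96 * np.sqrt(np.diag(cov_phi))      # 95% interval half-widths

    print(phi_hat, sigma2_hat, half_width)             # roughly 1.32, -0.63, 289, 0.15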

Problem 5.3. a) {X_t : t ∈ Z} is causal if φ(z) ≠ 0 for |z| ≤ 1, so let us check for which values of φ this can happen. φ(z) = 1 − φz − φ²z², so putting this equal to zero implies

z² + z/φ − 1/φ² = 0  ⟺  z_1 = (−1 + √5)/(2φ)   and   z_2 = (−1 − √5)/(2φ).

Furthermore |z_1| > 1 if |φ| < (√5 − 1)/2 ≈ 0.61 and |z_2| > 1 if |φ| < (1 + √5)/2 ≈ 1.61. Hence, the process is causal if |φ| < (√5 − 1)/2 ≈ 0.61.
b) The Yule-Walker equations are

γ(k) − φγ(k − 1) − φ²γ(k − 2) = { σ², k = 0;  0, k ≥ 1 }.

Rewriting the first 3 equations and using γ(−k) = γ(k) gives

γ(0) − φγ(1) − φ²γ(2) = σ²
γ(1) − φγ(0) − φ²γ(1) = 0
γ(2) − φγ(1) − φ²γ(0) = 0.

Multiplying the third equation by φ² and adding the first gives

−φ³γ(1) − φγ(1) − φ⁴γ(0) + γ(0) = σ²
γ(1) − φγ(0) − φ²γ(1) = 0.

We solve the second equation to obtain

φ = −1/(2ρ(1)) ± √( 1/(4ρ(1)²) + 1 ).

Inserting the estimated values of γ̂(0) and γ̂(1) = γ̂(0)ρ̂(1) gives the solutions φ = {0.509, −1.965}, and we choose the causal solution φ̂ = 0.509. Inserting this value in the expression for σ² we get

σ̂² = −φ̂³γ̂(1) − φ̂γ̂(1) − φ̂⁴γ̂(0) + γ̂(0) = 2.985.
Problem 5.4. a) Let us construct a test to see if the assumption that {X_t : t ∈ Z} is WN(0, σ²) is reasonable. To this end suppose that {X_t : t ∈ Z} is WN(0, σ²). Then, since ρ(k) = 0 for k ≥ 1, we have that ρ̂(k) is AN(0, 1/n). A 95% confidence interval for ρ(k) is then I_{ρ(k)} = ρ̂(k) ± λ_{0.025}/√200. This gives us

I_{ρ(1)} = 0.427 ± 0.139
I_{ρ(2)} = 0.475 ± 0.139
I_{ρ(3)} = 0.169 ± 0.139.

Clearly 0 ∉ I_{ρ(k)} for any of the observed k = 1, 2, 3 and we conclude that it is not reasonable to assume that {X_t : t ∈ Z} is white noise.
b) We estimate the mean by μ̂ = x̄_200 = 3.82. The Yule-Walker estimates are given by

φ̂ = R̂_2^{−1}ρ̂_2,   σ̂² = γ̂(0)( 1 − ρ̂_2^T R̂_2^{−1} ρ̂_2 ),

where

R̂_2 = [[ρ̂(0), ρ̂(1)], [ρ̂(1), ρ̂(0)]],   ρ̂_2 = (ρ̂(1), ρ̂(2))^T.

Solving this system gives the estimates φ̂_1 = 0.2742, φ̂_2 = 0.3579 and σ̂² = 0.8199.
c) We construct a 95% confidence interval for μ to test if we can reject the hypothesis that μ = 0. We have that X̄_200 is AN(μ, ν/n) with

ν = Σ_{|h|<∞} γ(h) ≈ γ̂(3) + γ̂(2) + γ̂(1) + γ̂(0) + γ̂(1) + γ̂(2) + γ̂(3) = 3.61.

An approximate 95% confidence interval for μ is then

I_μ = x̄_n ± λ_{0.025}√(ν/n) = 3.82 ± 1.96√(3.61/200) = 3.82 ± 0.263.

Since 0 ∉ I_μ we reject the hypothesis that μ = 0.
d) We have that approximately φ̂ is AN(φ, σ̂²Γ̂_2^{−1}/n). Inserting the observed values we get

σ̂²Γ̂_2^{−1}/n ≈ [[0.0050, −0.0021], [−0.0021, 0.0050]],

and hence φ̂_1 is AN(φ_1, 0.0050) and φ̂_2 is AN(φ_2, 0.0050). We get the 95% confidence intervals

I_{φ_1} = φ̂_1 ± λ_{0.025}√0.005 = 0.274 ± 0.139
I_{φ_2} = φ̂_2 ± λ_{0.025}√0.005 = 0.358 ± 0.139.

e) If the data were generated from an AR(2) process, then the PACF would be α(0) = 1, α(1) = ρ̂(1) = 0.427, α(2) = φ̂_2 = 0.358 and α(h) = 0 for h ≥ 3.
Problem 5.11. To obtain the maximum likelihood estimator we compute as if the process were Gaussian. Then the innovations are

X_1 − X̂_1 = X_1 ~ N(0, v_0),
X_2 − X̂_2 = X_2 − φX_1 ~ N(0, v_1),

where v_0 = σ²r_0 = E[(X_1 − X̂_1)²] and v_1 = σ²r_1 = E[(X_2 − X̂_2)²]. This implies v_0 = E[X_1²] = γ(0), r_0 = 1/(1 − φ²) and v_1 = E[(X_2 − φX_1)²] = γ(0) − 2φγ(1) + φ²γ(0), and hence

r_1 = ( γ(0)(1 + φ²) − 2φγ(1) )/σ² = ( 1 + φ² − 2φ² )/(1 − φ²) = 1.

Here we have used that γ(1) = φσ²/(1 − φ²). Since the distribution of the innovations is normal, the density of X_j − X̂_j is

f_{X_j − X̂_j}(x) = ( 1/√(2πσ²r_{j−1}) ) exp( −x²/(2σ²r_{j−1}) ),

and the likelihood function is

L(φ, σ²) = Π_{j=1}^{2} f_{X_j − X̂_j}(x_j − x̂_j)
  = ( 1/√((2πσ²)² r_0 r_1) ) exp( −(1/(2σ²)) ( (x_1 − x̂_1)²/r_0 + (x_2 − x̂_2)²/r_1 ) )
  = ( 1/√((2πσ²)² r_0 r_1) ) exp( −(1/(2σ²)) ( x_1²/r_0 + (x_2 − φx_1)²/r_1 ) ).

We maximize this by taking the logarithm and then differentiating:

log L(φ, σ²) = −(1/2) log( 4π²σ⁴ r_0 r_1 ) − (1/(2σ²)) ( x_1²/r_0 + (x_2 − φx_1)²/r_1 )
  = −(1/2) log( 4π²σ⁴/(1 − φ²) ) − (1/(2σ²)) ( x_1²(1 − φ²) + (x_2 − φx_1)² )
  = −log(2π) − log(σ²) + (1/2) log(1 − φ²) − (1/(2σ²)) ( x_1²(1 − φ²) + (x_2 − φx_1)² ).

Differentiating yields

∂l(φ, σ²)/∂σ² = −1/σ² + (1/(2σ⁴)) ( x_1²(1 − φ²) + (x_2 − φx_1)² ),
∂l(φ, σ²)/∂φ = −φ/(1 − φ²) + x_1x_2/σ².

Putting these expressions equal to zero gives σ² = (1/2)( x_1²(1 − φ²) + (x_2 − φx_1)² ) and then, after some computations, φ = 2x_1x_2/(x_1² + x_2²). Inserting the expression for φ into the equation for σ² gives the maximum likelihood estimators

σ̂² = (x_1² − x_2²)² / ( 2(x_1² + x_2²) )   and   φ̂ = 2x_1x_2/(x_1² + x_2²).

Chapter 6
Problem 6.5. The best linear predictor of Y_{n+1} in terms of 1, X_0, Y_1, ..., Y_n, i.e.

Ŷ_{n+1} = a_0 + cX_0 + a_1Y_1 + ... + a_nY_n,

must satisfy the orthogonality relations

Cov(Y_{n+1} − Ŷ_{n+1}, 1) = 0,
Cov(Y_{n+1} − Ŷ_{n+1}, X_0) = 0,
Cov(Y_{n+1} − Ŷ_{n+1}, Y_j) = 0,   j = 1, ..., n.

The second equation can be written as

Cov(Y_{n+1} − Ŷ_{n+1}, X_0) = E[(Y_{n+1} − a_0 − cX_0 − a_1Y_1 − ... − a_nY_n)X_0] = −cE[X_0²] = 0,

so we must have c = 0. This does not affect the other equations since E[Y_jX_0] = 0 for each j.
Problem 6.6. Put Y_t = ∇X_t. Then {Y_t : t ∈ Z} is an AR(2) process. We can rewrite this as X_t = Y_t + X_{t−1}. Putting t = n + h and using the linearity of the projection operator P_n gives P_nX_{n+h} = P_nY_{n+h} + P_nX_{n+h−1}. Since {Y_t : t ∈ Z} is an AR(2) process we have P_nY_{n+1} = φ_1Y_n + φ_2Y_{n−1}, P_nY_{n+2} = φ_1P_nY_{n+1} + φ_2Y_n and, iterating, P_nY_{n+h} = φ_1P_nY_{n+h−1} + φ_2P_nY_{n+h−2}. Let φ̃(z) = (1 − z)φ(z) = 1 − φ̃_1z − φ̃_2z² − φ̃_3z³. Then

(1 − z)φ(z) = 1 − φ_1z − φ_2z² − z + φ_1z² + φ_2z³,

i.e. φ̃_1 = φ_1 + 1, φ̃_2 = φ_2 − φ_1 and φ̃_3 = −φ_2. Then

P_nX_{n+h} = Σ_{j=1}^{3} φ̃_j P_nX_{n+h−j}.

This can be verified by first noting that

P_nY_{n+h} = φ_1P_nY_{n+h−1} + φ_2P_nY_{n+h−2}
  = φ_1( P_nX_{n+h−1} − P_nX_{n+h−2} ) + φ_2( P_nX_{n+h−2} − P_nX_{n+h−3} )
  = φ_1P_nX_{n+h−1} + (φ_2 − φ_1)P_nX_{n+h−2} − φ_2P_nX_{n+h−3},

and then

P_nX_{n+h} = P_nY_{n+h} + P_nX_{n+h−1}
  = (φ_1 + 1)P_nX_{n+h−1} + (φ_2 − φ_1)P_nX_{n+h−2} − φ_2P_nX_{n+h−3}
  = φ̃_1P_nX_{n+h−1} + φ̃_2P_nX_{n+h−2} + φ̃_3P_nX_{n+h−3}.

Hence, we have

g(h) = { φ̃_1g(h − 1) + φ̃_2g(h − 2) + φ̃_3g(h − 3), h ≥ 1;  X_{n+h}, h ≤ 0 }.

We may suggest a solution of the form g(h) = a + bξ_1^{−h} + cξ_2^{−h}, h > −3, where ξ_1 and ξ_2 are the solutions to φ(z) = 0, and g(−2) = X_{n−2}, g(−1) = X_{n−1}, g(0) = X_n. Let us first find the roots ξ_1 and ξ_2:

φ(z) = 1 − 0.8z + 0.25z² = 1 − (4/5)z + (1/4)z² = 0  ⟺  z² − (16/5)z + 4 = 0.

We get that z = 8/5 ± √((8/5)² − 4) = (8 ± 6i)/5, so ξ_1 = (8 + 6i)/5, ξ_2 = (8 − 6i)/5 and ξ_1^{−1} = 5/(8 + 6i) = 0.4 − 0.3i, ξ_2^{−1} = 0.4 + 0.3i. Next we find the constants a, b and c by solving

X_n = g(0) = a + b + c,
X_{n−1} = g(−1) = a + bξ_1 + cξ_2 = a + b(1.6 + 1.2i) + c(1.6 − 1.2i),
X_{n−2} = g(−2) = a + bξ_1² + cξ_2² = a + b(1.12 + 3.84i) + c(1.12 − 3.84i).

Writing a = a_1 + a_2i, b = b_1 + b_2i and c = c_1 + c_2i and splitting each equation into its real and imaginary parts gives a linear system of six real equations in six unknowns. Solving it (or noting that, since the observations are real, a must be real and c = b̄) gives

a = (20X_n − 16X_{n−1} + 5X_{n−2})/9 ≈ 2.22X_n − 1.78X_{n−1} + 0.56X_{n−2},
b = c̄ ≈ −0.61X_n + 0.89X_{n−1} − 0.28X_{n−2} + (0.11X_n + 0.03X_{n−1} − 0.14X_{n−2}) i.

Since |ξ_i^{−1}| < 1, the terms bξ_1^{−h} + cξ_2^{−h} vanish as h → ∞, so the forecasts P_nX_{n+h} converge to a.
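The recursion and the closed form can be cross-checked numerically; the sketch below (not part of the original solution) uses made-up values for the last three observations.

    import numpy as np

    # Hypothetical last observations x_{n-2}, x_{n-1}, x_n (illustrative values only).
    xnm2, xnm1, xn = 1.0, 2.0, 1.5

    # Forecast recursion g(h) = phi~_1 g(h-1) + phi~_2 g(h-2) + phi~_3 g(h-3),
    # with phi~(z) = (1 - z)(1 - 0.8 z + 0.25 z^2) = 1 - 1.8 z + 1.05 z^2 - 0.25 z^3.
    phi_t = [1.8, -1.05, 0.25]
    g = [xnm2, xnm1, xn]                 # g(-2), g(-1), g(0)
    for h in range(1, 20):
        g.append(phi_t[0]*g[-1] + phi_t[1]*g[-2] + phi_t[2]*g[-3])

    # Closed form g(h) = a + b xi1^{-h} + conj(b) xi2^{-h} with xi1^{-1} = 0.4 - 0.3i.
    a = (20*xn - 16*xnm1 + 5*xnm2) / 9
    b = complex(-11*xn + 16*xnm1 - 5*xnm2, 2*xn + 0.5*xnm1 - 2.5*xnm2) / 18
    xi1_inv = complex(0.4, -0.3)
    closed = [a + 2*(b * xi1_inv**h).real for h in range(1, 20)]

    print(np.allclose(g[3:], closed))    # True: recursion and closed form agree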

Chapter 7
Problem 7.1. The problem is not very well formulated; we replace the condition γ_Y(h) → 0 as h → ∞ by the condition that γ_Y(h) is strictly decreasing in |h|.
The process is stationary if μ_t = E[(X_{t,1}, X_{t,2})^T] and Γ(t + h, t) do not depend on t. We may assume that {Y_t} has mean zero, so that

E[X_{t,1}] = E[Y_t] = 0,   E[X_{t,2}] = E[Y_{t−d}] = 0,

and the covariance function is

Γ(t + h, t) = E[ (X_{t+h,1}, X_{t+h,2})^T (X_{t,1}, X_{t,2}) ]
  = [[E[Y_{t+h}Y_t], E[Y_{t+h}Y_{t−d}]], [E[Y_{t+h−d}Y_t], E[Y_{t+h−d}Y_{t−d}]]]
  = [[γ_Y(h), γ_Y(h + d)], [γ_Y(h − d), γ_Y(h)]].

Since neither μ_t nor Γ(t + h, t) depends on t, the process is stationary. By the assumption on γ_Y we then have that the cross-correlation is

ρ_{12}(h) = γ_{12}(h)/√( γ_{11}(0)γ_{22}(0) ) = γ_Y(h + d)/γ_Y(0) = ρ_Y(h + d).

In particular, ρ_{12}(0) = ρ_Y(d) < 1 whereas ρ_{12}(−d) = ρ_Y(0) = 1, so the cross-correlation is maximal at lag −d, which identifies the delay d.


Problem 7.3. We want to estimate the cross-correlation

ρ_{12}(h) = γ_{12}(h)/√( γ_{11}(0)γ_{22}(0) ).

We estimate

Γ(h) = [[γ_{11}(h), γ_{12}(h)], [γ_{21}(h), γ_{22}(h)]]

by

Γ̂(h) = (1/n) Σ_{t=1}^{n−h} (X_{t+h} − X̄_n)(X_t − X̄_n)^T for 0 ≤ h ≤ n − 1,   Γ̂(h) = Γ̂(−h)^T for −n + 1 ≤ h < 0.

Then we get ρ̂_{12}(h) = γ̂_{12}(h)/√( γ̂_{11}(0)γ̂_{22}(0) ). According to Theorem 7.3.1 in Brockwell and Davis we have, for h ≠ k, that

√n ( ρ̂_{12}(h), ρ̂_{12}(k) )^T is approximately N(0, Σ),

where

Σ_{11} = Σ_{22} = Σ_{j=−∞}^{∞} ρ_{11}(j)ρ_{22}(j),
Σ_{12} = Σ_{21} = Σ_{j=−∞}^{∞} ρ_{11}(j)ρ_{22}(j + k − h).

Since {X_{t,1}} and {X_{t,2}} are MA(1) processes we know that their ACFs are

ρ_{11}(h) = { 1, h = 0;  0.8/(1 + 0.8²), |h| = 1;  0 otherwise },
ρ_{22}(h) = { 1, h = 0;  −0.6/(1 + 0.6²), |h| = 1;  0 otherwise }.

Hence

Σ_{j=−∞}^{∞} ρ_{11}(j)ρ_{22}(j) = ρ_{11}(−1)ρ_{22}(−1) + ρ_{11}(0)ρ_{22}(0) + ρ_{11}(1)ρ_{22}(1)
  = −(0.8/(1 + 0.8²))(0.6/(1 + 0.6²)) + 1 − (0.8/(1 + 0.8²))(0.6/(1 + 0.6²)) ≈ 0.57.

For the covariance we see that ρ_{11}(j) ≠ 0 only if j = −1, 0, 1 and ρ_{22}(j + k − h) ≠ 0 only if j + k − h = −1, 0, 1. Hence, the covariance is

Σ_j ρ_{11}(j)ρ_{22}(j + k − h) = ρ_{11}(−1)ρ_{22}(0) + ρ_{11}(0)ρ_{22}(1) ≈ 0.0466,   if k − h = 1,
Σ_j ρ_{11}(j)ρ_{22}(j + k − h) = ρ_{11}(0)ρ_{22}(−1) + ρ_{11}(1)ρ_{22}(0) ≈ 0.0466,   if k − h = −1,
Σ_j ρ_{11}(j)ρ_{22}(j + k − h) = ρ_{11}(−1)ρ_{22}(1) ≈ −0.2152,   if k − h = 2,
Σ_j ρ_{11}(j)ρ_{22}(j + k − h) = ρ_{11}(1)ρ_{22}(−1) ≈ −0.2152,   if k − h = −2.

Problem 7.5. {X_t : t ∈ Z} is a causal process if det Φ(z) ≠ 0 for all |z| ≤ 1 (Brockwell and Davis, page 242). Furthermore, if {X_t : t ∈ Z} is a causal process, then

X_t = Σ_{j=0}^{∞} Ψ_j Z_{t−j},

where

Ψ_j = Θ_j + Σ_{k=1}^{p} Φ_k Ψ_{j−k},

with Θ_0 = I, Θ_j = 0 for j > q, Φ_j = 0 for j > p and Ψ_j = 0 for j < 0, and

Γ(h) = Σ_{j=0}^{∞} Ψ_{h+j} Σ Ψ_j^T,   h = 0, 1, 2, ...

(where in this case Σ = I_2). We have to establish that {X_t : t ∈ Z} is a causal process and then derive Γ(h). We have

det Φ(z) = det(I − zΦ_1) = det( [[1, 0], [0, 1]] − (z/2)[[1, 1], [0, 1]] ) = det( [[1 − z/2, −z/2], [0, 1 − z/2]] ) = (2 − z)²/4,

which implies that |z_1| = |z_2| = 2 > 1, and hence {X_t : t ∈ Z} is a causal process. We have that Ψ_j = Θ_j + Φ_1Ψ_{j−1} and

Ψ_0 = Θ_0 = I,
Ψ_1 = Θ_1 + Φ_1Ψ_0 = Θ_1 + Φ_1 = (1/2)[[2, 1], [1, 2]],
Ψ_{n+1} = Φ_1Ψ_n   for n ≥ 1.

From the last equation we get that Ψ_{n+1} = Φ_1^nΨ_1 = Φ_1^n(Θ_1 + Φ_1), and since

Φ_1^n = (1/2^n)[[1, n], [0, 1]],

we obtain

Ψ_{j+1}Ψ_{j+1}^T = Φ_1^jΨ_1Ψ_1^T(Φ_1^T)^j = (1/2^{2j}) (1/4) [[1, j], [0, 1]] [[5, 4], [4, 5]] [[1, 0], [j, 1]]
  = (1/2^{2j}) (1/4) [[5 + 8j + 5j², 4 + 5j], [4 + 5j, 5]].

Assume that h ≥ 0; then

Γ(h) = Σ_{j=0}^{∞} Ψ_{h+j}Ψ_j^T = Ψ_h + Σ_{j=1}^{∞} Ψ_{h+j}Ψ_j^T = Ψ_h + Φ_1^h Σ_{j=0}^{∞} Ψ_{j+1}Ψ_{j+1}^T,

and summing the series (Σ 4^{−j} = 4/3, Σ j4^{−j} = 4/9, Σ j²4^{−j} = 20/27) gives

Σ_{j=0}^{∞} Ψ_{j+1}Ψ_{j+1}^T = (1/4) Σ_{j=0}^{∞} (1/2^{2j}) [[5 + 8j + 5j², 4 + 5j], [4 + 5j, 5]] = [[94/27, 17/9], [17/9, 5/3]].

We have that

Ψ_h = { I, h = 0;  Φ_1^{h−1}Ψ_1 = Φ_1^{h−1}(Θ_1 + Φ_1), h > 0 },

which gives that

Γ(0) = [[1, 0], [0, 1]] + [[94/27, 17/9], [17/9, 5/3]] = [[121/27, 17/9], [17/9, 8/3]],

and for h > 0

Γ(h) = Φ_1^{h−1}Ψ_1 + Φ_1^h [[94/27, 17/9], [17/9, 5/3]] = Φ_1^{h−1}( Ψ_1 + Φ_1 [[94/27, 17/9], [17/9, 5/3]] )
  = (1/2^h) [[1, h − 1], [0, 1]] [[199/27, 41/9], [26/9, 11/3]].

Chapter 8
Problem 8.7. First we would like to show that

X_{t+1} = [[1, θ], [1, 0]] (Z_{t+1}, Z_t)^T     (8.1)

is a solution to

X_{t+1} = [[0, θ], [0, 0]] X_t + (1, 1)^T Z_{t+1}.     (8.2)

Let

A = [[0, θ], [0, 0]]   and   B = (1, 1)^T,

and note that

A² = [[0, 0], [0, 0]].

Then equation (8.2) can be written as

X_{t+1} = AX_t + BZ_{t+1} = A(AX_{t−1} + BZ_t) + BZ_{t+1} = A²X_{t−1} + ABZ_t + BZ_{t+1}
  = (θZ_t, 0)^T + (Z_{t+1}, Z_{t+1})^T = (Z_{t+1} + θZ_t, Z_{t+1})^T = [[1, θ], [1, 0]] (Z_{t+1}, Z_t)^T,

and hence (8.1) is a solution to equation (8.2). Next we prove that (8.1) is the unique solution to (8.2). Let X'_{t+1} be another solution to equation (8.2) and consider the difference

X_{t+1} − X'_{t+1} = AX_t + BZ_{t+1} − AX'_t − BZ_{t+1} = A(X_t − X'_t)
  = A( AX_{t−1} + BZ_t − AX'_{t−1} − BZ_t ) = A²( X_{t−1} − X'_{t−1} ) = 0,

since A² = 0. This implies that X_{t+1} = X'_{t+1}, i.e. (8.1) is the unique solution to (8.2). Moreover, X_t is stationary since

μ_X(t) = [[1, θ], [1, 0]] (E[Z_t], E[Z_{t−1}])^T = (0, 0)^T

and

Γ_X(t + h, t) = [[γ_{11}(t + h, t), γ_{12}(t + h, t)], [γ_{21}(t + h, t), γ_{22}(t + h, t)]]
  = [[Cov(Z_{t+h} + θZ_{t+h−1}, Z_t + θZ_{t−1}), Cov(Z_{t+h} + θZ_{t+h−1}, Z_t)],
     [Cov(Z_{t+h}, Z_t + θZ_{t−1}), Cov(Z_{t+h}, Z_t)]]
  = σ² [[ (1 + θ²)1_{0}(h) + θ1_{{−1,1}}(h), 1_{0}(h) + θ1_{1}(h) ], [ 1_{0}(h) + θ1_{−1}(h), 1_{0}(h) ]],

i.e. neither of them depends on t. Now we see that

Y_t = [1 0] X_t = [1 0] [[1, θ], [1, 0]] (Z_t, Z_{t−1})^T = Z_t + θZ_{t−1},

which is the MA(1) process.

Problem 8.9. Let Y_t consist of Y_{t,1} and Y_{t,2}; then we can write

Y_t = (Y_{t,1}, Y_{t,2})^T = (G_1X_{t,1} + W_{t,1}, G_2X_{t,2} + W_{t,2})^T
    = [[G_1, 0], [0, G_2]] (X_{t,1}, X_{t,2})^T + (W_{t,1}, W_{t,2})^T.

Set

G = [[G_1, 0], [0, G_2]],   X_t = (X_{t,1}, X_{t,2})^T   and   W_t = (W_{t,1}, W_{t,2})^T;

then we have Y_t = GX_t + W_t. Similarly we have that

X_{t+1} = (X_{t+1,1}, X_{t+1,2})^T = (F_1X_{t,1} + V_{t,1}, F_2X_{t,2} + V_{t,2})^T
        = [[F_1, 0], [0, F_2]] (X_{t,1}, X_{t,2})^T + (V_{t,1}, V_{t,2})^T,

and set

F = [[F_1, 0], [0, F_2]]   and   V_t = (V_{t,1}, V_{t,2})^T.

Finally we have the state-space representation

Y_t = GX_t + W_t,
X_{t+1} = FX_t + V_t.
Problem 8.13. We have to solve

Ω = Ω + σ_v² − Ω²/(Ω + σ_w²),

which is equivalent to

Ω²/(Ω + σ_w²) − σ_v² = 0.

Multiplying with Ω + σ_w² we get

Ω² − σ_v²Ω − σ_v²σ_w² = 0,

which has the solutions

Ω = σ_v²/2 ± √( σ_v⁴/4 + σ_v²σ_w² ) = ( σ_v² ± √(σ_v⁴ + 4σ_v²σ_w²) ) / 2.

Since Ω ≥ 0 we take the positive root, which is the solution we wanted.
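As an optional check (not part of the original solution), iterating the recursion numerically converges to the closed-form root; the parameter values below are arbitrary illustrative choices.

    import numpy as np

    # Riccati recursion for the local-level model, sigma_v^2 = 1.0, sigma_w^2 = 2.0.
    sv2, sw2 = 1.0, 2.0

    omega = 0.0
    for _ in range(200):
        omega = omega + sv2 - omega**2 / (omega + sw2)

    closed_form = sv2 / 2 + np.sqrt(sv2**2 / 4 + sv2 * sw2)
    print(np.isclose(omega, closed_form))   # True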
Problem 8.14. We have that

Ω_{t+1} = Ω_t + σ_v² − Ω_t²/(Ω_t + σ_w²),

and since σ_v² = Ω²/(Ω + σ_w²) at the steady-state solution Ω, subtracting yields

Ω_{t+1} − Ω = Ω_t − Ω − Ω_t²/(Ω_t + σ_w²) + Ω²/(Ω + σ_w²)
  = σ_w²Ω_t/(Ω_t + σ_w²) − σ_w²Ω/(Ω + σ_w²)
  = σ_w² ( Ω_t/(Ω_t + σ_w²) − Ω/(Ω + σ_w²) ).

This implies that

(Ω_{t+1} − Ω)(Ω_t − Ω) = σ_w² ( Ω_t/(Ω_t + σ_w²) − Ω/(Ω + σ_w²) ) (Ω_t − Ω).

Now, note that the function f(x) = x/(x + σ_w²) is increasing in x. Indeed, f'(x) = σ_w²/(x + σ_w²)² > 0. Thus for Ω_t > Ω both factors are > 0 and for Ω_t < Ω both factors are < 0. Hence, (Ω_{t+1} − Ω)(Ω_t − Ω) ≥ 0.

Problem 8.15. We have the equations for θ and σ²:

σ²θ = −σ_w²,
σ²(1 + θ²) = 2σ_w² + σ_v².

From the first equation we get that σ² = −σ_w²/θ, and inserting this in the second equation gives

2σ_w² + σ_v² = −(σ_w²/θ)(1 + θ²),

and multiplying by θ gives the equation

θ(2σ_w² + σ_v²) + σ_w² + σ_w²θ² = 0.

This can be rewritten as

θ² + θ(2σ_w² + σ_v²)/σ_w² + 1 = 0,

which has the solution

θ = −(2σ_w² + σ_v²)/(2σ_w²) ± √( (2σ_w² + σ_v²)²/(4σ_w⁴) − 1 ) = −( 2σ_w² + σ_v² ∓ √(σ_v⁴ + 4σ_v²σ_w²) ) / (2σ_w²).

To get an invertible representation we choose the solution

θ = −( 2σ_w² + σ_v² − √(σ_v⁴ + 4σ_v²σ_w²) ) / (2σ_w²).

To show that θ = −σ_w²/(Ω + σ_w²), recall the steady-state solution

Ω = ( σ_v² + √(σ_v⁴ + 4σ_v²σ_w²) ) / 2,

which gives

−σ_w²/(Ω + σ_w²) = −2σ_w² / ( 2σ_w² + σ_v² + √(σ_v⁴ + 4σ_v²σ_w²) )
  = −2σ_w² ( 2σ_w² + σ_v² − √(σ_v⁴ + 4σ_v²σ_w²) ) / ( (2σ_w² + σ_v²)² − σ_v⁴ − 4σ_v²σ_w² )
  = −2σ_w² ( 2σ_w² + σ_v² − √(σ_v⁴ + 4σ_v²σ_w²) ) / (4σ_w⁴)
  = −( 2σ_w² + σ_v² − √(σ_v⁴ + 4σ_v²σ_w²) ) / (2σ_w²) = θ.

Chapter 10
Problem 10.5. First a remark on the existence of such a process: we assume for simplicity that p = 1. A necessary and sufficient condition for the existence of a causal, stationary solution to the ARCH(1) equations with E[Z_t⁴] < ∞ is that α_1² < 1/3. If p > 1, existence of a causal, stationary solution is much more complicated. Let us now proceed with the solution to the problem.
We have

e_t² ( 1 + Σ_{i=1}^{p} α_i Y_{t−i} ) = e_t² ( 1 + Σ_{i=1}^{p} α_i Z_{t−i}²/α_0 ) = e_t² ( α_0 + Σ_{i=1}^{p} α_i Z_{t−i}² ) / α_0 = e_t² h_t/α_0 = Z_t²/α_0 = Y_t,

hence Y_t = Z_t²/α_0 satisfies the given equation. Let us now compute its ACVF. We assume h ≥ 1; then

E[Y_tY_{t−h}] = E[ e_t² ( 1 + Σ_{i=1}^{p} α_i Y_{t−i} ) Y_{t−h} ]
  = E[e_t²] E[ ( 1 + Σ_{i=1}^{p} α_i Y_{t−i} ) Y_{t−h} ]
  = E[Y_{t−h}] + Σ_{i=1}^{p} α_i E[Y_{t−i}Y_{t−h}].

Since γ_Y(h) = Cov(Y_t, Y_{t−h}) = E[Y_tY_{t−h}] − μ_Y² we get

γ_Y(h) + μ_Y² = μ_Y + Σ_{i=1}^{p} α_i ( γ_Y(h − i) + μ_Y² )

and then

γ_Y(h) − Σ_{i=1}^{p} α_i γ_Y(h − i) = μ_Y + μ_Y² ( Σ_{i=1}^{p} α_i − 1 ).

We can compute μ_Y as

μ_Y = E[Y_t] = E[ e_t² ( 1 + Σ_{i=1}^{p} α_i Y_{t−i} ) ] = 1 + Σ_{i=1}^{p} α_i E[Y_t] = 1 + μ_Y Σ_{i=1}^{p} α_i.

From this expression we see that μ_Y = 1/( 1 − Σ_{i=1}^{p} α_i ). This means that we have

γ_Y(h) − Σ_{i=1}^{p} α_i γ_Y(h − i) = 1/( 1 − Σ_{i=1}^{p} α_i ) − ( 1 − Σ_{i=1}^{p} α_i )/( 1 − Σ_{i=1}^{p} α_i )² = 0.

Dividing by γ_Y(0) we find that the ACF ρ_Y(h) satisfies

ρ_Y(0) = 1,
ρ_Y(h) − Σ_{i=1}^{p} α_i ρ_Y(h − i) = 0,   h ≥ 1,

which corresponds to the Yule-Walker equations for the ACF of an AR(p) process

W_t = α_1W_{t−1} + ... + α_pW_{t−p} + Z_t.
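As an optional simulation check (not part of the original solution), for an ARCH(1) process the ACF of Y_t = Z_t²/α_0 should decay like α_1^h, as for an AR(1) with coefficient α_1; the parameter values below are illustrative and chosen so that the needed moments are finite.

    import numpy as np

    rng = np.random.default_rng(1)
    alpha0, alpha1, n = 1.0, 0.25, 500_000

    e = rng.standard_normal(n)
    Z2 = np.empty(n)
    Z2[0] = alpha0 / (1 - alpha1)           # start near the stationary mean of Z_t^2
    for t in range(1, n):
        Z2[t] = e[t]**2 * (alpha0 + alpha1 * Z2[t-1])

    Y = Z2 / alpha0
    Yc = Y - Y.mean()
    acf = [np.mean(Yc[h:] * Yc[:-h]) / np.mean(Yc**2) for h in (1, 2, 3)]
    print(acf)                               # roughly 0.25, 0.0625, 0.0156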
