
Elyssa Jane V. Gomez
Angelica H. Mutia

1. (ARMA lag inclusion) Review Table 8.1. Why is the MA(3) term included even though the p-value indicates that it is not significant? What would be the costs and benefits of dropping the insignificant MA(3) term?
In practice we usually include all lags up to the chosen maximum; however, insignificant lags can still be dropped without doing much damage. The costs and benefits of dropping the MA(3) term depend on the sample size. If the sample is large, there is little benefit to dropping it (and a risk of misspecification if the term actually matters); if the sample is small, so that degrees of freedom are limited, the benefit of the more parsimonious model is larger.

2. (Shapes of correlograms) Given the following ARMA processes, sketch the expected forms of
the autocorrelation and partial autocorrelation functions. (Hint: examine the roots of the various
autoregressive and moving average lag operator polynomials.)

a. yt = (1 / (1 − 1.05L − 0.09L²)) εt

 εt = (1 − 1.05L − 0.09L²)yt
 εt = yt − 1.05Lyt − 0.09L²yt
 yt = 1.05Lyt + 0.09L²yt + εt

Since Lyt = yt−1 and L²yt = yt−2, then

 yt = 1.05yt−1 + 0.09yt−2 + εt
We use EViews to generate the correlogram (not reproduced here). The autocorrelations fail to damp out, which reveals that the process is not covariance stationary: one root of the autoregressive lag polynomial 1 − 1.05L − 0.09L² lies inside the unit circle.
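As a check, we can examine the roots of the autoregressive lag polynomial directly, as the hint suggests. A minimal sketch in Python with NumPy (our own substitute for the EViews output, which is not reproduced): covariance stationarity requires every root of 1 − 1.05z − 0.09z² = 0 to lie outside the unit circle.

```python
import numpy as np

# Roots of the AR lag polynomial 1 - 1.05z - 0.09z^2 = 0.
# np.roots takes coefficients from the highest power down.
roots = np.roots([-0.09, -1.05, 1.0])
moduli = np.abs(roots)
print(sorted(moduli))  # one root has modulus below 1

# Covariance stationarity requires ALL roots outside the unit
# circle; here one root lies inside it, which is why the
# autocorrelations fail to damp out.
assert moduli.min() < 1.0
```

The offending root has modulus roughly 0.89, i.e. strictly inside the unit circle, confirming nonstationarity.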

b. 𝑦𝑡 = (1 − 0.4𝐿)𝜀𝑡
 yt = 𝜀t - 0.4L𝜀t

Since L𝜀t = 𝜀t-1 , then


 yt = 𝜀t - 0.4𝜀t-1
Using EViews, we generate the correlogram (not reproduced here). The autocorrelation function cuts off sharply beyond lag one, as expected from the MA(1) structure, while the partial autocorrelation function decays gradually.
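The cutoff can also be verified numerically. A minimal simulation sketch in Python/NumPy (the sample size and seed are arbitrary choices of ours, not part of the exercise): for an MA(1), ρ(1) = θ/(1 + θ²) = −0.4/1.16 ≈ −0.345, and ρ(τ) = 0 for τ ≥ 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
eps = rng.standard_normal(n + 1)
y = eps[1:] - 0.4 * eps[:-1]          # y_t = e_t - 0.4 e_{t-1}

def sample_acf(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return (x[lag:] * x[:-lag]).mean() / (x * x).mean()

theo_rho1 = -0.4 / (1 + 0.4**2)       # theoretical rho(1) ~ -0.345
print(sample_acf(y, 1), theo_rho1)    # sample value close to theory
print(sample_acf(y, 2))               # close to 0: sharp cutoff
```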

c. yt = (1 / (1 − 0.7L)) εt

 𝜀t = (1 - 0.7L)yt
 𝜀t = yt − 0.7𝐿yt
 yt = 0.7𝐿yt + 𝜀t

Since Lyt = yt-1 , then


 yt = 0.7yt-1 + 𝜀t
Using EViews, we generate the correlogram (not reproduced here). The autocorrelation function decays exponentially, ρ(τ) = 0.7^τ, due to the stationary AR(1) structure, while the partial autocorrelation function cuts off after lag one.
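The exponential decay can be confirmed by simulation; a sketch in Python/NumPy (seed and sample size are arbitrary choices) compares sample autocorrelations with the theoretical values 0.7^τ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + eps[t]    # y_t = 0.7 y_{t-1} + e_t

def sample_acf(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return (x[lag:] * x[:-lag]).mean() / (x * x).mean()

# For a stationary AR(1), rho(tau) = 0.7**tau.
for lag in (1, 2, 3):
    print(lag, sample_acf(y, lag), 0.7**lag)
```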

3. (The autocovariance function of the MA(1) process, revisited) In the text we wrote

γ(τ) = E(yt yt−τ) = E((εt + θεt−1)(εt−τ + θεt−τ−1)) = { θσ², τ = 1; 0, otherwise }
Fill in the missing steps by evaluating explicitly the expectation

𝑬((𝜺𝒕 + 𝜽𝜺𝒕−𝟏 )(𝜺𝒕−𝝉 + 𝜽𝜺𝒕−𝝉−𝟏 )).

E((εt + θεt−1)(εt−τ + θεt−τ−1))

= E(εt εt−τ + θεt εt−τ−1 + θεt−1 εt−τ + θ²εt−1 εt−τ−1)

For τ = 1:

= E(εt εt−1 + θεt εt−2 + θε²t−1 + θ²εt−1 εt−2)

Because ε is white noise, the expectation of any product of ε's at two different times is zero, so

= E(0 + 0 + θε²t−1 + 0)
= θE(ε²t−1)
= θσ²

When we set τ > 1:

E(εt εt−2 + θεt εt−3 + θεt−1 εt−2 + θ²εt−1 εt−3), τ = 2
E(εt εt−3 + θεt εt−4 + θεt−1 εt−3 + θ²εt−1 εt−4), τ = 3

and so on. Notice that all expected cross products of ε's vanish, so we simply get γ(τ) = 0 for τ > 1.
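The result γ(1) = θσ², γ(τ) = 0 for τ > 1 can be spot-checked by Monte Carlo. A Python/NumPy sketch (θ = 0.6 and σ² = 1 are arbitrary values of our choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n = 0.6, 500_000
eps = rng.standard_normal(n + 1)      # white noise with sigma^2 = 1
y = eps[1:] + theta * eps[:-1]        # y_t = e_t + theta e_{t-1}

def sample_autocov(x, lag):
    """Sample autocovariance at the given lag."""
    return (x[lag:] * x[:-lag]).mean() - x.mean() ** 2

print(sample_autocov(y, 1))  # near theta * sigma^2 = 0.6
print(sample_autocov(y, 2))  # near 0
print(sample_autocov(y, 3))  # near 0
```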
4. (ARMA algebra) Derive expressions for the autocovariance function, autocorrelation function,
conditional mean, unconditional mean, conditional variance and unconditional variance of the
following processes:

a. 𝑦𝑡 = 𝜇 + 𝜀𝑡 + 𝜃1 𝜀𝑡−1 + 𝜃2 𝜀𝑡−2
Autocovariance Function:

γ(τ) = cov(yt, yt−τ) = E((yt − μ)(yt−τ − μ))
= E((εt + θ1εt−1 + θ2εt−2)(εt−τ + θ1εt−τ−1 + θ2εt−τ−2))

Expanding the product and noting that, because ε is white noise, only products of ε's at the same date have nonzero expectation:

γ(0) = var(yt) = E(ε²t + θ1²ε²t−1 + θ2²ε²t−2) = (1 + θ1² + θ2²)σ²
γ(1) = E(θ1ε²t−1 + θ1θ2ε²t−2) = θ1(1 + θ2)σ²
γ(2) = E(θ2ε²t−2) = θ2σ²
γ(τ) = 0, τ > 2

Autocorrelation Function:

ρ(τ) = γ(τ) / γ(0), τ = 0, 1, 2, …

so ρ(0) = 1, ρ(1) = θ1(1 + θ2) / (1 + θ1² + θ2²), ρ(2) = θ2 / (1 + θ1² + θ2²), and ρ(τ) = 0 for τ > 2.
Conditional Mean:

E(yt|Ωt−1) = E(μ + εt + θ1εt−1 + θ2εt−2|Ωt−1)
= E(μ|Ωt−1) + E(εt|Ωt−1) + θ1E(εt−1|Ωt−1) + θ2E(εt−2|Ωt−1)
= μ + 0 + θ1εt−1 + θ2εt−2
= μ + θ1εt−1 + θ2εt−2,

since εt−1 and εt−2 are known at time t − 1.
Unconditional Mean:

E(yt) = E(μ + εt + θ1εt−1 + θ2εt−2) = E(μ) + E(εt) + θ1E(εt−1) + θ2E(εt−2) = μ + 0 + 0 + 0 = μ

Conditional Variance:

var(yt|Ωt−1) = E((yt − E(yt|Ωt−1))²|Ωt−1) = E(ε²t|Ωt−1) = σ²,

since the only term unknown at time t − 1 is εt.
Unconditional Variance:

var(yt) = var(μ + εt + θ1εt−1 + θ2εt−2) = γ(0) = (1 + θ1² + θ2²)σ²
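Under the white-noise assumptions, the MA(2) moments work out to E(yt) = μ, var(yt) = (1 + θ1² + θ2²)σ², γ(1) = θ1(1 + θ2)σ², and γ(2) = θ2σ². A Monte Carlo sketch in Python/NumPy checks them; the parameter values μ = 2, θ1 = 0.5, θ2 = 0.3, σ² = 1 are our own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, th1, th2, n = 2.0, 0.5, 0.3, 500_000
eps = rng.standard_normal(n + 2)      # white noise with sigma^2 = 1
y = mu + eps[2:] + th1 * eps[1:-1] + th2 * eps[:-2]

def sample_autocov(x, lag):
    """Sample autocovariance at the given lag."""
    return (x[lag:] * x[:-lag]).mean() - x.mean() ** 2

print(y.mean())              # near mu = 2.0
print(y.var())               # near 1 + th1^2 + th2^2 = 1.34
print(sample_autocov(y, 1))  # near th1 * (1 + th2) = 0.65
print(sample_autocov(y, 2))  # near th2 = 0.3
print(sample_autocov(y, 3))  # near 0
```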
b. 𝑦𝑡 = 𝜑𝑦𝑡−1 + 𝜀𝑡 + 𝜃𝜀𝑡−1
Autocovariance Function:

Assume |φ| < 1, so that the process is covariance stationary; its unconditional mean is 0 (shown below), so γ(τ) = E(yt yt−τ). Multiplying yt = φyt−1 + εt + θεt−1 by yt−τ and taking expectations, using E(yt εt) = σ² and E(yt εt−1) = (φ + θ)σ²:

γ(0) = φγ(1) + σ² + θ(φ + θ)σ²
γ(1) = φγ(0) + θσ²
γ(τ) = φγ(τ−1), τ ≥ 2

Solving the first two equations simultaneously:

γ(0) = var(yt) = (1 + θ² + 2φθ)σ² / (1 − φ²)
γ(1) = (1 + φθ)(φ + θ)σ² / (1 − φ²)

Autocorrelation Function:

ρ(τ) = γ(τ) / γ(0), τ = 0, 1, 2, …

so ρ(0) = 1, ρ(1) = (1 + φθ)(φ + θ) / (1 + θ² + 2φθ), and ρ(τ) = φρ(τ−1) for τ ≥ 2: beyond lag one the autocorrelations decay geometrically at rate φ.

Conditional Mean:

E(yt|Ωt−1) = E(φyt−1 + εt + θεt−1|Ωt−1) = E(φyt−1|Ωt−1) + E(εt|Ωt−1) + θE(εt−1|Ωt−1) = φyt−1 + 0 + θεt−1 = φyt−1 + θεt−1
Unconditional Mean:

E(yt) = E(φyt−1 + εt + θεt−1) = φE(yt−1) + E(εt) + θE(εt−1) = φE(yt−1). By covariance stationarity E(yt) = E(yt−1), so (1 − φ)E(yt) = 0, hence E(yt) = 0.


Conditional Variance:

var(yt|Ωt−1) = E((yt − E(yt|Ωt−1))²|Ωt−1) = E(ε²t|Ωt−1) = σ²,

since yt − E(yt|Ωt−1) = εt.

Unconditional Variance:

var(yt) = var(φyt−1 + εt + θεt−1) = γ(0) = (1 + θ² + 2φθ)σ² / (1 − φ²)
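For the stationary case |φ| < 1, the unconditional moments are E(yt) = 0, var(yt) = (1 + θ² + 2φθ)σ² / (1 − φ²), and γ(1) = (1 + φθ)(φ + θ)σ² / (1 − φ²). A simulation sketch in Python/NumPy checks them; φ = 0.5, θ = 0.4, σ² = 1 are arbitrary values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(4)
phi, theta, n = 0.5, 0.4, 500_000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]

# Closed forms for the ARMA(1,1) moments (sigma^2 = 1):
gamma0 = (1 + theta**2 + 2 * phi * theta) / (1 - phi**2)   # = 2.08
gamma1 = (1 + phi * theta) * (phi + theta) / (1 - phi**2)  # = 1.44

print(y.mean(), 0.0)                                    # mean near 0
print(y.var(), gamma0)                                  # variance
print((y[1:] * y[:-1]).mean() - y.mean() ** 2, gamma1)  # gamma(1)
```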
