
Chapter 5

1. Which of the following assumptions are required to show the consistency, unbiasedness and efficiency of the OLS estimator?
(i) E(u_t) = 0
(ii) Var(u_t) = σ²
(iii) Cov(u_t, u_t-j) = 0
(iv) u_t ~ N(0, σ²)

(ii) and (iv) only

(i) and (iii) only

(i), (ii), and (iii) only

(i), (ii), (iii), and (iv) only

All of the assumptions listed in (i) to (iii) are required to show that the OLS
estimator has the desirable properties of consistency, unbiasedness and
efficiency. However, it is not necessary to assume normality (iv) to derive
the above results for the coefficient estimates. This assumption is only
required in order to construct test statistics that follow the standard
statistical distributions; in other words, it is only required for hypothesis
testing and not for coefficient estimation.

2. Which of the following may be consequences of one or more of the CLRM assumptions being violated?
(i) The coefficient estimates are not optimal
(ii) The standard error estimates are not optimal
(iii) The distributions assumed for the test statistics are inappropriate
(iv) Conclusions regarding the strength of relationships between the
dependent and independent variables may be invalid.

(ii) and (iv) only

(i) and (iii) only

(i), (ii), and (iii) only

(i), (ii), (iii), and (iv)


If one or more of the assumptions is violated, either the coefficients could
be wrong or their standard errors could be wrong, and in either case, any
hypothesis tests used to investigate the strength of relationships between
the explanatory and explained variables could be invalid. So all of (i) to
(iv) are true.

3. What is the meaning of the term heteroscedasticity?

The variance of the errors is not constant

The variance of the dependent variable is not constant

The errors are not linearly independent of one another

The errors have non-zero mean


By definition, heteroscedasticity means that the variance of the errors is
not constant.

4. Consider the following regression model:
y_t = β1 + β2 x2t + β3 x3t + u_t    (2)
Suppose that a researcher is interested in conducting White's heteroscedasticity test using the residuals from an estimation of (2). What would be the most appropriate form for the auxiliary regression?
(i)

(ii)

(iii)

(iv)

The first thing to think about is what should be the dependent variable for
the auxiliary regression. Two possibilities are given in the question: u_t-squared and u_t. Recall that the formula for the variance of any random variable u_t is E[(u_t - E(u_t))²], and that E(u_t) is zero, by the first assumption of the classical linear regression model. Therefore, the variance of the random variable simplifies to E[u_t²]. Thus, our proxy for the variance of the disturbances at
each point in time t becomes the squared residual. Thus, answers c and d,
which contain u rather than u-squared, are both incorrect. The next issue
is to determine what should be the explanatory variables in the auxiliary
regression. Since, in order to be homoscedastic, the disturbances should
have constant variance with respect to all variables, we could put any
variables we wished in the equation. However, White's test employs the
original explanatory variables, their squares, and their pairwise cross-
products. A regression containing a lagged value of u as an explanatory
variable would be appropriate for testing for autocorrelation (i.e. whether
u is related to its lagged values) but not for heteroscedasticity. Thus (ii) is
correct.
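As an illustration only (not part of the original question), the following minimal Python sketch runs a White test of this form using statsmodels' built-in routine; the data and variable names (y, x2, x3) are hypothetical placeholders for model (2).

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_white

    rng = np.random.default_rng(0)
    x2, x3 = rng.normal(size=100), rng.normal(size=100)    # hypothetical regressors
    y = 1 + 0.5 * x2 - 0.3 * x3 + rng.normal(size=100)     # hypothetical dependent variable

    X = sm.add_constant(np.column_stack([x2, x3]))
    res = sm.OLS(y, X).fit()

    # het_white regresses the squared residuals on the original regressors,
    # their squares and their cross-products, as described above
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(res.resid, X)
    print(lm_stat, lm_pvalue)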

5. Consider the following regression model:
y_t = β1 + β2 x2t + β3 x3t + u_t    (2)
Suppose that model (2) is estimated using 100 quarterly observations, and that a test of the type described in question 4 is conducted. What would be the appropriate χ² critical value with which to compare the test statistic, assuming a 10% size of test?

2.71

118.50

11.07

9.24

The chi-squared distribution has only one degree of freedom parameter, which is the number of restrictions being placed on the model under the
null hypothesis. The null hypothesis of interest will be that all of the
coefficients in the auxiliary regression, except the intercept, are jointly
zero. Thus, under the correct answer to question 4, b, this would mean
that alpha2 through alpha6 were jointly zero under the null hypothesis.
This implies a chi-squared distribution with 5 degrees of freedom. The 10%
critical value from this distribution is approximately 9.24 and thus d is
correct.
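For reference, this critical value can be reproduced in Python (a sketch, assuming scipy is available):

    from scipy.stats import chi2

    # upper 10% critical value of a chi-squared distribution with 5 degrees of freedom
    print(chi2.ppf(0.90, df=5))    # approximately 9.236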

6. What would be the consequences for the OLS estimator if heteroscedasticity is present in a regression model but ignored?

It will be biased

It will be inconsistent

It will be inefficient
All of (a), (b) and (c) will be true
Under heteroscedasticity, provided that all of the other assumptions of the
classical linear regression model are adhered to, the coefficient estimates
will still be consistent and unbiased, but they will be inefficient. Thus c is
correct. The upshot is that whilst this would not result in wrong coefficient
estimates, our measure of the sampling variability of the coefficients, the
standard errors, would probably be wrong. The stronger the degree of
heteroscedasticity (i.e. the more the variance of the errors changed over
the sample), the more inefficient the OLS estimator would be.

7. Which of the following are plausible approaches to dealing with a model that exhibits heteroscedasticity?
(i) Take logarithms of each of the variables
(ii) Use suitably modified standard errors
(iii) Use a generalised least squares procedure
(iv) Add lagged values of the variables to the regression equation.

(ii) and (iv) only

(i) and (iii) only

(i), (ii), and (iii) only

(i), (ii), (iii), and (iv)

Taking logarithms can reduce heteroscedasticity that is related to the scale of the variables, suitably modified (heteroscedasticity-robust) standard errors remain valid when the error variance is not constant, and a generalised least squares procedure directly models the changing variance. Adding lagged values of the variables is a response to autocorrelation rather than to heteroscedasticity. Thus (i), (ii) and (iii) only, c, is correct.
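As a brief illustration of approach (ii), heteroscedasticity-robust (White) standard errors can be requested directly in statsmodels; this is a sketch with simulated, hypothetical data, not part of the original question.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    u = rng.normal(size=200) * (1 + np.abs(x))      # error variance depends on x
    y = 2 + 0.7 * x + u

    X = sm.add_constant(x)
    res_ols = sm.OLS(y, X).fit()                    # conventional standard errors
    res_hc = sm.OLS(y, X).fit(cov_type="HC1")       # White heteroscedasticity-robust standard errors
    print(res_ols.bse, res_hc.bse)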

8. Negative residual autocorrelation is indicated by which one of the following?

A cyclical pattern in the residuals

An alternating pattern in the residuals

A complete randomness in the residuals

Residuals that are all close to zero


Negative residual autocorrelation implies a negative relationship between
one residual and the immediately preceding or immediately following
ones. This implies that, if negative autocorrelation is present, the residuals
will be changing sign more frequently than they would if there were no
autocorrelation. Thus negative autocorrelation would result in an
alternating pattern in the residuals, as they keep crossing the time axis. A
cyclical pattern would have arisen in the residuals if they were positively
autocorrelated. This would be since adjacent residuals would have the
same sign more frequently than would have been the case if there were
no autocorrelation, resulting in the time series plot of the residuals not
crossing the time axis very often. A complete randomness in the residuals
would occur if there were no autocorrelation, while the residuals being all
close to zero could occur if there were significant autocorrelation in either
direction or if there were no significant autocorrelation!

9. Which of the following could be used as a test for autocorrelation up to third order?

The Durbin Watson test

White's test

The RESET test

The Breusch-Godfrey test

The Durbin Watson test is one for detecting residual autocorrelation, but it
is designed to pick up first order autocorrelation (that is, a statistically
significant relationship between a residual and the residual one period
ago). As such, the test would not detect third order autocorrelation (that
is, a statistically significant relationship between a residual and the
residual three periods ago). The Breusch-Godfrey test is also a test for
autocorrelation, but it takes a more general auxiliary regression approach,
and therefore it can be used to test for autocorrelation of an order higher
than one. White's test and the RESET test are not autocorrelation tests,
but rather are tests for heteroscedasticity and appropriate functional form
respectively.
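A minimal sketch of running a Breusch-Godfrey test for autocorrelation up to third order in Python (statsmodels assumed; the data are simulated and hypothetical):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import acorr_breusch_godfrey

    rng = np.random.default_rng(2)
    x = rng.normal(size=100)
    y = 1 + 0.5 * x + rng.normal(size=100)          # hypothetical data

    res = sm.OLS(y, sm.add_constant(x)).fit()

    # auxiliary regression of the residuals on the original regressors and three lagged residuals
    lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(res, nlags=3)
    print(lm_stat, lm_pvalue)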

10. If a Durbin Watson statistic takes a value close to zero, what will be the value of the first order autocorrelation coefficient?

Close to zero

Close to plus one

Close to minus one

Close to either minus one or plus one

Recall that the formula relating the value of the Durbin Watson (DW) statistic and the coefficient of first order autocorrelation, ρ, is
DW is approximately equal to 2(1 - ρ)
Thus, if DW is close to zero, the first order autocorrelation coefficient, ρ, must be close to +1. A value of ρ close to -1 would suggest negative autocorrelation, while a value close to either +1 or -1 would suggest that we thought there was strong autocorrelation but we didn't know whether it
was positive or negative! Such a situation would not happen in practice
because DW can distinguish between the two, and positive and negative
autocorrelation would result in completely different values of the DW
statistic.

11. Suppose that the Durbin Watson test is applied to a regression containing two explanatory variables plus a constant (e.g. equation 2 above) with 50 data points. The test statistic takes a value of 1.53. What is the appropriate conclusion?

Residuals appear to be negatively autocorrelated

Residuals appear not to be autocorrelated

The test result is inconclusive

The value of the test statistic is given as 1.53, so all that remains to be
done is to find the critical values. Recall that the DW statistic has two
critical values: a lower and an upper one. If there are 2 explanatory
variables plus a constant in the regression, this would imply that using my
notation, k = 3 and k' = 3 - 1 = 2. Thus, we would look in the k' = 2 column
for the lower and upper values, which would be in the row corresponding
to n = 50 data points. The relevant critical values would be 1.40 and 1.63.
Therefore, since the test statistic falls between the lower and upper critical
values, the result is in the inconclusive region. We therefore cannot say
from the result of this test whether first order serial correlation is present
or not.
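The DW statistic itself is straightforward to compute from a set of residuals; a sketch (statsmodels assumed, with simulated hypothetical data for n = 50):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(3)
    x2, x3 = rng.normal(size=50), rng.normal(size=50)     # hypothetical regressors
    y = 1 + 0.5 * x2 + 0.2 * x3 + rng.normal(size=50)

    res = sm.OLS(y, sm.add_constant(np.column_stack([x2, x3]))).fit()
    dw = durbin_watson(res.resid)

    # compare dw with the tabulated lower and upper bounds (dL and dU);
    # a value between the two bounds falls in the inconclusive region
    print(dw)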

12. Suppose that a researcher wishes to test for autocorrelation using an approach based on an auxiliary regression. Which one of the following auxiliary regressions would be most appropriate?

(i)

(ii)

(iii)

(iv)

Residual autocorrelation is concerned with the relationship between the current (time t) value of the residual and its previous values. Therefore,
the dependent variable in the auxiliary regression must be the residual
itself and not its square. So (i) and (ii) are clearly inappropriate. This also
suggests that it should be lagged values of the residuals that should be
the regressors in the auxiliary regression and not any of the original
explanatory variables (the x's).

13. If OLS is used in the presence of autocorrelation, which of the following will be likely consequences?
(i) Coefficient estimates may be misleading
(ii) Hypothesis tests could reach the wrong conclusions
(iii) Forecasts made from the model could be biased
(iv) Standard errors may be inappropriate

(ii) and (iv) only

(i) and (iii) only

(i), (ii), and (iii) only

(i), (ii), (iii), and (iv)

The consequences of autocorrelation are similar to those of heteroscedasticity. Thus the coefficient estimates will still be okay (i.e.
they will be consistently and unbiasedly estimated) provided that the
other assumptions of the classical linear regression model are valid. The
OLS estimator will be inefficient in the presence of autocorrelation, which
implies that the standard errors could be sub-optimal. Since the standard
errors may be inappropriate in the presence of autocorrelation, it is true
that hypothesis tests could reach the wrong conclusion, since the t-test
statistic contains the coefficient standard error in it. As the parameter
estimates should still be correct, forecasts obtained from the model will
only use the coefficients and not the standard errors, so the forecasts
should be unbiased. Therefore (ii) and (iv) are likely consequences and so
a is correct.

14. Which of the following are plausible approaches to dealing with residual autocorrelation?
(i) Take logarithms of each of the variables
(ii) Add lagged values of the variables to the regression equation
(iii) Use dummy variables to remove outlying observations
(iv) Try a model in first differenced form rather than in levels.

(ii) and (iv) only


(i) and (iii) only

(i), (ii), and (iii) only

(i), (ii), (iii), and (iv)

Autocorrelation often arises as a result of dynamic (i.e. time-series) structure in the dependent variable that is not being captured by the
model that has been estimated. Such structure will end up in the
residuals, resulting in residual autocorrelation. Therefore, an appropriate
response would be one that ensures that the model allows for this
dynamic structure. Either adding lagged values of the variables or using a
model in first differences will be plausible approaches. However,
estimating a static model with no lags in the logarithmic form would not
allow for the dynamic structure in y and would therefore not remove any
residual autocorrelation that had been present. Taking logs is often
proposed as a response to heteroscedasticity or non-linearities, as shown
by the White and Ramsey tests respectively. Similarly, removing a small
number of outliers will also probably not remove the autocorrelation.
Removing outliers using dummy variables is often suggested as a
response to residual non-normality.

15. Which of the following could result in autocorrelated residuals?
(i) Slowness of response of the dependent variable to changes in the
values of the independent variables
(ii) Over-reactions of the dependent variable to changes in the
independent variables
(iii) Omission of relevant explanatory variables that are autocorrelated
(iv) Outliers in the data

(ii) and (iv) only

(i) and (iii) only

(i), (ii), and (iii) only

(i), (ii), (iii), and (iv)

Autocorrelation often arises as a result of dynamic (i.e. time-series) structure in the dependent variable that is not being captured by the
model that has been estimated. This dynamic structure could result either
from slow adjustment of the dependent variable to changes in the
independent variables or from over-adjustment of the dependent variable
to changes in the independent variables. The former may be termed
"under-reaction" and would result in positive residual autocorrelation, while the latter may be termed "over-reaction", which would result in
negative residual autocorrelation. It is also the case that omitting a
relevant explanatory variable (in other words, one that is an important
determinant of y) that is itself autocorrelated, will also result in residual
autocorrelation. Outliers in the data are unlikely to cause residual
autocorrelation.

16. Including relevant lagged values of the dependent variable on the right
hand side of a regression equation could lead to which one of the
following?

Biased but consistent coefficient estimates

Biased and inconsistent coefficient estimates

Unbiased but inconsistent coefficient estimates

Unbiased and consistent but inefficient coefficient estimates

Including lagged values of the dependent variable y will cause the assumption of the CLRM that the explanatory variables are non-stochastic
to be violated. This arises since the lagged value of y is now being used as
an explanatory variable and, since y at time t-1 will depend on the value
of u at time t-1, it must be the case that lagged values of y are stochastic
(i.e. they have some random influences and are not fixed in repeated
samples). The result of this is that the OLS estimator in the presence of
lags of the dependent variable will produce biased but consistent
coefficient estimates. Thus, as the sample size increases towards infinity,
we will still obtain the optimal parameter estimates, although these
estimates could be biased in small samples. Note that no problem of this
kind arises whatever the sample size when using only lags of the
explanatory variables in the regression equation.

17. Near multicollinearity occurs when

Two or more explanatory variables are perfectly correlated with one another

The explanatory variables are highly correlated with the error term

The explanatory variables are highly correlated with the dependent variable

Two or more explanatory variables are highly correlated with one another

Near multicollinearity is defined as the situation where there is a high, but not perfect, correlation between two or more of the explanatory variables.
There is no definitive answer as to how big the correlation has to be
before it is defined as high. If the explanatory variables were highly
correlated with the error terms, this would imply that these variables were
stochastic (and OLS optimality requires that they are not). If the
explanatory variables are highly correlated with the dependent variable,
this would not be multicollinearity, which only considers the relationship
between the explanatory variables.

18. Which one of the following is NOT a plausible remedy for near
multicollinearity?

Use principal components analysis

Drop one of the collinear variables

Use a longer run of data

Take logarithms of each of the variables

Principal components analysis (PCA) is a plausible response to a finding of near multicollinearity. This technique works by transforming the original
explanatory variables into a new set of explanatory variables that are
constructed to be orthogonal to one another. The regression is then one of
y on a constant and the new explanatory variables. Another possible
approach would be to drop one of the collinear variables, which will clearly
solve the multicollinearity problem, although there may be other
objections to doing this. Another approach would involve using a longer
run of data. Such an approach would involve increasing the size of the
sample, which would imply more information upon which to base the
parameter estimates, and therefore a reduction in the coefficient standard
errors, thus counteracting the effect of the multicollinearity. Finally, taking
logarithms of the variables will not remove any near multicollinearity.
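A compact sketch of the PCA idea in Python (scikit-learn and statsmodels assumed; the data are simulated and hypothetical): the collinear regressors are replaced by their principal components, which are orthogonal by construction, and y is regressed on a constant and those components.

    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    x2 = rng.normal(size=200)
    x3 = x2 + 0.05 * rng.normal(size=200)           # nearly collinear with x2
    y = 1 + 0.5 * x2 + 0.5 * x3 + rng.normal(size=200)

    X = np.column_stack([x2, x3])
    components = PCA(n_components=2).fit_transform(X)   # orthogonal principal components

    # regression of y on a constant and the principal components
    res = sm.OLS(y, sm.add_constant(components)).fit()
    print(res.params)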

19. What will be the properties of the OLS estimator in the presence of multicollinearity?

It will be consistent, unbiased and efficient

It will be consistent and unbiased but not efficient

It will be consistent but not unbiased

It will not be consistent

In fact, in the presence of near multicollinearity, the OLS estimator will still
be consistent, unbiased and efficient. This is the case since none of the
four (Gauss-Markov) assumptions of the CLRM have been violated. You
may have thought that, since the standard errors are usually wide in the
presence of multicollinearity, the OLS estimator must be inefficient. But
this is not true: the multicollinearity will simply mean that it is hard to
obtain small standard errors due to insufficient separate information
between the collinear variables, not that the standard errors are wrong.

20. Which one of the following is NOT an example of mis-specification of functional form?

a) Using a linear specification when y scales as a function of the squares of x

b) Using a linear specification when a double-logarithmic model would be more appropriate

c) Modelling y as a function of x when in fact it scales as a function of 1/x

d) Excluding a relevant variable from a linear regression model

Answers a, b and c are all examples of choosing the wrong functional form for the relationship between y and the explanatory variable. Excluding a relevant variable is certainly a mis-specification of the model, but it is an omitted variable problem rather than a mis-specification of functional form, so d is the correct answer.

21. If the residuals from a regression estimated using a small sample of data are not normally distributed, which one of the following consequences may arise?
The coefficient estimates will be unbiased but inconsistent

The coefficient estimates will be biased but consistent

The coefficient estimates will be biased and inconsistent

Test statistics concerning the parameters will not follow their assumed
distributions.

Only assumptions labelled 1-4 in the lecture material are required to show
the consistency, unbiasedness and efficiency of the OLS estimator, and not
the assumption that the disturbances are normally distributed. The latter
assumption is only required for hypothesis testing and not for optimally
determining the parameter estimates. Therefore, the only problem that
may arise if the residuals from a small-sample regression are not normally
distributed is that the test statistics may not follow the required
distribution. You may recall that the normality assumption was in fact
required to show that, when the variance of the disturbances is unknown
and has to be estimated, the t-statistics follow a t-distribution.

22. A leptokurtic distribution is one which

Has fatter tails and a smaller mean than a normal distribution with the
same mean and variance

Has fatter tails and is more peaked at the mean than a normal distribution with the same mean and variance

Has thinner tails and is more peaked at the mean than a normal distribution with the same mean and variance

Has thinner tails than a normal distribution and is skewed

By definition, a leptokurtic distribution is one that has fatter tails than a normal distribution and is more peaked at the mean. In other words, a leptokurtic distribution will have more of the probability mass in the tails and close to the centre, and less in the intermediate regions (the shoulders). Skewness is not a required characteristic of leptokurtic distributions.

23. Under the null hypothesis of a Bera-Jarque test, the distribution has
Zero skewness and zero kurtosis

Zero skewness and a kurtosis of three

Skewness of one and zero kurtosis

Skewness of one and kurtosis of three

A normal distribution has no skewness (i.e. the coefficient of skewness will be zero), since it is symmetrical about its mean value, and it will have just the right amount of kurtosis. A normal distribution is defined to have a coefficient of kurtosis of 3. Therefore, a normal distribution will have excess kurtosis of zero (excess kurtosis being defined as kurtosis minus 3).
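A minimal sketch of checking these quantities and the Bera-Jarque statistic in Python (scipy assumed; the sample below is simulated purely for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    resid = rng.normal(size=500)                    # hypothetical regression residuals

    print(stats.skew(resid))                        # close to 0 for a normal sample
    print(stats.kurtosis(resid, fisher=False))      # close to 3, i.e. excess kurtosis close to 0

    jb = stats.jarque_bera(resid)                   # jointly tests skewness = 0 and kurtosis = 3
    print(jb.statistic, jb.pvalue)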

24. Which one of the following would be a plausible response to a finding of residual non-normality?

Use a logarithmic functional form instead of a linear one

Add lags of the variables on the right hand side of the regression model

Estimate the model in first differenced form

Remove any large outliers from the data

It is quite often the case that one or two observations that do not fit into
the pattern of all of the others cause residual non-normality. Such
observations, often termed "outliers", will be a long way away from the
line and will therefore have large residuals (either positive or negative).
When these residuals are used as inputs to the skewness and kurtosis
calculation, they will be raised to the third and fourth powers respectively
in the numerators of the two formulae. The result is that the tails of the
distribution can be made much bigger than they otherwise would have
been by the presence of a small number of big outliers. Thus an
appropriate response may be to remove these outliers from the sample
altogether, either by physically deleting them from the data set or by
using a dummy variables approach to knock them out one at a time. Of
course, many econometricians would question this approach and would
argue that it is better to leave the residuals as non-normal than to remove
information from the sample.

25. A researcher tests for structural stability in the following regression model:

y_t = β1 + β2 x2t + β3 x3t + u_t    (3)
The total sample of 200 observations is split exactly in half for the sub-
sample regressions. Which would be the unrestricted residual sum of
squares?

The RSS for the whole sample

The RSS for the first sub-sample

The RSS for the second sub-sample

The sum of the RSS for the first and second sub-samples

The unrestricted regression is always the one where the restriction has not
been imposed. The relevant restriction that is being tested in this case is
that the parameter values are the same in both of the sub-samples. Thus
the unrestricted regression would be the one where the parameter
estimates were allowed to be freely determined in each of the sub-
samples so that in general the two sets of parameters for the sub-samples
would be different from one another. Therefore the unrestricted RSS would
be the one where the coefficients were allowed to vary across the sub-
samples, which would be where the two regressions were estimated
separately and the RSS summed. The restricted RSS would be the one that
resulted from the regression imposing the restriction, which would be
where the coefficients were forced to be equal for the two sub-samples,
i.e. where only one regression is conducted on the whole sample together.

26. Suppose that the residual sums of squares for the three regressions corresponding to the Chow test described in question 25 (the whole-sample regression and the two sub-sample regressions) are 156.4, 76.2 and 61.9 respectively. What is the value of the Chow F-test statistic?

4.3

7.6

5.3

8.6
Recall that the formula for calculating a Chow test is
((RSS - (RSS1 + RSS2)) / (RSS1 + RSS2)) x ((T - 2k) / k)
Plugging the relevant numbers into the formula would give
((156.4 - (76.2 + 61.9)) / (76.2 + 61.9)) x ((200 - 6) / 3)
= 8.57, which is 8.6 to one decimal place, so d is correct. In this case, the test is one of how
big is the increase in the RSS when the data are forced to be generated by
a single regression equation. T is the number of observations in the whole
sample, while k usually denotes the number of regressors in the
unrestricted regression including a constant (or the number of parameters
to be estimated in the unrestricted regression) in the standard F-test
formula. Since the unrestricted regression now comes in two parts, the
total number of parameters to be estimated in the unrestricted regression
is 2k. The number of restrictions will be k, since the restriction is that each
of the parameter values is equal across the two sub-samples.
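The arithmetic can be verified with a few lines of Python (values taken from the question):

    # Chow test statistic using the restricted (whole-sample) RSS and the two sub-sample RSSs
    rss, rss1, rss2 = 156.4, 76.2, 61.9
    T, k = 200, 3

    chow = ((rss - (rss1 + rss2)) / (rss1 + rss2)) * ((T - 2 * k) / k)
    print(round(chow, 2))    # approximately 8.57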

27. What would be the appropriate 5% critical value for the test described in questions 25 and 26?
2.6

8.5

1.3

9.2

The degrees of freedom for a standard F-test are given by (m, T-k). In this
case, the total number of parameters to be estimated in the unrestricted
regression is 2k, and the number of restrictions is k, so that the
appropriate critical value would be one from an F(k, T-2k) distribution. T =
200, k = 3, so this would be an F(3,194). The closest value to this in the
table will be an F(3,200) with critical value 2.6.
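This critical value can be checked directly (a sketch, assuming scipy is available):

    from scipy.stats import f

    # 5% upper-tail critical value of an F(3, 194) distribution
    print(f.ppf(0.95, dfn=3, dfd=194))    # approximately 2.65, i.e. 2.6 to one decimal place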

28. Suppose now that a researcher wants to run a forward predictive failure test on the last 5 observations using the same model and data as in question 25. Which would now be the unrestricted residual sum of squares?

The RSS for the whole sample regression

The RSS for the long sub-sample regression

The RSS for the short sub-sample regression


The sum of the RSS for the long and short sub-sample regressions

The unrestricted regression would, as always, be the one where the restrictions have not been imposed. In the context of a predictive failure test, the restriction is that the model's prediction errors for the last few
observations are zero. This would be equivalent to forcing the residuals for
the last 5 observations to all be zero. This could be achieved either by
using separate dummy variables for each of these observations to remove
them or more easily by simply removing them from the sample. Therefore,
the unrestricted RSS would be the one from the long sub-sample, i.e. the
first 195 observations. Note that under the predictive failure test, we do
not have a short sub-sample regression. In other words, in contrast to the
Chow test that required the estimation of 3 regressions, the predictive
failure test involves the estimation of only 2 regressions.

29. If the two RSS for the test described in question 28 are 156.4
and 128.5, what is the value of the test statistic?

13.8

14.3

8.3

8.6

Recall that the formula for calculating a predictive failure test is
((RSS - RSS1) / RSS1) x ((T1 - k) / T2)
Plugging the relevant numbers into the formula would give
((156.4 - 128.5) / 128.5) x ((195 - 3) / 5)
= 8.34, which is 8.3 to one decimal place.
T1 is the number of observations used in the unrestricted regression,
which will be the one for the long sub-sample, i.e. 195 observations, k is
the number of regressors in the unrestricted regression including a
constant, and the number of restrictions is the number of observations in
the short sub-sample to be predicted, 5.
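Again, the calculation is easy to reproduce (values from the question):

    # forward predictive failure test statistic
    rss, rss1 = 156.4, 128.5     # whole-sample RSS and long sub-sample RSS
    T1, T2, k = 195, 5, 3        # long sub-sample size, observations predicted, parameters

    stat = ((rss - rss1) / rss1) * ((T1 - k) / T2)
    print(round(stat, 2))        # approximately 8.34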

30. If a relevant variable is omitted from a regression equation, the consequences would be that:
(i) The standard errors would be biased
(ii) If the excluded variable is uncorrelated with all of the included
variables, all of the slope coefficients will be inconsistent.
(iii) If the excluded variable is uncorrelated with all of the
included variables, the intercept coefficient will be inconsistent.
(iv) If the excluded variable is uncorrelated with all of the
included variables, all of the slope and intercept coefficients will
be consistent and unbiased but inefficient.
(ii) and (iv) only

(i) and (iii) only

(i), (ii), and (iii) only

(i), (ii), (iii), and (iv)

If a relevant variable is omitted from a regression equation, then the standard conditions for OLS optimality will not apply. These conditions
implicitly assumed that the model was correctly specified in the sense that
it includes all of the relevant variables. If relevant variables (that is,
variables that are in fact important determinants of y) are excluded from
the model, the standard errors could be biased (thus (i) is true), and the slope coefficients will be inconsistently estimated unless the excluded variable(s) is (are) uncorrelated with all of the included explanatory variable(s), so (ii) is wrong. If this condition holds, the slope estimates will be consistent, unbiased and efficient (so (iv) is wrong), but the intercept estimator will still be inconsistent (so (iii) is correct).

31. A parsimonious model is one that

Includes too many variables

Includes as few variables as possible to explain the data

Is a well-specified model

Is a mis-specified model
By definition, a parsimonious model is one that uses as few variables as
possible to explain the important features of the data. A parsimonious
model could be either well-specified or mis-specified, while a model
including too many variables may also be termed a profligate or an over-parameterised model.

32. An overparameterised model is one that


Includes too many variables

Includes as few variables as possible to explain the data

Is a well-specified model

Is a mis-specified model

An over-parameterised or profligate model is one that includes too many variables. Such a model would include some statistically
insignificant variables that do not help to explain variations in y. Such
models are undesirable since they use up valuable degrees of freedom
and therefore reduce the statistical significance of variables in the
regression. They may also be inclined to fit to sample-specific features of
the data. This would imply that their forecasting power would be limited.

33. Which one of the following is a disadvantage of the general to specific or LSE (Hendry) approach to building econometric models, relative to the specific to general approach?

Some variables may be excluded at the first stage leading to coefficient biases

The final model may lack theoretical interpretation

The final model may be statistically inadequate

If the initial model is mis-specified, all subsequent steps will be invalid

The LSE or Hendry general to specific methodology involves the estimation of a large and general model at the first stage with subsequent
simplification once a statistically adequate model has been found. Thus,
since the first stage model includes all possible relevant variables, there is
unlikely to be a coefficient bias resulting from an omitted variable (so a is
incorrect). The general to specific approach also involves conducting
diagnostic tests for model adequacy at every stage. So, provided that an
initial model that is well specified could be found, there is no reason to
accept any later stage model that is not statistically adequate since any
statistically inadequate model should cause the researcher to go back to
the model of the previous stage (so c is incorrect). It is true that if an
initial model is mis-specified, all subsequent modelling steps will be invalid
since the hypothesis tests upon which model rearrangement or
simplification may have been based would be invalid. However, it is an advantage of the general to specific approach that it avoids such problems. Thus d is incorrect. Finally, it is true that if a researcher follows
the general to specific approach, it is possible that the final model will lack
theoretical meaning although it should be statistically okay.

34. Which of the following consequences might apply if an explanatory variable in a regression is measured with error?

(i) The corresponding parameter will be estimated inconsistently

(ii) The corresponding parameter estimate will be biased towards zero

(iii) The assumption that the explanatory variables are non-stochastic will
be violated

(iv) No serious consequences will arise

(i) only
(i) and (ii) only
(i), (ii), and (iii) only
(iv) only
When there is measurement error in an explanatory variable, all of (i) to
(iii) could occur. So parameter estimation may be inconsistent (thus
parameter estimates will not converge upon their true values even as the
sample size tends to infinity), the parameters are always biased towards
zero, and obviously measurement error implies noise in the explanatory
variables and thus they will be stochastic.
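A small simulation sketch (NumPy assumed; all values are hypothetical) illustrates the attenuation described above: when noise is added to the regressor, the OLS slope estimate is pulled towards zero.

    import numpy as np

    rng = np.random.default_rng(6)
    n, beta = 100_000, 1.0

    x_true = rng.normal(size=n)                     # true explanatory variable
    y = beta * x_true + rng.normal(size=n)          # dependent variable
    x_obs = x_true + rng.normal(size=n)             # regressor observed with error

    # OLS slope of y on the mismeasured regressor:
    # roughly beta * Var(x_true) / (Var(x_true) + Var(noise)) = 0.5 here
    slope = np.polyfit(x_obs, y, 1)[0]
    print(slope)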

35. Which of the following consequences might apply if the explained variable in a regression is measured with error?

(i) The corresponding parameter will be estimated inconsistently


(ii) The corresponding parameter estimate will be biased towards zero
(iii) The assumption that the explanatory variables are non-stochastic will
be violated
(iv) No serious consequences will arise

(i) only

(i) and (ii) only

(i), (ii), and (iii) only

(iv) only

In the case where the explained variable has measurement error, there
will be no serious consequences: the standard regression framework is
designed to allow for this as an error term also influences the value of the
explained variable. This is in stark contrast to the situation where there is
measurement error in the explanatory variables, which is a potentially
serious problem because they are assumed to be non-stochastic.
