7  Regression with heteroskedastic random disturbances

7.1  Introduction

In the regression equations we have studied up to now, we have always assumed that the disturbances $u_i$ satisfy the so-called standard conditions given by

(7.1.1)  $E(u_i) = 0, \qquad i = 1, \dots, n$

(7.1.2)  $\operatorname{Var}(u_i) = E(u_i^2) = \sigma^2, \qquad i = 1, \dots, n$

(7.1.3)  $\operatorname{Cov}(u_i, u_j) = E(u_i u_j) = 0, \qquad i \neq j$

It is evident that the condition described by (7.1.2), saying that all the disturbances should have the same variability, is an idealized situation which econometricians very seldom meet in practice. For instance, consider the case where the dependent variable denotes a household's expenditure on food and the exogenous variable denotes the household's income. It is a well-known empirical fact that the variation of food expenditure among high-income households is much larger than the variation among low-income households. In such applications an assumption of a constant variance is simply not appropriate. Instead of (7.1.2) we have to assume that the error variance is some function of the household's income. For example, we might assume that

(7.1.4)  $\operatorname{Var}(u_i) = \sigma_i^2 = \sigma^2 x_i^2$

but, of course, many other functional forms could be imagined.
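To make this concrete, the following small Python sketch simulates household data under the assumption (7.1.4). All numbers below (sample size, parameter values, income range) are purely illustrative choices, not taken from any real data set; the point is simply that the spread of the disturbances grows with income.

```python
import numpy as np

rng = np.random.default_rng(0)           # arbitrary seed, for reproducibility only

n = 200
income = rng.uniform(10, 100, size=n)    # x_i: household income (illustrative units)
alpha, beta, sigma = 5.0, 0.3, 0.05      # 'true' parameters, chosen for illustration

# Heteroskedastic disturbances with Var(u_i) = sigma^2 * x_i^2, as in (7.1.4)
u = rng.normal(0.0, sigma * income)
food = alpha + beta * income + u         # y_i: food expenditure

# The disturbances scatter much more among the high-income households:
print(u[income < 40].std())              # small spread at low incomes
print(u[income > 70].std())              # clearly larger spread at high incomes
```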

Under the assumptions (7.1.1)–(7.1.3) we have clarified above the properties of the ordinary least squares (OLS) method and the relevant procedures for testing hypotheses on the regression coefficients. A natural question to ask is which of these properties and testing procedures will survive when we retain the assumptions (7.1.1) and (7.1.3) but drop the assumption (7.1.2) of homoskedastic disturbances?

7.2 Consequences of heteroskedastic disturbances

In order to discuss these topics explicitly we can, without sacrificing any essential points, consider the simple regression

(7.2.1)  $y_i = \alpha + \beta x_i + u_i, \qquad i = 1, \dots, n$

where we replace assumption (7.1.2) by the more general

(7.2.2)  $\operatorname{Var}(u_i) = E(u_i^2) = \sigma_i^2, \qquad i = 1, \dots, n$

In this regression we know that the OLS estimator of the slope parameter $\beta$ is given by

(7.2.3)  $\hat{\beta} = \dfrac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}$

which, by substituting for $y_i$ from (7.2.1), can be written

(7.2.4)  $\hat{\beta} = \beta + \dfrac{\sum_i (x_i - \bar{x}) u_i}{\sum_i (x_i - \bar{x})^2}$

From this formula we see directly that

(7.2.5)  $E(\hat{\beta}) = \beta$

and that

(7.2.6)  $\operatorname{Var}(\hat{\beta}) = \dfrac{\sum_i (x_i - \bar{x})^2 \sigma_i^2}{\left[\sum_i (x_i - \bar{x})^2\right]^2}$

From (7.2.5) we observe directly that the OLS estimator $\hat{\beta}$ is still an unbiased estimator of $\beta$. Under general conditions it also follows that $\hat{\beta}$ will be a consistent estimator of $\beta$. Hence, the OLS estimator obtained when the disturbances are heteroskedastic shares these two 'good' properties with the homoskedastic case. But, of course, when the disturbances are heteroskedastic we can find more efficient methods of estimation, e.g. generalized least squares (GLS). A more serious objection to the OLS estimator when the disturbances are heteroskedastic is that the variances of the estimators will be different from those obtained when the disturbances are homoskedastic. For example, we remember that in the homoskedastic case the variance of $\hat{\beta}$ is given by

(7.2.7)  $\operatorname{Var}(\hat{\beta}) = \dfrac{\sigma^2}{\sum_i (x_i - \bar{x})^2}$

which can be quite different from that given by (7.2.6). We also remember that the assumption that all disturbances have the same variance was crucial in deriving the distributions of our test statistics, the $t$- and the $F$-statistics. This means that the standard errors of estimates and the standard test statistics shown in the output of traditional regression programs will be wrong and unreliable.

However, White (1980) showed how one could derive a consistent estimator of the variance given by (7.2.6). In the literature one usually calls this estimator White's consistent variance estimator, although the estimator was already known in the statistical literature. A look at (7.2.6) shows that this variance depends on the unknown disturbance variances $\sigma_1^2, \dots, \sigma_n^2$. This situation, where the number of unknown parameters grows with the number of observations, often raises difficult estimation problems. The stationary point of the quadratic form underlying OLS might correspond to a saddle point and not to a minimum point. But in the present case things come out nicely. The reason is that the OLS estimators are consistent. White's proposal is to replace the $\sigma_i^2$ in (7.2.6) by the corresponding squared residuals $\hat{u}_i^2$, so that White's estimator becomes

(7.2.8)  $\widehat{\operatorname{Var}}(\hat{\beta}) = \dfrac{\sum_i (x_i - \bar{x})^2 \hat{u}_i^2}{\left[\sum_i (x_i - \bar{x})^2\right]^2}$
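A minimal numerical sketch of White's estimator, using simulated data (all parameter values below are arbitrary illustrative choices). It computes the OLS slope (7.2.3), the naive variance formula (7.2.7) and White's estimator (7.2.8), and shows that the two standard errors can differ noticeably under heteroskedasticity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data with heteroskedastic disturbances (illustrative choices)
n = 500
x = rng.uniform(1, 10, size=n)
u = rng.normal(0.0, 0.5 * x)             # sd grows with x_i, so sigma_i^2 varies
y = 2.0 + 1.5 * x + u

# OLS estimates, cf. (7.2.3)
dx = x - x.mean()
sxx = (dx ** 2).sum()
beta_hat = (dx * (y - y.mean())).sum() / sxx
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - alpha_hat - beta_hat * x     # residuals u-hat_i

# Naive variance (7.2.7): only valid under homoskedasticity
sigma2_hat = (resid ** 2).sum() / (n - 2)
var_naive = sigma2_hat / sxx

# White's estimator (7.2.8): replace sigma_i^2 by the squared residuals
var_white = (dx ** 2 * resid ** 2).sum() / sxx ** 2

print(var_naive ** 0.5, var_white ** 0.5)  # the two standard errors differ
```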

In order to indicate loosely why this might be a successful proposal, let us consider the numerator of (7.2.6). Since the explanatory variable is non-stochastic, it is evident that the following equation holds

(7.2.9)  $E\left[\sum_i (x_i - \bar{x})^2 u_i^2\right] = \sum_i (x_i - \bar{x})^2 \sigma_i^2$

Since the disturbances are given by

(7.2.10)  $u_i = y_i - \alpha - \beta x_i$

it is intuitive that an appeal to the law of large numbers will imply the convergence of

(7.2.11)  $\dfrac{1}{n}\sum_i (x_i - \bar{x})^2 u_i^2 \;-\; \dfrac{1}{n}\sum_i (x_i - \bar{x})^2 \sigma_i^2 \;\xrightarrow{\;p\;}\; 0$

Continuing this line of reasoning, it is also reasonable to expect, since the OLS estimators are consistent, that the residuals

(7.2.12)  $\hat{u}_i = y_i - \hat{\alpha} - \hat{\beta} x_i$

in some way will converge to the disturbances $u_i$. Then a reasonable guess is that

(7.2.13)  $\dfrac{1}{n}\sum_i (x_i - \bar{x})^2 \hat{u}_i^2 \;-\; \dfrac{1}{n}\sum_i (x_i - \bar{x})^2 u_i^2 \;\longrightarrow\; 0$

where the arrows in (7.2.11) and (7.2.13) indicate convergence in probability.
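This convergence is easy to check numerically. The following sketch (again with arbitrary simulated data) computes the two averages appearing in (7.2.13) for growing sample sizes; their difference shrinks as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)

def both_averages(n):
    """Return (1/n)*sum (x_i - xbar)^2 uhat_i^2 and the same average with the true u_i."""
    x = rng.uniform(1, 10, size=n)
    u = rng.normal(0.0, 0.5 * x)             # heteroskedastic disturbances
    y = 2.0 + 1.5 * x + u
    dx = x - x.mean()
    b = (dx * (y - y.mean())).sum() / (dx ** 2).sum()
    a = y.mean() - b * x.mean()
    resid = y - a - b * x
    return (dx ** 2 * resid ** 2).mean(), (dx ** 2 * u ** 2).mean()

for n in (50, 500, 5000, 50000):
    print(n, both_averages(n))               # the two averages come closer as n grows
```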

A great advantage of White's variance estimator is that it does not require a parametric specification of the heteroskedasticity, and there is no need for subsidiary variables to explain the heteroskedasticity. Thus the method is quite general. However, White's estimator is strictly justified only for large sample sizes, so we can use this variance estimator to construct large-sample tests on the regression coefficients. It must be admitted, though, that in practice it is also used in applications with only moderate sample sizes.

7.3 Detection of heteroskedastic disturbances

When we specify our assumptions regarding the random disturbances, we have to base our considerations on the type of application under study. Above we mentioned cross-section studies of the demand for food as a function of income as an obvious example where the assumption of homoskedastic disturbances is doubtful, and the econometric literature abounds with similar examples. So what shall we do to make sure that our methods rest on a firm footing? Well, an easy and convenient first step is to run an OLS regression, calculate the residuals and plot them against the explanatory variables. This will indicate whether there is any foundation for our suspicion. If the empirical plots indicate that some kind of heteroskedasticity is at work in our data, we have to look for test procedures that can help reveal its presence. Several such tests have been developed in the literature; they are in a way refinements of the residual plots, in that the framework is regressions of the squared residuals $\hat{u}_i^2$ on different functions (often polynomials) of the explanatory variables or of the estimated dependent variable $\hat{y}_i$. Note that there is no point in regressing the residuals $\hat{u}_i$ themselves on the explanatory variables, since by the working of OLS $\hat{u}_i$ will be uncorrelated with the explanatory variables. We shall not give a review of these tests, but the following details are useful:

(7.3.1)  $\hat{u}_i = y_i - \hat{\alpha} - \hat{\beta} x_i = u_i - (\hat{\alpha} - \alpha) - (\hat{\beta} - \beta) x_i$

implying

(7.3.2)  $E(\hat{u}_i^2) = E\left[u_i - (\hat{\alpha} - \alpha) - (\hat{\beta} - \beta) x_i\right]^2$

From (7.3.2) we deduce (do that for yourself) that

(7.3.3)  $E\left(\sum_i \hat{u}_i^2\right) = \sum_i \left[1 - \dfrac{1}{n} - \dfrac{(x_i - \bar{x})^2}{\sum_j (x_j - \bar{x})^2}\right]\sigma_i^2$

If $\sigma_i^2 = \sigma^2$, i.e. the disturbances are homoskedastic, (7.3.3) reduces to

(7.3.4)  $E\left(\sum_i \hat{u}_i^2\right) = (n - 2)\,\sigma^2$

which shows that the estimator

(7.3.5)  $\hat{\sigma}^2 = \dfrac{1}{n-2}\sum_i \hat{u}_i^2$

is an unbiased estimator of $\sigma^2$ in the homoskedastic case. From the expression (7.3.2) we also find that when the disturbances are homoskedastic

(7.3.6)  $E(\hat{u}_i^2) = \sigma^2\left[1 - \dfrac{1}{n} - \dfrac{(x_i - \bar{x})^2}{\sum_j (x_j - \bar{x})^2}\right]$

Hence, even though the disturbances are homoskedastic, the squared residual will depend on $x_i$. However, under mild conditions on the explanatory variable this dependency will vanish as the number of observations increases. This explains why tests based on the residuals only have validity when the sample size is large; they are what we call asymptotic tests. Although these tests are quite simple 'regression tests', we shall not dwell any longer on them in this course. As a formal test for the presence of heteroskedasticity we shall only have a look at the Goldfeld-Quandt test. As background for this test, let us again consider the model of food expenditure as a function of income. As we noted above, it is reasonable to assume that the variability of food expenditure is much larger for high-income households than for low-income families. A simple test of heteroskedasticity based on this idea is to split the sample into two sub-samples, using income as the sorting criterion. To be specific, let us split the sample into two equal parts, where the first sub-sample contains the households with the highest incomes and the second sub-sample the remaining households. One then assumes that the variance in the first sub-sample is $\sigma_1^2$ and the variance in the second is $\sigma_2^2$, but note that one still assumes that the regression coefficients are the same in the two samples. Applying OLS regression to the two sub-samples will give us estimates of the regression coefficients, and from these estimates we deduce estimates of the two variances in the usual way. If, for example, the variance estimator in the first sub-sample is $\hat{\sigma}_1^2$ and the number of observations allocated to this sample is $n_1$, then from standard statistical knowledge

(7.3.7)  $\dfrac{(n_1 - 2)\,\hat{\sigma}_1^2}{\sigma_1^2} \sim \chi^2(n_1 - 2)$

The treatment of the second sub-sample is, of course, analogous. The Goldfeld-Quandt test of heteroskedasticity in this case is as follows.

The null hypothesis is $H_0: \sigma_1^2 = \sigma_2^2$ against the alternative $H_1: \sigma_1^2 > \sigma_2^2$.

The test procedure is now as usual: choose a level of significance, then decide on the test statistic one wants to use, find the distribution of the test statistic under the null hypothesis, and finally determine the rejection region.

In this case we use the test statistic

(7.3.8)  $F = \dfrac{\hat{\sigma}_1^2}{\hat{\sigma}_2^2} \sim F(n_1 - 2,\, n_2 - 2) \quad \text{under } H_0$

where $n_2$ is the number of observations in the second sub-sample and $\hat{\sigma}_2^2$, of course, also refers to the second sub-sample. Note that under the null hypothesis the two variances $\sigma_1^2$ and $\sigma_2^2$ are equal and therefore cancel in the expression (7.3.8). If the test statistic is around 1, the null hypothesis is confirmed; if, however, the statistic is considerably larger, this will tend towards rejection of $H_0$.
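As a sketch of the mechanics, the following Python function implements the version of the Goldfeld-Quandt test described above: sort on the explanatory variable, split the sample into two halves, estimate the regression and the residual variance in each half, and compare the $F$ statistic (7.3.8) with its null distribution. The simulated data at the bottom are purely illustrative; scipy is used only as a convenient source for the $F$ distribution.

```python
import numpy as np
from scipy import stats

def goldfeld_quandt(x, y):
    """Split the sample on x, fit OLS in each half, return F statistic (7.3.8) and p-value."""
    order = np.argsort(x)[::-1]          # descending: the high-x half comes first
    half = len(x) // 2
    estimates = []
    for idx in (order[:half], order[half:]):
        xi, yi = x[idx], y[idx]
        dx = xi - xi.mean()
        b = (dx * (yi - yi.mean())).sum() / (dx ** 2).sum()
        a = yi.mean() - b * xi.mean()
        r = yi - a - b * xi
        estimates.append(((r ** 2).sum() / (len(xi) - 2), len(xi)))
    (s1, n1), (s2, n2) = estimates       # residual variances and sub-sample sizes
    F = s1 / s2                          # ~ F(n1 - 2, n2 - 2) under H0
    p_value = 1.0 - stats.f.cdf(F, n1 - 2, n2 - 2)
    return F, p_value

# Illustrative data where the disturbance spread grows with x
rng = np.random.default_rng(3)
x = rng.uniform(1, 10, size=200)
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.5 * x)
print(goldfeld_quandt(x, y))             # large F, small p-value: reject H0
```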

If we reject the null hypothesis, then we are pretty convinced that the disturbances are heteroskedastic. But what do we do then? Well, in that case it is reasonable to reflect on the nature of the heteroskedasticity. If we are convinced that it has a specific form, the next step is to apply an appropriate transformation to the variables entering the model.

7.4  Transformations of variables

Let us illustrate this approach with a specific application. Consider the following model

(7.4.1)  $y_i = \alpha + \beta x_i + u_i, \qquad i = 1, \dots, n$

For the disturbances we specify

(7.4.2 a-c)  $E(u_i) = 0, \qquad \operatorname{Var}(u_i) = \sigma^2 x_i, \qquad \operatorname{Cov}(u_i, u_j) = 0 \;\; (i \neq j)$

Hence, we suppose that we know the form of the heteroskedasticity. Although OLS applied to (7.4.1) provides unbiased and consistent estimators of the regression coefficients, we know that OLS is an inefficient method of estimation in this case; the so-called 'BLUE' property of OLS presupposes that the disturbances are homoskedastic. Since we know the form of the heteroskedasticity, it is very tempting to transform the variables in order to restore the homoskedasticity of the disturbances. In the present simple case we see directly how this can be done. We simply divide through equation (7.4.1) by the square root of $x_i$:

(7.4.3)  $\dfrac{y_i}{\sqrt{x_i}} = \alpha \dfrac{1}{\sqrt{x_i}} + \beta \sqrt{x_i} + \dfrac{u_i}{\sqrt{x_i}}$

Since $x_i$ is observable, the transformed model is an ordinary regression with two explanatory variables, $1/\sqrt{x_i}$ and $\sqrt{x_i}$, but without an intercept term. We also note that the disturbances in the transformed model are homoskedastic, since $\operatorname{Var}(u_i/\sqrt{x_i}) = \sigma^2 x_i / x_i = \sigma^2$. With obvious notation we write (7.4.3) as

(7.4.4)  $y_i^{*} = \alpha x_{1i}^{*} + \beta x_{2i}^{*} + u_i^{*}$
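A minimal sketch of this transformation, assuming simulated data with $\operatorname{Var}(u_i) = \sigma^2 x_i$ as in (7.4.2) (all numbers below are illustrative choices): OLS applied to the transformed variables in (7.4.3)/(7.4.4) recovers $\alpha$ and $\beta$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Data with Var(u_i) = sigma^2 * x_i, the form assumed in (7.4.2)
n = 300
x = rng.uniform(1, 10, size=n)
y = 2.0 + 1.5 * x + rng.normal(0.0, np.sqrt(0.25 * x))

# Transformed regression (7.4.3): divide everything by sqrt(x_i).
# Regressors are 1/sqrt(x_i) (carrying alpha) and sqrt(x_i) (carrying beta); no intercept.
w = np.sqrt(x)
X_star = np.column_stack([1.0 / w, w])
y_star = y / w

# OLS on the transformed model = weighted least squares on the original model
coef, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print(coef)                              # estimates of (alpha, beta)
```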

Since the disturbances are homoskedastic, OLS applied to the regression (7.4.4) will give us 'BLUE' estimators of $\alpha$ and $\beta$, and moreover the conventional procedures can now be applied to test hypotheses on the regression coefficients. Generally, if we know the form of the heteroskedasticity, we apply the appropriate transformation and everything will be put in order. Unfortunately, we very seldom know the exact form of the heteroskedasticity. There is, however, one case arising in applications where this approach can be applied directly. Suppose our data are group averages, but the number of observations in each group varies. That is, we have the model

(7.4.5)  $y_{i\nu} = \alpha + \beta x_{i\nu} + u_{i\nu}, \qquad \nu = 1, \dots, n_i, \quad i = 1, \dots, m$

where $i$ counts the groups, $m$ is the number of groups, and we have $n_i$ observations in group $i$. Regarding the disturbances, we assume they are homoskedastic with variance $\sigma^2$. We do not have observations on the individual units $y_{i\nu}, x_{i\nu}$, but we have data on the group averages

(7.4.6)  $\bar{y}_i = \dfrac{1}{n_i}\sum_{\nu=1}^{n_i} y_{i\nu}, \qquad \bar{x}_i = \dfrac{1}{n_i}\sum_{\nu=1}^{n_i} x_{i\nu}$

From the regression (7.4.5) we derive the regression

(7.4.7)  $\bar{y}_i = \alpha + \beta \bar{x}_i + \bar{u}_i, \qquad i = 1, \dots, m$

The disturbances $\bar{u}_i$ will be heteroskedastic since

(7.4.8)  $\operatorname{Var}(\bar{u}_i) = \dfrac{\sigma^2}{n_i}$

Evidently, in this case we shall multiply the regression (7.4.7) through by $\sqrt{n_i}$, so the transformed regression will be

(7.4.9)  $\sqrt{n_i}\,\bar{y}_i = \alpha \sqrt{n_i} + \beta \sqrt{n_i}\,\bar{x}_i + \sqrt{n_i}\,\bar{u}_i$

The disturbances in this regression are homoskedastic since

(7.4.10)  $\operatorname{Var}(\sqrt{n_i}\,\bar{u}_i) = n_i \cdot \dfrac{\sigma^2}{n_i} = \sigma^2$

Applying OLS regression to (7.4.9) will provide us with 'BLUE' estimators of $\alpha$ and $\beta$.
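A sketch of the grouped-data case, again with purely illustrative simulated numbers: only the group averages and the group sizes enter, and OLS on the transformed regression (7.4.9) gives the estimates.

```python
import numpy as np

rng = np.random.default_rng(5)

# m groups with unequal sizes n_i; only the group averages are observed
m = 40
sizes = rng.integers(5, 50, size=m)        # group sizes n_i
xbar = rng.uniform(1, 10, size=m)
# ubar_i has variance sigma^2 / n_i, as in (7.4.8), with sigma^2 = 0.25 here
ybar = 2.0 + 1.5 * xbar + rng.normal(0.0, np.sqrt(0.25 / sizes))

# Transformed regression (7.4.9): multiply through by sqrt(n_i)
w = np.sqrt(sizes)
X_star = np.column_stack([w, w * xbar])    # columns carry alpha and beta; no intercept
y_star = w * ybar

coef, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print(coef)                                # 'BLUE' estimates of (alpha, beta)
```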

Applying OLS regression to transformed variables, as exemplified in the regressions (7.4.4) and (7.4.9), is called weighted OLS regression (weighted least squares) and is an example of generalized least squares (GLS) regression. We will learn more about GLS in more advanced courses in econometrics.
