Assessing Model Stability Using Recursive Estimation and Recursive Residuals

Our forecasting procedure cannot be expected to produce good forecasts unless the forecasting model we constructed was stable over the sample period and remains stable over the forecast period.

If the model’s parameters are different during the forecast period than they were during the sample period, then the model we estimated will not be very useful, regardless of how well it was estimated. And if the model’s parameters were unstable over the sample period, then the model was not even a good representation of how the series evolved over the sample period.

The focus of this lecture is on assessing the stability of the model parameters over the sample period. Is there evidence that one or more of the model’s parameters changed at one or more points in the sample period?

Note: We have already proposed one strategy to account for changes in the trend parameters that are believed to have occurred at known points in time – introduce dummy variables.

Consider the model of Y that combines the trend and AR(p) components into the following form:

Y_t = β_0 + β_1 t + β_2 t^2 + … + β_s t^s + φ_1 Y_{t-1} + … + φ_p Y_{t-p} + ε_t

where the ε's are WN(0, σ^2).

Note, for emphasis: the issue is whether the β's and φ's are constant over the sample or whether they are varying over the sample. (We assumed the former in constructing our forecasts.)

We will use results from applying the recursive estimation method to evaluate parameter stability over the sample period t = 1,…,T.

Fit the model (by OLS) for t = p+1,…,T*, doing this successively for each sample end point:

T* = p+1+s+p = 2p+s+1 (the smallest usable end point: the p presample values plus one observation for each of the 1+s+p parameters),
T* = 2p+s+2,
T* = 2p+s+3,
…,
T* = T.
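To make the recursive scheme concrete, here is a minimal sketch of the recursive estimation in Python/NumPy. The function name recursive_ols, the array y holding Y_1,…,Y_T, and the default orders s = 2 and p = 2 are illustrative choices, not part of the lecture notes.

import numpy as np

def recursive_ols(y, s=2, p=2):
    # Recursive OLS for Y_t = b0 + b1*t + ... + bs*t^s + phi1*Y_{t-1} + ... + phip*Y_{t-p} + e_t.
    # y holds Y_1,...,Y_T.  Returns {T*: coefficient vector estimated from t = p+1,...,T*}.
    T = len(y)
    rows, targets = [], []
    for t in range(p + 1, T + 1):                       # t = p+1,...,T (1-based)
        trend = [t ** i for i in range(s + 1)]          # 1, t, t^2, ..., t^s
        lags = [y[t - j - 1] for j in range(1, p + 1)]  # Y_{t-1},...,Y_{t-p}
        rows.append(trend + lags)
        targets.append(y[t - 1])                        # Y_t
    X = np.asarray(rows, dtype=float)
    Y = np.asarray(targets, dtype=float)

    estimates = {}
    k = 2 * p + s + 1                                   # smallest usable T*
    for T_star in range(k, T + 1):
        n = T_star - p                                  # rows with t = p+1,...,T*
        beta_hat, *_ = np.linalg.lstsq(X[:n], Y[:n], rcond=None)
        estimates[T_star] = beta_hat                    # [b0,...,bs, phi1,...,phip]
    return estimates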

Recursive Parameter Estimates

The recursive estimation yields, for each T* = 2p+s+1,…,T, parameter estimates

β̂_{i,T*}, i = 0,1,…,s,  and  φ̂_{j,T*}, j = 1,…,p,

where the second subscript indicates that the estimate is based on the sample ending in period T*.

If the model is stable over time, then as T* increases the recursive parameter estimates should stabilize at some level. (See, for example, the upper right panel in Figure 9.15 in the text.)

A model parameter is unstable if it does not appear to stabilize as T* increases, or if there appears to be a sharp break in the behavior of the sequence before and after some T*. (See, for example, the upper right panel in Figure 9.16 in the text.)
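Continuing the sketch above (and still assuming the series is in a NumPy array y), the recursive coefficient paths can be plotted against T* to check visually whether they settle down; the coefficient labels below assume s = 2 and p = 2.

import matplotlib.pyplot as plt
import numpy as np

est = recursive_ols(y, s=2, p=2)                 # from the sketch above
T_stars = sorted(est)
paths = np.array([est[ts] for ts in T_stars])    # one row per T*, one column per coefficient
labels = ["b0", "b1 (t)", "b2 (t^2)", "phi1", "phi2"]
fig, axes = plt.subplots(len(labels), 1, sharex=True, figsize=(6, 10))
for j, ax in enumerate(axes):
    ax.plot(T_stars, paths[:, j])
    ax.set_ylabel(labels[j])
axes[-1].set_xlabel("T* (end point of the recursive sample)")
fig.tight_layout()
plt.show()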

Computing Recursive Coefficients in EViews and an Application to the HEPI series –

I fit the HEPI series, 1961-2004, to the quadratic trend model with AR(2) errors:

Y_t = β_0 + β_1 t + β_2 t^2 + φ_1 Y_{t-1} + φ_2 Y_{t-2} + ε_t

From the regression output box, select “View”, “Stability Tests”, “Recursive Estimates”, and “Recursive Coefficients”. (You will have an option of which subset of the coefficients you want to look at.)

The output will be a graph of each of the recursive coefficient estimates (and a 95-percent confidence interval, i.e., a two-standard-error band) as T* changes.

C(1) = coefficient on t, β_1
C(2) = coefficient on t^2, β_2
C(3) = intercept, β_0
C(4) = coefficient on hepi(-1), φ_1
C(5) = coefficient on hepi(-2), φ_2

Conclusion?
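The same kind of recursive coefficient plots can be produced outside EViews. Here is a hedged sketch using statsmodels' RecursiveLS; the synthetic hepi series is only a stand-in for the actual HEPI data, and the variable and column names are illustrative.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Stand-in for the actual HEPI series, 1961-2004 (44 observations); replace with the real data.
rng = np.random.default_rng(0)
hepi = pd.Series(100 + np.cumsum(rng.normal(2.0, 1.0, size=44)))

t = np.arange(1, len(hepi) + 1)
df = pd.DataFrame({"hepi": hepi.values, "t": t, "t2": t ** 2})
df["hepi_lag1"] = df["hepi"].shift(1)   # hepi(-1)
df["hepi_lag2"] = df["hepi"].shift(2)   # hepi(-2)
df = df.dropna()

exog = sm.add_constant(df[["t", "t2", "hepi_lag1", "hepi_lag2"]])
res = sm.RecursiveLS(df["hepi"], exog).fit()

# Recursive coefficient paths with approximate 95% (two-standard-error) bands,
# analogous to EViews' "Recursive Coefficients" view.
res.plot_recursive_coefficient(variables=["t", "t2", "hepi_lag1", "hepi_lag2"], alpha=0.05)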

Recursive Residuals and the CUSUM Test

Visual examination of the graphs of the recursive parameter estimates can be helpful in evaluating the stability of the model, but it would also be useful to have a formal statistical test of the null hypothesis of model stability. The CUSUM test, which is based on the residuals from the recursive estimates, provides such a test.

The idea – We calculate a statistic, called the CUSUM statistic, for each t. Under the null hypothesis, the statistic is drawn from a distribution called the CUSUM distribution. If the calculated CUSUM statistics appear too large in absolute value to have been drawn from the CUSUM distribution, we reject the null hypothesis (of model stability).

More details -

  1. Let e_{t+1,t} denote the one-step-ahead forecast error associated with forecasting Y_{t+1} based on the model fit over the sample period ending in period t. These are called the recursive residuals.

e_{t+1,t} = Y_{t+1} − Ŷ_{t+1,t},

where Ŷ_{t+1,t} = β̂_{0,t} + β̂_{1,t}(t+1) + … + β̂_{s,t}(t+1)^s + φ̂_{1,t}Y_t + … + φ̂_{p,t}Y_{t+1−p}, and the t subscripts on the estimated parameters refer to the fact that they were estimated from a sample whose last observation was in period t.

2. Let σ_{1,t} denote the standard error of the one-step-ahead forecast of Y formed at time t, i.e.,

σ_{1,t} = sqrt(var(e_{t+1,t}))

Define the standardized recursive residuals, w_{t+1,t}, according to

w_{t+1,t} = e_{t+1,t}/σ_{1,t}

Fact: Under our maintained assumptions, including model homogeneity,

w_{t+1,t} ~ i.i.d. N(0,1).

Note that there will be one standardized recursive residual for each recursive sample, i.e., for each forecast origin t.

  3. The CUSUM (cumulative sum) statistics are defined according to

CUSUM_t = Σ_{τ=k}^{t} w_{τ+1,τ},  for t = k, k+1, …, T−1,

where k = 2p+s+1 is the end point of the smallest sample for which we can fit the model. (A sketch that computes the recursive residuals, their standardized versions, and the CUSUM statistics appears below.)

Under the null hypothesis, the CUSUM_t statistic is drawn from a CUSUM(t−k) distribution. The CUSUM(t−k) distribution is symmetric and centered at 0, and its dispersion increases as t−k increases.

We reject the null hypothesis at the 5% significance level if CUSUM_t is below the 2.5th percentile or above the 97.5th percentile of the CUSUM(t−k) distribution.
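Here is the sketch referred to above: a minimal Python/NumPy implementation of steps 1-3, again assuming the series is in an array y and using the same design-matrix construction as the earlier sketch. Since σ is unknown in practice, the sketch replaces it with an estimate computed from the full set of recursive residuals, which is one common choice.

import numpy as np

def recursive_residuals_and_cusum(y, s=2, p=2):
    # Steps 1-3 for the trend-plus-AR model: recursive residuals e_{t+1,t},
    # standardized residuals w_{t+1,t}, and CUSUM_t, for t = k,...,T-1.
    T = len(y)
    X = np.array([[t ** i for i in range(s + 1)] +
                  [y[t - j - 1] for j in range(1, p + 1)]
                  for t in range(p + 1, T + 1)], dtype=float)
    Y = np.array([y[t - 1] for t in range(p + 1, T + 1)], dtype=float)

    k = 2 * p + s + 1                  # end point of the smallest usable sample
    w = []
    for t in range(k, T):              # forecast Y_{t+1} from the sample ending at t
        n = t - p                      # rows corresponding to t' = p+1,...,t
        Xt, Yt = X[:n], Y[:n]
        beta_hat, *_ = np.linalg.lstsq(Xt, Yt, rcond=None)
        x_next = X[n]                  # regressors for period t+1
        e = Y[n] - x_next @ beta_hat   # recursive residual e_{t+1,t}
        lever = x_next @ np.linalg.solve(Xt.T @ Xt, x_next)
        w.append(e / np.sqrt(1.0 + lever))   # this quantity has variance sigma^2
    w = np.asarray(w)

    sigma_hat = w.std(ddof=1)          # full-sample estimate of sigma (one common choice)
    w_std = w / sigma_hat              # standardized recursive residuals w_{t+1,t}
    cusum = np.cumsum(w_std)           # CUSUM_t for t = k,...,T-1
    return w_std, cusum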

See Figures 9.15 and 9.16 from the text.

Computing the CUSUM Statistics in EViews and an Application to the HEPI series –

As above, I fit the HEPI series, 1961-2004, to the quadratic trend model with AR(2) errors:

Y_t = β_0 + β_1 t + β_2 t^2 + φ_1 Y_{t-1} + φ_2 Y_{t-2} + ε_t

From the regression output box, select “View”, “Stability Tests”, “Recursive Estimates”, and “CUSUM Test”.

The output will be a graph of the CUSUM statistics together with bands representing the bounds of the critical region for a test at the five-percent significance level.
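If the model has been fit with statsmodels' RecursiveLS, as in the earlier sketch, a comparable CUSUM plot with five-percent bands can be produced as follows; this is a sketch of one way to do it, not the EViews output itself. As commonly implemented, the five-percent bounds are straight lines through ±0.948·sqrt(T−k) at t = k and ±3(0.948)·sqrt(T−k) at t = T.

import matplotlib.pyplot as plt

# Continuing from the RecursiveLS fit above ("res"): CUSUM of the recursive
# residuals with five-percent significance bands, analogous to EViews' "CUSUM Test" view.
fig = res.plot_cusum(alpha=0.05)
fig.suptitle("CUSUM of recursive residuals, HEPI model")
plt.show()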