Applications of AR Models:

  • Impulse Response Functions
  • Forecasting

Consider the first order autoregressive model:

yt = a0 + a1yt-1 + εt

where εt is a white noise sequence and the stationarity condition, |a1| < 1, is satisfied.

Then –

E(yt) = a0/(1-a1)

Var(yt) = σ²/(1 − a1²), where σ² = Var(εt)

Corr(yt, yt-s) = a1^|s|
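As a quick check, these moments can be verified by simulation. A minimal numpy sketch (the values a0 = 1, a1 = 0.7, σ = 1 are illustrative, not from the notes):

```python
import numpy as np

# Simulate a long AR(1) path and compare sample moments with the formulas above.
rng = np.random.default_rng(0)
a0, a1, sigma, T = 1.0, 0.7, 1.0, 200_000   # illustrative values

eps = rng.normal(0.0, sigma, size=T)
y = np.empty(T)
y[0] = a0 / (1 - a1)                         # start at the unconditional mean
for t in range(1, T):
    y[t] = a0 + a1 * y[t - 1] + eps[t]

print(y.mean(), a0 / (1 - a1))               # E(y) = a0/(1 - a1)
print(y.var(), sigma**2 / (1 - a1**2))       # Var(y) = sigma^2/(1 - a1^2)
s = 3
print(np.corrcoef(y[s:], y[:-s])[0, 1], a1**s)  # Corr(y_t, y_t-s) = a1^s
```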

The OLS estimator is a consistent and asymptotically normal estimator of the a’s. If, in addition, the ε’s are conditionally homoskedastic, the OLS estimator is asymptotically efficient, and in large samples the model can be treated as a standard normal linear regression model for inference purposes.
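A minimal sketch of the OLS regression of yt on a constant and yt-1 (the helper function and the simulated example are mine, for illustration):

```python
import numpy as np

def ols_ar1(y):
    """OLS estimates of (a0, a1) from the regression of y_t on 1 and y_{t-1}."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # regressors: constant, y_{t-1}
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef  # [a0_hat, a1_hat]

# Example on a simulated AR(1) with illustrative a0 = 1, a1 = 0.7:
rng = np.random.default_rng(1)
y = np.zeros(5000)
for t in range(1, len(y)):
    y[t] = 1.0 + 0.7 * y[t - 1] + rng.normal()
print(ols_ar1(y))  # close to [1.0, 0.7] in large samples (consistency)
```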

The impulse response function,

g(s)=∂yt+s/∂εt, s = 0,1,2,…

specifies the effect of an innovation in period t on y, s periods forward.

Note that for the AR(1) model,

g(s) = a1^s

[yt = a0 + a1yt-1 + εt;

yt+1 = a0 + a1yt + εt+1

= a0 + a1(a0 + a1yt-1 + εt) + εt+1, …

so ∂yt+1/∂εt = a1 and, iterating forward, ∂yt+s/∂εt = a1^s.]

Note too that the sequence g(0), g(1),… is also the sequence of coefficients in the Wold MA representation of y.

The shape of the impulse response function depends on whether a1 > 0 (monotonic geometric decay) or a1 < 0 (decay with alternating sign), but in either case, since |a1| < 1, g(s) → 0 as s → ∞. This fading out of the effect of an innovation is a characteristic of an ergodic stationary process – the process has “weak memory” or is “weakly dependent”.
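Numerically, the two shapes look like this (a1 = ±0.8 is illustrative):

```python
import numpy as np

s = np.arange(11)
print(0.8 ** s)     # a1 > 0: monotonic geometric decay toward 0
print((-0.8) ** s)  # a1 < 0: decay toward 0 with alternating sign
```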

The s-step ahead forecast of y formed at time t is Et(yt+s) = E(yt+s | yt, yt-1, …). It can be built up recursively:

yt+1 = a0 + a1yt + εt+1, so Et(yt+1) = a0 + a1yt;

yt+2 = a0 + a1yt+1 + εt+2, so Et(yt+2) = a0 + a1Et(yt+1) = a0(1 + a1) + a1²yt;

and so on, giving Et(yt+s) = a0(1 + a1 + … + a1^(s-1)) + a1^s·yt.

Note that since |a1| < 1, Et(yt+s) → a0/(1 − a1) = E(yt) as s → ∞: the s-step ahead forecast converges to the unconditional mean as s goes to ∞. (This will apply to any stationary process.)
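A minimal sketch of the forecast recursion and this convergence (a0 = 1, a1 = 0.7, and yt = 5 are illustrative):

```python
a0, a1, y_t = 1.0, 0.7, 5.0       # illustrative values

f = y_t
for j in range(1, 21):
    f = a0 + a1 * f               # E_t(y_{t+j}) = a0 + a1 * E_t(y_{t+j-1})
    closed_form = a0 * (1 - a1**j) / (1 - a1) + a1**j * y_t
    assert abs(f - closed_form) < 1e-12

print(f, a0 / (1 - a1))           # 20-step forecast is already close to E(y)
```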

Consider the sequence of s-step ahead forecast errors,

fs,t = yt+s − Et(yt+s) = εt+s + a1εt+s-1 + … + a1^(s-1)εt+1

Note that –

E(fs,t) = 0, Var(fs,t) = σ²(1 + a1² + … + a1^(2(s-1))), and Var(fs,t) → σ²/(1 − a1²) = Var(yt) as s → ∞.
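The forecast-error variance and its limit can be checked the same way (again with illustrative a1 = 0.7, σ² = 1):

```python
a1, sigma2 = 0.7, 1.0              # illustrative values

for s in (1, 2, 5, 20):
    # sigma^2 * (1 + a1^2 + ... + a1^(2(s-1)))
    var_f = sigma2 * sum(a1 ** (2 * j) for j in range(s))
    print(s, var_f)

print(sigma2 / (1 - a1**2))        # limit as s -> infinity: Var(y_t)
```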

Now consider the general AR(p) model

(*) yt = a0 + a1yt-1 + a2yt-2 + … + apyt-p + εt

E(yt) = a0/(1-a1-…-ap)

Var(yt)? Cov(yt,yt-s)? Let γs = Cov(yt,yt-s).

WLOG, assume a0 = 0.

1. Multiply both sides of (*) by yt, take expectations, and note that εt is uncorrelated with yt-s, s > 0:

γ0 = a1γ1 + a2γ2 + … + apγp + σε²

2. Multiply both sides of (*) by yt-1 and take expectations:

γ1 = a1γ0 + a2γ1 + … + apγp-1

…

p+1. Multiply both sides of (*) by yt-p and take expectations:

γp = a1γp-1 + a2γp-2 + … + apγ0
This provides a set of p+1 linear equations, called the Yule-Walker equations, in the p+1 unknowns γ0, …, γp, which can be solved given a1, …, ap, and σε².

Once γ0, …, γp have been determined, γs, s > p, can be determined recursively –

γs = a1γs-1 + a2γs-2 + … + apγs-p
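A sketch of this two-step procedure in numpy – solve the p+1 Yule-Walker equations, then apply the recursion – with a0 = 0 as above and an illustrative AR(2):

```python
import numpy as np

def autocovariances(a, sigma2, s_max):
    """Solve the p+1 Yule-Walker equations for gamma_0, ..., gamma_p, then
    extend via gamma_s = a1*gamma_{s-1} + ... + ap*gamma_{s-p}."""
    p = len(a)
    M = np.zeros((p + 1, p + 1))
    b = np.zeros(p + 1)
    b[0] = sigma2                          # equation 0 has the sigma_eps^2 term
    for k in range(p + 1):
        M[k, k] += 1.0                     # gamma_k on the left-hand side
        for j in range(1, p + 1):
            M[k, abs(k - j)] -= a[j - 1]   # a_j * gamma_|k-j| moved to the left
    gam = list(np.linalg.solve(M, b))
    for s in range(p + 1, s_max + 1):      # recursion for s > p
        gam.append(sum(a[j] * gam[s - 1 - j] for j in range(p)))
    return np.array(gam)

# Illustrative AR(2): y_t = 0.5 y_{t-1} + 0.3 y_{t-2} + eps_t, Var(eps) = 1
print(autocovariances([0.5, 0.3], 1.0, s_max=6))
```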

Constructing Impulse Response Functions and Forecasting with the AR(p) Model

One Approach – Recursive Construction

Consider, for example, the AR(2) model:

yt = a0 + a1yt-1 + a2yt-2 + εt

Then g(0) = 1, g(1) = a1, and g(s) = a1g(s-1) + a2g(s-2) for s ≥ 2, so the impulse responses (and, in the same way, the forecasts) can be built up period by period.
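In code, the recursive construction is just a loop (a1 = 0.5, a2 = 0.3 are illustrative):

```python
a1, a2 = 0.5, 0.3                 # illustrative AR(2) coefficients

g = [1.0, a1]                     # g(0) = 1, g(1) = a1
for s in range(2, 11):
    g.append(a1 * g[s - 1] + a2 * g[s - 2])   # g(s) = a1*g(s-1) + a2*g(s-2)
print(g)
```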

A more efficient approach –

Rewrite the 2nd order autoregression as a 1st order, 2-dimensional vector autoregression:

or, in matrix notation,

Yt = A0 + A1Yt-1 + et

where Yt = [yt yt-1]’, A0 = [a0 0]’, et = [εt 0]’, and

A1 = [ a1  a2
        1   0 ]

Then, the s-step ahead forecast of Y formed at time t is

Et(Yt+s) = (I + A1 + … + A1^(s-1))A0 + A1^s·Yt

and the s-step ahead forecast of y formed at time t is the first element of Et(Yt+s).

Note that this can easily be extended to the general p-th order case –

Yt = A0 + A1Yt-1 + et

where Yt = [yt yt-1 … yt-p+1]’, A0 = [a0 0 … 0]’, and et = [εt 0 … 0]’ are all p×1 and A1 is the p×p matrix

A1 = [ a1  a2  …  ap-1  ap
        1   0  …   0    0
        0   1  …   0    0
        ⋮   ⋮        ⋮    ⋮
        0   0  …   1    0 ]

Then, the s-step ahead forecast of Y formed at time t is

Et(Yt+s) = (I + A1 + … + A1^(s-1))A0 + A1^s·Yt

and the s-step ahead forecast of y formed at time t is the first element of Et(Yt+s).

The impulse response function for the p-th order autoregression can also be found efficiently from the “companion first order VAR”. The impulse response function,

g(s)=∂yt+s/∂εt, s = 0,1,2,…

specifies the effect of an innovation in period t on y, s periods forward. Recall that for the AR(1) model,

g(s) = a1^s

For the AR(p) model,

g(s) = [A1^s]11,

the (1,1) element of the s-th power of the companion matrix A1.
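A sketch collecting the companion-form results – forecasting by iterating the VAR forward and reading the impulse response off the (1,1) element of A1^s; the function names and the AR(2) numbers are mine, for illustration:

```python
import numpy as np

def companion(a):
    """The p x p companion matrix A1 for coefficients a = [a1, ..., ap]."""
    p = len(a)
    A1 = np.zeros((p, p))
    A1[0, :] = a                   # first row: a1, ..., ap
    A1[1:, :-1] = np.eye(p - 1)    # subdiagonal of ones
    return A1

def forecast(a0, a, y_lags, s):
    """E_t(y_{t+s}) from y_lags = [y_t, y_{t-1}, ..., y_{t-p+1}] by iterating
    E_t(Y_{t+j}) = A0 + A1 E_t(Y_{t+j-1})."""
    A1 = companion(a)
    A0 = np.zeros(len(a))
    A0[0] = a0
    Y = np.array(y_lags, dtype=float)
    for _ in range(s):
        Y = A0 + A1 @ Y
    return Y[0]                    # first element of E_t(Y_{t+s})

def irf(a, s_max):
    """g(s) = [A1^s]_11 for s = 0, ..., s_max."""
    A1 = companion(a)
    P = np.eye(len(a))             # A1^0
    g = []
    for _ in range(s_max + 1):
        g.append(P[0, 0])
        P = P @ A1
    return g

# Illustrative AR(2): a0 = 1, a1 = 0.5, a2 = 0.3, with y_t = 5, y_{t-1} = 4
print(forecast(1.0, [0.5, 0.3], [5.0, 4.0], s=10))
print(irf([0.5, 0.3], 10))         # matches the recursive construction above
```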