Winter 2004 / Economics, U.C. Davis
Problem Set 4 – Solutions
Part I – Analytical Questions
Problem 1: Consider a stationary autoregressive process A(L)X_t = ε_t and its corresponding moving average representation, X_t = C(L)ε_t, where C(L) = A(L)⁻¹.
(a) Find the moving average coefficients for a VAR(1) process.
Solution
Because this is a VAR(1), calculation of the MA representation is quite easy. Thus, if X_t = A_1 X_{t-1} + ε_t, then C_i = A_1^i.
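As a numerical check (with a hypothetical 2×2 coefficient matrix A1, not one from the problem set), the MA coefficients of a VAR(1) are just matrix powers:

```python
import numpy as np

# Hypothetical VAR(1) coefficient matrix (illustrative, not from the problem).
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])

# For X_t = A1 X_{t-1} + eps_t, the MA coefficients are C_i = A1^i.
C = [np.linalg.matrix_power(A1, i) for i in range(5)]

# C_0 is the identity, and each C_i satisfies C_i = A1 C_{i-1}.
print(np.allclose(C[0], np.eye(2)))   # True
print(np.allclose(C[3], A1 @ C[2]))   # True
```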
(b) Show that the moving average coefficients for a VAR(2) can be found recursively by C_j = A_1 C_{j-1} + A_2 C_{j-2}, with C_0 = I and C_1 = A_1.
Solution
A stationary VAR has a moving average representation given by X_t = Σ_{i≥0} C_i ε_{t-i}. Plugging this formula into that of a VAR(2), X_t = A_1 X_{t-1} + A_2 X_{t-2} + ε_t, we find
Σ_{i≥0} C_i ε_{t-i} = A_1 Σ_{i≥0} C_i ε_{t-1-i} + A_2 Σ_{i≥0} C_i ε_{t-2-i} + ε_t,
which can be rewritten as
(C_0 − I) ε_t + (C_1 − A_1 C_0) ε_{t-1} + Σ_{j≥2} (C_j − A_1 C_{j-1} − A_2 C_{j-2}) ε_{t-j} = 0,
which delivers a coefficient for each of the ε_{t-j}. Since the ε can take on any value in R^n, each of these coefficients must equal zero. Hence, C_0 = I, C_1 = A_1, and C_j = A_1 C_{j-1} + A_2 C_{j-2} for j ≥ 2.
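The recursion can be verified numerically against the companion form, whose j-th power has C_j as its upper-left block (the A_1, A_2 below are illustrative values, not the problem's):

```python
import numpy as np

# Hypothetical VAR(2) coefficient matrices (illustrative only).
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
n = 2

# Recursion: C_0 = I, C_1 = A1, C_j = A1 C_{j-1} + A2 C_{j-2} for j >= 2.
C = [np.eye(n), A1]
for j in range(2, 10):
    C.append(A1 @ C[j - 1] + A2 @ C[j - 2])

# Check against the companion form: C_j is the upper-left block of F^j.
F = np.block([[A1, A2],
              [np.eye(n), np.zeros((n, n))]])
for j in range(10):
    assert np.allclose(C[j], np.linalg.matrix_power(F, j)[:n, :n])
```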
Problem 2: Consider the following bivariate VAR,
with .
(a) Find a matrix H, which is lower triangular and ensures that if ε_t = H u_t, then E[u_t u_t′] = D, where D is a diagonal matrix.
Solution
For example,
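A minimal numerical sketch of this construction, assuming a hypothetical covariance matrix Σ (the problem's Σ is not reproduced here): rescaling the Cholesky factor of Σ to have a unit diagonal yields a unit lower-triangular H and a diagonal D with Σ = H D H′.

```python
import numpy as np

# Hypothetical reduced-form covariance matrix (illustrative only).
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

# Lower-triangular Cholesky factor: Sigma = L L'.
L = np.linalg.cholesky(Sigma)

# Normalize to a unit lower-triangular H and diagonal D: Sigma = H D H'.
d = np.diag(L)
H = L / d            # divides column j by L[j, j], so diag(H) = 1
D = np.diag(d ** 2)

assert np.allclose(H @ D @ H.T, Sigma)

# u_t = H^{-1} eps_t then has the diagonal covariance D.
U_cov = np.linalg.inv(H) @ Sigma @ np.linalg.inv(H).T
assert np.allclose(U_cov, D)
```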
(b) Given this matrix H, calculate the structural representation of this VAR.
Solution
(c) Calculate the VMA representation for the reduced form of this VAR (notice that it is very simple in this case – don’t apply the usual formulas mechanically!)
Solution
(d) Calculate the VMA representation of the structural form of the VAR.
Solution
(e) Under what conditions will the reduced form and the structural form produce identical impulse response functions?
Solution: The obvious one is σ_12 = 0. Less obvious,
(f) Suppose you obtained the structural form as in part (a) but for a system that had the variable m ordered first. Under what conditions would these two structural identification schemes deliver the same impulse responses?
Solution: Notice that the matrix H is in this case,
Naturally, σ_12 = 0 would work, but also either or
Problem 3: Consider the following bivariate VAR
with for t = and 0 otherwise, for t = and 0 otherwise, and for all t, and . Answer the following questions:
(a) Is this system covariance-stationary?
Solution
To answer this question, compute the roots of the polynomial
The roots are 0.833 and 2; since one root (0.833) lies inside the unit circle, the system is not stationary.
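The root-counting logic can be checked numerically. The sketch below uses a hypothetical diagonal A with eigenvalues 1.2 and 0.5, chosen only so that the roots of det(I − Az) = 0 come out as 0.833 and 2, as in the problem:

```python
import numpy as np

# Hypothetical VAR(1) matrix with eigenvalues 1.2 and 0.5, so the
# roots of det(I - A z) = 0 are 1/1.2 = 0.833 and 1/0.5 = 2.
A = np.array([[1.2, 0.0],
              [0.0, 0.5]])

eigs = np.linalg.eigvals(A)
roots = 1.0 / eigs

# Stationarity requires all roots outside the unit circle
# (equivalently, all eigenvalues of A inside it): here it fails.
print(np.sort(np.abs(roots)))           # roots 0.833 and 2
print(all(abs(e) < 1 for e in eigs))    # False
```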
(b) Calculate the forecast MSE for s = 0, 1, and 2. What is the limit as s → ∞?
Solution
Clearly, since the process is not stationary, the forecast MSE diverges as s → ∞.
(c) Calculate the fraction of the MSE of the two-period-ahead forecast error for variable 1 that is due to ε_1.
Solution
The fraction due to ε_1 is (1 + 0.3²)/2.37 = 0.46, or 46%.
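The arithmetic of such a variance decomposition can be sketched as follows, with hypothetical orthogonalized MA matrices and unit shock variances (the numbers here are illustrative, not the problem's):

```python
import numpy as np

# Hypothetical orthogonalized MA matrices for horizon-2 forecast errors.
Psi0 = np.eye(2)
Psi1 = np.array([[0.3, 0.8],
                 [0.4, 0.5]])

# Two-step-ahead forecast-error MSE of variable 1 (unit shock variances):
#   MSE_1 = sum over horizons h and shocks j of Psi_h[0, j]^2
mse1 = sum((P[0, :] ** 2).sum() for P in (Psi0, Psi1))

# Fraction of variable 1's MSE due to shock 1:
frac = (Psi0[0, 0] ** 2 + Psi1[0, 0] ** 2) / mse1
print(round(frac, 2))
```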
Problem 4: Consider the process
(a) Derive D(x_t | x_{t-1}), where D denotes the density function. Hint: the system can be rewritten in matrix form as
Solution
Pre-multiply the system by A, the inverse of the contemporaneous correlation matrix, to obtain
Thus,
Given these expressions for the conditional mean and the variance, and noting that the u’s are linear combinations of the ε’s and therefore are normally distributed, the conditional distribution D(x_t | x_{t-1}) is multivariate normal with conditional mean and variance given by the expressions derived above.
(b) Assume that x_t is stationary. Derive the difference between the unconditional and conditional variances and show that it is positive definite. What are the implications of this result?
Solution
By stationarity, the unconditional moments are time-invariant. Since in (a) we calculated the conditional variance, it follows that the difference between the unconditional and conditional variances is a quadratic form and therefore positive definite. The rationale is that the conditional variance uses “more information” (hence the conditioning) than the unconditional variance.
Problem 5: Consider the Gaussian linear regression model,
with u_t ~ i.i.d. N(0, σ²) and u_t independent of x_τ for all t and τ. Define θ ≡ (β′, σ²)′. The log of the likelihood of (y_1, …, y_T) conditional on (x_1, …, x_T) is given by
(a) Show that the estimate β̂ is given by β̂ = (Σ_t x_t x_t′)⁻¹ Σ_t x_t y_t, where β̂ and σ̂² denote the maximum likelihood estimates.
Solution
The proof is straightforward by direct differentiation of the likelihood, noticing that the first-order conditions for β do not involve σ².
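A quick simulation (with made-up regressors and coefficients) confirms that the MLE of β coincides with OLS and that the MLE of σ² divides the residual sum of squares by T:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=T)

# MLE of beta coincides with OLS: beta_hat = (X'X)^{-1} X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

# MLE of sigma^2 divides by T (not T - k, as the unbiased OLS estimate does).
sigma2_mle = (resid ** 2).sum() / T

# First-order condition for beta: the score is proportional to X'resid = 0.
assert np.allclose(X.T @ resid, 0, atol=1e-6)
```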
(b) Show that the estimate σ̂² is given by σ̂² = (1/T) Σ_t (y_t − β̂′x_t)².
Solution: this proof is also straightforward once you realize that the first-order condition sets the score of the log-likelihood, evaluated at the MLE estimates, to zero.
(c)Show that the where for
Solution:
The proof requires the following intermediate results: ; . Direct application of conventional asymptotic results delivers the desired result.
(d) Consider a set of m linear restrictions on β of the form Rβ = r, for R a known matrix and r a known vector. Show that the Wald test statistic is identical to the Wald form of the OLS test, with the OLS estimate of the variance s_T² replaced with the MLE σ̂².
(e) Show that when the lower-left and upper-right blocks of the information matrix are set to their plim of zero, the quasi-maximum likelihood Wald test of Rβ = r is identical to the heteroskedasticity-consistent form of the OLS test.
Problem 6: Consider the following DGP for the cointegrated random variables z and y
where ε_t ~ N(0, I) with z_0 = y_0 = 0.
(a) Obtain the autoregressive representation of this DGP.
(b) Obtain the error-correction representation of this DGP.
(c) Deduce the long-run relation between z and y.
(a) Directly inverting the lagged matrices on the right-hand side, we get
(b) From the autoregressive representation
(c) From the ECM, the long-run solution is y = 2z.
Problem 7: Consider the following DGP
with || < 1, and
where D denotes a generic distribution.
(a) Derive the degree of integratedness of the two series, x_t and y_t. Do your results depend on any restrictions on the parameter values? Discuss how.
If θ = 1, then x_t and y_t are I(1). In addition, given the |·| < 1 restriction, one needs to impose a further condition: the same linear combination of x_t and y_t cannot be simultaneously I(0) and I(1).
(b) Under what coefficient restrictions are x_t and y_t cointegrated? What are the cointegrating vectors in such cases?
. Cointegrating vector (1 ).
(c) Choose a particular set of coefficients that ensures x_t and y_t are cointegrated and derive the following representations:
- The moving-average.
- The autoregressive.
- The error-correction.
I. MA
II. AR
III. ECM
Define the error-correction term, then
(d) Can all cointegrated systems be represented as an error-correction model? What are the problems of analyzing a VAR in the differences when the system is cointegrated?
From the Granger representation theorem we know the answer is yes. Analyzing the VAR in the differences omits the error-correction term from the specification; we therefore face the classic omitted-variable bias problem.
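This omitted-variable point is easy to see in a simulation. Below, a hypothetical cointegrated pair is generated and Δy is regressed on Δz with and without the lagged error-correction term; omitting it leaves the mean reversion unexplained:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000

# Simulate a cointegrated pair: z is a random walk, y = 2z + u with u AR(1).
z = np.cumsum(rng.normal(size=T))
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 2 * z + u

dy, dz = np.diff(y), np.diff(z)
ec = (y - 2 * z)[:-1]          # lagged error-correction term

# "VAR in differences" (dy on dz only) vs. adding the EC term.
X1 = np.column_stack([np.ones(T - 1), dz])
X2 = np.column_stack([np.ones(T - 1), dz, ec])

def rss(X):
    b = np.linalg.lstsq(X, dy, rcond=None)[0]
    return ((dy - X @ b) ** 2).sum()

# Omitting the EC term leaves the mean reversion in u unexplained.
assert rss(X2) < rss(X1)
```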
(e) Suppose that economic theory suggests that x_t and y_t should be cointegrated with cointegrating vector [1 + 0.5t]. Describe:
- How would you test whether this is indeed a cointegrating vector?
Run the OLS regression
and test the residuals for a unit root with an ADF test (using Engle–Granger critical values, since the residuals come from an estimated regression).
- What is the likely outcome of the test in short samples? Why?
In short samples, the cointegrating vector (1 + 0.5t) will differ from the cointegrating vector (1 ). However, as the sample size gets larger, note that the bias 0.5t disappears very quickly.
- What is the likely outcome of the test asymptotically? Why?
Asymptotically the bias disappears sufficiently quickly.
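The two-step residual-based test described above can be sketched as follows, on a simulated cointegrated pair (illustrative DGP, not the problem's; the ADF regression here uses no lagged differences):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000

# Simulated cointegrated pair: z random walk, y = 2z + u with u AR(1).
z = np.cumsum(rng.normal(size=T))
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 2 * z + u

# Step 1: OLS of y on z; keep the residuals.
X = np.column_stack([np.ones(T), z])
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b

# Step 2: ADF-style regression of diff(e) on e_{t-1};
# a large negative t-statistic rejects a unit root in the residuals.
de, elag = np.diff(e), e[:-1]
rho = (elag @ de) / (elag @ elag)
s2 = ((de - rho * elag) ** 2).sum() / (len(de) - 1)
tstat = rho / np.sqrt(s2 / (elag @ elag))
print(tstat < -3.34)   # True here; -3.34 is roughly the 5% EG critical value
```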
Problem 8: Consider the bivariate VECM
where α and β are as specified. Equation by equation, the system is given by
Answer the following questions:
(a) From the VECM representation above, derive the VECM representation
and the VAR(1) representation
(b) Based on the given values of the elements in α and β, determine Π such that
(c) Using the Granger representation theorem, determine C(1), where C(L) is the moving average polynomial corresponding to the VECM system above and I_2 is the identity matrix of order 2. Hint: you may show this result by showing that C(1) is orthogonal to the cointegrating space.
Using the hint: β′C(1) = 0. It is easy to show that
and therefore
(d) Using the Beveridge-Nelson decomposition and the result in (c), determine the common trend in the VECM system.
All you need to remember is that from the B-N decomposition, the trends are the linear combinations captured in C(1)y_t, which in this case turns out to be 2y_{1t} + y_{2t}. Notice that this combination is orthogonal to the cointegrating vector.
(e) Show that z_t follows an AR(1) process and show that this AR(1) is stable provided that −2 < α_1 < 0. What can you say about the system when α_1 = 0?
Let β be the cointegrating vector and z_t = β′y_t. From the equations for y_1 and y_2 we have
Combining terms
which is an AR(1) whose stationarity requires that |1 + α_1| < 1, or the equivalent condition −2 < α_1 < 0. When α_1 = 0, z_t is no longer stationary, so there is no cointegration for any value of α_2; y_1 and y_2 are in this case two independent random walks.
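This AR(1) result can be checked by simulation. The sketch below assumes a hypothetical VECM with cointegrating vector (1, −2) and adjustment coefficient α_1 = −0.5, so z_t should behave like an AR(1) with coefficient 1 + α_1 = 0.5:

```python
import numpy as np

rng = np.random.default_rng(3)
T, a1 = 5000, -0.5   # a1 plays the role of alpha_1; note -2 < a1 < 0

# Hypothetical VECM with cointegrating vector (1, -2):
#   dy1_t = a1 * z_{t-1} + e1_t,  dy2_t = e2_t,  z_t = y1_t - 2 y2_t
y1 = np.zeros(T)
y2 = np.zeros(T)
for t in range(1, T):
    zlag = y1[t - 1] - 2 * y2[t - 1]
    y1[t] = y1[t - 1] + a1 * zlag + rng.normal()
    y2[t] = y2[t - 1] + rng.normal()

z = y1 - 2 * y2
# Analytically z_t = (1 + a1) z_{t-1} + (e1_t - 2 e2_t): a stable AR(1) here.
phi = (z[:-1] @ z[1:]) / (z[:-1] @ z[:-1])
print(round(phi, 2))   # close to 1 + a1 = 0.5
```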
Problem 9: Consider the following VAR
(a) Show that this VAR is not stationary.
Stationarity requires that the values of z satisfying
lie outside the unit circle. For z = 1, notice
(b) Find the cointegrating vector and derive the VECM representation.
Notice that
so that
(c) Transform the model so that it involves the error-correction term (call it z_t) and a difference-stationary variable (call it w_t). w_t will be a linear combination of x and y but should not contain z. Hint: the weights in this linear combination will be related to the coefficients of the error-correction terms.
Given the ECM in part (b), notice
Next
(d) Verify that y and x can be expressed as linear combinations of w and z. Give an interpretation as a decomposition of the vector (y x)′ into permanent and transitory components.
From part (c)
taking the inverse
and therefore
w_t is I(1) and z_t is I(0); this is a version of the Beveridge-Nelson decomposition proposed by Gonzalo and Granger (1995).