The Impact of Climate Change on the Winegrape Vineyards of the Portuguese Douro Region

Mario Cunhaᵃ & Christian Richterᵇ

ᵃ Faculdade de Ciências, Universidade do Porto and Centro de Investigação em Ciências Geo-Espaciais, Rua do Campo Alegre s/n, 4169-007, Porto, Portugal, Email:

ᵇ German University in Cairo, Faculty of Management Technology, 11835 New Cairo City, Egypt, Email:

Supplementary material 3 (S3)

Applying the Kalman filter

In order to estimate the parameters, we use the Kalman filter, i.e. we estimate the following state-space model:

$Y_t = \alpha_{0,t} + \sum_{i=1}^{p} \alpha_{i,t} Y_{t-i} + \sum_{j=0}^{q} \beta_{j,t} X_{t-j} + \varepsilon_{1,t}$   (S3_1)

where $Y_t$ is the WPV and $X_t$ are the exogenous variables. Let

$\alpha_{i,t} = \alpha_{i,t-1} + \eta_{i,t}$   (S3_2)

and

$\beta_{j,t} = \beta_{j,t-1} + \upsilon_{j,t}$, i.e. all coefficients are allowed to follow random walks.

For ease of exposition, let us rewrite (S3_1) as:

$Y_t = Z_t D_t + \varepsilon_{1,t}$   (S3_3)

where (S3_3) is called the measurement equation, $D_t$ is a matrix consisting of the αs and βs from (S3_1), and $Z_t$ is a matrix consisting of the Ys and Xs of (S3_1). The state equation is, in analogy to (S3_2):

$D_t = D_{t-1} + \varepsilon_{2,t}$   (S3_4)

witha,ti.i.d. (0, ) for a = 1, 2.

The key characteristic we exploit in this paper is the way in which the parameters, i.e. the structure of the model, are updated. The Kalman filter estimation algorithm is based on one-period-ahead forecasts. These forecasts of Y are then compared with the corresponding (new) observation of the same variable. Through the Kalman gain, the coefficients can then be systematically updated in order to minimise the one-period-ahead forecast error.
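To make this explicit, the one-period-ahead forecast of $Y_t$ implied by (S3_3), with $\hat{d}_{t|t-1}$ denoting the estimate of the coefficients $D_t$ based on information available up to $t-1$, and the associated forecast error $v_t$ are

$\hat{Y}_{t|t-1} = Z_t \hat{d}_{t|t-1}, \qquad v_t = Y_t - Z_t \hat{d}_{t|t-1},$

and it is this forecast error that drives the updating of the coefficients in (S3_5) below.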

The question now is what information is used to update the parameters, or, put differently, how to model the residual in (S3_4). Wells (1996) shows that, in the case of an exogenous shock such as climate change, the parameters are optimally updated as

$\hat{d}_{t|t} = \hat{d}_{t|t-1} + K_t \left( Y_t - Z_t \hat{d}_{t|t-1} \right)$   (S3_5)

where $\hat{d}_{t|s}$ denotes the estimate of the state $d$ at time $t$ conditional on the information available at time $s$, and $K_t$ is the Kalman gain. The interesting part of (S3_5) is the term in brackets: it is the one-period-ahead forecast error. Hence, the current parameters are updated according to the forecast error produced by an estimate that did not yet contain the information revealed in the current period. The currently available information, in turn, contains the (new) values of the current exogenous variable as well as of the endogenous variable. In other words, the structure of the model at any time is determined by the numerical values of the endogenous and exogenous variables. Hence, if these change in an unforeseen way, so does the structure of the model. In this context, if we agree that SW and ST change due to climate change, which is an exogenous shock in our model, then, via the forecast error, this changes the structure of the model (in terms of parameter changes). Moreover, the Kalman gain may be calculated according to Wells (1996):

$K_t = \Sigma_{t|t-1} Z_t' \left( Z_t \Sigma_{t|t-1} Z_t' + \Sigma_1 \right)^{-1}$, with $\Sigma_{t|t-1} = \Sigma_{t-1|t-1} + \Sigma_2$,   (S3_6)

where $\Sigma_{t|s}$ is the variance of the forecast error at time $t$ conditioned on the system at time $s$ and $\Sigma_2$ is the covariance matrix of $\varepsilon_{2,t}$. As can be seen from (S3_6), the Kalman gain depends, among other factors, on $Z_t$ and hence on the exogenous variables. So, again, changes of ST and SW have an impact on the change of the structure of the model. Note that it is also because of this complexity, and the associated control problem in the regression, that we kept the number of exogenous variables to a minimum, i.e. one.
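For readers who wish to reproduce the mechanics, the following is a minimal numpy sketch of one filtering step implied by (S3_3)–(S3_6). It is an illustration only; the names (kalman_step, Sigma2, etc.) are ours and not part of the original estimation code.

```python
import numpy as np

def kalman_step(d_prev, P_prev, y_t, z_t, sigma1_sq, Sigma2):
    """One step of the time-varying-parameter filter, (S3_5)-(S3_6).

    d_prev    -- filtered coefficients d_{t-1|t-1}, shape (k,)
    P_prev    -- covariance of that estimate, shape (k, k)
    y_t       -- current observation of the dependent variable (WPV)
    z_t       -- current regressors (lagged Ys and the exogenous X), shape (k,)
    sigma1_sq -- variance of the measurement error eps_{1,t}
    Sigma2    -- covariance matrix of the state error eps_{2,t}, shape (k, k)
    """
    # Prediction: with the random-walk state (S3_4) the predicted coefficients
    # equal the previous filtered ones, while their uncertainty grows by Sigma2.
    d_pred = d_prev
    P_pred = P_prev + Sigma2

    # One-period-ahead forecast error (the bracketed term in S3_5) and its variance.
    forecast_error = y_t - z_t @ d_pred
    F_t = z_t @ P_pred @ z_t + sigma1_sq

    # Kalman gain (S3_6) and coefficient update (S3_5).
    K_t = P_pred @ z_t / F_t
    d_filt = d_pred + K_t * forecast_error
    P_filt = P_pred - np.outer(K_t, z_t) @ P_pred
    return d_filt, P_filt
```

Iterating this step over the sample, starting from the OLS values described below, yields the filtered coefficient paths.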

In order to run the Kalman filter (Kalman 1960), we need initial parameter values. These are obtained by estimating the model with Ordinary Least Squares (OLS) over the entire sample (see also Wells, 1996). Of course, using the entire sample implies that we neglect possible structural breaks, so the initial estimates might be biased. The Kalman filter, however, corrects for this bias since, as Wells (1996) shows, it converges to the true parameter values independently of the initial values. Hence, our starting values have no effect on the parameter estimates, i.e. our results are robust. Given these starting values, we can then estimate the parameter values using the Kalman filter. We then employ the Akaike information criterion (AIC; Akaike, 1974) to obtain a final specification for (Eq. 1, main text), eliminating insignificant lags using the strategy described in the next paragraph. Then, for each regression, we applied a set of diagnostic tests, shown in the tables in the following sections, to confirm the final specification found. The final parameter values are therefore filtered estimates, independent of their starting values.
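As an illustration of the initialisation and lag-selection steps, full-sample OLS starting values and a Gaussian-likelihood form of the AIC could be computed along the following lines; this is a sketch under our own naming, and the exact AIC variant used in the original estimation is not prescribed here.

```python
import numpy as np

def ols_start_values(Z, y):
    """Full-sample OLS estimates used as starting values for the filter."""
    coeffs, _, _, _ = np.linalg.lstsq(Z, y, rcond=None)
    residuals = y - Z @ coeffs
    return coeffs, residuals

def gaussian_aic(residuals, n_params):
    """AIC (Akaike, 1974) for a Gaussian regression, up to an additive constant."""
    n = residuals.shape[0]
    rss = float(residuals @ residuals)
    return n * np.log(rss / n) + 2 * n_params
```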

Using the specification above implies that we obtain a set of parameter values for each point in time. Hence, a particular parameter could be significant for all points in time, significant at some points but not others, or never significant. These parameter changes are at the heart of this paper, as they imply changes in the lag structure. If a parameter was significant for some periods but not others, it was kept in the equation with a parameter value of zero for the periods in which it was not significant. This strategy minimised the AIC and led to a parsimonious specification. Finally, we tested the residuals of each regression for autocorrelation and heteroscedasticity.
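The period-by-period handling of insignificant coefficients can be sketched as follows, assuming the filtered coefficient paths and their standard errors (taken from the diagonal of the filtered covariance) are stacked into arrays of shape T × k; the 5% critical value is our choice for illustration.

```python
import numpy as np

def zero_insignificant(coef_path, se_path, crit=1.96):
    """Keep a coefficient in the equation, but set it to zero in the
    periods in which it is not significantly different from zero."""
    t_stats = np.divide(coef_path, se_path,
                        out=np.zeros_like(coef_path), where=se_path > 0)
    return np.where(np.abs(t_stats) >= crit, coef_path, 0.0)
```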

The final specifications of equation 1 and equation 2 (Eq. 1 and Eq. 2, main text) were then validated using two different stability tests. Both tests check the same null hypothesis (in our case a stable AR(9) specification) against differing temporal instabilities. The first is the fluctuations test of Ploberger et al. (1989), which detects discrete breaks at any point in time in the coefficients of a (possibly dynamic) regression. The second test is due to LaMotte and McWhorter (1978) and is designed specifically to detect random parameter variation of a specific unit-root form (our specification). We found that the random walk hypothesis for the parameters was justified for each model (results available on request). We also tested the residuals for autocorrelation. For this purpose, we use the Ljung-Box test (Ljung and Box, 1978), which tests for autocorrelated residuals up to order p. In none of our regressions could the null hypothesis of no autocorrelation be rejected (Table 1, main text).
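As an example of the residual diagnostics, the Ljung-Box statistic can be obtained with statsmodels; the lag order of 9 (matching the AR(9) specification) and the placeholder residuals are our choices for illustration.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
residuals = rng.standard_normal(60)   # placeholder for the regression residuals

# Ljung-Box test up to lag 9: a large p-value means the null hypothesis of
# no autocorrelation cannot be rejected.
print(acorr_ljungbox(residuals, lags=[9]))
```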

Finally, we chose the fluctuations test for detecting structural breaks because the Kalman filter allows for structural breaks at any point and the fluctuations test is able to accommodate this.

REFERENCES

Akaike H (1974) A new look at the statistical model identification. IEEE Transactions on Automatic Control 19 (6): 716–723.

Kalman RE (1960) A new approach to linear filtering and prediction problems. Journal of Basic Engineering, Transactions of the ASME 82(1): 35–45.

LaMotte LR, McWhorter A Jr (1978) An exact test for the presence of random walk coefficients in a linear regression model. Journal of the American Statistical Association 73(364): 816–820.

Ljung GM, Box GEP (1978) On a measure of lack of fit in time series models. Biometrika 65(2): 297–303.

Ploberger W, Krämer W, Kontrus K (1989) A new test for structural stability in the linear regression model. Journal of Econometrics 40(2): 307–318.

Wells C (1996) The Kalman Filter in Finance. Advanced Studies in Theoretical and Applied Econometrics, vol. 32. Kluwer, Dordrecht.
