Exercise 3: Bivariate regression analysis and model evaluation
a) Reasonable assumptions about the disturbance u_t in the model Y_t = β_0 + β_1·X_t + u_t:
- Zero conditional mean: E(u_t | X_t) = 0
- Constant variance (homoskedasticity): Var(u_t | X_t) = σ²
- No autocorrelation: Cov(u_t, u_s) = 0 for t ≠ s
- Normally distributed: u_t ~ N(0, σ²)
b)
EQ( 1) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8101.xls
The estimation sample is: 1980(1) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant 23.8431 7.113 3.35 0.0012 0.1245
X 0.574115 0.01996 28.8 0.0000 0.9128
sigma 26.6013 RSS 55902.4978
R^2 0.91284 F(1,79) = 827.4 [0.000]**
Adj.R^2 0.911737 log-likelihood -379.679
no. of observations 81 no. of parameters 2
mean(Y) 209.948 se(Y) 89.5389
AR 1-5 test: F(5,74) = 1473.0 [0.0000]**
ARCH 1-4 test: F(4,73) = 4905.3 [0.0000]**
Normality test: Chi^2(2) = 6.8954 [0.0318]*
Hetero test: F(2,78) = 27.065 [0.0000]**
Hetero-X test: F(2,78) = 27.065 [0.0000]**
RESET23 test: F(2,77) = 901.70 [0.0000]**
Interpreting the result:
- We have to perform a t-test to say something about the slope coefficient β̂ (see the calculation after this list).
t-test of H0: β = 1 against H1: β ≠ 1.
To conclude that β is not 1, the t-value in absolute value has to be larger than the critical value.
From this result we can conclude that β̂ is significantly different from 1.
The high t-value gives a very low p-value, which means that the null hypothesis is rejected at any conventional significance level.
- The coefficient's standard error is very low, so the estimate is quite precise. This is also the reason for the large t-value in absolute terms.
A t-value can, however, indicate statistical significance either because the estimated coefficient is "large" or because the standard error is "small". Too much focus on statistical significance can thus lead to the false conclusion that a variable is important for explaining Y even though its estimated effect is modest.
- R² ≈ 0.91: this means that the variation in X explains approximately 91 % of the variation in Y. This is a very high number.
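As a rough check (not part of the PcGive output), the t-statistic for H0: β = 1 can be computed from the estimate and standard error in EQ(1):
t = (β̂ − 1) / se(β̂) = (0.574115 − 1) / 0.01996 ≈ −21.3,
which in absolute value is far above the 5 % two-sided critical value of roughly 1.99 for a t-distribution with 79 degrees of freedom, so H0: β = 1 is clearly rejected.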
c) Is this model an adequate conditional model of Y given X?
Even though, as we stated in b), R² is large and the coefficient on X is significantly different from 1, the misspecification tests fail. We have autocorrelation and heteroskedasticity, and the residuals are not normally distributed. The assumptions we made in a) are therefore not valid, and we cannot perform the t-test correctly, so we cannot say for sure that the coefficient is significant.
When we correct the standard errors for autocorrelation and heteroskedasticity in the model, we get robust standard errors.
Robust standard errors
Coefficients SE HACSE HCSE JHCSE
Constant 23.843 7.1132 11.795 6.2624 6.4130
X 0.57411 0.019959 0.042387 0.022755 0.023236
Coefficients t-SE t-HACSE t-HCSE t-JHCSE
Constant 23.843 3.3520 2.0214 3.8074 3.7180
X 0.57411 28.764 13.545 25.231 24.708
The result is quite similar to the one in b). The coefficient estimates are the same, the coefficient on X is still significantly different from 1, and the standard errors are still small; the HACSE/HCSE/JHCSE columns now correct for autocorrelation and heteroskedasticity. But: the residuals are still not normally distributed.
How important normality is once we have robust standard errors is a matter of preference for the person building the model.
Some would say that the model is still not valid, while others would consider it an acceptable model. (A sketch of how such robust standard errors can be computed is given below.)
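For reference, a minimal sketch of how robust standard errors of this kind can be computed outside PcGive with Python's statsmodels. The file path, the column names Y and X, and the HAC lag length of 4 are assumptions, and HC3 is used here as a stand-in for PcGive's jackknife HCSE:

import pandas as pd
import statsmodels.api as sm

data = pd.read_excel("Sp8101.xls")                 # hypothetical path/column names
X = sm.add_constant(data["X"])                     # regressor matrix with a constant
y = data["Y"]

ols = sm.OLS(y, X).fit()                                        # conventional standard errors
hc  = sm.OLS(y, X).fit(cov_type="HC3")                          # heteroskedasticity-robust (jackknife-type)
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4}) # heteroskedasticity- and autocorrelation-consistent

print(ols.bse)   # compare with the SE column
print(hc.bse)    # compare with the HCSE/JHCSE columns
print(hac.bse)   # compare with the HACSE column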
d) The criteria we use to evaluate this model (comparable diagnostics are sketched after this list):
- R-squared
- t-tests / significance levels
- The misspecification tests: AR, ARCH, normality and heteroskedasticity tests (and RESET)
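For reference, roughly comparable diagnostics can be computed with statsmodels. This is only a sketch under assumed file/column names; the tests are related to, but not numerically identical to, PcGive's versions:

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_breuschpagan, het_arch
from statsmodels.stats.stattools import jarque_bera

data = pd.read_excel("Sp8101.xls")                 # hypothetical path/column names
X = sm.add_constant(data["X"])
res = sm.OLS(data["Y"], X).fit()

print("R^2:", res.rsquared, "adj. R^2:", res.rsquared_adj)
print("t-values:", res.tvalues)
print("AR 1-5 (Breusch-Godfrey):", acorr_breusch_godfrey(res, nlags=5))   # residual autocorrelation
print("ARCH 1-4:", het_arch(res.resid, nlags=4))                          # ARCH effects in the residuals
print("Normality (Jarque-Bera):", jarque_bera(res.resid))                 # PcGive uses the Doornik-Hansen test
print("Hetero (Breusch-Pagan):", het_breuschpagan(res.resid, X))          # heteroskedasticity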
e) The inverted model:
EQ( 2) Modelling X by OLS
The dataset is: M:\ECON 4160\data\Sp8101.xls
The estimation sample is: 1980(1) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant -9.65664 12.60 -0.766 0.4459 0.0074
Y 1.59000 0.05528 28.8 0.0000 0.9128
sigma 44.2691 RSS 154820.562
R^2 0.91284 F(1,79) = 827.4 [0.000]**
Adj.R^2 0.911737 log-likelihood -420.935
no. of observations 81 no. of parameters 2
mean(X) 324.16 se(X) 149.008
AR 1-5 test: F(5,74) = 7880.4 [0.0000]**
ARCH 1-4 test: F(4,73) = 8133.5 [0.0000]**
Normality test: Chi^2(2) = 29.254 [0.0000]**
Hetero test: F(2,78) = 6.2583 [0.0030]**
Hetero-X test: F(2,78) = 6.2583 [0.0030]**
RESET23 test: F(2,77) = 2.9361 [0.0590]
Robust standard errors
Coefficients SE HACSE HCSE JHCSE
Constant -9.6566 12.605 9.3902 4.8329 4.8894
Y 1.5900 0.055277 0.078550 0.040839 0.040972
Coefficients t-SE t-HACSE t-HCSE t-JHCSE
Constant -9.6566 -0.76612 -1.0284 -1.9981 -1.9750
Y 1.5900 28.764 20.242 38.934 38.807
We do not know the direction of causality, i.e. whether X is explained by Y or Y is explained by X.
We have to run a t-test:
The coefficient on Y is significantly different from 1.
The standard error is very small and the R-squared is very high, 0.91.
The tests still fail: there is autocorrelation and heteroskedasticity, and the residuals are not normally distributed. (Note also the algebraic link between the two slope estimates shown below.)
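A useful algebraic check (not part of the output): the slope from regressing Y on X and the slope from regressing X on Y are linked through the squared correlation,
0.574115 × 1.59000 ≈ 0.9128 = R²,
so the inverted slope is not simply the reciprocal of the original one (1/0.574115 ≈ 1.74 ≠ 1.59); the two regressions would only coincide if R² were equal to 1.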
f) Evaluation criteria:
Reasonable: the tests still fail and the t-tests have not changed, except that the constant is no longer significant.
g)
EQ( 3) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8101.xls
The estimation sample is: 1980(2) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Y_1 1.05790 0.006588 161. 0.0000 0.9971
Constant -0.611499 0.8666 -0.706 0.4826 0.0065
X 0.886688 0.08276 10.7 0.0000 0.6016
X_1 -0.930794 0.08366 -11.1 0.0000 0.6196
sigma 1.37508 RSS 143.704417
R^2 0.999764 F(3,76) = 1.072e+005 [0.000]**
Adj.R^2 0.999754 log-likelihood -136.944
no. of observations 80 no. of parameters 4
mean(Y) 212.206 se(Y) 87.7537
AR 1-5 test: F(5,71) = 6.0258 [0.0001]**
ARCH 1-4 test: F(4,72) = 1.9184 [0.1166]
Normality test: Chi^2(2) = 1.1521 [0.5621]
Hetero test: F(6,73) = 1.3187 [0.2598]
Hetero-X test: F(9,70) = 1.5844 [0.1369]
RESET23 test: F(2,74) = 21.630 [0.0000]**
R-squared adjusted is very high
The coefficient on X is not significantly different from 1; we cannot reject that its value is 1 (see the calculation below).
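Check using the figures from EQ(3): t = (0.886688 − 1) / 0.08276 ≈ −1.37, which in absolute value is below the 5 % critical value of roughly 1.99 (t with 76 degrees of freedom), so H0: β = 1 cannot be rejected.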
We see from the tests that there is still autocorrelation. The other tests are fine: there is no longer heteroskedasticity, and normality is not rejected. (A sketch of how this lag specification can be constructed is given below.)
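For reference, a minimal sketch of how this ADL(1,1) specification (Y on Y_1, X and X_1) can be set up outside PcGive with pandas/statsmodels; the file path and column names are assumptions:

import pandas as pd
import statsmodels.api as sm

data = pd.read_excel("Sp8101.xls")        # hypothetical path/column names
data["Y_1"] = data["Y"].shift(1)          # lag of the dependent variable
data["X_1"] = data["X"].shift(1)          # lag of the regressor
data = data.dropna()                      # the first observation is lost to the lags

X = sm.add_constant(data[["Y_1", "X", "X_1"]])
res = sm.OLS(data["Y"], X).fit()
print(res.summary())                      # compare with EQ(3)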
i) Strategy for evaluating the three models against each other: model 2 explains more than model 1 because its R-squared is larger.
"Simple-to-general": you go from a model with few variables to one with many variables.
"General-to-simple": the opposite.
Going from (6) to (8) is simple-to-general.
Going from (8) to (6) is general-to-simple.
Exercise 4: Bivariate regressions with autocorrelated errors
Model 1:
EQ( 4) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8101.xls
The estimation sample is: 1980(2) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant 25.1914 7.340 3.43 0.0010 0.1312
X 0.570766 0.02047 27.9 0.0000 0.9088
sigma 26.6688 RSS 55475.3342
R^2 0.908811 F(1,78) = 777.4 [0.000]**
Adj.R^2 0.907642 log-likelihood -375.182
no. of observations 80 no. of parameters 2
mean(Y) 212.206 se(Y) 87.7537
AR 1-5 test: F(5,73) = 1361.1 [0.0000]**
ARCH 1-4 test: F(4,72) = 4733.9 [0.0000]**
Normality test: Chi^2(2) = 6.9911 [0.0303]*
Hetero test: F(2,77) = 26.463 [0.0000]**
Hetero-X test: F(2,77) = 26.463 [0.0000]**
RESET23 test: F(2,76) = 882.91 [0.0000]**
EQ( 5) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8102.xls
The estimation sample is: 1980(1) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant -3.29461 0.8385 -3.93 0.0002 0.1635
X 0.987515 0.1237 7.98 0.0000 0.4466
sigma 2.49631 RSS 492.294488
R^2 0.446567 F(1,79) = 63.75 [0.000]**
Adj.R^2 0.439562 log-likelihood -188.021
no. of observations 81 no. of parameters 2
mean(Y) 3.02321 se(Y) 3.33453
AR 1-5 test: F(5,74) = 41.391 [0.0000]**
ARCH 1-4 test: F(4,73) = 24.357 [0.0000]**
Normality test: Chi^2(2) = 16.133 [0.0003]**
Hetero test: F(2,78) = 1.6170 [0.2051]
Hetero-X test: F(2,78) = 1.6170 [0.2051]
RESET23 test: F(2,77) = 1.3818 [0.2573]
EQ( 6) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8103.xls
The estimation sample is: 1980(1) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant -0.0514192 0.09848 -0.522 0.6030 0.0034
X 0.00987416 0.09443 0.105 0.9170 0.0001
sigma 0.885816 RSS 61.9889045
R^2 0.000138391 F(1,79) = 0.01093 [0.917]
Adj.R^2 -0.0125181 log-likelihood -104.101
no. of observations 81 no. of parameters 2
mean(Y) -0.0510664 se(Y) 0.880323
AR 1-5 test: F(5,74) = 0.68673 [0.6350]
ARCH 1-4 test: F(4,73) = 0.69229 [0.5997]
Normality test: Chi^2(2) = 1.9979 [0.3683]
Hetero test: F(2,78) = 0.068104 [0.9342]
Hetero-X test: F(2,78) = 0.068104 [0.9342]
RESET23 test: F(2,77) = 0.30157 [0.7405]
- The first dataset is the same as in 3a).
- Sp8102:
- R-squared is about 0.44, so X explains less of the variation in Y here than in the previous dataset.
- The standard error is a bit bigger, i.e. there is more variation in the observations around the regression line.
- The coefficient on X is not significantly different from 1 (see the check after these notes).
- The tests show less evidence of heteroskedasticity, but there is still autocorrelation.
- Sp8103:
- R-squared is essentially zero and the coefficient on X is almost zero.
- The tests are fine here, though: no autocorrelation, no heteroskedasticity, and normality is not rejected.
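Check using the figures from EQ(5): t = (0.987515 − 1) / 0.1237 ≈ −0.10, so H0: β = 1 clearly cannot be rejected for the Sp8102 data.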
Model 2:
EQ( 7) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8101.xls
The estimation sample is: 1980(1) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant 23.8431 7.113 3.35 0.0012 0.1245
X 0.574115 0.01996 28.8 0.0000 0.9128
sigma 26.6013 RSS 55902.4978
R^2 0.91284 F(1,79) = 827.4 [0.000]**
Adj.R^2 0.911737 log-likelihood -379.679
no. of observations 81 no. of parameters 2
mean(Y) 209.948 se(Y) 89.5389
AR 1-5 test: F(5,74) = 1473.0 [0.0000]**
ARCH 1-4 test: F(4,73) = 4905.3 [0.0000]**
Normality test: Chi^2(2) = 6.8954 [0.0318]*
Hetero test: F(2,78) = 27.065 [0.0000]**
Hetero-X test: F(2,78) = 27.065 [0.0000]**
RESET23 test: F(2,77) = 901.70 [0.0000]**
EQ( 8) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8102.xls
The estimation sample is: 1980(2) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Y_1 0.910625 0.04064 22.4 0.0000 0.8685
Constant -0.698079 0.3294 -2.12 0.0374 0.0558
X 0.0476839 0.09402 0.507 0.6135 0.0034
X_1 0.0943174 0.1007 0.937 0.3517 0.0114
sigma 0.85749 RSS 55.8820059
R^2 0.93712 F(3,76) = 377.5 [0.000]**
Adj.R^2 0.934638 log-likelihood -99.1637
no. of observations 80 no. of parameters 4
mean(Y) 3.01193 se(Y) 3.35402
AR 1-5 test: F(5,71) = 0.33846 [0.8880]
ARCH 1-4 test: F(4,72) = 0.58851 [0.6720]
Normality test: Chi^2(2) = 1.3638 [0.5057]
Hetero test: F(6,73) = 0.73998 [0.6192]
Hetero-X test: F(9,70) = 1.0028 [0.4462]
RESET23 test: F(2,74) = 2.2272 [0.1150]
EQ( 9) Modelling Y by OLS
The dataset is: M:\ECON 4160\data\Sp8103.xls
The estimation sample is: 1980(2) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Y_1 0.0395480 0.1093 0.362 0.7185 0.0017
Constant -0.0732441 0.09597 -0.763 0.4477 0.0076
X 0.0208488 0.09170 0.227 0.8208 0.0007
X_1 0.227191 0.09151 2.48 0.0152 0.0750
sigma 0.856199 RSS 55.7138403
R^2 0.0769963 F(3,76) = 2.113 [0.106]
Adj.R^2 0.040562 log-likelihood -99.0432
no. of observations 80 no. of parameters 4
mean(Y) -0.0669558 se(Y) 0.87411
AR 1-5 test: F(5,71) = 0.62517 [0.6811]
ARCH 1-4 test: F(4,72) = 0.70692 [0.5898]
Normality test: Chi^2(2) = 0.35271 [0.8383]
Hetero test: F(6,73) = 0.68527 [0.6620]
Hetero-X test: F(9,70) = 0.77278 [0.6417]
RESET23 test: F(2,74) = 0.99447 [0.3748]
Sp8102:
- R-squared is high.
- The coefficients on the X's are not significant.
- The coefficient on the lagged Y is significant.
- The tests are ok
Sp8103:
- R-squared is low.
- The coefficients on X and on lagged Y are not significant.
- The coefficient on lagged X is significant.
- The tests are ok
Model 3
RALS
EQ(10) Modelling Y by RALS
The dataset is: M:\ECON 4160\data\Sp8101.xls
The estimation sample is: 1980(2) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant -6.92801 4.495 -1.54 0.1274 0.0299
X 0.792874 0.02379 33.3 0.0000 0.9352
Uhat_1 1.05356 0.005487 192. 0.0000 0.9979
sigma 1.37865 RSS 146.35158
no. of observations 80 no. of parameters 3
mean(Y) 212.206 se(Y) 87.7537
NLS using analytical derivatives (eps1=0.0001; eps2=0.005):
Strong convergence
Roots of error polynomial:
real imag modulus
1.0536 0.00000 1.0536
ARCH 1-4 test: F(4,72) = 2.5384 [0.0472]*
Normality test: Chi^2(2) = 1.5593 [0.4586]
Hetero test: F(2,77) = 0.56747 [0.5693]
Hetero-X test: F(2,77) = 0.56747 [0.5693]
EQ(11) Modelling Y by RALS
The dataset is: M:\ECON 4160\data\Sp8102.xls
The estimation sample is: 1980(2) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant 0.186602 5.575 0.0335 0.9734 0.0000
X 0.00911751 0.09517 0.0958 0.9239 0.0001
Uhat_1 0.976263 0.03010 32.4 0.0000 0.9318
sigma 0.881768 RSS 59.8686801
no. of observations 80 no. of parameters 3
mean(Y) 3.01193 se(Y) 3.35402
NLS using analytical derivatives (eps1=0.0001; eps2=0.005):
Strong convergence
Roots of error polynomial:
real imag modulus
0.97626 0.00000 0.97626
ARCH 1-4 test: F(4,72) = 0.99624 [0.4153]
Normality test: Chi^2(2) = 1.0621 [0.5880]
Hetero test: F(2,77) = 0.23177 [0.7937]
Hetero-X test: F(2,77) = 0.23177 [0.7937]
EQ(12) Modelling Y by RALS
The dataset is: M:\ECON 4160\data\Sp8103.xls
The estimation sample is: 1980(2) - 2000(1)
Coefficient Std.Error t-value t-prob Part.R^2
Constant -0.0679270 0.1038 -0.654 0.5150 0.0055
X -0.00438685 0.09414 -0.0466 0.9630 0.0000
Uhat_1 0.0469083 0.1129 0.416 0.6789 0.0022
sigma 0.884456 RSS 60.2342082
no. of observations 80 no. of parameters 3
mean(Y) -0.0669558 se(Y) 0.87411
NLS using analytical derivatives (eps1=0.0001; eps2=0.005):
Strong convergence
Roots of error polynomial:
real imag modulus
0.046908 0.00000 0.046908
ARCH 1-4 test: F(4,72) = 0.61528 [0.6530]
Normality test: Chi^2(2) = 1.7531 [0.4162]
Hetero test: F(2,77) = 0.12131 [0.8859]
Hetero-X test: F(2,77) = 0.12131 [0.8859]
Sp8101
- The autocorrelation coefficient is bigger than 1 (1.054). This means that the error process is explosive and the model is therefore unstable.
- The tests are ok, which is logical because the RALS estimation takes the autocorrelation into account (a comparable estimation approach is sketched after these notes).
Sp8102:
- The autocorrelation coefficient is less than 1, which means that the model is stable. But since its value is 0.97, the speed of convergence is low.
- The tests indicate that the classical assumptions are satisfied.
Sp8103:
- The autocorrelation coefficient is very low, 0.0469, which reflects a high speed of convergence. This means that the model is very stable.
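For reference, a regression with AR(1) errors of this kind can also be estimated outside PcGive, for example with statsmodels' GLSAR, which iterates between estimating the regression coefficients and the AR(1) coefficient of the residuals. This is only a comparable sketch (not PcGive's RALS), and the file path and column names are assumptions:

import pandas as pd
import statsmodels.api as sm

data = pd.read_excel("Sp8102.xls")        # hypothetical path/column names
X = sm.add_constant(data["X"])

model = sm.GLSAR(data["Y"], X, rho=1)     # static regression with AR(1) errors
res = model.iterative_fit(maxiter=10)     # alternate between the beta and rho estimates

print(res.params)                         # regression coefficients
print(model.rho)                          # estimated AR(1) coefficient of the errors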
Exercise 10)
a) Using the equations given in this exercise:
p_t = a·q_t + b·s_t + u_1t
q_t = c·p_t + e·d_t + u_2t
X1 = q_t = quantity
X2 = p_t = price
X3 = s_t = supply
X4 = d_t = demand
Equation 1 contains the variable s_t, which does not appear in the demand equation; conversely, the demand equation contains the variable d_t, which does not appear in the supply equation. Each equation thus excludes exactly one exogenous variable, which is the requirement for exact identification of both equations in this system (the reduced form is derived below).
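For completeness, the reduced form follows by solving the two structural equations for p_t and q_t (assuming ac ≠ 1); this also shows why OLS on the structural form is problematic, since both disturbances enter both endogenous variables:
p_t·(1 − ac) = b·s_t + a·e·d_t + u_1t + a·u_2t
q_t·(1 − ac) = c·b·s_t + e·d_t + c·u_1t + u_2t
Each reduced-form equation expresses an endogenous variable (X1 = q_t, X2 = p_t) in terms of the exogenous variables X3 and X4 and both disturbances, which is what SYS(1) estimates.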
SYS( 1) Estimating the system by OLS (RF)
The dataset is: M:\ECON 4160\data\bfm101.xls
The estimation sample is: 1 - 400
URF equation for: X1
Coefficient Std.Error t-value t-prob
X3 0.376430 0.04132 9.11 0.0000
X4 0.146745 0.04260 3.44 0.0006
Constant U 0.103629 0.06013 1.72 0.0856
sigma = 1.19601 RSS = 567.8859679
URF equation for: X2
Coefficient Std.Error t-value t-prob
X3 -0.219346 0.01676 -13.1 0.0000
X4 0.378187 0.01728 21.9 0.0000
Constant U 0.0579151 0.02440 2.37 0.0181
sigma = 0.485192 RSS = 93.4582914
log-likelihood -729.665772 -T/2log|Omega| 405.485055
|Omega| 0.13167411 log|Y'Y/T| -0.0451642009
R^2(LR) 0.862243 R^2(LM) 0.49697
no. of observations 400 no. of parameters 6
F-test on regressors except unrestricted: F(4,792) = 335.467 [0.0000] **
F-tests on retained regressors, F(2,396) =
X3 552.144 [0.000]** X4 469.487 [0.000]**
Constant U 2.82915 [0.060]
correlation of URF residuals (standard deviations on diagonal)
X1 X2
X1 1.1960 0.77656
X2 0.77656 0.48519
correlation between actual and fitted
X1 X2
0.43589 0.79185
Single-equation diagnostics using reduced-form residuals:
X1 : Portmanteau(12): Chi^2(12) = 19.394 [0.0795]
X1 : AR 1-2 test: F(2,395) = 3.0443 [0.0487]*
X1 : ARCH 1-1 test: F(1,398) = 1.3745 [0.2417]
X1 : Normality test: Chi^2(2) = 0.48019 [0.7866]
X1 : Hetero test: F(4,395) = 1.1420 [0.3363]
X1 : Hetero-X test: F(5,394) = 1.1468 [0.3350]
X2 : Portmanteau(12): Chi^2(12) = 16.886 [0.1539]
X2 : AR 1-2 test: F(2,395) = 1.6114 [0.2009]
X2 : ARCH 1-1 test: F(1,398) = 0.11376 [0.7361]
X2 : Normality test: Chi^2(2) = 3.2973 [0.1923]
X2 : Hetero test: F(4,395) = 1.1115 [0.3506]
X2 : Hetero-X test: F(5,394) = 0.95466 [0.4455]
Vector Portmanteau(12): Chi^2(48) = 47.614 [0.4886]
Vector AR 1-2 test: F(8,784) = 1.0564 [0.3919]
Vector Normality test: Chi^2(4) = 5.9089 [0.2061]
Vector Hetero test: F(12,1040)= 0.80499 [0.6456]
Vector Hetero-X test: F(15,1082)= 0.85315 [0.6179]
Vector RESET23 test: F(8,784) = 0.82584 [0.5799]
MOD( 2) Estimating the model by 1SLS (SF)
The dataset is: M:\ECON 4160\data\bfm101.xls
The estimation sample is: 1 - 400
Equation for: X1 (quantity)
Coefficient Std.Error t-value t-prob
X2 0.820108 0.1060 7.74 0.0000
X4 -0.180153 0.05974 -3.02 0.0027
Constant U -0.00314324 0.06207 -0.0506 0.9596
sigma = 1.22593
Equation for: X2 (price)
Coefficient Std.Error t-value t-prob
X3 -0.371772 0.02091 -17.8 0.0000
X1 0.380681 0.02287 16.6 0.0000
Constant U -0.0168808 0.02780 -0.607 0.5440
sigma = 0.553067
log-likelihood -1126.41198 -T/2log|Omega| 8.7388452
no. of observations 400 no. of parameters 6
No restrictions imposed
correlation of structural residuals (standard deviations on diagonal)
X1 X2
X1 1.2259 0.00000
X2 0.00000 0.55307
Single-equation diagnostics using reduced-form residuals:
X1 : AR 1-2 test: F(2,395) = 238.62 [0.0000]**
X1 : ARCH 1-1 test: F(1,398) = 2.0339 [0.1546]
X1 : Normality test: Chi^2(2) = 4.3366 [0.1144]
X1 : Hetero test: F(4,395) = 26.184 [0.0000]**
X1 : Hetero-X test: F(5,394) = 34.393 [0.0000]**
X2 : AR 1-2 test: F(2,395) = 559.43 [0.0000]**
X2 : ARCH 1-1 test: F(1,398) = 0.075181 [0.7841]
X2 : Normality test: Chi^2(2) = 0.36136 [0.8347]
X2 : Hetero test: F(4,395) = 40.440 [0.0000]**
X2 : Hetero-X test: F(5,394) = 97.155 [0.0000]**
Vector Normality test: Chi^2(4) = 2.2586 [0.6883]
Vector Hetero test: F(12,1040)= 41.372 [0.0000]**
Vector Hetero-X test: F(15,1082)= 56.822 [0.0000]**
MOD( 3) Estimating the model by 2SLS
The dataset is: M:\ECON 4160\data\bfm101.xls
The estimation sample is: 1 - 400
Equation for: X1
Coefficient Std.Error t-value t-prob
X2 -1.71615 0.3017 -5.69 0.0000
X4 0.795769 0.1347 5.91 0.0000
Constant U 0.203020 0.09915 2.05 0.0413
sigma = 1.91585
Equation for: X2
Coefficient Std.Error t-value t-prob
X3 -1.18947 0.2634 -4.52 0.0000
X1 2.57716 0.6609 3.90 0.0001
Constant U -0.209154 0.1482 -1.41 0.1590
sigma = 2.72275
log-likelihood -729.665772 -T/2log|Omega| 405.485055
no. of observations 400 no. of parameters 6
No restrictions imposed
correlation of structural residuals (standard deviations on diagonal)
X1 X2
X1 1.9158 -0.92495
X2 -0.92495 2.7228
Single-equation diagnostics using reduced-form residuals:
X1 : AR 1-2 test: F(2,395) = 3.0443 [0.0487]*
X1 : ARCH 1-1 test: F(1,398) = 1.3745 [0.2417]
X1 : Normality test: Chi^2(2) = 0.48019 [0.7866]
X1 : Hetero test: F(4,395) = 1.1420 [0.3363]
X1 : Hetero-X test: F(5,394) = 1.1468 [0.3350]
X2 : AR 1-2 test: F(2,395) = 1.6114 [0.2009]
X2 : ARCH 1-1 test: F(1,398) = 0.11376 [0.7361]
X2 : Normality test: Chi^2(2) = 3.2973 [0.1923]
X2 : Hetero test: F(4,395) = 1.1115 [0.3506]
X2 : Hetero-X test: F(5,394) = 0.95466 [0.4455]
Vector Normality test: Chi^2(4) = 5.9089 [0.2061]
Vector Hetero test: F(12,1040)= 0.80499 [0.6456]
Vector Hetero-X test: F(15,1082)= 0.85315 [0.6179]
- Each equation in the system has an endogenous variable on the right-hand side along with the exogenous variables. This violates the classical assumption of a zero conditional mean of the disturbance term, since we condition on all the RHS variables. OLS on the structural equations therefore gives biased and inconsistent estimators.
- The OLS regression results show that the estimate on X2 (price) is positive in the demand function. This contradicts the economic theory that an increase in price should dampen demand. The other estimates correspond to economic theory.
- 2SLS, ILS and IV give the same results when the equations are exactly identified. This can be shown theoretically, and it can also be verified numerically from the output above (see the calculation after this list).
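As a numerical check (not part of the output), the ILS estimates under exact identification are ratios of the reduced-form coefficients in SYS(1), namely the coefficients of the exogenous variable excluded from each structural equation, and they reproduce the 2SLS estimates in MOD(3):
coefficient on X2 in the X1 equation: 0.376430 / (−0.219346) ≈ −1.716 (2SLS: −1.71615)
coefficient on X1 in the X2 equation: 0.378187 / 0.146745 ≈ 2.577 (2SLS: 2.57716)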