
*** MEAN-VARIANCE ANALYSIS

Under the EUH, consider a utility function U(x), where x = w + a denotes terminal wealth. Analyzing behavior under risk in a mean-variance context requires that expected utility can be expressed as EU(x) = W(M, V), where M = E(x) and V = Var(x).

1- The Case of CARA preferences under Normality:

We have already shown that, under CARA and normality of the distribution of x, maximizing EU(x) is equivalent to maximizing [M - (r/2)V], where r = -U"/U' is the constant absolute risk aversion coefficient. In this case, decision analysis under risk can be conducted in the context of an additive mean-variance objective function. However, CARA does not allow for DARA risk preferences.
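
This equivalence can be checked numerically. The following sketch (not part of the original notes; the prospects A and B and all parameter values are hypothetical) verifies that ranking two normal prospects by EU(x) under CARA utility agrees with ranking them by M - (r/2)V:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2.0  # constant absolute risk aversion coefficient, r = -U"/U'

def expected_utility(M, V, n=1_000_000):
    """Monte Carlo estimate of EU(x) for U(x) = -exp(-r x), x ~ N(M, V)."""
    x = rng.normal(M, np.sqrt(V), size=n)
    return np.mean(-np.exp(-r * x))

def mv_objective(M, V):
    """The additive mean-variance objective M - (r/2)V."""
    return M - (r / 2) * V

# Two hypothetical prospects: A has the higher mean but much higher variance.
A = (1.0, 1.0)   # mv_objective = 0.0
B = (0.5, 0.1)   # mv_objective = 0.4
eu_A, eu_B = expected_utility(*A), expected_utility(*B)
print(eu_A < eu_B, mv_objective(*A) < mv_objective(*B))  # same ranking: True True
```

The rankings coincide because, for CARA-normal, EU(x) = -exp(-r M + r²V/2), a monotone increasing transform of M - (r/2)V.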

2- The Case of a Quadratic Utility Function:

We have already shown that, under a quadratic utility function, maximizing expected utility is equivalent to maximizing a mean-variance preference function W(M, V). However, we have also shown that a quadratic utility function exhibits IARA and thus cannot exhibit DARA.
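
A quick numerical sketch (hypothetical parameter values) of why quadratic utility yields an exact mean-variance objective: with U(x) = x - b x², EU(x) depends on the distribution of x only through M and V, even for a skewed, non-normal distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
b = 0.05  # quadratic utility U(x) = x - b x^2 (increasing only for x < 1/(2b))

x = rng.exponential(2.0, size=1_000_000)  # a skewed, non-normal distribution
M, V = x.mean(), x.var()
eu = np.mean(x - b * x**2)
print(eu, M - b * (M**2 + V))  # identical up to floating-point error
```

This also shows the quadratic form's limits: U' = 1 - 2bx turns negative beyond the bliss point x = 1/(2b), and r(x) = 2b/(1 - 2bx) is increasing in x (IARA).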

3- The General Case: (Meyer, AER, 1987)

Let

x = M + σe,

where M = E(x) is the mean of x (= a location parameter), e is a random variable with mean zero (E(e) = 0), and σ > 0 is a mean-preserving spread (= a scale parameter). In the case where Var(e) = 1, the parameters M and σ can be interpreted as the mean and the standard deviation of x, respectively. Note that, as long as the mean E(x) exists, this representation does not impose any restriction on the form of the probability function of x (or e).

Consider the case where a decision maker chooses among random variables of the form x = M + σe, where all random variables differ from each other only by the location parameter M and/or the scale parameter σ. Then, under the EUH, the expected utility of the decision maker takes the form

EU(x) = EU(M + σe) = W(M, σ).

Note that this does not impose any restriction on the form of the probability function of e, nor on the shape of the utility function U(.). The objective function W(M, σ) provides a general way to motivate a mean-standard deviation analysis (or mean-variance analysis with V = σ²).
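
As an illustration (hypothetical numbers; log utility chosen for concreteness), W(M, σ) can be computed directly for any zero-mean e, here a discrete three-point distribution:

```python
import numpy as np

# A hypothetical zero-mean e: no normality (or any other shape) is assumed.
e_vals = np.array([-1.0, 0.0, 2.0])
e_prob = np.array([0.4, 0.4, 0.2])   # E(e) = -0.4 + 0.0 + 0.4 = 0

def W(M, sigma):
    """W(M, sigma) = E U(M + sigma*e) for U(x) = log(x)."""
    return np.sum(e_prob * np.log(M + sigma * e_vals))

print(W(5.0, 1.0))                # expected utility of x = 5 + e
print(W(6.0, 1.0) > W(5.0, 1.0))  # W rises with M, since U' > 0: True
print(W(5.0, 1.5) < W(5.0, 1.0))  # and falls with sigma here, since U" < 0: True
```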

* Notation:

Let WM = ∂W/∂M = EU'

Wσ = ∂W/∂σ = E(U' e)

WMM = ∂²W/∂M² = EU"

Wσσ = ∂²W/∂σ² = E(U" e²)

WMσ = ∂²W/∂M∂σ = E(U" e).

Let W0 denote some constant level of expected utility. Differentiating W0 = W(M(σ), σ) with respect to σ gives

WM ∂M/∂σ + Wσ = 0,

or

∂M/∂σ = -Wσ/WM = S(M, σ),

where S(M, σ) = ∂M/∂σ is the slope of the indifference curve between M and σ, holding expected utility at the constant level W0.
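
A finite-difference sketch of this slope (hypothetical setup: e = ±1 with equal probability, log utility), checking that moving along dM = S dσ leaves expected utility unchanged to first order:

```python
import numpy as np

def W(M, s):
    """E log(M + s*e) for e = +1 or -1 with probability 1/2 each."""
    return 0.5 * np.log(M + s) + 0.5 * np.log(M - s)

M, s, h = 5.0, 1.0, 1e-6
W_M = (W(M + h, s) - W(M - h, s)) / (2 * h)   # numerical dW/dM
W_s = (W(M, s + h) - W(M, s - h)) / (2 * h)   # numerical dW/ds
S = -W_s / W_M                                # slope of the indifference curve

d = 1e-4
drift = W(M + S * d, s + d) - W(M, s)  # step along the indifference curve
print(S, abs(drift))  # S = 0.2 here; drift is ~0 (second order in d)
```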

* Property 1: WM {<, =, >} 0 iff U' {<, =, >} 0, for all (M, σ).

* Property 2: Wσ {<, =, >} 0 iff U" {<, =, >} 0, for all (M, σ).


Proof: W = E(U’ e) = U’ e f(e) de, where f(e) ≥ 0 is the probability function of e,

= [U’ y f(y) dy]- U” y f(y) dy de, using integration by parts,

= - U” y f(y) dy de, since y f(y) dy = E(e) = 0,

= sign(U” ),

since E(e) = y f(y) dy = y f(y) dy + y f(y) dy = 0 implies that

y f(y) dy ≤ 0 for all e, and y f(y) dy < 0 for some e,

= sign(U”), when  > 0.

Noting that E(U’ e) = E[(U’ – E(U’)) e] = COV(U’, e), this corresponds to the following intuitive result:

COV(U’, e) = sign(U’)/e) = sign(U” ).
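
The covariance result can be checked by simulation (a sketch with hypothetical choices: a uniform zero-mean e and log utility, so U" < 0):

```python
import numpy as np

rng = np.random.default_rng(2)
e = rng.uniform(-1.0, 1.0, size=1_000_000)   # zero-mean e
M, sigma = 5.0, 1.0
u_prime = 1.0 / (M + sigma * e)   # U(x) = log(x): U' = 1/x > 0, U" = -1/x^2 < 0
cov = np.mean(u_prime * e) - np.mean(u_prime) * np.mean(e)
print(cov < 0)   # True: W_sigma = E(U' e) = COV(U', e) < 0, matching sign(U")
```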

* Property 3: S(M, σ) > 0 if U' > 0 and U" < 0, for all (M, σ).

Proof: S(M, σ) = -Wσ/WM = -E(U' e)/EU', which has the sign of -U"/U' > 0, using Properties 1 and 2.

* Property 4: W(M, σ) is a concave function of (M, σ) iff U" ≤ 0, for all (M, σ).

Proof: ∂²W/∂(M, σ)² = E[U" (1 e)' (1 e)], which is a negative semi-definite matrix iff U" ≤ 0, since (1 e)' (1 e) is positive semi-definite.

Note: Property 4 implies that the set {(M, σ): W(M, σ) ≥ W0} is a convex set.
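
Property 4 can be verified numerically: a finite-difference Hessian of W (a sketch with a hypothetical two-point e = ±1 and log utility, so U" < 0) should be negative semi-definite:

```python
import numpy as np

def W(M, s):
    """E log(M + s*e) for e = +1 or -1 with probability 1/2 each."""
    return 0.5 * np.log(M + s) + 0.5 * np.log(M - s)

M, s, h = 5.0, 1.0, 1e-4
W_MM = (W(M + h, s) - 2 * W(M, s) + W(M - h, s)) / h**2
W_ss = (W(M, s + h) - 2 * W(M, s) + W(M, s - h)) / h**2
W_Ms = (W(M + h, s + h) - W(M + h, s - h)
        - W(M - h, s + h) + W(M - h, s - h)) / (4 * h**2)
H = np.array([[W_MM, W_Ms], [W_Ms, W_ss]])   # Hessian of W in (M, s)
print(np.linalg.eigvalsh(H))   # both eigenvalues are negative
```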

Note: W is a concave function of σ (the "standard deviation"), but not necessarily a concave function of σ² (the "variance"). This indicates that a "mean-standard deviation" analysis is more convenient than a "mean-variance" analysis.

* Property 5: ∂S(M, σ)/∂M {<, =, >} 0 iff U(x) exhibits {DARA, CARA, IARA}, for all (M, σ), given U' > 0.

Proof: S(M,)/M = -WM/WM + WMMW/(WM)2

= [-WMWM + WMMW]/(WM)2

= sign[-E(U"e) EU' + EU" E(U'e)]

= sign{EU'[-E(U"e) + EU"z]} where z = E(U'e)/EU',

= sign{E[U"(z-e)]}

= sign{E[r U'(e-z)]} where r = -U"/U',

Note that E[U'(e-z)] = ∫ U'(e-z) f(e) de = E(U' e) - z EU' = 0, by the definition of z. Since f(e) ≥ 0 and U' > 0, [U'(e-z) f(e)] changes sign only once (from negative to positive, at e = z) as e increases. Thus

. {r = constant} implies that E[r U'(e-z)] = 0 and ∂S/∂M = 0,

. {r = increasing} implies that E[r U'(e-z)] > 0 and ∂S/∂M > 0,

. {r = decreasing} implies that E[r U'(e-z)] < 0 and ∂S/∂M < 0.
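
A numerical check of Property 5 (hypothetical setup: log utility, which is DARA since r(x) = 1/x decreases in x, and a two-point e = ±1):

```python
import numpy as np

def S(M, s, h=1e-6):
    """Finite-difference slope -W_sigma/W_M for U = log, e = +1/-1 equally."""
    def W(M, s):
        return 0.5 * np.log(M + s) + 0.5 * np.log(M - s)
    W_M = (W(M + h, s) - W(M - h, s)) / (2 * h)
    W_s = (W(M, s + h) - W(M, s - h)) / (2 * h)
    return -W_s / W_M

print(S(4.0, 1.0), S(8.0, 1.0))  # 0.25 vs 0.125: S falls as M rises (DARA)
```

For this particular example the slope works out to S(M, σ) = σ/M, so the decline in M is transparent.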

* Property 6: ∂S(tM, tσ)/∂t {<, =, >} 0 iff U(x) exhibits {DRRA, CRRA, IRRA}, for all (M, σ), given U' > 0.

Proof: Evaluated at t = 1, S(tM, tσ) = -Wσ/WM = -E(U' e)/EU'. It follows that, at t = 1,

∂S(tM, tσ)/∂t = -E(U" x e)/EU' + E(U" x) E(U' e)/(EU')²

= sign{-EU' E(U" x e) + E(U" x) E(U' e)}

= sign{E[r U' (e - z)]}, where r = -xU"/U' and z = E(U' e)/EU'.

We have shown that E[U'(e-z)] = ∫ U'(e-z) f(e) de = 0, and that [U'(e-z) f(e)] changes sign only once (from negative to positive) as e increases. Thus

. {r = constant} implies that E[r U'(e-z)] = 0 and ∂S/∂t = 0,

. {r = increasing} implies that E[r U'(e-z)] > 0 and ∂S/∂t > 0,

. {r = decreasing} implies that E[r U'(e-z)] < 0 and ∂S/∂t < 0.
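
Similarly for Property 6 (hypothetical setup: log utility is CRRA with -xU"/U' = 1, two-point e = ±1), the slope should be invariant along rays through the origin:

```python
import numpy as np

def S(M, s, h=1e-6):
    """Finite-difference slope -W_sigma/W_M for U = log, e = +1/-1 equally."""
    def W(M, s):
        return 0.5 * np.log(M + s) + 0.5 * np.log(M - s)
    W_M = (W(M + h, s) - W(M - h, s)) / (2 * h)
    W_s = (W(M, s + h) - W(M, s - h)) / (2 * h)
    return -W_s / W_M

M, s = 5.0, 1.0
slopes = [S(t * M, t * s) for t in (0.5, 1.0, 2.0, 4.0)]
print(slopes)   # all equal along the ray: here each is ~0.2 = s/M
```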
