Expected returns and probabilities

Probabilities represent the chance of something happening. Some useful rules:

·  The sum of the probabilities of all possible outcomes is always 1 (100%).

·  ‘Or’ (for mutually exclusive outcomes) means sum the probabilities.

·  ‘And’ (for independent events) means multiply the probabilities.

Question: At university you can pass or fail a subject. If the probability of failing is 10%, what is the chance of passing the subject?

Answer: You can pass or fail, and the ‘or’ means add the probabilities. The sum of the probabilities of the complete set of outcomes is always one, so the probability of failing or passing must be one. Let the probability of passing be p_pass and the probability of failing be p_fail.

p_pass + p_fail = 1

p_pass + 0.1 = 1

p_pass = 1 − 0.1 = 0.9

So there’s a 90% chance of passing a single subject.

Question: In a 3 year university degree with 4 subjects per semester and 2 semesters per year, a student will complete 24 (= 3 × 2 × 4) subjects. If the chance of failing a single subject is 10%, what is the chance of passing every single subject? Assume that you have average ability and motivation, and that the result in each subject is independent of the others.

Answer: Passing every subject means that you must pass the first one and the second and the third and so on. Since it’s ‘and’, the probabilities must be multiplied. The chance of passing a single subject is 90%. So the chance of passing all 24 subjects is:

p_pass all 24 subjects = p_pass,1 × p_pass,2 × … × p_pass,24

= 0.9 × 0.9 × … × 0.9 = 0.9^24

= 0.079766443 ≈ 8%
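The arithmetic above can be checked with a few lines of Python (a minimal sketch; the independence of subject results is an assumption):

```python
# Chance of passing all 24 subjects, assuming each subject's result
# is independent (an assumption; real results are likely correlated).
p_pass = 1 - 0.10           # complement rule: pass and fail probabilities sum to 1
n_subjects = 3 * 2 * 4      # 3 years x 2 semesters x 4 subjects = 24
p_pass_all = p_pass ** n_subjects   # 'and' across subjects: multiply
print(round(p_pass_all, 4))  # 0.0798, roughly 8%
```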

Expected returns, uncertainty and probabilities

The expected return of an asset when there are different possible states of the world (good, ok, bad, and so on) is the sum, across all states, of the return in each state multiplied by the probability of that state occurring.

Calculation example

Question: Find the expected return on the stock market given the below information about stocks’ returns in different possible states of the economy.

Stock Returns in Different States of the Economy

State of economy / Probability / Return
Boom / 0.3 / 0.6
Normal / 0.5 / 0.1
Bust / 0.2 / -0.5

Answer: the expected return ‘E(r)’, also written μ, is equal to:

E(r) = p_1×r_1 + p_2×r_2 + … + p_n×r_n

= 0.3×0.6 + 0.5×0.1 + 0.2×(−0.5)

= 0.13 = 13%
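The probability-weighted sum above can be sketched in Python (state names and numbers are taken from the table):

```python
# Expected return: sum of each state's return weighted by its probability.
states = {
    "boom":   (0.3,  0.6),   # (probability, return)
    "normal": (0.5,  0.1),
    "bust":   (0.2, -0.5),
}
expected_return = sum(p * r for p, r in states.values())
print(round(expected_return, 2))  # 0.13 = 13%
```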

The St. Petersburg Paradox, from 1713

A coin-flipping game is offered at a casino.

You repeatedly toss a coin until you flip tails and then the game ends. So if you flip heads you keep flipping.

The payoff at the end of the game is $2 raised to the power of how many times you flipped the coin. So the payoff is $2^n, where n is the number of coin flips, including the final tails flip that ended the game.

If you flip tails on your first go, you get $2 (=2^1).

If you flip tails on your second go, you get $4 (=2^2).

If you flip tails on your third go, you get $8 (=2^3).

If you flip tails on your fourth go, you get $16 (=2^4), etc.

Question 1: What price would you pay to participate in the game? Assume that you can only play once.

Question 2: What is the expected payoff from participating in this game once?

Expected Payoff of St Petersburg Paradox Game

Number of coin flips / Probability / Payoff ($) / Probability × payoff ($)
1 / 0.5 / 2 / 1
2 / 0.25 / 4 / 1
3 / 0.125 / 8 / 1
4 / 0.0625 / 16 / 1
5 / 0.03125 / 32 / 1
6 / 0.015625 / 64 / 1
… / … / … / …
n / (1/2)^n / 2^n / 1

The probability multiplied by the payoff is always $1. Since it’s possible to keep flipping heads forever (say using a computer to simulate flips), the total expected payoff is equal to $1 + $1 + $1 and so on for the potentially infinite number of heads flips that are possible before flipping tails. So the expected payoff is infinite.

Expected Payoff = Σ (n=1 to ∞) (1/2)^n × 2^n = Σ (n=1 to ∞) 1 = ∞

This is the paradox. No one would pay an infinite amount to participate in this game once. Most people would only pay a few dollars.
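The gap between the infinite expectation and the few dollars people will pay can be seen in a quick sketch (a fair coin is assumed; truncating the sum at a maximum number of flips shows each extra possible flip adds exactly $1 of expected payoff):

```python
import random

def truncated_expected_payoff(max_flips):
    # Each term (1/2)^n * 2^n contributes exactly $1 to the expectation.
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

def play_once(rng):
    # Flip until tails; payoff is 2^(number of flips, including the tails flip).
    flips = 1
    while rng.random() < 0.5:   # heads: keep flipping
        flips += 1
    return 2 ** flips

rng = random.Random(0)
print(truncated_expected_payoff(10))   # 10.0: one dollar per possible flip
average = sum(play_once(rng) for _ in range(10_000)) / 10_000
print(average)  # the sample average stays modest despite the infinite expectation
```

The sample average is dominated by rare huge payoffs, which is part of why most people's willingness to pay sits so far below the expected value.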

Utility

Daniel Bernoulli, the cousin of Nicolas Bernoulli who first posed the paradox in 1713, proposed a resolution to the paradox in 1738. He observed that:

“The determination of the value of an item must not be based on the price, but rather on the utility it yields…. There is no doubt that a gain of one thousand ducats is more significant to the pauper than to a rich man though both gain the same amount.”

Utility can be thought of as a measure of happiness.

Bernoulli argued that people judge utility as a function of money. The utility function can be used to measure the happiness from money, rather than the raw amount of money by itself.

Rational people should always prefer more money to less. Rational people are also thought to have diminishing marginal benefits from money, so your first $1,000 should make you happier than the next $1,000.

For example, if a person’s utility function were the square root of wealth, so U(W) = W^(1/2), their utility would slope up at a diminishing rate: each extra dollar adds less happiness than the one before.

This square root function offers a solution to the paradox.

Bernoulli used a log utility curve which looks quite similar to the square root utility function. It slopes up but at a diminishing rate. He suggested summing the probability of each event multiplied by the change in the utility of the player’s wealth before and after the game.

Let the wealth before playing be W.

Let the cost of participating in a single game be C.

To find the maximum ticket price of the St Petersburg paradox game that a player with a natural logarithm utility curve would pay, find the change in the expected utility of wealth from playing the game once.

ChangeInExpectedUtility = Σ (n=1 to ∞) Probability × UtilityChange

= Σ (n=1 to ∞) (1/2)^n × [ln(WealthAfterGame) − ln(WealthBeforeGame)]

= Σ (n=1 to ∞) (1/2)^n × [ln(W + 2^n − C) − ln(W)]

Since the utility of the ‘wealth before game’, ln(W), is always the same no matter what the outcome, and the infinite sum of the probabilities that it’s multiplied by is one, that term can be taken out of the sum:

= Σ (n=1 to ∞) (1/2)^n × ln(W + 2^n − C) − ln(W)

This results in a finite expected utility from playing the St Petersburg paradox game. For example, a person with $1,000 in wealth would be willing to pay up to $10.95 to play the game once.

You can verify this mathematically by finding the sum of the geometric progression of the probability-weighted utility and then taking the limit as n goes to infinity. Set the change in expected utility equal to zero and solve for the cost ‘C’. This is the maximum price that the player would pay to participate, since at that price he gains zero utility.

Or a spreadsheet could be used to approximate the limit, trying different values of ‘C’ until the utility change is zero. Instead of manual trial and error, Solver or Goal Seek can find that maximum value of ‘C’ automatically.
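The Solver/Goal Seek approach can also be sketched in Python by bisecting on C (a sketch assuming W = $1,000, natural log utility, and truncation of the infinite sum at 200 flips, beyond which the terms are negligible):

```python
import math

def delta_expected_utility(C, W=1000.0, max_flips=200):
    # Change in expected log utility from paying C to play the game once.
    return sum((0.5 ** n) * (math.log(W + 2.0 ** n - C) - math.log(W))
               for n in range(1, max_flips + 1))

# Bisect: the utility change is positive at C = 0 and negative for large C.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if delta_expected_utility(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 2))  # about 10.95 for W = 1000
```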

Utility Function Properties

Since rational people prefer more wealth to less, utility curves always slope upwards. After all, you can’t have too much wealth. A utility curve’s gradient, or first derivative, is always positive.

Rational people also appear to have diminishing marginal utility from wealth. This means that the happiness increase from receiving a fixed amount of money gets smaller and smaller as a person accumulates more wealth. So people’s utility curves are thought to increase at a decreasing rate. Mathematically, their second derivative should always be negative: they are concave down, like a frown.

Both the square root function and the log function have this shape, sloping up at a decreasing rate, so they’re often used by economists to represent people’s utility functions.

Other Resolutions of the St Petersburg Paradox

Bernoulli’s utility theory has been integral to the development of modern economic and financial theory. It’s the dominant theory.

But keep in mind that this utility-theory solution to the St. Petersburg paradox is just one of several. Even today, more than 300 years later, new solutions are still being thought of and debated. Wikipedia has some interesting discussion about it: https://en.wikipedia.org/wiki/St._Petersburg_paradox#Recent_discussions

Risk Averse People

It appears that most people’s increase in happiness from more and more money diminishes as they get richer.

This provides a compelling reason for risk aversion, because it means that losing $1,000 hurts more than gaining $1,000 pleases. For example, if you had $10,000, the happiness from gaining $1,000 would be less than the sadness from losing $1,000 and having only $9,000.

People with this characteristic are called ‘risk-averse’. This is seen as normal.

It explains why people buy insurance contracts. Losing a small premium which is paid to the insurance company every month hurts less than losing your whole house in a fire and being homeless. For most people, this is true even if the present value of the insurance premiums is more than the cost of building a new house times the probability of a fire or other disaster.

Risk neutral people

People who gain the same marginal happiness from every dollar they gain are risk-neutral. They do not care about risk, and their utility curve is a straight line.

Risk lovers

People who are risk-lovers will have a concave up utility curve, such as a parabola (U(W) = W^2). They like risk so much that they are willing to pay to get more. This is seen as unusual and irrational.

Calculation Example

Question: An economics teacher runs an experiment. She approaches a poverty-stricken student with zero initial wealth. She offers him $100 if he flips a coin and it lands on heads. If it lands on tails he’ll be paid nothing.

Alternatively, the poor student is offered $30. If he takes this certain payment, he can’t take part in the single risky coin flip game.

If the student has a square root utility function, U(W) = W^(1/2), what would you expect him to do? Flip the coin and get a risky $100 or $0, or take the certain $30?

Answer:

ChangeInExpectedUtility = Σ (i=1 to 2) Probability × UtilityChange

= Σ (i=1 to 2) (1/2) × [√(WealthWithCoinToss) − √(CertainWealth)]

= (1/2)×(√100 − √30) + (1/2)×(√0 − √30)

= −0.47723

Since this is a negative change in expected utility, flipping the coin is a bad idea and the student would be expected to take the certain $30 instead.

Another way of looking at it is that the coin flip has an expected utility of 5 (= (1/2)×√100 + (1/2)×√0), while taking the certain $30 has a utility of 5.47723 (= √30). Therefore the certain $30 is better and the coin flip is worse.

Certainty Equivalent

The certainty equivalent of a risky gamble is the known amount of money that a person would be indifferent to having instead of taking part in the risky gamble.

For example, the certainty equivalent of the poor student in the previous question was $25 (= [(1/2)×√100 + (1/2)×√0]^2), which is his expected utility of 5 squared, since squaring utility converts it back to dollars given that the person has a square root utility function.

Offered a certain $25 instead of the gamble, the student would be indifferent: just as happy to take the $25 as to flip the coin and risk getting $100 or nothing.

Since the teacher offered $30, which is above the student’s certainty equivalent, he would logically take the $30.

If the teacher offered him $20, which is below the student’s certainty equivalent, he would logically take the risky coin flip.
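The certainty equivalent logic can be sketched with the square root utility function from the question:

```python
import math

def utility(w):
    # Square root utility from the example.
    return math.sqrt(w)

coin_flip_eu = 0.5 * utility(100) + 0.5 * utility(0)  # expected utility = 5.0
certainty_equivalent = coin_flip_eu ** 2              # invert the square root: $25

print(certainty_equivalent)        # 25.0
print(utility(30) > coin_flip_eu)  # True: a certain $30 beats the gamble
print(utility(20) > coin_flip_eu)  # False: at $20 he prefers the coin flip
```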

Example: Deal or No Deal Game Show

Here is a short video which helps explain the game show:

https://www.youtube.com/watch?v=hmZFHjQfx-o

A contestant who makes a surprising decision, see 3:00:

https://www.youtube.com/watch?v=H9CQscwXBt0

He can get $1 or $1m with a bank offer of around $400k, and he refuses!

Question: His certainty equivalent must be higher or lower than what value?

Another surprising contestant who refuses every offer:

https://www.youtube.com/watch?v=TmWvroEQhg0

See 9:28 where $750k and $1000k are available.

See 11:07 where the bank offers $880k while the $750k and $1000k are still available.

Question: What is the expected value of the contestant’s case when only the $750k and $1000k suitcases remain?

Question: What does the bank’s offer of $880k indicate about the contestant’s risk aversion, or the bank’s knowledge of what’s in the hidden suitcases?
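As a rough sketch of the first question (assuming the two remaining amounts are equally likely to be in the contestant’s case):

```python
# Expected value when only the $750k and $1,000k suitcases remain.
remaining = [750_000, 1_000_000]
expected_value = sum(remaining) / len(remaining)
print(expected_value)             # 875000.0
print(880_000 > expected_value)   # True: the bank's offer exceeds the expected value
```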
