Econ 522 – Lecture 26 (Dec 13, 2007)

Today’s material is not on the final.

Pretty much everything we’ve done this semester has assumed that people are perfectly rational, and respond to incentives according to what they correctly perceive to be their own best interest.

  • Property and nuisance law: people can bargain with each other, so entitlements end up with the owners who value them most
  • Contract law: parties can negotiate efficient contracts, and courts can enforce them correctly
  • Tort law: people react rationally to incentives, and courts can assign liability and damages correctly
  • Criminal law: even criminals react rationally to incentives, committing crimes only when the benefit outweighs the expected cost

These are strong assumptions. They are useful assumptions – they gave us a lot of predictions about how laws would affect behavior, and therefore what laws would lead to efficiency. But the question remains whether they’re valid assumptions.

In the last decade or two, there’s been huge growth in the field of behavioral economics. Behavioral economics studies how people’s actual behavior differs from the predictions of the standard model. We mentioned a couple of examples over the course of the semester: for example, we mentioned that people don’t react to probabilistic risks the way expected-utility theory would suggest.

Behavioral economics started out as a fairly ad-hoc discipline: someone would pick a prediction of the standard model – for instance, maximizing expected utility under uncertainty, discounting future payoffs by a consistent per-period discount rate, or maximizing only one’s own payoff in a multi-player setting – and then do experiments (have a bunch of undergraduates play games in a lab) or look for instances in the real world where the prediction was violated.

Over time, behavioral economics has generated some fairly robust conclusions about systematic ways in which people’s behavior differs from the standard model of perfect rationality.

What’s important is that the way people’s behavior deviates from the standard predictions is not random. If it were, we could explain it simply as random errors – people aren’t necessarily infinitely wise, so they sometimes make mistakes in calculating the right behavior, and these mistakes can go in any direction. Instead, we find that people’s behavior seems to have consistent biases – that is, in many situations, deviations from perfect rationality all seem to go in the same direction.

At its best, behavioral economics also holds itself to a sort of a “higher standard” than traditional economics. Traditional economics makes assumptions (basically, rationality and optimization), derives predictions, and then asks whether the predictions seem to be right, but doesn’t spend that much time questioning the assumptions themselves. Behavioral economics tries to justify the assumptions as well.

The paper on the syllabus by Jolls, Sunstein, and Thaler, “A Behavioral Approach to Law and Economics,” discusses some of these biases observed by behavioral economists, and proposes how these more complicated (and therefore more accurate) views of human behavior could be incorporated into law and economics. How people actually behave, and how this differs from the standard model, has implications for every use of law and economics:

  • The positive part
      • “Positive” here means “predictive” – making predictions about how people will respond to particular laws
      • The positive approach also allows us to predict (or explain) the laws that do exist – as outcomes of some process (either the common law “evolving” toward efficiency, as we’ve discussed in class, or the outcome of a legislative process)
      • (Positive statements are things like, “an increase in expected punishment will lead to a decrease in crime”)
  • The prescriptive part
      • Once we know how people react to a given law, we can make prescriptions about how the law should be designed to achieve particular goals
      • (Prescriptive statements are things like, “to achieve efficiency, the law should specify injunctive relief when transaction costs are low, and damages when transaction costs are high”)
      • If people behave differently than the standard model predicts, then the law should be designed to take this into account
  • The normative part
      • The normative question is: what should the goal of the legal system be?
      • Throughout this class, we’ve mostly assumed that the goal of the law is economic efficiency – we gave a number of arguments to defend this
      • This gets much trickier when a behavioral approach is used
      • One of the observations of behavioral economics is that people’s preferences are not as well-defined and stable as the standard model assumes
      • But this makes even measuring efficiency hard, since we don’t know what preferences to use
      • (An example: one of the findings of behavioral economics is that people value things more once they have them. So if I gave one of you a chocolate bar, you might get all excited about it, and be more hurt by losing it than if you hadn’t had it to begin with. Suppose I give one of you a chocolate bar, and offer you an opportunity to sell it to someone else. Good chance you wouldn’t. Even if I offered to subsidize the purchase – I’d throw in 50 cents on top of what they pay you – you might not. So we’d conclude you value the chocolate bar more than they do.
      • But if we’d started out giving the chocolate bar to them, maybe they wouldn’t have wanted to sell it to you either.
      • But this muddles the question of who values it more: if I give it to you, you value it more than he does; if I give it to him, he values it more than you do. But now we have no way to gauge which allocation is efficient!)

So that’s the goal of behavioral law and economics – to give a more accurate model of how people actually behave, and use that model to reconsider the positive, prescriptive, and normative conclusions of law and economics.

The Jolls, Sunstein and Thaler paper concedes that so far, the results are fairly sparse; the paper reads more like a proposal for future research than a bunch of conclusions. Still, some of the initial results – basically, taking behavioral biases documented elsewhere and considering their implications for law and economics – are quite interesting.

Behavioral biases – the ways people’s actual behavior deviates from the standard model of perfect self-interested rationality – tend to be broken up into three categories:

  • Bounded rationality
      • People aren’t perfect – we have limited computational abilities, flawed memories, and imperfect powers of perception
      • This leads us to make “mistakes”; it also leads us to use simple “rules of thumb”, rather than detailed analysis, in many situations
  • Bounded willpower
      • Even when we know what’s “right”, we don’t always do it – we eat too much, don’t go to the gym, have trouble quitting smoking
      • This means that commitment devices – finding a way to “give up” options – can have value, which doesn’t make sense in the standard model. We’ve all seen people turn down leftover cake – “if I have it at home, I’ll eat it, and I don’t want to eat it.”
      • (This is why savings plans that “force” people to save, or gym memberships that reward you for going to the gym, can have value)
  • Bounded self-interest
      • People aren’t completely selfish – we all do nice things for other people. But even in anonymous situations with strangers, people tend to care about others’ outcomes as well as their own – we’ll see examples.

On to some examples.

We begin with an experiment done at Cornell. The experiment took 44 students in an advanced undergrad Law and Econ class, and gave half of them tokens. Each person (those who got tokens and those who didn’t) was also given a personal value – an amount of money they could exchange a token for at the end of class if they had one. Then people were given an opportunity to trade.

The market for tokens worked just like the standard model would predict: people with higher token values bought them from people with lower token values.

But that was with tokens, which had an artificial value that everyone knew objectively. So they reran the experiment. This time, half the class was chosen at random and given Cornell coffee mugs. Then students were allowed to trade.

If, as in the standard model, each person knew exactly what a mug was worth to them, we’d predict about half the mugs would change hands. Since the people who got them were chosen at random, about half the mugs should have gone to people who valued them above the median valuation, and half to people who valued them below it; that latter half should all have been sold to the people with high valuations who didn’t get mugs.
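The 50% prediction can be checked with a quick Monte Carlo sketch. The uniform private valuations below are an illustrative assumption, not data from the experiment; only the class size of 44 and the random assignment come from the setup described above:

```python
import random

random.seed(0)  # reproducible runs

def expected_trades(n_students=44, n_trials=10_000):
    """Fraction of mugs the standard model predicts will change hands:
    owners are chosen at random, so mugs held by below-median valuers
    should all be sold to higher-valuing non-owners."""
    total = 0
    for _ in range(n_trials):
        values = [random.random() for _ in range(n_students)]       # hypothetical private valuations
        owners = random.sample(range(n_students), n_students // 2)  # random half get mugs
        median = sorted(values)[n_students // 2]
        total += sum(1 for i in owners if values[i] < median)       # these mugs should trade
    return total / (n_trials * (n_students // 2))

print(expected_trades())  # ≈ 0.5 – about half the mugs should change hands
```

Because ownership is assigned independently of valuation, each mug lands below the median valuation with probability one-half, which is where the prediction comes from.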

Instead, only 15% of the mugs traded hands. And on average, people who got mugs asked more than twice as much money for them as the people who didn’t get them were willing to pay. And the effect didn’t go away if the experiment was repeated.

The conclusion was that having something makes you value it more – this is referred to as an endowment effect. (In this case, having a mug made you value having it more highly.)

So what? Well, the big “so what” is that this seems to contradict Coase. Coase predicted that without transaction costs, the initial allocation should not affect the final allocation – whoever starts out with an object (or an entitlement), it will naturally flow to whoever values it the most. But endowment effects mean that the initial allocation does matter in predicting the final allocation. And if preferences really change depending on whether you got the object, it becomes very unclear how to even define efficiency!

Recall what we said about injunctive relief in nuisance cases. We argued that when transaction costs are small, injunctions would work well, since they clarify the two sides’ threat points so they can bargain to an efficient outcome. Endowment effects challenge this result – they say that whoever is allocated the right initially comes to value it more, and therefore may not be willing to give it up, regardless of whom efficiency would have favored ex ante.

The existence of this bias is fairly robust. One of the chapters in Sunstein’s book, “Behavioral Law and Economics,” documents twelve different studies where people’s Willingness to Pay for something they didn’t have was compared to their Willingness to Accept for giving up something they did have. In every case, the payment required to give up something they had was greater – typically three times greater or more – than their willingness to pay for the same thing.

This also has implications for damages. If you asked someone ahead of time how much money they would accept to lose an arm, the number would be huge. If someone lost an arm, and you asked them how much money it would take to make them overall as well-off as before, the number would be smaller.

(This is also partly due to the fact that people adapt to new circumstances better than they anticipate. That is, if someone loses their arm, they find ways of dealing with it which make it less bad than they would have guessed ahead of time. Again, though, this calls into question which measure should be used in assessing efficiency. Suppose someone with two arms thinks losing one would be a catastrophe, on the order of a $10,000,000 loss. Someone who lost an arm realizes that life’s still not that bad, and that the damage done was, say, $500,000. Should a construction firm have to take precautions that cost $3,000,000 to prevent each lost arm?)
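The question at the end of that parenthetical is just a cost-benefit comparison, and the answer flips depending entirely on which valuation of the harm you plug in. A minimal sketch, using the numbers from the example above:

```python
def precaution_efficient(cost_per_arm_saved, harm_valuation):
    """Cost-benefit test: a precaution is efficient iff the cost of
    preventing one lost arm is less than the harm a lost arm does."""
    return cost_per_arm_saved < harm_valuation

COST = 3_000_000      # precaution cost per arm saved (from the example)
EX_ANTE = 10_000_000  # a two-armed person's anticipated loss
EX_POST = 500_000     # the experienced loss, after adaptation

print(precaution_efficient(COST, EX_ANTE))  # True: efficient under the ex-ante valuation
print(precaution_efficient(COST, EX_POST))  # False: inefficient under the ex-post valuation
```

Same precaution, same accident – whether the law should demand it depends on which set of preferences we treat as the “real” ones, which is exactly the normative problem raised earlier.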

Another bias Jolls/Sunstein/Thaler discuss is hindsight bias. Once something happens, people have trouble assessing what its likelihood was before the fact. Specifically, they overestimate what the ex-ante probability was, knowing that the thing did in fact happen.

(Ask a Packers fan what they thought the odds were in August that the Packers would be 11-2 right now. Once something happens, we can always find ways to rationalize it – “they’ve got Favre, maybe some of the kids will step up, they’ll win some close games, it’s not impossible”. I couldn’t find Vegas lines…)

Why does this matter? Determining negligence usually requires figuring out what the probability was that something would happen, after it happens. A storage company decides the risk of a fire at its warehouse is 1 in 1000, and so it doesn’t install a $10,000 sprinkler system to protect $1,000,000 in stored goods. Now a fire occurs, and the jury has to sort out whether the company was negligent. Knowing the fire occurred, they might decide the probability of a fire was 1 in 50, and find the company liable.
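The jury’s task here is essentially the Learned Hand formula from tort law: failing to take a precaution is negligent when its cost B is less than the expected loss p·L it would prevent. Plugging in the warehouse numbers shows how hindsight flips the verdict:

```python
def negligent(precaution_cost, p_accident, loss):
    """Hand-formula test: not taking the precaution is negligent
    when its cost B is less than the expected loss p * L."""
    return precaution_cost < p_accident * loss

SPRINKLER = 10_000
GOODS = 1_000_000

print(negligent(SPRINKLER, 1 / 1000, GOODS))  # False: ex ante, $10,000 > $1,000 expected loss
print(negligent(SPRINKLER, 1 / 50, GOODS))    # True: with hindsight, $10,000 < $20,000
```

The company’s decision and the jury’s verdict use the same formula; only the probability estimate differs, and hindsight bias pushes that estimate up.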

(The same thing happens in lawsuits against publicly traded companies that failed to disclose a particular risk to investors. Was the risk material, so the company was fraudulent in hiding it? Or was it an extremely small risk that just happened to occur, so the company did its job and got unlucky?)

The effect of hindsight bias should be clear: juries will find negligence more often than they would if they could perfectly assess ex-ante probabilities after the fact. The proposals Jolls/Sunstein/Thaler give for dealing with hindsight bias, though, have problems themselves.

(One thing they suggest is, in some cases, to keep the jury in the dark about what happened. Obviously, since they were asked to serve on a jury, the jurors know something bad happened. However, in some cases, either action or inaction would entail risk: treating a patient with a risky drug might cause them to die, but not giving them the drug might also cause them to die. They suggest the jury could be given the facts available at the time, without being told what choice was made, and asked to decide whether either action would have constituted negligence. Still, this won’t always work: in many cases the jury will be able to infer what happened from the fact that there’s a trial at all, and to make it work, the jurors would have to avoid newspapers, know nothing about the trial, and not even know which lawyers represented the plaintiff and which represented the defendant!)

(The other suggestion they make is to raise the standard of proof for finding negligence – from “preponderance of the evidence,” interpreted as 51% certainty, to, say, the “clear and convincing evidence” standard, generally interpreted as 60-70% certainty. But this assumes that hindsight bias is of a particular magnitude, not just that it exists – and that the “preponderance of the evidence” standard would be efficient if there were no hindsight bias.)

Another bias they consider is what they call “self-serving bias”. This can be thought of as relative optimism that exists even when both sides have the same information.

In another experiment they cite, students – undergrads and law students – were randomly assigned to the roles of plaintiff and defendant, knowing they would be asked to negotiate a settlement. They were all given the same facts – based on an actual case in Texas. Prior to negotiations, they were each asked to write down a guess as to the damages the judge actually awarded, as well as what they felt was a “fair” settlement – these answers would not be used in any way during the negotiations.

Although roles were assigned randomly, the students representing the plaintiffs guessed $14,500 higher than those representing the defendants as to the judge’s actual award, and answered $17,700 higher when asked for a “fair” settlement.

(They give another example where the presidents of teachers unions and the presidents of school boards were asked what other cities were “comparable” to their own, since comparables were often brought up during salary negotiations. Not surprisingly, the union presidents listed cities with higher average salaries than those listed by school board presidents.)

What does self-serving bias suggest? That pre-trial settlements may not happen as often as the standard model would predict, and that sharing information won’t solve the problem. That is, even if both sides have access to all the same information, they may still be relatively optimistic about their chances at trial, and therefore unable to reach a settlement. (It also has implications for wage negotiations and strikes.)
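To see why an optimism gap blocks settlement even with shared information, consider the standard settlement-range calculation: a deal exists only when the defendant’s maximum offer (their expected judgment plus their trial cost) exceeds the plaintiff’s minimum ask (their expected judgment minus their trial cost). The $100,000 baseline judgment and $5,000 trial costs below are made-up numbers; the $14,500 gap is from the experiment above:

```python
def settlement_range(p_expected, d_expected, p_cost, d_cost):
    """Return the (low, high) range of mutually acceptable settlements,
    or None when the parties' expectations leave no overlap."""
    low = p_expected - p_cost    # plaintiff's minimum acceptable offer
    high = d_expected + d_cost   # defendant's maximum acceptable offer
    return (low, high) if high >= low else None

# Unbiased parties who agree on a $100,000 expected judgment settle easily...
print(settlement_range(100_000, 100_000, 5_000, 5_000))  # (95000, 105000)

# ...but a $14,500 self-serving gap in expectations wipes out the surplus
print(settlement_range(114_500, 100_000, 5_000, 5_000))  # None
```

With symmetric beliefs, the two sides’ combined trial costs create a $10,000 surplus from settling; a relative-optimism gap larger than the combined trial costs eliminates that surplus, so the case goes to trial.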

Another example of self-serving bias is the old cliché that 80% of people think they’re above-average drivers. The authors mention that this sort of bias can be used to design more effective public campaigns: in promoting safe driving, move from “drive carefully or you’ll cause an accident” to “drive carefully – there are bad drivers out there you have to avoid!”

There’s another bias, similar to hindsight bias, in how people perceive the probabilities of events. People tend to overestimate the probability of a certain type of accident happening in the future if they’ve recently observed a similar accident. Jolls/Sunstein/Thaler refer to this as availability – a memory of a recent accident is available in your mind, and colors your perception. Adding to this is salience – basically, how vivid the memory is.