What David Laibson and Andrei Shleifer are Teaching for Behavioral Economics—Jeffrey Ohl

The majority of my research time for many years has gone to the effort to develop the principles for a well-constructed National Well-Being Index. We (Dan Benjamin, Ori Heffetz, Kristen Cooper and I) call this the “Well-Being Measurement Initiative.” Every year, we choose a full-time RA, serving for two years in an overlapping generations structure that gives us two full-time RAs at a time. We have a very rigorous hiring process. Jeffrey Ohl survived that process and is doing an excellent job. This guest post is from him. Since our full-time RAs sit at the NBER, it was easy for him to audit a course at Harvard. Below are Jeffrey’s words.


This post summarizes what I learned auditing Economics 2030 at Harvard University from January to May 2022. I am grateful to Professor David Laibson and Professor Andrei Shleifer for letting me take this course.

Behavioral economics aims to provide more realistic models of people’s actions than the Neoclassical paradigm epitomized by expected utility theory. Expected utility theory (and its compatriot, Bayesian inference) assumes that people’s preferences are stable and that people can and do rationally incorporate all available information when making decisions. The implications often violate common sense: people buy lottery tickets, have self-control problems, and fail to learn from their mistakes.

For decades, the rational benchmark was defended. Milton Friedman’s “as-if” defense holds that a model’s assumptions need not be accurate; the model only needs to make correct predictions. For example, a model that assumes a billiards player calculates the equations governing the paths of the balls may predict his choice of shots quite well, even though he actually plays by intuition. Similarly, Friedman argued, people might not think about maximizing utility moment to moment, but if they make choices as if they did, then the rationality assumption is justified for making predictions. Another argument, also due to Friedman, is evolutionary: even if most investors are not rational, the irrational ones earn lower profits over time, leaving only the rational investors to conduct transactions of meaningful size.

But over the last 30 years, behavioral economics has become more widely accepted in the economics profession. Richard Thaler served as president of the American Economic Association in 2015 and won the Nobel Prize in Economics in 2017, Daniel Kahneman (a psychologist) received the Nobel in 2002, and John Bates Clark Medals went to Andrei Shleifer (1999), Matthew Rabin (2001), and Raj Chetty (2013).

This post will thus not discuss issues with assuming rationality in economic models, to which Richard Thaler’s Misbehaving and Kahneman’s Thinking, Fast and Slow are superb popular introductions. David Laibson also has an excellent paper on how behavioral economics has evolved (Behavioral Economics in the Classroom). Instead, I will discuss some debates within behavioral economics.

Prospect Theory and its flaws

One of the most highly cited papers in the social sciences is the 1979 paper Prospect Theory by Daniel Kahneman and Amos Tversky (KT). In it, KT propose an alternative to expected utility, prospect theory, which is summarized in two famous graphs.

The first is the value function, which includes (a) diminishing sensitivity to both losses and gains, (b) loss aversion, and (c) the fact that gains and losses are assessed relative to a reference point, rather than in absolute terms. Loss aversion has been fairly well validated; for example, a 2010 paper by Justin Sydnor shows that people choose home insurance deductibles in a way that risk aversion alone cannot explain.[1] Other animals may even exhibit loss aversion.

The more contested of the two figures is the probability weighting function (PWF). The PWF posits that people “smear” probabilities toward 0.5: they overweight small probabilities and underweight large ones (the latter is sometimes called the certainty effect).
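To make the two graphs concrete, here is a minimal Python sketch of both pieces. The functional forms and parameter values are the median estimates from Tversky and Kahneman’s 1992 follow-up paper (cumulative prospect theory); the original 1979 paper leaves the exact parameterization open.

```python
# Prospect theory's two building blocks, using the functional forms and
# median parameter estimates from Tversky and Kahneman (1992).

ALPHA = 0.88   # curvature: diminishing sensitivity to gains and losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains
GAMMA = 0.61   # probability weighting curvature (their estimate for gains)

def value(x):
    """Value of a gain/loss x relative to the reference point (here, 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

def weight(p):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

print(value(100), value(-100))   # ~57.6 vs ~-129.5: losses loom larger
print(weight(0.01), weight(0.99))  # ~0.055 and ~0.91: smeared toward the middle
```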

The PWF is attractive. If true, it can parsimoniously explain why people buy lottery tickets with negative expected value, as well as overpriced rental car insurance.
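Here is the lottery logic in numbers, reusing the value() and weight() functions from the sketch above. The odds, prize, and ticket price are hypothetical figures of my own choosing, not data from any real lottery.

```python
# Illustrative only: a stylized lottery ticket with long odds.
p_win, prize, price = 1 / 292_000_000, 100_000_000, 2

expected_value = p_win * prize - price  # about -$1.66: a bad bet
# Simplified prospect-theory evaluation (ignores weighting the near-certain loss):
pt_value = weight(p_win) * value(prize) + value(-price)

print(expected_value)  # negative
print(pt_value)        # positive: overweighting the tiny win probability
                       # makes the ticket look attractive
```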

But there are several issues with the PWF.

First, it assumes that people make decisions based on numerical probabilities. But that is rarely how we assess risk. In practice, we are exposed to a process (the stock market, for example) and must learn how risky it is from experience. It turns out this difference matters: Hertwig and Erev showed that people make different choices when a probability is described to them numerically than when they must infer it from repeated exposure to gambles (the “description-experience gap”).
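A standard explanation for this gap is that small personal samples tend to under-represent rare events. A quick simulation, with parameters chosen purely for illustration, shows how stark this is.

```python
import random

random.seed(0)

# A rare event with a 5% chance, observed via 20 personal "experiences".
# How often does someone learning from experience never see it at all?
p_rare, n_draws, n_people = 0.05, 20, 100_000

never_saw_it = sum(
    all(random.random() > p_rare for _ in range(n_draws))
    for _ in range(n_people)
)
print(never_saw_it / n_people)  # ~0.36: over a third never observe the rare
                                # event, so choices from experience underweight
                                # it, while described probabilities tend to be
                                # overweighted, as the PWF describes
```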

Another weakness of the PWF is that it does not depend on the magnitudes of a gamble’s losses and gains, only on their objective probability, p. In a 2012 paper, Bordalo, Gennaioli, and Shleifer (BGS) propose an alternative theory, salience theory, which captures a key intuition: holding probabilities constant, people pay more attention to large payoff differences when assessing gambles. BGS tested this claim experimentally and found that subjective probabilities did indeed depend on the stakes in the experiment, whereas the PWF predicts that the stakes should not matter.
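To see the mechanics, here is a minimal sketch of salience-distorted decision weights in the spirit of BGS (2012). The salience function sigma(x, y) = |x - y| / (|x| + |y| + theta) and the rank-based delta-discounting follow the paper, but the values of theta and delta and the example payoffs are illustrative choices of mine, not the paper’s calibration.

```python
THETA = 0.1  # avoids division by zero; dampens salience at tiny stakes
DELTA = 0.7  # in (0, 1]: how sharply less-salient states are downweighted

def salience(x, y):
    """Salience of a state in which the two lotteries pay x and y."""
    return abs(x - y) / (abs(x) + abs(y) + THETA)

def distorted_weights(probs, saliences):
    """Rank states by salience (rank 1 = most salient), then reweight."""
    ranks = {s: r for r, s in enumerate(sorted(saliences, reverse=True), 1)}
    raw = [p * DELTA ** ranks[s] for p, s in zip(probs, saliences)]
    total = sum(raw)
    return [w / total for w in raw]

# Choice: a 50/50 gamble paying (upside, 2) versus a sure 6.
for upside in (10, 40):
    sal = [salience(upside, 6), salience(2, 6)]  # up state, down state
    w_up, w_down = distorted_weights([0.5, 0.5], sal)
    print(upside, round(w_up, 3), round(w_down, 3))
# upside=10 -> the modest gamble's downside is more salient: w_up ~ 0.41
# upside=40 -> the big upside grabs attention instead:      w_up ~ 0.59
# Same 50% objective probability, different decision weight: the stakes
# matter, which a probability weighting function cannot produce.
```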

Salience theory can also neatly explain preference reversals that prospect theory cannot. For example, Lichtenstein and Slovic found that subjects stated a higher willingness-to-pay for Lottery A than for Lottery B, but chose Lottery B over Lottery A when asked to pick between the two. Under prospect theory, this should not occur. But under salience theory, the differing salience of the lottery payoffs in the two elicitation settings can explain the reversal. Many other applications of salience in economics are discussed in this literature review.

The limitations of nudges

Behavioral economics’ largest influence on public policy has probably come via “nudges”: subtle modifications of people’s environments that help them satisfy their own goals without reducing their choice set. Cass Sunstein and Richard Thaler discuss nudges in depth in their best-selling book Nudge. I learned that, despite the initial excitement, nudges are often of limited effectiveness.

For example, one famous nudge is to make 401(k) plans “opt-out” rather than “opt-in”, known as automatic enrollment (AE). One of the first studies of AE’s effects was a 2001 paper by Madrian and Shea. In 2004, Choi, Laibson, Madrian, and Metrick did a follow-up study and found that when the horizon was extended from 12 to 27 months, auto-enrollment had approximately zero impact on wealth accumulation. The nudge’s initial benefits were offset because it anchored some employees at a lower savings rate than they might otherwise have chosen, and because the default fund had a conservative allocation.
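A back-of-the-envelope calculation shows how this offsetting can work. The participation and contribution numbers below are hypothetical, chosen only to illustrate the mechanism; they are not estimates from either paper.

```python
# Hypothetical illustration of how anchoring at a low default can offset
# the participation gains from automatic enrollment.

def avg_saving_rate(participation, avg_contribution):
    """Average contribution as a share of pay across ALL employees."""
    return participation * avg_contribution

opt_in = avg_saving_rate(participation=0.50, avg_contribution=0.07)
# Auto-enrollment: more participants, but many stay anchored at a low default.
auto_enroll = avg_saving_rate(participation=0.90, avg_contribution=0.04)

print(f"opt-in:      {opt_in:.1%} of pay")       # 3.5%
print(f"auto-enroll: {auto_enroll:.1%} of pay")  # 3.6%: the participation
                                                 # gain is mostly offset
```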

A similar flop occurred in the credit card domain. Adams, Guttman-Kenney, Hayes, Hunt, Laibson, and Stewart found that a fairly aggressive nudge, which completely removed the button allowing credit card customers to pay only the minimum amount, did not have a significant effect on total repayments or credit card debt. Borrowers mainly compensated for the nudge by making fewer small payments throughout the month.

Laibson summarizes the results of two decades of nudges in the table at the top of this post, which is excerpted from his 2020 AEA talk. Two features stand out: (1) the short-run impact of nudges is often larger than the long-run impact, because habits, societal pressures, and the like pull people back to their pre-nudge behavior; and (2) large welfare effects from nudges are rare. However, small effect sizes can still imply cost-effectiveness, since nudges cost little. Both extreme optimism and extreme pessimism about nudges seem unwarranted.

A unifying theory for behavioral economics to replace EU theory?

One of the main critiques of behavioral economics is that it has no unifying theory.

Anyone familiar with KT’s heuristics-and-biases program will know the slew of biases and errors they documented: the availability heuristic, the representativeness heuristic, the conjunction fallacy, and so on. These biases often conflict, and there is no underlying theory that predicts when one dominates another.

For example, suppose that just after visiting young friends in Florida, I’m asked to estimate the percentage of people in Florida who are over 55. The representativeness heuristic suggests I’d overestimate this percentage: Florida has more older people than other states, so being over 55 is representative of being from Florida. But the availability heuristic implies I’d mainly recall the young people I just saw, causing me to underestimate the share of older people in the state. What does behavioral economics predict?

Rational actor models sidestep these issues by resting on a small set of assumptions that, even if not exactly true, are reasonable enough that most economists view them as good approximations. This has led to rational choice serving as a common language among economists: when theories are written in this language, their assumptions can be transparently criticized. But when behavioral biases are introduced ad hoc, comparing theories becomes difficult.

This inertia means that, even though it is imperfect, the rational actor framework will probably remain the primary way economists talk to each other unless a replacement comes along.

In a series of recent papers, BGS and co-authors have begun to outline such a replacement. In papers such as Memory and Representativeness; Memory, Attention, and Choice; and Memory and Probability, they micro-found decision-making in the psychology of attention and memory. This research program predicts the existence of many biases originally discovered piecemeal by psychologists, as well as new ones. Rather than making small tweaks to existing models, they start from a biological foundation for how people judge probabilities and value goods, and see where it leads.

For example, Memory and Probability assumes that people (a) estimate probabilities by sampling from memory and (b) are more likely to recall events that are similar to a cue, even if those events are irrelevant. These two assumptions alone predict the availability heuristic, the representativeness heuristic, and the conjunction fallacy. The advantage of this unified approach is that researchers don’t need to weigh one bias against another; rather, many biases are nested in a theory that makes a single prediction.
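To see how the two assumptions generate availability-style overestimation, here is a toy simulation of my own construction. The event labels, similarity scores, and the rule counting cue-similar recollections as evidence of the event are all invented for illustration; this is a loose sketch in the spirit of the paper, not its actual model or calibration.

```python
import random

random.seed(1)

# Toy model: probabilities are estimated by sampling from memory, and recall
# favors experiences similar to the cue, including irrelevant ones.
# Each memory: (label, similarity to the cue "death from a flood").
memories = (
    [("flood death", 1.0)] * 2            # the rare real instances
    + [("other death", 0.2)] * 98         # common, but dissimilar to the cue
    + [("flood movie scene", 0.9)] * 10   # vivid, similar, and irrelevant
)

def estimated_share(mems, n_samples=100_000):
    """Sample memories with probability proportional to cue similarity; count
    any flood-like recollection as evidence of a flood death."""
    weights = [sim for _, sim in mems]
    draws = random.choices(mems, weights=weights, k=n_samples)
    return sum("flood" in label for label, _ in draws) / n_samples

print(2 / 100)  # 0.020: true share of flood deaths among the 100 deaths
print(estimated_share([m for m in memories if m[0] != "flood movie scene"]))
# ~0.093: cue-similar instances are over-recalled (availability)
print(estimated_share(memories))
# ~0.359: irrelevant but similar memories intrude, inflating the estimate
```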

The paper also predicts a new bias, which the authors validate experimentally: people overestimate the probability of “homogeneous” classes of events, i.e., classes whose instances are similar to one another, such as “death from a flood.” Likewise, they underestimate the likelihood of “heterogeneous” classes, such as “death from causes other than a flood.”

In closing, one of the most important challenges in making economic models more accurate will be to develop a theory that incorporates the quirks of how our brains actually work while remaining mathematically tractable enough to be adopted by the economics profession.

[1] Some studies, however, have shown that loss aversion is reduced with training and proper incentives. The original prospect theory paper was also ambiguous about how the reference point from which gains and losses are assessed is formed.