Fifteen years ago, in 2008, Richard Thaler and Cass Sunstein published a best-seller called Nudge: Improving Decisions about Health, Wealth, and Happiness. The idea built on several decades of research in behavioral economics showing that people’s decision-making is often subject to biases or limitations that cause them to make choices that, on later reflection, they would prefer not to have made. The idea of “nudge” policies–sometimes called “libertarian paternalism”–is to change the way certain decisions are presented and framed so as to counterbalance the pre-existing biases and limitations, while still leaving individuals the choice to make the same decisions if they wish to do so.

If “nudge” policies work as intended, many individuals will be pleased that they were nudged, because it helped them make the decision that they actually wanted to make. For example, people might later feel pretty good about nudges that encouraged them to quit smoking, save more money, or eat a healthier diet.

However, a couple of active researchers in behavioral economics are now expressing some doubts about the role of “nudges” in public policy. Nick Chater and George Loewenstein have written “The i-frame and the s-frame: How focusing on individual-level solutions has led behavioral public policy astray” (Behavioral and Brain Sciences, published online September 5, 2022, still forthcoming in a future issue). They write:

An influential line of thinking in behavioral science, to which the two authors have long subscribed, is that many of society’s most pressing problems can be addressed cheaply and effectively at the level of the individual, without modifying the system in which the individual operates. We now believe this was a mistake, along with, we suspect, many colleagues in both the academic and policy communities.

Their conceptual argument comes in two main parts. One is that while behavioral interventions often have some positive effect, the size of the effect is often small. They mention many papers on this theme, but as an illustration, consider the findings of Stefano DellaVigna and Elizabeth Linos in “RCTs to Scale: Comprehensive Evidence From Two Nudge Units” (Econometrica, 2022, 90:1, pp. 81-116). They worked with data from two “nudge units”–that is, organizations that work with a wide range of US government agencies on nudge policies. One was a consulting firm called BIT North America; the other was the Office of Evaluation Sciences (OES), which is part of the US General Services Administration. Here’s how DellaVigna and Linos describe their project:

In this paper, we present the results of a unique collaboration with two major “Nudge Units”: BIT North America, which conducts projects with multiple U.S. local governments, and OES, which collaborates with multiple U.S. Federal agencies. Both units kept a comprehensive record of all trials from inception in 2015. As of July 2019, they conducted a total of 165 trials testing 347 nudge treatments and affecting almost 37 million participants. In a remarkable case of administrative transparency, each trial had a trial report, including in many cases a pre-analysis plan. The two units worked with us to retrieve the results of all trials, 87 percent of which have not been documented in working papers or academic publications. This evidence differs from a traditional meta-analysis in two ways: (i) the large majority of these findings have not previously appeared in academic journals; (ii) we document the entirety of trials run by these units, with no scope for selective publication.

How did these programs work out? Chater and Loewenstein refer to behavioral nudges as “i-frame” interventions, meaning that they seek to change outcomes via a focus on individuals. Here’s their summary of the findings from DellaVigna and Linos:

I-frame interventions alone are likely to be insufficient to deal with the myriad problems facing humanity. Indeed, disappointingly often they yield small or null results. DellaVigna and Linos (2022) analyze all the trials run by two large U.S. Nudge Units: 126 RCTs covering 23 million people. Whereas the average impact of nudges reported in academic journals is large – at 8.7% – their analysis yielded a mean impact of just 1.4%. Why the difference? They conclude that selective publication in academic journals explains about 70% of the discrepancy. DellaVigna and Linos also surveyed nudge practitioners and academics, to predict the effect sizes their evaluation would uncover. Practitioners were far more pessimistic, and realistic, than academics, presumably because of their direct experience with nudge interventions.

In short, nudges based on research studies in behavioral economics have a positive effect, but when deployed at scale in the real world, the average effect is far smaller, and often close to zero.
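To make the arithmetic behind that comparison concrete, here is a quick back-of-the-envelope sketch. The 8.7% and 1.4% figures are from DellaVigna and Linos as quoted above; the decomposition is simply my own reading of their claim that selective publication explains about 70% of the gap:

```python
# Back-of-the-envelope decomposition of the gap between published and
# at-scale nudge effect sizes, per DellaVigna and Linos (2022).

published_avg = 8.7   # average nudge impact in academic journals (percent)
at_scale_avg = 1.4    # average impact across all Nudge Unit trials (percent)

gap = published_avg - at_scale_avg        # total discrepancy: 7.3 points
selective_publication_share = 0.70        # share of the gap attributed
                                          # to selective publication

explained = gap * selective_publication_share   # roughly 5.1 points
remaining = gap - explained                     # roughly 2.2 points

print(f"Total gap: {gap:.1f} percentage points")
print(f"Attributed to selective publication: {explained:.1f} points")
print(f"Remaining, from other causes: {remaining:.1f} points")
```

On this reading, most of the apparent power of nudges in the academic literature reflects which studies get published, with the remaining shortfall due to other factors, such as differences between the interventions academics study and the ones agencies actually run.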

But as long as the nudge policies are cheap and the effects are at least mildly positive, why not do them? The second main part of the Chater-Loewenstein argument applies the behavioral economics concept of framing to public policy choices themselves. They argue that behavioral economics creates an “i-frame,” with the emphasis on nudging individuals to make better choices. Once the problem and solution are framed in that way, it is harder for “s-frame” policies that focus on systemic changes to be adopted, or even considered.

In theory, there is no contradiction between, say, an i-frame policy of discouraging smoking with warning labels and images on cigarette packages and s-frame policies like taxes on cigarettes and bans on smoking in workplaces and restaurants. But the authors cite research evidence that, for example, when people and policy-makers first consider a “nudge” i-frame policy for encouraging carbon-free energy, they are then less likely to support an s-frame carbon tax. Similarly, farmers who have taken steps to adapt to climate change can then be less likely to support government steps to reduce climate change. Indeed, Chater and Loewenstein argue that there is a pattern in which companies try to channel policy discussions toward i-frame interventions, and behavioral economists and government agencies line up to evaluate the potential nudges. For example, food companies that make unhealthy products often emphasize how individuals should eat in moderation. Energy companies like BP promote the idea that each individual should consider their own carbon footprint, which emphasizes what individuals can do rather than government policy steps.

Chater and Loewenstein illustrate their argument with examples from various policy areas, including climate change, obesity, retirement savings, and pollution from plastic waste. They suggest that behavioral economists should refocus their attention, away from individual nudges and toward systemic tax and regulatory changes. They write: “We argue that the most important way in which behavioral scientists can contribute to public policy is by employing their skills to develop and implement value-creating system-level change.”

For a recent research working paper along these lines, I was interested in “Judging Nudging: Understanding the Welfare Effects of Nudges Versus Taxes,” by John A. List, Matthias Rodemeier, Sutanuka Roy, and Gregory K. Sun (Becker-Friedman Institute Working Paper, May 15, 2023). The authors look at policies in three areas where both nudges and either taxes or subsidies have been used: cigarettes, influenza vaccinations, and household energy. They summarize: “While nudges are effective in changing behavior in all three markets, they are not necessarily the most efficient policy. We find that nudges are more efficient in the market for cigarettes, while taxes are more efficient in the energy market. For influenza vaccinations, optimal subsidies likely outperform nudges. … Combining nudges and taxes does not always provide quantitatively significant improvements to implementing one policy tool alone.”