What are Motivated Beliefs?

"Motivated beliefs" is a relatively recent development in economics, which offers a position between traditional assumptions of rational and purposeful behavior and the conventional approaches of behavioral economics. It is introduced and explored in a symposium in the Summer 2016 Journal of Economic Perspectives. Nicholas Epley and Thomas Gilovich contribute an introductory essay, "The Mechanics of Motivated Reasoning." Roland Bénabou and Jean Tirole have written "Mindful Economics: The Production, Consumption, and Value of Beliefs." Russell Golman, George Loewenstein, Karl Ove Moene, and Luca Zarri look at one aspect of motivated beliefs in "The Preference for Belief Consonance." Francesca Gino, Michael I. Norton, and Roberto A. Weber focus on another aspect in "Motivated Bayesians: Feeling Moral While Acting Egoistically."

Of course, I encourage you to read the actual papers. I've worked as the Managing Editor of JEP for 30 years, so I always want everyone to read the papers! But here's an overview and a taste of the arguments.

In traditional working assumptions of microeconomics, people act in purposeful and directed ways to accomplish their goals. Contrary to the complaints I sometimes hear, this approach doesn't require that people have perfect and complete information or that they are perfectly rational decision-makers. It's fairly straightforward to incorporate imperfect information and bounded rationality into these models. But even so, this approach is built on the assumption that people act purposefully to achieve their goals and do not repeatedly make the same mistakes without altering their behavior.

Behavioral economics, as it has usually been practiced, is sometimes called the "heuristics and biases" approach. It points to certain patterns of behavior that have been well-demonstrated in the psychology literature: for example, people often act in a short-sighted or myopic way that puts little weight on long-term consequences; people have a hard time evaluating how to react to low-probability events; people are "loss averse" and treat a loss of a certain amount as a negative outcome that is bigger in absolute value than a gain of the same amount; people exhibit a "confirmation bias," interpreting new evidence so that it tends to support previously held beliefs; and so on. In this view, people can make decisions and regret them, over and over. Short-sighted people may fail to save, or fail to exercise, and regret it. People who are loss-averse and have a hard time evaluating low-probability events may be sucked into buying a series of service plans and warranties that don't necessarily offer them a good value. When decision-making includes heuristics and biases, people can make the same mistakes repeatedly.
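Loss aversion, in particular, is often summed up with a kinked value function that is steeper for losses than for gains. Here is a minimal sketch; the function itself and the loss-aversion coefficient of about 2.25 (an estimate commonly attributed to Tversky and Kahneman) are my illustration, not something from the symposium:

```python
# Minimal sketch of a loss-averse value function in the spirit of
# prospect theory. The coefficient lam scales losses, so a loss of $100
# feels roughly 2.25 times as bad as a gain of $100 feels good.
def value(x, lam=2.25):
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    return x if x >= 0 else lam * x

# A gain and a loss of the same size are not symmetric in felt value:
print(value(100))    # 100
print(value(-100))   # -225.0
```

The kink at zero is what makes a "fifty-fifty chance to win or lose $100" feel like a bad bet, even though its expected monetary value is zero.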

The theory of motivated beliefs falls in between these possibilities. In these arguments, people are not strictly rational or purposeful decision-makers, but neither does their decision-making involve built-in flaws. Instead, people have a number of goals, which include feeling moral, competent, and attractive, fitting in with their existing social group, and achieving higher social status. As Epley and Gilovich explain in their introductory essay:

"This idea is captured in the common saying, “People believe what they want to believe.” But people don’t simply believe what they want to believe. The psychological mechanisms that produce motivated beliefs are much more complicated than that. … People generally reason their way to conclusions they favor, with their preferences influencing the way evidence is gathered, arguments are processed, and memories of past experience are recalled. Each of these processes can be affected in subtle ways by people’s motivations, leading to biased beliefs that feel objective …

One of the complexities in understanding motivated reasoning is that people have many goals, ranging from the fundamental imperatives of survival and reproduction to the more proximate goals that help us survive and reproduce, such as achieving social status, maintaining cooperative social relationships, holding accurate beliefs and expectations, and having consistent beliefs that enable effective action. Sometimes reasoning directed at one goal undermines another. A person trying to persuade others about a particular point is likely to focus on reasons why his arguments are valid and decisive—an attentional focus that could make the person more compelling in the eyes of others but also undermine the accuracy of his assessments. A person who recognizes that a set of beliefs is strongly held by a group of peers is likely to seek out and welcome information supporting those beliefs, while maintaining a much higher level of skepticism about contradictory information (as Golman, Loewenstein, Moene, and Zarri discuss in this symposium). A company manager narrowly focused on the bottom line may find ways to rationalize or disregard the ethical implications of actions that advance short-term profitability (as Gino, Norton, and Weber discuss in this symposium). 

The crucial point is that the process of gathering and processing information can systematically depart from accepted rational standards because one goal—desire to persuade, agreement with a peer group, self-image, self-preservation—can commandeer attention and guide reasoning at the expense of accuracy. Economists are well aware of crowding-out effects in markets. For psychologists, motivated reasoning represents an example of crowding-out in attention. In any given instance, it can be a challenge to figure out which goals are guiding reasoning …"

In one classic study, mentioned in the overview and several of the papers, participants were given a description of a trial and asked to evaluate whether they thought the accused was guilty or innocent. Some of the participants were assigned to play the role of prosecutors or defense attorneys before reading the information; others were not assigned a role until after evaluating the information. Those who were assigned to be prosecutors before reading the evidence were more likely to evaluate the evidence as showing the defendant was guilty, while those assigned to be defense attorneys before reading the evidence were more likely to evaluate the evidence as showing the defendant to be not guilty. The role you play will often influence your reading of evidence.

Bénabou and Tirole offer a conceptual framework for thinking about motivated beliefs, and then apply the framework in a number of contexts. They argue that motivated beliefs arise for two reasons, which they label "self-efficacy" and "affective." In the self-efficacy situation, people use their beliefs to give their immediate actions a boost. Can I do a good job in the big presentation at work? Can I save money? Can I persevere with a diet? In such situations, people are motivated to distort their interpretation of information and their own actions in a way that helps support their ability to persevere with a certain task. In the "affective" situation, people get immediate and visceral pleasure from seeing themselves as smart, attractive, or moral, and they can also get "anticipatory utility" from contemplating pleasant future outcomes.

However, if your motivated beliefs do not reflect reality, then in some cases reality will deliver some hard knocks in response. They analyze certain situations in which these hard knocks, again through a process of motivated beliefs, make you cling to those beliefs harder than ever. Moreover, if you are somewhat self-aware and know that you are prone to motivated beliefs, then you may be less likely to trust your own interpretations of evidence, which complicates the analysis further. Bénabou and Tirole apply these arguments in a wide array of contexts: political beliefs (a subject of particular interest in 2016), social and organizational beliefs, financial bubbles, and personal identity. Here's one example of a study concerning political beliefs (most citations omitted).

The World Values Survey reveals considerable differences in beliefs about the role of effort versus luck in life. In the United States, 60 percent of people believe that effort is key; in Western Europe, only 30 percent do on average, with major variations across countries. Moreover, these nationally dominant beliefs bear no relationship to the actual facts about social mobility or how much the poor are actually working, and yet they are strongly correlated with the share of social spending in GDP. At the individual level, similarly, voters’ perceptions of the extent to which people control their own fate and ultimately get their just desserts are first-order determinants of attitudes toward inequality and redistribution, swamping the effects of own income and education. 

In Bénabou and Tirole (2006), we describe how such diverse politico-ideological equilibria can emerge due to a natural complementarity between (self-)motivation concerns and marginal tax rates. When the safety net and redistribution are minimal, agents have strong incentives to maintain for themselves, and pass on to their children, beliefs that effort is more important than luck, as these will lead to working hard and persevering in the face of adversity. With high taxes and generous transfers, such beliefs are much less adaptive, so fewer people will maintain them. Thus, there can coexist: i) an “American Dream” equilibrium, with just-world beliefs about social mobility, and little redistribution; and ii) a “Euro-pessimistic” equilibrium, with more cynical beliefs and a large welfare state. In the latter, the poor are less (unjustly) stigmatized as lazy, while total effort (annual hours worked) and income are lower, than in the former. More generally, across all steady-states there is a negative correlation between just-world beliefs and the size of the welfare state, just as observed across countries.
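This complementarity between beliefs and tax rates can be illustrated with a toy best-response loop. The sketch below is my own drastic simplification for illustration only, not the actual Bénabou and Tirole (2006) model, and all the numerical values are arbitrary:

```python
# Toy illustration (NOT the Benabou-Tirole model): iterate between the
# share s of people holding "effort pays" beliefs and the tax rate tau
# that an electorate with those beliefs then chooses.
def equilibrium(s, steps=100):
    for _ in range(steps):
        # A majority with just-world beliefs votes for low redistribution.
        tau = 0.2 if s > 0.5 else 0.6
        # Beliefs are worth maintaining only if the after-tax return to
        # effort (1 - tau) exceeds some psychic cost of self-motivation.
        s = 0.9 if (1 - tau) > 0.5 else 0.1
    return s, tau

print(equilibrium(0.8))  # -> (0.9, 0.2): "American Dream" outcome
print(equilibrium(0.2))  # -> (0.1, 0.6): "Euro-pessimistic" outcome
```

Starting from a mostly believing population, the loop settles into the high-belief, low-tax outcome; starting from a skeptical one, it settles into the low-belief, high-tax outcome. Both are self-sustaining, which is the sense in which the two equilibria can coexist.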

Golman, Loewenstein, Moene, and Zarri consider one aspect of motivated beliefs, the "preference for belief consonance," which is the desire to be in agreement with others in one's immediate social group. They endeared themselves to me by starting with a quotation from Adam Smith's first great work, The Theory of Moral Sentiments (Part VII, Section IV): "The great pleasure of conversation, and indeed of society, arises from a certain correspondence of sentiments and opinions, from a certain harmony of minds, which like so many musical instruments coincide and keep time with one another." They write:

Why are people who hold one set of beliefs so affronted by alternative sets of beliefs—and by the people who hold them? Why don’t people take a live-and-let-live attitude toward beliefs that are, after all, invisibly encoded in other people’s minds? In this paper, we present evidence that people care fundamentally about what other people believe, and we discuss explanations for why people are made so uncomfortable by the awareness that the beliefs of others differ from their own. This preference for belief consonance (or equivalently, distaste for belief dissonance) has far-ranging implications for economic behavior. It affects who people choose to interact with, what they choose to exchange information about, what media they expose themselves to, and where they choose to live and work. Moreover, when people are aware that their beliefs conflict with those of others, they often try to change other people’s beliefs (proselytizing). If unsuccessful in doing so, they sometimes modify their own beliefs to bring them into conformity with those around them. A preference for belief consonance even plays an important role in interpersonal and intergroup conflict, including the deadliest varieties: Much of the conflict in the world is over beliefs—especially of the religious variety—rather than property … 

A substantial group of studies show that if you ask people about their opinions on certain issues, and if you ask people about their opinions while telling them that certain other specific groups hold certain opinions, the patterns of answers can be quite different. Personally, I'm always disconcerted that for every opinion I hold, some of the others who hold that same opinion are people I don't like very much.

Gino, Norton, and Weber take on another dimension of motivated beliefs in their essay on "feeling moral while acting egoistically." They explain that when given some wiggle room to manage their actions or their information, people often choose to act in a way that allows them to feel moral while acting selfishly. Gino, Norton, and Weber write:

 In particular, while people are often willing to take a moral act that imposes personal material costs when confronted with a clear-cut choice between “right” and “wrong,” such decisions often seem to be dramatically influenced by the specific contexts in which they occur. In particular, when the context provides sufficient flexibility to allow plausible justification that one can both act egoistically while remaining moral, people seize on such opportunities to prioritize self-interest at the expense of morality. In other words, people who appear to exhibit a preference for being moral may in fact be placing a value on feeling moral, often accomplishing this goal by manipulating the manner in which they process information to justify taking egoistic actions while maintaining this feeling of morality.

They cite many studies of this phenomenon. Here's an overview of one:

[P]articipants in a laboratory experiment distribute two tasks between themselves and another participant: a positive task (where correct responses to a task earn tickets to a raffle) and a negative task (not incentivized and described as “rather dull and boring”). Participants were informed: “Most participants feel that giving both people an equal chance— by, for example, flipping a coin—is the fairest way to assign themselves and the other participant to the tasks (we have provided a coin for you to flip if you wish). But the decision is entirely up to you.” Half of participants simply assigned the tasks without flipping the coin; among these participants, 90 percent assigned themselves to the positive task. However, the more interesting finding is that among the half of participants who chose to flip the coin, 90 percent “somehow” ended up with the positive task—despite the distribution of probabilities that one would expect from a two-sided coin. Moreover, participants who flipped the coin rated their actions as more moral than those who did not—even though they had ultimately acted just as egoistically as those who did not flip in assigning themselves the positive task. These results suggest that people can view their actions as moral by providing evidence to themselves that they are fair (through the deployment of a theoretically unbiased coin flip), even when they then ignore the outcome of that coin flip to benefit themselves.
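To see why the 90 percent figure among the coin-flippers is so striking, a quick binomial calculation helps. This sketch is mine, and the group size of 20 is hypothetical, since the excerpt does not report the actual number of participants:

```python
# How surprising is it that 90 percent of coin-flippers ended up with
# the positive task? A binomial check, assuming a hypothetical group of
# 20 flippers and a fair coin.
from math import comb

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability that 18 or more of 20 fair flips favor the flipper:
print(round(prob_at_least(18, 20), 6))  # about 0.0002
```

Even with this modest assumed sample, the chance of such a lopsided result from honest coin flips is around two in ten thousand, which is the sense in which participants "somehow" ended up with the positive task.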

The theory of motivated beliefs still views people as motivated by self-interest. However, the dimensions of self-interest expand beyond standard concerns like consumption and leisure, and encompass how we feel about ourselves and the social groups we inhabit. In this way, the analysis opens up insights into behavior that is otherwise puzzling in the context of economic analysis, as well as building intellectual connections to other social sciences like psychology and sociology.

Alfred Marshall and the Origin of Ceteris Paribus

When non-economists ask me questions, they often seem to be jumping from topic to topic. A question about the effects of raising the minimum wage, for example, shifts among how it will affect jobs, earnings, companies that hire minimum wage workers, work effort, automation, the overall income distribution, children of minimum wage earners, and so on. The questions are all reasonable. But I have become aware that economists have trained themselves into a one-thing-at-a-time method of analysis, and so bouncing from one topic to another can feel somehow awkward.

The ceteris paribus or "other things equal" assumption involves an intellectual approach, common among economists, of trying to focus on one thing at a time. After all, many economic issues and policies have a number of possible causes and effects. Rather than hopscotching among them, economists often try to isolate one factor at a time, and then to move on to other factors, before combining it all into an overall perspective. The use of this approach in economic analysis traces back to Alfred Marshall's 1890 classic Principles of Economics.

The Library of Economics and Liberty provides a useful place for finding searchable editions of many classic works in economics. The site provides the 8th edition of Marshall's Principles, published in 1920. In Book V, Chapter V, "Equilibrium of Normal Demand and Supply, Continued, With Reference To Long and Short Periods," Marshall describes the overall logic of looking at one thing at a time, offers some hypothetical examples from a discussion of supply and demand shocks in fish markets, and points out that the longer the time period of analysis, the harder it becomes to assume that everything else is constant. Marshall writes:

"The element of time is a chief cause of those difficulties in economic investigations which make it necessary for man with his limited powers to go step by step; breaking up a complex question, studying one bit at a time, and at last combining his partial solutions into a more or less complete solution of the whole riddle. In breaking it up, he segregates those disturbing causes, whose wanderings happen to be inconvenient, for the time in a pound called Cœteris Paribus. The study of some group of tendencies is isolated by the assumption other things being equal: the existence of other tendencies is not denied, but their disturbing effect is neglected for a time. The more the issue is thus narrowed, the more exactly can it be handled: but also the less closely does it correspond to real life. Each exact and firm handling of a narrow issue, however, helps towards treating broader issues, in which that narrow issue is contained, more exactly than would otherwise have been possible. With each step more things can be let out of the pound; exact discussions can be made less abstract, realistic discussions can be made less inexact than was possible at an earlier stage. …

The day to day oscillations of the price of fish resulting from uncertainties of the weather, etc., are governed by practically the same causes in modern England as in the supposed stationary state. The changes in the general economic conditions around us are quick; but they are not quick enough to affect perceptibly the short-period normal level about which the price fluctuates from day to day: and they may be neglected [impounded in cœteris paribus] during a study of such fluctuations.

Let us then pass on; and suppose a great increase in the general demand for fish, such for instance as might arise from a disease affecting farm stock, by which meat was made a dear and dangerous food for several years together. We now impound fluctuations due to the weather in cœteris paribus, and neglect them provisionally: they are so quick that they speedily obliterate one another, and are therefore not important for problems of this class. And for the opposite reason we neglect variations in the numbers of those who are brought up as seafaring men: for these variations are too slow to produce much effect in the year or two during which the scarcity of meat lasts. Having impounded these two sets for the time, we give our full attention to such influences as the inducements which good fishing wages will offer to sailors to stay in their fishing homes for a year or two, instead of applying for work on a ship. We consider what old fishing boats, and even vessels that were not specially made for fishing, can be adapted and sent to fish for a year or two. The normal price for any given daily supply of fish, which we are now seeking, is the price which will quickly call into the fishing trade capital and labour enough to obtain that supply in a day's fishing of average good fortune; the influence which the price of fish will have upon capital and labour available in the fishing trade being governed by rather narrow causes such as these. This new level about which the price oscillates during these years of exceptionally great demand, will obviously be higher than before. Here we see an illustration of the almost universal law that the term Normal being taken to refer to a short period of time an increase in the amount demanded raises the normal supply price.  …

Relatively short and long period problems go generally on similar lines. In both use is made of that paramount device, the partial or total isolation for special study of some set of relations. In both opportunity is gained for analysing and comparing similar episodes, and making them throw light upon one another; and for ordering and co-ordinating facts which are suggestive in their similarities, and are still more suggestive in the differences that peer out through their similarities. But there is a broad distinction between the two cases. In the relatively short-period problem no great violence is needed for the assumption that the forces not specially under consideration may be taken for the time to be inactive. But violence is required for keeping broad forces in the pound of Cœteris Paribus during, say, a whole generation, on the ground that they have only an indirect bearing on the question in hand. For even indirect influences may produce great effects in the course of a generation, if they happen to act cumulatively; and it is not safe to ignore them even provisionally in a practical problem without special study. Thus the uses of the statical method in problems relating to very long periods are dangerous; care and forethought and self-restraint are needed at every step. The difficulties and risks of the task reach their highest point in connection with industries which conform to the law of Increasing Return; and it is just in connection with those industries that the most alluring applications of the method are to be found.

For those who want more on the history of ceteris paribus (the modern spelling no longer uses the ligature version that ties together the o and e), Joseph Persky offers a nice introduction in his 1990 article "Retrospectives: Ceteris Paribus," which appeared in the Journal of Economic Perspectives (4:2, pp. 187-193). Persky finds early uses of the term back in the 1600s, including a 1662 passage by the economist William Petty that was often quoted in the 19th century, and thus may have inspired Marshall's use of the term.

Persky notes the dueling concerns that economists may in some cases feel that they should avoid big-picture subjects in the global economy or historical analysis because the ceteris are not always paribus, or in other cases that economic research may be focusing on one factor while other important factors are also changing. But as Persky points out, the ceteris paribus assumption is not meant as a literal statement that nothing else has changed, but only to remind the reader that the analysis may be leaving something out. As Persky writes: "Economists could do much worse than to flag our fallibility with a bit of Latin."

The Future of DSGE Models in Macroeconomics

One of the hardest problems in studying the macroeconomy is that time keeps advancing. You can't go back to, say, 2001 or 2009, decline to enact the Bush tax cuts or the Obama economic stimulus, and then re-run the economy and see what happens. Instead, researchers end up comparing effects of seemingly similar policies enacted at different times, but the policies and the circumstances are never quite identical, so room for dispute remains. Indeed, disagreements among macroeconomists are nearly proverbial. "Macroeconomists have predicted nine of the last five recessions." "Two macroeconomists, five opinions." "Economists are the experts who explain why the prediction they made yesterday didn't come true today."

I sometimes receive notes from readers asking for a sense of why macroeconomists disagree. Olivier Blanchard opens up some of the central issues for useful discussion in a short and readable paper, "Do DSGE Models Have a Future?" written for the Peterson Institute for International Economics (Policy Brief 16-11, August 2016).

For the uninitiated, DSGE models of the macroeconomy are a method that is both well-established and the stuff of continuing controversy. DSGE stands for "dynamic stochastic general equilibrium," and the term covers a broad class of macroeconomic models. In the jargon, "dynamic" means that the models show the evolution of a (hypothetical) economy over time. "Stochastic" means that the models show how the economy would respond if certain shocks occur, whether the shocks involve policy choices or economic events (like a rise or fall in the rate of productivity growth). "General equilibrium" means that these models don't look at the macroeconomy one sector at a time, say, first consumption, then investment, then foreign trade, but instead try to take all the interactions of these sectors into account. Blanchard describes the models in this way:

"For those who are not macroeconomists, or for those macroeconomists who lived on a desert island for the last 20 years, here is a brief refresher. DSGE stands for “dynamic stochastic general equilibrium.” The models are indeed dynamic, stochastic, and characterize the general equilibrium of the economy. They make three strategic modeling choices: First, the behavior of consumers, firms, and financial intermediaries, when present, is formally derived from microfoundations. Second, the underlying economic environment is that of a competitive economy, but with a number of essential distortions added, from nominal rigidities to monopoly power to information problems. Third, the model is estimated as a system, rather than equation by equation in the previous generations of macroeconomic models. … [C]urrent DSGE models are best seen as large scale versions of the New Keynesian model, which emphasizes nominal rigidities and a role for aggregate demand."

Blanchard gives four main concerns about DSGE models along with some thoughts about each one. Thus, he writes:

"There are many reasons to dislike current DSGE models. First: They are based on unappealing assumptions. Not just simplifying assumptions, as any model must, but assumptions profoundly at odds with what we know about consumers and firms. … Second: Their standard method of estimation, which is a mix of calibration and Bayesian estimation, is unconvincing. … Third: While the models can formally be used for normative purposes, normative implications are not convincing. … Fourth: DSGE models are bad communication devices. A typical DSGE paper adds a particular distortion to an existing core. It starts with an algebra-heavy derivation of the model, then goes through estimation, and ends with various dynamic simulations showing the effects of the distortion on the general equilibrium properties of the model."

You can read the details of Blanchard's responses in the paper, but I'd characterize his overall view of DSGE models as negative, ambivalent, and positive all at the same time. He writes: "I see the current DSGE models as seriously flawed, but they are eminently improvable and central to the future of macroeconomics." A snippet of his more detailed answer reads like this:

The pursuit of a widely accepted analytical macroeconomic core, in which to locate discussions and extensions, may be a pipe dream, but it is a dream surely worth pursuing. If so, the three main modeling choices of DSGEs are the right ones. Starting from explicit microfoundations is clearly essential; where else to start from? Ad hoc equations will not do for that purpose. Thinking in terms of a set of distortions to a competitive economy implies a long slog from the competitive model to a reasonably plausible description of the economy. But, again, it is hard to see where else to start from. Turning to estimation, calibrating/estimating the model as a system rather than equation by equation also seems essential. Experience from past equation-by-equation models has shown that their dynamic properties can be very much at odds with the actual dynamics of the system. 

It's worth unpacking this a bit. Blanchard's comment that the DSGE approach "may be a pipe dream, but it is a dream surely worth pursuing" is not calculated to inspire confidence in the results of such studies! This intellectual agenda, which models the activities of real-world economic actors using various assumptions and some combination of rational choice and behavioral economics, involves many possible choices. The selection of possible frictions, like monopoly power, wages and prices that adjust in a sticky manner, the formation of expectations, and the issues raised by financial markets, adds another set of possible choices. The question of how to get a workable quantitative number out of the model, by choosing some plausible values from other studies (that is, "calibrating" the model) and deciding what parts of the model to estimate using data, involves still more choices.

In addition, Blanchard discusses how DSGE modelling needs to be open to new insights from behavioral economics, from the use of big data, from issues about problems that can arise in financial markets, and more. He also suggests: "At one end, maximum theoretical purity is indeed the niche of DSGEs. For those models, fitting the data closely is less important than clarity of structure." This comment is not calculated to inspire confidence in the results of such studies either. He suggests that there is also a need for one set of related-but-different studies for policy purposes, another set of related-but-different models for purposes of economic forecasting, and still other lessons that are most accessible through simpler ad hoc models (like the IS-LM model from intermediate-level macro textbooks).

In a way, what macroeconomists have been learning in the last few decades is to reach a deeper understanding of how many different ingredients might be included in a macroeconomic model. But no model can look at everything at once, so macroeconomists are always trying to figure out which ingredients matter most. My own takeaway is that DSGE models will continue to matter a lot to high-powered researchers in macroeconomics, like Blanchard. But for the rest of us, the task is to keep track of how insights from those models filter down through the research literature and become practical lessons that can be explained and applied in more stripped-down contexts.

US Motor Vehicle Deaths

The United States has made enormous strides in reducing motor vehicle deaths, and even so, a higher percentage of Americans die on the road than in other high-income countries. Plausible estimates suggest that reducing drunk driving and increasing seat belt use in the US to levels that commonly prevail in other high-income countries could save thousands of lives each year.

Here's an article from 1999 showing the fall in US motor vehicle deaths from the mid-1960s up through the end of the 20th century. US motor vehicle deaths per million miles travelled fell by about two-thirds during this time.

Figure 2

Even though this decline in US motor vehicle deaths has continued, it's happening faster in other high-income countries. Erin K. Sauber-Schatz, David J. Ederer, Ann M. Dellinger, and Grant T. Baldwin offer some comparisons in "Vital Signs: Motor Vehicle Injury Prevention — United States and 19 Comparison Countries," which appears in Morbidity and Mortality Weekly Report, July 6, 2016, published by the Centers for Disease Control. They write:

\”In 2013, the United States motor vehicle crash death rate of 10.3 per 100,000 population had decreased 31% from the rate in 2000; among the 19 comparison countries, the rate had declined an average of 56% during this time. Among all 20 countries, the United States had the highest rate of crash deaths per 100,000 population (10.3); the highest rate of crash deaths per 10,000 registered vehicles (1.24), and the fifth highest rate of motor vehicle crash deaths per 100 million vehicle miles traveled (1.10). Among countries for which information on national seat belt use was available, the United States ranked 18th out of 20 for front seat use, and 13th out of 18 for rear seat use. Among 19 countries, the United States reported the second highest percentage of motor vehicle crash deaths involving alcohol-impaired driving (31%), and among 15, had the eighth highest percentage of crash deaths that involved speeding (29%). …

\”If the United States had the same motor vehicle crash death rate as Belgium (the country with the second highest death rate), 12,000 fewer lives would have been lost in 2013 and an estimated $140 million in direct medical costs would have been averted. Similarly, if the United States’ motor vehicle crash death rate was equivalent to the average in the 19 comparison countries, at least 18,000 fewer lives would have been lost and an estimated $210 million in direct medical costs would have been averted.\”
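The arithmetic behind these counterfactuals is easy to check. Here is a back-of-the-envelope sketch (my illustration, not the authors\’ calculation): the US rate of 10.3 crash deaths per 100,000 population is from the article, while the 2013 US population figure and the comparison rate plugged in below are my own illustrative assumptions.

```python
# Back-of-the-envelope check of the "lives saved" counterfactual.
# The US 2013 rate (10.3 crash deaths per 100,000 population) comes from the
# MMWR article; the population figure (~316 million) and the comparison rate
# used in the example are my own illustrative assumptions.

US_POP_2013 = 316_000_000
US_RATE = 10.3  # crash deaths per 100,000 population

def lives_saved(comparison_rate, population=US_POP_2013, us_rate=US_RATE):
    """Deaths averted if the US had matched the comparison country's rate."""
    return (us_rate - comparison_rate) * population / 100_000

# An illustrative comparison rate of 6.5 per 100,000 implies roughly
# 12,000 fewer deaths, in line with the Belgium comparison quoted above.
print(round(lives_saved(6.5)))
```

Nothing more than a rate difference multiplied by population, but it confirms that the quoted magnitudes are the right order.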

As the authors are careful to note, these comparisons are in some sense quick-and-dirty (my phrase, not theirs!). Countries differ in how they collect this kind of data, in the rules that govern whether a driver counts as intoxicated, in the quality of their roads, in other rules they impose about driving (like speed limits), and in other ways. But given that there is a range of possible enforcement policies and ways of investing in safer roads that could plausibly save more than 10,000 American lives per year, surely this is a worthwhile cause for someone to take up?

Something about reducing motor vehicle deaths just isn\’t politically sexy. It sounds like nagging. One can almost hear the argumentative hypothetical response: \”Hey, if I want to drive without a seat-belt and take a risk of killing myself, it\’s nobody else\’s business.\” But friends and loved ones might disagree. Those who die because someone else was driving drunk or texting or on an unsafe road would surely disagree. And we don\’t get a chance to hear the opinions of those who are already dead.

Is Support for Democracy Eroding?

\”In the past three decades, the share of U.S. citizens who think that it would be a “good” or “very good” thing for the “army to rule”—a patently undemocratic stance—has steadily risen. In 1995, just one in sixteen respondents agreed with that position; today, one in six agree. While those who hold this view remain in the minority, they can no longer be dismissed as a small fringe, especially since there have been similar increases in the number of those who favor a “strong leader who doesn’t have to bother with parliament and elections” and those who want experts rather than the government to “take decisions” for the country. Nor is the United States the only country to exhibit this trend. The proportion agreeing that it would be better to have the army rule has risen in most mature democracies, including Germany, Sweden, and the United Kingdom.\”

Roberto Stefan Foa and Yascha Mounk lay out a variety of evidence on these themes in \”The Democratic Disconnect,\” which appears in the July 2016 issue of the Journal of Democracy (27:3, pp. 5-17). Their opinion data is drawn from the World Values Survey, which relies on a network of social scientists, now active in about 100 countries around the world, who do surveys using a common set of questions.

Foa and Mounk offer a variety of detailed insights into attitudes about democracy; here, I\’ll just highlight a few results that caught my eye. One is that the decline in support for democracy seems especially pronounced among younger adults. The horizontal axis on this figure shows the decade in which people were born: thus, older respondents are on the left and younger respondents are on the right. In both the US and in Europe, young adults have become less likely to say that it is \”essential\” to live in a democracy.

Conversely, the share of people saying that a democratic political system is a bad or a very bad way to run the country has risen since the mid-1990s, and this attitude is also more prevalent among younger adults.

In the US, the support for a \”strong leader\” who doesn\’t have to \”bother with parliament and elections\” has especially risen among those with higher income levels.

Foa and Mounk have a lot more to say in breaking down these patterns and trends. I\’ll only add that it seems to me that many people in the US and elsewhere are feeling the pull of what I call the \”technocratic temptation.\” In this view, our economic, foreign policy, and social issues have clear-cut answers. If we would all just come together as a unified nation, put the appropriate technocratic experts in charge, and shut up those who disagree, then the experts could put those clear-cut answers into effect. My own view is that many deep problems don\’t have simple answers; while experts can be useful in contributing information and insight to social disputes, they can be at least as nutty in their social values and decision-making as anyone else; and ongoing disagreements on many issues should be viewed as healthy and productive, even though it means accepting that a certain number of issues will never be fully settled. For those who feel the need for an obligatory quotation here from John Stuart Mill, here\’s one.

Higher Local Minimum Wages: Early Results from Seattle

In June 2014, the city of Seattle passed a law raising the minimum wage for many employers in the city. The law went into effect on April 1, 2015, with an $11/hour minimum wage taking effect for many employers at that time, a $13/hour minimum wage scheduled to start in January 2016, and then ongoing rises up to $18/hour in years to come. A group called the Seattle Minimum Wage Study Team, based at the University of Washington, is planning to study the effects of this rise in the minimum wage over time. The team investigators are Jacob Vigdor, Mark C. Long, Jennifer Romich, Scott W. Allard, Heather D. Hill, Jennifer Otten, Robert Plotnick, Scott Bailey, and Anneliese Vance-Sherman. The team has now published a study on the first nine months of Seattle\’s higher minimum wage, in Report on the Impact of Seattle\’s Minimum Wage Ordinance on Wages, Workers, Jobs, and Establishments through 2015.

Perhaps the main difficulty in all minimum wage studies is the \”compared to what?\” question. If a minimum wage law is phased in over time and accompanied by higher wages, would a substantial portion of that rise in wages have happened anyway? If a minimum wage law is phased in over time and accompanied by fewer low-wage jobs, would a substantial portion of that decline in low-wage jobs have happened anyway (perhaps because of automation or other factors)?

One way to tackle the \”compared to what?\” question is to use comparison groups. Specifically, the Seattle study team looks at patterns of wages and jobs in Seattle before and after the rise in the minimum wage, and compares them to patterns in four other areas. Seattle is in King County, so one comparison is to King County outside Seattle. A second comparison is to counties that surround King County, namely Snohomish, Kitsap, and Pierce. A third comparison is to “synthetic Seattle,” which the researchers define \”as a set of regions in the state of Washington that have matched Seattle’s labor market trends in recent years.\” A fourth comparison is “Synthetic Seattle excluding King County,” to account for potential spillover of the Seattle Minimum Wage Ordinance into the labor market of suburban King County.
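For readers curious how a \”synthetic\” comparison region is built, the core idea can be sketched in a few lines. This is a toy illustration with made-up numbers; the study team\’s actual estimator, donor pool, and matching variables all differ.

```python
# Stripped-down sketch of the synthetic-control idea behind "Synthetic
# Seattle": choose weights on donor regions so the weighted combination
# tracks the treated region's pre-treatment trend, then read the policy
# effect off the post-treatment gap. (Illustrative only.)
import numpy as np

def synthetic_weights(treated_pre, donors_pre):
    """Least-squares weights on donor outcomes over the pre-period.
    donors_pre: (T_pre, J) array with one column per donor region."""
    w, *_ = np.linalg.lstsq(donors_pre, treated_pre, rcond=None)
    return w

def post_treatment_gap(treated_post, donors_post, w):
    """Treated outcome minus the weighted synthetic counterfactual."""
    return treated_post - donors_post @ w

# Toy data: pre-treatment, the treated series is an equal mix of two donors;
# post-treatment it runs 1.0 above that mix.
donors_pre = np.array([[1.0, 3.0], [2.0, 4.0], [3.0, 5.0]])
treated_pre = donors_pre.mean(axis=1)           # [2, 3, 4]
w = synthetic_weights(treated_pre, donors_pre)  # roughly [0.5, 0.5]
gap = post_treatment_gap(6.0, np.array([4.0, 6.0]), w)  # roughly 1.0
```

In practice, researchers typically constrain the weights to be nonnegative and to sum to one, and match on several predictors rather than a single outcome series, but the logic is the same.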

As a starting point, the evidence shows that hourly wages for low-wage workers did rise in Seattle. As they write: \”The typical worker earning under $11/hour in Seattle when the City Council voted to raise the minimum wage in June 2014 (“low-wage workers”) earned $11.14 per hour by the end of 2015, an increase from $9.96/hour at the time of passage.\”

However, the economy in Washington state was doing fairly well in 2015, and wages also rose in the comparison group areas. In describing the higher hourly wages in Seattle, the study team writes: \”The minimum wage contributed to this effect, but the strong economy did as well. We estimate that the minimum wage itself is responsible for a $0.73/hour average increase for low-wage workers.\”

These findings for the wage paid per hour don\’t take into account possible changes in the number of hours worked. The study finds: \” The minimum wage appears to have slightly reduced the employment rate of low-wage workers by about one percentage point. … Hours worked among low-wage Seattle workers have lagged behind regional trends, by roughly four hours per quarter (nineteen minutes per week), on average. … Low-wage individuals working in Seattle when the ordinance passed transitioned to jobs outside Seattle at an elevated rate compared to historical patterns. … For businesses that rely heavily on low-wage labor, our estimates of the impact of the Ordinance … on hours per employee more consistently indicate a reduction of roughly one hour per week.\”

Thus, low-wage workers in Seattle were better off as a result of the higher minimum wage if they managed to keep their job or to keep working roughly the same number of hours. But the employment rate of low-wage workers in Seattle declined slightly, as did the hours worked, which would lead to lower total earnings. As the study group notes: \”The major conclusion one should draw from this analysis is that the Seattle Minimum Wage Ordinance worked as intended by raising the hourly wage rate of low-wage workers, yet the unintended, negative side effects on hours and employment muted the impact on labor earnings. … The effects of disemployment appear to be roughly offsetting the gain in hourly wage rates, leaving the earnings for the average low-wage worker unchanged. Of course, we are talking about the average result.\”

Looking at the comparison groups, the study team doesn\’t find any effect of the first nine months of Seattle\’s higher minimum wage on business openings or closings: \”We do not find compelling evidence that the minimum wage has caused significant increases in business failure rates. Moreover, if there has been any increase in business closings caused by the Minimum Wage Ordinance, it has been more than offset by an increase in business openings.\”

As the authors are at some pains to point out, it would be unwise to draw general conclusions about the minimum wage from a single study. The Seattle economy had specific characteristics during the period under study, which other urban areas may not share. The comparison economies are all in Washington state, and all have their own specific characteristics, too. Looking at the effects of a city-level law and extrapolating to a state-level law or a national-level law can be problematic for a lot of reasons: for example, it\’s a lot easier for an employer paying the minimum wage in Seattle to shift operations just outside the city border than it would be to shift outside the state border or the national border.

Moreover, some responses to a minimum wage might take time. For example, perhaps employers over time will find that hiring as many or more workers at the higher minimum wage makes more economic sense than they had previously expected, perhaps because the higher wages bring less employee turnover or higher efficiency. Alternatively, perhaps employers over time will find more ways to substitute away from lower-wage workers in Seattle, using automation or having work done in other locations, and job losses traceable to the higher minimum wage will rise.

I\’m willing to let the evidence tell me the story, and on many economic issues, it takes time for the evidence to accumulate. As more cities raise minimum wages, the picture will clarify. But the early evidence from Seattle is that a higher minimum wage at the city level doesn\’t raise total earnings by much, because low-skilled workers end up with fewer hours on the job.

Are Victims of War and Violence More Likely to Become Social Cooperators?

Economists are usually viewed as the skunk at the garden party–the ones who bring up difficult tradeoffs when everyone else just wants to view the world as all benefits and no costs. But a body of social science research is now suggesting that war, one of the most costly and brutal of human activities, does have at least one tradeoff on the positive side. Those who have experienced war seem somewhat more likely to increase their level of social participation and cooperation after the violence has ended. In \”Can War Foster Cooperation?\” Michal Bauer, Christopher Blattman, Julie Chytilová, Joseph Henrich, Edward Miguel, and Tamar Mitts review this evidence. The article appears in the just-released Summer 2016 issue of the Journal of Economic Perspectives.  They begin (citations omitted):

\”Warfare leaves terrible legacies, from raw physical destruction to shattered lives and families. International development researchers and policymakers sometimes describe war as “development in reverse”, causing persistent adverse effects on all factors relevant for development: physical, human, and social capital. Yet a long history of scholarship from diverse disciplines offers a different perspective on one of the legacies of war. Historians and anthropologists have noted how, in some instances, war fostered societal transitions from chiefdoms to states and further strengthened existing states. Meanwhile, both economists and evolutionary biologists, in examining the long-run processes of institution-building, have also argued that war has spurred the emergence of more complex forms of social organization, potentially by altering people’s psychology. In this article, we discuss and synthesize a rapidly growing body of research based on a wealth of new data from which a consistent finding has emerged: people exposed to war violence tend to behave more cooperatively after war. We show the range of cases where this holds true and persists, even many years after war.\” 

The evidence on the after-effects of war on cooperation typically involves a survey component, in which people from conflict-riven places like Sierra Leone, Uganda, Bosnia, Kosovo, and others are surveyed about the experience of their family in the war. For example, they might be asked questions like: \”Were any members of your household killed during the conflict? Were any members injured or maimed during the conflict? Were any members made refugees during the war?\”

Evidence on social cooperation is then gathered in two forms. One is through additional survey data: that is, asking people about whether they now belong to clubs, vote, have an interest in politics, are active participants or leaders in community life, make voluntary contributions to public projects, and so on. The other is to have people participate in experimental games that seek to elicit attitudes toward cooperation.

Economists will be familiar with these games, like the ultimatum game, public goods game, and the like. Here are a couple of examples. The \”dictator game\” is one of the simplest of these games: one subject is given an amount to divide with another player. That\’s it! The player who is the \”dictator\” can keep it all or give it all away, and the amount given can be viewed as a measure of the propensity to cooperate. In the more complex \”trust game,\” the first player is given an amount to divide with another player. Whatever they give to that other player is multiplied by three. Then the second player gets to decide how much--if any--to return to the first player. When trust is higher, the first player will give more to the second player, in the hope or expectation of getting even more back.
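The payoff rules of these two games are simple enough to write down explicitly. Here is a minimal sketch: the tripling rule follows the standard trust-game design described above, while the endowments and splits in the example are purely illustrative.

```python
# Payoff rules for the two games described above. The structure follows the
# standard designs mentioned in the text; the specific numbers are
# illustrative, not drawn from any particular study.

def dictator_game(endowment, amount_given):
    """Dictator splits an endowment; the gift measures propensity to cooperate.
    Returns (dictator's payoff, recipient's payoff)."""
    assert 0 <= amount_given <= endowment
    return endowment - amount_given, amount_given

def trust_game(endowment, sent, share_returned):
    """Player 1 sends an amount, which is tripled; Player 2 returns a share.
    Returns (player 1's payoff, player 2's payoff)."""
    assert 0 <= sent <= endowment and 0 <= share_returned <= 1
    tripled = 3 * sent
    returned = tripled * share_returned
    return endowment - sent + returned, tripled - returned

# Example: Player 1 sends half of a $10 endowment and Player 2 returns half
# of the tripled amount, leaving both players better off than if nothing
# had been sent at all.
p1, p2 = trust_game(10, 5, 0.5)  # p1 = 12.5, p2 = 7.5
```

The tripling is what makes trust valuable: total payoffs rise with the amount sent, but only a trusting first player captures any of that surplus.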

The authors review 20 studies, and find that those who have been most exposed to the violence of war are more likely to show cooperative behaviors for years afterward. These effects seem especially strong when the games involve players who in some way can be identified as members of their own group. I\’ll let you read the more detailed evidence for yourself, but it\’s perhaps worth noting one issue that arises for any social science study of this kind. Is there some reason to believe that those who were more cooperative to begin with might also be more likely to suffer violence during wartime? The authors are fully aware of this argument, and write:

\”For instance, more cooperative people might be more likely to participate in collective action, including civil defense forces or armed organizations that represent their groups during wartime, and thus more likely to live in a family that experiences some form of direct war victimization. Or perhaps attackers systematically target people who are likely to be more cooperative in nature, such as leading families or wealthy and influential citizens. If true, statistical tests would overstate the effect of war victimization on later civic participation and social capital. Attrition poses another potential challenge for causal identification if the least prosocial or cooperative people are also more likely to die, migrate, or be displaced and not return home.\”

The short answer to these concerns is that in many of these conflicts, it is frighteningly plausible that these direct experiences of violence were more-or-less randomly distributed across certain villages or populations. Thus, it is plausible that the higher cooperation among victims is an after-effect of having experienced violence.

One intriguing question asked near the end of the paper is whether the effect of war-time violence on cooperation might also arise after other types of violence. The authors write (citations omitted):

\”Another important direction is to examine other forms of physical insecurity, including crime, state repression, natural disaster, life-threatening accidents, and domestic abuse. In particular, the distinction between wartime violence and urban crime may not be large in certain cases, especially where widespread organized crime takes on characteristics of civil conflict, such as the cases of Mexican or Colombian drug trafficking organizations. Early evidence does indeed suggest that our findings on violence and cooperation could generalize to a wider range of situations. The meta-analysis finds that those who have experienced crime-related violence are also more likely to display cooperative behavior, just like war victims. There are parallels in related literatures, including findings that victims of crime are more likely to participate in community and political meetings, be interested in politics, and engage in group leadership. Other emerging evidence exploring the effects of post-election violence, and earthquake and tsunami damage also mimics the main finding of this paper, namely that survival threats tend to enhance local cooperation.\”

Of course, neither the authors nor anyone else is arguing that the costs of war and violence are offset by a modest if real improvement in cooperation. But as one considers the grim violence that stalks the lives of so many people around the world, any possible glimmer of light for the aftermath is welcome.

(Full disclosure: I\’m the Managing Editor of JEP, and have been in that role since 1986. Since 2011, all JEP articles are freely available on-line compliments of the American Economic Association.)

The Global Tourism Industry

International tourism is an enormous industry, representing $1.5 trillion per year in revenues and about 7% of total world exports of goods and services in 2015. The UN World Tourism Organization has published its \”Tourism Highlights 2016\” with a number of background facts.

International tourist arrivals reached 1.2 billion in 2015, and are projected by the UNWTO to rise by 50% in the next 15 years.

The main destinations of tourism are perhaps not much of a surprise. European countries rank highly, in part because of the number of trips from one European country to another. The US is a major destination. But there are some surprises. I would not necessarily have expected China to rank 4th, Turkey to rank 6th, or Russia to rank 10th around the world in tourist arrivals.

When it comes to countries sending tourists, China is far and away on top.

The UNWTO report is focused on numbers and trends, not on economic insights. But it provides food for thought. In high-income countries, many people have a standard of living where they can focus not just on buying goods and services for daily consumption, but on a \”bucket list\” of leisure experiences. As more people around the world enter the global middle class, they often want to expand consumption of travel and tourism.

In an economic sense, tourism (both international and domestic) is a very large industry, and it is shaped by a mixture of policies toward visitors, ease of transportation and lodging, and a mixture of public and private amenities and destinations. When the US ranks behind France as a tourist destination, it suggests that the US is not doing what it could to facilitate tourism. For a number of lower-income and medium-income countries, one of their important economic questions is how to grow their tourism industry, and then how to incorporate the economic gains into gains in the standard of living for their economy. After all, many skills that facilitate tourism can help the economy in other ways, too, like reliable communications, logistics and scheduling, handling financial payments, clean water, electricity, transportation, and more. In a big-picture sense, I\’m enough of an optimist to hope that the associations and connections that often result from international tourism can have a positive effect across countries and cultures, too.

Summer 2016 Journal of Economic Perspectives Available Online

For the past 30 years, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which back in 2011 decided–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. Here, I\’ll start with Table of Contents for the just-released Summer 2016 issue. Below are abstracts and direct links for all of the papers. I will almost certainly blog about some of the individual papers in the next week or two, as well.

Schools and Accountability
\”The Importance of School Systems: Evidence from International Differences in Student Achievement,\” by Ludger Woessmann
Students in some countries do far better on international achievement tests than students in other countries. Is this all due to differences in what students bring with them to school–socioeconomic background, cultural factors, and the like? Or do school systems make a difference? This essay argues that differences in features of countries\’ school systems, and in particular their institutional structures, account for a substantial part of the cross-country variation in student achievement. It first documents the size and cross-test consistency of international differences in student achievement. Next, it uses the framework of an education production function to provide descriptive analysis of the extent to which different factors of the school system, as well as factors beyond the school system, account for cross-country achievement differences. Finally, it covers research that goes beyond descriptive associations by addressing leading concerns of bias in cross-country analysis. The available evidence suggests that differences in expenditures and class size play a limited role in explaining cross-country achievement differences, but that differences in teacher quality and instruction time do matter. This suggests that what matters is not so much the amount of inputs that school systems are endowed with, but rather how they use them. Correspondingly, international differences in institutional structures of school systems such as external exams, school autonomy, private competition, and tracking have been found to be important sources of international differences in student achievement. 
Full-Text Access | Supplementary Materials

\”Accountability in US Education: Applying Lessons from K-12 Experience to Higher Education,\” by David J. Deming and David Figlio
A new push for accountability has become an increasingly important feature of education policy in the United States and throughout the world. Broadly speaking, accountability seeks to hold educational institutions responsible for student outcomes using tools ranging from performance \”report cards\” to explicit rewards and sanctions. We survey the well-developed empirical literature on accountability in K-12 education and consider what lessons we can learn for the design and impact of college ratings. Our bottom line is that accountability works, but rarely as well as one would hope, and often not entirely in the ways that were intended. Research on K-12 accountability offers some hope but also a number of cautionary tales.
Full-Text Access | Supplementary Materials

\”What Can We Learn from Charter School Lotteries?\” by Julia Chabrier, Sarah Cohodes and Philip Oreopoulos
We take a closer look at what can be learned about charter schools by pooling data from lottery-based impact estimates of the effect of charter school attendance at 113 schools. On average, each year enrolled at one of these schools increases math scores by 0.08 standard deviations and English/language arts scores by 0.04 standard deviations relative to attending a counterfactual public school. There is wide variation in impact estimates. To glean what drives this variation, we link these effects to school practices, inputs, and characteristics of fallback schools. In line with the earlier literature, we find that schools that adopt an intensive \”No Excuses\” attitude towards students are correlated with large positive effects on academic performance, with traditional inputs like class size playing no role in explaining charter school effects. However, we highlight that No Excuses schools are also located among the most disadvantaged neighborhoods in the country. After accounting for performance levels at fallback schools, the relationship between the remaining variation in school performance and the entire No Excuses package of practices weakens. No Excuses schools are effective at raising performance in neighborhoods with very poor performing schools, but the available data have less to say on whether the No Excuses approach could help in nonurban settings or whether other practices would similarly raise achievement in areas with low-performing schools. We find that intensive tutoring is the only No Excuses characteristic that remains significant (even for nonurban schools) once the performance levels of fallback schools are taken into account.
Full-Text Access | Supplementary Materials

\”The Measurement of Student Ability in Modern Assessment Systems,\” by Brian Jacob and Jesse Rothstein

Economists often use test scores to measure a student’s performance or an adult’s human capital. These scores reflect non-trivial decisions about how to measure and scale student achievement, with important implications for secondary analyses. For example, the scores computed in several major testing regimes, including the National Assessment of Educational Progress (NAEP), depend not only on the examinees’ responses to test items, but also on their background characteristics, including race and gender. As a consequence, if a black and white student respond identically to questions on the NAEP assessment, the reported ability for the black student will be lower than for the white student—reflecting the lower average performance of black students. This can bias many secondary analyses. Other assessments use different measurement models. This paper aims to familiarize applied economists with the construction and properties of common cognitive score measures and the implications for research using these measures.
Full-Text Access | Supplementary Materials

\”The Need for Accountability in Education in Developing Countries,\” Isaac M. Mbiti
Despite the rapid growth in enrollment rates across the developing world, there are major concerns about the quality of education that children receive. Across numerous developing countries, recent learning assessments have revealed that children are not able to develop basic numeracy and literacy skills. These low levels of learning are the result of a number of interrelated factors, many of which reflect the low levels of accountability across multiple levels of the education system. In this paper, I document the main education challenges facing developing countries, including the lack of accountability among teachers and school management. I also review recent literature that documents the effectiveness of interventions aimed at addressing these accountability issues. Finally, I assess the potential for the market to improve accountability in the education sector in developing countries.
Full-Text Access | Supplementary Materials

Motivated Beliefs

\”The Mechanics of Motivated Reasoning,\” by Nicholas Epley and Thomas Gilovich
Whenever we see voters explain away their preferred candidate\’s weaknesses, dieters assert that a couple scoops of ice cream won\’t really hurt their weight loss goals, or parents maintain that their children are unusually gifted, we are reminded that people\’s preferences can affect their beliefs. This idea is captured in the common saying, \”People believe what they want to believe.\” But people don\’t simply believe what they want to believe. Psychological research makes it clear that \”motivated beliefs\” are guided by motivated reasoning--reasoning in the service of some self-interest, to be sure, but reasoning nonetheless. People generally reason their way to conclusions they favor, with their preferences influencing the way evidence is gathered, arguments are processed, and memories of past experience are recalled. Each of these processes can be affected in subtle ways by people\’s motivations, leading to biased beliefs that feel objective. In this symposium introduction, we set the stage for discussion of motivated beliefs in the papers that follow by providing more detail about the underlying psychological processes that guide motivated reasoning.
Full-Text Access | Supplementary Materials

\”Mindful Economics: The Production, Consumption, and Value of Beliefs,\” by Roland Bénabou and Jean Tirole
In this paper, we provide a perspective into the main ideas and findings emerging from the growing literature on motivated beliefs and reasoning. This perspective emphasizes that beliefs often fulfill important psychological and functional needs of the individual. Economically relevant examples include confidence in one\’s abilities, moral self-esteem, hope and anxiety reduction, social identity, political ideology, and religious faith. People thus hold certain beliefs in part because they attach value to them, as a result of some (usually implicit) tradeoff between accuracy and desirability. In a sense, we propose to treat beliefs as regular economic goods and assets--which people consume, invest in, reap returns from, and produce, using the informational inputs they receive or have access to. Such beliefs will be resistant to many forms of evidence, with individuals displaying non-Bayesian behaviors such as not wanting to know, wishful thinking, and reality denial.
Full-Text Access | Supplementary Materials

\”The Preference for Belief Consonance,\” by Russell Golman, George Loewenstein, Karl Ove Moene and Luca Zarri
We consider the determinants and consequences of a source of utility that has received limited attention from economists: people\’s desire for the beliefs of other people to align with their own. We relate this \’preference for belief consonance\’ to a variety of other constructs that have been explored by economists, including identity, ideology, homophily, and fellow-feeling. We review different possible explanations for why people care about others\’ beliefs and propose that the preference for belief consonance leads to a range of disparate phenomena, including motivated belief-formation, proselytizing, selective exposure to media, avoidance of conversational minefields, pluralistic ignorance, belief-driven clustering, intergroup belief polarization, and conflict. We also discuss an explanation for why disputes are often so intense between groups whose beliefs are, by external observers\’ standards, highly similar to one another.
Full-Text Access | Supplementary Materials

\”Motivated Bayesians: Feeling Moral While Acting Egoistically,\” by Francesca Gino, Michael I. Norton and Roberto A. Weber
Research yields ample evidence that individuals\’ behavior often reflects an apparent concern for moral considerations. A natural way to interpret evidence of such motives using an economic framework is to add an argument to the utility function such that agents obtain utility both from outcomes that yield only personal benefits and from acting kindly, honestly, or according to some other notion of \”right.\” Indeed, such interpretations can account for much of the existing empirical evidence. However, a growing body of research at the intersection of psychology and economics produces findings inconsistent with such straightforward, preference-based interpretations for moral behavior. In particular, while people are often willing to take a moral act that imposes personal material costs when confronted with a clear-cut choice between \”right\” and \”wrong,\” such decisions often seem to be dramatically influenced by the specific contexts in which they occur. Specifically, when the context provides sufficient flexibility to allow plausible justification that one can act egoistically while remaining moral, people seize on such opportunities to prioritize self-interest at the expense of morality. In other words, people who appear to exhibit a preference for being moral may in fact be placing a value on feeling moral, often accomplishing this goal by manipulating the manner in which they process information to justify taking egoistic actions while maintaining this feeling of morality.
Full-Text Access | Supplementary Materials

NSF Funding for Economists

\”In Defense of the NSF Economics Program,\”  by Robert A. Moffitt
The NSF Economics program funds basic research in economics across all its disparate fields. Its budget has experienced a long period of stagnation and decline, with its real value in 2013 below that in 1980 and having declined by 50 percent as a percent of the total NSF budget. The number of grants made by the program has also declined over time, and its current budget is very small compared to that of many other funders of economic research. Over the years, NSF-supported research has supported many of the major intellectual developments in the discipline that have made important contributions to the study of public policy. The public goods argument for government support of basic economic research is strong. Neither private firms, foundations, nor private donors are likely to engage in the comprehensive support of all forms of economic research if NSF were not to exist. Select universities with large endowments are more likely to have the ability to support general economic research in the absence of NSF, but most universities do not have endowments sufficiently large to do so. Support for large-scale general purpose dataset collection is particularly unlikely to receive support from any nongovernment agency. On a priori grounds, it is likely that most NSF-funded research represents a net increase in research effort rather than displacing already-occurring effort by academic economists. Unfortunately, the empirical literature on the net aggregate impact of NSF economics funding is virtually nonexistent.
Full-Text Access | Supplementary Materials

\”A Skeptical View of the National Science Foundation\’s Role in Economic Research,\” by Tyler Cowen and Alex Tabarrok
We can imagine a plausible case for government support of science based on traditional economic reasons of externalities and public goods. Yet when it comes to government support of grants from the National Science Foundation (NSF) for economic research, our sense is that many economists avoid critical questions, skimp on analysis, and move straight to advocacy. In this essay, we take a more skeptical attitude toward the efforts of the NSF to subsidize economic research. We offer two main sets of arguments. First, a key question is not whether NSF funding is justified relative to laissez-faire, but rather, what is the marginal value of NSF funding given already existing government and nongovernment support for economic research? Second, we consider whether NSF funding might more productively be shifted in various directions that remain within the legal and traditional purview of the NSF. Such alternative focuses might include data availability, prizes rather than grants, broader dissemination of economic insights, and more. Given these critiques, we suggest some possible ways in which the pattern of NSF funding, and the arguments for such funding, might be improved.
Full-Text Access | Supplementary Materials

Articles

\”Can War Foster Cooperation?\” by Michal Bauer, Christopher Blattman, Julie Chytilová, Joseph Henrich, Edward Miguel and Tamar Mitts
In the past decade, nearly 20 studies have found a strong, persistent pattern in surveys and behavioral experiments from over 40 countries: individual exposure to war violence tends to increase social cooperation at the local level, including community participation and prosocial behavior. Thus while war has many negative legacies for individuals and societies, it appears to leave a positive legacy in terms of local cooperation and civic engagement. We discuss, synthesize, and reanalyze the emerging body of evidence and weigh alternative explanations. There is some indication that war violence enhances in-group or \”parochial\” norms and preferences especially, a finding that, if true, suggests that the rising social cohesion we document need not promote broader peace.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

Brexit: Getting Concrete about Next Steps

Now that the first wave of triumphalism and consternation about the Brexit vote on June 23 has died down a bit (my own thoughts just after the vote are here), it\’s useful to start thinking about actual next steps. Richard E. Baldwin has edited an e-book for VoxEU, Brexit Beckons: Thinking Ahead by Leading Economists, with short and readable contributions by 19 economists. Here, I\’ll draw on their contributions and offer some thoughts of my own.

Brexit was a vote for the United Kingdom to leave the European Union, but it didn\’t specify what comes next. Leaving the EU isn\’t just about trade and immigration with the EU directly. The European Union acted on behalf of its members in negotiating trade agreements with the rest of the world, and so the UK must also renegotiate all of those trade agreements, along with renegotiating its status with the World Trade Organization. Moreover, part of membership in the single European market was a very large number of specific rules and regulations about certain industries, some that were doubtless silly, but others that facilitated contractual and financial arrangements across countries. Membership in the EU also meant, for example, support payments to Britain\’s farmers and easy access to the European market for their production. As Baldwin notes in his overview, the prospect of all this renegotiation is \”massively complex.\”

Meanwhile, firms around the world are every day making decisions about where to expand or start up a new office or operation–and where not to expand, pull back, or shut down. If the UK process of renegotiation is long and drawn-out, there will be an extra dose of uncertainty about making business plans involving the UK.

So what are the main options for Britain\’s renegotiators? Probably the most popular choice among the writers in this book is referred to as the EEA or the \”Norway\” option. Without trying to decode all of the alphabet soup of initials around the European Union, the European Economic Area is an overlapping and broader concept. It includes the EU countries, as well as Norway, Iceland, and Liechtenstein. These countries participate in the European single market (with some exceptions like agriculture and fisheries), but are not members of the EU. They make lower contributions to the EU budget, and are not part of EU decision-making.

It\’s easy to see some of the attractions of this approach. There is a template to follow, so the renegotiation could be fairly quick. The UK could send a lot less money to Brussels, which would surely look like a win to the \”Leave\” proponents. For the \”Remain\” proponents, the UK economy would, along many dimensions, continue to be part of the single market.

That said, it\’s not at all clear that this approach would be politically acceptable in the UK. The UK would still be complying with single market rules, but would no longer be involved in negotiating those rules. In particular, the UK would need to remain open to immigration. Thus, some variations on the EEA concept are being floated.

For example, in his contribution to the volume, Jonathan Portes discusses the likelihood of what he calls the \”EEA-minus\” option–that is, EEA membership but while limiting immigration. He writes:

Immigration was a major factor – perhaps the major factor – in the Brexit vote. … It looks likely that the UK’s negotiating position may coalesce around an ‘EEA minus’ arrangement. While free movement would not continue as now, this would not imply moving to a system that gives effectively equal treatment to EU and non-EU nationals; there would still be a considerable degree of preference for the former. The negotiations would likely be legally, economically, and politically complex, but this does not mean that it is not worth trying.

If the UK’s vote to leave the European Union was a vote against anything, it was a vote against free movement of workers within the EU – a vote to “take (back) control” over immigration policy.  For most economists, this is paradoxical. There is a clear consensus that in the UK the economic impacts of immigration, particularly from within the EU, have been largely benign. In particular, there is little or no evidence of economically significant negative impacts on native workers, either in terms of jobs or wages, while the public finances and hence public services have, if anything, benefited. 

Of course, there is no particular reason to believe that the EU is willing to negotiate this kind of option, given that other members will be watching carefully to see if they might want to demand some additional flexibility of their own, whether on immigration or on other issues. Overall, it\’s not clear whether the EU will react to the Brexit vote as a signal that it should take a half-step back and allow its member states greater freedom and flexibility in certain areas, or whether it will react–especially now that the UK is no longer involved in meetings and making an anti-interventionist case–by creating and imposing a new wave of common rules and regulations.

Richard Baldwin offers a different twist, which he calls EEA+AE:

The alternative that seems most sensible from an economic perspective is the Norway option. It may well be that the UK government could make this palatable, despite the free movement of people, by bundling it together with a very thorough set of policies to help the UK citizens who have been left behind by globalisation, technological advances, and European integration. Maybe we could call it the ‘EEA plus anti-exclusion option’ (EEA+AE). If this came to pass, the main economic policy outcome of the Brexit vote would be simple. The UK would end up with more influence over its trade, agricultural, and regional policies, but less influence over the rules and regulation governing its industrial and service sectors.

If an EEA-related approach isn\’t practical, either for UK political reasons or because the EU isn\’t willing to negotiate it, then what other options are possible? Another model is Switzerland, an economy with a large financial sector (like the UK) which has negotiated a set of bilateral deals with the European Union. Canada has signed a free trade agreement with the EU, and presumably the UK could do the same. Or the UK could just trade with the EU under World Trade Organization rules, which has been the legal framework for how the US trades with the EU.

Angus Armstrong tackles this question in his essay by trying to define what Britain\’s trade priorities should look like, given the shape of the modern British economy. He writes:

\”It is more than four decades since the UK last was in charge of trade negotiations. Back then, exports were mostly domestic manufactured goods, where a pound of exports meant a pound of local profits and wages. Today, the UK is at the forefront of complex global value chains where services generate more than half of its domestic profits and wages from trade. This matters when negotiating the best type of trade arrangements. Trade policy is no longer just about reducing tariffs and subsidies to unprofitable industries; it is common standards and regulation, property rights and investment protection, infrastructure and communications, and the free movement of ideas and human capital.\”

Armstrong offers a chart on the main UK trading sectors. He writes:

\”Figure 2 gives a breakdown of value added by domestic and foreign firms in UK exports by the most important trading sectors. It is striking that business services, finance, and wholesale and retail trade account for the same domestic value added as the 17 other sectors from chemicals onwards. For these businesses, trade policy is about market access, equivalent regulations, and mutual recognition. Many FTAs include service sector provisions, but they typically involve official procurement opportunities, cross-border exports of services (as opposed to locating firms in foreign markets) and transparency agreements, and cover specific sectors only. No FTAs offer anything like the service sector access offered by the Single Market.\”

In short, the UK can in some ways agree to replicate a number of existing trade agreements. It could go to the WTO, and say that it will just more-or-less leave all its trade commitments as is. There\’s no precedent for how such a negotiation should happen, but this seems at least theoretically possible. As Armstrong points out: \”The EU has 53 preferential trade agreements – mostly with developing states – that will no longer cover the UK after withdrawal. The UK would also need to consider if, and how, to be included in the US-EU Transatlantic Trade and Investment Partnership (TTIP) and other free trade agreements (FTAs) under negotiation. The UK can seek to join regional trade agreements such as the Trans-Pacific Partnership, and enter into other negotiations such as the Trade in Services Agreement (TiSA). Whether the UK has more success or less influence outside the EU remains to be seen.\”

But the essential issue here is that more than half of UK trade is with the EU, and the bulk of that trade is in business services, finance, and wholesale and retail trade. These are areas where trade often requires detailed agreement on financial and legal regulations–the kind of detailed agreements that are part of the EU. Armstrong writes: \”From an economics perspective, it is clear that agreements offering deep market access are more preferable than WTO access and many FTAs. The problem is that policies which enable deep market access encroach on the traditional domain of domestic policy.\”

Given the importance of the financial sector in the UK economy, what happens to that sector next seems especially important. For example, there are questions of \”passporting,\” meaning whether UK financial firms can operate freely in Europe, and whether common regulations will be imposed. In her essay, Patricia Jackson offers what seems to me a cautiously optimistic scenario that the UK banking sector may be able to adapt. She writes:

The Brexit vote has undoubtedly created uncertainty and market volatility, with particular uncertainty for London, the EU’s largest financial centre. One issue facing the UK banking sector is the right to conduct cross-border activity in the EU (so-called passporting) when the UK is no longer an EU member. Another is the impact of Brexit on flexible recruitment in London. A further issue is the possibility that UK regulation moves away from that in the EU. …

Currently, banks established in the UK – either UK owned or UK subsidiaries of overseas banks – have the right to establish branches or carry out cross-border activity in the rest of the EU and other EEA states (passporting). It is far too early to say if these rights will be maintained as a result of the exit negotiations. If the rights are not maintained, then many banks may have to reassess their European structures if they wish to carry out cross-border activity into the EEA. Before deciding on changes, however, the banks need to consider the extent to which they can utilise existing subsidiaries established in the rest of the EU to achieve their passporting rights. A quick review of a sample of major non-European banks with subsidiaries in London indicates that around three-quarters also have subsidiaries elsewhere in the EU.

In addition, the Markets in Financial Instruments Directive (MiFID) does allow for cross-border access by banks established outside the EU to exchanges, clearing houses, and clearing and settlement systems, and third-country equivalence provisions allow passporting into the EU to deal with professional clients. Third-country equivalence requires an assessment of areas such as authorisation and supervision, rules covering market abuse, and so on. The questions are therefore much more about access to non-professional customers, and here existing subsidiaries could in many cases be used to provide passporting. …

One concern that the industry has is that UK regulation could diverge from that of the EU, adding cost and complexity. However, capital regulation of banks is underpinned by the Basel Accords, making it unlikely that the UK would move away from the EU in this area. Of course, over time some differences in application might develop, but in terms of implementation the UK has had a distinct approach. Indeed, changes in the Single Supervisory Mechanism led by the ECB are tending to bring the continent closer to the UK’s approach in areas such as Pillar II, the assessment of risks in the round and adequacy of capital. The UK has also always had a distinctive approach to conduct regulation.

As the high-profile decisions get made, like when (or whether?) the UK will withdraw officially from the EU, and whether to pursue an EEA-related approach, my expectation is that there will be a continual succession of political land mines which blow up at unexpected times. For example, British agriculture is likely to be in turmoil; Baldwin sketches the issues in his overview (citations omitted):

The farm problem is a particularly significant one. During the referendum campaign, UK farmers reportedly received assurances from Leave campaigners that the subsidies they now receive from the EU would be continued after Brexit. This is no small matter, as EU direct payments make up 54% of British farmers’ income. 

One issue may arise, however, with the nature of the payments. Under WTO rules it is not possible for the UK to provide trade-distorting subsidies to its farmers unless the UK has an agreement that permits it. Today, such payments are possible due to a deal that the EU struck with its WTO partners when the UK was part of the EU. After leaving, the UK would either have to abandon the policy, or negotiate new exceptions with the other 162 WTO members. As some of the other members are vehemently opposed to such payments, negotiating such a waiver could be difficult. Additionally, continued access to the EU market for farm products is important since the EU buys over 60% of the UK’s agricultural exports. Even under the Norway option, this access is not assured since agriculture was excluded from the European Economic Area agreements (at the request of Norway, inter alia, when the deal was being crafted in the 1990s).

There will be questions about how the preferences of Scotland and Northern Ireland will be reflected in the negotiations to come, and how this affects politics within the UK. Membership in the EU did limit UK policy choices in certain ways, for example, by limiting the ability of governments to hand out industrial subsidies. My guess is that at least some of the \”Leave\” supporters will soon be pushing for a new wave of such subsidies.

In a bigger-picture sense, UK trade relations are really just a step toward the larger goal, which is a shared and growing prosperity for the country. Here, the concern should be that Britain\’s political system will become so wrapped up in renegotiating with the EU and the rest of the world that it pays insufficient attention to the fundamental underpinnings of economic growth. As I\’ve argued on this blog in the past, international trade is often treated as a scapegoat for economic problems that are more fundamentally about technology and the pace of change in a globalizing economy.

Nicholas Crafts offers a useful reflection on this theme, pointing out that when it comes to long-run growth in the United Kingdom, the most important policy choices have always been, and will continue to be, the choices made in Westminster, not by EU bureaucrats in Brussels. Crafts writes (citations omitted):

The proximate sources of growth can be found in rates of increase of factor inputs, including capital, human capital, and hours worked, and of the productivity of those inputs. At a deeper level, economics highlights the importance of micro-foundations of growth in terms of the key role played by the incentive structures which inform decisions to invest, to innovate, and to adopt new technology, and which depend on institutions and policy. Obviously, there are a large number of supply-side policies that affect growth performance. These include areas such as competition, education, infrastructure, innovation, regulation, and taxation. Moreover, even for EU members, these are very largely under the control of national governments.

Even though relative UK growth performance improved prior to the Global Crisis, there have been long-standing failings in supply-side policy. The most obvious is in innovation policy, which is reflected in a low level of R&D, but education, infrastructure, land-use planning regulation, and the tax system also give significant cause for concern, while British capital markets remain notably short-termist with a bias against long-term investment.

Although Eurosceptics complain about the costs of EU-imposed regulations, it should be recognised that the UK has persistently been able to maintain very light levels of regulation in terms of key OECD indicators such as product market regulation (PMR) and employment protection legislation (EPL), for which high scores have been shown to have significant detrimental effects. In 2013, the UK had a PMR score of 1.09 and an EPL score of 1.12, the second and third lowest in the OECD, respectively. Moreover, it is noticeable that the regulations which it might be politically feasible to remove in the event of Brexit do not include anything that might make a significant difference to productivity performance. 

In short, the Brexit decision that the UK should renegotiate all its trade agreements–both directly with the EU and those agreements made by the EU with other countries–might not have much negative effect on the UK economy, assuming a new set of trade agreements not too different from existing arrangements goes into effect fairly soon. (Did I ram enough qualifiers into that sentence?) But existing trade agreements were not a primary source of Britain\’s economic issues, and so renegotiating trade agreements won\’t be a path to economic prosperity for the UK.