Summer 2018 Journal of Economic Perspectives Available On-line

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided, to my delight, that it would be freely available on-line, from the current issue back to the first issue. Here, I'll start with the Table of Contents for the just-released Summer 2018 issue, which in the Taylor household is known as issue #125. Below that are abstracts and direct links for all of the papers. I may blog more specifically about some of the papers in the next week or two, as well.

_________________

Symposium: Macroeconomics a Decade after the Great Recession

"What Happened: Financial Factors in the Great Recession," by Mark Gertler and Simon Gilchrist
At the onset of the recent global financial crisis, the workhorse macroeconomic models assumed frictionless financial markets. These frameworks were thus not able to anticipate the crisis, nor to analyze how the disruption of credit markets changed what initially appeared like a mild downturn into the Great Recession. Since that time, an explosion of both theoretical and empirical research has investigated how the financial crisis emerged and how it was transmitted to the real sector. The goal of this paper is to describe what we have learned from this new research and how it can be used to understand what happened during the Great Recession. In the process, we also present some new empirical work. We argue that a complete description of the Great Recession must take account of the financial distress facing both households and banks and, as the crisis unfolded, nonfinancial firms as well. Exploiting both panel data and time series methods, we analyze the contribution of the house price decline, versus the banking distress indicator, to the overall decline in employment during the Great Recession. We confirm a common finding in the literature that the household balance sheet channel is important for regional variation in employment. However, we also find that the disruption in banking was central to the overall employment contraction.
Full-Text Access | Supplementary Materials

"Finance and Business Cycles: The Credit-Driven Household Demand Channel," by Atif Mian and Amir Sufi
What is the role of the financial sector in explaining business cycles? This question is as old as the field of macroeconomics, and an extensive body of research conducted since the Global Financial Crisis of 2008 has offered new answers. The specific idea put forward in this article is that expansions in credit supply, operating primarily through household demand, have been an important driver of business cycles. We call this the credit-driven household demand channel. While this channel helps explain the recent global recession, it also describes economic cycles in many countries over the past 40 years.
Full-Text Access | Supplementary Materials

"Identification in Macroeconomics," by Emi Nakamura and Jón Steinsson
This paper discusses empirical approaches macroeconomists use to answer questions like: What does monetary policy do? How large are the effects of fiscal stimulus? What caused the Great Recession? Why do some countries grow faster than others? Identification of causal effects plays two roles in this process. In certain cases, progress can be made using the direct approach of identifying plausibly exogenous variation in a policy and using this variation to assess the effect of the policy. However, external validity concerns limit what can be learned in this way. Carefully identified causal effects estimates can also be used as moments in a structural moment matching exercise. We use the term "identified moments" as a short-hand for "estimates of responses to identified structural shocks," or what applied microeconomists would call "causal effects." We argue that such identified moments are often powerful diagnostic tools for distinguishing between important classes of models (and thereby learning about the effects of policy). To illustrate these notions we discuss the growing use of cross-sectional evidence in macroeconomics and consider what the best existing evidence is on the effects of monetary policy.
Full-Text Access | Supplementary Materials

"The State of New Keynesian Economics: A Partial Assessment," by Jordi Galí
In August 2007, when the first signs emerged of what would come to be the most damaging global financial crisis since the Great Depression, the New Keynesian paradigm was dominant in macroeconomics. Ten years later, tons of ammunition has been fired against modern macroeconomics in general, and against dynamic stochastic general equilibrium models that build on the New Keynesian framework in particular. Those criticisms notwithstanding, the New Keynesian model arguably remains the dominant framework in the classroom, in academic research, and in policy modeling. In fact, one can argue that over the past ten years the scope of New Keynesian economics has kept widening, by encompassing a growing number of phenomena that are analyzed using its basic framework, as well as by addressing some of the criticisms raised against it. The present paper takes stock of the state of New Keynesian economics by reviewing some of its main insights and by providing an overview of some recent developments. In particular, I discuss some recent work on two very active research programs: the implications of the zero lower bound on nominal interest rates and the interaction of monetary policy and household heterogeneity. Finally, I discuss what I view as some of the main shortcomings of the New Keynesian model and possible areas for future research.
Full-Text Access | Supplementary Materials

"On DSGE Models," by Lawrence J. Christiano, Martin S. Eichenbaum and Mathias Trabandt
The outcome of any important macroeconomic policy change is the net effect of forces operating on different parts of the economy. A central challenge facing policymakers is how to assess the relative strength of those forces. Economists have a range of tools that can be used to make such assessments. Dynamic stochastic general equilibrium (DSGE) models are the leading tool for making such assessments in an open and transparent manner. We review the state of mainstream DSGE models before the financial crisis and the Great Recession. We then describe how DSGE models are estimated and evaluated. We address the question of why DSGE modelers—like most other economists and policymakers—failed to predict the financial crisis and the Great Recession, and how DSGE modelers responded to the financial crisis and its aftermath. We discuss how current DSGE models are actually used by policymakers. We then provide a brief response to some criticisms of DSGE models, with special emphasis on criticism by Joseph Stiglitz, and offer some concluding remarks.
Full-Text Access | Supplementary Materials

"Evolution of Modern Business Cycle Models: Accounting for the Great Recession," by Patrick J. Kehoe, Virgiliu Midrigan and Elena Pastorino
Modern business cycle theory focuses on the study of dynamic stochastic general equilibrium (DSGE) models that generate aggregate fluctuations similar to those experienced by actual economies. We discuss how these modern business cycle models have evolved across three generations, from their roots in the early real business cycle models of the late 1970s through the turmoil of the Great Recession four decades later. The first generation models were real (that is, without a monetary sector) business cycle models that primarily explored whether a small number of shocks, often one or two, could generate fluctuations similar to those observed in aggregate variables such as output, consumption, investment, and hours. These basic models disciplined their key parameters with micro evidence and were remarkably successful in matching these aggregate variables. A second generation of these models incorporated frictions such as sticky prices and wages; these models were primarily developed to be used in central banks for short-term forecasting purposes and for performing counterfactual policy experiments. A third generation of business cycle models incorporate the rich heterogeneity of patterns from the micro data. A defining characteristic of these models is not the heterogeneity among model agents they accommodate nor the micro-level evidence they rely on (although both are common), but rather the insistence that any new parameters or feature included be explicitly disciplined by direct evidence. We show how two versions of this latest generation of modern business cycle models, which are real business cycle models with frictions in labor and financial markets, can account, respectively, for the aggregate and the cross-regional fluctuations observed in the United States during the Great Recession.
Full-Text Access | Supplementary Materials

"Microeconomic Heterogeneity and Macroeconomic Shocks," by Greg Kaplan and Giovanni L. Violante
In this essay, we discuss the emerging literature in macroeconomics that combines heterogeneous agent models, nominal rigidities, and aggregate shocks. This literature opens the door to the analysis of distributional issues, economic fluctuations, and stabilization policies—all within the same framework. In response to the limitations of the representative agent approach to economic fluctuations, a new framework has emerged that combines key features of heterogeneous agents (HA) and New Keynesian (NK) economies. These HANK models offer a much more accurate representation of household consumption behavior and can generate realistic distributions of income, wealth, and, albeit to a lesser degree, household balance sheets. At the same time, they can accommodate many sources of macroeconomic fluctuations, including those driven by aggregate demand. In sum, they provide a rich theoretical framework for quantitative analysis of the interaction between cross-sectional distributions and aggregate dynamics. In this article, we outline a state-of-the-art version of HANK together with its representative agent counterpart, and convey two broad messages about the role of household heterogeneity for the response of the macroeconomy to aggregate shocks: 1) the similarity between the Representative Agent New Keynesian (RANK) and HANK frameworks depends crucially on the shock being analyzed; and 2) certain important macroeconomic questions concerning economic fluctuations can only be addressed within heterogeneous agent models.
Full-Text Access | Supplementary Materials

Symposium: Incentives in the Workplace
"Compensation and Incentives in the Workplace," by Edward P. Lazear
Labor is supplied because most of us must work to live. Indeed, it is called "work" in part because without compensation, the overwhelming majority of workers would not otherwise perform the tasks. The theme of this essay is that incentives affect behavior and that economics as a science has made good progress in specifying how compensation and its form influences worker effort. This is a broad topic, and the purpose here is not a comprehensive literature review on each of many topics. Instead, a sample of some of the most applicable papers are discussed with the goal of demonstrating that compensation, incentives, and productivity are inseparably linked.
Full-Text Access | Supplementary Materials

"Nonmonetary Incentives and the Implications of Work as a Source of Meaning," by Lea Cassar and Stephan Meier
Empirical research in economics has begun to explore the idea that workers care about nonmonetary aspects of work. An increasing number of economic studies using survey and experimental methods have shown that nonmonetary incentives and nonpecuniary aspects of one's job have substantial impacts on job satisfaction, productivity, and labor supply. By drawing on this evidence and relating it to the literature in psychology, this paper argues that work represents much more than simply earning an income: for many people, work is a source of meaning. In the next section, we give an economic interpretation of meaningful work and emphasize how it is affected by the mission of the organization and the extent to which job design fulfills the three psychological needs at the basis of self-determination theory: autonomy, competence, and relatedness. We point to the evidence that not everyone cares about having a meaningful job and discuss potential sources of this heterogeneity. We sketch a theoretical framework to start to formalize work as a source of meaning and think about how to incorporate this idea into agency theory and labor supply models. We discuss how workers' search for meaning may affect the design of monetary and nonmonetary incentives. We conclude by suggesting some insights and open questions for future research.
Full-Text Access | Supplementary Materials

"The Changing (Dis-)utility of Work," by Greg Kaplan and Sam Schulhofer-Wohl
We study how changes in the distribution of occupations have affected the aggregate non-pecuniary costs and benefits of working. The physical toll of work is less now than in 1950, with workers shifting away from occupations in which people report experiencing tiredness and pain. The emotional consequences of the changing occupation distribution vary substantially across demographic groups. Work has become happier and more meaningful for women, but more stressful and less meaningful for men. These changes appear to be concentrated at lower education levels.
Full-Text Access | Supplementary Materials

Individual Articles

"Social Connectedness: Measurement, Determinants, and Effects," by Michael Bailey, Rachel Cao, Theresa Kuchler, Johannes Stroebel and Arlene Wong
Social networks can shape many aspects of social and economic activity: migration and trade, job-seeking, innovation, consumer preferences and sentiment, public health, social mobility, and more. In turn, social networks themselves are associated with geographic proximity, historical ties, political boundaries, and other factors. Traditionally, the unavailability of large-scale and representative data on social connectedness between individuals or geographic regions has posed a challenge for empirical research on social networks. More recently, a body of such research has begun to emerge using data on social connectedness from online social networking services such as Facebook, LinkedIn, and Twitter. To date, most of these research projects have been built on anonymized administrative microdata from Facebook, typically by working with coauthor teams that include Facebook employees. However, there is an inherent limit to the number of researchers that will be able to work with social network data through such collaborations. In this paper, we therefore introduce a new measure of social connectedness at the US county level. Our Social Connectedness Index is based on friendship links on Facebook, the global online social networking service. Specifically, the Social Connectedness Index corresponds to the relative frequency of Facebook friendship links between every county-pair in the United States, and between every US county and every foreign country. Given Facebook's scale as well as the relative representativeness of Facebook's user body, these data provide the first comprehensive measure of friendship networks at a national level.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

Mark Twain on Extrapolation: "Such Wholesale Returns of Conjecture Out of Such a Trifling Investment of Fact"

I sometimes say, with a smile and a wince, that it only takes three data points for economists to start building a theory, and that in a pinch, we can make do with less. But of course, anyone who develops a theory based on limited data is prone to false extrapolations. Mark Twain offered one vivid example in his 1883 memoir Life on the Mississippi. This passage describes how there are places where the river loops back and forth in the shape of horseshoe curves. At some point, there is a cut-through (often caused by nature, but sometimes with an assist from those who saw the value of riverfront property). The river charges through the cut-through instead, and thus becomes shorter.

Twain uses this set of facts for a sarcastic jab at science and extrapolation. I quote here from the Project Gutenberg version of Life on the Mississippi, from near the start of Chapter 17.

"They give me an opportunity of introducing one of the Mississippi's oddest peculiarities,—that of shortening its length from time to time. … The water cuts the alluvial banks of the 'lower' river into deep horseshoe curves; so deep, indeed, that in some places if you were to get ashore at one extremity of the horseshoe and walk across the neck, half or three quarters of a mile, you could sit down and rest a couple of hours while your steamer was coming around the long elbow, at a speed of ten miles an hour, to take you aboard again. …

"Pray observe some of the effects of this ditching business. Once there was a neck opposite Port Hudson, Louisiana, which was only half a mile across, in its narrowest place. You could walk across there in fifteen minutes; but if you made the journey around the cape on a raft, you traveled thirty-five miles to accomplish the same thing. In 1722 the river darted through that neck, deserted its old bed, and thus shortened itself thirty-five miles. In the same way it shortened itself twenty-five miles at Black Hawk Point in 1699. Below Red River Landing, Raccourci cut-off was made (forty or fifty years ago, I think). This shortened the river twenty-eight miles. In our day, if you travel by river from the southernmost of these three cut-offs to the northernmost, you go only seventy miles. To do the same thing a hundred and seventy-six years ago, one had to go a hundred and fifty-eight miles!—shortening of eighty-eight miles in that trifling distance. At some forgotten time in the past, cut-offs were made above Vidalia, Louisiana; at island 92; at island 84; and at Hale's Point. These shortened the river, in the aggregate, seventy-seven miles.

"Since my own day on the Mississippi, cut-offs have been made at Hurricane Island; at island 100; at Napoleon, Arkansas; at Walnut Bend; and at Council Bend. These shortened the river, in the aggregate, sixty-seven miles. In my own time a cut-off was made at American Bend, which shortened the river ten miles or more.

"Therefore, the Mississippi between Cairo and New Orleans was twelve hundred and fifteen miles long one hundred and seventy-six years ago. It was eleven hundred and eighty after the cut-off of 1722. It was one thousand and forty after the American Bend cut-off. It has lost sixty-seven miles since. Consequently its length is only nine hundred and seventy-three miles at present.

"Now, if I wanted to be one of those ponderous scientific people, and 'let on' to prove what had occurred in the remote past by what had occurred in a given time in the recent past, or what will occur in the far future by what has occurred in late years, what an opportunity is here! Geology never had such a chance, nor such exact data to argue from! Nor 'development of species,' either! Glacial epochs are great things, but they are vague—vague. Please observe:—

"In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the 'Old Oolitic Silurian Period,' just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact."
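Twain's joke is nothing more than naive linear extrapolation, and his arithmetic is easy to check. Here is a minimal sketch in Python, using only the figures quoted in the passage above; the function name and structure are my own illustrative choices, not anything from Twain:

```python
# Twain's tongue-in-cheek linear extrapolation, with the numbers
# taken from the quoted passage.
shortening_miles = 242   # miles the Lower Mississippi lost over the window
window_years = 176       # years in the observation window
rate = shortening_miles / window_years  # "a trifle over one mile and a third per year"

current_length = 973     # miles between Cairo and New Orleans "at present"

def length_miles(years_from_now):
    """Naively extrapolate river length; a negative argument looks backward in time."""
    return current_length - rate * years_from_now

print(round(rate, 3))            # 1.375 miles per year
print(length_miles(-1_000_000))  # 1375973.0 miles, "stuck out over the Gulf
                                 # of Mexico like a fishing-rod"
```

The forward half of the joke works the same way: at 1.375 miles per year, the river runs out of length entirely in well under a thousand years, which is exactly the absurdity Twain is selling.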

"Whoever Is Not a Liberal at 20 Has No Heart …"

There's a saying along these general lines: "If you're not a liberal when you're 25, you have no heart. If you're not a conservative by the time you're 35, you have no brain." Who said it? One blessing of the web is that I can fiddle around with such questions without needing to spend three days in the library.

It's apparently not Winston Churchill. At least, there's no record of him having said or written it. And Churchill scholars point out that he was a conservative at 15 and a liberal at 35.
Indeed, it seems the origins of the comment may be French, rather than English. The Quote Investigator website writes:

The earliest evidence located by QI appeared in an 1875 French book of contemporary biographical portraits by Jules Claretie. A section about a prominent jurist and academic named Anselme Polycarpe Batbie included the following passage [translated as] … 

"Mr. Batbie, in a much-celebrated letter, once quoted the Burke paradox in order to account for his bizarre political shifts: 'He who is not a républicain at twenty compels one to doubt the generosity of his heart; but he who, after thirty, persists, compels one to doubt the soundness of his mind.'"

Quote Investigator has not found an actual record of Mr. Batbie's "much-celebrated letter." And although the "Burke paradox" seems most likely to refer to Edmund Burke, it isn't clear whether it's a reference to something not-yet-discovered that Burke wrote, or a reference to a pattern purportedly revealed by Burke's life and writings.

But hearkening back to Burke is interesting, because in Thomas Jefferson's journals one finds an entry relevant to this subject for January 1799. John Adams was president at the time. Jefferson writes:

"In a conversation between Dr. Ewen and the President, the former said one of his sons was an aristocrat, the other a democrat. The President asked if it was not the youngest who was the democrat. Yes, said Ewen. Well, said the President, a boy of 15 who is not a democrat is good for nothing, and he is no better who is a democrat at 20. Ewen told Hurt, and Hurt told me."

For a lengthy list of other places where something similar to this quotation has appeared, see here or here. While the quotation clearly has staying power, it seems overly facile to me. The implied distinction, that liberals feel while conservatives think, is silly and shallow, and shows little understanding of either. The strong beliefs of young people are easily dismissed as rooted only in feelings, but at least young people often show some flexibility about learning and adapting. The strong feelings of the middle-aged and elderly, meanwhile, often seem based as much on being set in their ways, on confirmation bias, and on lessons learned in a rather different past as on any deeper weighing of facts, values, and experience.
Herbert Stein, who was an economist in many positions in Washington, DC for more than 50 years, captured some of my own sense in his 1995 collection of essays, On the Other Hand: Essays on Economics, Economists, and Politics (pp. 1-2):

"An old saying goes that whoever is not a Socialist when young has no heart and whoever is still a Socialist when old has no head. I would say that whoever is not a liberal when young has no heart, whoever is not a conservative when middle-aged has no head, and whoever is still either a liberal or a conservative at age seventy-eight has no sense of humor. Obviously, orthodox certainty on matters about which there can be so little certitude must eventually be seen as only amusing."

If you can't learn from both liberals and conservatives, and also laugh at both liberals and conservatives, you might want to reconsider the vehemence of your partisan commitments.

How Coalitional Instincts Make Weird Groups and Stupid People

I like to think of myself as an individual who makes up his own mind, but that's almost certainly wrong for me, and for you, gentle reader, as well. A vast literature in psychology points out that, in effect, a number of separate personalities live in each of our brains. Which decision gets made at a certain time is determined in part by how issues of reward and risk are framed and communicated to us. Moreover, we are members of groups. If my wife or one of my children is in a serious dispute, I will lose some degree of my sunny disposition and rational fair-mindedness. Probably I won't lose all of it. Maybe I'll lose less of it than a typical person in a similar situation. But I'll lose some of it.
John Tooby, a professor of anthropology at the University of California-Santa Barbara, has written about what he calls "Coalitional Instincts" in a short piece for Edge.org (November 22, 2017). Tooby argues that human brains have evolved so that we have "a nearly insuperable human appetite to be a good coalition member." But to demonstrate clearly that we are part of a coalition, we are all drawn to "unusual, exaggerated beliefs … alarmism, conspiracies, or hyperbolic comparisons." Here's Tooby (I have inserted the boldface emphasis):

"Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups. … These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. …

"Why do we see the world this way? Most species do not and cannot. … Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.

"Ancestrally, evolving the neural code to crack these problems supercharged the ability to successfully compete for access to reproductively limiting resources. Fatefully, we are descended solely from those better equipped with coalitional instincts. In this new world, power shifted from solitary alphas to the effectively coordinated down-alphabet, giving rise to a new, larger landscape of political threat and opportunity: rival groups or factions expanding at your expense or shrinking as a result of your dominance.

"And so a daunting new augmented reality was neurally kindled, overlying the older individual one. It is important to realize that this reality is constructed by and runs on our coalitional programs and has no independent existence. You are a member of a coalition only if someone (such as you) interprets you as being one, and you are not if no one does. We project coalitions onto everything, even where they have no place, such as in science. We are identity-crazed.

"The primary function that drove the evolution of coalitions is the amplification of the power of its members in conflicts with non-members. This function explains a number of otherwise puzzling phenomena. For example, ancestrally, if you had no coalition you were nakedly at the mercy of everyone else, so the instinct to belong to a coalition has urgency, preexisting and superseding any policy-driven basis for membership. This is why group beliefs are free to be so weird. Since coalitional programs evolved to promote the self-interest of the coalition's membership (in dominance, status, legitimacy, resources, moral force, etc.), even coalitions whose organizing ideology originates (ostensibly) to promote human welfare often slide into the most extreme forms of oppression, in complete contradiction to the putative values of the group. …

"Moreover, to earn membership in a group you must send signals that clearly indicate that you differentially support it, compared to rival groups. Hence, optimal weighting of beliefs and communications in the individual mind will make it feel good to think and express content conforming to and flattering to one's group's shared beliefs and to attack and misrepresent rival groups. The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities.

"This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to "rational," scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one's friends, and one's cherished group identity. This freezes belief revision.

"Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member."

The lesson I draw here is that although we all feel a strong need to join groups, we do have some degree of choice and agency over which groups we end up joining. Even within larger groups, like a certain religion or political party, there will be smaller groups with which one can have a primary affiliation. It may be wise to give an outlet to our coalitional nature by joining several different groups, or by pushing oneself to occasionally phase out one membership and join another.

In addition, we all feel a need to do something a little wacky and extreme to show our group affiliation, but again, we have some degree of choice and agency over what actions and messages define our group. Wearing the colors of a professional sports team, for example, is a different kind of wackiness than sending vitriolic social media messages. Humans want to join coalitional groups, but we can at least consider whether the way a group expresses solidarity is a good fit with who we want to be.

Difficulties of Making Predictions: Global Power Politics Edition

Making predictions is hard, especially about the future. It's a comment that seems to have been attributed to everyone from Nostradamus to Niels Bohr to Yogi Berra. But it's deeply true. Most of us have a tendency to make statements about the future with a high level of self-belief, avoid later reconsidering how wrong we were, and then make more statements.

Here's a nice vivid example from back in 2001. The Bush administration had just taken office, and Linville Wells, an official at the US Department of Defense, was reflecting on the then-forthcoming "Quadrennial Defense Review." He wanted to offer a pungent reminder that the entire exercise of looking ahead even just 10 years has often been profoundly incorrect. Thus, Wells wrote this memo (dated April 12, 2001):

  • If you had been a security policy-maker in the world's greatest power in 1900, you would have been a Brit, looking warily at your age-old enemy, France.
  • By 1910, you would be allied with France and your enemy would be Germany.
  • By 1920, World War I would have been fought and won, and you'd be engaged in a naval arms race with your erstwhile allies, the U.S. and Japan.
  • By 1930, naval arms limitation treaties were in effect, the Great Depression was underway, and the defense planning standard said "no war for ten years."
  • Nine years later World War II had begun.
  • By 1950, Britain no longer was the world's greatest power, the Atomic Age had dawned, and a "police action" was underway in Korea.
  • Ten years later the political focus was on the "missile gap," the strategic paradigm was shifting from massive retaliation to flexible response, and few people had heard of Vietnam.
  • By 1970, the peak of our involvement in Vietnam had come and gone, we were beginning détente with the Soviets, and we were anointing the Shah as our protégé in the Gulf region.
  • By 1980, the Soviets were in Afghanistan, Iran was in the throes of revolution, there was talk of our "hollow forces" and a "window of vulnerability," and the U.S. was the greatest creditor nation the world had ever seen.
  • By 1990, the Soviet Union was within a year of dissolution, American forces in the Desert were on the verge of showing they were anything but hollow, the U.S. had become the greatest debtor nation the world had ever known, and almost no one had heard of the internet.
  • Ten years later, Warsaw was the capital of a NATO nation, asymmetric threats transcended geography, and the parallel revolutions of information, biotechnology, robotics, nanotechnology, and high density energy sources foreshadowed changes almost beyond forecasting.
  • All of which is to say that I'm not sure what 2010 will look like, but I'm sure that it will be very little like we expect, so we should plan accordingly.

The questions of how to predict for what you don't expect, and how to plan for what you don't expect, are admittedly difficult. The ability to pivot smoothly to face a new challenge may be one of the most underrated skills in politics and management.

The Need for Generalists

One can make a reasonable argument that the concept of an economy and the study of economics begins with the idea of specialization, in the sense that those who function within an economy specialize in one kind of production, but then trade with others to consume a broader array of goods. Along these lines, the first chapter of Adam Smith's 1776 Wealth of Nations is titled "Of the Division of Labour." But in the push for specialization, there can be a danger of neglecting the virtues of generalists. Even when it comes to assembly lines, specialization of tasks can be pushed so far that it becomes unproductive (as Smith recognized). In a broad array of jobs, including managers, journalists, legislators and politicians, and even editors of academic journals, there is a need for generalists who can synthesize and put in context the work of a range of specialists.

The need for generalists is not at all new, of course. Here's a commentary on "The Need for Generalists" from 80 years ago, by A.G. Black, who was Chief of the Bureau of Agricultural Economics at the US Department of Agriculture, published in the Journal of Farm Economics (November 1936, 18:4, pp. 657-661, and available via JSTOR). Black is writing in particular about specialization within what is already a specialty of agricultural economics, but his point applies more broadly.

"The past generation, like several generations before it, has indeed been one of greater and greater specialization. This has resulted in great advances in agricultural economics. Our specialists have developed new technics of analysis, they have discovered new relationships, they have been able to give close attention to important facts and factors that might otherwise have escaped attention and by such escape might have led to wrong conclusions. Without this specialized attention our discipline in agricultural economics could not have attained the position it has reached today.

"This advance has not been attained without cost. The price has been the loss of minds, or the neglect to develop minds, trained to cope with the complex problems of today in the comprehensive, overall manner called for by such problems. Our specialists are splendidly equipped to solve a problem concerning the price of wheat, or of corn, or of cotton, or a technical question in cooperative marketing, farm management, taxation or international trade. But the more important problems almost never present themselves in those narrow terms; rather they may involve elements of all the above and perhaps several more. …

"Increased specialization itself tends to raise barriers between fields. It tends to create a system of professional jealousies that is not conducive to the development of generalists. The specialist who burrows deeper and deeper into a narrower and narrower hole becomes convinced that no one who has been sapping in a neighboring tunnel can possibly know as much about the peculiar twists and turns of his burrow as he, himself. And he is right. He knows that he can readily confound and confuse a neighboring specialist if the latter strays from his own confines, and what is more, he will. One of the greatest joys of the specialist is to make an associate appear infantile and ridiculous on the occasions when the latter appears to be getting out of his field.

"The specialist stakes out his claim and guards it as jealously as ever did a prospector of the '40s, and woe to the unwary trespasser. As the specialist knows how he looks upon intruders, he knows how he would be treated if he had the temerity to wander outside his main field. Consequently he is usually quite willing to leave outside fields to other specialists.

"The development of the whole field takes on a honeycomb appearance with series upon series of well-marked and almost wholly isolated cells. These cell walls need to be broken down. There is need of men who can correlate and coordinate the specialized knowledge in the separate cells–men who can bring to bear on the larger problems the findings of the different specialists and who have sufficient perspective and sense of proportion to apply just the correct shade of emphasis to the contribution of each particular specialist. …

"Our whole organization has developed on the assumption that the generalizing function is not important, that it does not require quite the ability and training of the specialist, that it can be satisfactorily done by almost anyone and that certainly there is nothing about it that demands the attention of really first class men. If generalizing be done at all, it can safely be committed to the specialist who can play with it as relaxation from the really serious and important demands of his specialty, or to the administrator who can give it all of the attention it requires between telephone calls and committee meetings.

"All of this, I suppose, leads to the conclusion that in agricultural economics we need another specialist, that is a "specialist" who is a "generalist." We need to make a place for the trained economist of highest ability who will be free from administrative demands as well as free from the tyranny of specialization, who will have the job of keeping abreast of the results of the various specialists and who can spend a good deal of time in analyzing findings having a bearing upon the ultimate solution of these same problems. … In other words, students need training in analysis and in synthesis. Today the ability to synthesize facts, research results and partial solutions into a well rounded whole is too infrequently available."

One of the many political clichés that makes me die a little bit inside is when someone claims that all we need to address a certain problem (health care, poverty, transportation, the Middle East, whatever) is to bring together a group of experts who will provide the common-sense solution that we have all been ignoring. But while bringing together a group of specialist experts can provide a great deal of information and insight, the experts are often not especially good at melding their specific insights into a general policy.

Homage: I ran across a mention of this article at Carola Binder's always-useful "Quantitative Ease" website last summer, and left myself a note to track it down. But given my time constraints and organizational skills, it took a while for me to do so.

"Half the Money I Spend on Advertising is Wasted, and the Trouble is I Don't Know Which Half"

There's an old rueful line from firms that advertise: "We know that half of all we spend on advertising is wasted, but we don't know which half." It's not clear who originally coined the phrase. But we do know that the effects of advertising have changed dramatically in a digital age. Half of all advertising spending may still be wasted, but now it's for a very different reason.

I was raised with the folklore that John Wanamaker, founder of the eponymous department stores, was the originator of the phrase at hand. But the attribution gets pretty shaky, pretty fast. David Ogilvy, the head of the famous Ogilvy & Mather advertising agency, wrote in his 1963 book Confessions of an Advertising Man (pp. 86-87): "As Lord Leverhulme (and John Wanamaker after him) complained, 'Half the money I spend on advertising is wasted, and the trouble is I don't know which half.'"

So how about William Lever, Lord Leverhulme, who built a fortune in the soap business (with Sunlight Soap, and eventually Unilever)? Career advertising executive Jeremy Bullmore has looked into it, and wrote in the 2013 annual report of the British advertising and public relations firm WPP:

"There are at least a dozen minor variations of this sentiment that are confidently quoted and variously attributed but they all have in common the words 'advertising', 'half' and 'waste'. Google the line and you'll get about nine million results. … As it happens, there's little hard evidence that either William Lever or John Wanamaker (or indeed Ford or Penney) ever made such a remark. Certainly, neither the Wanamaker nor the Unilever archives contains any such reference. Yet for a hundred years or so, with no accredited source and no data to support it, this piece of folklore has survived and prospered."

Bullmore makes some compelling points. One is that even 100 years ago, it was widely believed that advertising could be usefully shaped and targeted. He writes:

"Retail advertising in the days of John Wanamaker was mostly placed in local newspapers and was mainly used to shift specific stock. An ad for neckties read, 'They're not as good as they look, but they're good enough. 25 cents.' The neckties sold out by closing time and so weren't advertised again. Waste, zero. Experiment was commonplace. Every element of an advertisement – size, headline, position in paper – was tested for efficacy and discarded if found wanting. Waste, if not eliminated, was ruthlessly hounded.

"Claude Hopkins published Scientific Advertising in 1923. In it, he writes, 'Advertising, once a gamble, has… become… one of the safest of business ventures. Certainly no other enterprise with comparable possibilities need involve so little risk.' Even allowing for adman's exuberance, it strongly suggests that, within Wanamaker's lifetime, there were very few advertisers who would have agreed that half their advertising money was wasted."

Further, Bullmore points out that people are more comfortable buying certain products because "everyone knows" about them, and "everyone knows" because even those who don't purchase the product have seen the ads.

"A common attribute of all successful, mass-market, repeat-purchase consumer brands is a kind of fame. And the kind of fame they enjoy is not targeted, circumscribed fame but a curiously indiscriminate fame that transcends its particular market sector. Coca-Cola is not just a famous soft drink. Dove is not just a famous soap. Ford is not just a famous car manufacturer. In all these cases, their fame depends on their being known to just about everyone in the world: even if they neither buy nor use. Show-biz publicists have understood this for ever. When The Beatles invaded America in 1964, their manager Brian Epstein didn't arrange a series of targeted interviews in fan magazines; he brokered three appearances on the Ed Sullivan Show with an audience for each estimated at 70 million. Far fewer than half of that 70 million will have subsequently bought a Beatles record or a Beatles ticket; but it seems unlikely that Epstein thought this extended exposure in any way wasted."

And of course, if large amounts of advertising are literally wasted, it seems as if we should be able to observe a substantial number of companies who cut their advertising budget in half and suffered no measurable decline in sales. (In fact, if half of advertising is always wasted, shouldn't the firm then keep cutting the advertising budget by half, and half again, and half again, and so on down to zero? Seems as if there must be a flaw in this logic!)

Of course, one of the major changes in advertising during the last decade or two is that print advertising has plummeted, while digital advertising has soared. More generally, digital technology has made it much more straightforward to create systematic variations in the quantity and qualities of advertising – and to track the results. Bullmore writes: "And given modern measurements and the growth of digital channels, it's easier than ever for advertising to be held accountable; to be seen to be more investment than cost."

But Bullmore is probably too optimistic here about how easy it is to hold advertising accountable, for a couple of reasons.

One problem is that the idea of targeting specific audiences for digital advertising is a lot more complicated in practice than it may seem at first. Judy Unger Franks of the Medill School of Journalism, Media, Integrated Marketing Communications at Northwestern University explained the issues in a short essay late last summer. She wrote:

"Programmatic Advertising enables marketers to make advertising investments to select individuals in a media audience as opposed to having to buy the entire audience. Advertisers use a wealth of Big Data to learn about each audience member to then determine whether that audience member should be served with an advertisement and at what price. This all happens in near real-time and advertisers can therefore make near real-time adjustments to their approach to optimize the return-on-investment of its advertising expenditures.

"In theory, Programmatic Advertising should solve the issue of waste. However, in our attempt to eliminate waste from the advertising value chain, we may have made things worse. We have unleashed a dark side to Programmatic Advertising that comes at a significant cost. Now, we know exactly which half of the money spent on advertising is wasted: it's the half that marketers must now spend on third parties who have inserted themselves into the Programmatic Advertising ecosystem just to keep our investments clean. …

"How bad is it? How much money are advertisers spending on this murky supply chain? The IAB (Interactive Advertising Bureau) answered this for us when they released their White Paper, 'The Programmatic Supply Chain: Deconstructing the Anatomy of a Programmatic CPM' in March of 2016. The IAB identified ten different value layers in the Programmatic ecosystem. I believe they are being overly generous by calling each a 'value' layer. When you need an ad blocking service to avoid buying questionable content and a separate verification service to make sure that the ad was viewable by a human, how is this valuable? When you add up all the costs associated with the ten different layers, they account for 55% of the cpm (cost-per-thousand) that an advertiser pays for a programmatic ad. This means that for every dollar an advertiser spends in Programmatic Advertising over half (55%) of that dollar never reaches the publisher. It falls into the hands of all the third parties that are required to feed the beast that is the overly complex Programmatic Advertising ecosystem. We now know which half of an advertising investment is wasted. It's wasted on infrastructure to prop up all those opportunities to buy individual audiences across the entire Programmatic Advertising supply chain."
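The quoted arithmetic reduces to a one-line calculation. In this sketch, the 55% combined intermediary share is the IAB figure cited in the quote; treating the ten layers as a single combined share is my own simplification.

```python
# Illustrative split of a $1.00 programmatic advertising dollar.
# The 55% combined intermediary share comes from the IAB figure quoted
# above; collapsing the ten "value layers" into one share is a simplification.
advertiser_dollar = 1.00
intermediary_share = 0.55  # ten supply-chain layers combined (IAB, 2016)

publisher_gets = advertiser_dollar * (1 - intermediary_share)
print(f"of each $1.00 spent, the publisher receives ${publisher_gets:.2f}")
# → of each $1.00 spent, the publisher receives $0.45
```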

In other words, by the time an advertiser has spent the money to do the targeting, and to make sure that the mechanisms to do the targeting work, and to follow up on the targeting, the costs can be so high that the reason for targeting in the first place is in danger of being lost.

The other interesting problem is that academic studies that have tried to measure the returns to targeted online advertising have run into severe problems. For a discussion, see "The Unfavorable Economics of Measuring the Returns to Advertising," by Randall A. Lewis and Justin M. Rao (Quarterly Journal of Economics, 130:4, November 2015, pp. 1941–1973, available here). They describe the old "half of what I spend in advertising is wasted" slogan in these terms (citations omitted):

"In the United States, firms annually spend about $500 per person on advertising. To break even, this expenditure implies that the universe of advertisers needs to causally affect $1,500–2,200 in annual sales per person, or about $3,500–5,500 per household. A question that has remained open over the years is whether advertising affects purchasing behavior to the degree implied by prevailing advertising prices and firms' gross margins …"
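The step from $500 of spending to $1,500–2,200 of required sales comes from dividing by a gross margin. A quick sketch, where the margin values are my own assumptions chosen to bracket the range the paper reports:

```python
# Back-of-the-envelope version of the break-even claim in the quote.
# The gross-margin values are illustrative assumptions, not from the paper:
# to break even, ad-driven sales must equal spend / margin.
ad_spend_per_person = 500  # annual US ad spend per person, from the quote

for margin in (0.33, 0.25):
    required_sales = ad_spend_per_person / margin
    print(f"at a {margin:.0%} gross margin, ads must causally drive "
          f"~${required_sales:,.0f} in annual sales per person")
```

With margins around 25–33%, the required sales land near the $1,500–2,000 end of the quoted range.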

The authors look at 25 studies of digital advertising. They find that the variations in what people buy and how much they spend are very large. Thus, it's theoretically possible that if advertising causes even a small number of people to "tip" from spending only a little on a product to being big spenders on it, the advertising can pay off for the advertiser. But in a statistical sense, given that people vary so much in their spending on products and change so much anyway, it's really hard to disentangle the effects of advertising from the changes in buying patterns that happen anyway. As the authors write: "[W]e are making the admittedly strong claim that most advertisers do not, and indeed some cannot, know the effectiveness of their advertising spend …"
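Why the disentangling is so hard can be illustrated with a stylized simulation. The spending distribution and effect size below are invented for illustration, not Lewis and Rao's data: the point is that when spending is heavily skewed, the sampling noise in average spending can be as large as a realistic ad effect, even with very large experiments.

```python
import numpy as np

# Stylized illustration of the measurement problem: all numbers invented.
rng = np.random.default_rng(0)
n = 100_000  # customers per experimental arm

# Customer spending is heavily skewed; a lognormal is one common stand-in.
control = rng.lognormal(mean=1.0, sigma=2.0, size=n)
treated = rng.lognormal(mean=1.0, sigma=2.0, size=n)

# True ad effect: 1% of treated customers "tip" into spending $50 more.
tipped = rng.random(n) < 0.01
treated[tipped] += 50
true_lift = 0.01 * 50  # $0.50 per customer on average

estimated_lift = treated.mean() - control.mean()
std_error = np.sqrt(treated.var() / n + control.var() / n)
print(f"true lift ${true_lift:.2f}, estimate ${estimated_lift:.2f} "
      f"(standard error ${std_error:.2f})")
# The standard error is of the same order as the true lift, so even a
# genuinely profitable ad effect is hard to distinguish from zero.
```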

Thus, the economics of spending on advertising remain largely unresolved, even in the digital age. Those interested in more on the economics of advertising might want to check my post on "The Case For and Against Advertising" (November 15, 2012).

Early Examples of Randomization in the Social Sciences

Randomization is one of the most persuasive techniques for determining cause and effect. Half of a certain group gets a treatment; half doesn't. Compare. If the groups were truly chosen at random, and the treatment was truly the only difference between them, and the differences in outcomes are meaningful, and the samples are large enough for drawing statistically meaningful conclusions, then the differences can tell you something about causes.
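The procedure described above can be sketched in a few lines. This is a hypothetical illustration: the population, group sizes, and the +1 treatment effect are all invented.

```python
import random

# Minimal sketch of a randomized experiment: random assignment, apply a
# known treatment effect to one arm, then compare group means.
random.seed(42)

# A hypothetical population with noisy baseline outcomes.
outcomes = [random.gauss(10, 2) for _ in range(1000)]

# Random assignment: shuffle the subjects, then split them in half.
random.shuffle(outcomes)
control = outcomes[:500]
treated = [x + 1 for x in outcomes[500:]]  # known treatment effect of +1

def mean(xs):
    return sum(xs) / len(xs)

estimate = mean(treated) - mean(control)
print(f"estimated treatment effect: {estimate:+.2f}")  # close to the true +1
```

Because assignment is random, neither arm is systematically different at baseline, so the difference in means recovers the built-in effect up to sampling noise.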

Economists have surged into randomized experimental work in the last few decades. From the 1970s, up into the 1990s, such work was often focused on social policies, with experiments on different kinds of health insurance, job training and job search, changes in welfare rules, early childhood education, and others. More recently, such work has become very prominent in the development economics literature, as well as on a variety of focused economics topics like how incentive pay affects work or how charitable contributions could be increased. Running experiments is now part of the common tool-kit (for a small taste, see here, here, here, and the three-paper symposium in the Fall 2017 issue of the Journal of Economic Perspectives on "From Experiments to Economic Policy").

Thus, economists and other social scientists may find it useful to keep some historical examples of randomization near at hand. Julian C. Jamison provides a trove of such examples in "The Entry of Randomized Assignment into the Social Sciences" (World Bank Policy Research Working Paper 8062, May 2017).

Jamison has a run-through of the classic examples of randomization over the centuries, which were often in a medical context. For example, he quotes the correspondence between the poet/writers Petrarch and Boccaccio in 1364, in which Petrarch wrote:

"I solemnly affirm and believe, if a hundred or a thousand men of the same age, same temperament and habits, together with the same surroundings, were attacked at the same time by the same disease, that if one half followed the prescriptions of the doctors of the variety of those practicing at the present day, and that the other half took no medicine but relied on Nature's instincts, I have no doubt as to which half would escape."

Or Jan Baptist van Helmont, a doctor writing in the first half of the 1600s, proposed that the two "cures" of the day–bloodletting vs. induced vomiting/defecation–be tested in this way:

"Let us take out of the Hospitals, out of the Camps, or from elsewhere, 200 or 500 poor People, that have Fevers, Pleurisies, etc. Let us divide them in halfes, let us cast lots, that one half of them may fall to my share, and the other to yours; I will cure them without bloodletting… we shall see how many Funerals both of us shall have."

There are cases from the 1600s, 1700s, and 1800s of randomization as a way of testing the effectiveness of treatments for scurvy or smallpox, or of salt-based homeopathic treatments. Perhaps one of the best-known experiments was done by Louis Pasteur in 1881 to test his vaccine for sheep anthrax. Jamison writes:

"He was attempting to publicly prove that he had developed an animal anthrax vaccine (which may not have been his to begin with), so he asked for 60 sheep and split them into three groups: 10 would be left entirely alone; 25 would be given his vaccine and then exposed to a deadly strain of the disease; and 25 would be untreated but also exposed to the virus …. [A]ll of the exposed but untreated sheep died, while all of the vaccinated sheep survived healthily."

There are LOTS of examples. But in Jamison's telling, the earliest example of randomization in an experiment within a subject conventionally thought of as economics was actually done by two non-economists working on game theory, psychologist Richard Atkinson and polymath Patrick Suppes, in work published in 1957:

"Atkinson and Suppes (1957), also not economists by training, analyzed different learning models in two-person zero-sum games, and they explicitly 'randomly assigned' pairs of subjects into one of three different treatment groups. This is the earliest instance of random assignment in experimental economics, for purposes of comparing treatments, that has been found to date."

As to broader social experiments about the effects of policy interventions, the first one goes back to the 1940s:

"The first clearly and individually randomized social experiment was the Cambridge-Somerville youth study. This was devised by Richard Clarke Cabot, a physician and pioneer in advancing the field of social work. Running from 1942-45, the study randomized approximately 500 young boys who were at risk for delinquency into either a control group or a treatment group, the latter receiving counseling, medical treatment, and tutoring. Results (Powers and Witmer 1951) were highly disappointing, with no differences reported."

Back in high school, we had to design and carry out our own experiment in a psychology class. I wrote up the same message (a request for some saccharine and meaningless information) on two sets of postcards. One of the sets of postcards was typed; the other was handwritten. I chose the first 60 households in the local phone directory, and sent the postcards out at random. My working hypothesis was that the typewritten notes would get a higher response (perhaps because they would look more "professional"), but in fact the handwritten notes got a much higher response (probably because they reeked of high school student). Even at the time, it felt like a silly little experiment to me. But the result felt powerful, nonetheless.

The Modern Shape-Up Labor Market

I'm taking some family vacation the next 10 days or so. The lake country of northern Minnesota calls. My wife says that I get a distinctively blissful expression when I'm sitting in the back of a canoe with a paddle in my hand. While I'm gone, I've prescheduled a string of posts that look at various things I've been reading or have run across in the last few months that are at least loosely connected to my usual themes of economics and academia.

It’s not unusual to hear predictions that in the future, we will all have opportunities to run our own companies, or that jobs will become a series of freelance contracts. Here’s a representative comment from business school professor Arun Sundararajan (“The Future of Work,” Finance & Development, June 2017, p. 7-11):

“To avoid further increases in the income and wealth inequality that stem from the sustained concentration of capital over the past 50 years, we must aim for a future of crowd-based capitalism in which most of the workforce shifts from a full-time job as a talent or labor provider to running a business of one—in effect a microentrepreneur who owns a tiny slice of society’s capital.”

To me, this description is reminiscent of what used to be called the “shape-up” system of hiring, described by journalist Malcolm Johnson in his Pulitzer-prize winning articles about crime on the docks of New York City in the late 1940s (Crime on the Labor Front, quotation from pp. 133-35), which is perhaps best-remembered today for how it was depicted in the 1954 movie “On the Waterfront.” Johnson described the process for a longshoreman of seeking and getting a job in this way:

“The scene is any pier along New York’s waterfront. At a designated hour, the longshoremen gather in a semicircle at the entrance to the pier. They are the men who load and unload the ships. They are looking for jobs and as they stand there in a semicircle their eyes are fixed on one man. He is the hiring stevedore and he stands alone, surveying the waiting men. At this crucial moment he possesses the crucial power of economic life or death over them and the men know it. Their faces betray that knowledge in tense anxiety, eagerness, and fear. They know that the hiring boss, a union man like themselves, can accept them or reject them at will. He can hire them or ignore them, for any reason or for no reason at all. Now the hiring boss moves among them, choosing the man he wants, passing over others. He nods or points to the favored ones or calls out their names, indicating that they are hired. For those accepted, relief and joy. The pinched faces of the others reflect bleak disappointment, despair. …
“Under the shape-up, the longshoreman never knows from one day to the next whether he has a job or not. Under the shape-up, he may be hired today and rejected tomorrow, or hired in the morning and turned away in the afternoon. There is no security, no dignity, and no justice in the shape-up. … The shape-up fosters fear. Fear of not working. Fear of incurring the displeasure of the hiring boss.”

You can call it “crowd-based capitalism,” but to a lot of people, the idea of “running a business of one” does not sound attractive. Many people don’t want to apply for a new job every day, or every week, or every month. They don’t want to be a “microentrepreneur who owns a tiny slice of society’s capital.” They don’t want to be treated as interchangeable cogs, subject to the discretionary power of a modern hiring boss. All workers know that others have the power of economic life and death over them, but many prefer not to have that fact rubbed in their faces every day.

It seems to me that a lot of the concern about the modern labor market isn't over whether the wage rate is going up a percentage point or two faster each year. It's about a sense that careers which build skills are harder to find, and that the labor market for many people feels like a modern version of the shape-up.

The Chicken Paper Conundrum

Harald Uhlig delivered a talk on "Money and Banking: Some DSGE Challenges" (video here, slides here) at the Nobel Symposium on Money and Banking recently held in Stockholm. He introduces the "Chicken Paper Conundrum," which he attributes to Ed Prescott.

I've definitely read academic papers, as well as listened to policy discussions, which follow this pattern.

Homage: I ran across this in the middle of two long blog posts by John Cochrane at his Grumpy Economist blog (here and here), which summarize and give links to many of the papers given at this conference by leading macroeconomists. Many have links to video, slides, and sometimes full papers. If you are interested in topics on the cutting edge of macroeconomics, it's well worth your time.