Some Facts on Global Current Account Balances

I’m the sort of joyless and soul-killing conversationalist who likes to use facts as the background for arguments. In that spirit, here’s an overview of some facts about global trade balances, taken from the IMF External Sector Report: Tackling Global Imbalances and Rising Trade Tensions (July 2018).

Here’s a list of the 15 countries with the largest trade surpluses and the 15 with the largest trade deficits, as measured by the current account balance. It also shows these magnitudes as a share of world GDP and a share of the country’s GDP (for the record, I’ve edited the table by cutting out the columns for 2014-2016).

A few facts jump out at me:

1) The US has the largest trade deficit in the world in absolute terms. However, trade deficits are larger as a share of the national economy in a number of other countries, including the UK, Canada, Turkey, Argentina, Algeria, Egypt, Lebanon, Pakistan, and Oman.

2) Germany has by far the largest trade surplus in the world in absolute terms. Indeed, the trade surplus for the euro-area as a whole (not shown in the table) is $442 billion–very similar in size to the US trade deficit.

3) China, which seems to be arch-enemy #1 for trade at present, is third-highest in absolute size of trade surplus, well behind Germany and Japan. Measured as a share of national GDP, China’s trade surplus is actually the smallest of the top 15 trade surplus countries listed here.

4) If you subscribe to the economically illiterate view that trade surpluses are a measure of a nefarious ability to trade unfairly and exploit the rest of the world, while trade deficits are a sign of victimization by the beliefs of naive and overly trusting free trade fanatics, you need to match those beliefs to the national patterns shown here. That is, you need to believe that the 15 countries at the top are unfairly hustling the rest of the world economy, while the 15 at the bottom are paying the price.

5) The IMF report also emphasizes that there was a major shift in the configuration of global trade balances back around 2013, which has continued since then: trade surpluses and deficits are more concentrated in advanced economies, and less in the rest of the world economy.

"Global surpluses and deficits have become increasingly concentrated in AEs [advanced economies], as China and oil exporters have seen their current account surpluses narrow and the deficits of some EMDEs [emerging market and developing economies] (for example, Brazil, India, Indonesia, Mexico, South Africa) have shrunk. Key drivers of this reconfiguration were the sharp drop earlier this decade in oil prices, which have recovered somewhat after bottoming out in 2016, and the gradual tightening of global financing conditions reflecting prospects for monetary policy normalization in the United States. Also at work have been asymmetries in demand recovery and the associated policy responses in systemic economies … After 2013, higher or persistently large surpluses in key advanced economies (for example, Germany, Japan, the Netherlands) were underpinned by relatively weaker domestic demand, constrained by fiscal consolidation efforts—necessary in some cases, given compressed fiscal space. Meanwhile, higher or persistent current account deficits in other AEs (United Kingdom, United States) reflected a stronger recovery in domestic demand, supported by some recent fiscal easing. Meanwhile, the narrowing of China’s underlying current account surplus was supported by a marked relaxation of fiscal and credit policies, masking lingering structural problems and causing a buildup of domestic vulnerabilities. These asymmetries in demand strength have also led to differences in monetary policy (as seen by the evolution of longer term nominal bond yields) and currencies."

In passing, it’s worth noticing that the IMF economists explain these shifts in current account surpluses and deficits without reference to trade becoming more or less fair–which makes sense, because there were no major changes in the rules over this time. Instead, they focus on demand in different countries, along with fiscal and monetary policy choices.

In fact, China’s current account surplus has dropped dramatically in the last decade, from about 10% of GDP in 2007 to 1.4% in the table above. The IMF writes about China:

"The CA [current account] surplus continued to decline, reaching 1.4 percent of GDP in 2017 … about 0.4 percentage points lower than in 2016. This mainly reflects a shrinking trade balance (driven by high import volume growth), notwithstanding REER [real effective exchange rate] depreciation. Viewed from a longer perspective, the CA surplus declined substantially relative to the peak of about 10 percent of GDP in 2007, reflecting strong investment growth, REER appreciation, weak demand in major advanced economies, and, more recently, a widening of the services deficit …"

Conversely, the US current account deficit has declined from about 6% of GDP back in 2006 down to about 2.5% of GDP since 2014.

It is of course not a coincidence that the peak of China’s trade surpluses coincides with the peak of US trade deficits, back around 2006-2007. China’s exports and trade surplus exploded after China entered the World Trade Organization in 2001, much faster than anyone (including China’s government) expected. China’s exports of goods and services were 20.3% of China’s GDP in 2001, and then took off to hit 36% of GDP in 2006, but since have fallen back to 19.7% of China’s GDP in 2017. Conversely, the US economy was inhaling imports during its credit-led housing boom back in about 2005-2006.

Maybe there was a case for seeking to limit disruption from China’s exports during the "China shock" period from 2001-2007 or so, when China’s exports and trade surplus exploded in size. But it’s now a decade later. And both China’s trade surpluses and America’s trade deficits have dramatically declined during that decade, well before any shots were fired in President Trump’s trade war.

The Emergence and Erosion of the Retail Sales Tax

About 160 countries around the world, including all the other high-income countries of the world, use a value-added tax. The US has no value-added tax, but 45 states and several thousand cities use a sales tax as an alternative method of taxing consumption. John L. Mikesell and Sharon N. Kioko provide a useful overview of the issues in "The Retail Sales Tax in a New Economy," written for the 7th Annual Municipal Finance Conference, which was held on July 16-17, 2018, at the Brookings Institution. Video of the conference presentation of the paper, with comments and discussion, is available here.

Here’s a short summary of the emergence and erosion of the retail sales tax (footnotes omitted):

"The American retail sales tax emerged from a desperation experiment in Mississippi in the midst of the Great Depression. Revenue from the property tax, the largest single source of state tax revenue at the time, collapsed, falling by 11.4 percent from 1927 to 1932 and by another 16.8 percent from 1932 to 1934. State revenues could not cover service obligations or provide expected assistance to local governments. Mississippi (followed by West Virginia) showed that retail sales taxes could produce immediate cash collections, even in low-income jurisdictions. Other states paid attention. In 1933, eleven other states adopted the tax (two let the tax expire almost immediately). By 1938, twenty-two states (plus Hawaii, not yet a state) were collecting the tax; six others had also imposed the tax for a short time but had let it expire. …

"The national total retail sales tax collections exceeded the collections from every other state tax from 1947 through 2001. It was also the largest tax producer in 2003 and 2004 (years in which individual income tax revenue was still impacted by the 2001 recession), but it was surpassed by state individual income tax revenues in other years since 2001. … By fiscal 2016, total state individual income tax collections exceeded $345 billion, compared to over $288 billion for state retail sales taxes. However, those national totals conceal the continuing dominance of the retail sales taxes in a number of states …

A major and ongoing issue with US sales taxes is that, from the start, they mostly did not cover services. Thus, as the US has shifted to a service-based economy, the share of consumer spending going to goods covered by the sales tax has diminished. As the base of the sales tax has diminished, states have gradually raised the rate of the sales tax so that it continues to bring in a similar proportion of overall state revenue. This dynamic of higher sales tax rates imposed on a shrinking base is not sensible. (The figures quoted below look only at the 45 states with sales taxes.)

"[Here is] the history of mean retail sales tax breadth (implicit tax base / state personal income) across the states from 1970 to 2016. The record is one of almost constant decline, from 49.0 percent in 1970 to 37.3 percent in 2016. … The typical state retail sales tax base has narrowed as a share of the economy of the state over the years and this has meant that, in order for states to maintain the place of their sales tax in their revenue systems, they have been required to gradually increase the statutory tax rate they apply to that base. … [L]ittle good can be said about a narrow base / high statutory rate revenue policy. …

"Unfortunately, many states got off to a bad start when they initially adopted their sales taxes and excluded all or almost all household service purchases from the tax base and it has proven to be difficult to correct that initial error. Extending the retail sales tax to include at least some services is a perennial topic whenever states are seeking additional revenue or considering reforms in their tax systems. … While the current typical sales tax base is around 20 percent narrower in 2016 compared with 1970, the base with all services added is actually about 11 percent broader, and the base without health care and education services is only 8 percent below its 1970 level. …"
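
The arithmetic behind the narrow-base / high-rate dynamic is simple to sketch. Using the mean breadth figures quoted above (49.0 percent in 1970, 37.3 percent in 2016), a short calculation shows how much the statutory rate must rise just to stand still; the 4 percent starting rate is a hypothetical for illustration, not a figure from the paper:

```python
# Sketch of the narrow-base / high-rate dynamic: revenue = rate x base,
# so if the base (as a share of state personal income) narrows, the
# statutory rate must rise proportionally to keep revenue constant.

def rate_needed(old_rate, old_breadth, new_breadth):
    """Statutory rate that keeps revenue constant (as a share of state
    personal income) when the breadth of the tax base changes."""
    return old_rate * old_breadth / new_breadth

breadth_1970 = 0.490  # mean implicit tax base / state personal income, 1970
breadth_2016 = 0.373  # same measure, 2016

# A hypothetical state with a 4 percent rate in 1970 would need roughly
# a 5.3 percent rate in 2016 to raise the same share of income:
r = rate_needed(0.04, breadth_1970, breadth_2016)
print(f"rate needed in 2016: {r:.4f}")  # about 0.0525
```

In other words, the roughly one-quarter narrowing of the base forces a roughly one-third increase in the statutory rate, which is how rates drift toward the "danger zone" discussed later in the paper.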

Another perennial sales tax issue is that legislatures like to list items that will be exempt from the sales tax, or declare tax "holidays" during which sales tax doesn’t need to be paid on items like back-to-school supplies, energy-saving appliances, emergency preparedness supplies, and other items. These policies are often justified as helping those with low incomes, but any policy which cuts taxes for 100% of the population in the name of helping the 15-20% of the population that is poor has a mismatch between its stated intentions and its reality. Several states have taken a much more sensible course: if the goal is to help poor people, then give poor people a tax credit, based on their income, so that sales taxes they pay can be rebated to them. Mikesell and Kioko write:

"The problems with [a sales tax] exemption are well-known – absence of targeting, high revenue loss, additional cost of compliance and administration, distortion of consumer behavior, reward for political support, etc. – and it is particularly distressing in light of the fact that the credit/rebate system normally operated through the state income tax provides an alternative relief approach that eliminates virtually all these difficulties. Currently five states (Maine, Kansas, Oklahoma, Idaho, and Hawaii) operate some form of sales tax credit that returns to families some or all of sales tax paid on purchases, giving greatest relative relief to lowest income families and lesser (or no) relief to more affluent families. … The credit / rebate system promises efficiency, equity, and less revenue loss. Its apparent unpopularity is somewhat surprising, particularly in light of the spread of the earned income tax credit program, a program with some similar characteristics."
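
The targeting point can be made concrete with a stylized calculation. Everything here is invented for illustration (the group sizes, spending levels, and 6 percent rate are hypothetical, not from the paper); the point is only that a blanket exemption forgoes revenue on everyone's purchases, while a credit concentrates relief on the intended group:

```python
# Stylized comparison of a blanket exemption versus a targeted credit.
# All numbers are hypothetical, chosen only to illustrate the mechanism.

RATE = 0.06  # hypothetical statutory sales tax rate

# (group, number of households, taxable spending per household)
groups = [
    ("low income", 20, 3000),    # the ~20% the exemption is meant to help
    ("everyone else", 80, 5000),
]

# Blanket exemption: tax is forgone on every household's purchases.
exemption_cost = sum(n * spend * RATE for _, n, spend in groups)

# Targeted credit: only low-income households get their tax rebated.
credit_cost = sum(n * spend * RATE for name, n, spend in groups
                  if name == "low income")

share_to_poor = credit_cost / exemption_cost
print(f"revenue cost of blanket exemption: {exemption_cost:,.0f}")
print(f"revenue cost of targeted credit:   {credit_cost:,.0f}")
print(f"share of exemption relief reaching the poor: {share_to_poor:.0%}")
```

Under these made-up numbers, only about an eighth of the revenue forgone by the exemption actually reaches the low-income group; the credit delivers the same relief to them at a fraction of the cost.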

Yet another perennial sales tax issue is that the logic of the tax says it should apply to goods and services purchased by households, not to business purchases. If a sales tax is applied to business purchases, it raises a risk of "pyramiding," where one business pays sales tax on equipment and supplies bought from another business, and then consumers also pay sales tax on the finished product. If there are layers of businesses buying from each other along the supply chain, the sales tax can be imposed on a given product multiple times. Again, Mikesell and Kioko write:

"American retail sales taxes have not entirely gotten over the confusion that the tax is not on finished goods but rather should be on goods (and services) purchased for household consumption. The reality of sales taxation is that a considerable share of the overall sales tax base, roughly 40 percent on average, consists of input purchases by businesses. The tax on those purchases embeds in prices charged by those businesses, meaning that this share of the tax is effectively hidden from households, allowing legislatures to claim a statutory rate that is considerably less than the true rate borne by individuals. … The pattern does show a considerable movement toward removal of these business input purchases from the tax base, thus reducing the prospects for pyramiding, hidden tax burdens, distortions, and discrimination. However, states continue to tax purchases made by other business activities. Lawmakers are inclined to try to pick favorites for tax relief and appear to like glitz. Targeted preferences for motion pictures, certain sorts of research and development, or bids for the Super Bowl are attractive to politicians because they provide identifiable credit and possibly ribbon cutting not available with general exemption."
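
The pyramiding mechanism is easy to sketch numerically. The supply chain, the values added at each stage, and the 6 percent rate below are all hypothetical; the point is that taxing business inputs at every layer compounds into an effective rate on final consumption well above the statutory rate, and the gap between the two is exactly the "hidden" tax the quote describes:

```python
# Sketch of sales tax "pyramiding": the same statutory rate applied at
# every layer of a supply chain compounds into a much higher effective
# rate on the final consumer. The chain and values added are hypothetical.

RATE = 0.06  # hypothetical statutory sales tax rate

def pyramided_price(values_added, rate=RATE):
    """Final consumer price when every buyer in the chain, including the
    consumer, pays sales tax on its purchase (i.e., full pyramiding)."""
    price = 0.0
    for v in values_added:
        price = price * (1 + rate) + v  # buy taxed inputs, then add value
    return price * (1 + rate)           # consumer pays tax at retail

values_added = [100, 50, 50]            # three hypothetical stages
total_value = sum(values_added)         # 200 of pre-tax value

retail_only = total_value * (1 + RATE)  # tax collected once, at retail
with_pyramiding = pyramided_price(values_added)
effective_rate = with_pyramiding / total_value - 1

print(f"retail-only price: {retail_only:.2f}")   # 212.00
print(f"pyramided price:   {with_pyramiding:.2f}")
print(f"effective tax rate under pyramiding: {effective_rate:.1%}")
```

With these numbers, a 6 percent statutory rate produces an effective rate of roughly 14 percent on the final product, even though the legislature can still truthfully claim the rate is 6 percent.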

A more recent issue is how jurisdictions with a sales tax can react to the rise of online sales from other jurisdictions. There seem to be several models developing. First, there is a "South Dakota" model of collecting sales tax from companies physically located in other states if they have total sales above a certain minimum level to South Dakota residents. The US Supreme Court upheld this law as constitutional this summer. Other states that have adopted this model include Indiana, Iowa, North Dakota, Massachusetts, Maine, Mississippi, Wyoming, and Alabama.

An alternative "Colorado" model requires sellers in a different state to notify both Colorado buyers and the Colorado tax authorities that state sales tax is due–but does not seek to collect the sales tax from those out-of-state sellers. Other states that have enacted this approach are Louisiana, Pennsylvania, Vermont, and Washington.

Yet another approach addresses the question of what happens when the producer is in one state, the buyer is in another state, and the "market facilitator" through which an online transaction is carried out is in still another state. This approach seeks to make the market facilitator based in one state responsible for collecting sales taxes on behalf of other states. Alabama, Arizona, Oklahoma, Pennsylvania, Rhode Island, Washington, and Minnesota have taken this approach.

The lurking difficulty with the lower base and higher rates for the retail sales tax is that, at some point, the tax rate gets high enough that it becomes lucrative to find ways to avoid paying it.

"The problem is that there has been a consensus, heavily based on pre-value-added tax experience in Scandinavian countries with high-rate retail sales taxes, that retail sales tax rates much above 10 percent are likely to produce compliance issues so difficult that the tax becomes almost impossible to administer. American retail sales tax rates are drifting ever closer to that danger level, particularly when local governments add their own rates to the rate levied by the state. … [S]tate statutory rates have drifted upward since 1970. Rates of 6 and 7 percent are no longer rare and a narrowing base will require more rate increases if the position of the sales tax is to be preserved (or expanded) in state revenue systems. Rates are moving toward the danger zone in which significant non-compliance begins to become more attractive and, unless states can manage the narrowing base problem, a compliance gap may become a significant challenge for state tax administrators in the first part of the 21st century."

Outside the US, where value-added taxes are high, there has been a spread of what is called "sales suppression software," under names like "phantomware" and "zappers." Basically, this software cooks the accounting books to make sales look lower, either by substituting lower prices for the higher price that was actually charged, or by reducing the number of transactions. This software takes care of other issues too, by producing fake inventory records if needed, or by running certain transactions through international cloud-based services that will be more difficult to track. Tax administration has become increasingly based on electronic records, so sales suppression software may turn into a real problem.

Albert Jay Nock on the Three Rules of Editorial Policy

For 31 years, I’ve been editing the Journal of Economic Perspectives. At the most basic level, editing is about pushing the author to have a point in the first place, and to make it clearly. Sounds simple, perhaps? On complex subjects, meeting those criteria can be a high hurdle to clear.
Albert Jay Nock, in his 1943 Memoirs of a Superfluous Man, tells a story along these lines in his description of the editorial policy at a magazine he had edited called The Freeman (p. 172):

"In one way, our editorial policy was extremely easy-going, and in another way it was as unbending as a ramrod. I can explain this best by an anecdote. One day Miss X steered in a charming young man who wanted to write for us. I took a liking to him at once, and kept him chatting for quite a while. When we came down to business, he diffidently asked what our policy was, and did we have any untouchable sacred cows. I said we certainly had, we had three of them, as untouchable and sacred as the Ark of the Covenant. He looked a bit flustered and asked what they were.

"The first one," I said, "is that you must have a point. Second, you must make it out. The third one is that you must make it out in eighteen-carat, impeccable, idiomatic English."

"But is that all?"

"Isn’t it enough for you?"

"Why, yes, I suppose so, but I mean, is that all the editorial policy you have?"

"As far as I know, it is," I said, rising. "Now you run along home and write us a nice piece on the irremissibility of postbaptismal sin, and if you can put it over those three jumps, you will see it in print. Or if you would rather do something on a national policy of strangling all the girl-babies at birth, you might do that—glad to have it."

"The young man grinned and shook hands warmly. We got splendid work out of him. As a matter of fact, at one time or another we printed quite a bit of stuff that none of us believed in, but it all conformed to our three conditions, it was respectable and worth consideration. Ours was old-school editing, no doubt, but in my poor judgement it made a far better paper than more stringent methods have produced in my time."

I especially like the comment in the closing paragraph about how they "printed quite a bit of stuff that none of us believed in." For editors, agreeing with authors is overrated and in fact unnecessary.

Summer 2018 Journal of Economic Perspectives Available On-line

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available on-line, from the current issue back to the first issue. Here, I’ll start with the Table of Contents for the just-released Summer 2018 issue, which in the Taylor household is known as issue #125. Below that are abstracts and direct links for all of the papers. I may blog more specifically about some of the papers in the next week or two, as well.

_________________

Symposium: Macroeconomics a Decade after the Great Recession

"What Happened: Financial Factors in the Great Recession," by Mark Gertler and Simon Gilchrist
At the onset of the recent global financial crisis, the workhorse macroeconomic models assumed frictionless financial markets. These frameworks were thus not able to anticipate the crisis, nor to analyze how the disruption of credit markets changed what initially appeared like a mild downturn into the Great Recession. Since that time, an explosion of both theoretical and empirical research has investigated how the financial crisis emerged and how it was transmitted to the real sector. The goal of this paper is to describe what we have learned from this new research and how it can be used to understand what happened during the Great Recession. In the process, we also present some new empirical work. We argue that a complete description of the Great Recession must take account of the financial distress facing both households and banks and, as the crisis unfolded, nonfinancial firms as well. Exploiting both panel data and time series methods, we analyze the contribution of the house price decline, versus the banking distress indicator, to the overall decline in employment during the Great Recession. We confirm a common finding in the literature that the household balance sheet channel is important for regional variation in employment. However, we also find that the disruption in banking was central to the overall employment contraction.
Full-Text Access | Supplementary Materials

"Finance and Business Cycles: The Credit-Driven Household Demand Channel," by Atif Mian and Amir Sufi
What is the role of the financial sector in explaining business cycles? This question is as old as the field of macroeconomics, and an extensive body of research conducted since the Global Financial Crisis of 2008 has offered new answers. The specific idea put forward in this article is that expansions in credit supply, operating primarily through household demand, have been an important driver of business cycles. We call this the credit-driven household demand channel. While this channel helps explain the recent global recession, it also describes economic cycles in many countries over the past 40 years.
Full-Text Access | Supplementary Materials

"Identification in Macroeconomics," by Emi Nakamura and Jón Steinsson
This paper discusses empirical approaches macroeconomists use to answer questions like: What does monetary policy do? How large are the effects of fiscal stimulus? What caused the Great Recession? Why do some countries grow faster than others? Identification of causal effects plays two roles in this process. In certain cases, progress can be made using the direct approach of identifying plausibly exogenous variation in a policy and using this variation to assess the effect of the policy. However, external validity concerns limit what can be learned in this way. Carefully identified causal effects estimates can also be used as moments in a structural moment matching exercise. We use the term "identified moments" as a short-hand for "estimates of responses to identified structural shocks," or what applied microeconomists would call "causal effects." We argue that such identified moments are often powerful diagnostic tools for distinguishing between important classes of models (and thereby learning about the effects of policy). To illustrate these notions we discuss the growing use of cross-sectional evidence in macroeconomics and consider what the best existing evidence is on the effects of monetary policy.
Full-Text Access | Supplementary Materials

"The State of New Keynesian Economics: A Partial Assessment," by Jordi Galí
In August 2007, when the first signs emerged of what would come to be the most damaging global financial crisis since the Great Depression, the New Keynesian paradigm was dominant in macroeconomics. Ten years later, tons of ammunition has been fired against modern macroeconomics in general, and against dynamic stochastic general equilibrium models that build on the New Keynesian framework in particular. Those criticisms notwithstanding, the New Keynesian model arguably remains the dominant framework in the classroom, in academic research, and in policy modeling. In fact, one can argue that over the past ten years the scope of New Keynesian economics has kept widening, by encompassing a growing number of phenomena that are analyzed using its basic framework, as well as by addressing some of the criticisms raised against it. The present paper takes stock of the state of New Keynesian economics by reviewing some of its main insights and by providing an overview of some recent developments. In particular, I discuss some recent work on two very active research programs: the implications of the zero lower bound on nominal interest rates and the interaction of monetary policy and household heterogeneity. Finally, I discuss what I view as some of the main shortcomings of the New Keynesian model and possible areas for future research.
Full-Text Access | Supplementary Materials

"On DSGE Models," by Lawrence J. Christiano, Martin S. Eichenbaum and Mathias Trabandt
The outcome of any important macroeconomic policy change is the net effect of forces operating on different parts of the economy. A central challenge facing policymakers is how to assess the relative strength of those forces. Economists have a range of tools that can be used to make such assessments. Dynamic stochastic general equilibrium (DSGE) models are the leading tool for making such assessments in an open and transparent manner. We review the state of mainstream DSGE models before the financial crisis and the Great Recession. We then describe how DSGE models are estimated and evaluated. We address the question of why DSGE modelers—like most other economists and policymakers—failed to predict the financial crisis and the Great Recession, and how DSGE modelers responded to the financial crisis and its aftermath. We discuss how current DSGE models are actually used by policymakers. We then provide a brief response to some criticisms of DSGE models, with special emphasis on criticism by Joseph Stiglitz, and offer some concluding remarks.
Full-Text Access | Supplementary Materials

"Evolution of Modern Business Cycle Models: Accounting for the Great Recession," by Patrick J. Kehoe, Virgiliu Midrigan and Elena Pastorino
Modern business cycle theory focuses on the study of dynamic stochastic general equilibrium (DSGE) models that generate aggregate fluctuations similar to those experienced by actual economies. We discuss how these modern business cycle models have evolved across three generations, from their roots in the early real business cycle models of the late 1970s through the turmoil of the Great Recession four decades later. The first generation models were real (that is, without a monetary sector) business cycle models that primarily explored whether a small number of shocks, often one or two, could generate fluctuations similar to those observed in aggregate variables such as output, consumption, investment, and hours. These basic models disciplined their key parameters with micro evidence and were remarkably successful in matching these aggregate variables. A second generation of these models incorporated frictions such as sticky prices and wages; these models were primarily developed to be used in central banks for short-term forecasting purposes and for performing counterfactual policy experiments. A third generation of business cycle models incorporates the rich heterogeneity of patterns from the micro data. A defining characteristic of these models is not the heterogeneity among model agents they accommodate nor the micro-level evidence they rely on (although both are common), but rather the insistence that any new parameters or feature included be explicitly disciplined by direct evidence. We show how two versions of this latest generation of modern business cycle models, which are real business cycle models with frictions in labor and financial markets, can account, respectively, for the aggregate and the cross-regional fluctuations observed in the United States during the Great Recession.
Full-Text Access | Supplementary Materials

"Microeconomic Heterogeneity and Macroeconomic Shocks," by Greg Kaplan and Giovanni L. Violante
In this essay, we discuss the emerging literature in macroeconomics that combines heterogeneous agent models, nominal rigidities, and aggregate shocks. This literature opens the door to the analysis of distributional issues, economic fluctuations, and stabilization policies—all within the same framework. In response to the limitations of the representative agent approach to economic fluctuations, a new framework has emerged that combines key features of heterogeneous agents (HA) and New Keynesian (NK) economies. These HANK models offer a much more accurate representation of household consumption behavior and can generate realistic distributions of income, wealth, and, albeit to a lesser degree, household balance sheets. At the same time, they can accommodate many sources of macroeconomic fluctuations, including those driven by aggregate demand. In sum, they provide a rich theoretical framework for quantitative analysis of the interaction between cross-sectional distributions and aggregate dynamics. In this article, we outline a state-of-the-art version of HANK together with its representative agent counterpart, and convey two broad messages about the role of household heterogeneity for the response of the macroeconomy to aggregate shocks: 1) the similarity between the Representative Agent New Keynesian (RANK) and HANK frameworks depends crucially on the shock being analyzed; and 2) certain important macroeconomic questions concerning economic fluctuations can only be addressed within heterogeneous agent models.
Full-Text Access | Supplementary Materials

Symposium: Incentives in the Workplace

"Compensation and Incentives in the Workplace," by Edward P. Lazear
Labor is supplied because most of us must work to live. Indeed, it is called "work" in part because without compensation, the overwhelming majority of workers would not otherwise perform the tasks. The theme of this essay is that incentives affect behavior and that economics as a science has made good progress in specifying how compensation and its form influences worker effort. This is a broad topic, and the purpose here is not a comprehensive literature review on each of many topics. Instead, a sample of some of the most applicable papers is discussed with the goal of demonstrating that compensation, incentives, and productivity are inseparably linked.
Full-Text Access | Supplementary Materials

"Nonmonetary Incentives and the Implications of Work as a Source of Meaning," by Lea Cassar and Stephan Meier
Empirical research in economics has begun to explore the idea that workers care about nonmonetary aspects of work. An increasing number of economic studies using survey and experimental methods have shown that nonmonetary incentives and nonpecuniary aspects of one’s job have substantial impacts on job satisfaction, productivity, and labor supply. By drawing on this evidence and relating it to the literature in psychology, this paper argues that work represents much more than simply earning an income: for many people, work is a source of meaning. In the next section, we give an economic interpretation of meaningful work and emphasize how it is affected by the mission of the organization and the extent to which job design fulfills the three psychological needs at the basis of self-determination theory: autonomy, competence, and relatedness. We point to the evidence that not everyone cares about having a meaningful job and discuss potential sources of this heterogeneity. We sketch a theoretical framework to start to formalize work as a source of meaning and think about how to incorporate this idea into agency theory and labor supply models. We discuss how workers’ search for meaning may affect the design of monetary and nonmonetary incentives. We conclude by suggesting some insights and open questions for future research.
Full-Text Access | Supplementary Materials

“The Changing (Dis-)utility of Work,” by Greg Kaplan and Sam Schulhofer-Wohl
We study how changes in the distribution of occupations have affected the aggregate non-pecuniary costs and benefits of working. The physical toll of work is less now than in 1950, with workers shifting away from occupations in which people report experiencing tiredness and pain. The emotional consequences of the changing occupation distribution vary substantially across demographic groups. Work has become happier and more meaningful for women, but more stressful and less meaningful for men. These changes appear to be concentrated at lower education levels.
Full-Text Access | Supplementary Materials

Individual Articles

“Social Connectedness: Measurement, Determinants, and Effects,” by Michael Bailey, Rachel Cao, Theresa Kuchler, Johannes Stroebel and Arlene Wong
Social networks can shape many aspects of social and economic activity: migration and trade, job-seeking, innovation, consumer preferences and sentiment, public health, social mobility, and more. In turn, social networks themselves are associated with geographic proximity, historical ties, political boundaries, and other factors. Traditionally, the unavailability of large-scale and representative data on social connectedness between individuals or geographic regions has posed a challenge for empirical research on social networks. More recently, a body of such research has begun to emerge using data on social connectedness from online social networking services such as Facebook, LinkedIn, and Twitter. To date, most of these research projects have been built on anonymized administrative microdata from Facebook, typically by working with coauthor teams that include Facebook employees. However, there is an inherent limit to the number of researchers that will be able to work with social network data through such collaborations. In this paper, we therefore introduce a new measure of social connectedness at the US county level. Our Social Connectedness Index is based on friendship links on Facebook, the global online social networking service. Specifically, the Social Connectedness Index corresponds to the relative frequency of Facebook friendship links between every county-pair in the United States, and between every US county and every foreign country. Given Facebook’s scale as well as the relative representativeness of Facebook’s user body, these data provide the first comprehensive measure of friendship networks at a national level.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials

Mark Twain on Extrapolation: "Such Wholesale Returns of Conjecture Out of Such a Trifling Investment of Fact"

I sometimes say, with a smile and a wince, that it only takes three data-points for economists to start building a theory — and that in a pinch, we can make do with less data. But of course, anyone who develops a theory based on limited data is prone to false extrapolations. Mark Twain offered one vivid example in his 1883 memoir Life on the Mississippi. This passage describes how there are places where the river loops back and forth in the shape of horseshoe curves. At some point, there is a cut-through (often caused by nature, but sometimes with an assist from those who saw the value of riverfront property). The river charges through the cut-through instead, and thus becomes shorter.

Twain uses this set of facts for a sarcastic jab at science and extrapolation. I quote here from the Project Gutenberg version of Life on the Mississippi, from near the start of Chapter 17.

“They give me an opportunity of introducing one of the Mississippi’s oddest peculiarities,—that of shortening its length from time to time. … The water cuts the alluvial banks of the ‘lower’ river into deep horseshoe curves; so deep, indeed, that in some places if you were to get ashore at one extremity of the horseshoe and walk across the neck, half or three quarters of a mile, you could sit down and rest a couple of hours while your steamer was coming around the long elbow, at a speed of ten miles an hour, to take you aboard again. …

“Pray observe some of the effects of this ditching business. Once there was a neck opposite Port Hudson, Louisiana, which was only half a mile across, in its narrowest place. You could walk across there in fifteen minutes; but if you made the journey around the cape on a raft, you traveled thirty-five miles to accomplish the same thing. In 1722 the river darted through that neck, deserted its old bed, and thus shortened itself thirty-five miles. In the same way it shortened itself twenty-five miles at Black Hawk Point in 1699. Below Red River Landing, Raccourci cut-off was made (forty or fifty years ago, I think). This shortened the river twenty-eight miles. In our day, if you travel by river from the southernmost of these three cut-offs to the northernmost, you go only seventy miles. To do the same thing a hundred and seventy-six years ago, one had to go a hundred and fifty-eight miles!—shortening of eighty-eight miles in that trifling distance. At some forgotten time in the past, cut-offs were made above Vidalia, Louisiana; at island 92; at island 84; and at Hale’s Point. These shortened the river, in the aggregate, seventy-seven miles.

“Since my own day on the Mississippi, cut-offs have been made at Hurricane Island; at island 100; at Napoleon, Arkansas; at Walnut Bend; and at Council Bend. These shortened the river, in the aggregate, sixty-seven miles. In my own time a cut-off was made at American Bend, which shortened the river ten miles or more.

“Therefore, the Mississippi between Cairo and New Orleans was twelve hundred and fifteen miles long one hundred and seventy-six years ago. It was eleven hundred and eighty after the cut-off of 1722. It was one thousand and forty after the American Bend cut-off. It has lost sixty-seven miles since. Consequently its length is only nine hundred and seventy-three miles at present.

“Now, if I wanted to be one of those ponderous scientific people, and ‘let on’ to prove what had occurred in the remote past by what had occurred in a given time in the recent past, or what will occur in the far future by what has occurred in late years, what an opportunity is here! Geology never had such a chance, nor such exact data to argue from! Nor ‘development of species,’ either! Glacial epochs are great things, but they are vague—vague. Please observe:—

“In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”
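Twain's parody is, in modern terms, a naive linear extrapolation. As a quick sanity check on his arithmetic (a toy sketch of my own, not from Twain or the original post):

```python
# Twain's data: the Lower Mississippi shortened 242 miles in 176 years.
miles_lost = 242
years = 176
rate = miles_lost / years  # "a trifle over one mile and a third per year"

current_length = 973  # miles between Cairo and New Orleans "at present"

# Extrapolate linearly a million years into the past ...
length_million_years_ago = current_length + rate * 1_000_000
print(round(length_million_years_ago))  # 1375973 -- "upwards of one million three hundred thousand miles"

# ... and forward: at this rate the river reaches zero length in roughly 700 years.
years_until_gone = current_length / rate
print(round(years_until_gone))  # 708
```

Twain rounds loosely (his "seven hundred and forty-two years" does not quite follow from his own rate), but the joke is the method, not the digits: a constant trend, fit from a short window, extended far outside it.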

"Whoever Is Not a Liberal at 20 Has No Heart …"

There’s a saying along these general lines: “If you’re not a liberal when you’re 25, you have no heart. If you’re not a conservative by the time you’re 35, you have no brain.” Who said it? One blessing of the web is that I can fiddle around with such questions without needing to spend three days in the library.

It’s apparently not Winston Churchill. At least, there’s no record of him having said or written it. And Churchill scholars point out that he was a conservative at 15 and a liberal at 35.

Indeed, it seems the origins of the comment may be French, rather than English. The Quote Investigator website writes:

The earliest evidence located by QI appeared in an 1875 French book of contemporary biographical portraits by Jules Claretie. A section about a prominent jurist and academic named Anselme Polycarpe Batbie included the following passage [translated as] … 

“Mr. Batbie, in a much-celebrated letter, once quoted the Burke paradox in order to account for his bizarre political shifts: “He who is not a républicain at twenty compels one to doubt the generosity of his heart; but he who, after thirty, persists, compels one to doubt the soundness of his mind.”

Quote Investigator has not found an actual record of Mr. Batbie’s “much-celebrated letter.” And although the “Burke paradox” seems most likely to refer to Edmund Burke, it isn’t clear whether it is a reference to something not-yet-discovered that Burke wrote, or a reference to a pattern purportedly revealed by Burke’s life and writings.

But hearkening back to Burke is interesting, because in Thomas Jefferson’s journals one finds an entry relevant to this subject from January 1799, when John Adams was president. Jefferson writes:

“In a conversation between Dr. Ewen and the President, the former said one of his sons was an aristocrat, the other a democrat. The President asked if it was not the youngest who was the democrat. Yes, said Ewen. Well, said the President, a boy of 15 who is not a democrat is good for nothing, and he is no better who is a democrat at 20. Ewen told Hurt, and Hurt told me.”

For a lengthy list of other places where something similar to this quotation has appeared, see here or here. While the quotation clearly has staying power, it seems overly facile to me. The distinction that liberals feel and conservatives think is silly and shallow, and shows little understanding of either. The strong beliefs of young people are easily dismissed as rooted only in feelings, but at least young people often show some flexibility about learning and adapting. The strong feelings of the middle-aged and elderly, by contrast, often seem based as much on being set in their ways, on confirmation bias, and on lessons learned in a rather different past as on any deeper weighing of facts, values, and experience.

Herbert Stein, who was an economist in many positions in Washington, DC, for more than 50 years, captured some of my own sense here in his 1995 collection of essays, On the Other Hand: Essays on Economics, Economists, and Politics (from pp. 1-2):

“An old saying goes that whoever is not a Socialist when young has no heart and whoever is still a Socialist when old has no head. I would say that whoever is not a liberal when young has no heart, whoever is not a conservative when middle-aged has no head, and whoever is still either a liberal or a conservative at age seventy-eight has no sense of humor. Obviously, orthodox certainty on matters about which there can be so little certitude must eventually be seen as only amusing.”

If you can’t learn from both liberals and conservatives, and also laugh at both liberals and conservatives, you might want to reconsider the vehemence of your partisan commitments.

How Coalitional Instincts Make Weird Groups and Stupid People

I like to think of myself as an individual who makes up his own mind, but that’s almost certainly wrong for me, and for you, gentle reader, as well. A vast literature in psychology points out that, in effect, a number of separate personalities live in each of our brains. Which decision gets made at a given time is determined in part by how issues of reward and risk are framed and communicated to us. Moreover, we are members of groups. If my wife or one of my children is in a serious dispute, I will lose some degree of my sunny disposition and rational fair-mindedness. Probably I won’t lose all of it. Maybe I’ll lose less of it than a typical person in a similar situation. But I’ll lose some of it.

John Tooby, a professor of anthropology at the University of California-Santa Barbara, has written about what he calls “Coalitional Instincts” in a short piece for Edge.org (November 22, 2017). Tooby argues that human brains have evolved so that we have “a nearly insuperable human appetite to be a good coalition member.” But to demonstrate clearly that we are part of a coalition, we are all drawn to “unusual, exaggerated beliefs … alarmism, conspiracies, or hyperbolic comparisons.” Here’s Tooby (I have inserted the boldface emphasis):

“Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups. … These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. …

“Why do we see the world this way? Most species do not and cannot. … Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.

“Ancestrally, evolving the neural code to crack these problems supercharged the ability to successfully compete for access to reproductively limiting resources. Fatefully, we are descended solely from those better equipped with coalitional instincts. In this new world, power shifted from solitary alphas to the effectively coordinated down-alphabet, giving rise to a new, larger landscape of political threat and opportunity: rival groups or factions expanding at your expense or shrinking as a result of your dominance.

“And so a daunting new augmented reality was neurally kindled, overlying the older individual one. It is important to realize that this reality is constructed by and runs on our coalitional programs and has no independent existence. You are a member of a coalition only if someone (such as you) interprets you as being one, and you are not if no one does. We project coalitions onto everything, even where they have no place, such as in science. We are identity-crazed.

“The primary function that drove the evolution of coalitions is the amplification of the power of its members in conflicts with non-members. This function explains a number of otherwise puzzling phenomena. For example, ancestrally, if you had no coalition you were nakedly at the mercy of everyone else, so the instinct to belong to a coalition has urgency, preexisting and superseding any policy-driven basis for membership. This is why group beliefs are free to be so weird. Since coalitional programs evolved to promote the self-interest of the coalition’s membership (in dominance, status, legitimacy, resources, moral force, etc.), even coalitions whose organizing ideology originates (ostensibly) to promote human welfare often slide into the most extreme forms of oppression, in complete contradiction to the putative values of the group. …

“Moreover, to earn membership in a group you must send signals that clearly indicate that you differentially support it, compared to rival groups. Hence, optimal weighting of beliefs and communications in the individual mind will make it feel good to think and express content conforming to and flattering to one’s group’s shared beliefs and to attack and misrepresent rival groups. The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities.

“This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one’s friends, and one’s cherished group identity. This freezes belief revision.

“Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member.”

The lesson I draw here is that although we all feel a strong need to join groups, we do have some degree of choice and agency over which groups we end up joining. Even within larger groups, like a certain religion or political party, there will be smaller groups with which one can have a primary affiliation. It may be wise to give an outlet to our coalitional nature by joining several different groups, or by pushing oneself to occasionally phase out one membership and join another.

In addition, we all feel a need to do something a little wacky and extreme to show our group affiliation, but again, we have some degree of choice and agency over what actions and messages define our group. Wearing the colors of a professional sports team, for example, is a different kind of wackiness than sending vitriolic social media messages. Humans want to join coalitional groups, but we can at least consider whether the way a group expresses solidarity is a good fit with who we want to be.

Difficulties of Making Predictions: Global Power Politics Edition

Making predictions is hard, especially about the future. It’s a comment that seems to have been attributed to everyone from Nostradamus to Niels Bohr to Yogi Berra. But it’s deeply true. Most of us have a tendency to make statements about the future with a high level of self-belief, avoid later reconsidering how wrong we were, and then make more statements.

Here’s a nice vivid example from back in 2001. The Bush administration had just taken office, and an official named Lin Wells at the US Department of Defense was reflecting on the then-forthcoming “Quadrennial Defense Review.” He wanted to offer a pungent reminder that the entire exercise of looking ahead even just 10 years has often proved profoundly incorrect. Thus, Wells wrote this memo (dated April 12, 2001):

  • If you had been a security policy-maker in the world’s greatest power in 1900, you would have been a Brit, looking warily at your age-old enemy, France.
  • By 1910, you would be allied with France and your enemy would be Germany.
  • By 1920, World War I would have been fought and won, and you’d be engaged in a naval arms race with your erstwhile allies, the U.S. and Japan.
  • By 1930, naval arms limitation treaties were in effect, the Great Depression was underway, and the defense planning standard said “no war for ten years.”
  • Nine years later World War II had begun.
  • By 1950, Britain no longer was the world’s greatest power, the Atomic Age had dawned, and a “police action” was underway in Korea.
  • Ten years later the political focus was on the “missile gap,” the strategic paradigm was shifting from massive retaliation to flexible response, and few people had heard of Vietnam.
  • By 1970, the peak of our involvement in Vietnam had come and gone, we were beginning détente with the Soviets, and we were anointing the Shah as our protégé in the Gulf region.
  • By 1980, the Soviets were in Afghanistan, Iran was in the throes of revolution, there was talk of our “hollow forces” and a “window of vulnerability,” and the U.S. was the greatest creditor nation the world had ever seen.
  • By 1990, the Soviet Union was within a year of dissolution, American forces in the Desert were on the verge of showing they were anything but hollow, the U.S. had become the greatest debtor nation the world had ever known, and almost no one had heard of the internet.
  • Ten years later, Warsaw was the capital of a NATO nation, asymmetric threats transcended geography, and the parallel revolutions of information, biotechnology, robotics, nanotechnology, and high density energy sources foreshadowed changes almost beyond forecasting.
  • All of which is to say that I’m not sure what 2010 will look like, but I’m sure that it will be very little like we expect, so we should plan accordingly.

The questions of how to predict what you don’t expect, and how to plan for what you don’t expect, are admittedly difficult. The ability to pivot smoothly to face a new challenge may be one of the most underrated skills in politics and management.

The Need for Generalists

One can make a reasonable argument that the concept of an economy and the study of economics begins with the idea of specialization, in the sense that those who function within an economy specialize in one kind of production, but then trade with others to consume a broader array of goods. Along these lines, the first chapter of Adam Smith’s 1776 Wealth of Nations is titled “Of the Division of Labor.” But in the push for specialization, there can be a danger of neglecting the virtues of generalists. Even when it comes to assembly lines, specialization of tasks can be pushed so far that it becomes unproductive (as Smith recognized). In a broad array of jobs, including managers, journalists, legislators and politicians, and even editors of academic journals, there is a need for generalists who can synthesize and put in context the work of a range of specialists.

The need for generalists is not at all new, of course. Here’s a commentary from 80 years ago on “The Need for Generalists” from A.G. Black, who was Chief of the Bureau of Agricultural Economics at the US Department of Agriculture, published in the Journal of Farm Economics (November 1936, 18:4, pp. 657-661, and available via JSTOR). Black is writing in particular about specialization within what is already a specialty of agricultural economics, but his point applies more broadly.

“The past generation, like several generations before it, has indeed been one of greater and greater specialization. This has resulted in great advances in agricultural economics. Our specialists have developed new technics of analysis, they have discovered new relationships, they have been able to give close attention to important facts and factors that might otherwise have escaped attention and by such escape might have led to wrong conclusions. Without this specialized attention our discipline in agricultural economics could not have attained the position it has reached today.

“This advance has not been attained without cost. The price has been the loss of minds, or the neglect to develop minds, trained to cope with the complex problems of today in the comprehensive, overall manner called for by such problems. Our specialists are splendidly equipped to solve a problem concerning the price of wheat, or of corn, or of cotton, or a technical question in cooperative marketing, farm management, taxation or international trade. But the more important problems almost never present themselves in those narrow terms; rather they may involve elements of all the above and perhaps several more. …

“Increased specialization itself tends to raise barriers between fields. It tends to create a system of professional jealousies that is not conducive to the development of generalists. The specialist who burrows deeper and deeper into a narrower and narrower hole becomes convinced that no one who has been sapping in a neighboring tunnel can possibly know as much about the peculiar twists and turns of his burrow as he, himself. And he is right. He knows that he can readily confound and confuse a neighboring specialist if the latter strays from his own confines, and what is more, he will. One of the greatest joys of the specialist is to make an associate appear infantile and ridiculous on the occasions when the latter appears to be getting out of his field.

“The specialist stakes out his claim and guards it as jealously as ever did a prospector of the ’40s, and woe to the unwary trespasser. As the specialist knows how he looks upon intruders, he knows how he would be treated if he had the temerity to wander outside his main field. Consequently he is usually quite willing to leave outside fields to other specialists.

“The development of the whole field takes on a honeycomb appearance with series upon series of well-marked and almost wholly isolated cells. These cell walls need to be broken down. There is need of men who can correlate and coordinate the specialized knowledge in the separate cells–men who can bring to bear on the larger problems the findings of the different specialists and who have sufficient perspective and sense of proportion to apply just the correct shade of emphasis to the contribution of each particular specialist. …

“Our whole organization has developed on the assumption that the generalizing function is not important, that it does not require quite the ability and training of the specialist, that it can be satisfactorily done by almost anyone and that certainly there is nothing about it that demands the attention of really first class men. If generalizing be done at all, it can safely be committed to the specialist who can play with it as relaxation from the really serious and important demands of his specialty, or to the administrator who can give it all of the attention it requires between telephone calls and committee meetings.

“All of this, I suppose, leads to the conclusion that in agricultural economics we need another specialist, that is a “specialist” who is a “generalist.” We need to make a place for the trained economist of highest ability who will be free from administrative demands as well as free from the tyranny of specialization, who will have the job of keeping abreast of the results of the various specialists and who can spend a good deal of time in analyzing findings having a bearing upon the ultimate solution of these same problems. … In other words, students need training in analysis and in synthesis. Today the ability to synthesize facts, research results and partial solutions into a well rounded whole, is too infrequently available.”

One of the many political clichés that make me die a little bit inside is the claim that all we need to address a certain problem (health care, poverty, transportation, the Middle East, whatever) is to bring together a group of experts who will provide the common-sense solution that we have all been ignoring. But while bringing together a group of specialist experts can provide a great deal of information and insight, such experts are often not especially good at melding their specific insights into a general policy.

Homage: I ran across a mention of this article at Carola Binder’s always-useful “Quantitative Ease” website last summer, and left myself a note to track it down. But given my time constraints and organizational skills, it took a while for me to do so.

"Half the Money I Spend on Advertising is Wasted, and the Trouble is I Don’t Know Which Half"

There’s an old rueful line from firms that advertise: “We know that half of all we spend on advertising is wasted, but we don’t know which half.” It’s not clear who originally coined the phrase. But we do know that the effects of advertising have changed dramatically in a digital age. Half of all advertising spending may still be wasted, but now it’s for a very different reason.

I was raised with the folklore that John Wanamaker, founder of the eponymous department stores, was the originator of the phrase at hand. But the attribution gets pretty shaky, pretty fast. David Ogilvy, the head of the famous Ogilvy & Mather advertising agency, wrote in his 1963 book Confessions of an Advertising Man (pp. 86-87): “As Lord Leverhulme (and John Wanamaker after him) complained, ‘Half the money I spend on advertising is wasted, and the trouble is I don’t know which half.’”

So how about William Lever, Lord Leverhulme, who built a fortune in the soap business (with Sunlight Soap, and eventually Unilever)? Career advertising executive Jeremy Bullmore has looked into it, and wrote in the 2013 annual report of the British advertising and public relations firm WPP:

“There are at least a dozen minor variations of this sentiment that are confidently quoted and variously attributed but they all have in common the words ‘advertising’, ‘half’ and ‘waste’. Google the line and you’ll get about nine million results. … As it happens, there’s little hard evidence that either William Lever or John Wanamaker (or indeed Ford or Penney) ever made such a remark. Certainly, neither the Wanamaker nor the Unilever archives contains any such reference. Yet for a hundred years or so, with no accredited source and no data to support it, this piece of folklore has survived and prospered.”

Bullmore makes some compelling points. One is that even 100 years ago, it was widely believed that advertising could be usefully shaped and targeted. He writes:

“Retail advertising in the days of John Wanamaker was mostly placed in local newspapers and was mainly used to shift specific stock. An ad for neckties read, ‘They’re not as good as they look, but they’re good enough. 25 cents.’ The neckties sold out by closing time and so weren’t advertised again. Waste, zero. Experiment was commonplace. Every element of an advertisement – size, headline, position in paper – was tested for efficacy and discarded if found wanting. Waste, if not eliminated, was ruthlessly hounded.

“Claude Hopkins published Scientific Advertising in 1923. In it, he writes, “Advertising, once a gamble, has… become… one of the safest of business ventures. Certainly no other enterprise with comparable possibilities need involve so little risk.” Even allowing for adman’s exuberance, it strongly suggests that, within Wanamaker’s lifetime, there were very few advertisers who would have agreed that half their advertising money was wasted.”

Further, Bullmore points out that people are more comfortable buying certain products because “everyone knows” about them, and “everyone knows” because even those who don’t purchase the product have seen the ads.

“A common attribute of all successful, mass-market, repeat-purchase consumer brands is a kind of fame. And the kind of fame they enjoy is not targeted, circumscribed fame but a curiously indiscriminate fame that transcends its particular market sector. Coca-Cola is not just a famous soft drink. Dove is not just a famous soap. Ford is not just a famous car manufacturer. In all these cases, their fame depends on their being known to just about everyone in the world: even if they neither buy nor use. Show-biz publicists have understood this for ever. When The Beatles invaded America in 1964, their manager Brian Epstein didn’t arrange a series of targeted interviews in fan magazines; he brokered three appearances on the Ed Sullivan Show with an audience for each estimated at 70 million. Far fewer than half of that 70 million will have subsequently bought a Beatles record or a Beatles ticket; but it seems unlikely that Epstein thought this extended exposure in any way wasted.”

And of course, if large amounts of advertising are literally wasted, it seems as if we should be able to observe a substantial number of companies that cut their advertising budget in half and suffered no measurable decline in sales. (In fact, if half of advertising is always wasted, shouldn’t the firm then keep cutting the advertising budget by half, and half again, and half again, and so on down to zero? Seems as if there must be a flaw in this logic!)

Of course, one of the major changes in advertising during the last decade or two is that print advertising has plummeted, while digital advertising has soared. More generally, digital technology has made it much more straightforward to create systematic variations in the quantity and qualities of advertising, and to track the results. Bullmore writes: “And given modern measurements and the growth of digital channels, it’s easier than ever for advertising to be held accountable; to be seen to be more investment than cost.”

But Bullmore is probably too optimistic here about how easy it is to hold advertising accountable, for a couple of reasons.

One problem is that targeting specific audiences with digital advertising is a lot more complicated in practice than it may seem at first. Judy Unger Franks of the Medill School of Journalism, Media, Integrated Marketing Communications at Northwestern University explained the issues in a short essay late last summer. She wrote:

“Programmatic Advertising enables marketers to make advertising investments to select individuals in a media audience as opposed to having to buy the entire audience. Advertisers use a wealth of Big Data to learn about each audience member to then determine whether that audience member should be served with an advertisement and at what price. This all happens in near real-time and advertisers can therefore make near real-time adjustments to their approach to optimize the return-on-investment of its advertising expenditures.

“In theory, Programmatic Advertising should solve the issue of waste. However, in our attempt to eliminate waste from the advertising value chain, we may have made things worse. We have unleashed a dark side to Programmatic Advertising that comes at a significant cost. Now, we know exactly which half of the money spent on advertising is wasted: it’s the half that marketers must now spend on third parties who have inserted themselves into the Programmatic Advertising ecosystem just to keep our investments clean. … 

“How bad is it? How much money are advertisers spending on this murky supply chain? The IAB (Interactive Advertising Bureau) answered this for us when they released their White Paper, “The Programmatic Supply Chain: Deconstructing the Anatomy of a Programmatic CPM” in March of 2016. The IAB identified ten different value layers in the Programmatic ecosystem. I believe they are being overly generous by calling each a “value” layer. When you need an ad blocking service to avoid buying questionable content and a separate verification service to make sure that the ad was viewable by a human, how is this valuable? When you add up all the costs associated with the ten different layers, they account for 55% of the cpm (cost-per-thousand) that an advertiser pays for a programmatic ad. This means that for every dollar an advertiser spends in Programmatic Advertising over half (55%) of that dollar never reaches the publisher. It falls into the hands of all the third parties that are required to feed the beast that is the overly complex Programmatic Advertising ecosystem. We now know which half of an advertising investment is wasted. It’s wasted on infrastructure to prop up all those opportunities to buy individual audiences across the entire Programmatic Advertising supply chain.”

In other words, by the time an advertiser has spent the money to do the targeting, and to make sure that the mechanisms to do the targeting work, and to follow up on the targeting, the costs can be so high that the reason for targeting in the first place is in danger of being lost.
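The arithmetic behind that IAB figure is worth making concrete. Here is a minimal sketch in Python of the split, taking only the 55% supply-chain share quoted above as given; the $10 CPM is a hypothetical example for illustration, not a number from the report:

```python
# Sketch of the IAB arithmetic quoted above: if 55% of a programmatic
# CPM goes to supply-chain intermediaries, what reaches the publisher?

def publisher_share(cpm_dollars, supply_chain_share=0.55):
    """Return (publisher_revenue, intermediary_cost) per thousand impressions."""
    intermediary_cost = cpm_dollars * supply_chain_share
    return cpm_dollars - intermediary_cost, intermediary_cost

# Hypothetical example: a $10 CPM (cost per thousand impressions)
to_publisher, to_middlemen = publisher_share(10.00)
print(f"Publisher receives ${to_publisher:.2f}, intermediaries ${to_middlemen:.2f}")
# Publisher receives $4.50, intermediaries $5.50
```

The point of the toy calculation is simply that the intermediary share scales with the whole budget: doubling programmatic spend also doubles the dollars that never reach the publisher.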

The other interesting problem is that academic studies that have tried to measure the returns to targeted online advertising have run into severe problems. For a discussion, see “The Unfavorable Economics of Measuring the Returns to Advertising,” by Randall A. Lewis and Justin M. Rao (Quarterly Journal of Economics, 130:4, November 2015, pp. 1941–1973, available here). They describe the old “half of what I spend in advertising is wasted” slogan in these terms (citations omitted):

“In the United States, firms annually spend about $500 per person on advertising. To break even, this expenditure implies that the universe of advertisers needs to causally affect $1,500–2,200 in annual sales per person, or about $3,500–5,500 per household. A question that has remained open over the years is whether advertising affects purchasing behavior to the degree implied by prevailing advertising prices and firms’ gross margins …”
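The break-even logic in that passage can be reconstructed directly: if advertisers spend $500 per person and must recoup it out of the gross margin on the sales the ads induce, then required sales equal spend divided by margin. The margin values below are my own back-of-envelope assumptions, chosen only because they bracket the paper's quoted $1,500–2,200 range:

```python
# Back-of-envelope version of Lewis and Rao's break-even claim: ad spend
# per person must be recouped from the gross margin on ad-induced sales.

def breakeven_sales(ad_spend_per_person, gross_margin):
    """Sales per person needed so margin on those sales covers ad spend."""
    return ad_spend_per_person / gross_margin

spend = 500  # annual U.S. ad spend per person, from the quote above
# Margin assumptions are mine, picked to bracket the quoted $1,500-2,200:
for margin in (0.33, 0.23):
    print(f"margin {margin:.0%}: break-even sales ${breakeven_sales(spend, margin):,.0f} per person")
# margin 33%: break-even sales $1,515 per person
# margin 23%: break-even sales $2,174 per person
```

Put the other way around: the lower a firm's gross margin, the larger the sales response that advertising must causally generate just to pay for itself.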

The authors look at 25 studies of digital advertising. They find that the variations in what people buy and how much they spend are very large. Thus, it’s theoretically possible that if advertising causes even a small number of people to “tip” from spending only a little on a product to being big spenders on it, the advertising can pay off for the advertiser. But in a statistical sense, given that people vary so much in their spending on products and change so much anyway, it’s really hard to disentangle the effects of advertising from the changes in buying patterns that would have happened anyway. As the authors write: “[W]e are making the admittedly strong claim that most advertisers do not, and indeed some cannot, know the effectiveness of their advertising spend …”

Thus, the economics of spending on advertising remain largely unresolved, even in the digital age. Those interested in more on the economics of advertising might want to check my post on “The Case For and Against Advertising” (November 15, 2012).