Adam Smith: The Plight of the Impartial Spectator in Times of Faction

Adam Smith’s first great book, the Theory of Moral Sentiments, relies heavily in places on the idea of an "impartial spectator." Smith’s notion is that our beliefs about morality are closely related to our notion of how a hypothetical "impartial spectator" would react to a given situation. (Here’s a quick overview of Smith’s argument.) But what happens to a person trying to think like an impartial spectator–that is, a person trying to preserve the integrity of their own personal judgment–in a time of faction? Smith argues that anyone trying to act in this way is likely to be marginalized by all competing factions.

Here is Smith’s comment from the 1759 Theory of Moral Sentiments (Book III, Ch. 1, paragraph 85). As is my wont, I quote here from the online version of the book at the Library of Economics and Liberty website. Smith wrote:

"The animosity of hostile factions, whether civil or ecclesiastical, is often still more furious than that of hostile nations; and their conduct towards one another is often still more atrocious. … In a nation distracted by faction, there are, no doubt, always a few, though commonly but a very few, who preserve their judgment untainted by the general contagion. They seldom amount to more than, here and there, a solitary individual, without any influence, excluded, by his own candour, from the confidence of either party, and who, though he may be one of the wisest, is necessarily, upon that very account, one of the most insignificant men in the society. All such people are held in contempt and derision, frequently in detestation, by the furious zealots of both parties. A true party-man hates and despises candour; and, in reality, there is no vice which could so effectually disqualify him for the trade of a party-man as that single virtue. The real, revered, and impartial spectator, therefore, is, upon no occasion, at a greater distance than amidst the violence and rage of contending parties. To them, it may be said, that such a spectator scarce exists any where in the universe. Even to the great Judge of the universe, they impute all their own prejudices, and often view that Divine Being as animated by all their own vindictive and implacable passions. Of all the corrupters of moral sentiments, therefore, faction and fanaticism have always been by far the greatest."

Misallocation and Productivity: International Perspective

In pretty much every industry in pretty much every country, the firms exhibit a range of productivity: that is, some well-run and efficient firms produce more output given their levels of labor and capital, while others produce less. What ought to happen in a well-functioning economy is that the lagging-productivity firms should either be catching up with the leading-productivity firms over time, or the laggard firms should shrink in size while the leading firms grow in size. This process has been demonstrably important to economic growth in the past.

However, a wide range of taxes, rules, and institutions may act to inhibit reallocation of resources, and thus to slow down productivity growth. For example, if smaller farms are less efficient than larger farms, but the land use rules in a developing economy keep farm sizes small, then agricultural resources will not be reallocated. Economists refer to this as an issue of "misallocation."

Diego Restuccia and Richard Rogerson discuss "The Causes and Costs of Misallocation" in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 151-174). The IMF discusses the role of tax policy in creating and sustaining misallocation in the April 2017 IMF Fiscal Monitor, with an overall theme of "Achieving More with Less." The discussion of reallocation is in "Chapter 2: Upgrading the Tax System to Boost Productivity." The IMF researchers write:

"Resource misallocation manifests itself in a wide dispersion in productivity levels across firms, even within narrowly defined industries. High dispersion in firm productivities reveals that some businesses in each country have managed to achieve high levels of efficiency, possibly close to those of the world frontier in that industry. This implies that existing conditions within a country are compatible with higher levels of productivity. Therefore, countries can reap substantial TFP [total factor productivity] gains from reducing resource misallocation, allowing firms to catch up with the high-productivity firms in their own economies. In some cases, however, the least productive businesses will need to exit the market, releasing resources for the more productive ones. For example, Baily, Hulten, and Campbell (1992) find that 50 percent of manufacturing productivity growth in the United States during the 1980s can be attributed to the reallocation of factors across plants and to firm entry and exit. Similarly, Barnett and others (2014) find that labor reallocation across firms explained 48 percent of labor productivity growth for most sectors in the U.K. economy in the five years prior to 2007.

Resource misallocation is often the result of a large number of poorly designed economic policies and market failures that prevent the expansion of efficient firms and promote the survival of inefficient ones. Reducing misallocation is therefore a complex and multidimensional task that requires the use of all policy levers. Structural reforms play a crucial role, in particular because the opportunity cost of poorly designed economic policies is much greater now in the context of anemic productivity growth. Financial, labor, and product market reforms have been identified as important contributors (see Banerjee and Duflo 2005; Andrews and Cingano 2014; Gamberoni, Giordano, and Lopez-Garcia 2016; and Lashitew 2016). This chapter makes the case that upgrading the tax system is also key to boosting productivity by reducing distortions that prevent resources from going to where they are most productive. … 

Potential TFP gains from reducing resource misallocation are substantial and could lift the annual real GDP growth rate by roughly 1 percentage point. Payoffs are higher for emerging market and low-income developing countries than for advanced economies, with considerable variation across countries. …

Many emerging market economies have a relatively small number of leading firms, and a large number of laggards. If the distribution of leaders and laggards in these markets became more equal, similar to the distribution between leading and laggard firms in US industries, the productivity gains could be large. By the IMF's calculations, total factor productivity "would increase by 30 to 50 percent in China and by 40 to 60 percent in India."

In their JEP essay, Restuccia and Rogerson provide a useful overview of what can cause misallocation, and how economists have sought to measure the potential gains from reducing misallocation. For a flavor of the issues and analysis, here are a few of the studies mentioned in the paper:

"Government regulation can also hinder the reallocation of individuals across space. Hsieh and Moretti (2015) study misallocation of individuals across 220 US metropolitan areas from 1964 to 2009. They document a doubling in the dispersion of wages across US cities during the sample period. Using a model of spatial reallocation, they show that the increase in wage dispersion across US cities represents a misallocation that contributed to a loss in aggregate GDP per capita of 13.5 percent. They argue that across-city labor misallocation is directly related to housing regulations and the associated constraints on housing supply. …"

"Tombe and Zhu (2015) provide direct evidence on the frictions of labor (and goods) mobility across space and sectors in China and quantify the role of these internal frictions and their changes over time on aggregate productivity. The reduction of internal migration frictions is key and together with internal trade restrictions account for about half of the growth in China between 2000 and 2005. …"

"Restuccia and Santaeulalia-Llopis (2017) study misallocation across household farms in Malawi. They have data on the physical quantity of outputs and inputs as well as measures of transitory shocks and so are able to measure farm-level total factor productivity. They find that the allocation of inputs is relatively constant across farms despite large differences in measured total factor productivity, suggesting a large amount of misallocation. In fact, they found that aggregate agricultural output would increase by a remarkable factor of 3.6 if inputs were allocated efficiently. Their analysis also suggests that institutional factors that affect land allocation are likely playing a key role. Specifically, they compare misallocation within groups of farmers that are differentially influenced by restrictive land markets. Whereas most farmers in Malawi operate a given allocation of land, other farmers have access to marketed land (in most cases through informal rentals). Using this source of variation, Restuccia and Santaeulalia-Llopis find that misallocation is much larger for the group of farmers without access to marketed land: specifically, the potential output gains from removing misallocation are 2.6 times larger in this group relative to the gains for the group of farms with marketed land."
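
The logic behind a number like that factor of 3.6 can be illustrated with a toy calculation. The sketch below is emphatically not the authors' method or calibration: it simply assumes a Cobb-Douglas technology with decreasing returns, a hypothetical lognormal distribution of farm-level TFP, and an "actual" allocation in which every farm uses the same inputs, then compares aggregate output under that allocation with output when inputs are reallocated to equalize marginal products.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.7                      # assumed decreasing-returns parameter (illustrative)
n = 1000                         # hypothetical number of farms
s = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # assumed farm-level TFP draws
X = float(n)                     # total input endowment, normalized to 1 per farm

# "Actual" allocation: every farm uses the same input bundle
x_equal = np.full(n, X / n)
y_actual = np.sum(s * x_equal ** gamma)

# Efficient allocation: equalize marginal products, so x_i is proportional to s_i^(1/(1-gamma))
weights = s ** (1.0 / (1.0 - gamma))
x_efficient = X * weights / weights.sum()
y_efficient = np.sum(s * x_efficient ** gamma)

print(f"output gain from efficient reallocation: {y_efficient / y_actual:.2f}x")
```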

There will always be leading and lagging firms, and various hindrances to reallocation of resources across places and firms. In that sense, misallocation is never going away. But studying misallocation offers a useful reminder that productivity growth and economic growth are driven (or not!) by the dynamic forces of competition in a reasonably flexible economic setting.

Moreover, a better understanding of the gaps between leading and lagging companies–why such gaps persist, what might help to close them–may even help to explain one of the really big questions in the global economy, which is the overall slowdown in productivity growth. A 2015 study done by the OECD found that productivity growth among leading companies in various industries has not been slowing down; instead, the gap between leading and lagging companies has expanded, as if lagging companies are having a harder time keeping pace.

NAFTA in a Multipolar World Economy

Discussions of globalization often seem rooted in an assumption that the main choices, either for the US or for other countries, are either national or global. But there is another possibility, which is that the world economy evolves to a "multipolar" setting, based primarily on regional agglomerations of cross-national trade. In this situation, the issue for the US economy is whether it will be part of its geographically natural multipolar group, here in the Americas, or whether it will try to view itself as a group of one, competing in the global economy with multipolar groups in Europe and in Asia. Your attitude toward the North American Free Trade Agreement, for example, may vary according to whether you see it as one of many trade deals in a globalizing economy, or whether you see it as the specific trade deal for building a US-centered trading bloc in a multipolar economy.

Michael O’Sullivan and Krithika Subramanian lay out the case for the multipolarity hypothesis in "Getting Over Globalization," written as a report for Credit Suisse Research (January 2017). They write:

"Globalization is running out of steam. We can see this in various ways. Our measure for tracking globalization – made up of flows of trade, finance, services and people – has ebbed in the past year, and has slipped backwards over the course of the past three years so that it has dropped below the levels reached in 2012–2013 to about the same level as crisis-ridden 2009–2010 … . Perhaps the most basic representation of globalization is trade, and this is sluggish or according to many measures it is plateauing. … Other indicators of globalization paint a more negative picture – cross-border flows of financial assets (relative to GDP) have continued downward from their pre-financial-crisis peak, most likely because of the effects of regulation and the general shrinking of the banking sector. Trade liberalization, as measured by the Fraser Institute’s economic freedom of the world indices, has been slowly declining since its peak in 2000, although it is still at a relatively healthy level. … It should be said that the extent of globalization/multipolarity is still at a historically high level, although it is hard not to have the impression that it is on the verge of a downward correction, especially once we consider some of the underlying dynamics. …

"One of the notable sub-trends of globalization has been a much better distribution of the world’s economic output, led by what were once regarded as overly populous, third world countries such as India and China. This has fueled multipolarity – the rise of regions that are now distinct in terms of their economic size, political power, approaches to democracy and liberty, and their cultural norms. …
We believe that the world is now leaving globalization behind it and moving to a more distinct multipolar setting. …

"The … scenario is based on the rise of Asia and a stabilization of the Eurozone so that the world economy rests, broadly speaking, on three pillars – the Americas, Europe and Asia (led by China). In detail, we would expect to see the development of new world or regional institutions that surpass the likes of the World Bank, the rise of “managed democracy” and more regionalized versions of the rule of law – migration becomes more regional and more urban rather than cross-border, regional financial centers develop and banking and finance develop in new ways. At the corporate level, the significant change would be the rise of regional champions, which in many cases would supplant multinationals. We would also expect to see uneven improvements in human development leading to more stable, wealthier local economies on the back of a continuation of the emerging market consumer trend. …

"An interesting and intuitive way of seeing how the world has evolved from a unipolar one (i.e. USA) to a more multipolar one is to look at the location of the world’s 100 tallest buildings. The construction of skyscrapers (200 meters plus in height) is a nice way of measuring hubris and economic machismo, in our opinion. Between 1930 and 1970, at least 90% of the world’s tallest buildings could be found in the USA, with a few exceptions in South America and Europe. In the 1980s and 1990s, the USA continued to dominate the tallest tower league tables, but by the 2000s there was a radical change, with Middle Eastern and Asian skyscrapers rising up. Today about 50% of the world’s tallest buildings are in Asia, with another 30% in the Middle East, and a meager 16% in the USA, together with a handful in Europe. In more detail, three-quarters of all skyscraper completions in 2015 were located in Asia (China and Indonesia principally), followed by the UAE and Russia. Panama had more skyscraper completions than the USA."

If one believes that the US should view its economy as part of an emerging American bloc in a multipolar world economy, the North American Free Trade Agreement between the US, Canada, and Mexico is the foundation for that bloc. C. Fred Bergsten and Monica de Bolle have edited an e-book titled A Path Forward for NAFTA, a collection of 11 short essays discussing NAFTA "modernization," "renegotiation," and "updating" from various national, industry, and foreign policy perspectives (Peterson Institute for International Economics Briefing 17-2, July 2017). They give some sense of the possibilities for cooperation and agreement, and the unlikeliness that such an agreement will address bilateral trade deficits, in the "Overview" essay:

"The overarching goal of negotiators from the three participating countries must be to boost the competitiveness of North America as a whole, liberalizing and reforming commercial relations between the three partner countries and responding to the many changes in the world economy since NAFTA went into effect in 1994. These changes include the digital transformation of commerce, which has enabled sophisticated new production methods employing elaborate supply chains, transforming North America into a trinational manufacturing and services hub. But concerns about labor, the environment, climate change and energy resources, and currency issues have become more acute than they were at the time NAFTA started. Commerce Secretary Wilbur Ross was thus correct when he said that NAFTA “didn’t really address our economy or theirs [Mexico and Canada] in the way they are today.” …

"The broadest consistent goal shared by the NAFTA countries should be to strengthen the international position of North America as a whole in a world of tough competition from China and others. Beyond that objective, the negotiators can take steps toward achieving regional energy independence, since all three countries are large consumers and producers of different kinds of energy, from those based on fossil fuels to those derived from new technologies and renewable sources. There is also plenty of room for additional or indeed full liberalization of key sectors, such as financial services and telecommunications, to the benefit of all three economies.

"The new NAFTA could borrow some of the TPP’s innovative approaches and embrace cutting-edge standards for issues such as e-commerce, state-owned enterprises (SOEs), and other sectors that have become central to international trade and investment. The North American partners might be able to help resolve a politically inflammatory issue plaguing trade agreements worldwide: incorporating dispute settlement mechanisms that will make their provisions enforceable and thus credible without being perceived as undermining national sovereignty and widely shared concepts of fairness. Another step in this direction would be to work out a North American competition policy that would enable the three countries to disavow the use of antidumping and countervailing duties against each other, as Australia and New Zealand have done. The NAFTA partners might also strive to achieve a degree of regulatory coherence that has so far eluded the United States and the European Union in their efforts to forge a transatlantic agreement. NAFTA negotiators could permit like-minded countries, notably the members of the Pacific Alliance (Chile, Colombia, and Peru, as well as Mexico), all of which are already free trade agreement partners of the United States, to join NAFTA. …

"[T]rade agreements are inappropriate and ineffective vehicles for attempting to reduce trade imbalances. The reason is that external imbalances are created by internal macroeconomic imbalances and can be remedied only by changes in the latter. Hence continued US insistence on cutting its trade deficit, especially via bilateral efforts with Mexico, would almost surely lead to dissatisfaction with the outcome and a potential blowup of the entire agreement. Taking the concern about trade deficits at face value, moreover, is a prescription for deadlock with Canada and Mexico, both of which run global trade and current account deficits on the same order of magnitude as the United States. Hence they properly view themselves as deficit countries that need to strengthen, not weaken, their external economic positions. They are most unlikely to accede to US demands to strengthen its external position at their expense, even if the economics were to make that possible, and can in fact be expected to argue (correctly) that the three North American deficit countries should work together to improve their joint and several external positions with the rest of the world."

Those interested in NAFTA and the possibility of an emerging multipolar world economy might wish to check some earlier posts on these topics.

The US Fiscal Outlook

Pretty much everyone agrees that the US fiscal outlook for the long-run–a few decades into the future–looks grim unless changes are made. Here are estimates of the ratio of accumulated federal debt/GDP throughout US history, and projected up through 2050, from a Congressional Budget Office report in March 2017. The spikes of government debt during wartime, the Great Depression, and the Reagan and Obama administrations are clear. The trajectory forecast would take US government debt outside past experience.

Alan J. Auerbach and William G. Gale set the stage for the discussion that needs to happen in "The Fiscal Outlook In a Period of Policy Uncertainty," written for the Tax Policy Center (August 7, 2017). Douglas W. Elmendorf and Louise M. Sheiner also tackle these issues in "Federal Budget Policy with an Aging Population and Persistently Low Interest Rates," in the Summer 2017 issue of the Journal of Economic Perspectives (31: 3, pp. 175-194).

Auerbach and Gale summarize their theme straightforwardly:

"Budget deficits appear manageable in the short run, but the nation’s debt-GDP ratio is already high relative to historical norms, and even under optimistic assumptions, both measures will rise in the future. Sustained deficits and rising federal debt will crowd out future investment, reduce prospects for economic growth, and impose burdens on future generations. …

"For example, we find that just to ensure that the debt-GDP ratio in 2047 does not exceed the current level would require a combination of immediate and permanent spending cuts and/or tax increases totaling 3.2 percent of GDP. This represents about a 16 percent cut in non-interest spending or a 19 percent increase in tax revenues relative to current levels. To return the debt-GDP ratio in 2047 to 36 percent, its average in the 50 years preceding the Great Recession in 2007-9, would require immediate and permanent spending cuts or tax increases of 4.6 percent of GDP. The longer policy makers wait to institute fiscal adjustments, the larger those adjustments would have to be to reach a given debt-GDP ratio target in a given year. While the numbers above are projections, not predictions, they nonetheless constitute the fiscal backdrop against which potentially ambitious new tax and spending proposals should be considered."
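
The conversion from "percent of GDP" to "percent of spending or revenue" is simple arithmetic, sketched below. The spending and revenue shares of GDP used here are rough assumptions inferred from the ratios in the quoted passage, not figures taken from the Auerbach-Gale paper.

```python
# Translate a fiscal adjustment stated as a share of GDP into percent changes in
# spending or revenue, given assumed baseline shares of GDP.

adjustment_share_of_gdp = 0.032      # 3.2 percent of GDP, from the quoted passage
noninterest_spending_share = 0.20    # assumed non-interest spending as a share of GDP
revenue_share = 0.17                 # assumed tax revenue as a share of GDP

spending_cut_pct = 100 * adjustment_share_of_gdp / noninterest_spending_share
revenue_increase_pct = 100 * adjustment_share_of_gdp / revenue_share

print(f"required spending cut: about {spending_cut_pct:.0f}% of non-interest spending")
print(f"or revenue increase: about {revenue_increase_pct:.0f}% of current tax revenue")
```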

Auerbach and Gale go through a variety of ways of projecting future deficits, but the overall message that there is a long-run reason for concern keeps coming through. They also offer a useful reminder that even if the proximate cause of the higher federal debt burden is projections for higher spending on entitlement and especially health programs, there are a number of cases in which addressing a problem by reversing its cause doesn't make sense. As I sometimes say, "When someone is hit by a car, you don't fix their injury by reversing the cause–that is, backing up the car over their body." As Auerbach and Gale write:

"Looking toward policy solutions, it is useful to emphasize that even if the main driver of long-term fiscal imbalances is the growth of entitlement benefits, this does not mean that the only solutions are some combination of benefit cuts now and benefit cuts in the future. For example, when budget surpluses began to emerge in the late 1990s, President Clinton devised a plan to use the funds to “Save Social Security First.” Without judging the merits of that particular plan, our point is that Clinton recognized that Social Security faced long-term shortfalls and, rather than ignoring those shortfalls, aimed to address the problem in a way that went beyond simply cutting benefits. A more general point is that addressing entitlement funding imbalances can be justified precisely because one wants to preserve and enhance the programs, not just because one might want to reduce the size of the programs. Likewise, addressing these imbalances may involve reforming the structure of other spending, raising or restructuring revenues, or creating new programs, as well as simply cutting existing benefits. Nor do spending cuts or tax changes need to be across the board. Policy makers should make choices among programs. For example, more investment in infrastructure or children’s programs could be provided, even in the context of overall spending reductions."

Elmendorf and Sheiner tackle a different aspect of the same question. They agree that federal deficits are on "an unsustainable path." However, they also note that interest rates are very low, which offers an opportunity for federal borrowing aimed at infrastructure and long-run investments. They write:

"Both market readings and detailed analyses by a number of researchers suggest that Treasury interest rates are likely to remain well below their historical norms for years to come, which represents a sea change for budget policy. We argue that many—though not all—of the factors that may be contributing to the historically low level of interest rates imply that both federal debt and federal investment should be substantially larger than they would be otherwise. We conclude that, although significant policy changes to reduce federal budget deficits ultimately will be needed, they do not have to be implemented right away. Instead, the focus of federal budget policy over the coming decade should be to increase federal investment while enacting changes in federal spending and taxes that will reduce deficits gradually over time."

As a focused argument in economic reasoning, Elmendorf and Sheiner make a strong case. As a matter of political economy, it’s trickier, because one can raise at least three questions.

1) If the US political system decides not to focus on deficit-reduction now, is it capable of focusing the additional spending on investment that will raise long-run growth, or will the additional budget flexibility just lead to more transfer payments?

2) If the US political system doesn't focus on deficit reduction in the near-term, then in the medium-term roughly a decade from now, it will need to preside over even greater budget changes (as Auerbach and Gale explain) to avoid the outcome that everyone agrees is unsustainable. It would be a hard political U-turn to shift from running larger deficits for investment in the present to taking budgetary steps in the future that would offset that additional borrowing, and more besides.

3) The theoretical case for enacting changes now that will have the effect of holding down the increase in deficits in the long-run is strong. But in practical terms, just what these changes should be is less clear. For example, Congress could pass a law which places limits on, say, the level of government health care spending from 2030 to 2040, but there's no reason to believe that those limits will have any actual force when those years arrive. There are a few changes, like phasing in an older retirement age or a change in benefit formulas for Social Security, that might have a better chance of persisting. It seems useful to think more about budgetary policies that could be enacted in the present, but would have most of their effect after a long-term phase-in, and would be relatively resistant to future political tinkering.

It’s not exactly news that democratic political systems are continually enticed to focus on the present and push costs into the future in a wide range of contexts: public borrowing, pensions, environment, and others.

William Playfair: Inventor of the Bar Graph, Line Graph, and Pie Chart

William Playfair (1759-1823) wasn’t sure himself whether he had actually invented the bar graph and the line graph. So after he had published The Commercial and Political Atlas in 1786, he kept an eye out for other examples. After 13 years of looking, but not finding any predecessors, he declared himself to be the inventor in his 1798 book Lineal Arithmetic, where he wrote (pp. 6-7):

"I confess I was very anxious to find out if I was actually the first who applied the principles of geometry to matters of finance, as it had long before been applied to chronology with great success. I am now satisfied, upon due enquiry, that I was the first; for during 11 years I have never been able to learn that anything of a similar nature had ever before been produced.

"To those who have studied geography, or any branch of mathematics, these charts will be perfectly intelligible. To such, however, as have not, a short explanation may be necessary.

"The advantage proposed by those charts, is not that of giving a more accurate statement than by figures, but it is to give a more simple and permanent idea of the gradual progress and comparative amounts, at different periods, by presenting to the eye a figure, the proportions of which correspond with the amount of the sums intended to be expressed.

"As the eye is the best judge of proportion, being able to estimate it with more quickness and accuracy than any of our other organs, it follows that wherever relative quantities are in question, a gradual increase or decrease of any revenue, receipt or expenditure of money, or other value, is to be stated, this mode of representing it is peculiarly applicable; it gives a simple, accurate, and permanent idea, by giving form and shape to a number of separate ideas, which are otherwise abstract and unconnected. In a numerical table there are as many distinct ideas given, and to be remembered, as there are sums, the order and progression of those sums, therefore, are also to be recollected by another effort of memory, while this mode unites proportion, progression, and quantity all under one simple impression of vision, and consequently one act of memory."

Cara Giaimo provides an overview of Playfair's story in "The Scottish Scoundrel Who Changed How We See Data: When he wasn't blackmailing lords and being sued for libel, William Playfair invented the pie chart, the bar graph, and the line graph," appearing in Atlas Obscura (June 28, 2016). Giaimo described Playfair as a "near-criminal rascal." He apprenticed with James Watt, of steam engine fame, failed at silversmithing, falsely claimed to have invented the semaphore telegraph, tried blackmailing a Scottish lord, sold tracts of American land he didn't actually own to French nobility, and died in poverty and obscurity. For some additional detail on Playfair's colorful life, Giaimo links to a 1997 article, "Who Was Playfair?" by Ian Spence and Howard Wainer.

But for social scientists, what's interesting is that Playfair pushed back against the style of argument of his time–mainly verbal persuasion and perhaps a few tables–and invented these graphs. For example, here's the first bar chart, showing Scotland's trading partners.

Here's an early line graph from Playfair's 1786 atlas, showing England's imports and exports to Denmark & Norway in the 18th century.

And Playfair wasn't done. In his 1801 book The Statistical Breviary, he invented the pie chart. It appears in the middle of a group of other circular charts, and shows Turkish land holdings. Moreover, Playfair hand-colored the "slices" of the pie, thus initiating the idea of color-coding. Here's the overall page, followed by a close-up of the pie chart itself.

The first pie chart, drawn among other circular charts by Playfair in 1801, and illustrating the Turkish Empire's land holdings. A closeup of the pie is available here.
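
For readers who want to play with the idea, here is a minimal modern sketch of Playfair-style charts, written in Python with matplotlib. The trade figures and land-holding shares below are made-up placeholders, not Playfair's data; the point is only to show how the bar and pie forms encode quantities as visual proportion, in the spirit of Playfair's explanation quoted above.

```python
import matplotlib.pyplot as plt

# Hypothetical trade figures (thousands of pounds) standing in for Playfair's data
partners = ["Ireland", "America", "Russia", "Portugal", "Holland"]
exports = [305, 183, 49, 120, 98]
imports = [195, 112, 110, 25, 77]

fig, (ax_bar, ax_pie) = plt.subplots(1, 2, figsize=(10, 4))

# Bar chart in the spirit of the 1786 Atlas: paired bars per trading partner
positions = range(len(partners))
ax_bar.bar([p - 0.2 for p in positions], exports, width=0.4, label="Exports")
ax_bar.bar([p + 0.2 for p in positions], imports, width=0.4, label="Imports")
ax_bar.set_xticks(list(positions))
ax_bar.set_xticklabels(partners, rotation=30)
ax_bar.set_ylabel("£ thousands")
ax_bar.set_title("Playfair-style bar chart (hypothetical data)")
ax_bar.legend()

# Pie chart in the spirit of the 1801 Statistical Breviary: color-coded shares
holdings = {"Asia": 0.55, "Europe": 0.25, "Africa": 0.20}   # illustrative shares only
ax_pie.pie(list(holdings.values()), labels=list(holdings.keys()), autopct="%1.0f%%")
ax_pie.set_title("Playfair-style pie chart (hypothetical shares)")

plt.tight_layout()
plt.show()
```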


I suspect that the invention of the line graph, bar graph, and pie chart was–like so many inventions–something that would have been accomplished by someone during this time frame, and sooner rather than later. But Playfair was first, and deserves the credit.

Homage: I ran across the Giaimo article thanks to Tyler Cowen and the always-intriguing Marginal Revolution blog.

Negative Interest Rates: Evidence and Practicalities

Seven central banks around the world have lowered the interest rate that they use to implement monetary policy to a negative rate: along with the very prominent European Central Bank and Bank of Japan, the others include the central banks of Bulgaria, Denmark, Hungary, Sweden, and Switzerland. How is this working out? When (not if) the next recession hits, are negative interest rates a tool that might be used by the US Federal Reserve? The IMF has issued a staff report on "Negative Interest Rate Policies–Initial Experiences and Assessments" (August 2017). In the Summer 2017 issue of the Journal of Economic Perspectives, Kenneth Rogoff explores the arguments for negative interest rates (as opposed to other policy options) and practical methods of moving toward such a policy in "Dealing with Monetary Paralysis at the Zero Bound" (31:3, pp. 46-77).

When (and not if) the next recession comes, monetary policy is likely to face a hard problem. For most of the last few decades, the standard response of central banks during a recession has been to reduce the policy interest rate under their control by 4-5 percentage points. For example, this is how the US Federal Reserve cut its interest rates in response to the recessions that started in 1990, 2001, and 2007.

The problem is that when (not if) the next recession hits, reducing interest rates in this traditional way will not be practical. As you can see, the policy interest rate has crept up to about 1%, but that's not high enough to allow for an interest rate cut of 4-5% without running into the "zero lower bound."

The problem of the zero lower bound seems unlikely to go away. Nominal interest rates can be divided up into the amount that reflects inflation, and the remaining "real" interest rate–and both are low. Inflation has been rock-bottom now for about 20 years, even as the economy has moved up and down, leading even Fed chair Janet Yellen to propose that economists need to study "What determines inflation?" Real interest rates have been falling, and seem likely to remain low. The Fed is slowly raising its federal funds interest rate, but there is no current prospect that it will move back to the range of, say, 4-5% or more. Thus, when (not if) the next recession hits, it will be impossible to use standard monetary tools to cut that interest rate by the usual 4-5 percentage points.

What macroeconomic policy tools will the government have when (not if) the next recession hits? Fiscal policy tools like cutting taxes or raising spending remain possible, although with the Congressional Budget Office forecasting a future of government debt rising to unsustainable levels during the next few decades, this tool may need to be used with care. Hitting the zero bound is why the Fed and other central banks turned to "quantitative easing," where the central bank buys assets like government or private debt, although this raises obvious problems of what assets to buy, how much of these assets to buy–and the likelihood of political intervention in these decisions.

Thus, some central banks have taken their policy interest rates into negative territory. As the figure shows, the Bank of Denmark went negative in 2012, while a number of others did so in 2014 and 2015.

There are a number of concerns with negative interest rates. Will negative interest rates be transmitted through the economy in a similar way to traditional reductions in interest rates? Will negative interest rates weaken the banking sector? What sort of financial innovations might happen as investors seek to avoid being affected by negative rates? The IMF staff report argues that so far, the evidence is reasonably positive:

"There is some evidence of a decline in loan and bond rates following the implementation of NIRPs [negative interest rate policies]. Banks’ profit margins have remained mostly unchanged. And there have not been significant shifts to physical cash. That said, deeper cuts are likely to entail diminishing returns, as interest rates reach their “true” lower bound (at which point agents shift into cash holdings). And pressure on banks may prove greater; especially in systems with larger shares of indexed loans and where banks compete more directly with bond markets and non-bank credit providers. … On balance, the limits to NIRPs point to the need to rely more on fiscal policy, structural reforms, and financial sector policies to stimulate aggregate demand, safeguard financial stability, and strengthen monetary policy transmission."

For those who instinctively recoil from the notion of a negative interest rate, it's perhaps useful to remember that it has occurred quite often in recent decades. Any time someone is locked into paying or receiving a fixed rate of interest, and then sees inflation move up, a negative real interest rate results. Thus, back in the 1970s and early 1980s, lots of Americans were receiving negative real interest rates if they had money in bank accounts or Treasury bonds, and were paying negative real interest rates if they already had a fixed-rate mortgage. In short, the innovation here isn't that real inflation-adjusted interest rates can be negative, but rather that a nominal interest rate is negative.
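
To make the distinction concrete, here is a small illustrative calculation of a real return when inflation outruns the nominal rate. The nominal and inflation rates below are round numbers chosen for illustration, not historical data.

```python
# Real-return arithmetic: a positive nominal rate can still mean a negative real return.

nominal_rate = 0.05      # 5% nominal interest on a savings account (assumed)
inflation = 0.09         # 9% inflation (assumed)

approx_real = nominal_rate - inflation                      # Fisher approximation
exact_real = (1 + nominal_rate) / (1 + inflation) - 1       # exact real return

print(f"approximate real rate: {approx_real:.1%}")
print(f"exact real rate: {exact_real:.2%}")
```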

It's also worth remembering that this policy interest rate is related to the everyday interest rates that people and firms pay and receive, but it's not the same. The interest rates for borrowers, for example, are also affected by underlying factors like risk and collateral. In short, negative policy interest rates do mean downward pressure on interest rates, but they don't mean that the credit card company is going to be paying you if you charge more on your credit card, or that negative interest will start eating away your home mortgage.

Thus, the existing evidence on negative interest rates to this point shows that having the policy interest rate a few tenths of a percent below zero is possible, and can be sustained for several years. It doesn't show in a direct way how banks, households, and the economy would react if negative nominal interest rates became larger and widespread through the economy.

An obvious issue with negative interest rates, and a focus of the IMF report, is what happens if people and firms decide to hold massive amounts of cash, which pays a zero interest rate, to avoid the negative interest rate. In Kenneth Rogoff's paper in the Summer 2017 issue of JEP, he makes the case for the practicality of moving gradually to a dual-currency system, where electronic money is the "real" currency and paper money trades with electronic money at a certain "exchange rate." Rogoff writes:

"The idea of one country having two different currencies with an exchange rate between them may seem implausible, but the basics are not difficult to explain. The first step in setting up a dual currency system would be for the government to declare that the “real” currency is electronic bank reserves and that all government contracts, taxes, and payments are to be denominated in electronic dollars. As we have already noted, paying negative interest on electronic money or bank reserves is a nonissue. Say then that the government wants to set a policy interest rate of negative 3 percent to combat a financial crisis. To stop a run into paper currency, it would simultaneously announce that the exchange rate on paper currency in terms of electronic bank reserves would depreciate at 3 percent per year. For example, after a year, the central bank would give only .97 electronic dollars for one paper dollar; after two years, it would give back only .94. …

"In most advanced countries, private agents are free to contract on whatever indexation scheme they prefer; this is not a condition that can be imposed by fiat. If the private sector does not convert to electronic currency, the zero bound would re-emerge since it still exists for paper currency. Finally, one must consider that after a period of negative interest rates, paper and electronic currency would no longer trade at par, which would be an inconvenience in normal times. Restoring par would require a period of paying positive interest rates on electronic reserves, which might potentially interfere with other monetary goals."
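
The arithmetic of the paper-versus-electronic exchange rate in Rogoff's example is easy to trace out, as in the sketch below. The quote's ".97 after one year, .94 after two" is consistent with either a compounding or a linear schedule; the sketch assumes simple compounding of the announced 3 percent annual depreciation.

```python
# Value of one paper dollar, measured in electronic dollars, under an announced
# 3% annual depreciation that compounds year over year (an assumption).

depreciation_rate = 0.03
for year in range(0, 6):
    paper_value_in_electronic = (1 - depreciation_rate) ** year
    print(f"year {year}: one paper dollar = {paper_value_in_electronic:.3f} electronic dollars")
```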

Rogoff recognizes that negative interest rates raise a number of practical and economic problems, including issues of regulatory, accounting, and tax policy. But from his perspective, negative interest rates are the best of the alternatives when a central bank faces the problem of a zero lower bound on interest rates. For example, quantitative easing only seems to have mild effects, while exposing the central bank to political pressures about who gets the loans from the central bank. Re-setting the central bank inflation target from 2% to 4% might help push up nominal interest rates, and thus allow those rates to be cut in a future recession while remaining above zero, but given that central banks have spent decades establishing their goal of 2% inflation in the minds and expectations of financial markets, such a shift isn't to be contemplated lightly. Looking at these and other policy options–like all countries simultaneously trying to weaken their currencies in order to boost exports–Rogoff argues that negative interest rates are the simplest and cleanest option, with the best chance of working well.

From my own point of view, negative policy interest rates are one of those subjects that literally never crossed my mind up until about 2009. When the central banks of smaller economies like Denmark and Switzerland first used negative policy interest rates, the main goal seemed to be to keep the exchange rates of their currencies from soaring. I wasn't quite ready to draw lessons for the US Federal Reserve from the Swiss National Bank or Danmarks Nationalbank. But when the Bank of Japan and the European Central Bank started employing mildly negative interest rates, and it seemed to be working without major glitches, it became clear that more serious attention needed to be paid. I remain dubious about interest rates in the range of negative 3-5%, but my reasons are less about technical economics and more about potential counterreactions.

Back in the 1970s, people put up with the idea that the inflation rate was higher than the interest on their bank account or on Treasury bonds, but the nominal interest rates they received were still positive. Maybe the public in other countries would accept a situation in which their bank accounts were eroded by 3-5% per year by negative interest rates, but I have a hard time imagining that this would fly in a US political context. In an economy where negative interest rates are common, I would also expect large financial institutions like pension funds, insurance companies, and banks to make strenuous efforts to sidestep their effects. I've reached the point where I'm willing to consider negative interest rates as a serious possibility, but I suspect that the practical problems and issues of substantially negative interest rates are at this point underestimated.

Fighting Colony Collapse Disorder: How Beekeepers Make More Bees

Bees and pollination play an important supporting role in economic discussions of how and when markets will work well.

In a 1952 article (Economic Journal, "External Economies and Diseconomies in a Competitive Situation"), James Meade suggested some problems that could arise between an apple farmer and a beekeeper. In Meade's example, if an apple farmer thought about expanding the orchard, part of the economic benefit would be that local bees could make more honey. However, the apple farmer would not benefit from the gains in honey-making, and thus would have a reduced incentive to expand the orchard. Conversely, if a beekeeper and honey-producer is considering expanding the number of bees, the apple farmer would also benefit. However, because the beekeeper would not benefit from the increased apple production, there would be a reduced incentive to increase the number of bees.

But Meade's example was hypothetical. Steven Cheung tested it in "The Fable of the Bees: An Economic Investigation" (Journal of Law and Economics, April 1973). After considering actual contracts and pricing between beekeepers and apple-producers in Washington state, he reported that in the real world, they were coordinating their efforts just fine.

I spelled out these arguments three years ago in "Do Markets Work for Bees?" (July 10, 2014). Bees and markets were in the news, because of a fear of Colony Collapse Disorder. Here's the cover of TIME magazine from August 19, 2013. By 2014, President Obama had appointed a Pollinator Health Task Force to create a National Pollinator Health Strategy, with representation from 17 different government agencies.

So here we are, three years later. How have markets adapted to the danger of "a world without bees," as the TIME magazine cover put it? Shawn Regan tells the story of "How Capitalism Saved the Bees: A decade after colony collapse disorder began, pollination entrepreneurs have staved off the beepocalypse," in the August/September issue of Reason magazine.

The short take is that Colony Collapse Disorder is real, although its causes remain a source of some dispute. The Environmental Protection Agency lists the possible causes like this:

  • Increased losses due to the invasive varroa mite (a pest of honey bees).
  • New or emerging diseases such as Israeli Acute Paralysis virus and the gut parasite Nosema.
  • Pesticide poisoning through exposure to pesticides applied to crops or for in-hive insect or mite control.
  • Stress bees experience due to management practices such as transportation to multiple locations across the country for providing pollination services. 
  • Changes to the habitat where bees forage.
  • Inadequate forage/poor nutrition.
  • Potential immune-suppressing stress on bees caused by one or a combination of factors identified above.

As Regan reports: 

"And beekeepers are still reporting above-average bee deaths. In 2016, U.S. beekeepers lost 44 percent of their colonies over the previous year, the second-highest annual loss reported in the past decade. But here's what you might not have heard. Despite the increased mortality rates, there has been no downward trend in the total number of honeybee colonies in the United States over the past 10 years. Indeed, there are more honeybee colonies in the country today than when colony collapse disorder began."

The reason is straightforward. Beekeepers have had to deal with episodes of colony collapse disorder on average every decade or so. They fight back against the bee diseases as best they can. And they create new hives. Here's Regan:

"There have been 23 episodes of major colony losses since the late 1860s. Two of the most recent bee killers are Varroa mites and tracheal mites, two parasites that first appeared in North America in the 1980s. … Beekeepers have developed a variety of strategies to combat these afflictions, including the use of miticides, fungicides, and other treatments. While colony collapse disorder presents new challenges and higher mortality rates, the industry has found ways to adapt.

"Rebuilding lost colonies is a routine part of modern beekeeping. The most common method involves splitting a healthy colony into multiple hives—a process that beekeepers call “making increase.” The new hives, known as “nucs” or “splits,” require a new fertilized queen bee, which can be purchased from a commercial queen breeder. These breeders produce hundreds of thousands of queen bees each year. A new fertilized queen typically costs about $19 and can be shipped to beekeepers overnight. (One breeder's online ad touts its queens as “very prolific, known for their rapid spring buildup, and…extremely gentle.”) As an alternative to purchasing queens, beekeepers can produce their own queens by feeding royal jelly to larvae.

"Beekeepers regularly split their hives prior to the start of pollination season or later in the summer in anticipation of winter losses. The new hives quickly produce a new brood, which in about six weeks can be strong enough to pollinate crops. Often, beekeepers can replace more bees by splitting hives than they lose over the winter, resulting in no net loss to their colonies.

"Another way to rebuild a colony is to purchase “packaged bees” to replace an empty hive. (A 3-pound package typically costs about $90 and includes roughly 12,000 worker bees and a fertilized queen.) A third method is to replace an older queen with a new one. A queen bee is a productive egg-layer for one or two seasons; after that, replacing her will reinvigorate the health of the hive. If the new queen is accepted—as she often is when an experienced beekeeper installs her—the hive can be productive right away.

"Replacing lost colonies by splitting hives is surprisingly straightforward and can be accomplished in about 20 minutes. New queens and packaged bees are also inexpensive. If a commercial beekeeper loses 100 of his hives, replacing them would come at a cost—the price of each new queen, plus the time required to split the existing hives—but it is unlikely to spell disaster. And because new hives can be up and running in short order, there is little or no lost time for pollination or honey production. As long as some healthy hives remain that can be used for splitting, beekeepers can quickly and easily rebuild lost colonies."
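
A rough back-of-the-envelope version of that replacement cost is sketched below, using the prices quoted in Regan's article. The labor time valuation is an added assumption for illustration, not a figure from the article.

```python
# Approximate cost of rebuilding 100 lost hives by splitting versus buying packages.

lost_hives = 100
queen_price = 19.0            # per fertilized queen, from the article
package_price = 90.0          # per 3-pound package of bees, from the article
minutes_per_split = 20.0      # from the article
assumed_wage_per_hour = 25.0  # assumption, not from the article

split_cost = lost_hives * (queen_price + (minutes_per_split / 60.0) * assumed_wage_per_hour)
package_cost = lost_hives * package_price

print(f"rebuild by splitting healthy hives: about ${split_cost:,.0f}")
print(f"rebuild by buying packaged bees:    about ${package_cost:,.0f}")
```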

Of course, there are still legitimate concerns about the health of wild bees, and their role in natural ecosystems. But it seems fairly clear that the buzz over how colony collapse disorder threatened an imminent bee extinction–"a world without bees" and "beemageddon" and all the rest–was grossly exaggerated. As the EPA reports:

"Once thought to pose a major long term threat to bees, reported cases of CCD have declined substantially over the last five years. The number of hives that do not survive over the winter months – the overall indicator for bee health – has maintained an average of about 28.7 percent since 2006-2007 but dropped to 23.1 percent for the 2014-2015 winter. While winter losses remain somewhat high, the number of those losses attributed to CCD has dropped from roughly 60 percent of total hives lost in 2008 to 31.1 percent in 2013; in initial reports for 2014-2015 losses, CCD is not mentioned."

For more detail on economic adaptations to colony collapse disorder, and how actions by beekeepers have kept any economic losses very small, a useful starting point is the January 2016 working paper, "Colony Collapse and the Economic Consequences of Bee Disease: Adaptation to Environmental Change," by Randal R. Rucker, Walter N. Thurman, and Michael Burgett.

US Public Firm Agonistes

The number of shareholder-owned US corporations is in steep decline, falling from 7,507 in 1997 to 3,766 by 2015. Thus, Kathleen M. Kahle and René M. Stulz ask "Is the US Public Corporation in Trouble?" in their article in the Summer 2017 Journal of Economic Perspectives (31:3, pp. 67-88). (Full disclosure: I've been Managing Editor of JEP since its inception in 1987, and thus may be predisposed to believe that the articles appearing there are worth reading! All articles in JEP, from the most recent issue back to the first, are freely available online compliments of its publisher, the American Economic Association.)

The blue line in this figure shows the number of publicly traded US corporations. The bars show (inflation-adjusted) market capitalization–that is, the total value if you take all the shares of all the companies and multiply by the price of the shares.

Kahle and Stulz consider the evidence on US corporations along a number of dimensions. Here are a few of their points that caught my eye:

  • "In 1975, the US economy has 22.4 publicly listed firms per million inhabitants. In 2015, it has just 11.7 listed firms per million inhabitants."
  • The main reason for the decline in the number of firms is mergers of existing firms–combined with a slowdown in the rate of new firms being created. "[T]he number of initial public offerings decreases dramatically after 2000, such that the average yearly number of initial public offerings after 2000 is roughly one-third of the average from 1980 to 2000 …"
  • The average age of a public firm was 12.2 years in 1995, and 18.4 years in 2015.
  • "A simple but rough benchmark is to compute the percentage of listed firms that are small, defined as having a market capitalization of less than $100 million in 2015 dollars. In 1975, 61.5 percent of listed firms are small … This percentage peaks at 63.2 percent in 1990, and then falls. The share of small, listed firms dropped all the way to 19.1 percent of listed firms in 2013, before rebounding slightly to 22.6 percent in 2015. In other words, small listed firms are much scarcer today than 20 or 40 years ago."
  • "Listed firms have a much lower average ratio of capital expenditures to assets and a much higher ratio of R&D expenditures to assets in 2015 than they do in 1975. Figure 2 shows the evolution of average R&D to assets over time."
  • "[I]n 1975, 50 percent of the total earnings of public firms is earned by the 109 top-earning firms; by 2015, the top 30 firms earn 50 percent of the total earnings of the US public firms. Even more striking, in results not separately tabulated here, we find that the earnings of the top 200 firms by earnings exceed the earnings of all listed firms combined in 2015, which means that the combined earnings of the firms not in the top 200 are negative. In 1975, the 94 largest firms own half of the assets of US public firms, but 35 do so in 2015. Finally, 24 firms account for half of the cash holdings of public firms in 1975, but 11 firms do in 2015." (A sketch of how this sort of concentration statistic is computed appears after this list.)
  • "None of our leverage measures are elevated at the end of the sample period in 2015, suggesting that concerns about corporate leverage are less relevant for public firms now than at other times during the sample period. Leverage is even less of an issue now because interest rates are extremely low since the credit crisis. Hence, interest paid as a percentage of assets has never been as low during the sample period as in recent years …"
  • The share of firms paying dividends dropped substantially in the 1980s and 1990s, to the point where one occasionally read about "the death of the dividend." However, firms have been paying out more to shareholders through the mechanisms of repurchasing shares, and so the payouts of firms as a share of their net income have risen substantially since 2000.
  • "These explanations imply that there are fewer public firms both because it has become harder to succeed as a public firm and also because the benefits of being public have fallen. As a result, firms are acquired rather than growing organically. This process results in fewer thriving small public firms that challenge larger firms and eventually succeed in becoming large. A possible downside of this evolution is that larger firms may be able to worry less about competition, can become more set in their ways, and do not have to innovate and invest as much as they would with more youthful competition. Further, small firms are not as ambitious and often choose the path of being acquired rather than succeeding in public markets. With these possible explanations, the developments we document can be costly, leading to less investment, less growth, and less dynamism. … It may be in the best interests of shareholders for firms to behave that way, but the end result is likely to leave us with fewer public firms, who gradually become older, slower, and less ambitious. Consequently, fewer new private firms are born, as the rewards for entrepreneurship are not as large. And those firms that are born are more likely to lack ambition, as they aim to be acquired rather than to conquer the world."
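
As mentioned in the list above, here is a small sketch of how one might compute a concentration statistic like "how many top firms account for half of total earnings." The firm-level earnings here are synthetic draws from a heavy-tailed distribution, purely for illustration; with real firm-level earnings data, the same few lines would apply.

```python
import numpy as np

rng = np.random.default_rng(1)
earnings = rng.pareto(a=1.5, size=3766)   # synthetic earnings for roughly 3,766 firms

sorted_earnings = np.sort(earnings)[::-1]              # largest earners first
cumulative_share = np.cumsum(sorted_earnings) / sorted_earnings.sum()
firms_for_half = int(np.searchsorted(cumulative_share, 0.5) + 1)

print(f"top {firms_for_half} firms account for half of total earnings")
```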
In 1962, Richard Nixon announced that he was leaving politics and told the assembled journalists, with whom he had had a relationship that could politely be described as \”adversarial\”: \”Just think how much you\’re going to be missing. You don\’t have Nixon to kick around any more.\” Of course, Nixon came back, and I expect that the US public corporation will come back, as well. But even for those who like to kick around the public corporation, these patterns should offer some cause for concern. The public corporation, for all its warts and flaws, has been a primary engine of US economic growth for more than a century. When it no longer makes economic sense for most small firms to become public corporations,  when the number of firms is being continually depleted by mergers, and when large firms are often paying out a larger share of their income or hoarding cash rather than investing, these are all legitimate causes for public concern. 
The Kahle-Stulz paper is the first of four in a "Symposium about the Modern Corporation" in the Summer 2017 issue of JEP, and those interested in the subject will want to check out the other papers, too.
Lucian A. Bebchuk, Alma Cohen, and Scott Hirst discuss "The Agency Problems of Institutional Investors" (pp. 89-102). The traditional problem of corporate governance has been what economists call "the separation of ownership and control," which refers to the fact that while shareholders technically own corporations, a very large number of relatively small shareholders are likely to have a hard time actually controlling the corporation. Instead, top executives and boards of directors may be able to cooperate in back-scratching arrangements that make their own lives easier. However, in recent years, large blocks of stock are owned by "institutional" investors, like the giant stock market index funds. Unlike the much smaller shareholders of the past, the large institutional investors have some power to exert control over large companies–but they may not have much incentive to do so. After all, one large indexed mutual fund gets no advantage over other large indexed mutual funds by exercising oversight over corporations. No matter what one index fund does, or doesn't do, investors in index funds just get the overall average market outcome.

Luigi Zingales offers some thoughts in "Towards a Political Theory of the Firm" (pp. 113-30). He writes of the dangers of a "Medici vicious circle," which can be captured in the slogan: “Money to get power. Power to protect money.” He writes:

\”The ideal state of affairs is a “goldilocks” balance between the power of the state and the power of firms. If the state is too weak to enforce property rights, then firms will either resort to enforcing these rights by themselves (through private violence) or collapse. If a state is too strong, rather than enforcing property rights it will be tempted to expropriate from firms. When firms are too weak vis-à-vis the state, they risk being expropriated, if not formally (with a transfer of property rights to the government), then substantially (when the state demands a large portion of the returns to any investment). But when firms are too strong vis-à-vis the state, they may shape the definition of property rights and its enforcement in their own interest and not in the interest of the public at large, as in the Mickey Mouse Copyright Act example. …

\”While the perfect “goldilocks” balance is an unattainable ideal, given that ongoing events will expose the tradeoffs in any given approach, the countries closest to this ideal are probably the Scandinavian countries today and the United States in the second part of the twentieth century. Crucial to the success of a goldilocks balance is a strong administrative state, which operates according to the principal of impartiality (Rothstein 2011) and a competitive private sector economy.\”

Anat R. Admati provides "A Skeptical View of Financialized Corporate Governance" (pp. 131-50). She points out that the modern corporation has often sought to provide appropriate incentives to top corporate executives by linking their compensation to various financial measures, like stock market prices. However, she argues that this approach can in many cases provide misguided incentives, and does not seem to limit or hinder an ongoing parade of corporate scandals. From the abstract:

\”Managerial compensation typically relies on financial yardsticks, such as profits, stock prices, and return on equity, to achieve alignment between the interests of managers and shareholders. But financialized governance may not actually work well for most shareholders, and even when it does, significant tradeoffs and inefficiencies can arise from the conflict between maximizing financialized measures and society\’s broader interests. Effective governance requires that those in control are accountable for actions they take. However, those who control and benefit most from corporations\’ success are often able to avoid accountability. The history of corporate governance includes a parade of scandals and crises that have caused significant harm. After each, most key individuals tend to minimize their own culpability. Common claims from executives, boards of directors, auditors, rating agencies, politicians, and regulators include \”we just didn\’t know,\” \”we couldn\’t have predicted,\” or \”it was just a few bad apples.\” Economists, as well, may react to corporate scandals and crises with their own version of \”we just didn\’t know,\” as their models had ruled out certain possibilities. Effective governance of institutions in the private and public sectors should make it much more difficult for individuals in these institutions to get away with claiming that harm was out of their control when in reality they had encouraged or enabled harmful misconduct, and ought to have taken action to prevent it.\”

Digitization of Media Industries: Quantity and Quality

Digitization has revolutionized media industries. The equipment needed to produce a movie, television show, or musical album has gotten remarkably cheaper. The cost of distributing video, sound, or text has dropped dramatically, too, in some cases to nearly zero. In addition, the power of the "gatekeepers" who used to determine what content would be broadly distributed–producers and publishers–has been substantially diminished.

All revolutions, technological and otherwise, can lead to either sunny or gloomy predictions. For digitization of media industries, the gloomy prediction sounded like this: High-quality producers will see greatly diminished returns, as their work is pirated and redistributed. As they fade, consumers will find that they have greatly expanded access to cheap and low-quality producers, who will flood these markets with drivel and trash. As a precursor of what might come, consider the decline in the total value of shipments for the US music industry after the Napster file-sharing service appeared circa 1999.

Joel Waldfogel makes the case for a more positive view in "How Digitization Has Created a Golden Age of Music, Movies, Books, and Television," in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 195-214). (Full disclosure: I've been Managing Editor of JEP since its inception in 1987, and thus may be predisposed to believe that the articles appearing there are worth reading! All articles in JEP, from the most recent issue back to the first, are freely available online compliments of its publisher, the American Economic Association.)

Digitization of media industries has increased output. For movies, Waldfogel writes: "[T]he number of new motion pictures produced in the United States rose from about 500 features in 1990 to 1,200 in 2000, and by 2010 had risen to nearly 3,000. Growth in US-origin documentaries is even larger, and the patterns for other countries are similar …" Back in 1990, the big TV networks produced about 25 new shows per year; now that many consumers have access to 150 channels and more, the number of new TV shows is about 250 per year. The number of new popular songs was about 50,000 per year in the 1980s, and is now headed toward 400,000 per year. The number of self-published books is skyrocketing, now approaching 500,000 per year.

Consumers can benefit from these changes in several ways. One is lower-cost access to the content that would have existed anyway, even without digitization. Another is the "long tail," which refers to the situation in which greater entry allows the production and availability of highly specialized content, so that consumers who like a certain specific niche of book or music or show have access to it. As Waldfogel writes: "The idea [of the long tail] is well-illustrated by a comparison between the welfare consumers derived from, say, the 50,000 titles available in their local book stores compared with the 1,000,000 titles available to them from a retailer like Amazon that effectively has infinite shelf space. While each of the additional 950,000 titles has low demand, the sum of the incremental welfare delivered by many small things may be large."
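
To get a feel for how "many small things" can add up, here is a minimal back-of-the-envelope sketch in Python. The Zipf-style demand curve, the exponent, and the title counts are hypothetical assumptions of my own, chosen only for illustration; they are not Waldfogel's model or data.

# Illustrative sketch only: assumes a Zipf-like (power-law) distribution of
# per-title demand, NOT Waldfogel's data or estimates. It shows how a huge
# number of individually tiny niche titles can still add up to a meaningful
# share of total consumption.

HEAD_TITLES = 50_000     # roughly what a well-stocked physical bookstore carries
ALL_TITLES = 1_000_000   # titles available from an "infinite shelf space" retailer
ALPHA = 1.0              # hypothetical Zipf exponent

def zipf_weight(rank, alpha=ALPHA):
    """Relative demand for the title at a given popularity rank."""
    return 1.0 / rank ** alpha

head = sum(zipf_weight(r) for r in range(1, HEAD_TITLES + 1))
tail = sum(zipf_weight(r) for r in range(HEAD_TITLES + 1, ALL_TITLES + 1))
total = head + tail

print(f"Share of demand from the top {HEAD_TITLES:,} titles: {head / total:.1%}")
print(f"Share of demand from the 950,000 long-tail titles: {tail / total:.1%}")

Under these made-up assumptions, each title in the long tail attracts negligible demand on its own, yet the tail as a whole accounts for roughly a fifth of total demand, which is the flavor of Waldfogel's point.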

But Waldfogel emphasizes another factor that may be even more important, which is based on the idea that the gatekeepers in traditional media industries had imperfect judgment. Year after year, lots of the content that they approved turned out not to be a hit, or in some cases not to be even remotely popular. Thus, it's not just that additional entry into media industries can cater to niche tastes: in a number of cases, the additional entry into media industries is leading to the production and distribution of content that a number of consumers view as higher quality.

This argument is a delicate one to make in purely economic terms, because measuring the "quality" of what consumers prefer is a slippery business, but Waldfogel lines up a number of indicators which point in the direction of this conclusion. For example, he collects evidence from "best-of" lists produced by movie and music critics, as well as evidence from sites that tally popular opinion. He also looks at whether people are choosing to spend more of their money on movies and music from small-scale and independent producers, on whether more best-selling books are self-published, and so on. Here's a figure showing some of the patterns.

The overall theme that emerges is that new entry into media industries from digitization is not just a matter of a "long tail" of niche products. Instead, some proportion of the content that would have been screened out by the traditional media gatekeepers is finding a large and receptive audience. There's no doubt that as media output increases, a lot of crud gets produced. But the options that people choose and prefer do not seem to be decreasing in quality, and may well be increasing.

This argument further implies that consumers of media are not finding themselves especially overwhelmed by the range of new choices available. Herbert Simon wrote back in 1971: "What information consumes is rather obvious: it consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information resources that might consume it." But it turns out that online reviews and social media offer a form of gatekeeping for new media products–both for specialized niche products and for those aimed at a mass audience–in a way that lets consumers sort through the options. Indeed, many consumers seem to find the sorting process through interactive social media to be fun in itself.

Kindleberger on International Use of the US Dollar and the English Language

In a world of many languages, it is efficient if everyone shares a second language. In a world of many currencies, it is efficient if everyone shares a second currency. In the current world economy, that common second language has been English, and the common second currency has been the US dollar, for a half-century now. In August 1967, Charles P. Kindleberger made this point in an essay called "The Politics of International Money and World Language," published by the economics department at Princeton University as #61 in a series of "Essays in International Finance."
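
The efficiency claim rests on simple pair-counting arithmetic: with a common second language or vehicle currency, each community needs a bridge only to that one vehicle, rather than a separate bridge to every other community. The short sketch below is my own illustration of that arithmetic, not a calculation from Kindleberger's essay.

# Hypothetical illustration (not from Kindleberger): count the bilateral
# "bridges" (dictionaries, currency-exchange markets) needed with and
# without a common second language or vehicle currency.

def pairs_without_vehicle(n):
    """Every pair of languages/currencies needs its own bridge: n*(n-1)/2."""
    return n * (n - 1) // 2

def pairs_with_vehicle(n):
    """Everyone bridges only to the common second language/currency: n-1."""
    return n - 1

for n in (10, 50, 150):
    print(f"{n:>4} languages or currencies: "
          f"{pairs_without_vehicle(n):>6} bilateral bridges, versus "
          f"{pairs_with_vehicle(n):>4} with a common vehicle")

For 150 currencies, that is 11,175 bilateral markets versus 149, which is why trade and finance tend to converge on a single vehicle currency once one is available.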

As Kindleberger points out in the essay, debates over a widely shared second language or second currency are inevitably controversial in political terms, because culture and prestige are at stake. He writes: "The basic question that will be left unanswered is whether economic efficiency is less important in these matters than political appearances, which many other observers would probably call political reality. It is possible that it is, but economists are accustomed to having doubts. At the least, I would insist that there is a trade-off between economic inefficiency and political appearances which must be explicitly evaluated to see whether the cost in one is worth the benefit in the other."

Here\’s a dose of Kindleberger:

However, the analogy which interests me most is that between the use of the dollar in international economics and the use of the English language in international intercourse more generally. Analogies are tempting, and dangerous because frequently misleading. But the dollar "talks," and English is the "coin" of international communication. The French like neither fact, which is understandable. But to seek to use newly-created international money or a newly-created international language would be patently inefficient.

Languages are ordered hierarchically. Like sterling, French used to dominate. Like the dollar, English does now. Frenchmen must learn English; it is not vital for Anglo-Saxons to learn French. 

The analogy with the language quarrel in Belgium is exact. The Flemish must learn French, but the Walloons, despite their constitutional edict of equality between the languages and the legislative edict which requires civil servants to do so, do not learn or use Dutch. The Flemish are offended and begin to insist on Flemish, exactly as France has insisted that its representatives at international conferences, even when they know English perfectly, must speak only French and insist on all speeches in English being translated into French. The transactions costs of translation, including the misunderstanding in communication and the waste of time, are even more evident than the transactions costs of converting gold to dollars and dollars to gold, when it is dollars—not gold—that are necessary to transactions.  …

It is easy to imagine what is implied in a "sabotage" of French as a working language at the United Nations. Someone—presumably an Anglo-Saxon—at a working-committee meeting, observing that all the Francophones had a good command of English, suggested that the translation into French from English and possibly from French into English be dispensed with in the interest of efficiency. The transactions (translation) costs of simultaneous but especially of consecutive translation are high in efficiency, owing to loss of time or accuracy and of intimacy in two-way communication. It is highly desirable for Americans and British to know enough French, German, Italian, Spanish, and perhaps Russian to be able to receive in those languages, or some of them, even if they transmit only in English. But world efficiency is achieved when all countries learn the same second language, just as when the different nationalities in India use English as a lingua franca. … One's own currency is the native language, and foreign transactions are carried on in the vehicle currency of a common second language, the dollar.

It is hard on French, which used to be the language of diplomacy, to have lost this distinction; but it is a fact. In scientific writing, as in communication between international airplane and control tower, English is the universal language, except for the rescue call "Mayday" which … would have put in French as "M'aidez." But a common second language is efficient, rather than nationalist or imperialist.

The power of the dollar and the power of English represent la force des choses and not la force des hommes. This is not to gainsay the existence of unattractive nationals abroad—from virtually all countries. I recall particularly a Chicago Tribune reporter who got through Europe with two words: "Whiskey" and "Steak." But it is not nationalism which spreads the use of the dollar and the use of English; it is the ordinary search of the world for short cuts in getting things done. …

The selection of the dollar as the lingua franca of international monetary arrangements, then, is not the work of men but of circumstances. Pointing to its utility involves positive, not normative, economics. Students of international politics must deplore the nationalistic overtones and would like to see the ultimate bastions of the system, and the means of producing policy, international. 

But the analogy has one more aspect. The futility of a synthetic, deliberately created international medium of exchange is suggested by the analogy with Esperanto. This still commands a doughty band of true believers, but their legions have thinned. A linguistics expert states that Esperanto suffers from being inadequately planned as an international language. If he worked on it, he could devise a common language which would be much better suited to the task. Our instinct tells us that this is equally applicable to the myriad of [international monetary] plans—Triffin, Stamp, Postuma, Roosa, Bernstein, Modigliani-Kenen, and all the rest—all of which have strengths (and weaknesses) but also share the basic weakness that they do not grow out of the day-to-day life of markets, as the dollar standard based on New York has done, and likewise the Eurodollar. …

At the other extreme, the French view that the international monetary system should re-enthrone gold as the international medium of exchange resembles an appeal for a return to Latin as the lingua franca of international discourse, an appeal not without its nostalgic value for those who admire ancient Rome and medieval culture, but one that is evidently swimming against the stream of history, as the increasingly rapid abandonment of Latin in the Catholic Church testifies.

Finally, the many academic economists who recommend separating international money and capital markets by a system of flexible exchange rates between national currencies in effect call for a return to Babel, with foreign languages used by none save professional interpreters. This maximizes transactions costs and minimizes international discourse. A compromise between this and fixed exchange rates is possible: with separate dollar, sterling (or dollar-sterling), franc, and ruble areas, each with many countries having fixed exchange rates and speaking a common area language but with flexible exchange rates and full formal translation between them. …  For those who like neat Cartesian designs, it has much to recommend it. 

But how can one make such a division of the world among the great powers into spheres of influence stick, even if one has no misgivings about its morality? An earlier paper by Mundell raised the central issue, "What is the Optimum Currency Area?" and the same question could be put for languages. The rapid shrinkage of the world, however, makes it impractical to try to maintain traditional currency and language areas without infiltration of a single language and currency into a wider range of human activity. The Organization of Petroleum Exporting Countries (OPEC), consisting of Arab and Spanish-speaking states, inevitably reckons in dollars and discourses in English, and there is little that the statesmen of the major powers can do to prevent a succession of hundreds of similar steps toward reducing the costs of economic and social intercourse. In positive, not normative, terms the optimum currency and language area is rapidly expanding to the world. …

The ironic and politically very damaging fact is that the European language is English, or perhaps one should say American, just as the European unit in monetary affairs is the dollar. This is because the optimum language and currency areas today are not countries, nor continents, but the world; and because, for better or worse—and opinions differ on this—the choice of which language or which currency is made not on merit, or moral worth, but on size.

It\’s interesting to me that Kindleberger was making this point about the dominance of the US dollar and the English language a half-century ago, and this part of his argument seems to have stood the test of time. One of the most common questions I get asked in public forums is \”How long before the US dollar loses its global preeminence, and will be forced to share the world economic stage with the Chinese yuan,  the euro, the Japanese yen, and others. Kindleberger\’s meditation offers an answer to that question, which is roughly \”not very soon.\”