Negative Interest Rates: Evidence and Practicalities

Seven central banks around the world have lowered the interest rate that they use to implement monetary policy to a negative rate: along with the very prominent European Central Bank and Bank of Japan, the others include the central banks of Bulgaria, Denmark, Hungary, Sweden, and Switzerland. How is this working out? When (not if) the next recession hits, are negative interest rates a tool that might be used by the US Federal Reserve? The IMF has issued a staff report on "Negative Interest Rate Policies–Initial Experiences and Assessments" (August 2017). In the Summer 2017 issue of the Journal of Economic Perspectives, Kenneth Rogoff explores the arguments for negative interest rates (as opposed to other policy options) and practical methods of moving toward such a policy in "Dealing with Monetary Paralysis at the Zero Bound" (31:3, pp. 47-66).

When (and not if) the next recession comes, monetary policy is likely to face a hard problem. For most of the last few decades, the standard response of central banks during a recession has been to reduce the policy interest rate under their control by 4-5 percentage points. For example, this is how the US Federal Reserve cut its interest rates in response to the recessions that started in 1990, 2001, and 2007.

The problem is that when (not if) the next recession hits, reducing interest rates in this traditional way will not be practical. As you can see, the policy interest rate has crept up to about 1%, but that's not high enough to allow for an interest rate cut of 4-5 percentage points without running into the "zero lower bound."

The problem of the zero lower bound seems unlikely to go away. A nominal interest rate can be divided up into the amount that reflects inflation, and the remaining "real" interest rate–and both are low. Inflation has been rock-bottom now for about 20 years, even as the economy has moved up and down, leading even Fed chair Janet Yellen to propose that economists need to study "What determines inflation?" Real interest rates have been falling, and seem likely to remain low. The Fed is slowly raising its federal funds interest rate, but there is no current prospect that it will move back to the range of, say, 4-5% or more. Thus, when (not if) the next recession hits, it will be impossible to use standard monetary tools to cut that interest rate by the usual 4-5 percentage points.
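The zero-bound arithmetic in this paragraph can be sketched in a few lines (a rough illustration of my own; the specific rate values are assumptions, not figures from the post):

```python
# A rough sketch of why the zero lower bound binds. The Fisher
# decomposition splits a nominal rate into a real rate plus inflation;
# the numeric values below are illustrative assumptions.

def nominal_rate(real_rate: float, inflation: float) -> float:
    """Approximate Fisher identity: nominal = real + inflation."""
    return real_rate + inflation

# With inflation around 2% and a slightly negative real rate, the
# policy rate sits near the roughly 1% level mentioned above:
policy_rate = nominal_rate(real_rate=-0.01, inflation=0.02)  # about 0.01

# The standard recession response is a cut of 4-5 percentage points,
# which would push the rate well below zero:
after_cut = policy_rate - 0.045
print(after_cut < 0)  # True: the traditional cut is infeasible
```

With both components of the nominal rate low, no plausible combination leaves room for the usual 4-5 point cut.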

What macroeconomic policy tools will the government have when (not if) the next recession hits? Fiscal policy tools like cutting taxes or raising spending remain possible, although with the Congressional Budget Office forecasting a future of government debt rising to unsustainable levels during the next few decades, this tool may need to be used with care. Hitting the zero bound is why the Fed and other central banks turned to "quantitative easing," where the central bank buys assets like government or private debt, although this raises obvious problems of what assets to buy, how much of these assets to buy–and the likelihood of political intervention in these decisions.

Thus, some central banks have taken their policy interest rates into negative territory. As the figure shows, the Bank of Denmark went negative in 2012, while a number of others did so in 2014 and 2015.

There are a number of concerns with negative interest rates. Will negative interest rates be transmitted through the economy in a similar way to traditional reductions in interest rates? Will negative interest rates weaken the banking sector? What sort of financial innovations might happen as investors seek to avoid being affected by negative rates? The IMF staff report argues that so far, the evidence is reasonably positive:

"There is some evidence of a decline in loan and bond rates following the implementation of NIRPs [negative interest rate policies]. Banks’ profit margins have remained mostly unchanged. And there have not been significant shifts to physical cash. That said, deeper cuts are likely to entail diminishing returns, as interest rates reach their “true” lower bound (at which point agents shift into cash holdings). And pressure on banks may prove greater, especially in systems with larger shares of indexed loans and where banks compete more directly with bond markets and non-bank credit providers. … On balance, the limits to NIRPs point to the need to rely more on fiscal policy, structural reforms, and financial sector policies to stimulate aggregate demand, safeguard financial stability, and strengthen monetary policy transmission."

For those who instinctively recoil from the notion of a negative interest rate, it's perhaps useful to remember that it has occurred quite often in recent decades. Any time someone is locked into paying or receiving a fixed rate of interest, and then sees inflation move up, a negative real interest rate results. Thus, back in the 1970s and early 1980s, lots of Americans were receiving negative interest rates if they had money in bank accounts or Treasury bonds, and were paying negative interest rates if they already had a fixed-rate mortgage. In short, the innovation here isn't that real inflation-adjusted interest rates can be negative, but rather that a nominal interest rate is negative.

It's also worth remembering that this policy interest rate is related to the everyday interest rates that people and firms pay and receive, but it's not the same. The interest rates for borrowers, for example, are also affected by underlying factors like risk and collateral. In short, negative policy interest rates do mean downward pressure on interest rates, but they don't mean that the credit card company is going to be paying you if you charge more on your credit card, or that negative interest will start eating away your home mortgage.

Thus, the existing evidence on negative interest rates to this point shows that having the policy interest rate be a few tenths of a percentage point below zero is possible, and can be sustained for several years. It doesn't show in a direct way how banks, households, and the economy would react if negative nominal interest rates became larger and more widespread through the economy.

An obvious issue with negative interest rates, and a focus of the IMF report, is what happens if people and firms decide to hold massive amounts of cash, which pays a zero interest rate, to avoid the negative interest rate. In his paper in the Summer 2017 issue of JEP, Kenneth Rogoff makes the case for the practicality of moving gradually to a dual-currency system, where electronic money is the "real" currency and paper money trades with electronic money at a certain "exchange rate." Rogoff writes:

"The idea of one country having two different currencies with an exchange rate between them may seem implausible, but the basics are not difficult to explain. The first step in setting up a dual currency system would be for the government to declare that the “real” currency is electronic bank reserves and that all government contracts, taxes, and payments are to be denominated in electronic dollars. As we have already noted, paying negative interest on electronic money or bank reserves is a nonissue. Say then that the government wants to set a policy interest rate of negative 3 percent to combat a financial crisis. To stop a run into paper currency, it would simultaneously announce that the exchange rate on paper currency in terms of electronic bank reserves would depreciate at 3 percent per year. For example, after a year, the central bank would give only .97 electronic dollars for one paper dollar; after two years, it would give back only .94. …

"In most advanced countries, private agents are free to contract on whatever indexation scheme they prefer; this is not a condition that can be imposed by fiat. If the private sector does not convert to electronic currency, the zero bound would re-emerge since it still exists for paper currency. Finally, one must consider that after a period of negative interest rates, paper and electronic currency would no longer trade at par, which would be an inconvenience in normal times. Restoring par would require a period of paying positive interest rates on electronic reserves, which might potentially interfere with other monetary goals."
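Rogoff's depreciation arithmetic can be checked with a quick sketch (the function name is mine; the 3 percent rate and the .97/.94 values come from the quoted passage):

```python
# Verify the exchange-rate arithmetic in Rogoff's example: paper
# currency depreciating at 3% per year against electronic reserves.

def electronic_per_paper(depreciation: float, years: int) -> float:
    """Electronic dollars the central bank gives per paper dollar."""
    return (1.0 - depreciation) ** years

print(round(electronic_per_paper(0.03, 1), 2))  # 0.97 after one year
print(round(electronic_per_paper(0.03, 2), 2))  # 0.94 after two years
```

Compounding at 3 percent per year gives 0.97 after one year and 0.97² ≈ 0.94 after two, matching the figures in the quote.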

Rogoff recognizes that negative interest rates raise a number of practical and economic problems, including issues of regulatory, accounting, and tax policy. But from his perspective, negative interest rates are the best of the alternatives when a central bank faces the problem of a zero lower bound on interest rates. For example, quantitative easing seems to have only mild effects, while exposing the central bank to political pressures about who gets the loans from the central bank. Resetting the central bank inflation target from 2% to 4% might help push up nominal interest rates, and thus allow those rates to be cut in a future recession while remaining above zero, but given that central banks have spent decades establishing their goal of 2% inflation in the minds and expectations of financial markets, such a shift isn't to be contemplated lightly. Looking at these and other policy options–like all countries simultaneously trying to weaken their currencies in order to boost exports–Rogoff argues that negative interest rates are the simplest and cleanest option, with the best chance of working well.

From my own point of view, negative policy interest rates are one of those subjects that literally never crossed my mind up until about 2009. When the central banks of smaller economies like Denmark and Switzerland first used negative policy interest rates, the main goal seemed to be to assure that the exchange rates of their currencies didn't soar. I wasn't quite ready to draw lessons for the US Federal Reserve from the Swiss National Bank or Danmarks Nationalbank. But when the Bank of Japan and the European Central Bank started employing mildly negative interest rates, and it seemed to be working without major glitches, it became clear that more serious attention needed to be paid. I remain dubious about interest rates in the range of negative 3-5%, but my reasons are less about technical economics and more about potential counterreactions.

Back in the 1970s, people put up with the idea that the inflation rate was higher than the interest on their bank account or on Treasury bonds, but the nominal interest rates they received were still positive. Maybe the public in other countries would accept a situation in which their bank accounts were eroded by 3-5% per year by negative interest rates, but I have a hard time imagining that this would fly in a US political context. In an economy where negative interest rates are common, I would also expect large financial institutions like pension funds, insurance companies, and banks to make strenuous efforts to sidestep their effects. I've reached the point where I'm willing to consider negative interest rates as a serious possibility, but I suspect that the practical problems and issues of substantially negative interest rates are at this point underestimated.

Fighting Colony Collapse Disorder: How Beekeepers Make More Bees

Bees and pollination play an important supporting role in economic discussions of how and when markets work well.

In a 1952 article in the Economic Journal, "External Economies and Diseconomies in a Competitive Situation," James Meade suggested some problems that could arise between an apple farmer and a beekeeper. In Meade's example, if an apple farmer thought about expanding the orchard, part of the economic benefit would be that local bees could make more honey. However, the apple farmer would not benefit from the gains in honey-making, and thus would have a reduced incentive to expand the orchard. Conversely, if a beekeeper and honey-producer is considering expanding the number of bees, the apple farmer would also benefit. However, because the beekeeper would not benefit from the increased apple production, there would be a reduced incentive to increase the number of bees.

But Meade's example was hypothetical. In "The Fable of the Bees: An Economic Investigation" (Journal of Law and Economics, April 1973), Steven Cheung considered actual contracts and pricing between beekeepers and apple-producers in Washington state, and reported that in the real world, they were coordinating their efforts just fine.

I spelled out these arguments three years ago in "Do Markets Work for Bees?" (July 10, 2014). Bees and markets were in the news, because of a fear of Colony Collapse Disorder, which made the cover of TIME magazine on August 19, 2013. By 2014, President Obama had appointed a Pollinator Health Task Force to create a National Pollinator Health Strategy, with representation from 17 different government agencies.


So here we are, three years later. How have markets adapted to the danger of "a world without bees," as the TIME magazine cover put it? Shawn Regan tells the story of "How Capitalism Saved the Bees: A decade after colony collapse disorder began, pollination entrepreneurs have staved off the beepocalypse," in the August/September 2017 issue of Reason magazine.

The short take is that Colony Collapse Disorder is real, although its causes remain a source of some dispute. The Environmental Protection Agency lists the possible causes like this:

  • Increased losses due to the invasive varroa mite (a pest of honey bees).
  • New or emerging diseases such as Israeli Acute Paralysis virus and the gut parasite Nosema.
  • Pesticide poisoning through exposure to pesticides applied to crops or for in-hive insect or mite control.
  • Stress bees experience due to management practices such as transportation to multiple locations across the country for providing pollination services. 
  • Changes to the habitat where bees forage.
  • Inadequate forage/poor nutrition.
  • Potential immune-suppressing stress on bees caused by one or a combination of factors identified above.

As Regan reports: 

"And beekeepers are still reporting above-average bee deaths. In 2016, U.S. beekeepers lost 44 percent of their colonies over the previous year, the second-highest annual loss reported in the past decade. But here's what you might not have heard. Despite the increased mortality rates, there has been no downward trend in the total number of honeybee colonies in the United States over the past 10 years. Indeed, there are more honeybee colonies in the country today than when colony collapse disorder began."

The reason is straightforward. Beekeepers have had to deal with episodes of colony collapse disorder on average every decade or so. They fight back against the bee diseases as best they can. And they create new hives. Here's Regan:

"There have been 23 episodes of major colony losses since the late 1860s. Two of the most recent bee killers are Varroa mites and tracheal mites, two parasites that first appeared in North America in the 1980s. … Beekeepers have developed a variety of strategies to combat these afflictions, including the use of miticides, fungicides, and other treatments. While colony collapse disorder presents new challenges and higher mortality rates, the industry has found ways to adapt.

"Rebuilding lost colonies is a routine part of modern beekeeping. The most common method involves splitting a healthy colony into multiple hives—a process that beekeepers call “making increase.” The new hives, known as “nucs” or “splits,” require a new fertilized queen bee, which can be purchased from a commercial queen breeder. These breeders produce hundreds of thousands of queen bees each year. A new fertilized queen typically costs about $19 and can be shipped to beekeepers overnight. (One breeder's online ad touts its queens as “very prolific, known for their rapid spring buildup, and…extremely gentle.”) As an alternative to purchasing queens, beekeepers can produce their own queens by feeding royal jelly to larvae.

"Beekeepers regularly split their hives prior to the start of pollination season or later in the summer in anticipation of winter losses. The new hives quickly produce a new brood, which in about six weeks can be strong enough to pollinate crops. Often, beekeepers can replace more bees by splitting hives than they lose over the winter, resulting in no net loss to their colonies.

"Another way to rebuild a colony is to purchase “packaged bees” to replace an empty hive. (A 3-pound package typically costs about $90 and includes roughly 12,000 worker bees and a fertilized queen.) A third method is to replace an older queen with a new one. A queen bee is a productive egg-layer for one or two seasons; after that, replacing her will reinvigorate the health of the hive. If the new queen is accepted—as she often is when an experienced beekeeper installs her—the hive can be productive right away.

"Replacing lost colonies by splitting hives is surprisingly straightforward and can be accomplished in about 20 minutes. New queens and packaged bees are also inexpensive. If a commercial beekeeper loses 100 of his hives, replacing them would come at a cost—the price of each new queen, plus the time required to split the existing hives—but it is unlikely to spell disaster. And because new hives can be up and running in short order, there is little or no lost time for pollination or honey production. As long as some healthy hives remain that can be used for splitting, beekeepers can quickly and easily rebuild lost colonies."
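Regan's figures lend themselves to a back-of-the-envelope calculation (a sketch of my own: the roughly $19 queen price and 20 minutes per split come from the quoted passages, while the hourly labor rate is an assumption for illustration):

```python
# Back-of-the-envelope cost of rebuilding lost hives by splitting.
# Queen price (~$19) and time per split (~20 minutes) follow Regan's
# figures; the hourly labor rate is an illustrative assumption.

def rebuild_cost(hives_lost: int, queen_price: float = 19.0,
                 minutes_per_split: float = 20.0,
                 labor_rate_per_hour: float = 25.0) -> float:
    labor = hives_lost * (minutes_per_split / 60.0) * labor_rate_per_hour
    return hives_lost * queen_price + labor

# Losing 100 hives: roughly $1,900 in queens plus ~$833 in labor.
print(round(rebuild_cost(100), 2))
```

Even for 100 lost hives, the replacement bill comes to a few thousand dollars, which is the sense in which large losses are "unlikely to spell disaster."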

Of course, there are still legitimate concerns about the health of wild bees, and their role in natural ecosystems. But it seems fairly clear that the buzz over how colony collapse disorder threatened an imminent bee extinction–"a world without bees" and "beemaggedon" and all the rest–was grossly exaggerated. As the EPA reports:

"Once thought to pose a major long term threat to bees, reported cases of CCD have declined substantially over the last five years. The number of hives that do not survive over the winter months – the overall indicator for bee health – has maintained an average of about 28.7 percent since 2006-2007 but dropped to 23.1 percent for the 2014-2015 winter. While winter losses remain somewhat high, the number of those losses attributed to CCD has dropped from roughly 60 percent of total hives lost in 2008 to 31.1 percent in 2013; in initial reports for 2014-2015 losses, CCD is not mentioned."

For more detail on economic adaptations to colony collapse disorder, and how actions by beekeepers have kept any economic losses very small, a useful starting point is the January 2016 working paper, "Colony Collapse and the Economic Consequences of Bee Disease: Adaptation to Environmental Change," by Randal R. Rucker, Walter N. Thurman, and Michael Burgett.

US Public Firm Agonistes

The number of shareholder-owned US corporations is in steep decline, falling from 7,507 in 1997 to 3,766 by 2015. Thus, Kathleen M. Kahle and René M. Stulz ask "Is the US Public Corporation in Trouble?" in their article in the Summer 2017 Journal of Economic Perspectives (31:3, pp. 67-88). (Full disclosure: I've been Managing Editor of JEP since its inception in 1987, and thus may be predisposed to believe that the articles appearing there are worth reading! All articles in JEP, from the most recent issue back to the first, are freely available online compliments of its publisher, the American Economic Association.)

The blue line in this figure shows the number of publicly-traded US corporations. The bars show (inflation-adjusted) market capitalization–that is, the total value if you take all the shares of all the companies and multiply by the price of the shares.

Kahle and Stulz consider the evidence on US corporations along a number of dimensions. Here are a few of their points that caught my eye:

  • "In 1975, the US economy has 22.4 publicly listed firms per million inhabitants. In 2015, it has just 11.7 listed firms per million inhabitants."
  • The main reason for the decline in the number of firms is mergers of existing firms–combined with a slowdown in the rate of new firms being created. "[T]he number of initial public offerings decreases dramatically after 2000, such that the average yearly number of initial public offerings after 2000 is roughly one-third of the average from 1980 to 2000 …"
  • The average age of a public firm was 12.2 years in 1995, and 18.4 years in 2015.
  • "A simple but rough benchmark is to compute the percentage of listed firms that are small, defined as having a market capitalization of less than $100 million in 2015 dollars. In 1975, 61.5 percent of listed firms are small … This percentage peaks at 63.2 percent in 1990, and then falls. The share of small, listed firms dropped all the way to 19.1 percent of listed firms in 2013, before rebounding slightly to 22.6 percent in 2015. In other words, small listed firms are much scarcer today than 20 or 40 years ago."
  • "Listed firms have a much lower average ratio of capital expenditures to assets and a much higher ratio of R&D expenditures to assets in 2015 than they do in 1975. Figure 2 shows the evolution of average R&D to assets over time."
  • "[I]n 1975, 50 percent of the total earnings of public firms is earned by the 109 top-earning firms; by 2015, the top 30 firms earn 50 percent of the total earnings of the US public firms. Even more striking, in results not separately tabulated here, we find that the earnings of the top 200 firms by earnings exceed the earnings of all listed firms combined in 2015, which means that the combined earnings of the firms not in the top 200 are negative. In 1975, the 94 largest firms own half of the assets of US public firms, but 35 do so in 2015. Finally, 24 firms account for half of the cash holdings of public firms in 1975, but 11 firms do in 2015."
  • "None of our leverage measures are elevated at the end of the sample period in 2015, suggesting that concerns about corporate leverage are less relevant for public firms now than at other times during the sample period. Leverage is even less of an issue now because interest rates are extremely low since the credit crisis. Hence, interest paid as a percentage of assets has never been as low during the sample period as in recent years …"
  • The share of firms paying dividends dropped substantially in the 1980s and 1990s, to the point where one occasionally read about "the death of the dividend." However, firms have been paying out more to shareholders through the mechanism of repurchasing shares, and so the payouts of firms as a share of their net income have risen substantially since 2000.
  • "These explanations imply that there are fewer public firms both because it has become harder to succeed as a public firm and also because the benefits of being public have fallen. As a result, firms are acquired rather than growing organically. This process results in fewer thriving small public firms that challenge larger firms and eventually succeed in becoming large. A possible downside of this evolution is that larger firms may be able to worry less about competition, can become more set in their ways, and do not have to innovate and invest as much as they would with more youthful competition. Further, small firms are not as ambitious and often choose the path of being acquired rather than succeeding in public markets. With these possible explanations, the developments we document can be costly, leading to less investment, less growth, and less dynamism. … It may be in the best interests of shareholders for firms to behave that way, but the end result is likely to leave us with fewer public firms, who gradually become older, slower, and less ambitious. Consequently, fewer new private firms are born, as the rewards for entrepreneurship are not as large. And those firms that are born are more likely to lack ambition, as they aim to be acquired rather than to conquer the world."
In 1962, Richard Nixon announced that he was leaving politics and told the assembled journalists, with whom he had had a relationship that could politely be described as "adversarial": "Just think how much you're going to be missing. You don't have Nixon to kick around any more." Of course, Nixon came back, and I expect that the US public corporation will come back, as well. But even for those who like to kick around the public corporation, these patterns should offer some cause for concern. The public corporation, for all its warts and flaws, has been a primary engine of US economic growth for more than a century. When it no longer makes economic sense for most small firms to become public corporations, when the number of firms is being continually depleted by mergers, and when large firms are often paying out a larger share of their income or hoarding cash rather than investing, these are all legitimate causes for public concern.

The Kahle-Stulz paper is the first of four in a "Symposium about the Modern Corporation" in the Summer 2017 issue of JEP, and those interested in the subject will want to check out the other papers, too.

Lucian A. Bebchuk, Alma Cohen, and Scott Hirst discuss "The Agency Problems of Institutional Investors" (pp. 89-112). The traditional problem of corporate governance has been what economists call "the separation of ownership and control," which refers to the fact that while shareholders technically own corporations, a very large number of relatively small shareholders are likely to have a hard time actually controlling the corporation. Instead, top executives and boards of directors may be able to cooperate in back-scratching arrangements that make their own lives easier. However, in recent years, large blocks of stock are owned by "institutional" investors, like the giant stock market index funds. Unlike the much smaller shareholders of the past, the large institutional investors have some power to exert control over large companies–but they may not have much incentive to do so. After all, one large indexed mutual fund gets no advantage over other large indexed mutual funds by exercising oversight over corporations. No matter what one index fund does, or doesn't do, investors in index funds just get the overall average market outcome.

Luigi Zingales offers some thoughts in "Towards a Political Theory of the Firm" (pp. 113-30). He writes of the dangers of a "Medici vicious circle," which can be described in the slogan: “Money to get power. Power to protect money.” He writes:

"The ideal state of affairs is a “goldilocks” balance between the power of the state and the power of firms. If the state is too weak to enforce property rights, then firms will either resort to enforcing these rights by themselves (through private violence) or collapse. If a state is too strong, rather than enforcing property rights it will be tempted to expropriate from firms. When firms are too weak vis-à-vis the state, they risk being expropriated, if not formally (with a transfer of property rights to the government), then substantially (when the state demands a large portion of the returns to any investment). But when firms are too strong vis-à-vis the state, they may shape the definition of property rights and its enforcement in their own interest and not in the interest of the public at large, as in the Mickey Mouse Copyright Act example. …

"While the perfect “goldilocks” balance is an unattainable ideal, given that ongoing events will expose the tradeoffs in any given approach, the countries closest to this ideal are probably the Scandinavian countries today and the United States in the second part of the twentieth century. Crucial to the success of a goldilocks balance is a strong administrative state, which operates according to the principle of impartiality (Rothstein 2011), and a competitive private sector economy."

Anat R. Admati provides "A Skeptical View of Financialized Corporate Governance" (pp. 131-50). She points out that the modern corporation has often sought to provide appropriate incentives to top corporate executives by linking their compensation to various financial measures, like stock market prices. However, she argues that this approach can in many cases provide misguided incentives, and does not seem to limit or hinder an ongoing parade of corporate scandals. From the abstract:

"Managerial compensation typically relies on financial yardsticks, such as profits, stock prices, and return on equity, to achieve alignment between the interests of managers and shareholders. But financialized governance may not actually work well for most shareholders, and even when it does, significant tradeoffs and inefficiencies can arise from the conflict between maximizing financialized measures and society's broader interests. Effective governance requires that those in control are accountable for actions they take. However, those who control and benefit most from corporations' success are often able to avoid accountability. The history of corporate governance includes a parade of scandals and crises that have caused significant harm. After each, most key individuals tend to minimize their own culpability. Common claims from executives, boards of directors, auditors, rating agencies, politicians, and regulators include "we just didn't know," "we couldn't have predicted," or "it was just a few bad apples." Economists, as well, may react to corporate scandals and crises with their own version of "we just didn't know," as their models had ruled out certain possibilities. Effective governance of institutions in the private and public sectors should make it much more difficult for individuals in these institutions to get away with claiming that harm was out of their control when in reality they had encouraged or enabled harmful misconduct, and ought to have taken action to prevent it."

Digitization of Media Industries: Quantity and Quality

Digitization has revolutionized media industries. The equipment needed to produce a movie, television show, or musical album has gotten remarkably cheaper. The cost of distributing video, sound, or text has dropped dramatically, too, in some cases to nearly zero. In addition, the power of the "gatekeepers" who used to determine what content would be broadly distributed–producers and publishers–has been substantially diminished.

All revolutions, technological and otherwise, can lead to either sunny or gloomy predictions. For digitization of media industries, the gloomy prediction sounded like this: High-quality producers will see greatly diminished returns, as their work is pirated and redistributed. As they fade, consumers will find that they have greatly expanded access to cheap and low-quality producers, who will flood these markets with drivel and trash. As a precursor of what might come, consider the decline in the total value of shipments for the US music industry after the Napster file-sharing service appeared circa 1999.

Joel Waldfogel makes the case for a more positive view in "How Digitization Has Created a Golden Age of Music, Movies, Books, and Television," in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 195-214). (Full disclosure: I've been Managing Editor of JEP since its inception in 1987, and thus may be predisposed to believe that the articles appearing there are worth reading! All articles in JEP, from the most recent issue back to the first, are freely available online compliments of its publisher, the American Economic Association.)

Digitization of media industries has increased output. For movies, Waldfogel writes: "[T]he number of new motion pictures produced in the United States rose from about 500 features in 1990 to 1,200 in 2000, and by 2010 had risen to nearly 3,000. Growth in US-origin documentaries is even larger, and the patterns for other countries are similar …" Back in 1990, the big TV networks produced about 25 new shows per year; now that many consumers have access to 150 channels and more, the number of new TV shows is about 250 per year. The number of new popular songs was about 50,000 per year in the 1980s, and is now headed toward 400,000 per year. The number of self-published books is skyrocketing, now approaching 500,000 per year.

Consumers can benefit from these changes in several ways. One is lower-cost access to the content that would have existed anyway, even without digitization. Another is the “long tail,” which refers to the situation in which greater entry allows the production and availability of highly specialized content, so that consumers who like a certain specific niche of book or music or show have access to it. As Waldfogel writes: “The idea [of the long tail] is well-illustrated by a comparison between the welfare consumers derived from, say, the 50,000 titles available in their local book stores compared with the 1,000,000 titles available to them from a retailer like Amazon that effectively has infinite shelf space. While each of the additional 950,000 titles has low demand, the sum of the incremental welfare delivered by many small things may be large.”
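The long-tail arithmetic can be sketched in a few lines. This is a hypothetical illustration only: the power-law demand curve and all numbers are invented, not taken from Waldfogel's article.

```python
# Hypothetical long-tail arithmetic: per-title consumer surplus falls off
# with popularity rank, roughly like 1/rank. The point is only that many
# low-demand titles can still sum to a meaningful share of total surplus.

def tail_surplus(start_rank, end_rank, exponent=1.0):
    """Total surplus (arbitrary units) from titles ranked start_rank
    through end_rank, assuming surplus ~ 1 / rank**exponent."""
    return sum(1.0 / rank**exponent for rank in range(start_rank, end_rank + 1))

head = tail_surplus(1, 50_000)          # titles a local bookstore might stock
tail = tail_surplus(50_001, 1_000_000)  # the "long tail" only a huge retailer carries
print(f"tail adds {tail / head:.0%} on top of the head")
```

Under these (made-up) assumptions, the 950,000 tail titles add roughly a quarter again as much surplus as the 50,000 head titles, despite each one mattering very little on its own.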

But Waldfogel emphasizes another factor that may be even more important, which is based on the idea that the gatekeepers in traditional media industries had imperfect judgment. Year after year, lots of the content that they approved turned out not to be a hit, or in some cases not to be even remotely popular. Thus, it’s not just that additional entry into media industries can cater to niche tastes: in a number of cases, the additional entry is leading to the production and distribution of content that a number of consumers view as higher quality.

This argument is a delicate one to make in purely economic terms, because measuring the “quality” of what consumers prefer is a slippery business, but Waldfogel lines up a number of indicators that point in the direction of this conclusion. For example, he collects evidence from “best-of” lists produced by movie and music critics, as well as evidence from sites that tally popular opinion. He also looks at whether people are choosing to spend more of their money on movies and music from small-scale and independent producers, on whether more best-selling books are self-published, and so on. Here’s a figure showing some of the patterns.

The overall theme that emerges is that new entry into media industries from digitization is not just a matter of a “long tail” of niche products. Instead, some proportion of the content that would have been screened out by the traditional media gatekeepers is finding a large and receptive audience. There’s no doubt that as media output increases, a lot of crud gets produced. But the options that people choose and prefer do not seem to be decreasing in quality, and may well be increasing.

This argument further implies that consumers of media are not finding themselves especially overwhelmed by the range of new choices available. Herbert Simon wrote back in 1971: “What information consumes is rather obvious: it consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information resources that might consume it.” But it turns out that online reviews and social media act as gatekeepers for new media products–both for specialized niche products and for those aimed at a mass audience–in a way that lets consumers sort through the options. Indeed, many consumers seem to find the sorting process through interactive social media to be fun in itself.

Kindleberger on International Use of the US Dollar and the English Language

In a world of many languages, it is efficient if everyone shares a second language. In a world of many currencies, it is efficient if everyone shares a second currency. In the current world economy, the common second language has been English and the common second currency the US dollar, both for roughly a half-century. In August 1967, Charles P. Kindleberger made this point in an essay called “The Politics of International Money and World Language,” published by the economics department at Princeton University as #61 in a series of “Essays in International Finance.”

As Kindleberger points out in the essay, debates over a widely shared second language or second currency are inevitably controversial in political terms, because culture and prestige are at stake. He writes: “The basic question that will be left unanswered is whether economic efficiency is less important in these matters than political appearances, which many other observers would probably call political reality. It is possible that it is, but economists are accustomed to having doubts. At the least, I would insist that there is a trade-off between economic inefficiency and political appearances which must be explicitly evaluated to see whether the cost in one is worth the benefit in the other.”

Here’s a dose of Kindleberger:

However, the analogy which interests me most is that between the use of the dollar in international economics and the use of the English language in international intercourse more generally. Analogies are tempting, and dangerous because frequently misleading. But the dollar “talks,” and English is the “coin” of international communication. The French like neither fact, which is understandable. But to seek to use newly-created international money or a newly-created international language would be patently inefficient. 

Languages are ordered hierarchically. Like sterling, French used to dominate. Like the dollar, English does now. Frenchmen must learn English; it is not vital for Anglo-Saxons to learn French. 

The analogy with the language quarrel in Belgium is exact. The Flemish must learn French, but the Walloons, despite their constitutional edict of equality between the languages and the legislative edict which requires civil servants to do so, do not learn or use Dutch. The Flemish are offended and begin to insist on Flemish, exactly as France has insisted that its representatives at international conferences, even when they know English perfectly, must speak only French and insist on all speeches in English being translated into French. The transactions costs of translation, including the misunderstanding in communication and the waste of time, are even more evident than the transactions costs of converting gold to dollars and dollars to gold, when it is dollars—not gold—that are necessary to transactions.  …

It is easy to imagine what is implied in a “sabotage” of French as a working language at the United Nations. Someone—presumably an Anglo-Saxon—at a working-committee meeting, observing that all the Francophones had a good command of English, suggested that the translation into French from English and possibly from French into English be dispensed with in the interest of efficiency. The transactions (translation) costs of simultaneous but especially of consecutive translation are high in efficiency, owing to loss of time or accuracy and of intimacy in two-way communication. It is highly desirable for Americans and British to know enough French, German, Italian, Spanish, and perhaps Russian to be able to receive in those languages, or some of them, even if they transmit only in English. But world efficiency is achieved when all countries learn the same second language, just as when the different nationalities in India use English as a lingua franca. …  One’s own currency is the native language, and foreign transactions are carried on in the vehicle currency of a common second language, the dollar. 

It is hard on French, which used to be the language of diplomacy, to have lost this distinction; but it is a fact. In scientific writing, as in communication between international airplane and control tower, English is the universal language, except for the rescue call “Mayday” which … would have put in French as “M’aidez.” But a common second language is efficient, rather than nationalist or imperialist. 

The power of the dollar and the power of English represent la force des choses and not la force des hommes. This is not to gainsay the existence of unattractive nationals abroad—from virtually all countries. I recall particularly a Chicago Tribune reporter who got through Europe with two words: “Whiskey” and “Steak.” But it is not nationalism which spreads the use of the dollar and the use of English; it is the ordinary search of the world for short cuts in getting things done. … 

The selection of the dollar as the lingua franca of international monetary arrangements, then, is not the work of men but of circumstances. Pointing to its utility involves positive, not normative, economics. Students of international politics must deplore the nationalistic overtones and would like to see the ultimate bastions of the system, and the means of producing policy, international. 

But the analogy has one more aspect. The futility of a synthetic, deliberately created international medium of exchange is suggested by the analogy with Esperanto. This still commands a doughty band of true believers, but their legions have thinned. A linguistics expert states that Esperanto suffers from being inadequately planned as an international language. If he worked on it, he could devise a common language which would be much better suited to the task. Our instinct tells us that this is equally applicable to the myriad of [international monetary] plans—Triffin, Stamp, Postuma, Roosa, Bernstein, Modigliani-Kenen, and all the rest—all of which have strengths (and weaknesses) but also share the basic weakness that they do not grow out of the day-to-day life of markets, as the dollar standard based on New York has done, and likewise the Eurodollar. …
 At the other extreme, the French view that the international monetary system should re-enthrone gold as the international medium of exchange resembles an appeal for a return to Latin as the lingua franca of international discourse, an appeal not without its nostalgic value for those who admire ancient Rome and medieval culture, but one that is evidently swimming against the stream of history, as the increasingly rapid abandonment of Latin in the Catholic Church testifies. 

Finally, the many academic economists who recommend separating international money and capital markets by a system of flexible exchange rates between national currencies in effect call for a return to Babel, with foreign languages used by none save professional interpreters. This maximizes transactions costs and minimizes international discourse. A compromise between this and fixed exchange rates is possible: with separate dollar, sterling (or dollar-sterling), franc, and ruble areas, each with many countries having fixed exchange rates and speaking a common area language but with flexible exchange rates and full formal translation between them. …  For those who like neat Cartesian designs, it has much to recommend it. 

But how can one make such a division of the world among the great powers into spheres of influence stick, even if one has no misgivings about its morality? An earlier paper by Mundell raised the central issue, “What is the Optimum Currency Area?” and the same question could be put for languages. The rapid shrinkage of the world, however, makes it impractical to try to maintain traditional currency and language areas without infiltration of a single language and currency into a wider range of human activity. The Organization of Petroleum Exporting Countries (OPEC), consisting of Arab and Spanish-speaking states, inevitably reckons in dollars and discourses in English, and there is little that the statesmen of the major powers can do to prevent a succession of hundreds of similar steps toward reducing the costs of economic and social intercourse. In positive, not normative, terms the optimum currency and language area is rapidly expanding to the world. …

The ironic and politically very damaging fact is that the European language is English, or perhaps one should say American, just as the European unit in monetary affairs is the dollar. This is because the optimum language and currency areas today are not countries, nor continents, but the world; and because, for better or worse—and opinions differ on this—the choice of which language or which currency is made not on merit, or moral worth, but on size.

It’s interesting to me that Kindleberger was making this point about the dominance of the US dollar and the English language a half-century ago, and this part of his argument seems to have stood the test of time. One of the most common questions I get asked in public forums is “How long before the US dollar loses its global preeminence, and is forced to share the world economic stage with the Chinese yuan, the euro, the Japanese yen, and others?” Kindleberger’s meditation offers an answer to that question, which is roughly “not very soon.”

Dissecting Long-Run International Productivity Patterns

The overall growth of a country’s GDP can be divided into five categories: growth of population, change in the employment rate, change in hours worked, “capital deepening” (change in capital per worker), and growth of productivity (here measured by total factor productivity, or TFP). Antonin Bergeaud, Gilbert Cette, and Rémy Lecat use this framework to review “Total Factor Productivity in Advanced Countries: A Long-Term Perspective,” in the Spring 2017 issue of International Productivity Monitor (vol. 32, pp. 6-24).
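In symbols, this kind of decomposition is usually written as a sum of log changes. The notation below is the standard textbook form, not necessarily the authors' exact specification:

```latex
% Growth accounting: output growth split into population, employment rate,
% hours per worker, capital deepening, and the TFP residual.
\Delta \ln Y \;=\;
    \underbrace{\Delta \ln N}_{\text{population}}
  + \underbrace{\Delta \ln \tfrac{E}{N}}_{\text{employment rate}}
  + \underbrace{\Delta \ln \tfrac{H}{E}}_{\text{hours per worker}}
  + \underbrace{\alpha \, \Delta \ln \tfrac{K}{H}}_{\text{capital deepening}}
  + \underbrace{\Delta \ln A}_{\text{TFP}}
```

where Y is GDP, N population, E employment, H total hours worked, K the capital stock, α the capital share, and the TFP term is whatever growth is left over as a residual. (The article measures capital deepening per worker rather than per hour; the bookkeeping is the same.)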

This useful figure shows the breakdown of the five factors, looking across the United States, the euro area, the United Kingdom, and Japan. The perspective here is long-term, from 1890 to 2015, so we are talking about big changes in patterns, not about shorter-run effects of recession and recovery.

What are some of the patterns that become apparent in the figure?

Hours worked (shown by the part of the bars with diagonal stripes) have tended to be flat or declining, so they tend to subtract from overall GDP growth–and in the figure, they go down into negative territory.

The employment rate is shown by the black portion of the bars. At various times in the past, a higher employment rate contributed to a rise in GDP: for example, in the US economy from 1975 to 1995, women were entering the (paid) labor force in substantial numbers. But in the most recent bars, for the 2005-2015 interval, the contribution of the employment rate is either negative or very small.

The bars have three other shaded areas, which from darker to lighter I’ll refer to as gray, silver, and white. The gray portion of the bar shows “capital deepening,” which is the rise in capital/worker. This is consistently a positive force in raising growth. It was especially powerful in the post-World War II years in Japan and the euro area. In recent years, with low investment rates, it has tended to make a smaller contribution.

The silver portion of the bar is population growth, which has become very small in the UK, euro area, and Japan, although it is comparatively larger in the United States.

Finally, the white portion of the bar is “total factor productivity,” which is the growth in productivity that remains after stripping out these other factors. It’s useful to be clear on this point: productivity is not measured directly, but instead is calculated as what is left over after other forces are accounted for. In a classic line from the growth economist Moses Abramovitz, productivity growth is “the measure of our ignorance.”
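The residual bookkeeping can be sketched in a few lines of code. Everything here is hypothetical: the growth rates and the 0.3 capital-share weight are invented for illustration, and the authors additionally adjust for the quality of factors of production.

```python
# Growth-accounting sketch: TFP ("the measure of our ignorance") is the
# residual left after observed inputs are stripped out of output growth.

def tfp_residual_growth(gdp_growth, pop_growth, emp_rate_change,
                        hours_change, capital_deepening, capital_share=0.3):
    """Return TFP growth as the residual of a log-linear decomposition.

    Each argument is an annual growth rate (e.g. 0.02 for 2%).
    `capital_share` weights capital deepening, as in a Cobb-Douglas setup.
    """
    explained = (pop_growth + emp_rate_change + hours_change
                 + capital_share * capital_deepening)
    return gdp_growth - explained

# Hypothetical economy: 3% GDP growth, 1% population growth, flat employment
# rate, hours falling 0.2% per year, capital per worker rising 2% per year.
tfp = tfp_residual_growth(0.03, 0.01, 0.0, -0.002, 0.02)
print(f"TFP growth (residual): {tfp:.4f}")  # 0.03 - (0.01 + 0.0 - 0.002 + 0.006) = 0.0160
```

Note what the residual construction implies: any mismeasurement of the inputs, or any omitted input, lands in the TFP term.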

As the figure shows, productivity growth is also the largest part of what causes economic growth. Fluctuations in productivity are the biggest factor in rises and falls in overall economic growth. The authors try to look in more detail at factors like education and technology shocks. They write: “Our main contribution is to show that including the quality of factors of production, especially education and technological shocks, significantly reduce the share of 20th century GDP growth that is unexplained. Nevertheless, still this share remains important, which suggests that there is scope for further analysis to better measure TFP [total factor productivity] growth.”

They also note another productivity puzzle that has been discussed here a time or two. The reduced rate of productivity growth doesn’t seem to be affecting the cutting-edge firms on the productivity frontier; instead, what seems to be happening is that other firms are having a harder time keeping up with the productivity gains at the frontier. Or to put it another way, productivity gains are not diffusing through the rest of the economy in the way that they used to. The authors write:

Regarding the productivity slowdown observed during the 2000s, analyses carried out by the OECD at the firm level suggest that this slowdown does not appear to be observed for the most productive firms, in other words, at the productivity frontier. The productivity slowdown appears to be a diffusion problem from the best performances at the frontier to the laggard firms. This diffusion problem seems to hinge on the nature of innovations at the current juncture, with intangible capital being more difficult to replicate, or on a winner-takes-all phenomenon in ICT [information and communications technology] sectors. The puzzle is why such innovation diffusion difficulties appear to have become worse simultaneously in all developed countries, which are at different stages of development. … 

Thus, the unsatisfactory measure of our ignorance is that productivity has slowed down, but we don’t really know why. One hypothesis is that errors in measurement mean that current productivity growth and investment are underestimated, but this explanation is controversial. For an extended discussion, see the symposium on “Are Measures of Economic Growth Biased?” in the Spring 2017 issue of the Journal of Economic Perspectives, with papers by Martin Feldstein (“Underestimating the Real Growth of GDP, Personal Income, and Productivity”), Chad Syverson (“Challenges to Mismeasurement Explanations for the US Productivity Slowdown”), and the team of Erica L. Groshen, Brian C. Moyer, Ana M. Aizcorbe, Ralph Bradley and David M. Friedman (“How Government Statistics Adjust for Potential Biases from Quality Change and New Goods in an Age of Digital Technologies: A View from the Trenches”).

In the essay under discussion here, Bergeaud, Cette, and Lecat offer some possible hypotheses as well, which are intriguing if unproven. They write:

“One explanation being tested is that this weaker cleansing mechanism could at least partly be explained by a decline in real interest rates and less expensive capital, which allow low productivity firms to survive and highly productive firms to thrive. Less expensive capital lowers the return on capital expected from firms and allows innovative firms to take on more risks. But this could also contribute to capital misallocation, as financing becomes less selective on the main innovative projects. Recent researchers have found that such an explanation may be relevant for Southern European countries such as Portugal, Italy and Spain …”

In other words, when interest rates are so very low, the pressure to lend mostly to those firms with really good economic prospects is diminished, and lower-productivity firms feel less pressure to upgrade. I’m not confident that this answer applies very well to the US economy, but then, I’m not sure what other answers to offer, either.

Price Dispersion and Bargain Hunting

Economists sometimes talk of the “law of one price,” which basically says that when consumers are bargain-hunting between competing firms, the same price will tend to prevail everywhere. After all, any provider who tries to charge more would lose sales.

Of course, in real life the “law” isn’t absolute. It needs to be adjusted for factors like how easy it is to find prices from competitors and buy elsewhere. In addition, stores have sales, and offer coupons or membership programs, and thus act in ways such that even those who buy from the same store on the same day can face different prices for the same product. From the point of view of the seller, the strategy goes like this: Find a way to offer special deals to attract the bargain-hunters who are highly motivated to find the lowest price, but in addition, find ways to charge a higher “regular” price to shoppers who are not as highly motivated to find the lowest price. Greg Kaplan provides an overview of research on this topic in “Price Dispersion and Bargain Hunting in the Macroeconomy,” which appears in the NBER Reporter (2017 Number 2).
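The seller's logic can be made concrete with a toy model. All numbers below are invented: two shopper types, where bargain-hunters buy only at or below a low reservation price (and time their purchases to sales), while less-motivated shoppers pay whatever price is posted when they happen to shop.

```python
# Toy model of the sale-plus-regular-price strategy described above.
# Hypothetical numbers: 40 bargain-hunters who pay at most $2.00, and
# 60 loyal shoppers who will pay up to $3.00.

def revenue(price_regular, price_sale, sale_share,
            n_hunters=40, n_loyal=60, hunter_max=2.00):
    """Revenue per period when the good is on sale a fraction `sale_share`
    of the time. Hunters buy only at the sale price (and only if it is at
    or below their reservation price); loyal shoppers buy at whatever
    price is posted when they shop."""
    hunters = n_hunters * price_sale * (price_sale <= hunter_max)
    loyal = n_loyal * (sale_share * price_sale + (1 - sale_share) * price_regular)
    return hunters + loyal

uniform_low = revenue(2.00, 2.00, 0.0)   # one low price that everyone pays
uniform_high = revenue(3.00, 3.00, 0.0)  # one high price; hunters walk away
mixed = revenue(3.00, 1.99, 0.25)        # high regular price plus periodic sales
print(f"low: {uniform_low:.2f}, high: {uniform_high:.2f}, mixed: {mixed:.2f}")
```

In this made-up example the mixed strategy beats either uniform price: the sale captures the bargain-hunters' dollars without giving up the high regular price that the loyal shoppers mostly pay.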

As an example of a common pattern, here’s data from one study on the price paid for a 36-ounce bottle of Heinz ketchup in stores in Minneapolis in January 2007. The underlying data comes from the “Kilts-Nielsen Consumer Panel (KNCP) data set, which contains price and quantity information for more than 300 million transactions by 50,000 households for over 1.4 million goods in 54 geographic markets during the period 2004–9.”

[Figure from Kaplan: distribution of prices paid for a 36-ounce bottle of Heinz ketchup, Minneapolis, January 2007]

As a rough estimate, for the typical good in this study about two-thirds of transactions have an observed price within 20% of the average price (either above or below), which means that the remaining one-third of transactions have a price more than 20% away from the average.
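That statistic is straightforward to compute from transaction-level data. Here is a minimal sketch with invented prices (not the KNCP data) that happens to reproduce the rough two-thirds pattern:

```python
# Share of transactions whose price falls within a band around the mean.
# The price list is hypothetical: a regular price, sale prices, and a
# deep coupon price for one good.

def share_within_band(prices, band=0.20):
    """Fraction of transaction prices within `band` of the mean price."""
    mean_p = sum(prices) / len(prices)
    lo, hi = mean_p * (1 - band), mean_p * (1 + band)
    return sum(lo <= p <= hi for p in prices) / len(prices)

prices = [2.99, 2.99, 2.99, 2.49, 2.49, 1.99, 3.49, 1.49, 2.99]
print(f"share within 20% of mean: {share_within_band(prices):.2f}")
```

With these invented numbers, six of the nine transactions land within 20% of the mean, matching the rough two-thirds figure in the text.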

What explains these patterns of price dispersion? Perhaps surprisingly, the main reason for different prices isn’t drastically different kinds of stores, like shopping in a large chain grocery store vs. shopping at a mini-mart attached to a gas station. Kaplan explains:

“However, perhaps surprisingly, only a small fraction of this [price] dispersion arises because some stores are more expensive than other stores. We can infer this because our scanner data allows us to observe the same store selling lots of different goods, the same good sold at lots of different stores, and the same good being sold at the same store in many different transactions. Most of the observed dispersion in prices actually takes place within stores. About half is due to a transaction component that captures both temporal variation in the price of a good at a given store due to temporary sales and other price changes and the fact that not all customers pay the same price for the same good on the same day because, for example, some use coupons or loyalty cards. The other half is due to persistent differences in the prices charged for a given product across stores that are equally expensive on average.”

What are some implications of living in the real world, where the law of one price applies only approximately, and stores are seeking ways to offer different prices to different kinds of shoppers?

It turns out that the prices paid for goods decline with age, with those aged 55-and-older paying prices that are on average about 3-4% lower than those paid by 25 year-olds. An obvious reason is that retirees have more flexibility and knowledge about times and places and ways to shop. Those who shop more frequently, or who visit more stores on a given shopping trip, or who are more likely to use coupons, also pay lower prices. Indeed, your personal experience of the extent to which inflation is happening in food prices will be strongly affected by whether you are a bargain-hunter, and thus more likely to get lower prices, or whether you are paying the regular prices. In groceries, as in other consumer purchases, careful shopping can save money without altering the quality of what is purchased.

"The Limited Liability Corporation is the Greatest Single Discovery of Modern Times"

Economists and other gossipmongers sometimes like to quote the words of Nicholas Murray Butler, who was at the time President of Columbia University, in a 1911 speech called “Politics and Economics” delivered to the 143rd Annual Banquet of the Chamber of Commerce of the State of New York (pp. 43-55 of the published proceedings), available through the magic of the HathiTrust Digital Library.

Here’s the much-quoted part of the speech:

“I weigh my words, when I say that in my judgment the limited liability corporation is the greatest single discovery of modern times, whether you judge it by its social, by its ethical, by its industrial or, in the long run,—after we understand it and know how to use it,—by its political, effects. Even steam and electricity are far less important than the limited liability corporation, and they would be reduced to comparative impotence without it.”

The quotation is often used to illustrate someone who has gone far over-the-top in admiring the private company–and perhaps even someone who is licking the boots of New York’s rich and powerful at this banquet dinner. But that interpretation is unfair to Butler. This is just a speech, not an academic tract, but he is pointing out the enormous industrial change that has occurred, while also arguing in favor of the development of a body of law like the Sherman antitrust legislation to constrain corporations, and also making the point that large-scale companies had already been an issue for hundreds of years, back at least to reports commissioned by the Diet of Nuremberg in the 1520s. Here’s a more extended quotation from Butler (bracketed references to laughter or applause are omitted):

The fact of the matter is, and it may just as well be recognized in this country and in every other country, that the era of unrestricted individual competition has gone forever. And the reason why it has gone is partly because it has done its work, partly because it has been taken up into a new and larger principle of co-operation. What happens in every form of organic evolution is that an old part no longer useful to the structure drops away, and its functions pass over into and are absorbed by a new development. That new development is co-operation, and co-operation as a substitute for unlimited, unrestricted, individual competition has come to stay as an economic fact, and legal institutions will have to be adjusted to it. It cannot be stopped. It ought not to be stopped. It is not in the public interest that it should be stopped. 

Now, how has this co-operation manifested itself? This new movement of cooperation has manifested itself in the last sixty or seventy years chiefly in the limited liability corporation. I weigh my words, when I say that in my judgment the limited liability corporation is the greatest single discovery of modern times, whether you judge it by its social, by its ethical, by its industrial or, in the long run,—after we understand it and know how to use it,—by its political, effects. Even steam and electricity are far less important than the limited liability corporation, and they would be reduced to comparative impotence without it. Now, what is this limited liability corporation? It is simply a device by which a large number of individuals may share in an undertaking without risking in that undertaking more than they voluntarily and individually assume. It substitutes co-operation on a large scale for individual, cut-throat, parochial, competition. It makes possible huge economy in production and in trading. It means the steadier employment of labor at an increased wage. It means the modern provision of industrial insurance, of care for disability, old age and widowhood. It means—and this is vital to a body like this—it means the only possible engine for carrying on international trade on a scale commensurate with modern needs and opportunities. … 

“I know how unsafe it is for any layman even to mention the SHERMAN law. I know that there is a prejudice in some political and journalistic circles against a layman saying anything about that law except the single word “Guilty.” But let me suggest that you do not agitate for an amendment of the SHERMAN law. Supplement it, if you like, but do not amend it. The SHERMAN law has now been subjected to twenty years of the most careful, the most extensive and the most elaborate legal and judicial examination and determination. Under it you are working out a solution slowly, patiently, and with much doubt; but you are working out a solution of the relations of business to that law by the very processes which have always been those governing in our Anglo-Saxon life, the process of the application of the common law, building up from precedent to precedent; and the man who undertakes to amend that law will make it worse. The first thing that will be done in that case will be to except some privileged people from it, and the only people who will be excepted will be those with a large number of votes. …

“There is nothing new about all this conflict over large and new business undertakings. … As a matter of fact there has not a single thing been said about corporations, about large industrial combinations, which was not said in England about co-partnerships, when co-partnerships were first invented. You may go all the way back five hundred years, and you will find exactly these same expressions. I ran upon this the other day. Let me read it, and perhaps you may guess from what American daily newspaper it comes:

“‘The merchants form great companies and become wealthy; but many of them are dishonest and cheat one another. Hence the directors of the companies, who have charge of the accounts, are nearly always richer than their associates. Those who thus grow rich are clever, since they do not have the reputation of being thieves.’

“That was not published in New York, or Chicago or San Francisco. That is found in the Chronicle of Augsburg, Germany, in 1512. In one year more that quotation will be four hundred years old. They were very much disturbed about this problem in those days, and the Diet of Nuremberg appointed a committee in 1522 to investigate monopolies. They sent an inquiry to several cities, to Boards of Trade and Chambers of Commerce, to know what better be done. This is the answer they got from Augsburg: 

“‘It is impossible to limit the size of the companies for that would limit business and hurt the common welfare; the bigger and more numerous they are the better for everybody. If a merchant is not perfectly free to do business in Germany he will go elsewhere to Germany’s loss. Any one can see what harm and evil such an action would mean to us. If a merchant cannot do business, above a certain amount, what is he to do with his surplus money? It is impossible to set a limit to business, and it would be well to let the merchant alone and put no restrictions on his ability or capital. * * * * * Some people talk of limiting the earning capacity of investments. This would be unbearable and would work great injustice and harm by taking away the livelihood of widows, orphans and other sufferers, noble and non-noble, who derive their income from investments in these companies. Many merchants out of love and friendship invest the money of their friends—men, women and children—who know nothing of business, in order to provide them with an assured income. Hence any one can see that the idea that the merchant companies undermine the public welfare ought not to be seriously considered. …’

“I read that to illustrate that the business and political mind of Europe has been on this question for at least four hundred years. … We must learn that economic laws, economic principles, based on everlasting human nature are fundamental and vital, and your care and mine, as citizens of this Republic, is not to interfere with these laws, not to check them; but to see to it that no moral wrong is done in their name. That is a very different proposition from the one of overturning a great economic and industrial system by statute.” 

At least for me, the closing paragraph contains food for thought. 

Summer 2017 Journal of Economic Perspectives Available Online

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon was launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available on-line, from the current issue back to the first issue. Here, I’ll start with the Table of Contents for the just-released Summer 2017 issue, which in the Taylor household is sometimes known as issue #121. Below that are abstracts and direct links for all of the papers. I will almost certainly blog about some of the individual papers in the next week or two, as well.

___________________

Symposium: The Global Monetary System

"International Monetary Relations: Taking Finance Seriously," by Maurice Obstfeld and Alan M. Taylor
In this essay, we highlight the interactions of the international monetary system with financial conditions, not just with the output, inflation, and balance of payments goals usually discussed. We review how financial conditions and outright financial crises have posed difficulties for each of the main international monetary systems in the last 150 years or so: the gold standard, the interwar period, the Bretton Woods system, and the current system of floating exchange rates. We argue that even as the world economy has evolved and sentiments have shifted among widely different policy regimes, there remain three fundamental challenges for any international monetary and financial system: How should exchange rates between national currencies be determined? How can countries with balance of payments deficits reduce these without sharply contracting their economies and with minimal risk of possible negative spillovers abroad? How can the international system ensure that countries have access to an adequate supply of international liquidity—financial resources generally acceptable to foreigners in all circumstances? In concluding, we evaluate how the current international monetary system answers these questions.
Full-Text Access | Supplementary Materials

"The Safe Assets Shortage Conundrum," by Ricardo J. Caballero, Emmanuel Farhi and Pierre-Olivier Gourinchas
A safe asset is a simple debt instrument that is expected to preserve its value during adverse systemic events. The supply of safe assets, private and public, has historically been concentrated in a small number of advanced economies, most prominently the United States. Over the last few decades, with minor cyclical interruptions, the supply of safe assets has not kept up with global demand. The reason is straightforward: the collective growth rate of the advanced economies that produce safe assets has been lower than the world's growth rate, which has been driven disproportionately by the high growth rate of high-saving emerging economies such as China. The signature of this growing shortage is a steady increase in the price of safe assets; equivalently, global safe interest rates must decline, as has been the case since the 1980s. The early literature, brought to light by Ben Bernanke's famous "savings glut" speech of 2005, focused on a general shortage of assets without isolating its safe asset component. The distinction, however, has become increasingly important over time, particularly in the aftermath of the subprime mortgage crisis and its sequels. We begin by describing the main facts and macroeconomic implications of safe asset shortages. Faced with such a structural conundrum, what are the likely short- to medium-term escape valves? We analyze four of them, each with its own macroeconomic and financial trade-offs.
Full-Text Access | Supplementary Materials

"Dealing with Monetary Paralysis at the Zero Bound," by Kenneth Rogoff
In recent years, a key constraint for central banks has been the zero lower bound on nominal interest rates. Central banks fear that if they push short-term policy interest rates too deeply negative, there will be a massive flight into paper currency. This paper asks whether, in a world where paper currency is becoming increasingly vestigial outside small transactions (at least in the legal, tax-compliant economy), there might be relatively simple ways to finesse the zero bound without affecting how most ordinary people live. Surprisingly, this question gets little attention compared to the massive number of articles that take the zero bound as given and look for out-of-the-box solutions for dealing with it. In an inversion of the old joke, it is a bit as if the economics literature has insisted on positing "assume we don't have a can opener," without considering the possibility that we might be able to devise one. It makes sense not to wait until the next financial crisis to develop plans. Fundamentally, there is no practical obstacle to paying negative (or positive) interest rates on electronic currency and, as we shall see, effective negative rate policy does not require eliminating paper currency.
Full-Text Access | Supplementary Materials

Symposium: The Modern Corporation

"Is the US Public Corporation in Trouble?" by Kathleen M. Kahle and René M. Stulz
We examine the current state of the US public corporation and how it has evolved over the last 40 years. After falling by 50 percent since its peak in 1997, the number of public corporations is now smaller than 40 years ago. These corporations are now much larger and over the last twenty years have become much older; they invest differently, as the average firm invests more in R&D than it spends on capital expenditures; and compared to the 1990s, the ratio of investment to assets is lower, especially for large firms. Public firms have record high cash holdings and, in most recent years, the average firm has more cash than long-term debt. Measuring profitability by the ratio of earnings to assets, the average firm is less profitable, but that is driven by smaller firms. Earnings of public firms have become more concentrated—the top 200 firms in profits earn as much as all public firms combined. Firms' total payouts to shareholders as a percent of earnings are at record levels. Possible explanations for the current state of the public corporation include a decrease in the net benefits of being a public company, changes in financial intermediation, technological change, globalization, and consolidation through mergers.
Full-Text Access | Supplementary Materials

"The Agency Problems of Institutional Investors," by Lucian A. Bebchuk, Alma Cohen and Scott Hirst
Financial economics and corporate governance have long focused on the agency problems between corporate managers and shareholders that result from the dispersion of ownership in large publicly traded corporations. In this paper, we focus on how the rise of institutional investors over the past several decades has transformed the corporate landscape and, in turn, the governance problems of the modern corporation. The rise of institutional investors has led to increased concentration of equity ownership, with most public corporations now having a substantial proportion of their shares held by a small number of institutional investors. At the same time, these institutions are controlled by investment managers, which have their own agency problems vis-à-vis their own beneficial investors. We develop an analytical framework for understanding the agency problems of institutional investors, and apply it to examine the agency problems and behavior of several key types of investment managers, including those that manage mutual funds—both index funds and actively managed funds—and activist hedge funds. We show that index funds have especially poor incentives to engage in stewardship activities that could improve governance and increase value. Activist hedge funds have substantially better incentives than managers of index funds or active mutual funds. While their activities may partially compensate, we show that they do not provide a complete solution for the agency problems of other institutional investors.
Full-Text Access | Supplementary Materials

"Towards a Political Theory of the Firm," by Luigi Zingales
The revenues of large companies often rival those of national governments. Among the largest corporations in 2015, some had private security forces that rivaled the best secret services, public relations offices that dwarfed a US presidential campaign headquarters, more lawyers than the US Justice Department, and enough money to capture (through campaign donations, lobbying, and even explicit bribes) a majority of the elected representatives. The only powers these large corporations missed were the power to wage war and the legal power of detaining people, although their political influence was sufficiently large that many would argue that, at least in certain settings, large corporations can exercise those powers by proxy. Yet in economics, the commonly prevailing view of the firm ignores all these elements of politics and power. We must recognize that large firms have considerable power to influence the rules of the game. I call attention to the risk of a "Medici vicious circle," in which economic and political power reinforce each other. The possibility and extent of a "Medici vicious circle" depends upon several nonmarket factors. I discuss how they should be incorporated in a broader "Political Theory" of the firm.
Full-Text Access | Supplementary Materials

"A Skeptical View of Financialized Corporate Governance," by Anat R. Admati
Managerial compensation typically relies on financial yardsticks, such as profits, stock prices, and return on equity, to achieve alignment between the interests of managers and shareholders. But financialized governance may not actually work well for most shareholders, and even when it does, significant tradeoffs and inefficiencies can arise from the conflict between maximizing financialized measures and society's broader interests. Effective governance requires that those in control are accountable for actions they take. However, those who control and benefit most from corporations' success are often able to avoid accountability. The history of corporate governance includes a parade of scandals and crises that have caused significant harm. After each, most key individuals tend to minimize their own culpability. Common claims from executives, boards of directors, auditors, rating agencies, politicians, and regulators include "we just didn't know," "we couldn't have predicted," or "it was just a few bad apples." Economists, as well, may react to corporate scandals and crises with their own version of "we just didn't know," as their models had ruled out certain possibilities. Effective governance of institutions in the private and public sectors should make it much more difficult for individuals in these institutions to get away with claiming that harm was out of their control when in reality they had encouraged or enabled harmful misconduct, and ought to have taken action to prevent it.
Full-Text Access | Supplementary Materials

Articles
"The Causes and Costs of Misallocation," by Diego Restuccia and Richard Rogerson
Why do living standards differ so much across countries? A consensus in the development literature is that differences in productivity are a dominant source of these differences. But what accounts for productivity differences across countries? One explanation is that frontier technologies and best practice methods are slow to diffuse to low-income countries. The recent literature on misallocation offers a distinct but complementary explanation: low-income countries are not as effective in allocating their factors of production to their most efficient use. We provide our perspective on three key questions. First, how important is misallocation? Second, what are the causes of misallocation? And third, beyond the direct cost of lower contemporaneous output, are there additional costs associated with misallocation? A summary of our answers is as follows: Misallocation appears to be a substantial channel in accounting for productivity differences across countries, but the measured magnitude of the effects depends on the approach and context. Researchers have not yet found a dominant source of misallocation; instead, many specific factors seem to contribute a small part of the overall effect. Beyond the static cost of misallocation, we believe that the dynamic effects of misallocation on productivity growth are significant and deserve much more attention going forward.
Full-Text Access | Supplementary Materials

"Federal Budget Policy with an Aging Population and Persistently Low Interest Rates," by Douglas W. Elmendorf and Louise M. Sheiner
Some observers have argued that the projections for high and rising debt pose a grave threat to the country's economic future and give the government less fiscal space to respond to recessions or other unexpected developments, so they urge significant changes in tax or spending policies to reduce federal borrowing. In stark contrast, others have noted that interest rates on long-term federal debt are extremely low and have argued that such persistently low interest rates justify additional federal borrowing and investment, at least for the short and medium term. We analyze this controversy, focusing on two main issues: the aging of the US population and interest rates on US government debt. It is generally understood that these factors play an important role in the projected path of the US debt-to-GDP ratio. What is less recognized is that these changes also have implications for the appropriate level of US debt. We argue that many—though not all—of the factors that may be contributing to the historically low level of interest rates imply that both federal debt and federal investment should be substantially larger than they would be otherwise. In conclusion, although significant policy changes to reduce federal budget deficits ultimately will be needed, they do not have to be implemented right away. Instead, the focus of federal budget policy over the coming decade should be to increase federal investment while enacting changes in federal spending and taxes that will reduce deficits gradually over time.
Full-Text Access | Supplementary Materials

"How Digitization Has Created a Golden Age of Music, Movies, Books, and Television," by Joel Waldfogel
Digitization is disrupting a number of copyright-protected media industries, including books, music, radio, television, and movies. Once information is transformed into digital form, it can be copied and distributed at near-zero marginal costs. This change has facilitated piracy in some industries, which in turn has made it difficult for commercial sellers to continue generating the same levels of revenue for bringing products to market in the traditional ways. Yet despite the sharp revenue reductions for recorded music, as well as threats to revenue in some other traditional media industries, other aspects of digitization have had the offsetting effects of reducing the costs of bringing new products to market in music, movies, books, and television. On balance, digitization has increased the number of new products that are created and made available to consumers. Moreover, given the unpredictable nature of product quality, growth in new products has given rise to substantial increases in the quality of the best products. Although there were concerns that consumer welfare from media products would fall, the opposite scenario has emerged—a golden age for consumers who wish to consume media products.
Full-Text Access | Supplementary Materials

Features

"Retrospectives: Friedrich Hayek and the Market Algorithm," by Samuel Bowles, Alan Kirman and Rajiv Sethi
Friedrich A. Hayek (1899-1992) is known for his vision of the market economy as an information processing system characterized by spontaneous order: the emergence of coherence through the independent actions of large numbers of individuals, each with limited and local knowledge, coordinated by prices that arise from decentralized processes of competition. Hayek is also known for his advocacy of a broad range of free market policies and, indeed, considered the substantially unregulated market system to be superior to competing alternatives precisely because it made the best use of dispersed knowledge. Our purpose in writing this paper is twofold: First, we believe that Hayek's economic vision and critique of equilibrium theory not only remain relevant, but apply with greater force as information has become ever more central to economic activity and the complexity of the information aggregation process has become increasingly apparent. Second, we wish to call into question Hayek's belief that his advocacy of free market policies follows as a matter of logic from his economic vision. The very usefulness of prices (and other economic variables) as informative messages—which is the centerpiece of Hayek's economics—creates incentives to extract information from signals in ways that can be destabilizing. Markets can promote prosperity but can also generate crises. We will argue, accordingly, that a Hayekian understanding of the economy as an information-processing system does not support the type of policy positions that he favored. Thus, we find considerable lasting value in Hayek's economic analysis while nonetheless questioning the connection of this analysis to his political philosophy.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor

Housing Prices: Highs and Lows

The interaction of a bubble in real estate prices with the banking and financial sector helped drive the US economy into the Great Recession that started in 2007. For an overview of how the US housing market is faring a decade later, a useful starting point is The State of the Nation's Housing 2017, by the Joint Center for Housing Studies of Harvard University.

Here's the pattern of overall US housing prices: adjusted for inflation, home prices are about 30% above their level in 2000, but still below where they were at the peak of the housing bubble in 2006.
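The "adjusted for inflation" comparison can be made concrete with a small calculation: deflate a nominal home-price index by the CPI so that every year is expressed in the same base-year dollars. The index and CPI values below are invented for illustration (not taken from the report), chosen only to mimic the described pattern.

```python
# Sketch of an inflation adjustment: deflate a nominal home-price index
# by the CPI to express each year in constant base-year (2000) terms.
# All numbers here are hypothetical.

nominal_index = {2000: 100.0, 2006: 190.0, 2016: 175.0}  # nominal home prices
cpi = {2000: 100.0, 2006: 117.0, 2016: 138.0}            # consumer price index

def real_index(year, base=2000):
    """Home-price index for `year` in constant base-year dollars."""
    return nominal_index[year] * cpi[base] / cpi[year]

for year in sorted(nominal_index):
    print(year, round(real_index(year), 1))
```

With these made-up numbers, the real index for 2016 is about 127: above its 2000 level, but still well below the real 2006 peak of about 162, which is the qualitative pattern described above.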

Perhaps not surprisingly, given that pattern, even ten years after the start of the financial crisis in 2007, there are millions of Americans who are still "underwater" on their mortgages: that is, what they owe is more than what the home would sell for: "According to CoreLogic, the number of households underwater on their mortgages dropped from 4.3 million in 2015 to 3.2 million in 2016, reducing their share of all homeowners from 8.4 percent to 6.2 percent. … Despite this progress, the share of homeowners with negative equity in some markets is still more than double the national rate. For example, 16.1 percent of homeowners in the Miami metro area were underwater on their mortgages in 2016, along with 15.5 percent in Las Vegas and 12.6 percent in Chicago. At the other extreme, only 0.6 percent of owners in the San Francisco metro area had negative equity."

Comparisons like these help to emphasize that price patterns have been VERY different across US housing markets. For example, here's one figure comparing the rise in home prices in the ten highest-cost metro areas with the lowest-cost areas and the US average. Clearly, the bubble in housing prices was much more extreme in these high-cost markets. It's also striking that housing prices in the high-cost markets were roughly double the national average in 2000, but more like triple the national average in 2016.

Here's another look at differences in housing prices across local markets using a heat map diagram. The orange areas are places where housing prices rose at least 40%, and sometimes much more, between 2000 and 2016. The blue areas show where housing prices are actually lower, and sometimes much lower, in 2016 than in 2000.

One of my takeaways here is that the affordability of housing is in some ways a regional and even a local issue. Whether housing is "unaffordable" is commonly measured by using a rule of thumb like the share of people in a given market who could afford the median-priced house if they spent no more than 30% of their income on housing. By this measure, a very high proportion of households at or below the poverty line will face a problem of unaffordable housing in pretty much every real estate market. But in the markets where housing prices are highest and have been rising most quickly, a substantial share of middle-class families can face housing affordability issues, too.
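As a concrete sketch of that rule of thumb, the code below computes the share of a set of household incomes that could carry the mortgage on a median-priced house while keeping the payment at or under 30 percent of income. The incomes, price, and loan terms are all hypothetical, and a real affordability measure would also fold in property taxes, insurance, and utilities.

```python
# Hypothetical sketch of the 30-percent-of-income affordability rule of thumb.

def annual_mortgage_payment(price, rate=0.04, years=30, down=0.20):
    """Annual payment on a standard fixed-rate, fully amortizing mortgage."""
    principal = price * (1 - down)
    m = rate / 12                      # monthly interest rate
    n = years * 12                     # number of monthly payments
    monthly = principal * m / (1 - (1 + m) ** -n)
    return 12 * monthly

def can_afford(income, price, share=0.30):
    """True if the annual payment is no more than `share` of income."""
    return annual_mortgage_payment(price) <= share * income

# Invented household incomes and an invented metro-area median price.
incomes = [40_000, 60_000, 80_000, 120_000, 200_000]
median_price = 400_000

affordable_share = sum(can_afford(y, median_price) for y in incomes) / len(incomes)
print(f"Share who can afford the median house: {affordable_share:.0%}")
```

With these invented numbers, the annual payment works out to roughly $18,300, so only the three highest incomes clear the 30 percent threshold; raise the median price and the affordable share shrinks further, which is the middle-class squeeze in high-cost markets described above.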

I have no problem with federal programs to help those at or near the poverty line afford necessities of life like food, medical care, housing, and so on. But it's not clear that federal programs are appropriate to help those who are not poor, but face an affordability problem because they live in a region with a high cost of housing. It's not clear why those who live in Pocatello or Dubuque or Detroit should pay taxes to subsidize the higher cost of living in south Florida or southern California.

In addition, one key reason why housing prices are so high in certain regions is that local rules can make it costly and time-consuming to build. In short, many of the areas with high housing prices have, to a substantial extent, brought those high costs on themselves by the ways in which they regulate and limit construction of new housing. Again, federal programs to help those with very low incomes are one thing, but when a local area or region has helped to create its own high housing costs, that same local area or region should also have most of the responsibility for addressing the affordability consequences of those decisions.

The theme that housing construction has been relatively slow surfaces in the report in a number of places: "A variety of factors may be holding back a more robust supply response. Labor shortages are a key constraint, reflecting both the substantial drop in the construction workforce following the housing bust and the lower number of young workers entering the industry. In addition, regulatory and stricter financing requirements have limited the supply of land available for both single- and multifamily housing construction. In combination, these forces raise development costs and make it less feasible to build smaller homes for first-time buyers and rental units affordable to low- and moderate-income households."

For example, the blue line in this figure shows the \”vacancy rate,\” which is quite low. The yellow bars show how many new houses have been constructed in the previous 10 years, which is also very low.

Similarly, the vacancy rate for rental apartments is also quite low, and rents are rising in most metro areas.

With low vacancy rates for housing and, at least in certain areas, very high prices, one would expect to see a resurgence of construction. But in a number of local markets, efforts to build additional housing stock are held back substantially by the conditions demanded by existing home-owners and imposed in their name by local regulators.

Those interested in this topic might also want to check: