What Should Banks Be Allowed To Do?

Charles Morris offers a nice overview of the course of bank regulation in the last century or so in “What Should Banks Be Allowed To Do?” It appears in the Fourth Quarter 2011 issue of the Economic Review published by the Federal Reserve Bank of Kansas City.

For me, the article serves two useful purposes. First, it’s a reminder that bank deregulation in the 1980s and 1990s wasn’t some clever ploy by financial-sector lobbyists, but was absolutely necessary given the evolution of the industry at that time. To be sure, the deregulation could have been carried out in different ways, posing different risks, but some kind of deregulation was unavoidable. Second, it makes the case for limiting what banks are allowed to do. I’m completely persuaded that the proposed reform would make the banking sector safer, with less risk of needing a bailout, but I’m less sure that the reform would make the financial sector as a whole safer. Let me say a bit more about each of these, drawing heavily on Morris’s exposition.

The banking sector as it emerged from the 1930s had five characteristics salient for the discussion here: 1) it was overseen by bank regulators for safety; 2) it had access to a public safety net of emergency loans from the Fed and deposit insurance; 3) it was forbidden to go into other financial areas like investment banking, securities dealing, or insurance; 4) it faced legal limits on the interest it could pay on deposits; and 5) it faced geographic restrictions on branching across state lines and within states. In short, it was an industry that was shielded from competition, limited in what it could do, and heavily regulated.

In the 1970s, the wheels began to come off this wagon. Those who wished to save money began to seek out investment options like mutual funds, including money market mutual funds, and insurance companies. Banks were limited in the interest rate they could pay, and inflation was high. Banks began to hemorrhage deposits. Those who wished to borrow money found other options, too. They borrowed through commercial paper, through high-yield bonds, and through securitized markets including mortgage-backed securities and asset-backed securities. Separate finance companies made car loans and loans for retail purchases. Other companies financed trade receivables.

In short, both the savers and the borrowers were migrating outside the banking industry, and the process of financial intermediation between them was increasingly happening in what came to be called the “shadow banking” sector. If the banks had not been deregulated and allowed to compete in this new financial sector–at least in some ways–the banks themselves would have shrunk dramatically and a very large part of U.S. saving and borrowing would have passed completely outside the purview of the bank regulators.

As banks were allowed to compete across the financial sector more broadly, starting in the 1980s, the industry began to consolidate. This made some sense: when banks were allowed to open branches within states and across state lines, for example, not as many small banks were needed. But the top banks not only became very large; an ever-growing share of their assets also moved outside the traditional business of banking. Here’s how Morris summarizes how the industry evolved (footnote omitted):

“Technological improvements, interstate banking, and the GLB [Gramm-Leach-Bliley] Act resulted in fewer banks and a much more concentrated banking industry, with the largest BHCs [bank holding companies] ultimately engaging in more varied and nontraditional activities. For example, the number of banks fell from about 12,500 in 1990 to about 6,400 in 2011. The share of industry assets held by the 10 largest BHCs rose from about 25 percent in 1990 to about 45 percent in 1997 (just before the GLB Act) and to almost 70 percent in 2011. The share of loans and deposits of the top 10 BHCs also rose sharply (Table 1). In addition, only four of the 10 largest BHCs that existed before the passage of the GLB Act remain today (Citigroup, JPMorgan Chase, Bank of America, and Wells Fargo), with those four BHCs having acquired five of the other top 10 BHCs.

“Table 1 also shows how the activities of the 10 largest BHCs have changed in the past 14 years. In 1997, the share of banking assets relative to total assets at these companies was 87 percent, with only one company having a share less than 80 percent. Today, the share of banking assets is 58 percent, with only two BHCs having a share greater than 80 percent.”

Morris’s diagnosis and proposed solution are straightforward. Bank holding companies have gotten into too many risky financial activities, and so should be restrained. But Morris is also clearly and sensibly aware that just trying to turn back the clock to 1930s-style regulated banking isn’t possible. That toothpaste is out of the tube. He suggests that banks be allowed to pursue three areas of business:

  • Commercial banking—deposit taking and lending to individuals and businesses.
  • Investment banking—underwriting securities (stocks and bonds) and providing advisory services.
  • Asset and wealth management—managing assets for individuals and institutions.

Conversely, Morris argues that banks should be barred from three other areas:

  • Dealing and market making—intermediating securities, money market instruments, and over-the-counter derivatives transactions for customers.
  • Brokerage services—brokering for retail and institutional investors, including hedge funds (prime brokerage).
  • Proprietary trading—trading for an organization’s own account and owning hedge and private equity funds.

The key distinction here for Morris is that underwriting securities and providing advice are largely fee-based services. They don’t involve putting much of a bank’s capital at risk. Dealing, market-making, hedge funds, and private equity all involve taking risks with capital that are harder both for the institution to understand and for regulators to monitor.

Morris’s proposal is certainly sensible enough, but it does leave me with a couple of questions. First, if banks were holding lots of mortgage loans, as they clearly could be under Morris’s proposal, then they would have been vulnerable to a meltdown of housing prices like the one that has occurred. Thus, it’s not clear to me that anything in this proposal would have limited the very aggressive home lending that occurred or the price meltdown afterward. Indeed, the sort of limited banks Morris advocates might in some ways have been even more exposed to losses in the housing market.

Second, Morris’s proposal, like all “narrow bank” proposals, would clearly make the banking sector safer. But one of the disturbing facts about the financial troubles of 2008 was that it wasn’t just commercial banks that were deemed to be systemically important to the U.S. economy: it was also investment banks like Bear Stearns, money market funds, insurance companies like AIG, brokers that sell Treasury bonds, and others. Focusing on banking is all very well, but the shadow banking sector and the potential risks that it poses aren’t going away.

The Mundane Cost Obstacle to Nuclear Power

I’ve long believed that the main problems with expanding nuclear power were related to health and safety concerns: for example, the small chance of a plant malfunctioning, along with issues related to waste disposal and possible links between nuclear power technology and weapons technology. But Lucas Davis argues persuasively in “Prospects for Nuclear Power” in the Winter 2012 issue of my own Journal of Economic Perspectives, which is freely available on-line courtesy of the American Economic Association, that I’ve been assuming too much. Here’s Davis (citations omitted):

“Nuclear power has long been controversial because of concerns about nuclear accidents, storage of spent fuel, and about how the spread of nuclear power might raise risks of the proliferation of nuclear weapons. These concerns are real and important. However, emphasizing these concerns implicitly suggests that unless these issues are taken into account, nuclear power would otherwise be cost effective compared to other forms of electricity generation. This implication is unwarranted. Throughout the history of nuclear power, a key challenge has been the high cost of construction for nuclear plants. Construction costs are high enough that it becomes difficult to make an economic argument for nuclear even before incorporating these external factors. This is particularly true in countries like the United States where recent technological advances have dramatically increased the availability of natural gas. The chairman of one of the largest U.S. nuclear companies recently said that his company would not break ground on a new nuclear plant until the price of natural gas was more than double today’s level and carbon emissions cost $25 per ton. This comment summarizes the current economics of nuclear power pretty well. Yes, there is a certain confluence of factors that could make nuclear power a viable economic option. Otherwise, a nuclear power renaissance seems unlikely.”

The argument from Davis is complemented by some other recent discussions of nuclear power. The Federation of American Scientists has a report out on The Future of Nuclear Power in the United States, edited by Charles D. Ferguson and Frank A. Settle. The most recent issue of the Economist magazine (March 10) has a 14-page cover story on “Nuclear Power: The Dream that Failed.” Finally, a Report to the Secretary of Energy by the Blue Ribbon Commission on America’s Energy Future was released in late January.

Here is some basic background from Davis in his JEP article. The first figure shows nuclear power plants under construction around the world. Notice that the plants under construction in the United States and western Europe dropped off to near-zero in the 1990s. The recent spike in plants under construction is driven by the “other” category, which is largely China, but it remains to be seen how many of these plants will end up being completed.

The next figure shows the rising costs of constructing nuclear power plants in the United States. The costs are per kilowatt of capacity, and so adjusted for size. The costs are also adjusted for inflation.

Finally, this figure shows the slowdown in construction times–for example, plants started in the 1960s were completed in 8.6 years, while those started in the 1970s took 14.1 years. Moreover, there was growing uncertainty as to whether a nuclear power plant would be completed: 89% of plants announced in the 1960s were completed, compared with only 25% of those announced in the 1970s.

Of course, it’s not possible to separate cleanly the safety concerns over nuclear power from these cost issues: additional safety precautions–and the accompanying paperwork–are part of what drives up costs. But perhaps the more fundamental story here is that technological progress in nuclear power hasn’t been fast enough to assuage concerns about safety and to drive down costs. Stephen Maloney digs into this in some detail in Chapter 2 of the FAS report, “A Critical Examination of Nuclear Power’s Costs.”

“Since the nuclear industry’s inception more than 50 years ago, its forecasts for costs have been consistently unreliable. The “first generation” plants, comprising both prototype reactors and the standard designs of the 1950s-1960s, failed to live up to promised economics. This trend continued with the construction of Generation II plants completed in the 1970s, which make up the present nuclear fleet.

“First, the total costs were far higher than for coal-generated electricity. In particular, the capital cost of nuclear plants built through 1980 were, on average, 50 percent higher than comparably-sized coal-fired plants, adjusting for inflation and including backfits to meet Clean Air Act standards. Second, there were extraordinary cost escalations over the original low cost promises. Nuclear plant construction costs escalated approximately 24 percent per calendar year compared to 6 percent annual escalation for coal plants. Third, the economies of scale expected were not achieved in the Generation II designs. The scale-up of nuclear plants brought less than half the economic efficiencies projected.

“In addition, over 120 nuclear units, approximately half the reactors ordered, were never started or cancelled. The total write-offs were more than $15 billion in nominal dollars. … In the late 1970s, the Atomic Industrial Forum (AIF), predecessor to the Nuclear Energy Institute, identified the main drivers of unmet expectations as growing understanding of nuclear accident hazards, failure of regulatory standardization policies, and increased documentation standards to ensure as-built plants actually met safety standards. The combined effects doubled the quantities of materials, equipment, and labor needed, and tripled the magnitude of the engineering effort for building a nuclear power plant.”
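The gap between 24 percent and 6 percent annual escalation compounds dramatically over time. A quick back-of-the-envelope calculation (my own illustration, using the rates from the quotation above; the ten-year horizon is an arbitrary choice) shows how:

```python
# Compounding the annual construction-cost escalation rates cited above.
# The rates come from the Maloney quotation; the horizon is illustrative.
nuclear_rate = 0.24  # ~24% per calendar year for nuclear plants
coal_rate = 0.06     # ~6% per year for coal plants
years = 10

nuclear_multiple = (1 + nuclear_rate) ** years
coal_multiple = (1 + coal_rate) ** years

print(f"Nuclear cost multiple after {years} years: {nuclear_multiple:.1f}x")  # ~8.6x
print(f"Coal cost multiple after {years} years: {coal_multiple:.1f}x")        # ~1.8x
```

At those rates, a decade of escalation multiplies nuclear construction costs roughly eightfold while coal costs don't even double, which helps explain how the write-offs piled up.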

Of course, it’s possible to sketch business and technological scenarios under which nuclear power plants of the future use simpler, safer designs, which combine with economies of scale in production to drive down costs. But such predictions haven’t held true over the history of nuclear power, and they don’t seem to be holding true recently, either. Here’s one of many examples, from Maloney:

“In June 2006, a consortium of companies announced plans to build two more reactors at the South Texas Project site for an estimated cost of $5.2 billion. NRG, the lead company, made history by becoming the first company to file an application with the NRC. CPS Energy, a municipal utility, was one of its partners. In October 2007, CPS Energy’s board approved $206 million for preliminary design and engineering. In June 2009, NRG revised the estimate to $10 billion for the two reactors, including finance charges. A few weeks later, this estimate rose to $13 billion, including finance charges. Later that year, the estimate reached $18.2 billion …” Cost overruns of similar magnitude aren’t just a U.S. phenomenon; for example, they have also occurred at recent nuclear power projects in France and in Finland.

To be sure, there are promising new nuclear technologies out there. One hot topic is small modular reactors, discussed both in the Economist article and by Daniel Ingersoll in Chapter 10 of the FAS report. But at some point, a degree of skepticism seems appropriate. The Economist has a wonderful quotation from Admiral Hyman Rickover, who drove the process that created America’s nuclear submarines, and who commented back in the 1950s:

“An academic reactor or reactor plant almost always has the following basic characteristics: (1) It is simple. (2) It is small. (3) It is cheap. (4) It is light. (5) It can be built very quickly. (6) It is very flexible in purpose. (7) Very little development will be required. It will use off-the-shelf components. (8) The reactor is in the study phase. It is not being built now. On the other hand a practical reactor can be distinguished by the following characteristics: (1) It is being built now. (2) It is behind schedule. (3) It requires an immense amount of development on apparently trivial items. (4) It is very expensive. (5) It takes a long time to build because of its engineering development problems. (6) It is large. (7) It is heavy. (8) It is complicated.”

Finally, arguments over appropriate disposal of nuclear waste will surely continue. For an overview of these issues, a useful starting point is the Report to the Secretary of Energy by the Blue Ribbon Commission on America’s Energy Future that was released in late January. Personally, I didn’t find the Commission report to be especially encouraging about resolving these issues. For example, its first recommendation is to start a process of encouraging communities to volunteer to be nuclear waste disposal sites, a process the Commission thinks might take 15-20 years. Having just watched the argument over a possible repository at Yucca Mountain in Nevada run for 25 years, before the Obama administration’s decision to halt that process, this time frame seems optimistic. Of course, there are alternatives: consolidated storage facilities, and technologies for processing nuclear waste. But the alternatives aren’t cost-free, either.

Nuclear power isn’t going away. Plants that have been working well still have several decades to run, and the marginal costs of running them are now low. Additional nuclear power plants will be built in countries where the government makes it a priority, or perhaps in some settings where other sources of power are extremely high cost. But as the U.S. enters what seems to be a time of cheap and plentiful natural gas, building a substantial number of new nuclear power plants in this country seems highly unlikely.

Small Firms and Job Creation

Most job creation comes from small firms–and so does most job destruction. The Congressional Budget Office has a useful summary of the evidence in a short March 2012 report, “Small Firms, Employment, and Federal Policy.”

Here are some basic facts about firm size and employment. In 2011, firms with more than 1,000 employees were 0.2% of all firms, but had 38.6% of all private-sector employees. Conversely, firms with fewer than 19 employees made up 87.5% of all firms, but had 18.4% of total employees. Here’s the detailed table.

The share of private sector employment by firm size has barely budged in recent decades. The share of employment in very small enterprises of 1-19 employees is down a few percentage points, and the share of employment at the largest employers of over 500 employees is up just a touch, but the overall stability of these patterns is remarkable.

Studies of the effect of small firms on employment often seem to reach mixed results. One reason becomes clear if you think about this figure for a moment. If small firms grow enough in size, they aren’t small any more. So does that mean that small firms stop contributing to employment growth if they get larger? Conversely, imagine a large firm in a death spiral, bleeding employees until it becomes a smaller firm. Does this now become an example of a small firm that is losing employees? If people are laid off at big firms, and then try to start small companies, is this evidence of the dynamism of small firms–or of a sick economy?

The emphasis in recent studies of small firms and employment is on adjusting for the age of the firm. Here’s the CBO explanation (footnotes omitted):

“It is widely believed that small firms promote job growth. In fact, small firms both create and eliminate far more jobs than large firms do. On balance, they account for a disproportionate share of net job growth—however, that greater net growth is driven primarily by the creation of new small firms, frequently referred to as start-ups, rather than by the expansion of mature small firms. …

“Recent research, however, has found that it is young small firms, especially start-ups, that grow faster—and consequently create jobs at a higher rate—than either large firms or established small firms do. One study found that the smallest firms, those with between one and four employees, grew 4.7 percent faster than the largest firms, those with more than 10,000 employees. However, when the comparison is made between firms of the same age, small firms grow more slowly than large firms do.

“Almost all firms start small. Many fail and, of those that do survive, most have no desire to expand beyond “small firm” status. Only a few grow substantially and become large firms. Thus, the faster average growth of young small firms is driven by the ambitions and successes of a fairly narrow set of start-up employers.”

In other words, the issue isn’t about small size, but rather about whether the firm is in a line of business that offers possibilities for dramatic expansion. Many small firms, like certain retail stores or various personal and professional services, don’t have much potential to expand substantially, and aren’t really seeking to do so. I remember a venture capitalist, who would be expected to be sympathetic to good news about small firms, once telling me: “The thing you need to remember about small firms is that a lot of what they do is sell to bigger firms.” I’m not sure that statement is true about small firms in general, but I think it was true in terms of what he was looking for: that is, firms that were currently small but had the potential to link to a much wider market.

International Poverty: Progress and a Puzzle

Shaohua Chen and Martin Ravallion at the World Bank have prepared a “briefing note” with good news: global poverty rates are dropping. They write: “That means that 1.29 billion people in 2008 lived below $1.25 a day, as compared to 1.94 billion in 1981. 2.47 billion people in 2008 consumed less than $2 a day, as compared to 2.59 billion in 1981.”

The idea of measuring poverty as $1.25 per day or $2 per day may shock some Americans. After all, in the United States the poverty threshold in 2011 for a three-person family, single parent with two children, was $18,123, which works out to about $16.50 per person per day. But as the report explains: “$1.25 is the average of the national poverty lines found in the poorest 10-20 countries. Using this line, poverty in the world as a whole is being judged by what “poverty” means in the world’s poorest countries. Naturally, better off countries tend to have higher poverty lines than this frugal standard. $2 a day is the median poverty line for all developing countries.”
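The U.S. figure cited above is just a per-person, per-day conversion of the annual threshold. A quick check of the arithmetic (my own, using the numbers in the text):

```python
# Converting the 2011 U.S. poverty threshold for a three-person family
# (single parent, two children) into a per-person, per-day figure.
annual_threshold = 18123  # dollars per year, from the text
family_size = 3
days_per_year = 365

per_person_per_day = annual_threshold / (family_size * days_per_year)
print(f"${per_person_per_day:.2f} per person per day")  # about $16.55
```

That comes to roughly $16.50 per person per day, more than thirteen times the $1.25 global line.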

The reduction in poverty rates reaches across all regions of the developing world. Not surprisingly, much of the most rapid reduction of poverty has come from China. As the figure shows, it used to be that including China with the rest of the developing world raised the overall poverty rate; now, including China reduces the overall poverty rate. Nonetheless, 173 million people in China remain below the $1.25 poverty line. As the World Bank describes it:

“Looking back to the early 1980s, East Asia was the region with the highest incidence of poverty in the world, with 77% living below $1.25 a day in 1981. By 2008 this had fallen to 14%. In China alone, 662 million fewer people living in poverty by the $1.25 standard, though progress in China has been uneven over time. In 2008, 13% (173 million people) of China’s population still lived below $1.25 a day. In the developing world outside China, the $1.25 poverty rate has fallen from 41% to 25% over 1981-2008, though not enough to bring down the total number of poor, which was around 1.1 billion in both 1981 and 2008, although rising in the 1980s and ‘90s, then falling since 1999 …”

The reduction in poverty rates is clearly good news, but the pattern of reduction across countries poses a puzzle that Martin Ravallion raises in “Why Don’t We See Poverty Convergence?” in the February 2012 issue of the American Economic Review. The article isn’t freely available on-line, although many academics will have access through their libraries. He sets up the discussion of the puzzle this way:

“Two prominent stylized facts about economic development are that there is an advantage of backwardness, such that in a comparison of two otherwise similar countries the one with the lower initial mean income will tend to see the higher rate of economic growth, and that there is an advantage of growth, whereby a higher mean income tends to come with a lower incidence of absolute poverty. Past empirical support for both stylized facts has almost invariably assumed that the dynamic processes for growth and poverty reduction do not depend directly on the initial level of poverty. Under that assumption, the two stylized facts imply that we should see poverty convergence: countries starting out with a high incidence of absolute poverty should enjoy a higher subsequent growth rate in mean consumption and (hence) a higher proportionate rate of poverty reduction. That poses a puzzle. The data on poverty measures over time for 90 developing countries assembled for this article reveal little or no sign of poverty convergence.”

Here is Ravallion’s figure to illustrate the point. The horizontal axis shows poverty rates for 90 countries, mostly from the 1980s and 1990s as data became available. The vertical axis shows the decline in poverty rates from the start of the data up to 2005. Notice that the best-fit line doesn’t show that countries which started from higher levels of poverty have larger reductions in poverty: if anything, the relationship goes a bit the other way.

Ravallion puts it this way (citation omitted): “The overall poverty rate of the developing world has been falling since at least 1980, but the proportionate rate of decline has been no higher in its poorest countries.” This finding suggests that while poverty rates are diminishing over time, there is no particular reason, based on past patterns, to expect that poverty rates will fall more quickly where poverty is greatest. Ravallion offers the clear implication: there do often seem to be “advantages of backwardness,” by which relatively poorer countries can draw on global knowledge and markets to reach a faster growth rate, but there is apparently also a drag from high poverty rates, in which the existence of a high poverty rate makes it harder for a country to reduce its poverty rate further. These factors tend to offset each other, and as a result, the poorer countries don’t in fact reduce their poverty rates faster.

Why might the existence of high poverty rates make it harder to grow? Perhaps high poverty rates reflect the lack of a middle class, which in turn makes it harder for an economy to grow. Perhaps high poverty rates reflect a poorly-educated workforce, which makes investment in the country unprofitable, which slows growth. Perhaps high poverty rates lead to poor health, which reduces the prospects for growth. Ravallion investigates whether factors like schooling, life expectancy, and the price of investment goods might provide a link from high initial poverty to the lack of reductions in poverty, but doesn’t find statistical connections. Understanding why the poorest countries have no greater success in reducing their poverty rates remains a good research topic.

The U.S. and Europe: Productivity Puzzles and Information Technology

From the 1970s into the 1990s, productivity levels in Europe, as measured by output per hour, were converging with those in the United States. But since the mid-1990s, the U.S. productivity lead has been expanding.

Why? Nicholas Bloom, Raffaella Sadun, and John Van Reenen try to answer that question in “Americans Do IT Better: US Multinationals and the Productivity Miracle,” which appears in the February 2012 issue of the American Economic Review. The article isn’t freely available on-line, although many in academia will have access through a library subscription. Part of the answer is that the resurgence of U.S. productivity since about 1995 has been driven by industries that either make or use information and communications technology. As measured by the total stock of information technology capital divided by hours worked, the U.S. economy has opened a larger lead over Europe.

These patterns are fairly well-known in the economics literature on determinants of economic growth. For example, in the Winter 2008 issue of my own Journal of Economic Perspectives, which is freely available on-line courtesy of the American Economic Association, Dale W. Jorgenson, Mun S. Ho, and Kevin J. Stiroh offer “A Retrospective Look at the U.S. Productivity Growth Resurgence,” which discusses how the resurgence in U.S. productivity growth in the mid-1990s was first led by productivity increases in the information-technology-producing sector, and then by productivity increases in industries that made intensive use of information technology. In that same issue, Bart van Ark, Mary O’Mahony, and Marcel P. Timmer discuss “The Productivity Gap Between Europe and the United States: Trends and Causes.” They discuss how European productivity was converging with U.S. levels until it started diverging, and point to a possible role for information technology, along with differences in labor and product market regulation.

In their just-published article, Bloom, Sadun, and Van Reenen pose the question, and their answer, this way (footnotes omitted):

“Given the common availability of IT throughout the world at broadly similar prices, it is a major puzzle why these IT related productivity effects have not been more widespread in Europe. There are at least two broad classes of explanation for this puzzle. First, there may be some “natural advantage” to being located in the United States, enabling firms to make better use of the opportunity that comes from rapidly falling IT prices. These natural advantages could be tougher product market competition, lower regulation, better access to risk capital, more educated or younger workers, larger market size, greater geographical space, or a host of other factors. A second class of explanations stresses that it is not the US environment per se that matters but rather the way that US firms are managed that enables better exploitation of IT (“the US management hypothesis”). These explanations are not mutually exclusive. …”

“Nevertheless, one straightforward way to test whether the US management hypothesis has any validity is to examine the IT performance of US owned organizations in a European environment. If US multinationals partially transfer their business models to their overseas affiliates—and a walk into McDonald’s or Starbucks anywhere in Europe suggests that this is not an unreasonable assumption—then analyzing the IT performance of US multinational establishments in Europe should be informative. Finding a systematically better use of IT by American firms outside the United States suggests that we should take the US management hypothesis seriously. …

“We report that foreign affiliates of US multinationals appear to obtain higher productivity than non-US multinationals (and domestic firms) from their IT capital and are also more IT intensive. This is true in both the UK establishment-level dataset and the European firm-level dataset. … Using our new international management practices dataset, we then show that American firms have higher scores on “people management” practices defined in terms of promotions, rewards, hiring, and firing. This holds true for both domestically based US firms as well as US multinationals operating in Europe. Using our European firm-level panel, we find these management practices account for most of the higher output elasticity of IT of US firms. This appears to be because people management practices enable US firms to better exploit IT.”

They carefully add in a footnote: “It is plausible that higher scores reflect “better” management, but we do not assume this. All we claim is that American firms have different people management practices than European firms, and these are complementary with IT.”

This work is part of a longer-term project by these authors, which seeks to spell out what is meant by “good management,” and then to use survey data to figure out where “good management” is being practiced. In a Winter 2010 article in my own journal, Bloom and Van Reenen offer a useful overview of this work in “Why Do Management Practices Differ across Firms and Countries?”

They describe how they try to measure good management, using 18 different categories that focus on \"aspects of management like systematic performance monitoring, setting appropriate targets, and providing incentives for good performance.\" Their group conducts interviews with middle-level corporate managers, and ranks firms on a scale of 1-5 in each of these 18 categories. One of their findings is that average management scores for U.S. firms are the highest in the world.

It sometimes seems to me, reading the news, that American firms are managed by time-serving functionaries who run the gamut from myopic to venal. But like most Americans, my direct experience with companies operating in the rest of the world is almost nonexistent. By international standards, managers of U.S. firms as a group may well be among the best in the world.

A Third Kind of Unemployment?

Economists typically think of unemployment as falling into two categories. There is \"cyclical\" unemployment, which is the unemployment that occurs because of a recession. And there is \"structural\" unemployment–sometimes called the \"natural rate of unemployment\" or the NAIRU for \"nonaccelerating inflation rate of unemployment.\" This is the rate of unemployment that would arise in a dynamic labor market even if there were no recession, as firms expand and contract and people move between jobs. The level of structural unemployment will be influenced by factors that affect the incentives of people to seek out jobs (like the costs of mobility between jobs and the structure of unemployment, welfare, and disability benefits) and the incentives of businesses to hire (including rules affecting the costs of business expansion, rules affecting what firms must provide to employees, and even rules affecting the costs of firing employees, if necessary, later on).

Inconveniently, the unemployment that the United States is currently experiencing doesn\’t fit neatly into either of the two conventional categories.

After all, the recession officially ended in June 2009, according to the Business Cycle Dating Committee of the National Bureau of Economic Research.  However, the unemployment rate has been above 8% since February 2009, and in a February 2012 report on \”Understanding and Responding to Persistently High Unemployment,\” the Congressional Budget Office is forecasting that it will remain above 8% until 2014.

In a conventional economic framework, it\'s not clear how to make sense of \"cyclical\" unemployment that persists for four or five years after the recession is over. However, the CBO and other forecasters have been predicting all along that the unemployment rate will eventually drop as the aftereffects of the Great Recession wear off, and in that sense it doesn\'t seem like natural or structural unemployment, either.

It\’s not clear what to call this persistent jobless recovery unemployment. \”Lethargic\” unemployment? \”Sluggish\” unemployment? \”Torpid\” unemployment? \”Tar-pit\” unemployment?

However you label it, this is now the third consecutive \"jobless recovery,\" in which it has taken a substantial time after the end of the recession for unemployment rates to come back down. It used to be that unemployment rates peaked almost right at the end of the recession, and then steadily dropped. Here\'s a graph of unemployment rates from the ever-useful FRED website of the St. Louis Fed. Periods of recession are shaded.

For example, when the 1974-75 recession ended in March 1975, unemployment was 8.6%. It climbed just a bit higher, to 9% in May 1975, but then fell steadily and by May 1978 was at 5.9%. Or look at the aftermath of the \"back-to-back\" recessions of 1980 and 1981-82. When the second recession ended in November 1982, the unemployment rate was also peaking at 10.8%. It then dropped steadily and was down to 7.2% by November 1984 and 5.9% by September 1987.

In the jobless recoveries since then, the pattern has been different. When the 1990-91 recession ended in March 1991, the unemployment rate was 6.8%. But the unemployment rate kept rising, peaking more than a year later at 7.8% in June 1992. It wasn\'t until August 1993, more than two years after the economy had resumed growing, that unemployment rates had fallen back to the 6.8% rate that prevailed at the official end of the recession.

A similar pattern arose after the 2001 recession. At the end of that recession in November 2001, the unemployment rate was 5.5%. But then the unemployment rate kept rising, peaking out at 6.3% in June 2003. It wasn\’t until July 2004 that unemployment rates declined back to the 5.5% that had prevailed at the end of the 2001 recession.
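The recovery lags in these two episodes can be checked with simple date arithmetic. A minimal sketch (the dates and rates come from the text above; the helper function is just illustrative):

```python
def months_between(start_year, start_month, end_year, end_month):
    """Number of months from one (year, month) to another."""
    return (end_year - start_year) * 12 + (end_month - start_month)

# 1990-91 recession: ended March 1991 at 6.8% unemployment;
# the rate did not return to 6.8% until August 1993.
lag_1991 = months_between(1991, 3, 1993, 8)   # 29 months

# 2001 recession: ended November 2001 at 5.5%;
# the rate did not return to 5.5% until July 2004.
lag_2001 = months_between(2001, 11, 2004, 7)  # 32 months
```

By this count, each jobless recovery took roughly two and a half years after the recession's official end just to return to the unemployment rate that prevailed when the recession ended.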

In the most recent recession, unemployment was at 9.5% in June 2009, when the Great Recession officially ended. The official unemployment rate peaked at 10% in October 2009, and has drifted down since then. But in this recovery, the unemployment rate is an underestimate of labor market woes, because the official unemployment rate only counts those who are \”in the labor force,\” meaning that they are out of work but looking for a job. Those who have given up looking, or who are working part-time but would like full-time work, aren\’t counted as unemployed. The last few years have seen a dramatic drop in the \”labor force participation rate,\” that is, the share of adults who are \”in the labor force.\” This rate rose substantially from the 1970s through the 1990s as a greater share of women entered the (paid) labor force. But with job prospects so poor, it has been dropping off. 

The February 2012 CBO report describes the disconnect from the official unemployment rate to a broader appraisal of the U.S. labor market this way: \”The rate of unemployment in the United States has exceeded 8 percent since February 2009, making the past three years the longest stretch of high unemployment in this country since the Great Depression. Moreover, the Congressional Budget Office (CBO) projects that the unemployment rate will remain above 8 percent until 2014. The official unemployment rate excludes those individuals who would like to work but have not searched
for a job in the past four weeks as well as those who are working part-time but would prefer full-time work; if those people were counted among the unemployed, the unemployment rate in January 2012 would have been about 15 percent.\”

Our public discussions of what to do about these persistently high rates of lethargic or torpid unemployment have been unfortunately locked into the two older categories of cyclical and structural unemployment.

For example, some argue that if only the federal government had enacted an extra $1 trillion or so in fiscal stimulus, probably backed by a Federal Reserve willing to carry out another \”quantitative easing\” by printing money to finance the Treasury bonds for this stimulus, then the economy and the unemployment rate would be recovering much more quickly. But the federal government is in the process of running its four largest annual deficits since World War II from 2009 to 2012. The Fed is planning to hold the benchmark federal funds interest rate near zero percent for six years (!), while also engaging in $2 trillion of quantitative easing. The amount of countercyclical macroeconomic policy has been massive, and I have a hard time believing that just another boost would have fixed everything.

While I in general supported the countercyclical macroeconomic policies taken during the Great Recession (with some reservations about the details), it seems to me that countercyclical macroeconomic policy is like taking aspirin when you have a bad case of flu–or if you prefer a more extreme metaphor when talking about an unemployment rate that may exceed 8% for 7-8 years, like an athlete taking a cortisone shot for an injury before playing in the big game. Such steps can be worth taking, and they can sometimes even modestly help the healing process, but they are palliative, not curative. Also, the CBO offers a reminder that while more fiscal stimulus could help the economy in the short-term, it will injure the economy over the long run unless it is counterbalanced by a way of holding down government debt over time.

\”Despite the near-term economic benefits, such actions would add to the already large projected budget deficits that would exist under current policies, either immediately or over time. Unless other actions were taken to reverse the accumulation of government debt, the nation’s output and people’s income would ultimately be lower than they otherwise would have been. To boost the economy in the near term while seeking to achieve long-term fiscal sustainability, a combination of policies would be required: changes in taxes and spending that would increase the deficit now but reduce it later in the decade.\”

But the standard policy agenda for dealing with structural unemployment doesn\’t seem particularly on-point just now, either. Sure, it would be useful to encourage mobility between jobs and to rethink how regulatory and other policies affect incentives to work and to hire. But while this kind of rethinking is always useful, it\’s not clear that it addresses the reality of high unemployment here and now.

We need a convincing theory of this third kind of unemployment–sluggish unemployment, tar-pit unemployment–and an associated sense of what policies are useful for addressing it. Firms as a group have high profits and strong cash reserves, but they are not seeing it as worthwhile to raise hiring substantially, preferring instead to focus on getting more productivity from the existing workforce. Are there ways to reduce the costs and risks that firms face when thinking about hiring? Many households are struggling with outsized debt burdens, including those who have mortgages that are larger than the value of their home. Are there policy levers to help them move past their debt burdens?

Long-term unemployment is very high. CBO writes: \”[T]he share of unemployed people looking
for work for more than six months—referred to as the long-term unemployed—topped 40 percent in December 2009 for the first time since 1948, when such data began to be collected; it has remained above that level ever since.\” What do we know about getting the long-term unemployed back into the labor force?  Are there ways to encourage greater mobility of people between jobs, perhaps by spreading more information about job opportunities, making it easier for employers to verify skills of potential employees, or encouraging both greater geographic mobility and mobility across sectors of the economy?

Tolstoy famously started Anna Karenina with the comment: \"All happy families are alike; each unhappy family is unhappy in its own way.\" Each unhappy recession is unhappy in its own way, too–and the Great Recession is quite different from previous post-war U.S. recessions. It needs some fresh thinking about policies to address what has happened.

Generational Flip-Flop

Throughout human history, the overall pattern of intergenerational transfers has been clear. Transfers from adults to children are substantial. Not that long ago, transfers to the elderly were quite low, because people tended to work until they died, at which point many of them left bequests to the next generation. Even in more recent times, when social programs began to create transfers from working-age adults to the elderly, the combination of support for those who are younger when alive and bequests after death meant that the overall pattern of intergenerational transfers went from older to younger.

But with the relatively smaller number of children in many countries, the relatively larger number of elderly, and the growing costs of government programs to support the elderly, the fundamental historical pattern of transferring assets from older to younger generations seems to have flip-flopped in several countries–with more on the way.

Ronald D. Lee and Andrew Mason tell the story in \”Generational Economics in a Changing World,\” which appeared in Population and Development Review, January 2011 (supplementary issue), pp.  115-142.  Many in academia will have access to the journal through library subscriptions, but the article is not freely available on-line. They draw upon the work of 23 country teams participating in the National Transfer Accounts project. For an overview of this project,  a useful starting point is the 2011 book edited by Lee and Mason called Population Aging and the Generational Economy: A Global Perspective. The book has 32 chapters by about 50 economists and demographers.

The fundamentals of the generational flip-flop story can be told through a figure that shows patterns of income and consumption over the life cycle. The figure refers to three sets of societies: \"hunter-gatherers,\" based on data from anthropological studies; poor countries, based on data from Kenya, Indonesia, the Philippines, and India; and rich countries, based on data from Japan, the United States, Sweden, and Finland. The figure shows age on the horizontal axis. The vertical axis expresses each number as a ratio to average labor income for those in the 30-49 age bracket. The dashed lines show income; the solid lines show consumption.

First consider income and consumption for children. For all three kinds of societies, the \”income\” lines are essentially zero for those under age 15 or so, who don\’t produce much. The consumption lines for children rise from about .3 of an adult\’s labor income at birth to .4 by teenage years. Consumption of children is notably higher in the rich countries.

Then consider the income-earning years. Remember that the vertical axis is scaled as a ratio to average income for someone in the 30-49 age bracket, so it\'s no surprise that all three of the income-earning lines rise to roughly 1 for that age interval. However, it\'s interesting to note that peak income-earning drops off sharply first in the hunter-gatherer societies, then a few years later in the poor countries, and then a few years after that (just after age 60) in the rich countries. In all three of these types of societies, consumption during peak income-earning years is about .6 of a typical adult\'s income; after all, a substantial chunk of that income is going to support children at this time.

Now look at the older age brackets. Note that on average, even at age 70, the hunter-gatherers have income well above consumption. In those societies, transfers from older to younger generations continue pretty much up to death. For poor societies, income drops below consumption at about age 60, but consumption stays more-or-less flat to the end of life. For the rich societies, income drops below consumption at about age 65. At that point, income in rich countries keeps falling sharply, dropping below the level for poor countries by the late 60s. Moreover, consumption of the elderly in rich countries is not flat, but instead is rising; indeed, consumption levels of the elderly as a share of the income of a working adult are higher than at any earlier point in life!

Looking at a broader group of countries, Lee and Mason state: \”We show that the direction of intergenerational transfers in the population has shifted from downward to upward, at least in a few leading rich nations.\” In particular, they look across countries and present calculations of the average age at which income is earned, and the average age at which consumption happens.

For example, in the United States the average age for earning $1 of income is 43.4 years, while the average age for $1 of consumption is 41.3 years. Thus, the U.S. maintains the traditional pattern of overall transfers from older to younger.

As you might expect, this pattern of older-to-younger transfers is more extreme in low income countries. For example: In India, the average age of income is 39.5 years, and the average age of consumption is 30 years. In Kenya, the average age of income is 35.7 years, and the average age of consumption is 23.9 years.

But in a few countries, the classic pattern that has lasted throughout human history has now been reversed. In Japan, the average age of income is 45 years, while the average age of consumption is 45.8 years. In Germany, the average age of income is 42.2 years and the average age of consumption is 44.9 years.
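The direction-of-transfers accounting behind these averages is simple: weight each age by the dollar flow occurring at that age. A minimal sketch with made-up three-point age profiles (not the Lee-Mason data) shows how a rich-country profile with high elderly consumption pushes the average age of consumption above the average age of income:

```python
# Hypothetical age profiles (three representative ages), expressed
# relative to prime-age labor income, as in the Lee-Mason figure.
ages        = [10, 40, 70]          # child, prime-age, elderly
income      = [0.0, 1.0, 0.2]       # children earn nothing; elderly earn little
consumption = [0.35, 0.60, 0.70]    # elderly consumption high, as in rich countries

def weighted_average_age(ages, flows):
    """Average age at which a dollar of the given flow occurs."""
    return sum(a * f for a, f in zip(ages, flows)) / sum(flows)

avg_age_income = weighted_average_age(ages, income)            # 45.0
avg_age_consumption = weighted_average_age(ages, consumption)  # about 46.4

# Consumption occurring at an older average age than income means
# net transfers flow upward: the Japan/Germany pattern in the text.
transfers_flow_upward = avg_age_consumption > avg_age_income
```

With these hypothetical numbers, the average age of consumption exceeds the average age of income by a bit over a year, roughly the size of the gap Lee and Mason report for Japan.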

There is nothing inherently unsustainable about societies in which flows of funds, as a whole, are transferred from younger to older generations. In the narrow programmatic sense, it is true in many countries that the current promises of payments to the elderly do not have sufficient funding in the existing programs, and so the facts of accounting will at some point force these programs to evolve. But in a broader sense, a younger-to-older society will need to run on a set of social expectations and arrangements that are different from any previous society in human history, and will involve social, political, and institutional changes that I think we are only dimly beginning to discern.

Hyperinflation and the Zimbabwe Example

Back in the Paleolithic era when I was learning economics, Germany\’s hyperinflation of the 1920s was the classic example of hyperinflation. When I was teaching intro economics classes in the late 1980s, I would use hyperinflation examples from Latin America, and by the mid-1990s, I could use examples from eastern Europe after the collapse of the Soviet Union. But for the next few years, I suspect, the canonical example of hyperinflation will be what has happened in Zimbabwe from 2007-2009, which has the dubious distinction of being the only hyperinflation of the still-young 21st century. Janet Koech of the Federal Reserve Bank of Dallas offers a nice overview of \”Hyperinflation in Zimbabwe,\” which appears in the Annual Report of the Globalization and Monetary Policy Institute. 

Koech reports: \"From 2007 to 2008, the local legal tender lost more than 99.9 percent of its value.\" Here\'s a picture of the infamous $100 trillion bill, issued in 2009. The existence of the bill is black comedy, because it represents economic devastation for the 12 million or so people of Zimbabwe.

Hyperinflation doesn\'t have a precise definition, but a common rule-of-thumb is that it occurs when the rate of price inflation exceeds 50% per month. If this rate is compounded over a year, prices multiply by a factor of about 130. Here\'s the monthly inflation rate taking off in Zimbabwe.
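A quick calculation confirms the compounding arithmetic behind that rule of thumb:

```python
# The 50%-per-month threshold, compounded over 12 months.
monthly_rate = 0.50
annual_factor = (1 + monthly_rate) ** 12
# annual_factor is about 129.7: prices multiply roughly 130-fold in a year
```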

Zimbabwe\’s economic disaster was built on terrible economic policy and bad luck. A combination of droughts and ill-designed land \”reforms\” savaged production of important crops like maize and tobacco. Government spending was nearly uncontrolled. Foreign debts mounted. And then the government of Zimbabwe tried to solve its problems by turning on the printing presses.

Koech writes: \”Hyperinflation and economic troubles were so profound that by 2008, they wiped out the
wealth of citizens and set the country back more than a half century. In 1954, the average GDP per
capita for Southern Rhodesia was US$151 per year (based on constant 2005 U.S.-dollar purchasing power-parity rates). In 2008, that average declined to US$136, eliminating gains over the preceding 53
years …\”

Hyperinflation causes extraordinary contortions in an economy. When all prices are rising dramatically all the time, comparing prices becomes essentially impossible, and the price mechanism itself breaks down. Koech describes (footnotes omitted):

\”The Economic Times newspaper noted on June 13, 2008, that “a loaf of bread now costs what 12 new cars did a decade ago,” and “a small pack of locally produced coffee beans costs just short of 1 billion Zimbabwe dollars. A decade ago, that sum would have bought 60 new cars.” At the height of the hyperinflation, prices doubled every few days, and Zimbabweans struggled to keep their cash resources from evaporating. Businesses still quoted prices in local currency but revised them several times a day. A minibus driver taking commuters into Harare still charged passengers in local currency but at a higher price on the evening trip home. And he changed his local notes into hard currency three times a day.

The government attempted to quell rampant inflation by controlling the prices of basic commodities and services in 2007 and 2008. Authorities forced merchants—sometimes with police force—to lower prices that exceeded set ceilings. This quickly produced food shortages because businesses couldn’t earn a profit selling at government-mandated prices and producers of goods and services cut output to avoid incurring losses. People waited in long lines at fuel stations and stores. While supermarket shelves were empty, a thriving black market developed where goods traded at much higher prices. Underground markets for foreign exchange also sprang up in back offices and parking lots where local notes
were converted to hard currencies at much more than the official central bank rate. Some commodities, such as gasoline, were exclusively traded in U.S. dollars or the South African rand, and landlords often accepted groceries and food items as barter for rent.\”

Here are some manifestations of hyperinflation: empty supermarket shelves (after all, anything real will hold value better than a hyperinflating currency) and gallows humor.

By early 2009, no trust in the Zimbabwe currency remained, and the Zimbabwe economy essentially \”dollarized.\” Koech writes: \”While the South African rand, Botswana pula and the U.S. dollar were granted official status, the U.S. dollar became the principal currency. Budget revenue estimates and planned expenditures for 2009 were denominated in U.S. dollars, and the subsequent budget for 2010 was also set in U.S. dollars. An estimated four-fifths of all transactions in 2010 took place in U.S. dollars, including most wage payments …\”

Here\’s the Dishonor Role of Hyperinflations during the last couple of hundred years.

Academic Journals: Print Fading

When I started my job as Managing Editor of the Journal of Economic Perspectives in the late 1980s, we distributed about 25,500 print copies of each issue. In 2011 we distributed about 13,000 print copies of each issue. But compared to a lot of leading law reviews, our print circulation is doing extremely well. Ross E. Davies looks at \"Law Review Circulation 2011: More Change, More Same,\" in a just-released paper for the Journal of Law, available on SSRN here.

Here is a table with a few illustrative numbers on the drop-off of print subscriptions at law reviews, which has been extreme.


Some Law Review Annual Print Circulation Figures

Law Review            1974-75    1990-91    2010-11
Harvard                10,193      7,768      1,896
Yale                    4,250      3,700      1,520
Columbia                3,831      2,676      1,076
Michigan                3,038      2,382        777
Northwestern            1,918        951        514
Boalt (California)      2,734      1,740        719

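For a sense of scale, the percentage declines implied by the table can be computed directly from its figures:

```python
# Print circulation figures (1974-75, 2010-11) from the table above.
circulation = {
    "Harvard":      (10193, 1896),
    "Yale":          (4250, 1520),
    "Columbia":      (3831, 1076),
    "Michigan":      (3038,  777),
    "Northwestern":  (1918,  514),
    "Boalt":         (2734,  719),
}

# Fractional decline over the roughly 35-year span.
decline = {review: (start - end) / start
           for review, (start, end) in circulation.items()}
# Harvard has lost roughly 81% of its print circulation;
# every review in the table has lost well over 60%.
```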
For law reviews, one standard explanation is the arrival of Lexis-Nexis and then other methods of doing legal research on-line. These reasons apply to my own journal, as well. Back issues of my journal have been available through JSTOR for years.

My impressionistic sense is also that law reviews occupy a less central place in the practice of law than they did a few decades ago. For example, Supreme Court Chief Justice John Roberts has on several occasions said in interviews that he doesn\'t find law reviews very useful. Here\'s a 2011 comment: \"Pick up a copy of any law review that you see, and the first article is likely to be, you know, the influence of Immanuel Kant on evidentiary approaches in 18th Century Bulgaria, or something, which I’m sure was of great interest to the academic that wrote it, but isn’t of much help to the bar.” As reported after a 2010 interview, \"Roberts said he doesn’t pay much attention to academic legal writing. Law review articles are `more abstract\' than practical, and aren’t `particularly helpful for practitioners and judges.\'\"

For my own journal, I think (or hope?) that the issues are more about alternative methods of access to the journal rather than a perceived lack of relevance. For example, several thousand AEA members have been choosing to get my journal on CD-ROM rather than on paper–and the CD-ROM for any given issue includes about a decade of back issues, too. About two years ago, the AEA voted to make the articles from the journal freely available on-line–both current issues and archives. Very soon, it will also be possible to download entire issues onto your own CD-ROM, or onto an e-reader like a Kindle or Nook.

Reducing the barriers to accessing academic journals by making them electronically available seems like an unambiguously good thing. But I do worry about the ongoing decline of print. It\'s a bit like the old koan, \"If a tree falls in the forest, and no one hears it, does it make a sound?\" In my world, if a journal is made available on-line, does anyone actually read it? Sure, the AEA can send out a blast e-mail to let members know that the issue is available. I probably delete a dozen e-mail notifications of something or other almost every day, without giving them much of a look.

The question for my own journal, and for many publications, is how to get the attention of readers if you aren\’t arriving in physical print form. Attention is a bit \”sticky,\” in the sense that people get used to looking at certain things and not others. In running a journal, you worry that the journal might fall out of the rotation of what people are looking at. Moreover, I worry that the digital world might undervalue a certain kind of intellectual serendipity: the process of looking up an article on one subject, and in the print copy or further along the bookshelf, running across something quite different.

Of course, being freely available over the web also offers enormous opportunities for my own journal to garner attention and to be the outcome of serendipitous searching. Perhaps the real message here is to spend some time surfing purposefully but whimsically in your areas of professional interest, so that you remain open to finding new information sources and new perspectives.

Here\’s a July 11, 2011 post with further thoughts on Online Access and Academic Journals.

Some Facts about American Unions

The Bureau of Labor Statistics published its annual report on union membership, \"UNION MEMBERS — 2011,\" in late January.  Here are some facts about American unions in 2011, along with some historical perspective and international comparisons. I\'ll also mention some of the general lessons I see in these patterns:

First, some highlights from the BLS report (references to the detailed data tables omitted here):

\”In 2011, the union membership rate—the percent of wage and salary workers who were members of a union—was 11.8 percent, essentially unchanged from 11.9 percent in 2010 …

\”In 2011, 7.6 million employees in the public sector belonged to a union, compared with 7.2 million union workers in the private sector. The union membership rate for public-sector workers (37.0 percent) was substantially higher than the rate for private-sector workers (6.9 percent). Within the public sector, local government workers had the highest union membership rate, 43.2 percent. This group includes workers in heavily unionized occupations, such as teachers, police officers, and firefighters. Private-sector industries with high unionization rates included transportation and utilities (21.1 percent) and construction (14.0 percent), while low unionization rates occurred in agriculture and related industries (1.4 percent) and in financial activities (1.6 percent). Among occupational groups, education, training, and library occupations (36.8 percent) and protective service occupations (34.5 percent) had the highest unionization rates in 2011. Sales and related occupations (3.0 percent) and farming, fishing, and forestry occupations (3.4 percent) had the lowest unionization rates….

By age, the union membership rate was highest among workers 55 to 64 years old (15.7 percent). The lowest union membership rate occurred among those ages 16 to 24 (4.4 percent).\”

My sense is that many people know the unionization rate is higher in the public sector than in the private sector. However, it wasn\’t until recently that a majority of the absolute number of unionized workers in the country were in the public sector. Even within the private sector, some of the highest unionization rates are often found in very regulated industries like utilities. In the U.S. private sector, unionization rates are down into single digits and continuing to fade.

Historically, the unionization rate in the United States shows one big rise, leading to a peak in the early 1950s when about one-third of non-agricultural workers belonged to a union. The pattern has been one of decline ever since.

Clearly, the decline in unions has been long and steady, occurring under both political parties. If not for the rise in public sector unions, the decline would have been even more severe. Thus, it doesn\'t make sense to blame this decline on some single event in the last decade or two or three–it\'s bigger than that. In the Winter 2008 issue of my own Journal of Economic Perspectives, Barry T. Hirsch offered an explanation in his paper \"Sluggish Institutions in a Dynamic World: Can Unions and Industrial Competition Coexist?\"  His argument is that over time, in the dynamic and competitive U.S. markets, formal union rules are too inflexible, and thus impose extra costs that make firms less able to compete. He argues: \"If worker-based institutions are to flourish, they must add value and permit companies to perform at levels similar to those obtained under evolving nonunion governance norms.\"

International comparisons show that the U.S. economy is something of an outlier in its low levels of union membership. The first column of the table shows the union membership rate in 2006. The second column shows the union \"coverage rate,\" which refers to the total share of workers whose compensation is determined by union bargaining, even if some of those workers are not union members. In the United States, union membership and union coverage are very similar. For example, the BLS report notes that in 2011, the U.S. economy had 14.8 million union members and another 1.5 million workers who did not belong to a union, but whose jobs are covered by a union contract. About half of that 1.5 million are government workers. However, in some other countries, like France, the gap between union membership and union coverage can be quite substantial.  In Japan, it is even possible to be a union member but not to have wages determined by a bargaining contract.
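As a back-of-the-envelope check, the U.S. coverage rate can be reconstructed from the membership figures quoted above (the workforce size is implied by the membership rate, not taken directly from the BLS report):

```python
# BLS figures for 2011, as quoted in the text.
members_millions = 14.8
covered_nonmembers_millions = 1.5
membership_rate = 0.118   # 11.8 percent of wage and salary workers

# Implied wage-and-salary workforce, in millions
workforce_millions = members_millions / membership_rate   # about 125

coverage_rate = (members_millions + covered_nonmembers_millions) / workforce_millions
# about 0.13: coverage runs roughly a percentage point above membership
```

The small membership-coverage gap is the U.S. pattern; in countries like France, coverage runs many times higher than membership.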

Clearly, it is possible for high-income countries around the world like Germany, France, Sweden, the United Kingdom, and Canada to grow and continue to be high-income even with far higher rates of unionization than the U.S. economy. The extremely wide variation across countries also suggests that unionization may be a rather different phenomenon across countries.

With this wide variation in mind, I\'ve grown cautious over the years about all blanket statements about unionization–positive or negative. In the private sector, American-style unionization has essentially failed to propagate; in the public sector, it has had at best only very partial success. But back in 1970, the great economist Albert Hirschman wrote a book called Exit, Voice, and Loyalty. He argued that when members of any organization are faced with conflict, they must choose between expressing their disagreement through \"voice\" or leaving the organization through \"exit.\" Many American workplaces are essentially organized around a principle in which the voice of workers is constrained and the possibility of exit is emphasized. I sometimes wonder if a different kind of American labor organization might do a better job of using voice to improve productivity and in that way raise the compensation of its members.

Source note: Thanks to Danlu Hu for putting together the time series graph of unionization rates over time and the data table on international comparisons.

For unionization rates over time, the data from 2001 to 2011 is readily available at the Bureau of Labor Statistics website. Data on U.S. union membership from 1930 to 1994, along with the remaining data, can be found by hunting around the BLS website, or else by looking at the 2004 paper by Gerald Mayer, \"Union Membership Trends in the United States.\"

The data on international comparisons is from the ICTWSS Data Base on Institutional Characteristics of Trade Unions, Wage Setting, State Intervention and Social Pacts, 1960-2010, available at the website of the Amsterdam Institute for Advanced Labor Studies.