The U.S. and Europe: Productivity Puzzles and Information Technology

From the 1970s into the 1990s, productivity levels in Europe, as measured by output per hour, were converging with those of the United States. But since the mid-1990s, the U.S. productivity lead has been expanding.

Why? Nicholas Bloom, Raffaella Sadun, and John Van Reenen try to answer that question in "Americans Do IT Better: US Multinationals and the Productivity Miracle," which appears in the February 2012 issue of the American Economic Review. The article isn't freely available on-line, although many in academia will have access through a library subscription. Part of the answer is that the resurgence of U.S. productivity since about 1995 has been driven by industries that either make or use information and communications technology. As measured by the total stock of information technology capital divided by hours worked, the U.S. economy has opened a larger lead over Europe.

These patterns are fairly well-known in the economics literature on the determinants of economic growth. For example, in the Winter 2008 issue of my own Journal of Economic Perspectives, which is freely available on-line courtesy of the American Economic Association, Dale W. Jorgenson, Mun S. Ho, and Kevin J. Stiroh offer "A Retrospective Look at the U.S. Productivity Growth Resurgence," which discusses how the resurgence in U.S. productivity growth in the mid-1990s was first led by productivity increases in the information-technology-producing sector, and then by productivity increases in industries that made intensive use of information technology. In that same issue, Bart van Ark, Mary O’Mahony, and Marcel P. Timmer discuss "The Productivity Gap Between Europe and the United States: Trends and Causes." They describe how European productivity was converging with U.S. levels until it started diverging, and point to a possible role for information technology, along with differences in labor and product market regulation.

In their just-published article, Bloom, Sadun, and Van Reenen pose the question, and their answer, this way (footnotes omitted):

"Given the common availability of IT throughout the world at broadly similar prices, it is a major puzzle why these IT related productivity effects have not been more widespread in Europe. There are at least two broad classes of explanation for this puzzle. First, there may be some “natural advantage” to being located in the United States, enabling firms to make better use of the opportunity that comes from rapidly falling IT prices. These natural advantages could be tougher product market competition, lower regulation, better access to risk capital, more educated or younger workers, larger market size, greater geographical space, or a host of other factors. A second class of explanations stresses that it is not the US environment per se that matters but rather the way that US firms are managed that enables better exploitation of IT (“the US management hypothesis”). These explanations are not mutually exclusive. …"

"Nevertheless, one straightforward way to test whether the US management hypothesis has any validity is to examine the IT performance of US owned organizations in a European environment. If US multinationals partially transfer their business models to their overseas affiliates—and a walk into McDonald’s or Starbucks anywhere in Europe suggests that this is not an unreasonable assumption—then analyzing the IT performance of US multinational establishments in Europe should be informative. Finding a systematically better use of IT by American firms outside the United States suggests that we should take the US management hypothesis seriously. …

We report that foreign affiliates of US multinationals appear to obtain higher productivity than non-US multinationals (and domestic firms) from their IT capital and are also more IT intensive. This is true in both the UK establishment-level dataset and the European firm-level dataset. … Using our new international management practices dataset, we then show that American firms have higher scores on “people management” practices defined in terms of promotions, rewards, hiring, and firing. This holds true for both domestically based US firms as well as US multinationals operating in Europe. Using our European firm-level panel, we find these management practices account for most of the higher output elasticity of IT of US firms. This appears to be because people management practices enable US firms to better exploit IT."

They carefully add in a footnote: "It is plausible that higher scores reflect “better” management, but we do not assume this. All we claim is that American firms have different people management practices than European firms, and these are complementary with IT."

This work is part of a longer-term project of these authors, which seeks to spell out what is meant by "good management," and then to use survey data to figure out where "good management" is being practiced. In a Winter 2010 article in my own journal, Bloom and Van Reenen offer a useful overview of this work in "Why Do Management Practices Differ across Firms and Countries?"

They describe how they try to measure good management, using 18 different categories focusing "on aspects of management like systematic performance monitoring, setting appropriate targets, and providing incentives for good performance." Their group conducts interviews with middle-level corporate managers, and ranks firms on a scale of 1-5 in each of these 18 categories. One of their findings is that average management scores for U.S. firms are the highest in the world.

It sometimes seems to me, reading the news, that American firms are managed by time-serving functionaries who run the gamut from myopic to venal. But like most Americans, my direct experience with companies operating in the rest of the world is almost nonexistent. By international standards, managers of U.S. firms as a group may well be among the best in the world.

A Third Kind of Unemployment?

Economists typically think of unemployment as falling into two categories. There is "cyclical" unemployment, which is the unemployment that occurs because of a recession. And there is "structural" unemployment--sometimes called the "natural rate of unemployment" or the NAIRU, for "nonaccelerating inflation rate of unemployment." This is the rate of unemployment that would arise in a dynamic labor market even if there were no recession, as firms expand and contract and people move between jobs. The level of structural unemployment will be influenced by factors that affect the incentives of people to seek out jobs (like the costs of mobility between jobs and the structure of unemployment, welfare, and disability benefits) and the incentives of businesses to hire (including rules affecting the costs of business expansion, rules affecting what firms must provide to employees, and even rules affecting the costs of firing employees, if necessary, later on).

Inconveniently, the unemployment that the United States is currently experiencing doesn't fit neatly into either of the two conventional categories.

After all, the recession officially ended in June 2009, according to the Business Cycle Dating Committee of the National Bureau of Economic Research. However, the unemployment rate has been above 8% since February 2009, and in a February 2012 report on "Understanding and Responding to Persistently High Unemployment," the Congressional Budget Office is forecasting that it will remain above 8% until 2014.

In a conventional economic framework, it's not clear how to make sense of "cyclical" unemployment that persists for four or five years after the recession is over. However, the CBO and other forecasters have been predicting all along that the unemployment rate will eventually drop as the aftereffects of the Great Recession wear off, and in that sense it doesn't seem like natural or structural unemployment, either.

It's not clear what to call this persistent jobless-recovery unemployment. "Lethargic" unemployment? "Sluggish" unemployment? "Torpid" unemployment? "Tar-pit" unemployment?

However you label it, this is now the third consecutive "jobless recovery," in which it has taken a substantial time after the end of the recession for unemployment rates to come back down. It used to be that unemployment rates peaked almost right at the end of the recession, and then steadily dropped. Here's a graph of unemployment rates from the ever-useful FRED website of the St. Louis Fed. Periods of recession are shaded.

For example, when the 1974-75 recession ended in March 1975, unemployment was 8.6%. It climbed just a bit higher, to 9% in May 1975, but then fell steadily and by May 1978 was at 5.9%. Or look at the aftermath of the "back-to-back" recessions of 1980 and 1981-82. When the second recession ended in November 1982, the unemployment rate was at its peak of 10.8%. It then dropped steadily and was down to 7.2% by November 1984 and 5.9% by September 1987.

In the jobless recoveries since then, the pattern has been different. When the 1990-91 recession ended in March 1991, the unemployment rate was 6.8%. But the unemployment rate kept rising, peaking more than a year later at 7.8% in June 1992. It wasn't until August 1993, more than two years after the economy had resumed growing, that unemployment rates had fallen back to the 6.8% rate that prevailed at the official end of the recession.

A similar pattern arose after the 2001 recession. At the end of that recession in November 2001, the unemployment rate was 5.5%. But then the unemployment rate kept rising, peaking at 6.3% in June 2003. It wasn't until July 2004 that unemployment rates declined back to the 5.5% that had prevailed at the end of the 2001 recession.
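One way to quantify the jobless-recovery pattern is to count the months between the official end of a recession and the month when the unemployment rate finally fell back to its end-of-recession level. A minimal sketch using the dates quoted above (only the month arithmetic is computed here; the rates themselves come from the FRED series):

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months elapsed from start to end."""
    return (end.year - start.year) * 12 + (end.month - start.month)

# (official recession end, month the rate returned to its end-of-recession level)
episodes = {
    "1990-91 recession": (date(1991, 3, 1), date(1993, 8, 1)),   # 6.8% at both dates
    "2001 recession":    (date(2001, 11, 1), date(2004, 7, 1)),  # 5.5% at both dates
}
for name, (ended, recovered) in episodes.items():
    print(f"{name}: {months_between(ended, recovered)} months to recover")
```

Run as-is, this gives 29 months for the 1990-91 episode and 32 months for the 2001 episode, consistent with the "more than two years" in the text.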

In the most recent recession, unemployment was at 9.5% in June 2009, when the Great Recession officially ended. The official unemployment rate peaked at 10% in October 2009, and has drifted down since then. But in this recovery, the unemployment rate is an underestimate of labor market woes, because the official unemployment rate only counts those who are "in the labor force," meaning that they are out of work but looking for a job. Those who have given up looking, or who are working part-time but would like full-time work, aren't counted as unemployed. The last few years have seen a dramatic drop in the "labor force participation rate," that is, the share of adults who are "in the labor force." This rate rose substantially from the 1970s through the 1990s as a greater share of women entered the (paid) labor force. But with job prospects so poor, it has been dropping off.

The February 2012 CBO report describes the disconnect between the official unemployment rate and a broader appraisal of the U.S. labor market this way: "The rate of unemployment in the United States has exceeded 8 percent since February 2009, making the past three years the longest stretch of high unemployment in this country since the Great Depression. Moreover, the Congressional Budget Office (CBO) projects that the unemployment rate will remain above 8 percent until 2014. The official unemployment rate excludes those individuals who would like to work but have not searched for a job in the past four weeks as well as those who are working part-time but would prefer full-time work; if those people were counted among the unemployed, the unemployment rate in January 2012 would have been about 15 percent."
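A broader rate along the lines the CBO describes can be computed mechanically: add the would-be workers who stopped searching and the involuntary part-timers to the numerator, and the stopped-searchers to the labor force in the denominator. The sketch below uses illustrative round numbers of roughly the right magnitude for early 2012; they are placeholders, not official BLS figures:

```python
# Illustrative magnitudes in millions -- NOT official BLS figures.
unemployed = 12.8          # jobless and actively searching (counted as unemployed)
marginally_attached = 2.8  # want work but have stopped searching recently
part_time_economic = 8.2   # working part-time but would prefer full-time work
labor_force = 154.4        # employed + unemployed

official_rate = unemployed / labor_force
broader_rate = (unemployed + marginally_attached + part_time_economic) / (
    labor_force + marginally_attached
)
print(f"official: {official_rate:.1%}, broader: {broader_rate:.1%}")
```

With these placeholder inputs the official rate comes out near 8% and the broader rate near 15%, matching the orders of magnitude in the CBO quotation.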

Our public discussions of what to do about these persistently high rates of lethargic or torpid unemployment have been unfortunately locked into the two older categories of cyclical and structural unemployment.

For example, some argue that if only the federal government had enacted an extra $1 trillion or so in fiscal stimulus, probably backed by a Federal Reserve willing to carry out another "quantitative easing" by printing money to finance the Treasury bonds for this stimulus, then the economy and the unemployment rate would be recovering much more quickly. But the federal government is in the process of running its four largest annual deficits since World War II from 2009 to 2012. The Fed is planning to hold the benchmark federal funds interest rate near zero percent for six years (!), while also engaging in $2 trillion of quantitative easing. The amount of countercyclical macroeconomic policy has been massive, and I have a hard time believing that just another boost would have fixed everything.

While I in general supported the countercyclical macroeconomic policies taken during the Great Recession (with some reservations about the details), it seems to me that countercyclical macroeconomic policy is like taking aspirin when you have a bad case of the flu--or, if you prefer a more extreme metaphor when talking about an unemployment rate that may exceed 8% for 7-8 years, like an athlete taking a cortisone shot for an injury before playing in the big game. Such steps can be worth taking, and they can sometimes even modestly help the healing process, but they are palliative, not curative. Also, the CBO offers a reminder that while more fiscal stimulus could help the economy in the short term, it will injure the economy over the long run unless it is counterbalanced by a way of holding down government debt over time.

"Despite the near-term economic benefits, such actions would add to the already large projected budget deficits that would exist under current policies, either immediately or over time. Unless other actions were taken to reverse the accumulation of government debt, the nation’s output and people’s income would ultimately be lower than they otherwise would have been. To boost the economy in the near term while seeking to achieve long-term fiscal sustainability, a combination of policies would be required: changes in taxes and spending that would increase the deficit now but reduce it later in the decade."

But the standard policy agenda for dealing with structural unemployment doesn't seem particularly on-point just now, either. Sure, it would be useful to encourage mobility between jobs and to rethink how regulatory and other policies affect incentives to work and to hire. But while this kind of rethinking is always useful, it's not clear that it addresses the reality of high unemployment here and now.

We need a convincing theory of this third kind of unemployment–sluggish unemployment, tar-pit unemployment–and an associated sense of what policies are useful for addressing it. Firms as a group have high profits and strong cash reserves, but they are not seeing it as worthwhile to raise hiring substantially, preferring instead to focus on getting more productivity from the existing workforce. Are there ways to reduce the costs and risks that firms face when thinking about hiring? Many households are struggling with outsized debt burdens, including those who have mortgages that are larger than the value of their home. Are there policy levers to help them move past their debt burdens?

Long-term unemployment is very high. CBO writes: "[T]he share of unemployed people looking for work for more than six months—referred to as the long-term unemployed—topped 40 percent in December 2009 for the first time since 1948, when such data began to be collected; it has remained above that level ever since." What do we know about getting the long-term unemployed back into the labor force? Are there ways to encourage greater mobility of people between jobs, perhaps by spreading more information about job opportunities, making it easier for employers to verify the skills of potential employees, or encouraging both greater geographic mobility and mobility across sectors of the economy?

Tolstoy famously started Anna Karenina with the comment: "All happy families are alike; each unhappy family is unhappy in its own way." Each unhappy recession is unhappy in its own way, too--and the Great Recession is quite different from previous post-war U.S. recessions. It needs some fresh thinking about policies to address what has happened.

Generational Flip-Flop

Throughout human history, the overall pattern of intergenerational transfers has been clear. Transfers from adults to children are substantial. Not that long ago, transfers to the elderly were quite low, because people tended to work until they died, at which point many of them left bequests to the next generation. Even in more recent times, when social programs began to create transfers from working-age adults to the elderly, the combination of support for those who are younger when alive and bequests after death meant that the overall pattern of intergenerational transfers went from older to younger.

But with the relatively smaller number of children in many countries, the relatively larger number of elderly, and the growing costs of government programs to support the elderly, the fundamental historical pattern of transferring assets from older to younger generations seems to have flip-flopped in several countries--with more on the way.

Ronald D. Lee and Andrew Mason tell the story in "Generational Economics in a Changing World," which appeared in Population and Development Review, January 2011 (supplementary issue), pp. 115-142. Many in academia will have access to the journal through library subscriptions, but the article is not freely available on-line. They draw upon the work of 23 country teams participating in the National Transfer Accounts project. For an overview of this project, a useful starting point is the 2011 book edited by Lee and Mason called Population Aging and the Generational Economy: A Global Perspective. The book has 32 chapters by about 50 economists and demographers.

The fundamentals of the generational flip-flop story can be told through a figure that shows patterns of income and consumption over the life cycle. The figure refers to three sets of societies: "hunter-gatherers," based on data from anthropological studies; poor countries, based on data from Kenya, Indonesia, the Philippines, and India; and rich countries, based on data from Japan, the United States, Sweden, and Finland. The figure shows age on the horizontal axis. The vertical axis is expressed as a ratio to average labor income for those in the 30-49 age bracket. The dashed lines show income; the solid lines show consumption.

First consider income and consumption for children. For all three kinds of societies, the "income" lines are essentially zero for those under age 15 or so, who don't produce much. The consumption lines for children rise from about .3 of an adult's labor income at birth to .4 by the teenage years. Consumption of children is notably higher in the rich countries.

Then consider the income-earning years. Remember that the vertical axis is scaled as a ratio to the average income of someone in the 30-49 age bracket, so it's no surprise that all three of the income lines rise to roughly 1 over that age interval. However, it's interesting to note that peak income-earning drops off sharply first in the hunter-gatherer societies, then a few years later in the poor countries, and then a few years after that (just after age 60) in the rich countries. In all three of these types of societies, consumption during the peak income-earning years is about .6 of a typical adult's income; after all, a substantial chunk of that income is going to support children at this time.

Now look at the older age brackets. Note that on average, even at age 70, the hunter-gatherers have income well above consumption. In those societies, transfers from older to younger generations continue pretty much up to death. For poor societies, income drops below consumption at about age 60, but consumption stays more-or-less flat to the end of life. For the rich societies, income drops below consumption at about age 65. At that point, income in rich countries keeps falling sharply, dropping below the level for poor countries by the late 60s. Moreover, consumption of the elderly in rich countries is not flat, but rising; indeed, consumption levels of the elderly as a share of the income of a working adult are higher than at any earlier point in life!

Looking at a broader group of countries, Lee and Mason state: "We show that the direction of intergenerational transfers in the population has shifted from downward to upward, at least in a few leading rich nations." In particular, they look across countries and present calculations of the average age at which income is earned, and the average age at which consumption happens.

For example, in the United States the average age for earning $1 of income is 43.4 years, while the average age for $1 of consumption is 41.3 years. Thus, the U.S. maintains the traditional pattern of overall transfers from older to younger.

As you might expect, this pattern of older-to-younger transfers is more extreme in low income countries. For example: In India, the average age of income is 39.5 years, and the average age of consumption is 30 years. In Kenya, the average age of income is 35.7 years, and the average age of consumption is 23.9 years.

But in a few countries, the classic pattern that has lasted throughout human history has now been reversed. In Japan, the average age of income is 45 years, while the average age of consumption is 45.8 years. In Germany, the average age of income is 42.2 years and the average age of consumption is 44.9 years.
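These "average age" statistics are flow-weighted means: each dollar of income (or consumption) is tagged with the age of the person it accrues to, and those ages are averaged. A minimal sketch of the calculation, using an invented age profile for illustration (the profile numbers are made up, not National Transfer Accounts data):

```python
def average_age(flow_by_age: dict[int, float]) -> float:
    """Flow-weighted mean age: sum(age * flow) / sum(flow)."""
    total = sum(flow_by_age.values())
    return sum(age * flow for age, flow in flow_by_age.items()) / total

# Invented per-capita flows by age-bracket midpoint (illustrative only).
income = {20: 0.4, 40: 1.0, 60: 0.8, 80: 0.1}
consumption = {20: 0.5, 40: 0.7, 60: 0.7, 80: 0.8}

# If consumption's average age exceeds income's, net transfers flow
# from younger to older generations on balance -- the reversed pattern
# Lee and Mason find in Japan and Germany.
print(average_age(income), average_age(consumption))
```

In this invented example the average age of consumption comes out higher than the average age of income, which is the younger-to-older pattern discussed above.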

There is nothing inherently unsustainable about societies in which flows of funds, as a whole, are transferred from younger to older generations. In the narrow programmatic sense, it is true in many countries that the current promises of payments to the elderly do not have sufficient funding in the existing programs, and so the facts of accounting will at some point force these programs to evolve. But in a broader sense, a younger-to-older society will need to run on a set of social expectations and arrangements that are different from any previous society in human history, and will involve social, political, and institutional changes that I think we are only dimly beginning to discern.

Hyperinflation and the Zimbabwe Example

Back in the Paleolithic era when I was learning economics, Germany's experience of the 1920s was the classic example of hyperinflation. When I was teaching intro economics classes in the late 1980s, I would use hyperinflation examples from Latin America, and by the mid-1990s, I could use examples from eastern Europe after the collapse of the Soviet Union. But for the next few years, I suspect, the canonical example of hyperinflation will be what happened in Zimbabwe from 2007 to 2009, which has the dubious distinction of being the only hyperinflation of the still-young 21st century. Janet Koech of the Federal Reserve Bank of Dallas offers a nice overview in "Hyperinflation in Zimbabwe," which appears in the Annual Report of the Globalization and Monetary Policy Institute.

Koech reports: "From 2007 to 2008, the local legal tender lost more than 99.9 percent of its value." Here's a picture of the infamous $100 trillion bill, issued in 2009. The existence of the bill is black comedy, because it represents economic devastation for the 12 million or so people of Zimbabwe.

Hyperinflation doesn't have a precise definition, but a common rule of thumb is that it occurs when the rate of price inflation exceeds 50% per month. If this rate is compounded over a year, prices multiply by a factor of about 130. Here's the monthly inflation rate taking off in Zimbabwe.
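The "factor of about 130" follows directly from compounding the monthly threshold; a one-line check in Python:

```python
# Rule-of-thumb threshold for hyperinflation: 50% price inflation per month.
# Compounding that monthly rate over 12 months gives the annual factor.
monthly_rate = 0.50
annual_factor = (1 + monthly_rate) ** 12
print(round(annual_factor, 1))  # prices multiply by roughly 130 over a year
```

This evaluates to about 129.7, the "factor of about 130" in the text.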

Zimbabwe\’s economic disaster was built on terrible economic policy and bad luck. A combination of droughts and ill-designed land \”reforms\” savaged production of important crops like maize and tobacco. Government spending was nearly uncontrolled. Foreign debts mounted. And then the government of Zimbabwe tried to solve its problems by turning on the printing presses.

Koech writes: "Hyperinflation and economic troubles were so profound that by 2008, they wiped out the wealth of citizens and set the country back more than a half century. In 1954, the average GDP per capita for Southern Rhodesia was US$151 per year (based on constant 2005 U.S.-dollar purchasing-power-parity rates). In 2008, that average declined to US$136, eliminating gains over the preceding 53 years …"

Hyperinflation causes extraordinary contortions in an economy. When all prices are rising dramatically all the time, comparing prices becomes essentially impossible, and the price mechanism itself breaks down. Koech describes (footnotes omitted):

"The Economic Times newspaper noted on June 13, 2008, that “a loaf of bread now costs what 12 new cars did a decade ago,” and “a small pack of locally produced coffee beans costs just short of 1 billion Zimbabwe dollars. A decade ago, that sum would have bought 60 new cars.” At the height of the hyperinflation, prices doubled every few days, and Zimbabweans struggled to keep their cash resources from evaporating. Businesses still quoted prices in local currency but revised them several times a day. A minibus driver taking commuters into Harare still charged passengers in local currency but at a higher price on the evening trip home. And he changed his local notes into hard currency three times a day.

The government attempted to quell rampant inflation by controlling the prices of basic commodities and services in 2007 and 2008. Authorities forced merchants—sometimes with police force—to lower prices that exceeded set ceilings. This quickly produced food shortages because businesses couldn’t earn a profit selling at government-mandated prices and producers of goods and services cut output to avoid incurring losses. People waited in long lines at fuel stations and stores. While supermarket shelves were empty, a thriving black market developed where goods traded at much higher prices. Underground markets for foreign exchange also sprang up in back offices and parking lots where local notes were converted to hard currencies at much more than the official central bank rate. Some commodities, such as gasoline, were exclusively traded in U.S. dollars or the South African rand, and landlords often accepted groceries and food items as barter for rent."

Here are some manifestations of hyperinflation: empty supermarket shelves (after all, anything real will hold value better than a hyperinflating currency) and gallows humor.

By early 2009, no trust in the Zimbabwe currency remained, and the Zimbabwe economy essentially "dollarized." Koech writes: "While the South African rand, Botswana pula and the U.S. dollar were granted official status, the U.S. dollar became the principal currency. Budget revenue estimates and planned expenditures for 2009 were denominated in U.S. dollars, and the subsequent budget for 2010 was also set in U.S. dollars. An estimated four-fifths of all transactions in 2010 took place in U.S. dollars, including most wage payments …"

Here's the Dishonor Roll of Hyperinflations during the last couple of hundred years.

Academic Journals: Print Fading

When I started my job as Managing Editor of the Journal of Economic Perspectives in the late 1980s, we distributed about 25,500 print copies of each issue. In 2011 we distributed about 13,000 print copies of each issue. But compared to a lot of leading law reviews, our print circulation is doing extremely well. Ross E. Davies looks at "Law Review Circulation 2011: More Change, More Same," in a just-released paper for the Journal of Law, available on SSRN here.

Here is a table with a few illustrative numbers on the drop-off of print subscriptions at law reviews, which are extreme.


Some Law Review Annual Print Circulation Figures

Law Review            1974-75   1990-91   2010-2011
Harvard                10,193     7,768       1,896
Yale                    4,250     3,700       1,520
Columbia                3,831     2,676       1,076
Michigan                3,038     2,382         777
Northwestern            1,918       951         514
Boalt (California)      2,734     1,740         719

For law reviews, one standard explanation is the arrival of Lexis-Nexis and then other methods of doing legal research on-line. These reasons apply to my own journal as well. Back issues of my journal have been available through JSTOR for years.

My impressionistic sense is also that law reviews occupy a less central place in the practice of law than they did a few decades ago. For example, Supreme Court Chief Justice John Roberts has on several occasions said in interviews that he doesn't find law reviews very useful. Here's a 2011 comment: "Pick up a copy of any law review that you see, and the first article is likely to be, you know, the influence of Immanuel Kant on evidentiary approaches in 18th Century Bulgaria, or something, which I’m sure was of great interest to the academic that wrote it, but isn’t of much help to the bar." As reported after a 2010 interview, "Roberts said he doesn’t pay much attention to academic legal writing. Law review articles are 'more abstract' than practical, and aren't 'particularly helpful for practitioners and judges.'"

For my own journal, I think (or hope?) that the issues are more about alternative methods of access to the journal than a perceived lack of relevance. For example, several thousand AEA members have been choosing to get my journal on CD-ROM rather than on paper--and the CD-ROM for any given issue includes about a decade of back issues, too. About two years ago, the AEA voted to make the articles from the journal freely available on-line--both current issues and archives. Very soon, it will also be possible to download entire issues onto your own CD-ROM, or onto an e-reader like a Kindle or Nook.

Reducing the barriers to accessing academic journals by making them electronically available seems like an unambiguously good thing. But I do worry about the ongoing decline of print. It's a bit like the old koan: "If a tree falls in the forest, and no one hears it, does it make a sound?" In my world, if a journal is made available on-line, does anyone actually read it? Sure, the AEA can send out a blast e-mail to let members know that the issue is available. But I probably delete a dozen e-mail notifications of something or other almost every day, without giving them much of a look.

The question for my own journal, and for many publications, is how to get the attention of readers if you aren\’t arriving in physical print form. Attention is a bit \”sticky,\” in the sense that people get used to looking at certain things and not others. In running a journal, you worry that the journal might fall out of the rotation of what people are looking at. Moreover, I worry that the digital world might undervalue a certain kind of intellectual serendipity: the process of looking up an article on one subject, and in the print copy or further along the bookshelf, running across something quite different.

Of course, being freely available over the web also offers enormous opportunities for my own journal to garner attention and to be the outcome of serendipitous searching. Perhaps the real message here is to spend some time surfing purposefully but whimsically in your areas of professional interest, so that you remain open to finding new information sources and new perspectives.

Here's a July 11, 2011 post with further thoughts on Online Access and Academic Journals.

Some Facts about American Unions

The Bureau of Labor Statistics published its annual report on union membership, "UNION MEMBERS — 2011," in late January. Here are some facts about American unions in 2011, along with some historical perspective and international comparisons. I'll also mention some of the general lessons I see in these patterns.

First, some highlights from the BLS report (references to the detailed data tables omitted here):

\”In 2011, the union membership rate—the percent of wage and salary workers who were members of a union—was 11.8 percent, essentially unchanged from 11.9 percent in 2010 …

\”In 2011, 7.6 million employees in the public sector belonged to a union, compared with 7.2 million union workers in the private sector. The union membership rate for public-sector workers (37.0 percent) was substantially higher than the rate for private-sector workers (6.9 percent). Within the public sector, local government workers had the highest union membership rate, 43.2 percent. This group includes workers in heavily unionized occupations, such as teachers, police officers, and firefighters. Private-sector industries with high unionization rates included transportation and utilities (21.1 percent) and construction (14.0 percent), while low unionization rates occurred in agriculture and related industries (1.4 percent) and in financial activities (1.6 percent). Among occupational groups, education, training, and library occupations (36.8 percent) and protective service occupations (34.5 percent) had the highest unionization rates in 2011. Sales and related occupations (3.0 percent) and farming, fishing, and forestry occupations (3.4 percent) had the lowest unionization rates….

\”By age, the union membership rate was highest among workers 55 to 64 years old (15.7 percent). The lowest union membership rate occurred among those ages 16 to 24 (4.4 percent).\”

My sense is that many people know the unionization rate is higher in the public sector than in the private sector. However, it wasn\’t until recently that a majority of the country\’s union members were public-sector workers. Even within the private sector, some of the highest unionization rates are found in heavily regulated industries like utilities. In the U.S. private sector, unionization rates are down in the single digits and continuing to fade.

Historically, the unionization rate in the United States shows one big rise, leading to a peak in the early 1950s when about one-third of non-agricultural workers belonged to a union. The pattern has been one of decline ever since.

Clearly, the decline in unions has been long and steady, occurring under both political parties. If not for the rise in public-sector unions, the decline would have been even more severe. Thus, it doesn\’t make sense to blame this decline on some single event in the last decade or two or three–it\’s bigger than that. In the Winter 2008 issue of my own Journal of Economic Perspectives, Barry T. Hirsch offered an explanation in his paper \”Sluggish Institutions in a Dynamic World: Can Unions and Industrial Competition Coexist?\” His argument is that, over time, in dynamic and competitive U.S. markets, formal union rules are too inflexible, imposing extra costs that make unionized firms less able to compete. He argues: \”If worker-based institutions are to flourish, they must add value and permit companies to perform at levels similar to those obtained under evolving nonunion governance norms.\”

International comparisons show that the U.S. economy is something of an outlier in its low levels of union membership. The first column of the table shows the union membership rate in 2006. The second column shows the union \”coverage rate,\” which refers to the total share of workers whose compensation is determined by union bargaining, even if some of those workers are not union members. In the United States, union membership and union coverage are very similar. For example, the BLS report notes that in 2011, the U.S. economy had 14.8 million union members and another 1.5 million workers who did not belong to a union but whose jobs were covered by a union contract. About half of that 1.5 million are government workers. However, in some other countries, like France, the gap between union membership and union coverage can be quite substantial. In Japan, it is even possible to be a union member but not to have wages determined by a bargaining contract.

Clearly, it is possible for high-income countries around the world like Germany, France, Sweden, the United Kingdom, and Canada to grow and continue to be high-income even with far higher rates of unionization than the U.S. economy. The extremely wide variation across countries also suggests that unionization may be a rather different phenomenon across countries.

With this wide variation in mind, I\’ve grown cautious over the years about all blanket statements about unionization–positive or negative. In the private sector, American-style unionization has essentially failed to propagate; in the public sector, it has had at best only very partial success. But back in 1970, the great economist Albert O. Hirschman wrote a book called Exit, Voice, and Loyalty. He argued that when members of any organization are faced with conflict, they must choose between expressing their disagreement through \”voice\” or leaving the organization through \”exit.\” Many American workplaces are essentially organized on the principle that the voice of workers is constrained and the possibility of exit is emphasized. I sometimes wonder if a different kind of American labor organization might do a better job of using voice to improve productivity and in that way raise the compensation of its members.

Source note: Thanks to Danlu Hu for putting together the time series graph of unionization rates over time and the data table on international comparisons.

For unionization rates over time, the data from 2001 to 2011 is readily available at the Bureau of Labor Statistics website. Data on U.S. union membership from 1930 to 1994, along with the remaining data, can be found by hunting around the BLS website, or else by looking at the 2004 paper by Gerald Mayer, \”Union Membership Trends in the United States.\”

The data on international comparisons is from the Database on Institutional Characteristics of Trade Unions, Wage Setting, State Intervention and Social Pacts, 1960-2010 (ICTWSS), available at the website of the Amsterdam Institute for Advanced Labor Studies.

U.S. Gasoline Prices and Consumption in International Context

Gasoline prices are spiking up toward $4 per gallon, so it\’s a useful time to review prices over the last couple of decades and some international comparisons. Here\’s a figure I generated using the ever-helpful FRED (Federal Reserve Economic Data) website maintained by the St. Louis Fed showing gasoline prices since 1990.

The overall pattern here is fairly clear. Gasoline prices were fairly flat through the 1990s at between about $1 and $1.50 per gallon. Starting around 2000, gasoline prices start rising. There\’s a lot of volatility in the pattern, and in particular, gasoline prices drop off when demand for gasoline falls during recessionary periods (as shown by the shaded areas in the figure). But the overall pattern of rising gasoline prices from about 2000 up through 2008 is pretty clear. Gasoline prices are now headed back toward their level before the recession-induced fall in prices in 2008.

Most supply-and-demand explanations of gasoline markets emphasize that supply often adjusts fairly slowly. The process of searching for and discovering oil, and then drilling, transporting, refining, wholesaling, and retailing it, involves many interconnected steps. Thus, when demand falls in a recession, oil production doesn\’t drop off sharply–instead, prices fall. At other times, however, a disruption in this chain of supply can reduce the quantity that would otherwise have been available later in the process, and so prices rise.

The overall pattern of rising prices through most of the 2000s is usually attributed to the growth of demand for oil outstripping the growth of supply–where much of the rising demand for oil comes from rapidly growing economies like China, India, and Brazil. From this perspective, it seems utterly unsurprising that the price of gasoline has rebounded back near the peaks it reached earlier in 2008.

Although I think most Americans have a general idea that gasoline is often taxed more highly in other countries than it is here, the magnitude of these taxes isn\’t always well-known. Here\’s a table from Christopher R. Knittel\’s article, \”Reducing Petroleum Consumption from Transportation,\” in the Winter 2012 issue of my own Journal of Economic Perspectives, comparing gasoline taxes across countries.

The United States taxes gasoline at about 49 cents per gallon, counting both federal taxes and the average of state taxes. By the time you get to the bottom of the list, you see that countries like the United Kingdom, Germany, and the Netherlands have gasoline taxes about eight times as high, at roughly $4 per gallon. Population densities and living patterns are different in the United States than in these other countries, and I wouldn\’t advocate raising taxes to those levels. On the other side, it\’s hard to believe that phasing in an increase in U.S. gasoline taxes to Canadian levels of 96 cents per gallon would be an unsustainable blow to the U.S. economy, perhaps with a substantial share of the money earmarked for offsetting income tax cuts and part earmarked for long-term deficit reduction. There are a variety of environmental and geopolitical reasons why it might be reasonable policy for the U.S. to put some price disincentives in place for petroleum use.

Here\’s one more figure from Knittel. The horizontal axis of the graph shows the price of gasoline in each country, taxes included. The vertical axis shows the quantity of gasoline used for transportation in each country, measured in gallons per year per capita. This is part of the considerable body of evidence suggesting that when energy prices are higher, people and firms find ways to conserve.

Many Americans do truly hate the idea of higher energy taxes, so I don\’t expect this kind of proposal to make any political progress. Instead, Americans like to pretend that by setting technology standards to require more fuel efficient cars over time, the country can conserve energy without facing a cost. I explained in a post last week, \”Are the New Auto Fuel Economy Standards For Real?\” why this less flexible approach actually imposes higher costs as a way of encouraging energy conservation.

The Great Gatsby Curve

The current chairman of the Council of Economic Advisers, Alan Krueger, has called it the Great Gatsby curve. (Full disclosure: Alan was editor of my own Journal of Economic Perspectives, and thus my direct boss, from 1996-2002.) Here is the curve, from the 2012 Economic Report of the President:

The horizontal axis of the diagram is a measure of economic inequality called the Gini coefficient. For a detailed explanation of how it is calculated, see my earlier post. For present purposes, suffice it to say that the Gini coefficient runs on a scale where 0 is perfect equality, in which all people have the same income, and 1 is perfect inequality, in which one person has all the income. Using data for 1985, countries like the United States and Spain have high levels of income inequality, while Nordic countries like Sweden, Finland, Norway, and Denmark have relatively low levels.
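
For readers who want a concrete sense of how a Gini coefficient is computed, here is a minimal sketch in Python (the function name and the toy income lists are mine, purely for illustration):

```python
def gini(incomes):
    """Compute the Gini coefficient of a list of incomes.

    Uses the mean absolute difference between all pairs of incomes,
    normalized by twice the mean income.
    """
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs of people
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

# Perfect equality: everyone has the same income -> Gini of 0
print(gini([50, 50, 50, 50]))   # prints 0.0
# Extreme inequality: one person has nearly all the income -> Gini near 1
print(gini([0, 0, 0, 100]))     # prints 0.75
```

Note that with a small number of people the coefficient cannot quite reach 1: with four people, one of whom has all the income, it comes out to 0.75 rather than 1.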

The vertical axis of the diagram is labelled the \”intergenerational earnings elasticity.\” This is a way of saying how much the incomes of individuals are correlated with those of their parents. Here\’s the explanation from the Economic Report of the President:

 \”Family (or individual) incomes in one generation are also highly correlated with family (or individual) incomes in the next generation. In other words, the children of parents who are poor are more likely than the children of well-off parents to be poor when they grow up. A common measure of mobility across generations is the intergenerational elasticity (IGE) of earnings or income, which is defined as the percentage difference in a child’s income associated with a 1 percent difference in the parent’s income. … Studies based on U.S. data … suggest that plausible estimates of the average IGE between fathers and sons are between 0.4 and 0.6. An IGE of 0.4 means that if one father earned 20 percent more than another over their lifetime, the first father’s son on average would earn 8 percent more than the second father’s son; an IGE of 0.6 means that the first father’s son would earn 12 percent more on average than the second father’s son. That is, the higher the IGE is, the lower economic mobility is between the generations.\”
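
The arithmetic in the Report\’s examples is simple enough to replicate directly; this sketch (the function name is mine, purely illustrative) just multiplies the IGE by the percent income gap between the parents:

```python
def predicted_child_gap(ige, parent_gap_pct):
    """Predicted percent income gap between two children, given the
    intergenerational earnings elasticity (IGE) and the percent gap
    between their fathers' lifetime incomes."""
    return ige * parent_gap_pct

# The Report's examples: a 20 percent lifetime gap between two fathers
print(predicted_child_gap(0.4, 20))  # 8.0 percent gap with an IGE of 0.4
print(predicted_child_gap(0.6, 20))  # 12.0 percent gap with an IGE of 0.6
```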

Thus, the basic message of the Great Gatsby curve is that when a country has a higher level of income inequality at a point in time (the Gini coefficient on the horizontal axis), that country will also tend to experience less intergenerational mobility (that is, the correlation of income between one generation and the next will tend to be higher, on the vertical axis).

A first obvious question about this relationship is whether it is determined by some quirk in measurement: for example, would using different countries, or a different year, or different measures of inequality alter the relationship greatly? The Economic Report answers that question: \”As other research has shown, the finding of a positive relationship between IGE and inequality … is robust to alternative choices of countries, intergenerational mobility measures, and year in which income inequality is measured …\”

Indeed, for economists who study this literature, the finding that the U.S. economy has less intergenerational mobility than many other high-income countries isn\’t even much of a surprise. For example, Gary Solon was listing evidence on this point in my own Journal of Economic Perspectives back in the Summer 2002 issue in his article, \”Cross-Country Differences in Intergenerational Earnings Mobility.\” That issue also includes three other articles with theories and evidence on intergenerational mobility.

Bhashkar Mazumder of the Chicago Fed offers an overview of the path of intergenerational mobility over time in the United States. He writes: \”After staying relatively stable for several decades, intergenerational mobility appears to have declined sharply at some point between 1980 and 1990, a period in which both income inequality and the economic returns to education rose sharply. … There is fairly consistent evidence that intergenerational mobility has stayed roughly constant since 1990 but remains below the rates of mobility experienced from 1950 to 1980.\”

Of course, income inequality has been high and growing in the United States since the 1985 data shown on the graph above. The clear implication is that the intergenerational earnings elasticity will continue to grow as well. In a January 2012 speech on these issues, Alan Krueger ventured a projection: \”While we will not know for sure whether, and how much, income mobility across generations has been exacerbated by the rise in inequality in the U.S. until today’s children have grown up and completed their careers, we can use the Great Gatsby Curve to make a rough forecast. … The IGE for the U.S. is predicted to rise from .47 to .56. In other words, the persistence in the advantages and disadvantages of income passed from parents to the children is predicted to rise by about a quarter for the next generation as a result of the rise in inequality that the U.S. has seen in the last 25 years.\”

The most common theoretical mechanism hypothesized for this connection between current inequality and less intergenerational mobility is the education system. Gary Solon did much of the early modelling on this issue: here\’s a description of that work from Mazumder:

\”Economic models have emphasized the importance of parental investment in children’s human capital as one of the key mechanisms behind the intergenerational transmission of labor market earnings. One such model developed by Solon points to at least two important factors that could cause intergenerational mobility to change over time: changes in the labor market returns to education and changes in the public provision of human capital. In periods where the returns to schooling are rising, the payoff to a given level of parental investment in children’s human capital will be larger, causing differences between families to persist longer and leading to a decline in intergenerational mobility. In contrast, during periods where public access to schooling becomes more widely available, then one might expect the intergenerational association to decline and mobility to rise.\”

In short, when the returns to human capital are especially high, inequality will be higher. In this situation, those with income will invest more in the education of their children, using education as a way to pass on their own economic position. Indeed, Mazumder also points to some evidence that \”the difference in test scores by family income has grown by 30% to 40% for children born in 2001 relative to those born in 1976.\”

As a solution, it\’s easy to say that we\’re all in favor of expanding education for those at the bottom of the income scale, but if that is indeed true, it\’s fair to say that we as a society have been doing a lousy job of accomplishing that goal over the last few decades. Indeed, we\’ve been doing a lousy enough job as to make one wonder if a general broad rise in overall education levels–as opposed to better schools for one\’s own children or one\’s own town–really is a shared goal for many Americans. Work by Nobel laureate James Heckman and various co-authors has argued that the U.S. high school graduation rate, when consistently measured over time, peaked in the 1960s and has declined since then.

It\’s easy to say that \”we\’re all in favor\” of mobility between generations, but of course, in practice, many of us aren\’t. After all, the highest level of intergenerational mobility would mean zero correlation between the incomes of parents and children. I earn an above-average income, and I invest time and energy and money, and make location choices, so that my children will acquire greater human capital and earn above-average incomes, too. Thus, I must admit that I do not favor completely free mobility of incomes. I\’m sure I\’m not alone. Divide the income distribution into fifths, and think about parents in the top fifth. How many of them would like to live in an economy where their children have an equal chance of ending up in any of the other fifths of the income distribution? (Megan McArdle makes this point with nice force in a blog post here.)

Those who would like an overview of some of the recent more technical debates over the Great Gatsby curve might usefully begin with this post by Miles Corak, an economist at the University of Ottawa who was one of the first to draw the curve.

Putting a Value on State Parks

In the latest issue of Resources magazine, from Resources for the Future, Juha Siikamäki inquires into \”State Parks: Assessing Their Benefits.\”

\”Each year, more than 700 million visits are made to America’s 6,600 state parks. … Using conventional economic approaches to estimate the value of recreation time, combined with relatively conservative assumptions, the estimated annual contribution of the state park system is around $14 billion. That value is considerably larger than the annual operation and management costs of state parks.\”

Siikamäki\’s approach goes like this. Start with estimates of how people use their time. Combining data from a number of time use surveys over time provides this overall pattern for hours of nature recreation per person.

This data on time use can be broken down to the state level. Siikamäki then also created a database of how state parks have changed over time. \”Between 1975 and 2007, about 3,000 new parks totaling about 2 million acres were established in the United States, increasing the total area of the state park system by nearly one-quarter.\” It\’s then possible to estimate how changes in state parks affect nature recreation time in a given state–using statistical methods to try to hold constant other possible confounding factors.

The Resources article is a highly readable overview of this work. Those who want the gory details can turn to Siikamäki\’s more technical article in the Proceedings of the National Academy of Sciences, August 23, 2011, \”Contributions of the US state park system to nature recreation.\” Here\’s the result of the calculation:

 \”This expansion of the state parks is estimated to contribute about 9 percent of all current time use for nature recreation. Overall in the United States, this equals annually about 600 million additional hours of nature recreation, or about 2.7 hours of nature recreation per capita. … Valuing recreation time monetarily requires determining the opportunity cost of time. To illustrate the potential magnitude of recreation’s time value, I used a conventional and commonly adopted approach where recreation time is valued at one-third the wage rate. … Extrapolating from the above results, I estimate about 33 percent of current time use for nature recreation can be attributed to the U.S. state park system. This equals annually about 9.7 hours of nature recreation per capita, or about 2.2 billion hours of nature recreation in total in the United States. The estimated time value of nature recreation generated by the entire U.S. state park system is about $14 billion annually (about $62 per person annually, on average).\”
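
The back-of-the-envelope valuation can be sketched as follows. The $19-per-hour average wage below is my own assumption, chosen only to show how roughly 2.2 billion hours, valued at one-third of the wage rate, lands near the article\’s $14 billion figure; the article does not report the wage series it actually used:

```python
def recreation_time_value(total_hours, avg_hourly_wage, fraction_of_wage=1/3):
    """Back-of-the-envelope value of recreation time: hours valued at a
    fraction (conventionally one-third) of the hourly wage rate."""
    return total_hours * avg_hourly_wage * fraction_of_wage

# About 2.2 billion hours of nature recreation attributed to state parks,
# valued at one-third of an assumed $19/hour average wage
value = recreation_time_value(2.2e9, 19.0)
print(f"${value / 1e9:.1f} billion")  # prints "$13.9 billion"
```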

Of course, these results, like all statistical results, need to be handled with care. Even if state parks encourage considerable recreation on average, it is surely still true that some state parks have bigger payoffs while others have smaller payoffs. It could be that states where citizens have a high demand for nature recreation are also the states more likely to add to the state park system–and perhaps the quantity of nature activities would have risen in those states even without the expansion of the state parks. And even if the state parks had not been expanded, people would have done something with their time, so the value of the state parks should be measured as the marginal increase over that alternative use–an inevitably tricky task.

On the other side, measuring the benefits of state parks purely in terms of recreation leaves out other benefits and thus underestimates the total social benefits of state parks. Siikamäki concludes: \”Nature recreation represents only a partial assessment of the full range of ecosystem services produced by natural areas. Examples of other potentially relevant ecosystem services include carbon sequestration and storage through biological processes, contributions to surface and groundwater services, and benefits from preserving endangered and threatened species. A full assessment of ecosystem services from state parks should consider these nonrecreation contributions, yielding an even more comprehensive—and presumably larger—estimate of the value of America’s state park system.\”

Economics (Almost) Never Sleeps

Catherine Rampell at Economix, the economics blog of the New York Times, reports on \”America’s 10 Most Sleep-Deprived Jobs.\” She writes that Sleepy’s, the mattress chain, \”hired researchers to analyze data from the National Health Interview Survey to determine which occupations, on average, produce workers who sleep the least and the most. The jobs with the most sleep-deprived work forces are below, starting with the most sleep-deprived at the top:\”

Most Sleep-Deprived
6h57m Home Health Aides
7h0m Lawyers
7h1m Police Officers
7h2m Physicians, Paramedics
7h3m Economists
7h3m Social Workers
7h3m Computer Programmers
7h5m Financial Analysts
7h7m Plant Operators
7h8m Secretaries

My own take is that the sleeplessness of economists is a fact begging for an unsubstantiated and unprovable hypothesis.

Does a miserable economy cause economists to sleep less, as they internalize the pain of others? This seems implausible, because it would require that economists care about others.

Is economics the kind of field that tends to attract those who have trouble sleeping? Perhaps only those who have physical trouble sleeping can make it through the economics curriculum, while normal sleepers will perpetually be dozing off in the required classes.

Is economics a field that attracts hypercompetitive people who can\’t sleep because they fear that, somewhere, some other economist might be getting ahead of them? This would also help explain why economists have a tendency to believe that all other economic actors are hypercompetitive, too–they are just projecting their own personalities.

Perhaps the lack of sleep helps to explain why economists have such a difficult time perceiving the reality around them, and thus why their advice is sometimes so weirdly out-of-synch with the actual economy. 

And of course, one possible response is: \”Who cares why economists are sleeping less? As long as economists are suffering in some way, the sun will shine just a little brighter today.\”  

Further hypotheses are solicited. Send suggestions to .