Encouraging Work: Tax Incentives or Social Support?

Consider two approaches to encouraging those with low skills to be fully engaged in the workplace. The American approach focuses on keeping tax rates low and thus providing a greater financial incentive for people to take jobs. The Scandinavian approach focuses on providing a broad range of day care, education, and other services to support working families, but then imposes high tax rates to pay for it all. In the most recent issue of the Journal of Economic Perspectives, Henrik Jacobsen Kleven contrasts these two models in "How Can Scandinavians Tax So Much?" (28:4, 77-98). Kleven is from Denmark, so perhaps his conclusion is predictable. But the analysis along the way is intriguing.

As a starting point, consider what Kleven calls the "participation tax rate." When an average worker in a country takes a job, how much will the money they earn increase their standard of living? The answer depends on two factors: any taxes imposed on what they earn, including income, payroll, and sales taxes; and the loss of any government benefits for which they become less eligible or ineligible because they are working. In the Scandinavian countries of Denmark, Norway, and Sweden, this "participation tax rate" is about double what it is in the United States. Here's Kleven:

The contrast is even more striking when considering the so-called “participation tax rate,” which is the effective average tax rate on labor force participation when accounting for the distortions due to income taxes, payroll taxes, consumption taxes, and means-tested transfers. This tax rate is around 80 percent in the Scandinavian countries, implying that an average worker entering employment will be able to increase consumption by only 20 percent of earned income due to the combined effect of higher taxes and lower transfers. By contrast, the average worker in the United States gets to keep 63 percent of earnings when accounting for the full effect of the tax and welfare system.
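The arithmetic behind the participation tax rate can be sketched with a small calculation. The earnings, tax, and transfer figures below are hypothetical round numbers, chosen only so that the two examples land near the 80 percent and 37 percent rates cited in the quote:

```python
def participation_tax_rate(gross_earnings, taxes_paid, transfers_lost):
    """Share of gross earnings lost to taxes and withdrawn benefits
    when a worker moves from non-employment into a job."""
    return (taxes_paid + transfers_lost) / gross_earnings

# Hypothetical Scandinavian-style worker: earns 300,000 in local currency,
# pays 150,000 in combined income/payroll/consumption taxes, and loses
# 90,000 in means-tested transfers by taking the job.
scandinavian = participation_tax_rate(300_000, 150_000, 90_000)

# Hypothetical U.S.-style worker: earns $40,000, pays $10,000 in taxes,
# and loses $4,800 in means-tested benefits.
us = participation_tax_rate(40_000, 10_000, 4_800)

print(f"Scandinavian-style: {scandinavian:.0%}")  # 80% (keeps 20% of earnings)
print(f"U.S.-style: {us:.0%}")                    # 37% (keeps 63% of earnings)
```

The point of the sketch is that the rate combines two separate wedges, taxes paid and benefits withdrawn, which is why it can far exceed any statutory tax rate.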

A standard American-style prediction would be that countries where the gains from working are so low should see a lower level of participation in the workforce. That prediction does not hold true in cross-country data among high-income countries. Here's a figure from Kleven's paper: Notice that the Scandinavian countries have among the highest participation tax rates, but also have among the highest employment rates in the age 20-59 population, both overall and for females. Correlation isn't causation, as the econometricians love to chant, but it's still intriguing that the overall pattern across countries is that a higher participation tax rate is correlated with a higher employment rate–the opposite of what one might expect.

What explains this pattern? Kleven argues that just looking at the tax rate isn't enough, because it also matters what the tax revenue is spent on. For example, the Scandinavian countries spend a lot of money on universal programs for preschool, child care, and elderly care. Kleven calls these "participation subsidies," because they make it easier for people to work–especially for people who otherwise would need to find a way to pay for child care or elder care. The programs are universal, which means that their value, expressed as a share of income earned, matters much more to a low- or middle-income family than to a high-income family. Here's Kleven:

“[P]articipation subsidies” [are] due to public spending on the provision of child care, preschool, and elderly care. Even though these programs are typically universal (and therefore available to both working and nonworking families), they effectively subsidize labor supply by lowering the prices of goods that are complementary to working. That is, working families have greater need for support in taking care of their young children or elderly parents, and so demand more of those services other things equal. From this perspective, the cross-country correlations shown in Figure 5 have the expected sign; higher public support for preschool, child care, and elder care is positively associated with the rate of employment. Moreover, the Scandinavian countries are strong outliers as they spend more on such participation subsidies (about 6 percent of aggregate labor income) than any other country.”

Any direct comparison between the United States (population of 316 million) and the Scandinavian countries of Denmark (6 million), Norway (5 million), and Sweden (10 million) is of course fraught with peril. Their histories, politics, economies, and institutions differ in so many ways. You can't just pick up long-standing policies or institutions in one country, plunk them down in another country, and expect them to work the same way.

That said, Kleven's basic conceptual point seems sound. Provision of good-quality preschool, child care, and elder care does make it easier for all families, but especially low-income families with children, to participate in the labor market. In these three Scandinavian countries, the power of these programs to encourage labor force participation seems to overcome the work disincentives that arise in financing and operating them. This argument has nothing to do with whether preschool and child care programs might help some children to perform better in school–although if they do work in that way, it would strengthen the case for taking this approach.

So here is a hard but intriguing hypothetical question: The U.S. government spends something like $60 billion per year on the Earned Income Tax Credit, which is a refundable tax credit providing income mainly to low-income families with children, and almost as much on the refundable child tax credit. Would low-income families with children be better off, and more attached to the workforce, if a sizeable portion of the $100 billion-plus spent on these tax credits–aimed at providing financial incentives to work–was instead directed toward universal programs of preschool, child care, and elder care?

World Toilet Day

The United Nations voted last year to designate November 19 as World Toilet Day because the first World Toilet Summit began on that day in 2001, and that day also marked the founding of the World Toilet Organization. Out of the global population of 7 billion, about 1 billion people defecate in the open, with about 600 million of those people living in India. According to the World Health Organization and UNICEF, there are 19 countries in the world where more than half the rural population still practices open defecation.

Especially in areas with relatively dense populations, this practice has health consequences. It's difficult to separate the effects of lacking toilets from other issues of unsafe water supplies. But the World Toilet Organization says that the lack of toilets causes an average of 1,000 child deaths each day due to diarrhea, and other estimates refer to stunted growth and prevalence of infections like typhoid from fecal-borne diseases. There is also an issue of violence against women and girls perpetrated when they lack a private and secure place to defecate.

Part of the answer here is just to build more toilets: indeed, India announced a program this summer for building millions of toilets in a few months–at an average pace of about one per second. But the research in this area also suggests the importance of altering social norms about sanitation and the water supply. A much-discussed program here is “Community-Led Total Sanitation” (CLTS), which seeks to involve the community in construction and maintenance of sanitation facilities, as well as changing past practices where needed. Some links to research are available at the CLTS website. For examples of research in this area, here's a study of experience in East Java, Indonesia, and preliminary results from a four-country comparison study of India, Indonesia, Mali, and Tanzania.
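The "one per second" pace checks out with back-of-the-envelope arithmetic. The target and timeline below are hypothetical round numbers, since the announcement's exact figures aren't given here:

```python
# Rough check: "millions of toilets in a few months" at about one per second.
seconds_per_month = 60 * 60 * 24 * 30   # roughly 2.6 million seconds per month
toilets_target = 8_000_000              # hypothetical round-number target
months = 3                              # "a few months"

rate_per_second = toilets_target / (months * seconds_per_month)
print(round(rate_per_second, 2))  # close to 1 toilet per second
```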

Talking about toilets can feel uncomfortable, and the discussion can quickly lose its policy focus. In the bulk of this post, I have manfully avoided referring to Thomas Crapper, who greatly improved and popularized the flush toilet in the 19th century. I have not discussed the We Can't Wait promotions or the dancing turds ads in India. I have sidestepped whether toilet policy should be pursued through a bottom-up or top-down approach. In this case, as in so many others, the easy giggle can too often be a way of minimizing a real public health challenge.

Robert Solow on Topics in Productivity Growth

For the long-run future of the U.S. economy, and indeed, the global economy, no subject is more important than the likely course of productivity growth. The McKinsey Quarterly celebrated 50 years of publication with its September 2014 issue. That issue includes a short interview with Robert Solow, with Martin Neil Baily and Frank Comes as interlocutors.

Solow, of course, won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (commonly known as the "Nobel Prize in economics") in 1987 "for his contributions to the theory of economic growth." In a nutshell, Solow demonstrated that the accumulation of capital and of labor was not a sufficient explanation for the process of economic growth, and that a broad element of "technological progress" also needed to play a role. If that concept seems obvious now, it is Solow's pathbreaking work from more than a half-century ago that helped to make it obvious. Solow is also one of the most gifted expositors in economics. Here are a few of his comments from the interview:

Solow on economic forecasting:

"As an ordinary macroeconomist, I have avoided forecasting as if it were a foul disease—as indeed it is. It's very damaging to the tissues. So I don't think one can say too much."

Solow on capital intensity in the service sector:

"I don't think we even have a very clear idea about the relative capital intensity within the service sector or between the service sector and goods-producing sector. I remember I was once writing something in which I was describing the service sector as being of relatively low capital intensity. And then I stopped and remembered that the following day I had an appointment with my dentist and that my dentist's office was as capital intensive a 500 square feet as I had ever seen in my life."

Solow on the importance of global competition to productivity growth:

What came as something completely new to me was that if you looked at the same industry across countries, there were almost always dramatic differences in either labor productivity or total factor productivity. To my surprise, it turned out that most of the time, certainly more often than not, the difference in productivity—in the auto industry or the steel industry or the residential-construction industry in the US and in countries in Europe—was not only substantial but couldn’t seriously be explained by differences in access to technology.

We also found that the productivity differences could not be traced to differences in access to investment capital. The French automobile industry, much to my surprise, turned out to be more capital intensive than the American automobile industry. So it was not that either. The MGI [McKinsey Global Institute] studies instead traced these differences in productivity to organizational differences, to the way tasks were allocated within a firm or a division—essentially, to failures in managerial decisions. I was, of course, instantly suspicious of this. I figured to myself, “What do you expect a bunch of management consultants to find but differences in management capacities? That’s in their genes. That’s not in my genes.” But MGI made a very convincing case for this. And I came to believe that it was right. …

[T]here was another surprise, for which there was partly anecdotal, partly statistical evidence. If you asked why there were differences that could be erased or diminished by better management, the answer was that it took the spur of sharp competition to induce managers to do what they were in principle capable of doing. So the idea that everybody is everywhere and always maximizing profits turned out to be not quite right.
MGI made a very good case that what was lacking in these trailing industries in other countries—or in the US, in cases where the US trailed—was enough exposure to competition from whoever in the world had the best practice. And this, of course, can apply within a country. We know that in any industry, there is a whole distribution of productivity levels across firms and even, sometimes, across establishments within a firm. And much of that must be due to the absence of any spur to do more. So an interesting conclusion to me was that international trade serves a purpose beyond exploiting comparative advantage. It exposes high-level managers in various countries to a little fright. And fright turns out to be an important motivation. … [I]t goes beyond that, even. Competing as part of the world economy is an important way of gaining access to scale. If you’re a Belgian company or even a French company, it may be that best practice requires a scale of production larger than the French domestic market will provide for French producers. So it’s important for such companies to have access to the international market.

How Many Still Without Health Insurance?

The Patient Protection and Affordable Care Act was passed in 2010. The exchanges aimed at reducing the number of people without health insurance started operating, albeit in a halting and often dysfunctional way, in October 2013. So what progress has been made in reducing the number of Americans who now lack health insurance? I checked four sources: the Current Population Survey from the U.S. Census Bureau, the American Community Survey, also from the Census, the National Health Interview Survey from the Centers for Disease Control, and the Gallup Poll.

Before listing the results, I'll just point out that my expectations were not high. No one who took more than a minute to consider the actual legislation ever expected that it would provide universal health insurance. As one example, here's a White House announcement in September 2010 predicting that the act would reduce the number of Americans without health insurance from about 50 million to about 18 million. In May 2013, the Congressional Budget Office estimated that the implementation of the Affordable Care Act would reduce the number of Americans without health insurance from 55 million in 2013 to 31 million in 2016, with most of that drop coming from people signing up for the new insurance "exchanges" and some coming from an expansion of Medicaid. But the CBO also estimated that by 2023, there would still be 31 million uninsured. These estimates were noticed: for example, here's a June 2013 Washington Post story about those 31 million. So after all the tumult and the shouting over the Affordable Care Act, both during and after its passage, its White House supporters optimistically expected it to solve about 60% of the problem of Americans lacking health insurance, and nonpartisan sources like the CBO thought it might address about 40% of that problem.
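The rough 60% and 40% shares follow directly from the projections just cited; here is the back-of-the-envelope check, using only the numbers in the text:

```python
# Share of the uninsured problem each projection expected to address.

# White House (September 2010): uninsured fall from ~50 million to ~18 million.
white_house_share = (50 - 18) / 50

# CBO (May 2013): uninsured fall from 55 million in 2013 to 31 million by 2016.
cbo_share = (55 - 31) / 55

print(f"White House projection: {white_house_share:.0%}")  # 64%
print(f"CBO projection: {cbo_share:.0%}")                  # 44%
```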

What's the evidence from the four sources I checked? Well, the first thing one discovers is that it's only three sources. In one of those acts of bad timing that verges on statistical malpractice, the Census Bureau decided that 2013, right on the verge of the biggest change in the U.S. healthcare system since the 1960s, was an appropriate time to change its survey questions about whether people have health insurance, in such a way that the answers from past surveys are not comparable to the results for 2013. For details, see the September 2014 report on "Health Insurance in the United States: 2013" from the U.S. Census Bureau. They write:

The CPS [Current Population Survey] is the longest-running survey conducted by the Census Bureau. The key purpose of the CPS ASEC [Annual Social and Economic Supplement] is to provide timely and detailed estimates of economic well-being, of which health insurance coverage is an important part. … Traditionally, this report has included detailed comparisons of year-to-year changes in health insurance coverage using the CPS ASEC. However, due to the redesign of the health insurance section of the CPS ASEC, its estimates of health insurance coverage are not directly comparable to estimates from prior years of the survey. … The redesigned CPS ASEC is based on over a decade of research, including two national field tests as well as cognitive testing.

If government statisticians had a fan club, I'd be sitting in the front row beaming. But no matter the reasons (the plans for the change were set years earlier, limits on funding precluded doing two surveys, and so on), the timing of this change was a blunder. The survey reports: "In 2013, the percentage of people without health insurance coverage for the entire calendar year was 13.4 percent, or 42.0 million." Whether this was higher or lower than previous years cannot be answered using this data.

However, the Census points to another survey, the American Community Survey, which has data on the share of those without health insurance from 2008 to 2013. The results aren't especially informative: a small rise in the share of uninsured during the Great Recession, and a small fall since then.

The National Health Interview Survey from the Centers for Disease Control has been done since 1957. Here are the preliminary results released in mid-September with regard to health insurance for the survey carried out in March 2014. Again, the pattern shows a rise in the share of uninsured during the Great Recession, and a fall since then, without a big break from trend.

For those looking for evidence that the share of uninsured is falling, the strongest evidence comes from the most recent Gallup survey data. This data goes through the third quarter of 2014, and shows a substantial fall in the share of those without health insurance since fall 2013–when the efforts to start covering more of the uninsured kicked into gear.


The Gallup data is the only one of these sources that tracks up through third quarter of 2014. The drop in the last year almost surely means something. But it\’s concerning that the patterns of the Gallup data do not match the overall pattern of the systematic and well-established government surveys. In comparing surveys, the level of a certain answer may be higher or lower, depending on exactly how a question is worded, but the change in the level should still show similar timing. The government surveys show the share of uninsured peaking in 2010, while the Gallup data shows a more-or-less steady rise in the share of uninsured, with a couple of puzzling downward bumps, until third quarter 2013. It may be that people\’s awareness of health insurance, or whether they think they have it, or their concern over having health insurance, may be fluctuating in ways that have a bigger effect on the Gallup poll results than on the other surveys.

At this stage, there are bundles of news stories about how many people signed up for the health insurance exchanges or for the expansion of Medicaid, and how the share of people getting health insurance through their jobs is falling. But what's the overall effect? The national surveys don't show that the 2010 health care legislation has had much effect at all on the share of those without health insurance, at least not through the first quarter of 2014. Next March and June, when the National Health Interview Survey preliminary data for the later part of 2014 are published, we should start to have a better picture–and a sense of whether the drop shown in the Gallup data holds up in better-established surveys. In September 2015, we'll have data for 2014 from the Current Population Survey, too.

But it should be crystal-clear at this point that if you believed the Patient Protection and Affordable Care Act would provide anything remotely close to universal health insurance coverage, you were badly misled. So far, the CBO-style predictions that the legislation was headed for addressing less than half of this problem seem on the mark–and perhaps even a bit too optimistic.

Facts on U.S. Income Distribution, Before and After Taxes

However much and in whatever direction your knee-jerk reflexes twitch when the subject of income distribution arises, it's useful to start with a grounding of facts. The Congressional Budget Office lays out many of the key facts in its November 2014 report "The Distribution of Household Income and Federal Taxes, 2011."

For starters, here's an overview of the distribution of before-tax, after-transfer, and after-tax income for the U.S. population. The population is first divided into fifths, or "quintiles," according to market income–which includes labor, business, and capital income.

Here are a few thoughts:

1) There's a nice illustration here of the difference between median and average. The average household income for the middle quintile is $49,800. This will also be roughly the median income for a U.S. household: that is, the level where 50% of households have more and 50% have less. But the average market income for all U.S. households as shown in the last column is $80,600, because the incomes of those at the top are so high that they pull up the average for the distribution as a whole.

2) It's interesting that government transfers for those in the bottom quintile are smaller than those for other income groups. To be clear, the value of these government transfers includes cash payments from all levels of government–federal, state, and local–and also includes the value of in-kind transfers like Medicaid, Food Stamps, and Medicare.

3) Federal taxes rise steadily with income levels, as one would hope and expect.
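The median-versus-average point in thought 1) is easy to see with a toy example. The five household incomes below are invented, chosen so that the median and mean happen to match the $49,800 and $80,600 figures in the CBO table:

```python
import statistics

# Five hypothetical household market incomes: the single very high income
# pulls the mean well above the median, as in the CBO table.
incomes = [20_000, 35_000, 49_800, 70_000, 228_200]

print(statistics.median(incomes))  # 49800: half of households above, half below
print(statistics.mean(incomes))    # 80600: pulled up by the top income
```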

What have patterns of market income and federal taxes looked like over time? First, here is the change in market income over time. This is divided into the lowest quintile, the three middle quintiles, the top quintile minus the top 1%, and then the top 1%.

Again, a few thoughts:

1) If one were looking only at the comparison of the bottom four quintiles with the 81st-99th percentile, the growth in inequality of income would actually be fairly stark. Over this time, the bottom four quintiles have all seen an increase in market income of 16%, while the 81st to 99th percentile group has seen a rise of 56%. When I look at this divergence, I find that I am comfortable with an explanation involving higher returns to skilled labor as the information and communications technology revolution has taken place.

2) But then there's the 1% line. The construction of the figure requires that all the lines start in the same place at 0% change in 1979, but the 1% line is almost immediately rising faster than the other groups. Remember that this is a rise in market income, not a change in after-tax income that could be directly related to changes in tax rates at the top. The 1% line spikes in the 1990s, with the dot-com boom, and then falls, spikes again with the housing bubble and stock market resurgence before the Great Recession, and then falls again. It is hard to look at the 1% line and describe it as a smooth rise in the returns to skilled labor; it looks a lot more like the 1% are receiving a greater share of their income related to stock market gains, in ways that rise and fall with the market.

3) When I post this kind of figure, I often receive notes telling me to beware of the fact that the top 1% isn't the same each year, but over time is instead a rotating group. The point is a fair one. But it's also worth noting that there is no evidence of which I am aware suggesting that movement in and out of different income groups has increased over time. Thus, what is clearly a greater inequality of income does not appear to be offset by greater mobility.

What about the path of federal tax rates? Here's the path of tax rates over time that includes all federal taxes: that is, federal income taxes, payroll taxes, corporate income taxes (attributed back to individuals), and excise taxes. Again, the division here is top 1%, 81st-99th percentile, middle three quintiles, and bottom quintile. Notice that these are average tax rates. Thus, a person at the very top of the income distribution might well be paying a tax rate on the marginal dollar of market income received of 40% or more, but the average tax rate for that same person over all income received could well be the 29% shown for 2011 in the figure.
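The gap between marginal and average rates can be illustrated with a toy progressive schedule. The brackets and income below are invented for illustration, not the actual federal schedule:

```python
def average_rate(income, brackets):
    """Average tax rate under a simple progressive schedule.

    `brackets` is a list of (threshold, marginal_rate) pairs: income above
    each threshold, up to the next threshold, is taxed at that marginal rate.
    """
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else income
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax / income

# Hypothetical schedule: 10% up to $100k, 25% up to $500k, 40% above.
brackets = [(0, 0.10), (100_000, 0.25), (500_000, 0.40)]

# A household with $800,000 of income faces a 40% marginal rate on its
# last dollar, but a much lower average rate over all of its income:
print(average_rate(800_000, brackets))  # about 0.29
```

Because the lower brackets apply to everyone's first dollars, the average rate always sits below the top marginal rate, which is why a 40%-plus marginal rate is consistent with the 29% average shown in the figure.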

Clearly, there is some fall and rise and fall again in the average tax rates paid by the top 1%. There is also some fall since the mid-1990s in average federal tax rates on income paid by all income groups. But the changes in average tax rates are clearly much, much smaller than the changes in market income levels, which are what is really driving the rise in inequality.

As I have pointed out on this blog before, various reports have emphasized the theme that when after-tax, after-benefit inequality in the U.S. economy is compared to that of other high-income countries, the greater inequality in the U.S. economy is not primarily driven by the U.S. tax system being less progressive than that of other countries–which doesn't actually seem to be true–but by the fact that the benefits paid by the U.S. government are not as targeted to those with lower incomes as the benefits paid in other countries. For example, here's a discussion of an OECD report on this theme, and here's a discussion of a CBO report noting that while U.S. redistribution via the tax code hasn't changed much in recent decades, redistribution via government spending has declined.

Fall 2014 Journal of Economic Perspectives

One of my hobbies is blogging as the Conversable Economist. But my actual paid job since 1986 has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which several years back made the decision–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. The journal's website is here, and this is the Table of Contents for the Fall 2014 issue, with direct links to the papers. I will include the abstracts below, and will probably blog about some of the individual papers in the next week or two, as well.

Symposium: Social Networks
"Networks in the Understanding of Economic Behaviors," by Matthew O. Jackson
Full-Text Access
"From Micro to Macro via Production Networks," by Vasco M. Carvalho
Full-Text Access
"Community Networks and the Process of Development," by Kaivan Munshi
Full-Text Access

Symposium: Tax Enforcement and Compliance
"How Can Scandinavians Tax So Much?" by Henrik Jacobsen Kleven
Full-Text Access
"Why Do Developing Countries Tax So Little?" by Timothy Besley and Torsten Persson
Full-Text Access
"Taxing across Borders: Tracking Personal Wealth and Corporate Profits," by Gabriel Zucman
Full-Text Access
"Tax Morale," by Erzo F. P. Luttmer and Monica Singhal
Full-Text Access

Articles
"The Economics of Guilds," by Sheilagh Ogilvie
Full-Text Access
"The Wages of Sinistrality: Handedness, Brain Structure, and Human Capital Accumulation," by Joshua Goodman
Full-Text Access

Features

"Retrospectives: The Cold-War Origins of the Value of Statistical Life," by H. Spencer Banzhaf
Full-Text Access
"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access
Correspondence: "The Missing Middle," by James Tybout
Full-Text Access

_______________________

Abstracts for the Fall 2014 Journal of Economic Perspectives 

"Networks in the Understanding of Economic Behaviors"
Matthew O. Jackson
As economists endeavor to build better models of human behavior, they cannot ignore that humans are fundamentally a social species with interaction patterns that shape their behaviors. People's opinions, which products they buy, whether they invest in education, become criminals, and so forth, are all influenced by friends and acquaintances. Ultimately, the full network of relationships – how dense it is, whether some groups are segregated, who sits in central positions – affects how information spreads and how people behave. Increased availability of data coupled with increased computing power allows us to analyze networks in economic settings in ways not previously possible. In this paper, I describe some of the ways in which networks are helping economists to model and understand behavior. I begin with an example that demonstrates the sorts of things that researchers can miss if they do not account for network patterns of interaction. Next I discuss a taxonomy of network properties and how they impact behaviors. Finally, I discuss the problem of developing tractable models of network formation.
Full-Text Access | Supplementary Materials

"From Micro to Macro via Production Networks"
Vasco M. Carvalho
A modern economy is an intricately linked web of specialized production units, each relying on the flow of inputs from their suppliers to produce their own output which, in turn, is routed towards other downstream units. In this essay, I argue that this network perspective on production linkages can offer novel insights on how local shocks occurring along this production network can propagate across the economy and give rise to aggregate fluctuations. First, I discuss how production networks can be mapped to a standard general equilibrium setup. In particular, through a series of stylized examples, I demonstrate how the propagation of sectoral shocks – and hence aggregate volatility – depends on different arrangements of production, that is, on different "shapes" of the underlying production network. Next I explore, from a network perspective, the empirical properties of a large-scale production network as given by detailed US input-output data. Finally I address how theory and data on production networks can be usefully combined to shed light on comovement and aggregate fluctuations.
Full-Text Access | Supplementary Materials

"Community Networks and the Process of Development"
Kaivan Munshi
My objective in this paper is to lay the groundwork for a new network-based theory of economic development. The first step is to establish that community-based networks are active throughout the developing world. Plenty of anecdotal and descriptive evidence supports this claim. However, showing that these networks improve the economic outcomes of their members is more of a challenge. Over the course of the paper, I will present multiple strategies that have been employed to directly or indirectly identify network effects. The second step is to look beyond a static role for community networks, one of overcoming market failures and improving the outcomes of their members in the short-run, to examine how these informal institutions can support group mobility. A voluminous literature documents the involvement of communities in internal and international migration, both historically and in the contemporary economy. As with the static analysis, the challenge here is to show statistically that community networks directly support the movement of groups of individuals. I will show how predictions from the theory can be used to infer a link between networks and migration in very different contexts.
Full-Text Access | Supplementary Materials

"How Can Scandinavians Tax So Much?"
Henrik Jacobsen Kleven
American visitors to Scandinavian countries are often puzzled by what they observe: despite large income redistribution through distortionary taxes and transfers, these are very high-income countries. They rank among the highest in the world in terms of income per capita, as well as most other economic and social outcomes. The economic and social success of Scandinavia poses important questions for economics and for those arguing against large redistribution based on its supposedly detrimental effect on economic growth and welfare. How can Scandinavian countries raise large amounts of tax revenue for redistribution and social insurance while maintaining some of the strongest economic outcomes in the world? Combining micro and macro evidence, this paper identifies three policies that can help explain this apparent anomaly: the coverage of third-party information reporting (ensuring a low level of tax evasion), the broadness of tax bases (ensuring a low level of tax avoidance), and the strong subsidization of goods that are complementary to working (ensuring a high level of labor force participation). The paper also presents descriptive evidence on a variety of social and cultural indicators that may help in explaining the economic and social success of Scandinavia.
Full-Text Access | Supplementary Materials

"Why Do Developing Countries Tax So Little?"
Timothy Besley and Torsten Persson
Low-income countries typically collect taxes of between 10 and 20 percent of GDP, while the average for high-income countries is more like 40 percent. In order to understand taxation, economic development, and the relationships between them, we need to think about the forces that drive the development process. Poor countries are poor for certain reasons, and these reasons can also help to explain their weakness in raising tax revenue. We begin by laying out some basic relationships regarding how tax revenue as a share of GDP varies with per capita income and with the breadth of a country's tax base. We sketch a baseline model of what determines a country's tax revenue as a share of GDP. We then turn to our primary focus: why do developing countries tax so little? We begin with factors related to the economic structure of these economies. But we argue that there is also an important role for political factors, such as weak institutions, fragmented polities, and a lack of transparency due to weak news media. Moreover, sociological and cultural factors, such as a weak sense of national identity and a poor norm for compliance, may stifle the collection of tax revenue. In each case, we suggest the need for a dynamic approach that encompasses the two-way interactions between these political, social, and cultural factors and the economy.
Full-Text Access | Supplementary Materials

"Taxing across Borders: Tracking Personal Wealth and Corporate Profits"
Gabriel Zucman
This article attempts to estimate the magnitude of corporate tax avoidance and personal tax evasion through offshore tax havens. US corporations book 20 percent of their profits in tax havens, a tenfold increase since the 1980s; their effective tax rate has declined from 30 to 20 percent over the last 15 years, and about two-thirds of this decline can be attributed to increased international tax avoidance. Globally, 8 percent of the world's personal financial wealth is held offshore, costing governments more than $200 billion every year. Despite ambitious policy initiatives, profit shifting to tax havens and offshore wealth are rising. I discuss the recent proposals made to address these issues, and I argue that the main objective should be to create a world financial registry.
Full-Text Access | Supplementary Materials

"Tax Morale"
Erzo F. P. Luttmer and Monica Singhal
There is an apparent disconnect between much of the academic literature on tax compliance and the administration of tax policy. In the benchmark economic model, the key policy parameters affecting tax evasion are the tax rate, the detection probability, and the penalty imposed conditional on the evasion being detected. Meanwhile, tax administrators also tend to place a great deal of emphasis on the importance of improving "tax morale," by which they generally mean increasing voluntary compliance with tax laws and creating a social norm of compliance. We will define tax morale broadly to include nonpecuniary motivations for tax compliance as well as factors that fall outside the standard, expected utility framework. Tax morale does indeed appear to be an important component of compliance decisions. We demonstrate that tax morale operates through a variety of underlying mechanisms, drawing on evidence from laboratory studies, natural experiments, and an emerging literature employing randomized field experiments. We consider the implications for tax policy and attempt to understand why recent interventions designed to improve morale, and thereby compliance, have had mixed results to date.
Full-Text Access | Supplementary Materials

"The Economics of Guilds"
Sheilagh Ogilvie
Occupational guilds in medieval and early modern Europe offered an effective institutional mechanism whereby two powerful groups, guild members and political elites, could collaborate in capturing a larger slice of the economic pie and redistributing it to themselves at the expense of the rest of the economy. Guilds provided an organizational mechanism for groups of businessmen to negotiate with political elites for exclusive legal privileges that allowed them to reap monopoly rents. Guild members then used their guilds to redirect a share of these rents to political elites in return for support and enforcement. In short, guilds enabled their members and political elites to negotiate a way of extracting rents in the manufacturing and commercial sectors, rents that neither party could have extracted on its own. First, I provide an overview of where and when European guilds arose, what occupations they encompassed, how large they were, and how they varied across time and space. I then examine how guild activities affected market competition, commercial security, contract enforcement, product quality, human capital, and technological innovation. The historical findings on guilds provide strong support for the view that institutions arise and survive for centuries not because they are efficient but because they serve the distributional interests of powerful groups.
Full-Text Access | Supplementary Materials

"The Wages of Sinistrality: Handedness, Brain Structure, and Human Capital Accumulation"
Joshua Goodman
Left- and right-handed individuals have different neurological wiring, particularly with regard to language processing. Multiple datasets from the United States and the United Kingdom show that lefties exhibit significant human capital deficits relative to righties. Lefties score 0.1 standard deviations lower on cognitive skill measures, have more behavioral problems, have more learning disabilities such as dyslexia, complete less schooling, and work in occupations requiring less cognitive skill. Most strikingly, lefties have 10-12 percent lower annual earnings than righties, much of which can be explained by observable differences in cognitive skills and behavioral problems. Lefties work in more manually intensive occupations than do righties, further suggesting their primary labor market disadvantage is cognitive rather than physical. I argue here that handedness can be used to explore the long-run impacts of differential brain structure generated in part by genetics and in part by poor infant health.
Full-Text Access | Supplementary Materials

"Retrospectives: The Cold-War Origins of the Value of Statistical Life"
H. Spencer Banzhaf
This paper traces the history of the "Value of Statistical Life" (VSL), which today is used routinely in benefit-cost analysis of life-saving investments. The "value of statistical life" terminology was introduced by Thomas Schelling (1968) in his essay, "The Life You Save May Be Your Own." Schelling made the crucial move to think in terms of risk rather than individual lives, hoping to dodge the moral thicket of valuing "life." But as recent policy debates have illustrated, his move only thickened it. Tellingly, interest in the subject can be traced back another twenty years before Schelling's essay to a controversy at RAND Corporation following its earliest application of operations research to defense planning. RAND wanted to avoid valuing pilots' lives, but the Air Force insisted they confront the issue. Thus, the VSL is not only well acquainted with political controversy; it was born from it.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading"
Timothy Taylor
Full-Text Access | Supplementary Materials

"Correspondence: The Missing Middle"
James Tybout
Full-Text Access | Supplementary Materials

Why Experts Buy Generic

You are buying an over-the-counter medication. You see several shelves of brand-name medications of the type you want, along with a generic store brand that is typically much cheaper. Do you buy the cheaper generic version? And perhaps more interesting, if you were standing beside an expert who really knew what was in all of the medications, would the expert buy the cheaper generic version?

Bart J. Bronnenberg, Jean-Pierre Dubé, Matthew Gentzkow, and Jesse M. Shapiro tackle this question in "Do Pharmacists Buy Bayer? Informed Shoppers and the Brand Premium." It's available as National Bureau of Economic Research Working Paper #20295, August 2014. (NBER working papers are not freely available on-line to everyone, but they are inexpensively available, and many readers will have access through library subscriptions.) They start the paper by pointing out the example of aspirin as a generic equivalent (citations omitted).

A 100-tablet package of 325mg Bayer Aspirin costs $6.29 at cvs.com. A 100-tablet package of 325mg CVS store-brand aspirin costs $1.99. The two brands share the same dosage, directions, and active ingredient. Aspirin has been sold in the United States for more than 100 years, CVS explicitly directs consumers to compare Bayer to the CVS alternative, and CVS is one of the largest pharmacy chains in the country, with presumably little incentive to sell a faulty product. Yet the prevailing prices are evidence that some consumers are willing to pay a three-fold premium to buy Bayer.

A short readable overview of the paper is available from NBER here, and here are some notable findings from the overview.

In a detailed case study of headache remedy purchases, the researchers find that more-informed consumers are less likely to pay extra to buy national brands, with pharmacists choosing them over store brands only 9 percent of the time, compared with 26 percent of the time for the average consumer. Similarly, chefs devote 12 percentage points less of their purchases of kitchen staples to national brands than otherwise similar non-chefs. …

Controlling for household income, other demographics, and the market, chain, and quarter in which the purchase is made, a household whose primary shopper correctly identifies all active ingredients in a national brand has an 85 percent chance of purchasing a store brand, 19 percentage points higher than a shopper who identifies none of the ingredients. … When the primary shopper is either a pharmacist or a physician, the probability of purchasing the store brand is 91 percent, 15 percentage points higher than the probability of otherwise similar buyers who are not in these fields. Primary shoppers who were science majors in college buy more store brands than those with other college degrees. In a second case study of pantry staples such as salt, sugar, and baking powder, the researchers find that chefs devote nearly 80 percent of their purchases to store brands, compared with 60 percent for the average consumer.

The overall pattern is clear: those who are less knowledgeable are more likely to buy brand names, presumably because they feel that there is a quality difference in doing so. Those who are more knowledgeable are more likely to go with generic equivalents, because they feel comfortable making their own judgments about quality–and then going with the lower price.

The data for the study comes from the Nielsen Homescan Panel. It includes information on purchases made on more than 77 million shopping trips by 125,114 households between 2004 and 2011. People in the panel scan barcodes for all consumer packaged goods they buy. As a result, the data includes detailed information on the product and price, as well as when and where it was bought. The Nielsen survey also has basic information about households, like household composition, education, income, race, age, homeownership, and area of residence. The researchers supplemented this data by doing their own survey to gather more information on the specific jobs held by the panelists, and whether panelists can name the specific ingredients in various products. The researchers can then use this mass of data to compare the buying patterns of different groups between branded and generic goods. Here's their bottom line:

We estimate that consumers spend $196 billion annually in consumer packaged goods categories in which a store-brand alternative to the national brand exists, and that they would spend approximately $44 billion less (at current prices) if they switched to the store brand whenever possible. If consumers are systematically misled by brand claims, this has clear implications for evaluating the welfare effects of the roughly $140 billion spent on advertising each year in the US, and for designing federal regulation to minimize the potential for harm …

Life Expectancy Risk and Annuities

In a series of television ads for Ameriprise, spokesman Tommy Lee Jones asks some version of the question: "Retirement. Will You Outlive Your Money?" Katharine G. Abraham and Benjamin H. Harris tackle various aspects of this question in a November 6 research paper from the Economic Studies group at the Brookings Institution: "Better Financial Security in Retirement? Realizing the Promise of Longevity Annuities."

If everyone knew precisely how long they were going to live, along with what expenses needed to be incurred along the way, retirement planning would be considerably easier. But a lot of Americans seem to underestimate how long they will live. A survey done back in 1991-92 asked Americans who at that time were aged 58-61 what percentage chance they had of living to age 75. Enough time has now passed that we know how many actually lived to age 75. For example, the table below shows that of those who said they had a 0% chance of living to age 75, 49.2% did actually live to age 75, as did 59.9% of those who thought they had a 10% chance of reaching 75, and 64.6% of those who thought they had a 20% chance of reaching 75. Again, this survey wasn't asking 20-year-olds or 30-year-olds, but was asking people who were around age 60–and who presumably knew something about their health status.

Indeed, the table does suggest that people do know something about their health status. Those who thought they had a better chance of living to age 75 mostly do live longer. But those who were presumably in poorer or average health or more pessimistic for other reasons, and who said that they had a 70% or less chance of living to age 75, seem to systematically underestimate how long they are likely to live. At the other end, those who are in the best of health or more optimistic, who thought they had a 90 or 100% chance of living to age 75, tend to overestimate their chances.

Annuities are the straightforward financial tool that provides insurance against running out of money before the end of life: basically, you pay for the annuity up front, and then the company guarantees you a stream of payments for the rest of your life.

But many people don't like the idea of annuities, for a variety of reasons. Many people don't like the idea that they will buy an annuity and then die sooner than expected, thus "losing money" on the annuity. (Of course, it is an unavoidable reality of insurance that people should be happy if they pay for insurance year after year, but end up never needing to use it because they don't have the accident or problem for which they bought insurance in the first place. But most people dislike this aspect of insurance.) Other people fear that they might need to make a large expense in the future, perhaps for health care or to help a family member, and if they have annuitized a large share of their retirement wealth they would lose that flexibility. Some may figure that they already have a life-long annuity, in the form of Social Security, so they don't wish to put any more of their wealth into annuity form. Some people fear, with reason, that the markets for annuities in the past didn't always offer that good a deal in terms of what it cost to guarantee a certain stream of income in the future, so that they would rather sit down with a financial adviser and plan their own path for spending.

For an in-depth discussion of these issues, I recommend an essay on "Annuitization Puzzles" by Shlomo Benartzi, Alessandro Previtero, and Richard H. Thaler in the Fall 2011 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since 1987, so I am predisposed to think the articles are of consuming interest.) These authors argue that many people do very much like annuities. For example, Social Security is an annuity program, and it is quite popular. Most of the people who are currently receiving a "defined benefit" pension plan that pays a steady amount for the rest of their lives are not eager to switch over to a "defined contribution" plan where they would need to manage their own wealth. They argue that the lack of annuity purchasing is due more to the decision-making hurdles that face people who are thinking about buying annuities. They write: "An annuity should be viewed as a risk-reducing strategy, but it is instead often considered a gamble: 'Will I live long enough for this to pay off?'"

However, those who believe that many people have a pent-up demand for annuities do face a difficult empirical puzzle. The decision about what age to claim Social Security can be viewed as a decision about implicitly buying an annuity. Consider a person who retires at age 62 or 65, but lives off their savings for several years and doesn't claim Social Security until age 70. In effect, that person is "buying" an annuity by not receiving Social Security payments sooner, and in exchange will receive a larger monthly Social Security check because of deferring the start of payments until age 70. For many people, this "Retire Now, Social Security Later" option would make them better off over their lifetime. But again, many people worry that if they don't start collecting Social Security as soon as possible, they will die in the near future and thus would have "wasted" their Social Security benefits.
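To see the tradeoff in the "Retire Now, Social Security Later" option, here is a minimal break-even sketch. The benefit amounts are purely illustrative assumptions (not taken from any of the papers discussed here), and the calculation ignores discounting, cost-of-living adjustments, and taxes.

```python
# Break-even age for delaying Social Security: an illustrative sketch.
# Assumed (hypothetical) benefits: claiming at 62 pays $1,500/month;
# delaying to 70 pays $2,640/month (a stylized "larger check" figure).

def break_even_age(early_benefit=1500.0, late_benefit=2640.0,
                   early_age=62, late_age=70):
    """Return the age at which cumulative benefits from delayed
    claiming first exceed cumulative benefits from early claiming."""
    months_deferred = (late_age - early_age) * 12
    cum_early = cum_late = 0.0
    for month in range(1, 1200):  # cap the search at 100 years of months
        cum_early += early_benefit
        if month > months_deferred:
            cum_late += late_benefit
        if cum_late > cum_early:
            return early_age + month / 12
    return None  # delaying never catches up under these assumptions

print(round(break_even_age(), 1))  # prints 80.6 with these assumed benefits
```

Anyone who expects to live past the break-even age comes out ahead by delaying, which is why the mortality misperceptions in the survey above matter so much for this decision.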

The specific focus for Abraham and Harris is so-called "longevity annuities." With a standard annuity, you pay a big chunk of money up front, and then receive a stream of payments for the rest of your life. With a longevity annuity, you pay a big chunk of money up front, wait 10 or 20 years, and then receive a stream of payments for the rest of your life. Of course, the benefit is that if you pay now and wait to receive the payments until later, the payments can be larger–even considerably larger. Here's a table showing the payoffs for a man or woman buying an annuity at age 60. If the man starts receiving payments immediately, he gets $534 per month. If the man waits for 15 years, his monthly payments would be about three times as much.
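The arithmetic behind why deferred payments can be so much larger is just present value weighted by survival probabilities: the insurer only pays in the years you are alive, and dollars paid far in the future are discounted. Here is a rough sketch using an assumed Gompertz mortality curve and an assumed 3 percent discount rate; all the parameters are illustrative round numbers, not calibrated to the Abraham-Harris table or any real annuity contract.

```python
import math

# Illustrative actuarial sketch: monthly payout an insurer could offer
# per $100,000 premium, immediate vs. deferred to age 75.
# Gompertz hazard mu(x) = A * exp(B * x); A, B, and the discount rate
# are assumed values chosen only to produce plausible survival curves.

A, B = 1e-4, 0.085          # assumed Gompertz mortality parameters
DISCOUNT = 0.03             # assumed annual discount rate
BUY_AGE, MAX_AGE = 60, 110

def survival(age):
    """Probability that a BUY_AGE-year-old survives to `age`."""
    # Cumulative Gompertz hazard integrated from BUY_AGE to age.
    cum_hazard = (A / B) * (math.exp(B * age) - math.exp(B * BUY_AGE))
    return math.exp(-cum_hazard)

def monthly_payout(premium, start_age):
    """Actuarially fair monthly payment for a life annuity whose
    payments begin at start_age (annual approximation)."""
    pv_per_dollar = sum(
        survival(t) / (1 + DISCOUNT) ** (t - BUY_AGE)
        for t in range(start_age, MAX_AGE)
    )  # present value of $1/year paid while alive
    return premium / pv_per_dollar / 12

immediate = monthly_payout(100_000, 60)
deferred = monthly_payout(100_000, 75)
print(f"immediate: ${immediate:,.0f}/mo, deferred to 75: ${deferred:,.0f}/mo")
```

With these toy parameters the deferred payment comes out several times larger than the immediate one, which is the qualitative pattern the table in the paper shows: survival weighting and discounting together make far-off payments cheap for the insurer to promise.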

The potential benefit of longevity annuities is that they are truly insurance against outliving your assets, by offering a relatively large payoff if you live to a longer age than you expect. For example, a person could buy a longevity annuity that is set to kick in at age 80 or 85, and then figure that they can pretty much spend the rest of their wealth before that age, secure in the knowledge that they have a backstop in place if they live longer than expected. The tradeoff, as with all insurance policies, is that if you don't reach the age where the longevity annuity kicks in, your money ends up being paid to someone else who lived longer than expected.

The market for longevity annuities is growing, but is still small. Abraham and Harris write:

"While the market for deferred-income annuities (DIAs) has blossomed in recent years, many DIAs are sold to individuals in their 50s and almost all are sold with deferral periods of less than 15 years. The current market for true longevity annuities remains very thin. … After managing just $50 million in sales a few years earlier, deferred income annuities reached $2 billion in sales in 2013. … One risk that a standard longevity annuity contract does not address is inflation risk. … [W]e are not aware of any currently offered longevity annuity product that includes an inflation protection option …"

With the spread of 401(k) and IRA and other tax-deferred retirement accounts in recent decades, more and more people are going to face the question of whether to buy annuities in the future. Abraham and Harris look at the distribution of funds that people have in these kinds of accounts. They find that the bottom half or so has little or nothing saved in a defined contribution account–in part because many of them don't have such an account in the first place. But among those in the 55-64 age bracket, the top 25% have $143,000 or more in such an account, and the top 10% have $463,000 or more in such an account.

Annuities may turn out to be one of those products that people don't like to buy, but after they have taken the plunge, they are glad that they bought. One can imagine an option where some degree of annuitization of wealth could be built into 401(k) and IRA accounts. For example, it might be that the default option is that 30% of what goes into your 401(k) or IRA goes to a regular annuity that kicks in when you retire, another 20% goes to a longevity annuity that kicks in at age 80, and the other 50% is treated like a current retirement account, where you can access the money pretty much as you desire after retirement. If you wanted to alter those defaults, you could do so. But experience teaches that many people would stick with the default options, just out of sheer inertia–and that many of them would be glad to have some additional annuity income after retirement.

The Two New Tools of Monetary Policy

Every basic exposition of how monetary policy was conducted before about 2008 is soon to become obsolete. The three basic monetary policy tools that used to be taught in almost every introductory economics course were changing reserve requirements, changing the discount rate, and conducting open market operations. Because of the way in which monetary policy was conducted during the Great Recession, all three tools have become essentially irrelevant. In the future, the Federal Reserve will mainly influence interest rates through two quite different tools that didn't even exist before 2008: primarily the interest rate it chooses to pay on bank reserves, and secondarily its new reverse repo facility.

There's no secret about these new tools. For example, here's an explanation from the September 16-17 meeting of the Federal Open Market Committee:

  • When economic conditions and the economic outlook warrant a less accommodative monetary policy, the Committee will raise its target range for the federal funds rate. 
  • During normalization, the Federal Reserve intends to move the federal funds rate into the target range set by the FOMC primarily by adjusting the interest rate it pays on excess reserve balances. 
  • During normalization, the Federal Reserve intends to use an overnight reverse repurchase agreement facility and other supplementary tools as needed to help control the federal funds rate. The Committee will use an overnight reverse repurchase agreement facility only to the extent necessary and will phase it out when it is no longer needed to help control the federal funds rate. 

Let's do a quick review of the old monetary policy tools, explain why they no longer work, and then introduce the new monetary policy tools.

In the old days before 2008, monetary policy worked because banks were required to hold reserves with the central bank, but banks typically tried not to hold extra reserves, because money held with the central bank earned no return. The requirement that banks hold reserves gave the central bank leverage over how much credit banks were willing to offer, and led to what used to be considered the three main tools of monetary policy.

1) The central bank could alter the reserve requirement, which meant that banks had either more or less to lend than before.

2) If a bank reached the end of a business day and found that the last round of transactions had left it a little short of what it needed to be holding, one option was to borrow the additional funds from the central bank. When doing so, the bank would need to pay an interest rate called the discount rate. So by raising or lowering the discount rate, the central bank could influence whether a bank kept a fairly small or fairly large cushion of reserves over and above the required amount–and thus affect the bank\’s willingness to lend.

3) However, banks usually preferred not to borrow from the central bank, because doing so was often viewed as a danger signal that no other private bank was willing to lend to you. Instead, banks that were running a little short of reserves at the end of the day's transactions borrowed overnight from other banks and paid the so-called "federal funds interest rate." The Federal Reserve treated this federal funds interest rate as a target for monetary policy. It would buy or sell government bonds to set the federal funds interest rate at a desired level.

For example, here's a figure (from the ever-useful FRED website run by the Federal Reserve Bank of St. Louis) showing the federal funds rate over time. You can tell the history of monetary policy back a half-century by the Fed reducing this federal funds rate in recessions (the gray bars) and increasing it when recessions were over and inflation seemed at least a potential threat. You can also see how the Fed took this interest rate to near-zero during the Great Recession, and has held it there ever since, which is why there has been a need to use quantitative easing and forward guidance as tools of monetary policy since then.

Notice that all three of these traditional tools of monetary policy rely in one way or another on banks being fairly close to the level of required reserves. However, as part of its "quantitative easing" response to the financial crisis, the Fed started buying large quantities of government debt and mortgage-backed securities. Instead of holding these bonds and securities, banks found that they were holding very large quantities of cash reserves, far above the legally required level. To be specific, U.S. banks were legally required to be holding $101 billion in reserves as of October 29, but they were actually holding $2,557 billion in reserves–about 25 times the required amount.

But when banks are holding large quantities of extra reserves, the old-style monetary policies don't work. Change the reserve requirement? Reducing it won't matter, and unless the central bank raises it by a multiple of 25, raising it doesn't matter either. Also, no one wants to try to lock up these excess bank reserves, because the hope is that banks will find ways to lend this money. Altering the discount rate has no effect if banks don't need to borrow from the central bank, because there is no danger they will run short of reserves. And banks don't need to borrow much from each other at the federal funds interest rate, either, again because they are holding such high levels of reserves. Indeed, the quantity loaned and borrowed at the federal funds interest rate has dropped dramatically in recent years, as shown in this figure from economists at the Federal Reserve Bank of New York.


The situation of banks holding vastly more reserves than legally required, as a result of the quantitative easing policies of the Fed, seems likely to persist for years to come. So when the Federal Reserve decides that it wishes to raise the federal funds interest rate, what monetary policy tools will work in this situation?

The main monetary policy tool of the future is likely to be the amount of interest that the Fed pays on these bank reserves. Traditionally, the Fed paid zero percent interest on bank reserves. But a law passed in 2006 authorized the Fed to start paying interest on reserves as of 2011, and the Emergency Economic Stabilization Act of 2008 allowed the Fed to start paying interest on bank reserves as of October 1, 2008. The current interest rate paid by the Federal Reserve to banks on their excess reserves is 0.25 percent. By altering this interest rate, the Federal Reserve has a tool that can directly affect how much banks want to hold in reserves, how much they are willing to lend, and at what interest rates. For example, say that the Fed gradually raised the interest rate it pays on bank excess reserves. In that case, banks would be more likely to hold funds in the form of reserves with the central bank and less likely to make loans, including overnight loans to each other in the federal funds market, which should tend to push up interest rates. In short, when the Fed decides that it's time to raise interest rates, the policy tool you should be watching is the interest rate paid on bank reserves.

What if this new policy tool doesn't work well? The back-up monetary policy tool is the "overnight reverse repurchase agreement facility," which, as the Fed said in the quotation above, will be used "only to the extent necessary" and phased out "when it is no longer needed to help control the federal funds rate."

Some definition of terms for those uninitiated in repurchase markets may be useful here. In a "repurchase agreement," one party sells a security to another, while simultaneously agreeing to repurchase it at a slightly higher price in the near future–often the next day. For the other party, the one agreeing to buy a security and then sell it back the next day, this is called a "reverse repurchase" or reverse repo agreement. The Fed has been testing its ability to operate in the repurchase market since September 2013, first selling and then buying back Treasury securities. Here's a description of reverse repurchase agreements from the Federal Reserve Bank of New York, which would be conducting these operations.

A reverse repurchase agreement, also called a “reverse repo” or “RRP,” is an open market operation in which the Desk [the New York Fed trading desk] sells a security to an eligible RRP counterparty with an agreement to repurchase that same security at a specified price at a specific time in the future. The difference between the sale price and the repurchase price, together with the length of time between the sale and purchase, implies a rate of interest paid by the Federal Reserve on the cash invested by the RRP counterparty.

For policy purposes, the key point here is the last sentence: a repurchase agreement is really a form of very short-term lending and borrowing. Investors who have a lot of cash are always looking for a chance to earn a return, even a small return. When an investor with cash buys a Treasury bond from the Fed, and then sells it back to the Fed the next day, that investor has received, in effect, an interest payment on their cash. Thus, when the Fed conducts its repurchase operations, it is in effect setting the level of interest rates for very short-term overnight borrowing. Such overnight borrowing using Treasury bonds as collateral is very similar to the overnight borrowing that banks do with each other at the federal funds interest rate.
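The implied interest rate in the New York Fed's description is easy to compute: it is just the price difference between the two legs, annualized over the term of the repo. A minimal sketch, where the prices and the 360-day money-market convention are illustrative assumptions:

```python
def implied_repo_rate(sale_price, repurchase_price, days=1, year_basis=360):
    """Annualized interest rate implied by the two legs of a repo:
    the return over the term, scaled up to a full (money-market) year."""
    period_return = (repurchase_price - sale_price) / sale_price
    return period_return * (year_basis / days)

# Overnight reverse repo: the Fed sells at 100.0000 and agrees to
# repurchase at 100.0007 the next day (hypothetical round numbers).
rate = implied_repo_rate(100.0000, 100.0007)
print(f"{rate:.4%}")  # about 0.25% annualized
```

The counterparty's one-day gain of $0.0007 per $100 is tiny, but annualized it is a real overnight rate–which is exactly why the Fed can use the repurchase price it offers to set a floor on overnight rates.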

Jeremy Stein, who left the Federal Reserve Board of Governors in May, spelled out how this will work in a July interview with Ylan Q. Mui in the Washington Post. Basically, the notion is that the interest rate paid on reserves will establish a ceiling for the federal funds interest rate, while the interest rate embedded in the reverse repo facility will set a floor under that rate. But how much space there will be between the ceiling and the floor, and just how these tools will be used in setting monetary policy, is still very much a work in progress, waiting for when the Fed decides it is actually time to raise the federal funds interest rate.
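The corridor logic Stein describes can be sketched in a few lines, under the simplifying assumption that arbitrage keeps the market rate pinned between the two administered rates (the rate values are illustrative, not actual Fed settings):

```python
def effective_fed_funds(market_rate, ioer, rrp_rate):
    """Sketch of the corridor described in the post: the interest rate
    on excess reserves (IOER) acts as a ceiling on the federal funds
    rate, and the overnight reverse repo (RRP) rate acts as a floor."""
    return max(rrp_rate, min(market_rate, ioer))

# Assumed illustrative rates, in percent: IOER 0.25, RRP 0.05.
print(effective_fed_funds(0.40, ioer=0.25, rrp_rate=0.05))  # capped at 0.25
print(effective_fed_funds(0.01, ioer=0.25, rrp_rate=0.05))  # lifted to 0.05
print(effective_fed_funds(0.15, ioer=0.25, rrp_rate=0.05))  # inside the band: 0.15
```

How tightly the actual market rate hugs either administered rate is precisely the open question the post flags as a work in progress.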

For purely pedagogical reasons, I'm hoping the use of reverse repos doesn't come up too frequently and can largely be ignored, because explaining that tool of monetary policy to an intro-level economics class will be just no fun at all.

Why Different Unemployment Measures Tell (Mostly) the Same Story

As the unemployment rate has decreased during the last few years, from its peak of 10% in October 2009 to 5.9% in September 2014, it’s become common to hear a cynical reaction along the following lines: “Sure, the official unemployment rate was 5.9% in September 2014, but when you take into account those who have become too discouraged to keep looking for a job and those with a part-time job who would prefer full-time work, the real unemployment rate is twice as high at 11.8%.”

This comment is usually made in a “gotcha” tone, with an eyebrow-lifting emphasis on official and real, as if those nefarious government economic statisticians are trying to pull a fast one on the unsuspecting public. But color me unsurprised that if you define “unemployment” in different ways, you get a different number. In fact, the Bureau of Labor Statistics has been publishing alternative unemployment rates quite openly since 1976. Vernon Brundage lays out the distinctions in “Trends in unemployment and other labor market difficulties,” written as the November 2014 “Beyond the Numbers” from the U.S. Bureau of Labor Statistics.

Here are the six measures of unemployment produced by the BLS. U-3 is the official unemployment rate. (Wiggle eyebrows on official as needed.)
  • U‑1: People who are unemployed for 15 weeks or longer as a percent of the civilian labor force.
  • U‑2: Job losers, plus people who completed temporary jobs, as a percent of the civilian labor force.
  • U‑3: Total number of people who are unemployed as a percent of the civilian labor force (official unemployment rate).
  • U‑4: Total number of people who are unemployed, plus discouraged workers, as a percent of the civilian labor force plus discouraged workers.
  • U‑5: Total number of people who are unemployed, plus discouraged workers, plus all other persons marginally attached to the labor force, as a percent of the civilian labor force plus all persons marginally attached to the labor force.
  • U‑6: Total number of people who are unemployed, plus all persons marginally attached to the labor force, plus total employed part time for economic reasons, as a percent of the civilian labor force plus all persons marginally attached to the labor force.
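The six definitions differ only in what gets added to the numerator and denominator. A minimal sketch with hypothetical round numbers (not actual BLS figures) shows how each broader measure nests the narrower one:

```python
# Hypothetical figures in millions, chosen only to illustrate how the
# broader BLS measures build on the official U-3 rate.
labor_force = 155.0        # civilian labor force
unemployed = 9.0           # total unemployed (the U-3 numerator)
discouraged = 0.7          # discouraged workers (subset of marginally attached)
marginally_attached = 2.2  # all marginally attached, including discouraged
part_time_econ = 7.0       # employed part time for economic reasons

u3 = unemployed / labor_force
u4 = (unemployed + discouraged) / (labor_force + discouraged)
u5 = (unemployed + marginally_attached) / (labor_force + marginally_attached)
u6 = (unemployed + marginally_attached + part_time_econ) / (labor_force + marginally_attached)

for name, rate in [("U-3", u3), ("U-4", u4), ("U-5", u5), ("U-6", u6)]:
    print(f"{name}: {rate:.1%}")
```

Because each successive measure only adds people to the count, U-3 through U-6 are mechanically ordered from lowest to highest, which is why the broad U-6 rate always looks more alarming than the official rate.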

When looking at how any economic statistic has changed over time, it’s important to compare apples to apples. So if you are interested in the question of how the unemployment rate has changed, it’s not useful to point out that different definitions of unemployment provide different answers. These six definitions of unemployment typically move pretty much together.

So if you want to focus on a broader measure of unemployment like U-6, which includes those who have become discouraged in looking for work and part-timers who would like full-time work, fair enough: that measure of unemployment was 17.2% in April 2010 and had fallen to 11.8% by September 2014. But pick a measure of unemployment, any measure of unemployment, and the U.S. economy is doing much better now than back in the dark days of 2009 and 2010.

Of course, the relationships between these measures of unemployment do change over time. For example, the U-2 measure of unemployment has usually been higher than U-1, but the two have been about the same in the last few years. Brundage explains:

For most of the history of these series, the number of persons unemployed for 15 weeks or longer (the numerator for U‑1) had been less than the number of people who had lost their jobs or completed temporary jobs (the numerator for U‑2), even during economic downturns. … However, the two series began to converge shortly after the end of the recession, largely reflecting a greater increase in the number of people who were unemployed for 15 weeks or longer during the downturn. In December 2007, the number of people who had been unemployed for 15 weeks or longer (2.5 million) was well below the number of job losers (3.9 million). In September 2014, 4.4 million people had been jobless for 15 weeks or longer and 4.5 million had lost their jobs; thus, the U‑1 and U‑2 rates were very similar, at 2.8 and 2.9 percent, respectively.

The other change is that the ratio of the broadly defined U-6 unemployment rate to the U-3 official unemployment rate has risen. The broadly defined U-6 rate adds to the official unemployment count the part-timers who would prefer a full-time job and the “marginally attached” who have largely given up looking for work. As this figure shows, the number of the marginally attached has not actually risen much, but the number of part-timers who would prefer full-time work has remained elevated since the end of the recession.

In short, there’s no conspiracy here by government statisticians to lowball the official unemployment rate. Each measure of unemployment is defined differently. Each one conveys different information about the labor market. And that’s why the Bureau of Labor Statistics quite openly and transparently publishes six different measures.