Teen Pregnancy: What Causes What?

Here is a classic problem of cause and effect. Teenagers who give birth are more likely to come from households with lower income levels. Also, teenagers who give birth tend to end up later in life in households with lower income levels. But does the lower income level cause teens to be more likely to give birth? Or does giving birth as a teen cause that woman to be more likely to end up in a lower-income household? How can one untangle cause and effect? Melissa S. Kearney and Phillip B. Levine tackle these questions in “Why Is the Teen Birth Rate in the United States So High and Why Does It Matter?” which appears in the Spring 2012 issue of my own Journal of Economic Perspectives. They have lots of interesting comments to make about variation in teen birthrates across states and countries. Here, I’ll focus on their analysis of the cause-and-effect question, which surprised me and offers a nice example of how economists try to disentangle these sorts of issues.

“Our reading of the totality of evidence leads us to conclude that being on a low economic trajectory in life leads many teenage girls to have children while they are young and unmarried and that poor outcomes seen later in life (relative to teens who do not have children) are simply the continuation of the original low economic trajectory. That is, teen childbearing is explained by the low economic trajectory but is not an additional cause of later difficulties in life. Surprisingly, teen birth itself does not appear to have much direct economic consequence.”

Conceptually, how would one tell whether giving birth as a teenager is a cause of lower future economic prospects? Just comparing life outcomes for teenage girls who give birth and those who don’t will give you a correlation, but not causation. “A comparison of the outcomes of women who did and who did not give birth as teens is inherently biased by selection effects: teenage girls who “select” into becoming pregnant and subsequently giving birth (as opposed to choosing abortion) are different in terms of their background characteristics and potential future outcomes than teenage girls who delay childbearing.” The problem is made more difficult because some of the background characteristics may be measurable in the data (like family income level, or ethnicity, or whether it’s a single-parent family) but many other characteristics are not available in the data (like the personality traits of the teenage girl or the values lived by the family).

 In an ideal experiment, one might want a research design in which a random sample of teenagers becomes pregnant and gives birth, and then you could track the outcomes. Of course, randomized pregnancy is an impractical research design! But here are four approaches used by clever economists to disentangle this question of cause and effect. 

A within-family approach. Look at life outcomes for sisters who give birth at different ages. The result of this kind of study is that “once background characteristics are controlled for, the differences are quite modest. Furthermore, even these modest differences likely overstate the costs of teen childbearing, since the sister who gives birth as a teen is likely to be “negatively” selected compared to her sister who does not.”

Miscarriages. Of those teens who become pregnant, some will suffer miscarriages. Compare women who are similar in measured characteristics of family background, but some of whom gave birth as teenagers while others had a miscarriage. It turns out that their life outcomes look quite similar: that is, giving birth as a teenager doesn’t appear to cause any additional decline in later life outcomes.

Age at first menstruation. Girls who menstruate earlier are at greater risk of becoming pregnant as teenagers. One can use a statistical approach to look at two groups of women who are similar in measured characteristics of family background, but where one group has a higher pregnancy rate because they began their menstrual cycle earlier. The life outcomes for these groups look quite similar: that is, a random chance of being more likely to give birth as a teenager (because of an earlier age of first menstruation) doesn’t appear to cause any additional decline in later life outcomes.
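The age-at-menarche design is an instrumental-variables idea: early menstruation shifts the chance of a teen birth but is plausibly unrelated to a girl's underlying economic trajectory. Here is a minimal sketch of that logic in Python with numpy, using entirely synthetic data and invented coefficients (not any actual study's estimates), where by construction teen birth has zero direct effect on later income:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Unobserved "economic trajectory" confounds both teen birth and later income.
trajectory = rng.normal(size=n)

# Instrument: early menarche, assumed unrelated to the trajectory itself.
early_menarche = (rng.random(n) < 0.3).astype(float)

# Teen birth depends on the trajectory AND on the instrument.
teen_birth = ((0.8 * early_menarche - trajectory + rng.normal(size=n)) > 1.0).astype(float)

# True model: teen birth has ZERO direct effect on adult income.
income = 2.0 * trajectory + rng.normal(size=n)

def ols(x, y):
    """Simple OLS with an intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS of income on teen birth picks up the confound.
naive = ols(teen_birth, income)[1]

# Two-stage least squares: stage 1 predicts teen birth from the instrument;
# stage 2 regresses income on those predicted values.
stage1 = ols(early_menarche, teen_birth)
predicted = stage1[0] + stage1[1] * early_menarche
iv = ols(predicted, income)[1]

print(f"naive OLS estimate: {naive:.2f}")  # strongly negative: selection bias
print(f"IV (2SLS) estimate: {iv:.2f}")     # close to zero: the true causal effect
```

The naive regression finds a large negative "effect" of teen birth that is really the unobserved trajectory at work, while the instrumented estimate comes out near the true value of zero, mirroring the finding Kearney and Levine report.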


Propensity scores. Look at girls within a certain school, so that they live in more-or-less the same neighborhood. Using the available data, develop a “propensity score” that measures how likely a girl is to give birth as a teenager. Then compare the life outcomes for girls with similar propensity scores, some of whom gave birth and some of whom did not. There doesn’t seem to be a difference in life outcomes, again suggesting that giving birth as a teenager doesn’t much alter other life outcomes.
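The mechanics of propensity-score matching can be sketched in a few lines of numpy. The data and coefficients below are entirely synthetic and invented for illustration: a logistic model of teen birth is fit on observed background, and each "treated" girl is then compared only to the control girl with the closest estimated score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4_000

# Observed background: say, family income and a school-quality proxy (standardized).
X = rng.normal(size=(n, 2))

# Teen birth is more likely for girls with low values of both covariates.
logit = -0.5 - 1.0 * X[:, 0] - 0.7 * X[:, 1]
treated = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# True model: the outcome depends only on background, not on teen birth itself.
outcome = 1.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)

# Fit a logistic propensity model by plain gradient ascent (no extra libraries).
Xd = np.column_stack([np.ones(n), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w += 0.1 * Xd.T @ (treated - p) / n
propensity = 1 / (1 + np.exp(-Xd @ w))

# Match each treated girl to the control with the closest propensity score.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
nearest = np.abs(propensity[c_idx][None, :] - propensity[t_idx][:, None]).argmin(axis=1)
matches = c_idx[nearest]

raw_gap = outcome[t_idx].mean() - outcome[c_idx].mean()
matched_gap = outcome[t_idx].mean() - outcome[matches].mean()
print(f"raw difference:     {raw_gap:.2f}")      # large negative: selection, not causation
print(f"matched difference: {matched_gap:.2f}")  # shrinks toward zero after matching
```

The raw comparison shows a large gap that is pure selection on background; after matching on the estimated propensity score, the gap largely disappears, which is the pattern the within-school studies find.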

Kearney and Levine sum up the evidence on cause and effect this way: “Taken as a whole, previous research has had considerable difficulty finding much evidence in support of the claim that teen childbearing has a causal impact on mothers and their children. Instead, at least a substantial majority of the observed correlation between teen childbearing and inferior outcomes is the result of underlying differences between those who give birth as a teen and those who do not.”

Kearney and Levine also offer an unexpected (to me) perspective on policies to reduce teen pregnancy:

“Moreover, no silver bullet such as expanding access to contraception or abstinence education will solve this particular social problem. Our view is that teen childbearing is so high in the United States because of underlying social and economic problems. It reflects a decision among a set of girls to “drop-out” of the economic mainstream; they choose nonmarital motherhood at a young age instead of investing in their own economic progress because they feel they have little chance of advancement. This thesis suggests that to address teen childbearing in America will require addressing some difficult social problems: in particular, the perceived and actual lack of economic opportunity among those at the bottom of the economic ladder.”

The statement about teenage girls “choosing” nonmarital motherhood should be understood not as a claim that all pregnant 15-year-olds carefully considered their life options and decided on pregnancy! Instead, the economists’ view of choice is that we all make groups of choices every day–say, choices about exercise and calories consumed–that make certain outcomes more likely. Decisions that are not well-considered, or that raise the risk of undesired side effects, still have a large ingredient of choice. For example, we typically view those who drive drunk as having made a “choice.”

The cause-and-effect evidence here suggests that for many women who give birth as teenagers, their life outcomes like level of education achieved, income, employment, and chance of marriage are already so constrained that they are not made worse off by having a child as a teenager. Encouragement about contraception or abstinence can help reduce teen pregnancy on the margin. But what many teen girls from low socioeconomic status backgrounds need is a reduced prospect of marginalization, and a greater chance for personal and economic advancement.

On the Job for 100 Issues of JEP

I was hired by Joseph Stiglitz 26 years ago to start a new economics journal, the Journal of Economic Perspectives. It took us a year from the starting line to mailing our first issue, but the Spring 2012 issue, now available on-line, is the 100th issue. Like all issues of JEP back to 1994, it is freely available to all, courtesy of the American Economic Association. The first three articles are about the journal: one by current editor David Autor on the effect of the journal within the economics profession, one by Joe Stiglitz remembering the early years and commenting on how the journal has evolved, and one by me called “From the Desk of the Managing Editor.”

Here are the two opening paragraphs and the closing paragraph of my essay:

“Editing isn’t “teaching” and it isn’t “research,” so in the holy trinity of academic responsibilities it is apparently bunched with faculty committees, student advising, and talks to the local Kiwanis club as part of “service.” Yet for many economists, editing seems to loom larger in their professional lives. After all, EconLit indexes more than 750 academic journals of economics, which require an ever-shifting group of editors, co-editors, and advisory boards to function. Roughly one-third of the books in the annotated listings at the back of each issue of the Journal of Economic Literature are edited volumes.

Editors are gatekeepers, and editors are road-blocks—or perhaps these are essentially the same task. Editors shape “the literature,” both what and who is included and how it is presented. I’ve come to believe that “editing” is no more susceptible to a compact single definition than “manufacturing” or “services.” But here is one take on the enterprise of editing from someone who has been sitting in the Managing Editor’s chair for all 100 issues of the Journal of Economic Perspectives since before the first issue of the journal mailed in Summer 1987. …

My job as Managing Editor of JEP has been a pride and a pleasure for these last 25 years. It’s consistently interesting work: after all, my job is to do close readings of the highly varied work of a succession of prominent economists who are trying to explain their thinking—and then to ask them questions until they explain it all to me! Editing an academic journal also offers the psychic frisson of leaving something behind: 100 issues and counting, to be precise. When I visit another college or university, I sometimes walk through the periodical stacks just to see JEP on the shelf. Running an academic journal for a long time offers a pleasing sense of place within the discipline of economics, spinning a web of personal contacts from the up-and-comers to the well-established in academic institutions around the world. Some of my friends refer to my job at the journal as “the guy who gets thanked” at the end of articles. There are worse epitaphs.”

Spring 2012 Journal of Economic Perspectives

The Spring 2012 issue of my own Journal of Economic Perspectives is now freely available on-line, along with earlier issues back to 1994, courtesy of the American Economic Association. It’s the 100th issue, and thus a bit of a landmark for the journal and for me personally, because I’ve been the Managing Editor since the journal began. I’ll blog about some of the individual papers in the next week or so. Here, I’ll just provide the “Table of Contents” at the top, abstracts below, and links to the papers.

Symposium: 100 Issues of JEP

 The Journal of Economic Perspectives at 100 (Issues)
David Autor
Full-Text Access | Supplementary Materials
The Journal of Economic Perspectives and the Marketplace of Ideas: A View from the Founding
Joseph E. Stiglitz
Full-Text Access | Supplementary Materials
From the Desk of the Managing Editor
Timothy Taylor
Full-Text Access | Supplementary Materials

Symposium: International Trade


The Rise of Middle Kingdoms: Emerging Economies in Global Trade
Gordon H. Hanson
Full-Text Access | Supplementary Materials
 Putting Ricardo to Work
Jonathan Eaton and Samuel Kortum
Full-Text Access | Supplementary Materials
Gains from Trade When Firms Matter
Marc J. Melitz and Daniel Trefler
Full-Text Access | Supplementary Materials
Globalization and U.S. Wages: Modifying Classic Theory to Explain Recent Facts

Jonathan Haskel, Robert Z. Lawrence, Edward E. Leamer and Matthew J. Slaughter
Full-Text Access | Supplementary Materials

Articles


Why Is the Teen Birth Rate in the United States So High and Why Does It Matter?
Melissa S. Kearney and Phillip B. Levine
Full-Text Access | Supplementary Materials
Why Was the Arab World Poised for Revolution? Schooling, Economic Opportunities, and the Arab Spring
Filipe R. Campante and Davin Chor
Full-Text Access | Supplementary Materials
Using Internet Data for Economic Research
Benjamin Edelman
Full-Text Access | Supplementary Materials
Jonathan Levin: 2011 John Bates Clark Medalist
Liran Einav and Steve Tadelis
Full-Text Access | Supplementary Materials

Features


Retrospectives: The Introduction of the Cobb-Douglas Regression
Jeff Biddle
Full-Text Access | Supplementary Materials
Recommendations for Further Reading
Timothy Taylor
Full-Text Access | Supplementary Materials


The Journal of Economic Perspectives at 100 (Issues) 
David Autor
When I was a graduate student, I discovered that the Journal of Economic Perspectives embodied much of what I love about the field of economics: the clarity that pierces rhetoric to seek the core of a question; the rigor to identify the causal relationships, tradeoffs, and indeterminacies inherent in a problem; the self-assurance to apply the disciplinary toolkit to problems both sacred and profane; and the force of logic to reach conclusions that might be unexpected, controversial, or refreshingly bland. It never occurred to me in those years that one day I would edit the journal. While doing so is a privilege and a pleasure, I equally confess that it’s no small weight to be the custodial parent of one of our profession’s most beloved offspring. No less intimidating is the task of stipulating what this upstart youth has accomplished in its first 25 years and 100 issues in print. Like any empiricist, I recognize that the counterfactual world that would exist without the JEP is unknowable, but my strong hunch is that our profession would be worse off in that counterfactual world. In this essay, I reflect on the journal’s accomplishments and articulate some of my own goals for the JEP going forward.
Full-Text Access | Supplementary Materials

The Journal of Economic Perspectives and the Marketplace of Ideas: A View from the Founding
Joseph E. Stiglitz
I welcome the opportunity to join in the celebration of the twenty-fifth birthday of the Journal of Economic Perspectives. It is wonderful to see how this “baby,” which I, along with Carl Shapiro and Timothy Taylor, nurtured through its formative years—from 1984 (three years before the first issue in 1987) until I left in 1993—has grown up and become an established part of the economics profession. In founding the journal, we had many objectives, hopes, and ambitions. We were concerned about the increasing specialization within the economics profession. We sought to have complex and sometimes arcane or highly mathematical ideas translated into plain English, or at least that dialect of the language known as “Economese”—and in a way that was not only informative but engaging. We were worried too about a growing distance between economics and policy. At least a portion of economic research should be related to ideas that were, or should or would be, part of the national and global policy debates. We began with an explicit commitment to present a diversity of viewpoints, hence the word “perspectives” in the title. One of the goals we set out for ourselves was to disseminate developments within economics more rapidly. We never shied away from controversy at the journal, but we tried to ensure that the discussion was balanced.
 Full-Text Access | Supplementary Materials

From the Desk of the Managing Editor 
Timothy Taylor
Editing isn’t “teaching” and it isn’t “research,” so in the holy trinity of academic responsibilities it is apparently bunched with faculty committees, student advising, and talks to the local Kiwanis club as part of “service.” Yet for many economists, editing seems to loom larger in their professional lives. After all, EconLit indexes more than 750 academic journals of economics, which require an ever-shifting group of editors, co-editors, and advisory boards to function. Roughly one-third of the books in the annotated listings at the back of each issue of the Journal of Economic Literature are edited volumes. Here is one take on the enterprise of editing from someone who has been sitting in the Managing Editor’s chair for all 100 issues of the Journal of Economic Perspectives since before the first issue of the journal mailed in Summer 1987.
Full-Text Access | Supplementary Materials

The Rise of Middle Kingdoms: Emerging Economies in Global Trade 
Gordon H. Hanson
In this paper, I examine changes in international trade associated with the integration of low- and middle-income countries into the global economy. Led by China and India, the share of developing economies in global exports more than doubled between 1994 and 2008. One feature of new trade patterns is greater South-South trade. China and India have booming demand for imported raw materials, which they use to build cities and factories. Industrialization throughout the South has deepened global production networks, contributing to greater trade in intermediate inputs. A second feature of new trade patterns is the return of comparative advantage as a driver of global commerce. Growth in low- and middle-income nations makes specialization according to comparative advantage more important for the global composition of trade, as North-South and South-South commerce overtakes North-North flows. China’s export specialization evolves rapidly over time, revealing a capacity to speed up product ladders. Most developing countries hyper-specialize in a handful of export products. The emergence of low- and middle-income countries in trade reveals significant gaps in knowledge about the deep empirical determinants of export specialization, the dynamics of specialization patterns, and why South-South and North-North trade differ.
Full-Text Access | Supplementary Materials

Putting Ricardo to Work
Jonathan Eaton and Samuel Kortum
David Ricardo (1817) provided a mathematical example showing that countries could gain from trade by exploiting innate differences in their ability to make different goods. In the basic Ricardian example, two countries do better by specializing in different goods and exchanging them for each other, even when one country is better at making both. This example typically gets presented in the first or second chapter of a text on international trade, and sometimes appears even in a principles text. But having served its pedagogical purpose, the model is rarely heard from again. The Ricardian model became something like a family heirloom, brought down from the attic to show a new generation of students, and then put back. Nearly two centuries later, however, the Ricardian framework has experienced a revival. Much work in international trade during the last decade has returned to the assumption that countries gain from trade because they have access to different technologies. These technologies may be generally available to producers in a country, as in the Ricardian model of trade, our topic here, or exclusive to individual firms. This line of thought has brought Ricardo’s theory of comparative advantage back to center stage. Our goal is to make this new old trade theory accessible and to put it to work on some current issues in the international economy.
Full-Text Access | Supplementary Materials

Gains from Trade When Firms Matter
Marc J. Melitz and Daniel Trefler
The rising prominence of intra-industry trade and huge multinationals has transformed the way economists think about the gains from trade. In the past, we focused on gains that stemmed either from endowment differences (wheat for iron ore) or inter-industry comparative advantage (David Ricardo’s classic example of cloth for port). Today, we focus on three sources of gains from trade: 1) love-of-variety gains associated with intra-industry trade; 2) allocative efficiency gains associated with shifting labor and capital out of small, less-productive firms and into large, more-productive firms; and 3) productive efficiency gains associated with trade-induced innovation. This paper reviews these three sources of gains from trade both theoretically and empirically. Our empirical evidence will be centered on the experience of Canada following its closer economic integration in 1989 with the United States—the largest example of bilateral intra-industry trade in the world—but we will also describe evidence for other countries.
 Full-Text Access | Supplementary Materials

Globalization and U.S. Wages: Modifying Classic Theory to Explain Recent Facts 
Jonathan Haskel, Robert Z. Lawrence, Edward E. Leamer and Matthew J. Slaughter
This paper seeks to review how globalization might explain the recent trends in real and relative wages in the United States. We begin with an overview of what is new during the last 10-15 years in globalization, productivity, and patterns of U.S. earnings. To preview our results, we then work through four main findings: First, there is only mixed evidence that trade in goods, intermediates, and services has been raising inequality between more- and less-skilled workers. Second, it is more possible, although far from proven, that globalization has been boosting the real and relative earnings of superstars. The usual trade-in-goods mechanisms probably have not done this. But other globalization channels—such as the combination of greater tradability of services and larger market sizes abroad—may be playing an important role. Third, seeing this possible role requires expanding standard Heckscher-Ohlin trade models, partly by adding insights of more recent research with heterogeneous firms and workers. Finally, our expanded trade framework offers new insights on the sobering fact of pervasive real-income declines for the large majority of Americans in the past decade.
Full-Text Access | Supplementary Materials

Why Is the Teen Birth Rate in the United States So High and Why Does It Matter? 
Melissa S. Kearney and Phillip B. Levine
Why is the rate of teen childbearing so unusually high in the United States as a whole, and in some U.S. states in particular? U.S. teens are two and a half times as likely to give birth as compared to teens in Canada, around four times as likely as teens in Germany or Norway, and almost ten times as likely as teens in Switzerland. A teenage girl in Mississippi is four times more likely to give birth than a teenage girl in New Hampshire—and 15 times more likely to give birth as a teen compared to a teenage girl in Switzerland. We examine teen birth rates alongside pregnancy, abortion, and “shotgun” marriage rates as well as the antecedent behaviors of sexual activity and contraceptive use. We demonstrate that variation in income inequality across U.S. states and developed countries can explain a sizable share of the geographic variation in teen childbearing. Our reading of the totality of evidence leads us to conclude that being on a low economic trajectory in life leads many teenage girls to have children while they are young and unmarried. Teen childbearing is explained by the low economic trajectory but is not an additional cause of later difficulties in life. Surprisingly, teen birth itself does not appear to have much direct economic consequence. Our view is that teen childbearing is so high in the United States because of underlying social and economic problems. It reflects a decision among a set of girls to “drop-out” of the economic mainstream; they choose nonmarital motherhood at a young age instead of investing in their own economic progress because they feel they have little chance of advancement.
 Full-Text Access | Supplementary Materials

Why Was the Arab World Poised for Revolution? Schooling, Economic Opportunities, and the Arab Spring 
Filipe R. Campante and Davin Chor
What underlying long-term conditions set the stage for the Arab Spring? In recent decades, the Arab region has been characterized by an expansion in schooling coupled with weak labor market conditions. This pattern is especially pronounced in those countries that saw significant upheaval during the first year of the Arab Spring uprisings. We argue that the lack of adequate economic opportunities for an increasingly educated populace can help us understand episodes of regime instability such as the Arab Spring.
 Full-Text Access | Supplementary Materials

Using Internet Data for Economic Research
Benjamin Edelman
The data used by economists can be broadly divided into two categories. First, structured datasets arise when a government agency, trade association, or company can justify the expense of assembling records. The Internet has transformed how economists interact with these datasets by lowering the cost of storing, updating, distributing, finding, and retrieving this information. Second, some economic researchers affirmatively collect data of interest. For researcher-collected data, the Internet opens exceptional possibilities both by increasing the amount of information available for researchers to gather and by lowering researchers’ costs of collecting information. In this paper, I explore the Internet’s new datasets, present methods for harnessing their wealth, and survey a sampling of the research questions these data help to answer. The first section of this paper discusses “scraping” the Internet for data—that is, collecting data on prices, quantities, and key characteristics that are already available on websites but not yet organized in a form useful for economic research. A second part of the paper considers online experiments, including experiments that the economic researcher observes but does not control (for example, when Amazon or eBay alters site design or bidding rules); and experiments in which a researcher participates in design, including those conducted in partnership with a company or website, and online versions of laboratory experiments. Finally, I discuss certain limits to this type of data collection, including both “terms of use” restrictions on websites and concerns about privacy and confidentiality.
 Full-Text Access | Supplementary Materials

Jonathan Levin: 2011 John Bates Clark Medalist 
Liran Einav and Steve Tadelis
Jonathan Levin, the 2011 recipient of the American Economic Association’s John Bates Clark Medal, has established himself as a leader in the fields of industrial organization and microeconomic theory. Jon has made important contributions in many areas: the economics of contracts and organizations; market design; markets with asymmetric information; and estimation methods for dynamic games. Jon’s combination of breadth and depth is remarkable, ranging from important papers in very distinct areas such as economic theory and econometric methods to applied work that seamlessly integrates theory with data. In what follows, we will attempt to do justice not only to Jon’s academic work, but also try to sketch a broader portrait of Jon’s other contributions to economics as a gifted teacher, dedicated advisor, and selfless provider of public goods.
 Full-Text Access | Supplementary Materials

Retrospectives: The Introduction of the Cobb-Douglas Regression
Jeff Biddle
At the 1927 meetings of the American Economic Association, Paul Douglas presented a paper entitled “A Theory of Production,” which he had coauthored with Charles Cobb. The paper proposed the now familiar Cobb-Douglas function as a mathematical representation of the relationship between capital, labor, and output. The paper’s innovation, however, was not the function itself, which had originally been proposed by Knut Wicksell, but the use of the function as the basis of a statistical procedure for estimating the relationship between inputs and output. The paper’s least squares regression of the log of the output-to-capital ratio in manufacturing on the log of the labor-to-capital ratio—the first Cobb-Douglas regression—was a realization of Douglas’s innovative vision that a stable relationship between empirical measures of inputs and outputs could be discovered through statistical analysis, and that this stable relationship could cast light on important questions of economic theory and policy. This essay provides an account of the introduction of the Cobb-Douglas regression: its roots in Douglas’s own work and in trends in economics in the 1920s, its initial application to time series data in the 1927 paper and Douglas’s 1934 book The Theory of Wages, and the early reactions of economists to this new empirical tool.
Full-Text Access | Supplementary Materials

Recommendations for Further Reading 
Timothy Taylor
 Full-Text Access | Supplementary Materials

True Cost of Electricity Generation

The price we pay for energy in part reflects the cost of production. But our primary sources of energy involve other costs not captured in the purchase price: mainly health costs of pollution, but other environmental effects as well. Michael Greenstone and Adam Looney set out to compare the full costs of various ways of producing electricity in “Paying Too Much for Energy? The True Costs of Our Energy Choices,” which appears in the Spring 2012 issue of Daedalus.

To set the stage, here’s a comment on the health consequence of our primary sources of fuel for electrical generation (footnotes omitted):

“Our primary sources of energy impose significant health costs–particularly on infants and the elderly, our most vulnerable. For instance, even though many air pollutants are regulated under the Clean Air Act, fine particle pollution, or soot, still is estimated to contribute to roughly one out of every twenty premature deaths in the United States. Indeed, soot from coal power plants alone is estimated to cause thousands of premature deaths and hundreds of thousands of cases of illness each year. The resulting damages include costs from days missed at work and school due to illness, increases in emergency room and hospital visits, and other losses associated with premature deaths. In other countries the costs are still greater; recent research suggests that life expectancies in northern China are about five years shorter than in southern China due to the higher pollution levels in the north. The National Academy of Sciences recently estimated total non-climate change-related damages associated with energy consumption and use to be more than $120 billion in the United States in 2005. Nearly all of these damages resulted from the effects of air pollution on our health and wellness.”

What do the true costs of the energy that we use for producing electricity look like if we include production costs, costs of air pollution not related to carbon emissions, and then a social cost of roughly $21/ton for carbon dioxide emissions related to the risk of climate change? In this figure, the solid black bar is the production cost of the electricity, the checkered area is the non-carbon health costs, and the light gray area is the costs of carbon emissions. For those interested in the underlying data, the private costs of production are from Greenstone and Looney’s own work; the non-carbon health costs are based on estimates from a National Academy of Sciences report; and the costs of carbon are based on a U.S. government Interagency Working Group on the Social Cost of Carbon (with which Greenstone was heavily involved).
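The carbon component of these cost bars is just arithmetic: emissions intensity times the social cost of carbon. The sketch below uses the $21/ton figure from the text together with rough rule-of-thumb emission intensities (roughly one ton of CO2 per megawatt-hour for coal, about 0.4 tons for natural gas); these are illustrative approximations, not Greenstone and Looney's actual numbers.

```python
# Back-of-the-envelope: carbon cost per megawatt-hour at $21 per ton of CO2.
SOCIAL_COST_PER_TON = 21.0  # dollars per ton of CO2, from the text

# Approximate rule-of-thumb emission intensities, tons of CO2 per MWh generated.
# These are illustrative, not the paper's estimates.
emission_intensity = {
    "coal": 1.0,
    "natural gas": 0.4,
    "nuclear": 0.0,      # negligible operating emissions
    "wind/solar": 0.0,
}

carbon_cost = {src: tons * SOCIAL_COST_PER_TON for src, tons in emission_intensity.items()}

for source, cost in carbon_cost.items():
    print(f"{source:12s} carbon cost: ${cost:5.2f}/MWh")
```

On these rough numbers, the carbon charge adds on the order of $21/MWh to coal but only about $8/MWh to natural gas, which is why pricing carbon reshuffles the rankings in the figure.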

If you just look at production costs (dark bars), “existing coal” is clearly the least expensive option. If you take other costs into account, “New Natural Gas” is least expensive. When the horizontal axis refers to “new,” as in “New Natural Gas” or “New Coal,” it is referring to construction of a plant under the current regulatory regime for new plants. Thus, producing electricity from “New Coal” has a higher production cost than from “Existing Coal,” but its health and carbon costs are lower.

Not all costs are included here, as the authors readily acknowledge. For example, the health costs of air pollution don’t include costs of mining coal or uranium, nor potential health risks for installers of solar panels. “New Nuclear” has no health costs added, not because such costs don’t exist, but because they fall into the dreaded UTQ category, for “Unable to Quantify.” But it’s always important to remember that absence of evidence is not evidence of absence. Clearly, nuclear power does have additional costs and risks, and mining coal and uranium have additional costs, and there are environmental risks and consequences of solar and wind power, too. I posted on March 14, 2012, about “The Mundane Cost Obstacle to Nuclear Power.”

The fourth and sixth columns are especially interesting to me, because they offer a realistic way to think about the costs of solar and wind power. One main difficulty with solar and wind is that they are intermittent sources of electricity generation, and thus they must be combined with an alternative. The fourth and sixth columns thus combine electricity generated by solar or wind with electricity generated from natural gas as a back-up for times when the sun isn\'t shining or the wind isn\'t blowing. If one looks only at production costs, these combinations aren\'t yet competitive with generating electricity from coal or natural gas, but if one looks at all private and social costs, the combination of wind and natural gas already looks fairly competitive with coal.

My own proposed national energy policy is \"The Drill-Baby Carbon Tax: A Grand Compromise on Energy Policy.\" The idea is to aggressively develop U.S. energy resources and at the same time to impose a tax that would reflect the environmental costs of using such resources. As with many of my policy ideas, most people agree with only half of it–and they disagree about which half.

The GM and Chrysler Bail-Outs

 The U.S. government first extended emergency loans to GM and Chrysler in 2008 and 2009, and then stage-managed their 2009 bankruptcies. How is that working out? Thomas H. Klier and James Rubenstein tell the story in \”Detroit back from the brink? Auto industry crisis and restructuring, 2008–11\” in the second quarter issue of Economic Perspectives from the Federal Reserve Bank of Chicago. I\’ll lift facts and background from their more detailed and dispassionate description and tell the story my own way.

The story really starts in the late 1990s. The big three traditional U.S. automakers– GM, Ford, and Chrysler–had held about 70-75% of the U.S. auto market through the 1980s and most of the 1990s, but then their market share began plunging, ultimately falling to just 45% of the market in 2009.   

When the Great Recession hit, demand for cars dropped off, financing dried up, and gasoline prices spiked all at the same time. All of the Big Three were experiencing large losses, but Ford had a larger cash reserve. GM and Chrysler weren\’t going to make it.

On December 19, 2008, the lame-duck President George W. Bush authorized the use of the Troubled Asset Relief Program to give loans to GM and Chrysler. For the record, the TARP legislation discussed support for \"financial firms,\" and had nothing to say about helping manufacturing firms. After the Obama administration came to office in early 2009, it gave TARP loans to GM and Chrysler, too. Klier and Rubenstein report: \"GM ultimately received $50.2 billion through TARP, Chrysler $10.9 billion, and GMAC $17.2 billion.\"

But GM and Chrysler were still bleeding money, and so the federal government stage-managed their bankruptcies. Standard bankruptcy law, in a nutshell, is that the stockholders get wiped out and the creditors and bondholders take losses–but end up owning a restructured firm with renegotiated contracts and obligations. However, Chrysler and GM used a formerly obscure part of the U.S. Bankruptcy Code, Section 363(b), which had also been used in the Lehman Brothers bankruptcy. Basically, a newly formed company would receive all the desirable assets from the old company–properties, personnel, contracts–while the old company kept the toxic stuff. This approach created \"new\" Chrysler out of \"old\" Chrysler in a month; GM took five weeks.

The strategies for the two firms were quite different. The plan with Chrysler was basically to get Fiat to run the firm. The table shows the evolution of ownership of \”new\” Chrysler. In the table, VEBA stands for \”voluntary employees\’ beneficiary association,\” which is the legal form of the trust fund for retirement and health care of the United Auto Workers union. The old sad joke used to be that the big U.S. car companies were really a retirement fund with a car company attached: under this plan, the arrangement became explicit. Fiat was given a 20% ownership share for no cash payment, but with an agreement that it would run the firm and develop new products. The U.S. and Canadian governments took small ownership shares in exchange for the earlier loans they had made. Bondholders of secured debt got 29 cents on the dollar.

Part of the arrangement was that if Fiat met certain targets (sales, exports, developing fuel-efficient cars), then it could expand its ownership share of the firm. In May 2011, Chrysler paid back its government loans and Fiat bought out the remaining government ownership. Chrysler is again a car company primarily owned by, well, a car company, rather than a retirement fund.

GM was a different matter. In this case, the U.S. took 60.8% ownership while Canadian governments took another 11.7%. Thus, the moniker \”Government Motors\” was fully deserved. The VEBA trust got 17.5% ownership.  The GM bondholders, who in a standard bankruptcy arrangement would have ended up owning the firm, got the smallest slice.

In November 2010, GM had a stock offering and raised $24 billion, allowing the government to get rid of a bunch of its shares. But ultimately, even after the government shouldered out the GM bondholders, it seems unlikely to recoup the TARP money it loaned. Klier and Rubenstein write: \”In order for the government’s remaining 32 percent of the company to be worth $26.2 billion, representing all of the government’s remaining unrecovered investment, GM’s market capitalization would have to be approximately $81.9 billion. To achieve this market capitalization, the price of GM stock would have to exceed $52 per share, or more than twice its price in April 2012.\”
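As a quick sanity check on the arithmetic in that quotation, here is a minimal Python sketch; the implied share count is my own derivation from the quoted figures, not a number from the article:

```python
# Back-of-the-envelope check of the figures quoted from Klier and Rubenstein.
government_stake = 0.32   # government's remaining ownership share of GM
unrecovered = 26.2e9      # remaining unrecovered investment, in dollars
price_threshold = 52.0    # share price quoted as the break-even point

# Market capitalization at which the 32 percent stake covers the investment
required_market_cap = unrecovered / government_stake
print(f"Required market cap: ${required_market_cap / 1e9:.1f} billion")  # $81.9 billion

# Implied shares outstanding, given the quoted $52-per-share threshold
# (a derived figure, not one stated in the article)
shares = required_market_cap / price_threshold
print(f"Implied shares outstanding: {shares / 1e9:.2f} billion")
```

The $81.9 billion figure matches the quotation, which is reassuring: the authors' numbers are internally consistent.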

The bankruptcy did lead to dramatic changes. Here\’s a list of some changes (citations omitted):

  • \”GM’s North American bill for hourly labor declined from $16 billion in 2005 to $5 billion in 2010 …
  • \”Old GM had 111,000 hourly employees in 2005 and 91,000 in 2008. New GM had 75,000 immediately after bankruptcy in 2009 and 50,000 in 2010 …
  • \”GM had closed 13 of the 47 U.S. assembly and parts plants it operated in 2008.  GM’s Pontiac, Saturn, and Hummer brands were terminated, and Saab was sold. GM retained four nameplates in North America: Chevrolet, its mass-market brand; Cadillac, its premium brand; Buick; and GMC. …
  • \”GM also reduced its dealer network by about 25 percent. …
  • \”Detroit’s labor costs were now competitive with foreign producers operating within North America. Hourly labor costs ranged from $58 at Ford to $52 at Chrysler, compared with $55 for Toyota …\”

The policy question about GM and Chrysler is sometimes phrased as \"should they have been helped, or not?\" It\'s important to be clear that even though the two firms were helped, they still went into bankruptcy! If the firms hadn\'t received TARP loans, they would have gone into bankruptcy, too. Thus, the actual policy question should be to compare the stage-managed bankruptcies that did occur with what might have happened under a more standard bankruptcy procedure.

For example, it seems at least arguable that the accelerated bankruptcy process which occurred under extreme federal government pressure was faster and smoother than if the arrangements had been worked out in a standard bankruptcy court proceeding. It seems clear that the federal government shouldered out bondholders, who would have received more in a standard bankruptcy procedure, and thus created some uncertainty about how bondholders of other large firms might be treated in the future. On the other side, the UAW retirement funds did much better out of the stage-managed bankruptcy than they probably would have done in a standard bankruptcy. Fiat appears to have gotten a better deal under the stage-managed bankruptcy of Chrysler than it would have received in a standard bankruptcy. The stage-managed bankruptcy did lead to cost-cutting measures like plant closures, fewer employees, and more competitive wages for GM and Chrysler, but presumably these changes would have happened under a standard bankruptcy procedure, too–and perhaps they would have happened in a way that led to greater competitiveness for the firm moving forward.

The claim that the U.S. government \"saved\" GM and Chrysler is wildly overblown. The firms would have continued to exist if they had gone through a standard bankruptcy process. Were the TARP loans to GM and Chrysler and the government intervention in the bankruptcy process worth it? Part of the answer depends on the value you place on the faster bankruptcy process, or on how you feel about a process that gave bondholders less value and the UAW retirement fund more value than they probably would have received in a standard bankruptcy. But as another metric, let\'s say that the government ends up eventually losing $10 billion of its investment in GM, which had 50,000 hourly jobs in 2010. Say that in a standard bankruptcy, hourly jobs would have been slashed more sharply, down to 30,000. (Of course, it\'s possible that GM would have been managed differently under a standard bankruptcy, in such a way that jobs wouldn\'t have needed to be cut as sharply.) Saving 20,000 jobs at a cost of $10 billion works out to $500,000 in government spending per job saved.
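The $500,000 figure is simple division; here is the back-of-the-envelope calculation as a Python sketch, using the hypothetical job numbers from the paragraph above:

```python
# Hypothetical cost-per-job calculation: both job figures below follow the
# text's illustrative scenario, not actual counterfactual data.
government_loss = 10e9             # assumed eventual loss on the GM investment
jobs_with_bailout = 50_000         # GM hourly jobs in 2010
jobs_standard_bankruptcy = 30_000  # hypothetical jobs under a standard bankruptcy

jobs_saved = jobs_with_bailout - jobs_standard_bankruptcy
cost_per_job = government_loss / jobs_saved
print(f"Cost per job saved: ${cost_per_job:,.0f}")  # Cost per job saved: $500,000
```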

Inequality of Leisure

Inequality of incomes has risen in recent decades. Orazio Attanasio, Erik Hurst, and Luigi Pistaferri provide evidence that inequality of consumption has risen as well. But here, I want to focus on another of their arguments: the rise in inequality of leisure. There\'s a twist: those with more leisure, who are benefiting from a disproportionate rise in leisure, tend to be those with lower skill levels. The evidence is in \"The Evolution of Income, Consumption, and Leisure Inequality in The US, 1980-2010,\" published as NBER Working Paper #17982 in April 2012. The paper is not freely available on-line, but many academics will have access through their libraries.

Here\’s a basic data table on hours of leisure per week, by gender and education level. An explanation from Attanasio, Hurst, and  Pistaferri follows:

\”[O]ur measure of leisure includes the actual time the individual spends in leisurely activities like watching television, socializing with friends, going to the movies, etc. A few things are of note from Table 1. First, in 1985, low educated men took only slightly more hours per week of leisure than high educated men. As above, we define high educated as those with more than 12 years of schooling. A similar pattern holds for women. However, by 2007, the leisure differences between high and low educated men are substantial. Specifically, low educated men experienced a 2.5 hours per week gain in leisure between 1985 and 2007. High educated men, during the same time period, experienced a 1.2 hour per week decline in leisure. The new effect is that leisure inequality increased dramatically after 1985. Again, similar patterns are found for women. …

\”Most of the increase in leisure occurred as a result of changes in the upper tail of the leisure distribution. A greater share of low educated men in 2003-7 are taking more than 50 hours per week of leisure than in 1985. This is not the case for higher educated men. If anything, there is slightly lower proportion of higher educated men taking more than 50 hours per week of leisure in 2003-7 than there was in 1985. … While it is true that the consumption of the high educated has grown rapidly relative to the consumption of the low educated, it is also true that leisure time of the low educated has grown rapidly relative to the leisure time of the low educated. … [A]s long as leisure has some positive value, the increase in consumption inequality between high and low educated households during the past few decades will overstate the true inequality in well being between these groups.\”

Just to be clear, Attanasio, Hurst, and Pistaferri are in no way making some foolish argument that the rise in income and consumption inequality, which benefits those at the top of the income scale, shouldn\'t matter because it is offset by greater inequality of leisure benefiting those who tend to be at the bottom of the income scale.

But although the U.S. economy has become much less equal with regard to income and consumption, it is worth remembering that these are not the only measures of well-being. For example, I posted on June 29, 2011, about how \”Inequality of Mortality\” has been greatly reduced. And as leisure has become less equally distributed in a way that tends to favor those with lower skill levels, those with a rising share of leisure are better off in that dimension of well-being, albeit in a way that isn\’t captured in income or consumption statistics.

Has Ben Bernanke Been Consistent?

Back in the late 1990s and early 2000s, Ben Bernanke sharply criticized the Bank of Japan. He argued that even though the BoJ had cut its target interest rate to near-zero, it could and should do much more to end deflation and to stimulate Japan\'s economy. In the last few months, a number of critics have accused Bernanke of inconsistency: that is, the Ben Bernanke who has been leading the Fed in the aftermath of the Great Recession is not following the advice of the Ben Bernanke who was criticizing the Bank of Japan back in 2000-2003. To me, this criticism seems like pretty thin gruel: indeed, I think the accusation of inconsistency is basically an attention-getting cover for a more mundane policy disagreement over whether the Fed should immediately start another round of quantitative easing. Here, I\'ll lay out the arguments as I see them.

Bernanke\’s Earlier Criticisms

Bernanke first became a member of the Fed\’s Board of Governors in 2002. Back in 2000, while still a professor at Princeton, he published an essay called: “Japan’s Slump: A Case of Self-Induced Paralysis?” In the essay, Bernanke sharply criticizes the Bank of Japan for taking the position that since it had lowered its target interest rate to near-zero, there was nothing more it could do. In contrast Bernanke argued that a central bank had a number of other options when confronted with deflation: a long-term commitment to a near-zero interest rate; setting an inflation target of 3-4 percent per year; intervention in exchange-rate markets to depreciate the yen; using money creation to finance deficit spending; and central bank purchases of long-term government bonds, corporate bonds, and other financial securities.

Bernanke made similar arguments after joining the Fed. For examples, see his November 21, 2002 talk to the National Economists Club in Washington, D.C., called \"Deflation: Making Sure \'It\' Doesn\'t Happen Here\" or his May 31, 2003 talk to the Japan Society of Monetary Economics in Tokyo called \"Some Thoughts on Monetary Policy in Japan.\" These talks do differ a bit in emphasis. For example, after joining the Fed, Bernanke was careful to say that he was in no way contemplating interventions in exchange rate markets. But the general message remained the same: namely, that a central bank has a lot of tools available even when it has cut its target interest rate to near-zero, and that the Bank of Japan should make more aggressive use of these tools.

 
Accusing Bernanke of Inconsistency

One of the most prominent voices accusing Bernanke of inconsistency is Paul Krugman, who wrote an essay in the New York Times Magazine on April 24, 2012, called “Earth to Ben Bernanke: Chairman Bernanke Should Listen to Professor Bernanke.” Here\'s Krugman:

\”Bernanke was and is a fine economist. More than that, before joining the Fed, he wrote extensively, in academic studies of both the Great Depression and modern Japan, about the exact problems he would confront at the end of 2008. He argued forcefully for an aggressive response, castigating the Bank of Japan, the Fed’s counterpart, for its passivity. Presumably, the Fed under his leadership would be different. 

\”Instead, while the Fed went to great lengths to rescue the financial system, it has done far less to rescue workers. The U.S. economy remains deeply depressed, with long-term unemployment in particular still disastrously high, a point Bernanke himself has recently emphasized. Yet the Fed isn’t taking strong action to rectify the situation.

\”The Bernanke Conundrum — the divergence between what Professor Bernanke advocated and what Chairman Bernanke has actually done — can be reconciled in a few possible ways. Maybe Professor Bernanke was wrong, and there’s nothing more a policy maker in this situation can do. Maybe politics are the impediment, and Chairman Bernanke has been forced to hide his inner professor. Or maybe the onetime academic has been assimilated by the Fed Borg and turned into a conventional central banker. Whichever account you prefer, however, the fact is that the Fed isn’t doing the job many economists expected it to do, and a result is mass suffering for American workers.\”

That\’s a heavy accusation, and Krugman isn\’t alone in making it. In a working paper for the National Bureau of Economic Research, Lawrence Ball argues that Bernanke has been inconsistent, and collects some other examples (\”Ben Bernanke and the Zero Bound, WP #17836, February 2012).
For example, after discussing some of Bernanke\’s earlier writings, Christina Romer (formerly head of the Council of Economic Advisers at the start of the Obama administration) reportedly said: “My reaction to it was, ‘I wish Ben would read this again.’” Joseph Gagnon (a former Fed economist now at the Peterson Institute) uses the title of Bernanke’s criticism back in in 2000 of the Bank of Japan to criticize Fed policy: “It’s really ironic. It’s a self-induced paralysis.”

Bernanke Pushes Back


Bernanke pushed back against the criticism of inconsistency in his press conference of April 25, 2012. In particular, here\'s part of what he said in response to the question: \"[S]pecifically could you address whether your current views are inconsistent with the views on that subject that you held as an academic?\" Bernanke answered:

\”So there\’s this view circulating that the views I expressed about 15 years ago on the Bank of Japan are somehow inconsistent with our current policies. That is absolutely incorrect. Our–my views and our policies today are completely consistent with the views that I held at that time. I made two points at that time to the Bank of Japan. The first was that I believe that a determined central bank could and should work to eliminate deflation, that is falling prices. The second point that I made was that when short-term interest rates hit zero, the tools of a central bank are no longer–are not exhausted, there are still other things that the central bank can do to create additional accommodation. Now looking at the current situation in United States, we are not in deflation. When deflation became a significant risk in late 2010 or at least a modest risk in late 2010, we used additional balance sheet tools to help return inflation close to the 2 percent target. Likewise, we have been aggressive and creative in using nonfederal funds rate centered tools to achieve additional accommodation for the U.S. economy. So the very critical difference between the Japanese situation 15 years ago and the U.S. situation today is that Japan was in deflation and clearly when you\’re in deflation and in recession, then both sides of your mandates, so to speak, are demanding additional accommodation. In this case is we are not in deflation, we have an inflation rate that\’s close to our objective. Now, why don\’t we do more? Well, first I would again reiterate that we are doing great deal, policy is extraordinarily accommodative, we–and I won\’t go through the list again, but you would–you know all the things that we have done to try to provide support to the economy. I guess the question is, does it make sense to actively seek a higher inflation rate in order to achieve a slightly increased reduction–a slightly increased pace of reduction in the unemployment rate? 
The view of the Committee is that that would be very reckless. We have–we, the Federal Reserve, have spent 30 years building up credibility for low and stable inflation which has proved extremely valuable in that we\’ve been be able to take strong accommodative actions in the last 4 or 5 years to support the economy without leading to a unanchoring of inflation expectations or a destabilization of inflation. To risk that asset for what I think would be quite tentative and perhaps doubtful gains on the real side would be, I think, an unwise thing to do.\”

Speaking as someone without any dog in this fight: how well-founded is the criticism that Bernanke has been inconsistent? Two broad points are worth considering here: How much does Japan\'s situation in the late 1990s differ from the U.S. situation in the Great Recession and its aftermath? And how has the Fed\'s reaction differed from the Bank of Japan\'s reaction?

Remembering Japan\’s Asset Bubble and the Policy Reaction

Japan\’s Nikkei 22 stock market index rose almost seven-fold in the from 6,000 in 1980 to peak at almost 40,000 at the end of 1989. It then dropped by nearly half in 1990, and had slid back to 8,000 by 2003. Similarly, average land prices for Japan doubled from 1980 to 1991, and then fell back to 1980 levels by about 2004. Banks suffered huge paper losses, but were not forced to reorganize: instead, they took the low interest rates from the Bank of Japan and continued offering loans to underwater \”zombie\” firms. Moreover, Japan had begun by about 1998 to experience deflation, so even near-zero nominal interest rates were actually positive real interest rates.

In comparison, the U.S. Dow Jones Average rose by about 50% from 2003 to 2007, then fell back to below 2003 levels in early 2009, but now has recovered back to near the 2007 peak. The U.S. housing price bubble has been real and painful, but it wasn\’t a doubling and then a halving for the average of all housing prices nationwide. The U.S. had very low inflation for a time, but it hasn\’t actually dipped into deflation. U.S. banks have been recapitalized, by hook and by crook, and forced to undergo stress tests.

In short, an implicit argument that an impartial policymaker should react in exactly the same way to Japan circa 2000 and to the U.S. economy circa 2012 is off the mark, because the situations are substantially different.

Central Bank Policy Reactions

Bernanke\’s criticism back in 2000 was in response to the fact that in the face of this situation, Japan\’s central bank had done almost nothing but reduce interest rates for an entire decade.  In contrast, the Fed under Bernanke reacted much more quickly.

  1. The Fed started reducing the federal funds interest rate in 2007, and took it down to near-zero in late 2008. It took the Bank of Japan about four years after the crash to move its target interest rate down to near-zero; the Fed made this change in about 18 months. 
  2. The Fed set up a number of temporary facilities to give short-term loans to all sorts of players in the financial industry starting in late 2007, including the Term Auction Facility (TAF), Term Securities Lending Facility (TSLF), Primary Dealer Credit Facility (PDCF), Commercial Paper Funding Facility (CPFF), Term Asset-Backed Securities Loan Facility (TALF), and others. All of these facilities were about making short-term loans to get through the crisis, and they were all closed by mid-2010.
  3. The Fed carried out \”quantitative easing\” through the direct purchase of U.S. Treasury debt–essentially printing money to finance $1 trillion or so of federal borrowing.
  4. The Fed also carried out \”quantitative easing\” through the direct purchase of a $1 trillion or so of mortgage-backed securities–essentially printing money to provide finance in this sector.
  5. The Fed offered forward guidance about its plans, announcing that it would keep the target interest rate near zero through late 2014.
  6. The Fed now has the power to pay interest to banks on the reserves they are required to hold with the Fed, giving the Fed another monetary policy tool.
  7. The Fed has also changed its policies so that it doesn\’t just buy short-term Treasury debt, but also buys long-term securities. 

My point here is not to argue over whether each of these policies is effective or useful or appropriate. Instead, I\'m listing the policies to emphasize that the Ben Bernanke who wrote back in 2000 has also deployed an unprecedented array of monetary policy tools. Back in 2007, it was reasonable to teach in an intro econ class that the Federal Reserve took action by affecting the federal funds interest rate through open market operations. By 2010, just three years later, the policy menu of the Fed had been transformed. The critics cannot plausibly accuse Bernanke or the Federal Reserve of following the Bank of Japan of the 1990s, cutting interest rates and then sitting on its hands. Instead, to the extent that their claim of inconsistency has any merit, the claim must be that Bernanke should lead the Fed toward even more aggressive use of these non-interest rate tools. But just what non-interest rate tools should be used?

Back in his 2000 essay, Bernanke argued that one policy alternative for Japan was to intervene in foreign exchange markets to drive down the value of the yen. However, Bernanke\’s critics, when accusing him of inconsistency, often don\’t mention the policy alternative of driving down the U.S. dollar exchange rate. For example, Krugman doesn\’t mention this choice in his New York Times Magazine article.

Bernanke also recommended back in 2000 that the Bank of Japan announce an inflation target of 3-4 percent, but the Fed has not announced such a target for the U.S. economy. Part of the reason for this may be a matter of legality: the Fed has never announced an official target for the inflation rate, but instead has discussed its legal mandate to deliver \"price stability.\" However, by announcing that the federal funds interest rate will stay near-zero into 2014, the Federal Reserve has in effect promised that even if the economy recovers, it will allow inflation to rise rather than raising interest rates. Of course, the Fed could renege on this promise: but it could renege on an inflation target, too. My guess is that the Fed believes it has made plenty clear, to anyone with eyes to see, that as long as economic growth remains so sluggish, it will not clamp down on a low level of inflation.

The remaining dispute is over whether Bernanke should push the Fed to do more of what it has already done; in particular, should the Fed print money to purchase another trillion or two (or three or four) in Treasury debt or private-sector financial securities? One can disagree back-and-forth about the usefulness and risks of this policy choice in good faith. I happen to agree with Bernanke and the Board of Governors that the time isn\’t ripe for such a step just now. But even if one thinks that the Federal Reserve should be even more aggressive with quantitative easing just now, there is no serious inconsistency between Bernanke\’s 2000 essay about Japan and deciding not to double down on quantitative easing in the U.S. economy back in 2011 or right now.

There is considerable evidence that recovering from a deep financial crisis takes several years, as households and firms and financial institutions shed debt and rebalance their financial situations. The assertion that this recovery process would have been dramatically shorter if only the Fed had financed an additional few trillion dollars in quantitative easing is a highly controversial claim–a hopeful theoretical prediction not based on any historical examples. Bernanke\'s writings from 2000 to 2003 argue that a central bank confronted with a financial crisis should make aggressive use of unorthodox non-interest rate policies to avoid deflation, and the Fed under his leadership has done so. But Bernanke\'s earlier writings do not suggest that such non-interest rate policies are a cure-all for what ails an economy and for getting the unemployed back to work in the aftermath of a financial crisis. The earlier writings do not suggest that these non-interest rate policies be pursued without limit even after an economy has started growing again, and without regard for balancing their potential gains and losses.

How Many "Discouraged" Workers?

The good folks at the U.S. Bureau of Labor Statistics divide the adult population into three groups: the employed, the unemployed, and those out of the labor force. When an employed person stops working, if they are looking for work, they are counted as unemployed; if they aren\'t looking for work, they are counted as out of the labor force. This distinction makes conceptual sense: it would be peculiar to treat a 75-year-old retiree not looking for a job, or a stay-at-home spouse, as \"unemployed.\" But there are obvious practical difficulties with these distinctions as well. For example, \"discouraged\" workers who would like a job, but who have given up looking, will be counted as out of the labor force, although their situation looks more like unemployment.

So how many discouraged workers are there? Actually, the same survey that is used to count the unemployed can also be used to count \”discouraged workers.\” In fact, BLS counts \”discouraged\” workers as one of two parts of an overall group of people who are \”Marginally attached to the labor force.\”

Within the overall category of \”marginally attached,\” the first subcategory of \”discouraged\” workers \”[i]ncludes those who did not actively look for work in the prior 4 weeks for reasons such as thinks no work available, could not find work, lacks schooling or training, employer thinks too young or old, and other types of discrimination.\” The other subcategory of \”other persons marginally attached to the labor force … [i]ncludes those who did not actively look for work in the prior 4 weeks for such reasons as school or family responsibilities, ill health, and transportation problems, as well as a number for whom reason for nonparticipation was not determined.\”

Here\’s a graph of the data from BLS, which I made using the ever-helpful FRED tool from the Federal Reserve Bank of St. Louis. The bottom line shows the number of discouraged workers fluctuated around 400,000 from 1994 up to about 2008–a little higher in the aftermath of the 1990-91 and 2011 recessions, a little lower other times. But in the aftermath of the Great Recession, the number of discouraged workers spiked above 1.2 million, before falling back to under  million more recently.

The top line shows the broader category of \"marginally attached\" workers, which includes both those classified as \"discouraged\" and those who are marginally attached for other reasons. From 1994, it fluctuated around 1.5 million: higher in the aftermath of the 1990-91 and 2001 recessions, lower other times. But since 2008, it spiked up to 2.8 million, before dropping back to around 2.4 million more recently.

For comparison, the number of unemployed people (and remember, \"discouraged\" doesn\'t count as unemployed) was 12.7 million in March 2012. There were 865,000 \"discouraged\" workers in March 2012, and 2,352,000 in the total \"marginally attached\" category. Thus, if one wanted a broader picture of the labor market including both those officially classified as unemployed and the discouraged, the total number of people in these two categories would have been about 7% higher than official unemployment alone, at 13,565,000. If one wanted a broader picture including both the unemployed and all of the \"marginally attached,\" the total would have been 18% higher than official unemployment alone, at 15,052,000.
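The broader measures are simple sums of the BLS categories; here is the arithmetic as a minimal Python sketch, using the March 2012 figures quoted above:

```python
# Broader measures of labor-market slack, March 2012 (BLS figures from the text).
unemployed = 12_700_000
discouraged = 865_000
marginally_attached = 2_352_000   # this total includes the discouraged workers

# Unemployed plus discouraged, and unemployed plus all marginally attached
print(f"Unemployed + discouraged: {unemployed + discouraged:,}")                  # 13,565,000
print(f"Unemployed + marginally attached: {unemployed + marginally_attached:,}")  # 15,052,000

# How much higher each broader measure is than official unemployment
print(f"{discouraged / unemployed:.1%} higher with discouraged")            # 6.8% higher
print(f"{marginally_attached / unemployed:.1%} higher with marg. attached") # 18.5% higher
```

Note that the \"marginally attached\" total already contains the discouraged workers, so the two categories are added to unemployment separately, not stacked on each other.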

It doesn't seem right to treat the officially unemployed, who are actively looking for work, as being in the same situation as the "marginally attached." But the high numbers of discouraged and marginally attached workers do demonstrate that the unemployment rate captures only one aspect of the weakness in U.S. labor markets.

Tire Tariffs: Saving Jobs at $900,000 Apiece

In September 2009, President Obama approved a special tariff on imports of tires from China. In his 2012 State of the Union address, he stated that the policy had saved "over a thousand" jobs. Gary Clyde Hufbauer and Sean Lowry look at what happened in "US Tire Tariffs: Saving Few Jobs at High Cost," written as an April 2012 "Policy Brief" for the Peterson Institute for International Economics.

The basic economic lessons here are the same as ever. There's never been any question that imposing tariffs on foreign competition can dissuade imports, and thus allow U.S. manufacturers to keep production and prices higher than they would otherwise be. As a result, U.S. consumers pay more, the firms make higher profits–and workers for those firms get some crumbs from the table. In this case, Hufbauer and Lowry estimate that consumers paid $1.1 billion in higher prices for tires in 2011. This saved a maximum of 1,200 jobs, so the average cost of the tariff was about $900,000 per job saved. But of course, the workers didn't receive that $900,000; instead, most of it went to the tire companies. And in an especially odd twist, most of it contributed to profits earned by non-U.S. producers.

The story starts just before September 2009, when U.S. tariffs on tire imports were in the range of 3.4-4.0 percent. "Starting on September 26, 2009, Chinese tires were subjected to an additional 35 percent ad valorem tariff duty in the first year, 30 percent ad valorem in the second year, and 25 percent ad valorem in the third year." The higher tariffs did reduce tire imports from China. For example, "radial car tires imported from China fell from a high of approximately 13.0 million tires in 2009Q3 to 5.6 million tires during 2009Q4—a 67 percent decrease."

Employment in the U.S. tire industry rose from 50,800 in September 2009 to 52,000 by September 2011, which is the basis for a rough estimate that 1,200 jobs were saved by the tariffs. (Of course, one could argue that jobs would have declined without the tariffs, so more than 1,200 jobs were saved; or one could argue that some of the job increase came from other forces, so fewer than 1,200 jobs were saved by the tariffs.) The average salary of a tire builder was $40,070 in 2011. So multiplying this income by 1,200 jobs, the total additional income received by tire workers would be about $48 million.
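As a quick sanity check on these figures (a sketch of my own, not a calculation from the Hufbauer-Lowry brief), the cost-per-job and worker-income numbers follow directly:

```python
# Check of the tariff's cost per job saved, using the figures cited above.
consumer_cost = 1_100_000_000  # extra paid by U.S. tire consumers in 2011
jobs_saved = 1_200             # upper-bound estimate of tire-industry jobs saved
avg_salary = 40_070            # average tire-builder salary in 2011

cost_per_job = consumer_cost / jobs_saved
total_worker_income = avg_salary * jobs_saved

print(round(cost_per_job))   # 916667, roughly $900,000 per job saved
print(total_worker_income)   # 48084000, roughly $48 million to workers
```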

Data from the Consumer Price Index show that prices of tires from U.S. companies jumped after the tariff was imposed. This is entirely expected, of course: the reason that import tariffs benefit U.S. firms is that they allow those firms to charge higher prices than they otherwise could. Hufbauer and Lowry calculate that the higher prices for U.S.-produced tires resulting from the import restraints on Chinese tires cost U.S. consumers about $295 million per year.

As U.S. tire imports from China declined, tire imports increased from other countries. Indeed, the U.S. was importing about 27 million tires in the third quarter of 2009, when the tariff took effect, but was importing about 30 million tires by the third quarter of 2011. The tariff on Chinese-produced tires cut imports from China, but tire imports from places like Mexico, Indonesia, and Thailand rose. The tariffs on China allowed these producers to raise the prices for tires paid by U.S. consumers to the tune of about $800 million.

When tariff policy is laid out in this way, it looks literally insane. No one would ever advocate a policy of imposing a tax worth $1.1 billion on all U.S. purchasers of tires, with $48 million of the revenue from that tax to go to the actual workers who produce tires, $250 million to go to U.S. tire companies, and $800 million to go to foreign tire producers.
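The breakdown can be tallied directly. This little sketch (my own, using the rounded figures above) confirms that the pieces sum to roughly the $1.1 billion total consumer cost:

```python
# Where the roughly $1.1 billion in extra consumer spending on tires went
# (figures in $ millions, from the Hufbauer-Lowry estimates quoted above).
to_workers = 48         # wages of the (at most) 1,200 tire jobs saved
to_us_firms = 250       # higher profits for U.S. tire companies
to_foreign_firms = 800  # higher prices charged by non-Chinese foreign producers

total = to_workers + to_us_firms + to_foreign_firms
print(total)  # 1098, i.e. just under the $1.1 billion total
```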

Moreover, any jobs saved in the tire industry were almost certainly more than offset by losses of jobs elsewhere in the economy. Hufbauer and Lowry do a back-of-the-envelope illustrative calculation that if the additional money spent on tires was diverted from other retail spending, it would cost something like 3,770 jobs in the retail sector. Also, China retaliated against the U.S. decision by imposing tariffs on U.S. exports of chicken parts. Hufbauer and Lowry report: "The Chinese tariffs reduced exports by $1 billion as US poultry firms experienced a 90 percent collapse in their exports of chicken parts to China. Given the timing of the Chinese government's actions, many trade policy experts view the trade dispute over China's imports of “chicken feet” from the United States as a tit-for-tat response to the US safeguards on Chinese tire exports." Even as an attempt to save U.S. jobs at exorbitant cost, President Obama's tariffs on Chinese tires were a failure.

Is Policy Uncertainty Delaying the Recovery?

The U.S. corporate sector has high profits. Interest rates are near historical lows. Those factors would seem to encourage investment, expansion, and hiring. But here we are in 2012, with the official end of the Great Recession nearly three years in the rear-view mirror, and many firms are still holding back. Scott R. Baker, Nick Bloom, and Steven J. Davis have written "Is Policy Uncertainty Delaying the Recovery?" as a policy brief for the Stanford Institute for Economic Policy Research. The underlying research paper and data are available here.

There are lots of theoretical reasons why a high level of uncertainty might cause managers to be hesitant about starting new projects, investing, or hiring workers. But how does one collect data on the level of uncertainty–and in particular, on the level of uncertainty related to economic policy? Baker, Bloom, and Davis mix together three sources of data into a single index: "We construct our index of policy uncertainty by combining three types of information: the frequency of newspaper articles that reference economic uncertainty and the role of policy; the number of federal tax code provisions that are set to expire in coming years; and the extent of disagreement among economic forecasters about future inflation and future government spending on goods and services." Here is their index, where a level of 100 is set arbitrarily to equal the average of the index over the 25 years from 1985 up to 2010.

Constructing any index like this involves some more-or-less arbitrary choices, so there will always be room for dispute. In addition, while the authors offer some arguments that this index is emphasizing "policy" uncertainty, I suspect that it's picking up other kinds of swings in economic confidence as well.

But for what it's worth, the index does seem to spike at times one might expect: 9/11, the "Black Monday" stock market meltdown in 1987, wars, presidential elections, and the like. In addition, policy uncertainty by this measure has been especially high since 2008, although in early 2012 the measure fell back to 2009 levels. When the authors look more closely at the newspaper articles underlying their index, they find that the greatest sources of uncertainty are those related to monetary issues, which include many steps taken by the Federal Reserve, and tax issues, like whether various tax provisions will be extended or ended.

How much does policy stability matter? As the authors ask (references to figures omitted):

"How much near-term improvement could we expect from a stable, certainty-enhancing policy regime? We use techniques developed by Christopher Sims, one of the two 2011 Nobel laureates in economics, to estimate the effects of economic policy uncertainty. The results for the United States suggest that restoring 2006 (pre-crisis) levels of policy uncertainty could increase industrial production by 4% and employment by 2.3 million jobs over about 18 months. That would not be enough to create a booming economy, but it would be a big step in the right direction."

By the time one takes into account the problems of creating an index to measure policy uncertainty and the problems of blending policy uncertainty into a macroeconomic model, I wouldn\’t place much confidence in these exact numbers. But at a broader level, the calculations make a strong argument that the effects of policy uncertainty on output and employment have probably been a substantial contributor to the sluggishness of the U.S. economic recovery.