Voter Turnout Since 1964

To hear the candidates and the media tell it, every presidential election year has greater historical importance and is more frenzied and intense than any previous election. But the share of U.S. adults who actually vote has mostly been trending down over time. Here are some basic facts from a chartbook put together by the Stanford Institute for Economic Policy Research.

The youngest group of voters, those aged 18-24, has seen a rise in turnout recently, especially from 2000 to 2004, and there is a more modest rise in turnout for some other age groups. But all elections since 1988 have had lower turnout than that year; in turn, 1988 had lower turnout than the presidential elections from 1964-1972.

I see the chart as a reminder of a basic truth: Elections aren't decided by what people say to pollsters. They are determined by who actually casts a vote.

What is the Tragedy of the Commons?

A couple of weeks ago, I posted on how "The Economics of Antibiotics Resistance" could be viewed as an example of the "tragedy of the commons." I got a few notes suggesting that I explain the term more fully. Here's the explanation from my own Principles of Economics textbook. (Of course, if you are teaching a college-level intro economics class, I would encourage you to take a look at it at the website of the publisher, Textbook Media. Along with many expository virtues, my book is priced far below the $200 price of many leading textbooks, at $40 for a combination of a soft-cover paper copy and on-line access. On-line access alone, or micro and macro splits, are priced even lower.) From Chapter 15:

"The historical meaning of a commons is a piece of pasture land that is open to anyone who wishes to graze their cattle upon it. More recently, the term has come to apply to any area that is open to all, like a city park. In a famous 1968 article, a professor of ecology named Garrett Hardin (1915-2003) described a scenario called the tragedy of the commons, in which the utility-maximizing behavior of individuals ruins the commons for all."

"Hardin imagined a pasture that is open to many herdsmen, each with their own herd of cattle. A herdsman benefits from adding cows, but too many cows will lead to overgrazing and even to ruining the commons. The problem is that when a herdsman adds a cow, the herdsman personally receives all of the gain, but when that cow contributes to overgrazing and injures the commons, the loss is suffered by all of the herdsmen as a group—so any individual herdsman suffers only a small fraction of the loss. Hardin wrote: 'Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.'"

"This tragedy of the commons can arise in any situation where benefits are primarily received by one party, while the costs are spread out over many parties. For example, clean air can be regarded as a commons, where firms that pollute the air can gain higher profits, but firms that pay for anti-pollution equipment provide a benefit to others. A commons can be regarded as a public good, where it is difficult to exclude anyone from use (nonexcludability) and where many parties can use the resource simultaneously (nonrivalrous)."

"The historical commons was often protected, at least for a time, by social rules that limited how many cattle a herdsman could graze. Avoiding a tragedy of the commons with the environment will require its own set of rules which limit how the common resource can be used."

Hardin's original 1968 article is widely available on the web–for example, here.

Back in 2008, Ian Angus wrote a provocative essay in Monthly Review called “The Myth of the Tragedy of the Commons.” The tragedy of the commons is often considered a politically liberal insight, because it offers a potential justification for government regulation of shared resources. But Angus attacks the thesis from further to the political left, arguing that Hardin's thesis is evidence-free, that it ignores the reality of community self-regulation, and that it amounts to blaming the poor for their poverty. Here are a few words from Angus's trenchant essay:

“Since its publication in Science in December 1968, ‘The Tragedy of the Commons’ [by Garrett Hardin] has been anthologized in at least 111 books, making it one of the most-reprinted articles ever to appear in any scientific journal. . . . For 40 years it has been, in the words of a World Bank Discussion Paper, ‘the dominant paradigm within which social scientists assess natural resource issues’. . . It’s shocking to realize that he provided no evidence at all to support his sweeping conclusions. He claimed that the ‘tragedy’ was inevitable—but he didn’t show that it had happened even once. Hardin simply ignored what actually happens in a real commons: self-regulation by the communities involved. . . . The success of Hardin’s argument reflects its usefulness as a pseudo-scientific explanation of global poverty and inequality, an explanation that doesn’t question the dominant social and political order. It confirms the prejudices of those in power: logical and factual errors are nothing compared to the very attractive (to the rich) claim that the poor are responsible for their own poverty. The fact that Hardin’s argument also blames the poor for ecological destruction is a bonus.”

I enjoyed Angus's counterattack, but in the end, it seemed to me overwrought. The logic behind the tragedy of the commons is solid enough that it is often a useful starting point for thinking about shared resource issues. I'm fairly confident that Hardin didn't see himself as blaming the poor for their own poverty and for ecological destruction! However, it's important to emphasize that the fact that a tragedy of the commons is possible doesn't make it inevitable. And further, as the penultimate sentence from my short textbook description mentions, social rules and community self-regulation have often been able to manage the commons for a considerable period of time.

The Stagnant U.S. R&D Effort

Everyone knows that the future of the world economy will be heavily influenced by the development of new technology. Everyone is right! But the U.S. research and development effort has largely been stagnant in recent decades, with a gradual reduction in the importance of government support for R&D. Meanwhile, countries like China and South Korea are greatly expanding their R&D efforts. The facts are laid out in Chapter 4 of a biennial report from the National Science Foundation called
Science and Engineering Indicators 2012.  Here are some points that caught my eye.

Total R&D spending as a share of GDP has been more-or-less flat in the U.S. over the last few decades at about 2.7% of GDP. However, the share of R&D spending from the federal government has been dropping steadily, while the share from business has been rising steadily. The report explains: "The federal government was once the predominant sponsor of the nation’s R&D, funding some 67% of all U.S. R&D in 1964 … But the federal share decreased in subsequent years to less than half in 1979 and to a low of 25% in 2000. Changing business conditions and expanded federal funding of health, defense, and counterterrorism R&D pushed it back up above 30% in 2009."

Business now dominates U.S. R&D both in terms of providing the funding and in terms of being the location where the research is actually done. Not surprisingly, the biggest category of R&D is not the basic research that seeks fundamental scientific breakthroughs, nor the applied research that looks for applications of those breakthroughs, but the experimental development of new products, which accounts for about 60% of total U.S. R&D spending. Here are the activities on which U.S. companies spend R&D dollars: the top single category by far is pharmaceutical and medicinal products, followed by software and semiconductors and other electronic components.
U.S. government spending on R&D has tilted more to defense-related than to non-defense products in recent decades. Among non-defense R&D, health-related research is far and away the leader. 
The enormous U.S. economy remains the single largest spender on R&D in the world in absolute dollars, substantially outstripping the European Union and Japan. But notice that R&D spending in China is rising fast, and recently outstripped R&D spending in Japan.
In terms of the share of GDP spent on R&D, the U.S. is mid-range. Also, notice that R&D spending as a share of GDP is rising in Japan, and rising rapidly in China and in South Korea. 
I have long found it frustrating that the U.S. government seems unable to prioritize greater R&D spending. Total U.S. R&D spending in 2009 was about $400 billion, with $124 billion coming from the federal government. Total federal spending in 2009 was about $3.1 trillion, so about 4% of the federal budget went to R&D. I believe that the budget deficit is a big problem. But I find it depressing to contemplate that the federal government can't commit, say, 5% of its spending to R&D instead of 4%. Most businesses will tend to focus most of their R&D on projects that could lead to new products in the fairly near term. Government should be supporting the research that has a possibility, over time, of creating whole new industries.

The Economics of Spam

Did you know that there are about 100 billion spam e-mails sent every day? Did you know that the overwhelming majority of this spam is screened out by your e-mail provider and never even ends up in your "junk mail" folder? That when one big spammer was taken down in 2009, global e-mail traffic fell by one-third? That American firms and consumers experience costs of about $20 billion per year because of e-mail spam? All this and more is discussed by Justin M. Rao and David H. Reiley in "The Economics of Spam," which appears in the Summer 2012 issue of my own Journal of Economic Perspectives. Like all JEP articles from the current issue back to 1994, it is freely available on-line courtesy of the American Economic Association.

I found especially interesting what the authors describe as a cat-and-mouse game between spammers and anti-spam software. For example, when many people label a message as "spam," it helps the anti-spam software to look for those words or URLs repeated in other messages, so that those messages can be filtered out. But then spammers responded with creative misspellings (like "VIagrA") to trick the anti-spam filter, and used many different URLs that would all take the unwary to the same sales page.

In addition, the spammers use software to mark messages as "not spam," thus trying to offset those who label them as spam. Rao and Reiley write: "In four months of 2009 Yahoo! Mail data, our Yahoo! colleagues found that (suspiciously) 63 percent of all “not spam” votes were cast by users who never cast a single “spam” vote."

Anti-spam software can try to identify the computer that is sending spam, and shut it down. But spammers have responded with "botnets," networks of computers infected by malware that will send out spam e-mails. In addition, "a zombie could be programmed to sign up for hundreds of thousands of free email accounts at Gmail, and then send spam email through these accounts. … In 2011, Yahoo! Mail experienced an average of 2.5 million sign-ups for new accounts each day. The anti-spam team deactivated 25 percent of these immediately, because of clearly suspicious patterns in account creation (such as sequentially signing up account names JohnExample1, JohnExample2, . . .) and deactivated another 25 percent of these accounts within a week of activation due to suspicious outbound email activity."


The volume of e-mail sent by botnets can be enormous: "The largest botnet on record, known as Rustock, infected over a million computers and had the capacity to send 30 billion spam emails per day before it was taken down in March 2011. Microsoft, Pfizer, FireEye network security, and security experts at the University of Washington collaborated to reverse engineer the Rustock software to determine the location of the command servers. They then obtained orders from federal courts in the United States and the Netherlands allowing them to seize Rustock’s command-and-control computers in a number of different geographic locations. … The takedown of this single botnet coincided with a one-third reduction in global email spam— and hence a one-quarter reduction in global email traffic."
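
Those two fractions together imply something striking about how much of global e-mail was spam at the time: if cutting spam by one-third removed one-quarter of all e-mail, spam must have accounted for roughly three-quarters of total traffic. Here is a minimal sketch of that back-of-the-envelope calculation, using only the two figures quoted above:

```python
# If a one-third reduction in spam produces a one-quarter reduction in total
# e-mail traffic, we can back out the implied spam share of all e-mail.
# Let s be spam's share of total traffic: (1/3) * s = 1/4, so s = 3/4.

spam_drop = 1 / 3       # fraction of spam eliminated (from the quote above)
traffic_drop = 1 / 4    # resulting fraction of total e-mail eliminated (from the quote above)

implied_spam_share = traffic_drop / spam_drop
print(f"Implied spam share of all e-mail: {implied_spam_share:.0%}")  # 75%
```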


Various websites began to use what is called a "CAPTCHA," an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart," to prevent spam and other automated software. As a first response, "Spammers turned to visual-recognition software to break CAPTCHAs, and in response email providers have created progressively more difficult CAPTCHAs, to the point where many legitimate human users struggle to solve them." But then spammers figured out how to get humans to solve the CAPTCHAs for them. "[A] spammer would set up a pornography site, offering to display a free photo to any user who could successfully type in the text characters in a CAPTCHA image. In the background, their software had applied for a mail account at a site like Hotmail, received a CAPTCHA image, and relayed it to the porn site; they would obtain text from a user interested in free porn and relay this back to the Hotmail site …" And now one can hire faraway workers to break CAPTCHAs, perfectly legally: "The market wage advertised for CAPTCHA-breaking laborers declined from nearly $10 per thousand CAPTCHAs in 2007 to $1 per thousand in 2009. These labor markets started with Eastern European labor and then moved to locations with lower wages: India, China, and Southeast Asia."


Several teams of researchers have managed to take over botnets and other spam-related software, which allowed them to see how many messages were going out, how many were blocked by anti-spam software, and how many responses were being received. In one such study: "In total, the group modified 345 million pharmaceutical emails sent from botnet zombies. Three-quarters of these were blocked through blacklisting, and the remaining 82 million emails led to a scant 28 conversions, or about 1 in 3,000,000." Thus, e-mail spam can remain a profitable, if illegal, business even when it generates only one attempted purchase for every three million spam e-mails sent!
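
As a rough check on that conversion arithmetic, here is a minimal sketch. The message and conversion counts come from the study quoted above; the revenue-per-sale figure is a purely hypothetical placeholder added for illustration, not a number from the paper:

```python
# Back-of-the-envelope check of the spam conversion figures quoted above.
# The message and conversion counts come from the study cited in the text;
# the revenue-per-sale figure is a purely hypothetical placeholder.

emails_sent = 345_000_000   # spam e-mails modified and tracked in the study
delivered = 82_000_000      # e-mails that survived blacklisting (as quoted)
conversions = 28            # resulting purchase attempts (as quoted)

rate = conversions / delivered
print(f"About 1 conversion per {1 / rate:,.0f} delivered e-mails")  # roughly 1 in 3,000,000

# Hypothetical illustration: even at, say, $100 of gross revenue per sale, revenue
# per e-mail sent is tiny; spam pays only because sending is nearly free.
revenue_per_sale = 100      # hypothetical, not from the paper
revenue_per_million_sent = conversions * revenue_per_sale / emails_sent * 1_000_000
print(f"Implied revenue per million e-mails sent: ${revenue_per_million_sent:,.2f}")
```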

Given the ability of spammers to react to anti-spam efforts, what might be done about e-mail spam? Rao and Reiley are lukewarm about proposals that would seek to impose a small charge on senders of messages, which would go to recipients of e-mails as compensation. If a recipient desired, they could identify e-mail senders who would not have to pay them when sending an e-mail. Rao and Reiley note that there is currently no mechanism for linking the sending of e-mails to payments. And if sending an e-mail automatically generated a payment, then spammers would have an incentive to hijack accounts and send thousands of messages to addresses they control, so that they could collect the payments. Instead, they suggest approaches like going after the relatively few financial institutions around the world that process transactions for sales that result from spam e-mail, or even setting up a task force that would seek to "spam the spammers," thus raising the costs of spam operators and perhaps making their operations unprofitable.

European Currency Union Breakups: Lessons for the Euro

The euro isn't the first currency area to risk breaking up. Anders Åslund makes the case in "Why a Breakup of the Euro Area Must Be Avoided: Lessons from Previous Breakups," written as Policy Brief PB12-20 for the Peterson Institute for International Economics.

Åslund focuses in particular on six breakups of monetary unions in Europe in the last century. He argues that three of them are not especially relevant to the euro situation. (Citations are omitted throughout for readability.) "It was rather easy to dissolve a currency zone under the gold standard when countries maintained separate central banks and payments systems. Two prominent examples are the Latin Monetary Union and the Scandinavian Monetary Union. The Latin Monetary Union was formed first with France, Belgium, Italy, and Switzerland and later included Spain, Greece, Romania, Bulgaria, Serbia, and Venezuela. It lasted from 1865 to 1927. It failed because of misaligned exchange rates, the abandonment of the gold standard, and the debasement by some central banks of the currency. The similar Scandinavian Monetary Union among Sweden, Denmark, and Norway existed from 1873 until 1914. It was easily dissolved when Sweden abandoned the gold standard. These two currency zones were hardly real, because they did not involve a common central bank or a centralized payments system. They amounted to little but pegs to the gold standard. Therefore, they are not very relevant to the EMU."

The division of Czechoslovakia in 1992 into two countries with their own currencies also went smoothly, in part because financial connections were limited under the previous communist regime. "In particular, no financial instruments were available with which investors could speculate against the Slovak koruna."

However, three other European currency breakups are more relevant to the euro, and more concerning, because they all led to steep recessions and hyperinflations.

"The three other European examples of breakups in the last century are of the Habsburg Empire, the Soviet Union, and Yugoslavia. They are ominous indeed. All three ended in major disasters, each with hyperinflation in several countries. In the Habsburg Empire, Austria and Hungary faced hyperinflation. Yugoslavia experienced hyperinflation twice. In the former Soviet Union, 10 out of 15 republics had hyperinflation. The combined output falls were horrendous, though poorly documented because of the chaos. Officially, the average output fall in the former Soviet Union was 52 percent, and in the Baltics it amounted to 42 percent. According to the World Bank, in 2010, 5 out of 12 post-Soviet countries—Ukraine, Moldova, Georgia, Kyrgyzstan, and Tajikistan—had still not reached their 1990 GDP per capita levels in purchasing power parities. Similarly, out of seven Yugoslav successor states, at least Serbia and Montenegro, and probably Kosovo and Bosnia-Herzegovina, had not exceeded their 1990 GDP per capita levels in purchasing power parities two decades later. Arguably, Austria and Hungary did not recover from their hyperinflations in the early 1920s until the mid-1950s. Thus the historical record is that half the countries in a currency zone that breaks up experience hyperinflation and do not reach their prior GDP per capita as measured in purchasing power parities until about a quarter of a century later …"

As Åslund notes, one might dismiss these historical parallels by saying that these examples are too far in the past (the Habsburg Empire), or are too interrelated with the breakup of Communist economic regimes (the Soviet Union and Yugoslavia). Thus, he is at some pains to spell out just how and why these consequences could easily happen in modern Europe. For example, he cites one estimate that puts "the overall decline in output in the EMU countries of a complete breakup of the EMU at 5 to 9 percent during the first year and at 9 to 14 percent over three years." But his purpose is not to defend the specific numbers, but rather to spell out what happens in a scenario where countries revert to national currencies as quickly as possible, and need to introduce temporary capital controls during the transition. He quotes five concerns from an earlier study, and then adds a sixth concern of his own. Here are the first five concerns:

"First, “the logistical and legal problems of reintroducing national currencies, while transitional, would be severe and protracted.” Second, “capital flight and distress in the financial system would disrupt trade and investment.” Third, a “plunge in business and consumer confidence would likely be accompanied by a renewed dive in asset prices inside and outside the Eurozone.” Fourth, the “challenge of maintaining fiscal credibility and securing government funding would be intensified. This would call for yet more fiscal tightening measures, particularly for the weaker peripheral Eurozone countries.” Fifth, non–euro area countries would suffer from sharp appreciation of their currencies, “compounding the damage to their export growth.”"

Åslund emphasizes yet another concern, that the payments system across the euro countries could be sharply disrupted. These issues lead him to conclude: "The Economic and Monetary Union must be maintained at almost any cost. … The exit of any single country from the EMU, at the present time when large imbalances have been accumulated, would likely lead to a bank run, which would cause the EMU payments system to break down and with it the EMU itself."

Åslund is a pragmatist, and thus is willing to contemplate the ways in which a "velvet divorce" of the euro could happen in practical terms. He writes: "If the need for dissolution of the euro area appears inevitable, all countries should agree on an early exit date. Fortunately, all the euro countries still have fully equipped central banks, which should greatly facilitate the process of recovering their old functions—distribution of bank notes, monetary policy, maintenance of international currency reserves, exchange rate policy, foreign currency exchange, and payment routines."

But he points out that in terms of practical politics, a smooth dissolution of the euro is highly unlikely: "In the end, no velvet divorce is likely. No serious politician is likely to promote a dissolution of the euro area unless forced to do so, because no one wants to risk going down in history as the person who destroyed the EMU or the European Union. This is most of all true of German politicians. Therefore, if the euro area breaks up, a messy collapse is most likely to ensue."

U.S. Education: "Unthinking, Unilateral Educational Disarmament"

Everyone knows that the future of the U.S. economy and standard of living is tied up with how well Americans are educated. Everyone is right! But the U.S. education system has been stuck in neutral for decades, while other countries have been moving ahead. Martin West summarizes some of the evidence in "Global Lessons for Improving U.S. Education," which appears in the Spring 2012 edition of Issues in Science and Technology. The questions that follow are mine; the answers are excerpted from the article. I add a few remarks at the end.

What is the OECD test called the PISA, or "Program for International Student Assessment"?

"The PISA is administered every three years to nationally representative samples of students in each OECD country and in a growing number of partner countries and subnational units such as Shanghai. The 74 education systems that participated in the latest PISA study, conducted during 2009, represented more than 85% of the global economy and included virtually all of the United States’ major trading partners, making it a particularly useful source of information on U.S. students’ relative standing."

What does the PISA show about U.S. educational performance?

"Among the 34 developed democracies that are members of the Organization for Economic Cooperation and Development (OECD), 15-year-olds in the United States ranked 14th in reading, 17th in science, and no better than 25th in mathematics. … U.S. students performed well below the OECD average in math and essentially matched the average in science. In math, the United States trailed 17 OECD countries by a statistically significant margin, its performance was indistinguishable from that of 11 countries, and it significantly outperformed only five countries. In science, the United States significantly trailed 12 countries and outperformed nine. Countries scoring at similar levels to the United States in both subjects include Austria, the Czech Republic, Hungary, Ireland, Poland, Portugal, and Sweden. … The gap in average math and science achievement between the United States and the top-performing national school systems is dramatic. In math, the average U.S. student by age 15 was at least a full year behind the average student in six countries, including Canada, Japan, and the Netherlands. Students in six additional countries, including Australia, Belgium, Estonia, and Germany, outperformed U.S. students by more than half a year."

Is the poor U.S. performance a recent development?

"U.S. students, however, have never fared well in various international comparisons of student achievement. The United States ranked 11th out of 12 countries participating in the first major international study of student achievement, conducted in 1964, and its math and science scores on the 2009 PISA actually reflected modest improvements from the previous test. …
"The United States’ traditional reputation as the world’s educational leader stems instead from the fact that it experienced a far earlier spread of mass secondary education than did most other nations. … The United States’ historical advantage in terms of educational attainment has long since eroded, however. U.S. high-school graduation rates peaked in 1970 at roughly 80% and have declined slightly since, a trend often masked in official statistics by the growing number of students receiving alternative credentials, such as a General Educational Development (GED) certificate. … The U.S. high-school graduation rate now trails the average for European Union countries and ranks no better than 18th among the 26 OECD countries for which comparable data are available."

Can the U.S. make up for poor performance at the K-12 level by better performance in higher education? 

"Although the share of [U.S.] students enrolling in college has continued to climb, the share completing a college degree has hardly budged. … Meanwhile, other developed countries have continued to see steady increases in educational attainment and, in many cases, now have postsecondary completion rates that exceed those in the United States. … On average across the OECD, postsecondary completion rates have increased steadily from one age cohort to the next. Although only 20% of those aged 55 to 64 have a postsecondary degree, the share among those aged 25 to 34 is up to 35%. The postsecondary completion rate of U.S. residents aged 25 to 34 remains above the OECD average at 42%, but this reflects a decline of one percentage point relative to those aged 35 to 44 and is only marginally higher than the rate registered by older cohorts."

How large might the economic gains be from improving U.S. education performance? 

"Consider the results of a simulation in which it is assumed that the math achievement of U.S. students improves by 0.25 standard deviation gradually over 20 years. This increase would raise U.S. performance to roughly that of some mid-level OECD countries, such as New Zealand and the Netherlands, but not to that of the highest-performing OECD countries. Assuming that the past relationship between test scores and economic growth holds true in the future, the net present value of the resulting increment to GDP over an 80-year horizon would amount to almost $44 trillion. A parallel simulation of the consequences of bringing U.S. students up to the level of the top-performing countries suggests that doing so would yield benefits with a net present value approaching $112 trillion."
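
The article reports only the headline results of that simulation, but the underlying accounting is a discounted sum of annual GDP increments from a higher growth path. Here is a minimal sketch of that kind of calculation. Every parameter below (baseline GDP, the size of the growth effect, the phase-in, the discount rate) is a hypothetical placeholder chosen for illustration, not a value taken from the study:

```python
# Sketch of an NPV-of-reform calculation of the general kind described above.
# All parameters are hypothetical placeholders, not the study's actual inputs.

BASELINE_GDP = 15.0        # trillions of dollars, illustrative starting level
BASELINE_GROWTH = 0.02     # assumed annual GDP growth without the education reform
EXTRA_GROWTH_MAX = 0.004   # extra annual growth (0.4 pp) once the reform is fully phased in
PHASE_IN_YEARS = 20        # the achievement gain phases in over 20 years
HORIZON = 80               # years over which benefits are counted
DISCOUNT_RATE = 0.03       # rate used to bring future GDP gains to present value

def npv_of_reform() -> float:
    gdp_base = gdp_reform = BASELINE_GDP
    npv = 0.0
    for year in range(1, HORIZON + 1):
        # The growth effect ramps up linearly over PHASE_IN_YEARS, then stays constant.
        extra = EXTRA_GROWTH_MAX * min(year / PHASE_IN_YEARS, 1.0)
        gdp_base *= 1 + BASELINE_GROWTH
        gdp_reform *= 1 + BASELINE_GROWTH + extra
        npv += (gdp_reform - gdp_base) / (1 + DISCOUNT_RATE) ** year
    return npv

print(f"NPV of the GDP gains over {HORIZON} years: about ${npv_of_reform():.0f} trillion")
```

The point of the sketch is only the mechanics: a modest, sustained growth effect, compounded and then discounted over many decades, yields a present value that is many multiples of a single year's GDP.
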
What are some common factors across the other countries where the education systems seem to be outperforming the U.S. education system? 
"[T]here are three broad areas in which the consistency of findings across studies using different international tests and country samples bears attention.
"Exit exams. Perhaps the best-documented factor is that students perform at higher levels in countries (and in regions within countries) with externally administered, curriculum-based exams at the completion of secondary schooling that carry significant consequences for students of all ability levels. Although many states in the United States now require students to pass an exam in order to receive a high-school diploma, these tests are typically designed to assess minimum competency in math and reading and are all but irrelevant to students elsewhere in the performance distribution. In contrast, exit exams in many European and Asian countries cover a broader swath of the curriculum, play a central role in determining students’ postsecondary options, and carry significant weight in the labor market. … The most rigorous available evidence indicates that math and science achievement is a full grade-level equivalent higher in countries with such an exam system in the relevant subject.
"Private-school competition. Countries vary widely in the extent to which they make use of the private sector to provide public education. … Rigorous studies confirm that students in countries that for historical reasons have a larger share of students in private schools perform at higher levels on international assessments while spending less on primary and secondary education. Such evidence suggests that competition can spur school productivity. In addition, the achievement gap between socioeconomically disadvantaged and advantaged students is reduced in countries in which private schools receive more government funds.
"High-ability teachers. Much attention has recently been devoted to the fact that several of the highest-performing countries internationally draw their teachers disproportionately from the top third of all students completing college degrees. This contrasts sharply with recruitment patterns in the United States."
Brief reflections

I remember back in 1983 when something called the National Commission on Excellence in Education issued a report called "A Nation at Risk: The Imperative for Educational Reform." It's available in several places on the web, like here and here. Here are the introductory paragraphs, much quoted at the time:

"Our Nation is at risk. Our once unchallenged preeminence in commerce, industry, science, and technological innovation is being overtaken by competitors throughout the world. This report is concerned with only one of the many causes and dimensions of the problem, but it is the one that undergirds American prosperity, security, and civility. We report to the American people that while we can take justifiable pride in what our schools and colleges have historically accomplished and contributed to the United States and the well-being of its people, the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people. What was unimaginable a generation ago has begun to occur–others are matching and surpassing our educational attainments."

"If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves. We have even squandered the gains in student achievement made in the wake of the Sputnik challenge. Moreover, we have dismantled essential support systems which helped make those gains possible. We have, in effect, been committing an act of unthinking, unilateral educational disarmament."

In 2008, the U.S. Department of Education followed up with a report called "A Nation Accountable: Twenty-five Years After A Nation at Risk." It said: "If we were “at risk” in 1983, we are at even greater risk now. The rising demands of our global economy, together with demographic shifts, require that we educate more students to higher levels than ever before. Yet, our education system is not keeping pace with these growing demands."

Adam Smith, patron saint of all economists, is said to have responded to overwrought predictions that Britain was about to be ruined by some setbacks during the Revolutionary War by remarking: "There is a great deal of ruin in a nation." I do not wish to sound overwrought. But a nation that does not steadily improve the education level of its population is not preparing itself sufficiently for a future of growing and shared prosperity.

Summer 2012 Journal of Economic Perspectives

Here is the table of contents for the Summer 2012 issue of my own Journal of Economic Perspectives, with abstracts and links to each article. I'll be blogging about some of these articles in more detail in the next week or so. As always, all JEP articles are freely available, going back to 1994, courtesy of the American Economic Association.
Symposium: Labor Markets and Unemployment


“A Search and Matching Approach to Labor Markets: Did the Natural Rate of Unemployment Rise?”
Mary C. Daly, Bart Hobijn, Ayşegül Şahin and Robert G. Valletta
Abstract: The U.S. unemployment rate has remained stubbornly high since the 2007-2009 recession, leading some observers to conclude that structural rather than cyclical factors are to blame. Relying on a standard job search and matching framework and empirical evidence from a wide array of labor market indicators, we examine whether the natural rate of unemployment has increased since the recession began, and if so, whether the underlying causes are transitory or persistent. Our preferred estimate indicates an increase in the natural rate of unemployment of about one percentage point during the recession and its immediate aftermath, putting the current natural rate at around 6 percent. An assessment of the underlying factors responsible for this increase, including labor market mismatch, extended unemployment benefits, and uncertainty about overall economic conditions, implies that only a small fraction is likely to be persistent.

“Who Suffers during Recessions?”
Hilary Hoynes, Douglas L. Miller and Jessamyn Schaller
Abstract: “In this paper, we examine how business cycles affect labor market outcomes in the United States. We conduct a detailed analysis of how cycles affect outcomes differentially across persons of differing age, education, race, and gender, and we compare the cyclical sensitivity during the Great Recession to that in the early 1980s recession. We present raw tabulations and estimate a state panel data model that leverages variation across U.S. states in the timing and severity of business cycles. We find that the impacts of the Great Recession are not uniform across demographic groups and have been felt most strongly for men, black and Hispanic workers, youth, and low-education workers. These dramatic differences in the cyclicality across demographic groups are remarkably stable across three decades of time and throughout recessionary periods and expansionary periods. For the 2007 recession, these differences are largely explained by differences in exposure to cycles across industry-occupation employment.”

Symposium: Government Debt


“The European Sovereign Debt Crisis”
Philip R. Lane
Abstract: The origin and propagation of the European sovereign debt crisis can be attributed to the flawed original design of the euro. In particular, there was an incomplete understanding of the fragility of a monetary union under crisis conditions, especially in the absence of banking union and other European-level buffer mechanisms. Moreover, the inherent messiness involved in proposing and implementing incremental multicountry crisis management responses on the fly has been an important destabilizing factor throughout the crisis. After diagnosing the situation, we consider reforms that might improve the resilience of the euro area to future fiscal shocks.
“Public Debt Overhangs: Advanced-Economy Episodes since 1800”
Carmen M. Reinhart, Vincent R. Reinhart and Kenneth S. Rogoff
Abstract: We identify the major public debt overhang episodes in the advanced economies since the early 1800s, characterized by public debt to GDP levels exceeding 90 percent for at least five years. Consistent with Reinhart and Rogoff (2010) and most of the more recent research, we find that public debt overhang episodes are associated with lower growth than during other periods. The duration of the average debt overhang episode is perhaps its most striking feature. Among the 26 episodes we identify, 20 lasted more than a decade. The long duration belies the view that the correlation is caused mainly by debt buildups during business cycle recessions. The long duration also implies that the cumulative shortfall in output from debt overhang is potentially massive. These growth-reducing effects of high public debt are apparently not transmitted exclusively through high real interest rates, as in eleven of the episodes, interest rates are not materially higher.


Articles


“The Economics of Spam”
Justin M. Rao and David H. Reiley
Abstract: We estimate that American firms and consumers experience costs of almost $20 billion annually due to spam. Our figure is more conservative than the $50 billion figure often cited by other authors, and we also note that the figure would be much higher if it were not for private investment in anti-spam technology by firms, which we detail further on. Based on the work of crafty computer scientists who have infiltrated and monitored spammers' activity, we estimate that spammers and spam-advertised merchants collect gross worldwide revenues on the order of $200 million per year. Thus, the "externality ratio" of external costs to internal benefits for spam is around 100:1. In this paper, we start by describing the history of the market for spam, highlighting the strategic cat-and-mouse game between spammers and email providers. We discuss how the market structure for spamming has evolved from a diffuse network of independent spammers running their own online stores to a highly specialized industry featuring a well-organized network of merchants, spam distributors (botnets), and spammers (or "advertisers"). We then put the spam market's externality ratio of 100 into context by comparing it to other activities with negative externalities. Lastly, we evaluate various policy proposals designed to solve the spam problem, cautioning that these proposals may err in assuming away the spammers' ability to adapt.

“Identifying the Disadvantaged: Official Poverty, Consumption Poverty, and the New Supplemental Poverty Measure”
Bruce D. Meyer and James X. Sullivan
Abstract: “We discuss poverty measurement, focusing on two alternatives to the current official measure: consumption poverty, and the Census Bureau's new Supplemental Poverty Measure (SPM) that was released for the first time last year. The SPM has advantages over the official poverty measure, including a more defensible adjustment for family size and composition, an expanded definition of the family unit that includes cohabitors, and a definition of income that is conceptually closer to resources available for consumption. The SPM's definition of income, though conceptually broader than pre-tax money income, is difficult to implement given available data and their accuracy. Furthermore, income data do not capture consumption out of savings and tangible assets such as houses and cars. A consumption-based measure has similar advantages but fewer disadvantages. We compare those added to and dropped from the poverty rolls by the alternative measures relative to the current official measure. We find that the SPM adds to poverty individuals who are more likely to be college graduates, own a home and a car, live in a larger housing unit, have air conditioning, health insurance, and substantial assets, and have other more favorable characteristics than those who are dropped from poverty. Meanwhile, we find that a consumption measure compared to the official measure or the SPM adds to the poverty rolls individuals who are more disadvantaged than those who are dropped. We decompose the differences between the SPM and official poverty and find that the most problematic aspect of the SPM is the subtraction of medical out-of-pocket expenses from SPM income. Also, because the SPM poverty thresholds change in an odd way over time, it will be hard to determine if changes in poverty are due to changes in income or changes in thresholds. Our results present strong evidence that a consumption-based poverty measure is preferable to both the official income-based poverty measure and to the Supplemental Poverty Measure for determining who are the most disadvantaged.”
“The New Demographic Transition: Most Gains in Life Expectancy Now Realized Late in Life”
Karen N. Eggleston and Victor R. Fuchs
Abstract: The share of increases in life expectancy realized after age 65 was only about 20 percent at the beginning of the 20th century for the United States and 16 other countries at comparable stages of development; but that share was close to 80 percent by the dawn of the 21st century, and is almost certainly approaching 100 percent asymptotically. This new demographic transition portends a diminished survival effect on working life. For high-income countries at the forefront of the longevity transition, expected lifetime labor force participation as a percent of life expectancy is declining. Innovative policies are needed if societies wish to preserve a positive relationship running from increasing longevity to greater prosperity.

“Groups Make Better Self-Interested Decisions”
Gary Charness and Matthias Sutter

Abstract: In this paper, we describe what economists have learned about differences between group and individual decision-making. This literature is still young, and in this paper, we will mostly draw on experimental work (mainly in the laboratory) that has compared individual decision-making to group decision-making, and to individual decision-making in situations with salient group membership. The bottom line emerging from economic research on group decision-making is that groups are more likely to make choices that follow standard game-theoretic predictions, while individuals are more likely to be influenced by biases, cognitive limitations, and social considerations. In this sense, groups are generally less "behavioral" than individuals. An immediate implication of this result is that individual decisions in isolation cannot necessarily be assumed to be good predictors of the decisions made by groups. More broadly, the evidence casts doubts on traditional approaches that model economic behavior as if individuals were making decisions in isolation.

“Deleveraging and Monetary Policy: Japan since the 1990s and the United States since 2007”
Kazuo Ueda

Abstract: As the U.S. economy works through a sluggish recovery several years after the Great Recession technically came to an end in June 2009, it can only look with horror toward Japan's experience of two decades of stagnant growth since the early 1990s. In contrast to Japan, U.S. policy authorities responded to the financial crisis since 2007 more quickly. Surely, they learned from Japan's experience. I will begin by describing how Japan's economic situation unfolded in the early 1990s and offering some comparisons with how the Great Recession unfolded in the U.S. economy. I then turn to the Bank of Japan's policy responses to the crisis and again offer some comparisons to the Federal Reserve. I will discuss the use of both the conventional interest rate tool—the federal funds rate in the United States, and the "call rate" in Japan—and nonconventional measures of monetary policy and consider their effectiveness in the context of the rest of the financial system.

“The Relationship between Unit Cost and Cumulative Quantity and the Evidence for Organizational Learning-by-Doing”
Peter Thompson
Abstract: The concept of a learning curve for individuals has been around since the beginning of the twentieth century. The idea that an analogous phenomenon might also apply at the level of the organization took longer to emerge, but it had begun to figure prominently in military procurement and scheduling at least a decade before Wright's (1936) classic paper providing evidence that the cost of producing an airframe declined as cumulative output increased. Wright (1936) was careful not to describe his empirical results as a learning curve. Of his three proposed explanations for the relationships he observed between cost and cumulative quantity produced, only one is unambiguously a source of organizational learning; the others are consistent with organizational learning but also with standard static economies of scale. It quickly became apparent that the notion of organizational learning as a by-product of accumulated experience has important consequences for firm strategy. The Boston Consulting Group (BCG) built its consulting business around the concept of what it branded the experience curve, asserting that cost reductions associated with cumulative output applied to all costs, were "consistently around 20-30% each time accumulated production is doubled, [and] this decline goes on in time without limit" (Henderson 1968). Today, the negative relationship between unit production costs and cumulative output is one of the best-documented empirical regularities in economics. Nonetheless, the thesis of this paper is that the conceptual transformation of the relationship between cost and cumulative production into an organizational learning curve with profound strategic implications has not been sufficiently supported with direct empirical evidence.


Features


“Recommendations for Further Reading”
Timothy Taylor

High Government Debt: A Bang or a Whimper?

Watching the travails of the euro area in the last few years, it seems as if the negative consequences of high government debt are likely to manifest themselves with a bang: that is, a scenario in which investors fear that the debt will not be repaid, and thus begin demanding much higher interest rates for being willing to hold the debt, which then makes it impossible for the government to repay. Rounds of financial panic alternating with recrimination follow, while the economy of the country flounders. In a roundabout way, this scenario is oddly comforting for Americans, because there is no sign in the financial markets (and remember, financial markets look toward future interest rates, not just current rates) that U.S. Treasury debt is anywhere near to experiencing a surge in its perceived riskiness.

But in the Summer 2012 issue of my own Journal of Economic Perspectives, Carmen M. Reinhart, Vincent R. Reinhart and Kenneth S. Rogoff offer a different scenario in "Public Debt Overhangs: Advanced-Economy Episodes since 1800." They argue that very high levels of government debt can also lead to a debt-without-drama situation in which interest rates rise little or not at all, and no deep financial crisis occurs–but the economy nonetheless suffers a prolonged slowdown in its long-term growth rate.

They begin by collecting the available data on advanced economies from 1800 to 2011, and find 26 situations in which the ratio of gross government debt/GDP in a certain country exceeded 90% for at least five years. U.S. gross government debt passed the 90% debt/GDP threshold in 2010, but because it has not remained in that zone for five years, the current U.S. debt experience is not included in their group of 26 examples. They point out many patterns in this data, but here, I would emphasize three:

  • When the government debt/GDP ratio climbs above 90%, it tends to remain there for a while. They find only a few episodes in which the ratio exceeded 90% for less than five years–mainly cases of wartime debts that declined quickly after the war. As they note: "the 26 episodes of public debt overhang in our sample had an average duration of 23 years." Some countries had multiple lengthy episodes of high government debt. "For example, since 1848 (when the public debt data is available), Greece leads the way with 56 percent of the debt/GDP ratio observations above 90 percent."
  • "However, we find that countries with a public debt overhang by no means always experience either a sharp rise in real interest rates or difficulties in gaining access to capital markets. Indeed, in 11 of the 26 cases where public debt was above the 90 percent debt/GDP threshold, real interest rates were either lower, or about the same, as during the lower debt/GDP years."
  • "Consistent with a small but growing body of research, we find that the vast majority of high debt episodes—23 of the 26— coincide with substantially slower growth. On average across individual countries, debt/GDP levels above 90 percent are associated with an average annual growth rate 1.2 percent lower than in periods with debt below 90 percent; the average annual levels are 2.3 percent during the periods of exceptionally high debt versus 3.5 percent otherwise." The cases of high debt/GDP ratios and fast growth are typically cases of a bounceback from postwar rebuilding.

In discussing how government debt might lead to slower growth, there is a challenging problem of determining cause and effect. It is possible that high government debt leads to reduced growth, perhaps by crowding out domestic investment as government borrowing soaks up the available financial capital. (The authors do not have long-term data on investment levels to test this hypothesis.) But it is also possible that a country with slow economic growth might find it easier to build up excessive government debt and harder to muster the economic resources or political decision-making to reduce that debt. In either case, high government debt and slow growth accompany each other–but which is the cause and which is the effect?

Reinhart, Reinhart, and Rogoff cite a number of studies using different groups of countries over different time frames, along with statistical approaches that seek to clarify the question of cause and effect (for example, instrumental variables, generalized method of moments estimation, measuring growth with five-year averages that are determined by other variables and thus not subject to feedback effects, fitting data to an endogenous growth model, and the like). They find:

"We would not claim that the cause-and-effect problems involved in determining how public debt overhang affects economic growth have been definitively addressed. But the balance of the existing evidence certainly suggests that public debt above a certain threshold leads to a rate of economic growth that is perhaps 1 percentage point slower per year. In addition, the 26 episodes of public debt overhang in our sample had an average duration of 23 years, so the cumulative effect of annual growth being 1 percentage point slower would be a GDP that is roughly one-fourth lower at the end of the period. This debt-without-drama scenario is reminiscent for us of T.S. Eliot’s (1925) lines in “The Hollow Men”: “This is the way the world ends/Not with a bang but a whimper.” Last but not least, those who are inclined to the belief that slow growth is more likely to be causing high debt, rather than vice versa, need to better reconcile their beliefs with the apparent nonlinearity of the relationship, in which correlation is relatively low at low levels of debt but rises markedly when debt/GDP ratios exceed the 90 percent threshold. Overall, the general thrust of the evidence is that the cumulative economic losses from a sustained public debt overhang can be extremely large compared with the level of output that would otherwise have occurred, even when these economic losses do not manifest themselves as a financial crisis or a recession. …"
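
The "roughly one-fourth lower" figure is straightforward compound-growth arithmetic: a 1 percentage point growth penalty, sustained for 23 years, compounds to a GDP level gap in the 20-25 percent range. Here is a minimal sketch of that back-of-the-envelope calculation; it uses only the 1 percentage point and 23-year figures from the passage above, and treats the penalty as compounding on its own, a common simplification.

```python
# Compounding arithmetic behind "GDP roughly one-fourth lower": the 1-percentage-point
# growth penalty and 23-year duration come from the passage above. Treating the penalty
# as compounding on its own is a back-of-the-envelope simplification.

years = 23
penalty = 0.01   # growth is 1 percentage point slower per year during the overhang

ratio = (1 + penalty) ** years   # counterfactual (no-overhang) GDP relative to actual GDP
print(f"Counterfactual GDP ends up {ratio - 1:.0%} higher")                     # about 26%
print(f"Equivalently, actual GDP ends up {1 - 1 / ratio:.0%} below that path")  # about 20%
```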

"This paper should not be interpreted as a manifesto for rapid public debt deleveraging exclusively via fiscal austerity in an environment of high unemployment. Our review of historical experience also highlights that, apart from outcomes of full or selective default on public debt, there are other strategies to address public debt overhang including debt restructuring and a plethora of debt conversions (voluntary and otherwise). The pathway to containing and reducing public debt will require a change that is sustained over the middle and the long term. However, the evidence, as we read it, casts doubt on the view that soaring government debt does not matter when markets (and official players, notably central banks) seem willing to absorb it at low interest rates—as is the case for now."

Man-cession and He-covery

One striking feature of the Great Recession is that the unemployment rate for men spiked higher than that for women–but now has recovered to roughly the same rate. Here's an illustrative figure, courtesy of a chartbook published by the Stanford Institute for Economic Policy Research.

How unexpected is this pattern of "man-cession" and "he-covery"? Although I wasn't expecting it, perhaps I should have been. Looking at the graph's depiction of the aftermath of the "jobless recoveries" that followed the 2001 and 1990-91 recessions, in both cases the jobless rate spiked higher for men than for women. In the Summer 2012 issue of my own Journal of Economic Perspectives, Hilary Hoynes, Douglas L. Miller and Jessamyn Schaller investigate the question: "Who Suffers During Recessions?" (All articles in the JEP, from the current issue back to 1994, are freely available on-line courtesy of the American Economic Association. Starting this year, entire issues can also be downloaded in PDF, Kindle, or ePub formats.)

Here is the conclusion from Hoynes, Miller, and Schaller:

"The labor market effects of the Great Recession have not been uniform across demographic groups. Men, blacks, Hispanics, youth, and those with lower education levels experience more employment declines and unemployment increases compared to women, whites, prime-aged workers, and those with high education levels. However, these dramatic differences in the cyclicality across demographic groups have been remarkably stable since at least the late 1970s and across recessionary periods versus expansionary periods. These gradients persist despite the dramatic changes in the labor market over the past 30 years, including the increase in labor force attachment for women, Hispanic immigration, the decline of manufacturing, and so on."

"The general tone of these findings might be surprising given much emphasis in the press on the “man-cession”—that is, the greater effect that the Great Recession has had on men … Our analysis shows that men, across recessions and recoveries, experience more cyclical labor market outcomes. This is largely the result of the higher propensity of men to be employed in highly cyclical industries such as construction and manufacturing, while women are more likely to be employed in less-cyclical industries such as services and public administration. More generally, much of the difference in the cyclical effect across groups during the 2007 recession is explained by differing exposure to fluctuations due to the industries and occupations in which the groups are employed.

"Although overall the 2007–2009 recession appears similar to the 1980s recession, responsiveness by women’s employment and by that of the youngest and oldest workers was somewhat greater in the more recent recession. Further, we do find evidence of a “he-covery;” and the extent to which the current recovery is being experienced more by men than women (compared to the 1980s recovery) is largely due to a drop in women’s cyclicality during the current recovery."

"Despite these various distinctions, the overarching picture is one of stability in the demographic patterns of response to the business cycle over time. Who loses in the Great Recession? The same groups who lost in the recessions of the 1980s and who experience weaker labor market outcomes even in the good times. Viewed through the lens of these demographic patterns across labor markets, the Great Recession is different from business cycles over the three decades earlier in size and length, but not in type."

EPA on Value of a Life

What is the value of a human life? For U.S. regulatory purposes, the Environmental Protection Agency has a FAQ page up on the subject. Here's the bottom line:

"EPA recommends that the central estimate of $7.4 million ($2006), updated to the year of the analysis, be used in all benefits analyses that seek to quantify mortality risk reduction benefits regardless of the age, income, or other population characteristics of the affected population until revised guidance becomes available …"

On what sort of numbers is that estimate based? EPA offers this illustrative calculation:

"In the scientific literature, these estimates of willingness to pay for small reductions in mortality risks are often referred to as the “value of a statistical life.” This is because these values are typically reported in units that match the aggregate dollar amount that a large group of people would be willing to pay for a reduction in their individual risks of dying in a year, such that we would expect one fewer death among the group during that year on average. This is best explained by way of an example. Suppose each person in a sample of 100,000 people were asked how much he or she would be willing to pay for a reduction in their individual risk of dying of 1 in 100,000, or 0.001%, over the next year. Since this reduction in risk would mean that we would expect one fewer death among the sample of 100,000 people over the next year on average, this is sometimes described as “one statistical life saved.” Now suppose that the average response to this hypothetical question was $100. Then the total dollar amount that the group would be willing to pay to save one statistical life in a year would be $100 per person × 100,000 people, or $10 million. This is what is meant by the “value of a statistical life.” Importantly, this is not an estimate of how much money any single individual or group would be willing to pay to prevent the certain death of any particular person."
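
The arithmetic in that example scales up easily; here is a minimal sketch using only the numbers from the EPA passage above:

```python
# The EPA's illustrative "value of a statistical life" arithmetic from the passage above:
# average willingness to pay for a small risk reduction, scaled up over the group in which
# one expected death is avoided.

population = 100_000             # people in the hypothetical sample
risk_reduction = 1 / 100_000     # each person's annual risk of dying falls by 1 in 100,000
avg_willingness_to_pay = 100     # average stated willingness to pay, in dollars

statistical_lives_saved = population * risk_reduction   # expected deaths avoided = 1
value_of_statistical_life = (population * avg_willingness_to_pay) / statistical_lives_saved

print(f"Statistical lives saved: {statistical_lives_saved:.0f}")
print(f"Value of a statistical life: ${value_of_statistical_life:,.0f}")  # $10,000,000
```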

Other studies look at jobs that pose different mortality risks, and seek to estimate how much additional pay is required for people to take such jobs. Again, the willingness to take a certain amount of money in exchange for a change in the risk of dying can be translated into an estimated "value of a statistical life."

The estimate raises obvious questions, but hard experience has taught me that the obvious questions for me are not always the obvious questions for others! For many people, the obvious question is whether it isn't just morally wrong to put any value on life. To me, that question misses the point. Every time we set a rule or regulation at one level, and not another level, we are implicitly making a decision about the value of a human life. Think the speed limit should be slower, or faster, or the same? No matter your choice, you are implicitly setting a value on human life vs. other tradeoffs of time and money.

For me, one interesting question lies in the EPA assumption that every life has the same value, regardless of age or health characteristics. Thus, society should be willing to spend the same amount to save the life of an 80-year-old, a 40-year-old, and a 10-year-old. EPA has some difficult history here. Back in 2003 it proposed a cost-benefit analysis of an air pollution regulation in which the statistical value of a life saved was lower for those over 70 than for those under 70. After a public outcry, this distinction was eliminated. Studies (like this one) show only weak support for the idea that those who are actually old or sick would place a lower value on their own lives: indeed, some of those who are extremely ill tend to place a higher value on their lives in survey data. But when government is drawing up rules and regulations, it may choose other priorities.

Another interesting question involves what value to place on lives saved in the present vs. lives saved in the future–perhaps even several decades in the future. Ben Trachtenberg outlines some of these issues in a recent note for the UCLA Law Review. In the past, standard practice at the EPA and the U.S. Department of Transportation was that the value of lives in the future, like all costs and benefits arising in the future, was adjusted downward by a "discount rate." He writes: "Because, however, lives saved in the future were given the same nominal value as lives saved in the present, the real value of future lives was substantially eroded by discounting to present value, generally at annual rates of 3 and 7 percent. In other words, if a life saved today is worth $8 million, a life saved in ten or twenty years would be worth far less. A discount rate of 7 percent erodes half the value of a life expected to be saved in 2022 and three-quarters of one expected to be saved in 2032."
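
The erosion figures in that quote are simple present-value arithmetic. Here is a minimal sketch; the $8 million value and the 3 and 7 percent rates come from the passage above, and the 10- and 20-year horizons correspond to the quoted 2022 and 2032 dates from the note's vantage point.

```python
# Present-value arithmetic behind the quoted erosion figures: discounting a fixed nominal
# value of a life saved 10 or 20 years in the future at 3% and 7% per year.

value_of_life = 8_000_000   # dollars, the illustrative figure from the quote above

for rate in (0.03, 0.07):
    for years in (10, 20):
        present_value = value_of_life / (1 + rate) ** years
        print(f"rate {rate:.0%}, {years} years ahead: "
              f"${present_value:,.0f} ({present_value / value_of_life:.0%} of today's value)")

# At 7 percent, a life saved 10 years out retains about half its value, and one saved
# 20 years out retains roughly a quarter, matching the erosion described in the quote.
```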

However, the rules have now changed. "Before subjecting lifesaving benefits to the same discounting applied to other costs and benefits, the agencies adjust the values upward to reflect the expected higher income (and associated willingness to pay to avoid risks of harm) enjoyed by future persons. This seemingly minor procedural change can radically alter the expected benefits of major regulations …" I suspect that this adjustment will prove quite controversial: after all, it suggests that future lives of those yet unborn have a higher value, before applying a discount rate, than present lives.

Yet another set of intriguing questions has to do with the fact that not all regulatory agencies use the same value for a statistical life: the EPA, the U.S. Department of Transportation, and the FDA use values that can be millions of dollars apart.

These issues matter because, for most individuals and countries, the value of life is the single largest asset there is. If my life is worth the EPA-approved $7.4 million, that is substantially higher than the value of any assets I am likely to accumulate in my life. A couple of weeks ago I posted about an effort by a group at the United Nations called the International Human Dimensions Program to measure the extent to which economic growth was sustainable by estimating the value of human capital, produced capital, and natural capital across countries in the Inclusive Wealth Report 2012.

That report included one of those thoughts that seemed obviously true, once someone had pointed it out. They excluded "health capital" from the calculations, because including it would have swamped everything else: "Health capital of a nation’s population reflects the expected discounted value of years of life remaining. This is, understandably, a large number; indeed, we find that health capital makes up more than 90% of the capital base for all countries in the study. In the nations under study, the amount of health capital that each person owns outweighs all other forms of capital combined. Given a population, slight changes in mortality rates result in more or less health capital each year."

Gains to health and life expectancy are extraordinarily important, but in a world of inevitable costs and tradeoffs, values and limits must still be set.