U.S. Debt Problems: Brewing for Decades

A lot of folks have a version of chronic fatigue syndrome when the topic of budget deficits comes up. After all, wasn’t there a big spat over budget deficits through most of the 1980s and into the 1990s? And wasn’t there a big spat over deficits through the middle years of the George W. Bush presidency in the mid-2000s? Every time you turn around the last few years, it feels like the federal government is about to hit the official “debt ceiling” and there’s yet another round of breathless, high-stakes negotiating that pushes the problem back until after the next election, when we get to do it all again.

Thus, it’s important to understand that America’s current trajectory of deficits and debt is not just one more round of political games. Instead, we have entered a situation where deficits and debt are already well outside the usual parameters. What’s often hard to explain is that the U.S. deficit and debt problem isn’t (yet) an emergency that is likely to rock the U.S. economy in the next year or two or three. There’s still some time for adjustments. But we really don’t want to waste that time. Daniel Thornton offers a useful overview and insights in “The U.S. Deficit/Debt Problem: A Longer-Run Perspective,” in the November/December 2012 issue of the Review published by the Federal Reserve Bank of St. Louis.

For starters, here’s a figure showing U.S. annual budget deficits over time, going back to 1800. There are five episodes of major budget deficits in the history of the U.S. government: the Civil War, World War I, the Great Depression, World War II, and the last few years. The deficits of the last few years don’t match those of the major wars in U.S. history, but as a share of GDP, they do exceed the deficits of the Great Depression.

For another perspective, here’s the ratio of accumulated gross government debt to GDP. The main episodes of high budget deficits are visible here as upward bumps in the ratio. The debt/GDP ratio is approaching levels that have previously been reached only by the funds borrowed to fight World War II.

Thornton emphasizes that the roots of our current deficit and debt troubles go back well before the Great Recession of 2007-2009, and well before the Bush tax cuts of the early 2000s. Instead, Thornton locates the start of the problems at about 1970. In the chart of annual deficits, for example, notice that after about 1970 a pattern of volatile but growing deficits emerges. The pattern is interrupted for a few years in the late 1990s by the higher tax revenues and lower social spending resulting from the unsustainable dot-com boom, but a return to larger deficits was coming eventually. Similarly, the debt/GDP chart shows that ratio bottoming out around the mid-1970s and then beginning to climb, again with an interruption during the dot-com years of the late 1990s.

What factor has been driving spending higher? Thornton’s answer is straightforward: “[M]ost of the increase in spending that generated the persistent deficit over the 38 years before the financial crisis was spending for Medicare and Medicaid, particularly Medicare.”

My own take is that it’s been clear since at least the 1980s, and arguably earlier, that the U.S. budget was going to run into severe difficulties when the baby boom generation started retiring. The leading edge of the boomer generation was born in 1946, and thus is just now hitting age 65 and heading into retirement in substantial numbers. This demographic shift was always going to cause problems for Social Security, but those problems could be dealt with by gradually raising the retirement age and tweaking the formulas for payments and benefits. In comparison, there is no easy way of addressing the combination of an aging American population and steadily rising health care costs.

In other words, a fiscal crisis has been coming for the U.S. budget for some decades. But before the Great Recession, we thought we had 20 years or so to make adjustments before we hit the danger zone. When the Great Recession squashed tax revenues and the attempt at fiscal stimulus pushed up spending, much of that lead time evaporated. The Congressional Budget Office focuses on a somewhat different number than Thornton does, looking at debt “held by the public” rather than “gross” debt, essentially leaving out debt that the federal government owes to itself, like Treasury bonds held in the Social Security trust fund. By that measure, back in December 2007 before the Great Recession hit, long-term budget projections from the CBO were that the debt/GDP ratio would rise to the danger zone of 100% by around 2030. Eighteen months later, after the Great Recession hit, the June 2009 report from the CBO forecast that the debt/GDP ratio would hit 100% around 2022.
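The way a recession can chop years off this kind of timeline can be illustrated with a stylized calculation. This is only a sketch with hypothetical round numbers, not the CBO's actual projection model: a constant deficit (as a share of GDP) adds to the debt each year, while GDP growth dilutes the existing ratio.

```python
# Illustrative sketch of why the "danger zone" arrives sooner after a recession.
# All numbers are hypothetical round figures chosen for illustration only.

def years_to_threshold(debt_ratio, deficit_ratio, gdp_growth, threshold=1.0):
    """Years until debt/GDP reaches `threshold`, assuming a constant annual
    deficit (share of GDP) and constant GDP growth."""
    years = 0
    while debt_ratio < threshold:
        # GDP growth dilutes last year's ratio; this year's deficit adds to it.
        debt_ratio = debt_ratio / (1 + gdp_growth) + deficit_ratio
        years += 1
    return years

# A pre-recession-style path: lower starting debt, moderate deficits.
before = years_to_threshold(debt_ratio=0.36, deficit_ratio=0.03, gdp_growth=0.025)
# A post-recession-style path: higher starting debt, larger deficits.
after = years_to_threshold(debt_ratio=0.62, deficit_ratio=0.05, gdp_growth=0.025)
print(before, after)  # the 100% threshold arrives decades sooner on the second path
```

The point of the sketch is just the comparison: jump the starting debt and the deficit path at the same time, and the crossing date moves up dramatically, which is the shape of the CBO revision described above.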

Of course, it would have been far more sensible to address these issues of high and rising health care costs and the looming problem of large fiscal deficits before the Great Recession hit, but it wasn’t politically possible to do so. So now we get to face these problems, decades in the making, in a weak economy and with a shortened timeline.

Climate Change Strategies (Including Mangroves)

A UK organization called the Global Carbon Project has put out its annual estimates of carbon emissions, and perhaps unsurprisingly, the world economy is on track to set a new record in 2012 of 38.2 billion tons, up a few percentage points from 2011.

Here’s one figure from the report showing trendlines for the four largest emitters: China, the U.S., the EU, and India. Notice in particular that China is not only by far the largest carbon emitter, but is also on a spike-like upward trend in emissions. Emissions in the U.S. have been fairly flat since the late 1990s. Emissions from India are on a path to surpass emissions from the EU soon.

This next figure shows carbon emissions on a per capita basis. The U.S. economy is by far the highest in per capita carbon emissions, although the level is down a fair amount since the 1970s, and down in the last 10 years or so as well. The rise in world per capita emissions is again being driven by China in particular, as well as India and other emerging market economies.

News stories on the report (for example, here) quote climate change scientists to the effect that it’s time to “throw everything we have at the problem.” What does that mean in practice?

I posted about a year ago on my support for “The Drill-Baby Carbon Tax: A Grand Compromise on Energy Policy”: “The Drill-Baby Carbon Tax is my proposed grand compromise for energy policy in the United States. As the name suggests, it has two parts. On one side, there would be a national commitment to move ahead with all deliberate speed in developing the vast U.S. fossil fuel energy resources that are now technologically available. On the other side, the United States would enact an appropriate carbon tax to offset concerns over the risks of climate change.”

In particular, those who feel that climate change is a truly serious threat should be supporting accelerated exploitation of natural gas resources. As the just-released Global Carbon Project report notes, “The recent shift from coal to gas in the US could ‘kick start’ mitigation.” In a post last June, I reviewed some evidence on “Unconventional Natural Gas and Environmental Issues.” Yes, it’s important that the newly available gas resources be exploited with care and best practices. But because natural gas burns so much more cleanly than coal, this shift offers a definite method of reducing carbon emissions over the next decade or so.

In the spirit of throwing everything we can at the problem, what other options are available? Last July I posted in “Other Air Pollutants: Soot and Methane” on some analysis and proposals for reducing these other air pollutants, both for their immediate health benefits, which are substantial, and because they play an important role in climate change. I’m less positive about increasing fuel economy standards for cars, for reasons discussed in a February post, “Are the New Auto Fuel Economy Standards for Real?”

We should also be looking at other methods of environmental protection that could have offsetting effects on carbon emissions. As one example, the most recent issue of Resources magazine from Resources for the Future discussed protection of mangroves in “Blue Carbon: A Potentially Winning Climate Strategy.”

“Mangroves, which are among the most unique and rapidly disappearing natural environments in the world, store enormous amounts of carbon, especially in the earth below their roots, possibly equal in total to roughly 2.5 times annual global CO2 emissions. Between 1990 and 2005, mangrove loss occurred at a rate of about 0.7 percent per year. When these coastal habitats are disturbed by changes in land use, the so-called blue carbon locked away in the bodies of the plants or in the soil is gradually exposed to air and released as CO2 into the atmosphere. In a new study released by RFF and the University of California, Davis, co-authors Juha Siikamäki, James Sanchirico, and Sunny Jardine estimate that protecting mangrove forests from development can reduce CO2 emissions at a cost of $4–$10 per ton, while the current market prices for carbon offsets are on the order of $10–$20. By protecting these ecosystems from development or destruction, global leaders could achieve a reduction in greenhouse gas emissions on par with taking millions of cars off the roads. This suggests that in many places mangroves are worth saving for their carbon storage potential alone.”
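The cost-effectiveness claim in that passage is simple arithmetic, worth making explicit. A minimal sketch using only the two ranges quoted above ($4 to $10 per ton to protect, $10 to $20 per ton market offset price):

```python
# Back-of-the-envelope comparison using the ranges quoted in the RFF study:
# protecting mangroves avoids CO2 at $4-$10 per ton, while carbon offsets
# trade at roughly $10-$20 per ton.
protection_cost = (4, 10)   # $ per ton of CO2 avoided (study's estimated range)
offset_price = (10, 20)     # $ per ton (quoted market range for offsets)

# Even in the worst pairing (highest cost, lowest price), protection breaks even;
# in the best pairing it clears $16 per ton of avoided emissions.
worst_margin = offset_price[0] - protection_cost[1]
best_margin = offset_price[1] - protection_cost[0]
print(worst_margin, best_margin)  # 0 16
```

In other words, at quoted prices mangrove protection is never more expensive than buying offsets, which is exactly why the article calls it a "potentially winning" strategy.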

The study by Siikamäki, Sanchirico, and Jardine appeared in the September 2, 2012, issue of the Proceedings of the National Academy of Sciences; a pre-print version is available here. In a follow-up “Commentary” in that same issue, Ken Caldeira offers a word of pragmatic warning about mangrove protection. He points out that “approximately 80% of the global ‘emissions avoidance potential’ from mangrove protection is located in the 50% of countries that rank lowest on the World Bank governance index. … Protection of mangroves provides a cost-effective means of avoiding CO2 emissions and, importantly, helps to maintain biodiversity. However, we need good policies and good institutions that can provide confidence that contracts can be enforced, displacement can be prevented, and some semblance of permanence can be a realistic expectation. It is shameful that we do not simply find the resources to protect and sustainably manage all mangrove ecosystems—and consider avoided carbon emissions to be an ancillary cobenefit.”

What else might be tried? I’m all in favor of continuing research and development on other options that might prove useful: alternative forms of energy that don’t produce carbon emissions, technologies for carbon capture and storage, making more roofs and roads white so that they are more reflective, and even more controversial proposals for geo-engineering the clouds or the ocean. But at present, it’s not clear to me how cost-effective these steps would be at a scale large enough to matter.

I sometimes find myself in the situation of spending time with people who speak very strongly about the dangers of climate change and fiercely attack those who do not trumpet these risks. I sometimes stand accused of not speaking out loudly enough, which is probably a fair complaint. I have no expertise in atmospheric science, and I do not understand at any deep level how climate models are built. However, I do accept that many top scientists believe that the risks of global climate change are real, and when confronted with risks, economics can offer useful ways of thinking about cost-effective policies to mitigate them.

But what strikes me as peculiar in these discussions is that the same folks who have been lecturing me about how I don’t blow my little trumpet loudly enough about the enormous risks of climate change become quite picky when talking about possible solutions. For them, the policy agenda for climate change is to favor energy conservation and “green” alternative energy, but little else. They often oppose exploiting new natural gas resources. They are often queasy about a carbon tax. They reflexively dislike the idea that someone might find a way to capture and store carbon, thus solving the problem of carbon emissions without any reduction in energy use. They spend little time on how carbon emissions from China and India might be brought under some semblance of control, nor on issues of how to reduce soot and methane or how to save mangrove swamps.

If the risks posed by high levels of carbon emissions and other greenhouse gases are worth addressing, and I think they are, then it’s potentially useful and cost-effective to think about addressing them from every possible direction.

The Poverty Line in Low-Income Countries

For low-income countries, the appropriate definition of “poverty” is often based on the minimum level of consumption needed to keep body and soul together. For higher-income countries, poverty is almost always set as a relative measure: that is, an amount that would be too far below the standards commonly prevailing in that society. As low-income countries become better off, they typically start raising their poverty lines. As Martin Ravallion points out in “A Relative Question” in the December 2012 issue of Finance & Development: “For example, China recently doubled its national poverty line from 90 cents a day to $1.80 (adjusted to reflect constant 2005 purchasing power). Other countries—including Colombia, India, Mexico, Peru, and Vietnam—have also recently revised their poverty lines upward.”

Ravallion and Shaohua Chen compiled a list of official poverty lines across 100 countries and compared them to consumption levels per capita in each country. For the details of how this was done, see their working paper here. But here’s the nice picture from Ravallion’s F&D article showing the results.

Here are a few patterns that jump out at me:

1) As one might expect, what a country uses as its official level of poverty rises with the level of personal consumption. Ravallion writes: “The highest line is in Luxembourg, at $43 a day, while the United States, with a similar level of average consumption to Luxembourg, has a $13-a-day line. The relativist gradient is evident as consumption levels decline. The average poverty line of the poorest 20 or so countries is $1.25 a day—which is how the World Bank’s international absolute line was set. Even among developing countries that use absolute lines, countries with higher average incomes tend to have higher real lines. Across countries it seems that poverty is indeed relative.”

2) The U.S. poverty line, relative to income, is substantially lower than the poverty lines often used by other high-income countries. For high-income countries, where to draw the poverty line is a social and political decision. Of course, it’s also fairly common for U.S. anti-poverty programs to extend benefits up to 125% or 150% or 200% or more of the “official” poverty line. Still, one suspects that what each society labels as “poverty” reveals something about how that society perceives the issue of low incomes and how it will respond to it.

3) When an economy grows rapidly, it is quite common for some regions and individuals to expand their consumption more rapidly than others. Thus, it is possible to have a situation in which people are better off in absolute terms (that is, their level of consumption has risen) but in which many of those same people feel that in relative terms they are now worse off compared with the new social norms of their society. Emerging markets like the BRIC countries (Brazil, Russia, India, and China) are all in their own ways facing the social turbulence in which reductions in absolute poverty are not matched by reductions in relative poverty.
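The absolute-versus-relative distinction in point 3 can be made concrete with a toy example. The five incomes below are hypothetical, chosen only to show that uneven growth can reduce absolute poverty while increasing relative poverty (here measured against half of median income, a common relative standard):

```python
# Toy illustration of absolute vs. relative poverty during uneven growth.
# Five hypothetical incomes; everyone gains, but the top gains much more.
before = [1.0, 1.0, 2.0, 4.0, 6.0]
after  = [1.5, 1.5, 4.0, 10.0, 15.0]

ABSOLUTE_LINE = 1.25  # fixed subsistence line, same in both periods

def absolute_poor(incomes):
    # Count people below the fixed subsistence line.
    return sum(y < ABSOLUTE_LINE for y in incomes)

def relative_poor(incomes):
    # Count people below half of median income, which moves with growth.
    line = 0.5 * sorted(incomes)[len(incomes) // 2]
    return sum(y < line for y in incomes)

print(absolute_poor(before), absolute_poor(after))  # 2 -> 0: absolute poverty falls
print(relative_poor(before), relative_poor(after))  # 0 -> 2: relative poverty rises
```

Every income rose, so absolute poverty disappears; but because the median tripled while the bottom grew only 50 percent, the relative line overtook the poorest households. That is the arithmetic behind the social turbulence described above.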

Global Manufacturing: A McKinsey View

The McKinsey Global Institute has published an intriguing report, “Manufacturing the future: The next era of global growth and innovation.” Here’s a passage that to me captures much of the overall message, along with many of the controversies about manufacturing.

“The role of manufacturing in the economy changes over time. Empirical evidence shows that as economies become wealthier and reach middle-income status, manufacturing’s share of GDP peaks (at about 20 to 35 percent of GDP). Beyond that point, consumption shifts toward services, hiring in services outpaces job creation in manufacturing, and manufacturing’s share of GDP begins to fall along an inverted U curve. Employment follows a similar pattern: manufacturing’s share of US employment declined from 25 percent in 1950 to 9 percent in 2008. In Germany, manufacturing jobs fell from 35 percent of employment in 1970 to 18 percent in 2008, and South Korean manufacturing went from 28 percent of employment in 1989 to 17 percent in 2008.

“As economies mature, manufacturing becomes more important for other attributes, such as its ability to drive productivity growth, innovation, and trade. Manufacturing also plays a critical role in tackling societal challenges, such as reducing energy and resource consumption and limiting greenhouse gas emissions. … Manufacturing continues to make outsize contributions to research and development, accounting for up to 90 percent of private R&D spending in major manufacturing nations. The sector contributes twice as much to productivity growth as its employment share, and it typically accounts for the largest share of an economy’s foreign trade; across major advanced and developing economies, manufacturing generates 70 percent of exports.”

In short, the manufacturing share of GDP and employment tends to follow an inverted-U shape, and the United States and other high-income countries are on the downward side of that inverted U, while China, India, and others are on the upward side of their own inverted U. But as we watch the relative decline of U.S. manufacturing, there is a natural concern that in the process of this economic adaptation, we might be losing an important ingredient in the broader mix of the economy, one that holds difficult-to-replace importance for productivity, innovation, and even for certain types of employment that emphasize a range of not-especially-academic skills.

Here are some figures with an international perspective on how manufacturing becomes less important as a share of the economy as the economy grows. The first figure looks at value-added in manufacturing as a share of GDP. It’s been falling for decades in high-income countries, falling more slowly in middle-income countries, and rising in low-income countries. The second figure shows the common overall pattern, without plotting points from individual countries, of the inverted-U shape for manufacturing as a share of GDP. The third figure shows the inverted-U shape for manufacturing employment as a share of total employment as per capita income rises in a sample of eight countries. Mexico and India are still on the upward part of the inverted-U trajectory, and while no two countries are exactly alike, what’s happening in the U.S. is qualitatively quite similar to what is happening in the United Kingdom, Germany, and elsewhere.

When confronted with this sort of tectonic shift in the economy, one natural reaction is a nostalgic desire to hold on to the way things used to be, but this reaction (almost by definition) is unlikely to help an economy move toward a higher future standard of living. How can we think about this shift away from manufacturing in a more nuanced way?

The McKinsey analysis builds on the idea that manufacturing isn’t monolithic. The study divides manufacturing into five categories with different traits. Here’s a figure from the report, where the gray circles next to each category report the share of total global manufacturing in that category.

The report emphasizes that the top category, chemicals, automotive, and machinery and equipment, is moderately to heavily reliant on research and development and on innovation. Regional processing industries like food processing tend to be highly automated and located near raw materials and demand, but not especially heavy on R&D. The third group, energy- and resource-intensive commodities, is often linked closely to what’s happening with energy or commodity prices. Global technology industries such as computers and electronics are heavy on R&D and tend to involve products that are small in size but quite valuable (think mobile phones), which makes them important in global trade. The final category of labor-intensive tradables, like apparel, is highly tradeable and tends to be produced where labor costs are low. These kinds of insights, spelled out in more detail in the report, suggest that an economy losing, say, output or jobs in apparel or in assembly of consumer electronics raises different concerns than one losing output or jobs in chemicals or in computers.

The very fact that responses will differ across categories of manufacturing makes it hard to give extremely concrete advice, and the McKinsey report is full of terms like “granularity” and “flexibility.” To me, these kinds of terms describe a world in which firms need to recognize that, on the demand side, the world economy and national economies are actually composed of many different and smaller markets, which have somewhat different desires for products and even for what features should be prominent on a given product (like a mobile phone or a fast food meal). And on the production side, decisions are no longer about choosing a location to build the largest factory with the cheapest labor, but instead about how to draw upon a geographically dispersed network of resources, including direct production supply chains as well as R&D, management, financial, marketing, and other resources as needed.

The McKinsey report makes the point this way: “The way footprint decisions have been made in the past, especially the herd-like reflex to chase low-cost labor, needs to be replaced with more nuanced, multifactor analyses. Companies must look beyond the simple math of labor-cost arbitrage to consider total factor performance across the full range of factor inputs and other forces that determine what it costs to build and sell products—including labor, transportation, leadership talent, materials and components, energy, capital, regulation, and trade policy.”

The role of “manufacturing” in these kinds of far-flung networks becomes a little blurry. On one side, there is a rich array of new technologies and innovations that are likely to continue transforming manufacturing, and the U.S. economy certainly should seek to play a substantial role in many of these technologies. But on the other side, implementing many of these technologies involves a blending of creativity and skill among many workers, only some of whom will actually be in direct physical contact with the machinery that produces goods and services. Here’s the McKinsey report:

“A rich pipeline of innovations promises to create additional demand and drive further productivity gains across manufacturing industries and geographies. New technologies are increasing the importance of information, resource efficiency, and scale variations in manufacturing. These innovations include new materials such as carbon fiber components and nanotechnology, advanced robotics and 3-D printing, and new information technologies that can generate new forms of intelligence, such as big data and the use of data-gathering sensors in production machinery and in logistics (the so-called Internet of Things). …

“Important advances are also taking place in development, process, and production technologies. It is increasingly possible to model the performance of a prototype that exists only as a CAD drawing. Additive manufacturing techniques, such as 3-D printing, are making prototyping easier and opening up exciting new options to produce intricate products such as aerospace components and even replacement human organs. Robots are gaining new capabilities at lower costs and are increasingly able to handle intricate work. The cost of automation relative to labor has fallen by 40 to 50 percent in advanced economies since 1990.”

The U.S. economy has some notable advantages in this emerging new manufacturing sector. Compared to many firms around the world, U.S. firms are used to the concept of responding to customers and being flexible over time in production processes. Many firms have good internal capabilities and external connections for new technology and innovation. Moreover, U.S. firms have the advantage of operating in an economy with a well-established rule of law, with a fair degree of transparency and openness, and with a background of functioning infrastructure for communications, energy, and transportation. As the constellations of global manufacturing shift and move, many U.S. firms are well-positioned to do well.

But the U.S. also faces a large and unique challenge: the enormous scale of the U.S. economy has meant that many U.S. firms could stay in the domestic market and not try very hard to compete in world markets. However, especially for manufactured goods, most of the growth of consumption in the decades ahead will be happening in the “emerging market” economies of east and south Asia, Latin America, and even eastern Europe and Africa. When U.S. manufacturing firms focus on “granularity” and “flexibility,” they need to look at locations for sales and production that are often outside their traditional geographic focus.

The BP Spill: What’s the Monetary Cost of Environmental Damage?

In April 2010, the BP Deepwater Horizon oil drilling rig suffered an explosion followed by an enormous oil spill. Here, I’ll first lay out the question of how much BP is likely to end up paying as a result of the spill, a number which is gradually being clarified by the passage of time and the evolution of lawsuits. But beyond the question of what is going to happen, economists face a controversy about how best to place a dollar value on these kinds of environmental damages, and the most recent issue of my own Journal of Economic Perspectives has a three-paper symposium on the “contingent valuation” method.

A couple of weeks ago, Attorney General Eric Holder announced at a press conference in New Orleans: “BP has agreed to plead guilty to all 14 criminal charges – admitting responsibility for the deaths of 11 people and the events that led to an unprecedented environmental catastrophe. The company also has agreed to pay $4 billion in fines and penalties. This marks both the single largest criminal fine – more than $1.25 billion – and the single largest total criminal resolution – $4 billion – in the history of the United States.”

But as Nathan Richardson of Resources for the Future points out, the criminal penalty is a small slice of what BP will end up paying: “But remember that this criminal settlement is only a small part of BP’s liability. Earlier this year, BP reached a preliminary $7.8b class settlement with a large number of private plaintiffs (fishermen, property owners, etc.) harmed by the spill. That agreement is currently under review by a federal district court judge. This is in addition to $8b in payments made to private parties who agreed not to litigate (from BP’s oil spill ‘fund’). Future payments to private parties are likely as claims on the fund are resolved or as those who were not part of the class settlement pursue separate claims. BP also claims to have paid out $14b in cleanup costs.

“But that’s not all. BP still must face civil suit from the federal government (and states) over natural resources damages. … BP also faces civil penalties under the Clean Water Act, which would quadruple from $5.5b to $21b if gross negligence is found. In other words, BP will pay out the largest criminal settlement in U.S. history and it will be only a small share of its total liability.”

I don’t have anything new to say about the parade of events leading up to the spill, nor about the halting efforts to stop the flow and start a clean-up. For details on what happened, a useful starting point is the report from the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling that was released in January 2011. From the Foreword of that report: “The explosion that tore through the Deepwater Horizon drilling rig last April 20 [2010], as the rig’s crew completed drilling the exploratory Macondo well deep under the waters of the Gulf of Mexico, began a human, economic, and environmental disaster. Eleven crew members died, and others were seriously injured, as fire engulfed and ultimately destroyed the rig. And, although the nation would not know the full scope of the disaster for weeks, the first of more than four million barrels of oil began gushing uncontrolled into the Gulf—threatening livelihoods, precious habitats, and even a unique way of life. … There are recurring themes of missed warning signals, failure to share information, and a general lack of appreciation for the risks involved. … But that complacency affected government as well as industry. The Commission has documented the weaknesses and the inadequacies of the federal regulation and oversight, and made important recommendations for changes in legal authority, regulations, investments in expertise, and management.”

In editing the Fall 2012 issue of my own Journal of Economic Perspectives, I found myself focused on a narrower issue: How does one put a meaningful economic number on widespread environmental damage? The issue has three papers focused on a method called “contingent valuation,” which involves using survey results to estimate damages. Catherine L. Kling, Daniel J. Phaneuf, and Jinhua Zhao offer an overview of the disputes and issues surrounding this method. Then, Richard Carson makes the case that contingent valuation methods have developed sufficiently to be an accurate estimating technique, while Jerry Hausman offers the skeptical view that contingent valuation surveys are so fundamentally flawed that their results should be completely disregarded. As usual, all JEP articles, from the most recent back to the first issue in 1987, are freely available online, compliments of the American Economic Association.

From an economic perspective, the fundamental difficulty here is that not all environmental damages affect economic output. A major oil spill, for example, directly affects production in industries like fishing and tourism, but it also affects birds and fish and beaches in ways that don’t show up as a drop in economic output. In the economics literature, these losses are sometimes known as “passive use value.” The notion is that even if I never visit the Gulf Coast around Louisiana and Mississippi, nor eat fish caught there, my utility can be affected by the environmental destruction that occurred. Thus, the argument goes, economic theory should take this “passive use” into account (roughly, the value that people place on the environmental damage that occurred) in thinking about lawsuits and policy choices.

The immediate objections to contingent valuation methods of setting such values are obvious: if people are just asked to place a value on environmental damage, isn’t it plausible that their answers will be untethered from reality? Richard Carson, a strong advocate of these methods, faces such skepticism head-on. He writes: “Economists are naturally skeptical of data generated from responses to survey questions—and they should be! Many surveys, including contingent valuation surveys, are inadequate.” He also argues, “The best contingent valuation surveys are among the best survey instruments currently being administered while the worst are among the worst.”

Carson emphasizes that a high-quality contingent valuation survey takes considerable care to provide what can be several dozen pages of focus-group-tested information to consider, and stresses to the respondents that the results of the survey are likely to help guide policy outcomes. In such a setting, he argues that people have the information and incentives to answer truthfully. Hausman responds that such surveys are plagued by difficulties: for example, the \”hypothetical bias\” that people tend to overstate their value when they aren\’t actually paying; or that valuations can vary according to how questions are phrased, like whether the question asks about willingness-to-pay to avoid environmental damage or willingness-to-accept that the same amount of environmental damage will be done; or that when people value, say, three projects separately or the combination of those three projects, their answers often don\’t add up. Carson discusses how those who carry out such surveys seek to deal with these issues and others. Hausman says that legislatures, regulatory agencies, and courts, relying on expert opinion, are by far a preferable way to take passive use value into account. Kling, Phaneuf, and Zhao point out that over 7,000 of these contingent valuation studies have been done in the last two decades, and they provide a background and framework for thinking about all of these issues. Of course, those who want all the ins and outs and gory details are encouraged to check out the articles themselves.

To my knowledge, no contingent valuation surveys of the costs of the BP oil spill have yet been published. But it is interesting that after the Exxon Valdez spill, the eventual settlement roughly matched the estimates of the contingent valuation study. As Richard Carson notes: \”Soon after the Exxon Valdez spill in March 1989, the state of Alaska funded a contingent valuation study, contained in Carson, Mitchell, Hanemann, Kopp, Presser, and Ruud (1992), which estimated the American public’s willingness to pay to avoid an oil spill similar to the Exxon Valdez at about $3 billion. The results of the study were shared with Exxon and a settlement for approximately $3 billion was reached, thus avoiding a long court case.\” As contingent valuation studies of the BP spill are published, it will be interesting to compare them with the amounts that BP is paying in the aftermath of the Deepwater Horizon spill.

International Capital Flows Slow Down

I\’m not sure why it\’s happening or what it means, but some OECD reports are showing that international investment flows are slowing down in late 2012, whether one looks at international merger and acquisition activity or at flows of foreign direct investment.

For example, the OECD Investment News for September 2012, written by Michael Gestrin, is titled \”Global investment dries up in 2012.\” The main focus of the report is on international merger and acquisition activity, and Gestrin writes:

\”After two years of steady gains, international investment is again falling sharply. After breaking $1 trillion in 2011, international mergers and acquisitions (IM&A) are projected to reach $675 billion in 2012, a 34% decline from 2011(figure 1) … At the same time as IM&A has been declining, firms have also been increasingly divesting themselves of international assets. As a result, net IM&A (the difference between IM&A and international divestment) has dropped to $317 billion, its lowest level since 2004 …\”

\”IM&A has declined more sharply than overall M&A activity. This is reflected in the projected drop in the share of IM&A in total M&A from 35% in 2011 to 29% in 2012 (figure 2). IM&A is declining three times faster than domestic M&A, suggesting that concerns and uncertainties specific to the international investment climate are behind the recent slide in IM&A, rather than IM&A simply following a broader downward trend.\”

The main exception to these downward trends seems to be a continuing rise in state-owned enterprises, particularly from China, carrying out more mergers and acquisitions, especially in transactions aimed at energy and mining operations in the Middle East and Africa. Here\’s one figure showing the drop in international investment, and another showing the drop as a share of total M&A activity.

In the October 2012 issue of FDI in Figures, a similar pattern emerges for foreign direct investment–which includes merger and acquisition activity. \”According to preliminary estimates, global foreign direct investment (FDI) flows continued shrinking in the second quarter of 2012 and declined by -10% from the previous quarter (-14% from a year earlier) to around USD 303 billion, similar to the value of FDI flows recorded in Q2 2010. The stock of global FDI at end-2011 was estimated at USD 20.4 trillion.\” As with M&A activity, there is something of a bounce-back in FDI between 2010 and 2011–although there is some fluctuation as well–but the first two quarters of 2012 are showing a decline. Here are graphs showing inflows and outflows of FDI for the world as a whole, as well as for the OECD countries, the G-20, and the EU (which are subgroups with overlapping memberships!).

In absolute dollars, China and the U.S. economy dominate these FDI flows, with China receiving about twice as much FDI in 2012 as the U.S. economy: \”As from the beginning of 2012, China became the first destination for FDI, recording USD 64 billion in Q1 2012 and USD 54 billion in Q2 2012. Corresponding figures for the United States are USD 22 billion and USD 33.5 billion, respectively.\” Other major countries for FDI inflows are France, the Netherlands, the United Kingdom, Brazil, and India. As for outflows: \”Next to the United States outflows of USD 79 billion in Q2 2012 (-32% decrease from Q1 2012), the second largest investing economy was Japan at USD 37 billion (or 61% increase) followed by Belgium at USD 16 billion (or 130% increase), China at USD 13 billion (or -10% decrease), Italy at USD 12.2 billion (or 16% increase), France at USD 12.1 billion (or -29% decrease) and Germany at USD 12.1 billion (or -66% decrease).\”

As I said at the start, I\’m not sure what to make of these patterns. Perhaps they are just random fluctuation that will sort itself out. But another plausible interpretation would involve \”concerns and uncertainties specific to the international investment climate,\” as Gestrin put it. Without trying to itemize those concerns here across the euro area, the U.S., China, Russia, India, and elsewhere, we may be seeing a movement toward a situation in which exporting and importing to other countries, without seeking a management interest in firms in those countries, looks relatively more attractive than it did a few years ago.

(For those shaky on their definitions, the report defines \”foreign direct investment\” this way: \”Foreign Direct Investment (FDI) is a category of investment that reflects the objective of establishing a lasting interest by a resident enterprise in one economy (direct investor) in an enterprise (direct investment enterprise) that is resident in an economy other than that of the direct investor. The lasting interest implies the existence of a long-term relationship between the direct investor and the direct investment enterprise and a significant degree of influence (not necessarily control) on the management of the enterprise. The direct or indirect ownership of 10% or more of the voting power of an enterprise resident in one economy by an investor resident in another economy is the statistical evidence of such a relationship.\”)

The Lucas Critique

The Society for Economic Dynamics has a short and delightful interview with Robert Lucas in the November 2012 issue of its newsletter, Economic Dynamics. Lucas, of course, received the Nobel prize in economics in 1995 and is, among other distinctions, the originator of the eponymous \”Lucas critique,\” which the Nobel committee described in this way:

 \”The \’Lucas critique\’ – Lucas\’s contribution to macroeconometric evaluation of economic policy – has received enormous attention and been completely incorporated in current thought. Briefly, the \’critique\’ implies that estimated parameters which were previously regarded as \’structural\’ in econometric analysis of economic policy actually depend on the economic policy pursued during the estimation period (for instance, the slope of the Phillips curve may depend on the variance of non-observed disturbances in money demand and money supply). Hence, the parameters may change with shifts in the policy regime. This is not only an academic point, but also important for economic-policy recommendations. The effects of policy regime shifts are often completely different if the agents\’ expectations adjust to the new regime than if they do not. Nowadays, it goes without saying that the effects of changing expectations should be taken into account when the consequences of a new policy are assessed – for instance, a new exchange rate system, a new monetary policy, a tax reform, or new rules for unemployment benefits.

\”When Lucas\’s seminal article (1976) was published, practically all existing macroeconometric models had behavioral functions that were in so-called reduced form; that is, the parameters in those functions might implicitly depend on the policy regime. If so, it is obviously problematic to use the same parameter values to evaluate other policy regimes. Nevertheless, the models were often used precisely in that way: Parameters estimated under a particular policy regime were used in simulations with other policy rules, for the purpose of predicting the effect on crucial macroeconomic variables. With regime-dependent parameters, the predictions could turn out to be erroneous and misleading.\”

Perhaps it\’s useful to add a specific example here. Say that we are trying to figure out how much the Federal Reserve can boost the economy during a recession by cutting interest rates. We try to calculate a \”parameter,\” that is, an estimate of how much cutting the interest rate will boost lending and the economy. But what if it becomes widely expected that if the economy slows, the Federal Reserve will cut interest rates? Then it could be, for example, that when the economy shows signs of slowing, everyone begins to expect lower interest rates, and slows down their borrowing immediately because they are waiting for the lower interest rates to arrive–thus bringing on the threatened recession. Or it may be that because borrowers are expecting the lower interest rates, they have already taken those lower rates into account in their planning, and thus don\’t need to make any change in plans when those lower interest rates arrive. The key insight is that the effects of policy depend on whether that policy is expected or unexpected–and in general how the policy interacts with expectations. The parameters for effects of policy estimated under one set of expectations may well not apply in a setting where expectations differ.

As the Nobel committee noted more than a decade ago, this general point has now been thoroughly absorbed into economics. Thus, I was intrigued to see Lucas note that the phrase \”Lucas critique\” has become detached from its original context in a way that can make it less useful as a method of argument. Here\’s Lucas in the recent interview:

\”My paper, \”Econometric Policy Evaluation: A Critique\” was written in the early 70s. Its main content was a criticism of specific econometric models—models that I had grown up with and had used in my own work. These models implied an operational way of extrapolating into the future to see what the \”long run\” would look like. … Of course every economist, then as now, knows that expectations matter but in those days it wasn\’t clear how to embody this knowledge in operational models. … But the term \”Lucas critique\” has survived, long after that original context has disappeared. It has a life of its own and means different things to different people. Sometimes it is used like a cross you are supposed to use to hold off vampires: Just waving it at an opponent defeats him. Too much of this, no matter what side you are on, becomes just name calling.\”

Lucas offers some lively observations on dynamic stochastic general equilibrium models, differences across business cycles, and microfoundations in macroeconomic analysis. But his closing comment in particular gave me a smile. In answer to a question about the economy being in an \”unusual state,\” Lucas answers:  \”`Unusual state\’? Is that what we call it when our favorite models don\’t deliver what we had hoped? I would call that our usual state.\”

Why Doesn\’t Someone Undercut Payday Lending?

A payday loan works like this: The borrower receives an amount that is typically between $100 and $500. The borrower writes a post-dated check to the lender, and the lender agrees not to cash the check for, say, two weeks. No collateral is required: the borrower often needs to show an ID, a recent pay stub, and maybe a statement showing that they have a bank account. The lender charges a fee of about $15 for every $100 borrowed. Paying $15 for a two-week loan of $100 works out to an astronomical annualized rate of about 390%. But because the payment is a \”fee,\” not an \”interest rate,\” it does not fall afoul of state usury laws. A number of states have passed legislation to limit payday loans, either by capping the maximum amount, capping the interest rate, or banning them outright.
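The roughly 390% figure is just a simple annualization of the fee; here is a quick sketch of the arithmetic, using the illustrative numbers from the paragraph above:

```python
# Annualizing a payday loan fee: $15 per $100 borrowed for a 14-day term.
# Simple (non-compounded) annualization, using the figures cited above.
fee = 15.0          # dollars charged per loan
principal = 100.0   # dollars borrowed
term_days = 14      # length of the loan

period_rate = fee / principal           # 0.15 per two-week period
apr = period_rate * (365 / term_days)   # about 3.91, i.e. roughly 390% per year

print(f"{apr:.0%}")  # 391%
```

If anything, this simple figure understates the cost: compounding a 15% fee every two weeks, as a borrower who rolls the loan over repeatedly would experience it, pushes the effective annual rate far higher still.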

But for those who think like economists, complaints about price-gouging or unfairness in the payday lending market raise an obvious question: If payday lenders are making huge profits, then shouldn\’t we see entry into that market from credit unions and banks, which would drive down the prices of such loans for everyone? Victor Stango offers some argument and evidence on this point in \”Are Payday Lending Markets Competitive?\”, which appears in the Fall 2012 issue of Regulation magazine.
Stango writes:

\”The most direct evidence is the most telling in this case: very few credit unions currently offer payday loans. Fewer than 6 percent of credit unions offered payday loans as of 2009, and credit unions probably comprise less than 2 percent of the national payday loan market. This “market test” shows that credit unions find entering the payday loan market unattractive. With few regulatory obstacles to offering payday loans, it seems that credit unions cannot compete with a substantively similar product at lower prices.

\”Those few credit unions that do offer a payday advance product often have total fee and interest charges that are quite close to (or even higher than) standard payday loan fees. Credit union payday loans also have tighter credit requirements, which generate much lower default rates by rationing riskier borrowers out of the market. The upshot is that risk-adjusted prices on credit union payday loans might be no lower than those on standard payday loans.\”

The question of whether payday lending should be restricted can make a useful topic for discussions or even short papers in an economics class. The industry is far more prevalent than many people recognize. As Stango describes:

\”The scale of a payday outlet can be quite small and startup costs are minimal compared to those of a bank. … They can locate nearly anywhere and have longer business hours than banks. … There are currently more than 24,000 physical payday outlets; by comparison there are roughly 16,000 banks and credit unions in total (with roughly 90,000 branches). Many more lenders offer payday loans online. Estimates of market penetration vary, but industry reports suggest that 5–10 percent of the adult population in the United States has used a payday loan at least once.\”

Payday lending fees do look uncomfortably high, but those with low incomes are often facing hard choices. Overdrawing a bank account often has high fees, as does exceeding a credit card limit. Having your electricity or water turned off for non-payment often leads to high fees, and not getting your car repaired for a couple of weeks can cost you your job.

Moreover, such loans are risky to make. Stango cites data that credit unions steer away from making payday loans because of their riskiness, and instead offer only much safer loans that have lower costs to the borrower, but also have many more restrictions, like credit checks, or a longer application period, or a requirement that some of the \”loan\” be immediately placed into a savings account. Credit unions may also charge an \”annual\” fee for such a loan–but for someone taking out a short-term loan only once or twice in a year, whether the fee is labelled as \”annual\” or not doesn\’t affect what they pay. Indeed, Stango cites a July 2009 report from the National Consumer Law Center that criticized credit unions for offering \”false payday loan `alternatives\’\” that actually cost about as much as a typical payday loan.

Stango also cites evidence from his own small survey of payday loan borrowers in Sacramento, California, that many of them prefer the higher fees and looser restrictions on payday loans to the lower fees and tighter restrictions common on similar loans from credit unions. Those interested in a bit more background might begin with my post from July 2011, \”Could Restrictions on Payday Lending Hurt Consumers?\” and the links included there.

Was Curbside Recycling the Invention of Beverage Companies?

We often think of programs like curbside recycling as driven by a pure environmentalist agenda. But Bartow J. Elmore makes an intriguing argument that these programs were passed in substantial part because of pressure from U.S. beverage makers, who were trying to address a public relations nightmare and to increase their profits. His essay, \”The American Beverage Industry and the Development of Curbside Recycling Programs, 1950-2000,\” appears in the Autumn 2012 issue of the Business History Review (vol. 86, number 3, pp. 477-501). This journal isn\’t freely available on-line, but many in academia will have access through library subscriptions. From the abstract:

\”Many people today consider curbside recycling the quintessential model of eco-stewardship, yet this waste-management system in the United States was in many ways a polluter-sponsored initiative that allowed corporations to expand their productive capacity without fixing fundamental flaws in their packaging technology. For the soft-drink, brewing, and canning industries, the promise of recycling became a powerful weapon for combating mandatory deposit bills and other source-reduction measures in the 1970s and 1980s.\” 

As Elmore tells it, the story unfolds like this: For much of the 20th century, soft drink and beer companies shipped bottles. Then local bottling companies filled the bottles with beverages. The bottles included a deposit that was often 1 or 2 cents for returning them. Thus, the local bottling companies collected the empties, and washed and reused them. Elmore cites evidence that in the late 1940s, 96% of all soft drink bottles were returned, and a given bottle was often used 20-30 times before becoming chipped or broken.

But after Prohibition, as beer companies rebuilt their national sales networks, they started turning away from local bottlers, instead brewing at larger centralized facilities and shipping beer in steel cans. Pepsi-Cola started shipping soft drinks in steel cans in 1953, and Coca-Cola followed in 1955. For the soft drink companies, there was a long-standing belief that they could raise profits if they could find a way to reduce the number of local bottlers: sure enough, the number of local bottlers fell from 4,500 in 1960 to about 3,000 by 1972.

But there was a problem. The steel cans, and then aluminum cans, were \”one-way\”–that is, they weren\’t washed and reused. To put it another way, they were a high volume of long-lasting garbage. People protested, and state legislatures began to make ominous noises about taxing or banning nonreturnable drink containers. Industry banded together in the 1950s to create the first national anti-litter association: Keep America Beautiful. But promotional ads to encourage picking up litter weren\’t enough, and by the late 1960s and early 1970s, the U.S. Congress along with various states was again contemplating a ban on nonreturnable containers.

And so the beverage and canning companies, along with garbage giants like Waste Management and Browning-Ferris and scrap metal companies like Hugo Neu, formed a coalition with environmental groups like the Sierra Club and the National Wildlife Federation to push for federal grants that would help set up recycling programs. As Elmore writes: \”The beverage industry positioned itself as the keystone of the recycling system.\” When anyone argued for reusable drink containers, a common response was that doing so would cripple the recycling system, and that it would cost jobs in the can-making industry.

States stopped passing laws requiring mandatory deposits on cans and bottles: since 1986, only Hawaii has passed such a law. Instead, taxpayers and ratepayers at the federal and state level paid for curbside recycling. Elmore writes: \”Taxpayers were taking on the majority of the cost of collecting, processing, and returning corporate byproducts to producers, and industry remained exempt from disposal fees that might have been used to pay for expensive recycling systems. More critically, government-mandated source-reduction and polluter-pays programs had been discredited as viable methods for reducing the nation\’s pollution problem.\”

Compared to the 1940s, when 96 percent of bottles were washed and reused, often a couple of dozen times, where do we stand today? Elmore cites evidence that in the mid-2000s, maybe 30-40 percent of cans and plastic bottles were recycled.

I wouldn\’t want to try to turn the clock back to the days of rewashing and reusing bottles. But it\’s not at all obvious to me that curbside recycling is doing the job. Ten states have laws requiring deposits on cans and bottles, according to the lobbyists at BottleBill.org. If we want people to be serious about recycling, a deposit of 5-10 cents for returning cans and bottles is likely to be a more effective tool than curbside recycling.

An Economist Chews Over Thanksgiving

(Originally appeared Thanksgiving 2011.)

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there\’s anything wrong with that.

The last time the U.S. Department of Agriculture did a detailed \”Overview of the U.S. Turkey Industry\” appears to be back in 2007. Some themes about the turkey market waddle out from that report on both the demand and supply sides.

On the demand side, the quantity of turkey consumed rose dramatically from the mid-1970s to the mid-1990s, but since then has declined somewhat. The figure below is from the USDA study, but more recent data from the Eatturkey.com website run by the National Turkey Federation report that U.S. producers raised 244 million turkeys in 2010, so the decline has continued in the last few years. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.


On the production side, the National Turkey Federation explains: \”Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing – from breeding through delivery to retail.\” However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys.  Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

\”In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent.\”
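The 41 percent figure in that last quoted paragraph is simply the percentage increase in average bird weight; checking it takes one line:

```python
# Verifying the USDA efficiency-gain figure from the quoted passage:
# average turkey weight rose from 20.0 lbs (1986) to 28.2 lbs (2006).
weight_1986 = 20.0
weight_2006 = 28.2

gain = (weight_2006 - weight_1986) / weight_1986
print(f"{gain:.0%}")  # 41%
```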

U.S. agriculture is full of these kinds of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a \”turkey\” as a product that doesn\’t have a lot of opportunity for technological development, but clearly I\’m wrong. Here\’s a graph showing the rise in size of turkeys over time.

The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here\’s a list of top turkey producers in 2010 from the National Turkey Federation.


For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant \”basket\” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and for 26 years, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The top line of this graph shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. One could use the underlying data here to calculate an inflation rate: that is, the increase in nominal prices for the same basket of goods was 13% from 2010 to 2011. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate over the last 26 years. But in 2011, the rise in the price of the Classic Thanksgiving Dinner, like the rise in food prices generally, outstripped the overall inflation rate.
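The nominal-versus-real adjustment behind those two lines is a standard deflation exercise. Here is a sketch of the method; the dinner prices and CPI values below are hypothetical placeholders (chosen only so the basket inflation comes out near the 13% mentioned above), not the Farm Bureau\’s actual data:

```python
# Deflating a nominal price series into real (inflation-adjusted) terms.
# All numbers below are hypothetical placeholders, not actual survey data.
nominal = {2010: 43.50, 2011: 49.16}   # hypothetical basket cost, dollars
cpi = {2010: 218.0, 2011: 224.9}       # hypothetical overall price index

# Inflation in the dinner basket itself (about 13% with these numbers):
basket_inflation = nominal[2011] / nominal[2010] - 1

# The 2011 price restated in 2010 dollars -- the "real" line on the graph:
real_2011 = nominal[2011] * cpi[2010] / cpi[2011]

print(f"basket inflation: {basket_inflation:.0%}")
print(f"real 2011 price: ${real_2011:.2f}")
```

When the real line stays flat, as it roughly does in the Farm Bureau series, the basket is inflating at about the same rate as the overall price level.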

Thanksgiving is my favorite holiday. Good food, good company, no presents–and all these good topics for conversation. What\’s not to like?