Does Marriage Help Children?

There aren't a whole lot of questions more touchy than whether children of married couples do better than children of unmarried couples or single parents. Indeed, a common response to this question is to evade it. See if you can spot the logical flaw in this syllogism: Proposition 1: Some marriages can be really bad. Proposition 2: Some single parents can do really well. Conclusion: There's no way to judge whether marriage helps children.

It's also true that providing persuasive evidence on the question of how marriage affects children is genuinely difficult. Just comparing children with married and unmarried parents is clearly not adequate, because married and unmarried parents are likely to differ in lots of ways–and compared with these other systematic differences, whether they are married may not be the most important factor in their parenting performance. Thus, I was delighted to see the Fall 2015 issue of The Future of Children, which offers an overview and eight essays on the theme "Marriage and Child Well-being Revisited." For the impatient, here's the punch line from the overview by Sara McLanahan and Isabel Sawhill (footnotes omitted):

Marriage is on the decline. Men and women of the youngest generation are either marrying in their late twenties or not marrying at all. Childbearing has also been postponed, but not as much as marriage. The result is that a growing proportion of children are born to unmarried parents—roughly 40 percent in recent years, and over 50 percent for children born to women under 30. Many unmarried parents are cohabiting when their child is born. Indeed, almost all of the increase in nonmarital childbearing during the past two decades has occurred to cohabiting rather than single mothers. But cohabiting unions are very unstable, leading us to use the term “fragile families”
to describe them. About half of couples who are cohabiting at their child’s birth will
split by the time the child is five. Many of these young parents will go on to form new
relationships and to have additional children with new partners. The consequences of
this instability for children are not good. Research increasingly shows that family
instability undermines parents’ investments in their children, affecting the children’s
cognitive and social-emotional development in ways that constrain their life chances.

How does one tackle the question of how marriage affects children in a way that opens up some insights? With no disrespect to the other essays in the volume, here are a few that especially caught my eye.

David C. Ribar asks: "Why Marriage Matters for Child Wellbeing." Ribar generates a long list of ways in which marriage might help children: for example, marriage on average may be associated with greater income, assets, and wealth; greater access to credit and health insurance; availability of time; broader social networks; economies of scale and specialization in household production and family living; different patterns of inter-family bargaining; and less family instability, complexity, dysfunction, and conflict. If this is a complete list of the pathways through which marriage helps children, then a statistical analysis adjusting for these factors should account for all of the differences between children who grow up with married or unmarried parents. Testing this hypothesis is difficult, because data on a number of these pathways is limited. Ribar also points out that a number of these factors, and especially the economic ones, are somewhat amenable to policy interventions. But Ribar's judgment is that while these factors make a difference, they do not account for all of the difference in how marriage seems to affect children. He writes:

While interventions that raise incomes, increase parental time availability, provide alternative services, or provide other in-kind resources would surely benefit children, these are likely to be, at best, only partial substitutes for marriage itself. The advantages of marriage for children appear to be the sum of many, many parts.

Shelly Lundberg and Robert A. Pollak take a different approach in "The Evolving Role of Marriage: 1950–2010." They argue that the gains from marriage are shifting. Some decades back, marriage was about a division of household labor. Now, they argue, marriage is often about a commitment to make a joint investment in raising children, and those with higher levels of income and education are better positioned to make that commitment: waiting to have children until after marriage, at a time when they are ready to commit to the joint project of marriage and raising children. From the summary of their article:

The primary source of gains from marriage has shifted from production of household services to investment in children. For couples whose resources allow them to invest intensively in their children, marriage provides a commitment mechanism that supports such investment. For couples who lack the resources to invest intensively in their children, on the other hand, marriage may not be worth the cost of limited independence and potential mismatch.

Gary J. Gates tackles the subject of "Marriage and Family: LGBT Individuals and Same-Sex Couples," which over time should offer some new evidence on the effects of marriage on children. From the summary:

After carefully reviewing the evidence presented by scholars on both sides of the issue, Gary Gates concludes that same-sex couples are as good at parenting as their different-sex counterparts. Any differences in the wellbeing of children raised in same-sex and different-sex families can be explained not by their parents' gender composition but by the fact that children being raised by same-sex couples have, on average, experienced more family instability, because most children being raised by same-sex couples were born to different-sex parents, one of whom is now in the same-sex relationship. That pattern is changing, however. … Compared to a decade ago, same-sex couples today may be less likely to have children, but those who do are more likely to have children who were born to same-sex parents who are in stable relationships. In the past, most same-sex couples raising children were in a cohabiting relationship. With same-sex couples' right to marry now secured throughout the country, the situation is changing rapidly.

Finally, Daniel Schneider takes an interesting tack in "Lessons Learned from Non-Marriage Experiments." He looks at a range of "social experiments"–that is, research studies in which people were randomly assigned to one group that received some kind of program or benefit, and thus could be compared to the "control" group that didn't get the program or benefit. This methodology is widely recognized as a powerful one, and these kinds of studies have been done in lots of areas, including projects in which the randomly chosen family received support for early childhood education, human capital development, workforce training, and income support. These studies were not designed to study marriage and children, but many of them collected data on marriage as part of the overall research effort. From the summary:

Schneider describes each intervention in detail, discussing its target population, experimental treatment, evaluation design, economic effects, and, finally, any effects on marriage or cohabitation. Overall, he finds little evidence that manipulating men’s economic resources increased the likelihood that they would marry, though there are exceptions. For women, on the other hand, there is more evidence of positive effects.

The evidence throughout this volume is inevitably limited and contingent. But it seems to me to support an argument that marriage (on average, and of course with exceptions) is likely to be more than the sum of a list of ingredients like income and time. Getting married sets up a day-to-day context of interactions, expectations, and responsibilities that over time will often affect how you behave in a wide variety of contexts, including how you act when it comes to raising children, and of course in other ways as well.

Saving Social Security: A Policy Menu

Back in 1981, actuarial projections foretold that Social Security could go broke as early as August 1983. A bipartisan commission, led by Alan Greenspan, was appointed by President Reagan and Congressional leaders to offer recommendations. Under political pressure from all sides, the commission process eventually produced a set of proposals to raise the Social Security payroll tax, phase in a later retirement age, and make some technical changes to benefit formulas that had the effect of trimming the future growth of benefits. These changes allowed the Social Security trust fund to start building up, with the idea that when the baby boom generation started to hit retirement age around 2010, there would be enough funding on hand. But it was apparent even back in the later part of the 1980s that while the Greenspan commission agreement had bought a half-century or so of solvency for the system, Social Security might well need revisiting in the second or third decade of the 21st century.

Well, the retirement of the baby boomers is underway and the time for that next round of Social Security changes is arriving. The Congressional Budget Office has a useful report out, Social Security Policy Options, 2015 (December 15, 2015), which lists the menu of options. As I read the report, which is so similar to other reports of its kind that I've been seeing for the last 15 years or more, I find myself thinking that there should be a political opportunity here. In the run-up to the 2016 elections, what party or politician is willing to look at the menu of options, pick a few of them, and then claim credit for taking the lead on fixing Social Security?

For background, here's the CBO figure showing the currently projected path for Social Security.

A few points about the diagram are worth noticing.

First, from 1985 up into the 2000s, the revenues going into the Social Security system exceeded the outlays. This was one legacy of the Greenspan commission reforms–that is, building up a larger trust fund.

Second, you can see outlays start rising sharply around 2010. Part of this change was workers in their 60s who were bushwhacked by the Great Recession and ended up going on Social Security sooner than they had planned. But you can also see that benefits paid (as a share of GDP) keep rising up to about 2035, when the retirement of the boomer generation will have run its course.

Third, notice that the system is now paying out more in benefits each year than it is collecting in taxes. It can do this because of the trust fund that was accumulated earlier.

Fourth, the trust fund runs out of money around 2034, which is why the solid line in the figure drops down at that point, showing that if the system then relies only on the taxes it is taking in, benefits will have to be substantially lower than currently promised.

Fifth, the gap between benefits and receipts doesn't change much after about 2035. This tells you that the Social Security problem is essentially a one-time problem, occurring as a result of the retirement of the boomer generation. If we can enact a series of reforms that moves up the receipts line and moves down the benefits line, then after about 2035 the system can be fairly stable for decades into the future.

So what does the menu of options look like? The CBO report lists 36 options, but many of them would have only a relatively small effect in addressing the overall problem. Here's a table that you may need to enlarge to read, but here's the gist of it. The lines on the right-hand side show what portion of the shortfall in Social Security over the next 75 years would be addressed by a given proposal.

The key point is that there are lots of proposals that would address major chunks of the problem.

For example, on the tax side one might choose to phase in an increase of two percentage points in the payroll tax over the next decade, or to raise the amount of income covered by the payroll tax over the next 10 years up to about $320,000. Either step, assuming no corresponding change in benefits, would solve about 40% of the long-run financing gap for Social Security.

Keep phasing in a later age of full retirement. For example, if the full retirement age is gradually raised by two months per birth year, it would eventually reach age 70 for workers who were born in 1978–and who are turning 70 in 2048. This step would solve about 30% of the long-run financing gap for Social Security.
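The phase-in arithmetic can be sketched quickly. Here is a minimal illustration; note that the starting point–a current-law full retirement age of 67 for workers born in 1960–is my assumption for the sketch, not a detail taken from the CBO report:

```python
# Sketch of a two-months-per-birth-year phase-in of the full retirement age.
# Assumption: the schedule starts from age 67 for workers born in 1960.

def full_retirement_age_months(birth_year):
    """Hypothetical full retirement age, in months, under the phase-in."""
    base_year = 1960
    base_age_months = 67 * 12   # assumed starting point: age 67
    cap_months = 70 * 12        # the phase-in stops at age 70
    if birth_year <= base_year:
        return base_age_months
    return min(base_age_months + 2 * (birth_year - base_year), cap_months)

# Workers born in 1978 are the first cohort to reach a full retirement
# age of 70, and they turn 70 in 1978 + 70 = 2048, matching the text.
print(full_retirement_age_months(1978) // 12)   # 70
print(1978 + full_retirement_age_months(1978) // 12)  # 2048
```

Under these assumptions, each birth year after 1960 adds two months, so the 18 cohorts from 1961 through 1978 add the 36 months needed to move from 67 to 70.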

Then tweak the benefits formula in a few ways. Given longer life expectancies and work histories, it makes sense to base the Social Security benefits formula on your top 40 years of earnings, rather than on the top 35 years. You could also use "progressive price indexing," where those receiving lower levels of benefits would see the same increases as in current law, but those receiving higher benefits would see slightly slower increases over time.

Some combination of tweaks like this would raise enough money so that Social Security could both address its long-term financial troubles and even have something left over to raise the benefits of some of the poorest people receiving Social Security and keep them above the poverty line.  

In short, fixing Social Security isn't rocket science. If you don't like my suggestions, pick your own from the lists above. But no party or politician seems willing even to talk in these terms. The political problem seems to be that neither party is content with just fixing Social Security as it stands.

A lot of Republicans would like to see part or all of Social Security converted to a system of private retirement accounts. A lot of Democrats prefer a set of changes that would increase Social Security taxes on those with higher income levels and use the funding to pay the existing promised benefits–perhaps with some extra payments to those with the lowest incomes. Because this approach puts all the costs on those with higher incomes, but doesn't raise the benefits those people would receive, it would shift Social Security away from being a (mostly) contributory retirement system with a modest degree of redistribution to a system with a much heavier degree of redistribution. In short, both sides are taking turns either ignoring the issue or else pushing for their preferred changes at the expense of just fixing what's broke.

The Greenspan commission reforms back in the early 1980s only got traction when the system was scheduled to go broke in two years. Maybe our fractured political system is incapable of addressing this issue until the trust fund has less than two years to run. But I do wonder if there wouldn't be a political advantage to some party or politician from saying: "Here's a common-sense approach to fixing Social Security. Sure, it doesn't make all the changes I'd prefer to see, but it will work, and it will bring at least another half-century or more of solvency."

World GDP is Falling–If Measured at Market Exchange Rates

IMF statistics show that world GDP fell 4.9% from 2014 to 2015, which is almost as severe a drop as occurred from 2008 to 2009. Check out the bottom rows of Table A1 in the October 2015 issue of the World Economic Outlook, and you find that world GDP fell from $77.2 trillion in 2014 to $73.5 trillion in 2015. Sure, the world economy hasn't been booming in the last year or so. But did we really just experience another global recession like in 2009? It seems implausible. So what's going on?

There are two plausible explanations, one of which seems to have a little more oomph than the other. Peter A.G. van Bergeijk has argued that some of the explanation likely lies in inadequacies of the IMF statistical system. As he points out, in a conceptual sense any exports from one country must be imports to another country–so the official statistics gathered by each country should add up in a way that global exports are equal to global imports. However, when the IMF adds up its statistics, it finds that total world exports exceed total world imports by $206 billion. This factor alone isn't sufficient to explain a $3.7 trillion drop in global GDP, but it does suggest that there are some problems in the underlying statistics.

The other explanation, courtesy of Maurice Obstfeld, Oya Celasun, Mandy Hemmati, and Gian Maria Milesi-Ferretti, emphasizes that this decline in world GDP from 2014 to 2015 is based on a calculation that converts the GDP of each country into US dollars using market exchange rates. The problem arises because in the first nine months of 2015, the foreign exchange value of the US dollar rose by 13%. When the US dollar becomes "stronger" and can buy more foreign currency, it necessarily implies that the currencies of other countries are "weaker" and buy fewer US dollars. Thus, a stronger dollar means that when the IMF converts the GDP of other countries into US dollars, those GDPs will look smaller.

For one vivid example, the GDP of Russia was $1,861 billion in 2014, as measured by converting Russia's GDP in rubles to US dollars at the 2014 exchange rate. However, the US dollar increased a whopping 57% against the Russian ruble in 2015. So when we take the GDP of Russia in 2015 as measured in rubles, and convert it into US dollars using the very different 2015 exchange rate, Russia's GDP in 2015 was $1,236 billion–down about one-third. Obviously, this decline in Russia's GDP isn't mostly about any change in goods and services produced in Russia's economy. It's about using a different exchange rate for converting between rubles and US dollars.
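The mechanics of that conversion effect can be checked with a quick calculation. The $1,861 billion and 57% figures are from the text; holding Russia's ruble-denominated GDP constant is a simplifying assumption that isolates the pure exchange-rate effect (in reality, ruble-denominated GDP and prices moved too, which is why the actual 2015 figure of $1,236 billion differs somewhat):

```python
# How much does a 57% dollar appreciation alone shrink dollar-measured GDP?
gdp_2014_usd = 1861          # Russia's GDP, billions of USD, at 2014 rates
dollar_appreciation = 0.57   # US dollar rose 57% against the ruble in 2015

# The same ruble GDP, converted at the weaker 2015 ruble exchange rate:
gdp_2015_usd_mechanical = gdp_2014_usd / (1 + dollar_appreciation)
decline = 1 - gdp_2015_usd_mechanical / gdp_2014_usd

print(round(gdp_2015_usd_mechanical))  # 1185 (near the reported $1,236 billion)
print(round(decline, 2))               # 0.36, i.e. roughly one-third
```

In other words, the exchange-rate swing by itself accounts for nearly all of the reported one-third drop in Russia's dollar-measured GDP.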
Indeed, if you look at the very bottom row of Table A1 in the October 2015 issue of the World Economic Outlook, you see an estimate of world GDP that is based on "purchasing power parity" exchange rates, which are exchange rates calculated by the International Comparison Program at the World Bank. The idea is to look at what a currency can actually buy–that is, its "purchasing power"–and to calculate what exchange rate would lead to equal purchasing power across currencies. Exchange rates are much more volatile than prices of goods and services (like the 13% rise in the US dollar vs. currencies in the rest of the world in 2015, or the 57% rise in the US dollar vs. the Russian ruble). Thus, when PPP exchange rates are used to convert GDP to US dollars, the result doesn't leap up and down with market exchange rates. Using PPP exchange rates, world GDP increased from $108.7 trillion in 2014 to $113.1 trillion in 2015. (Those who want some additional discussion of purchasing power parity exchange rates might begin here.)
This explanation addresses the specific issue in IMF statistics about world GDP from 2014 to 2015, but it's important to remember that there's a bigger issue here. Intro textbooks explain that one of the uses of money is as a "yardstick" so that we can compare values of goods and services and work and saving using a single measure. The difference between market exchange rates and purchasing power parity exchange rates is an especially vivid example of how the yardstick of money can be misleading.

But it's also true that the process for calculating the PPP exchange rate is a difficult one, full of underlying assumptions. In 2010, recent Nobel laureate Angus Deaton devoted his Presidential Address to the American Economic Association (freely available on-line here) to detailing the "weak theoretical and empirical foundations" of such measurements. When the PPP exchange rates are recalculated and readjusted every few years, the changes are often quite large–which confirms that the PPP calculations should be treated as having a substantial margin of error. For purposes of comparing world GDP from one year to the next, the PPP exchange rate is probably more accurate than the market exchange rate–but there's no reason to think that the PPP exchange rate is exactly right, either.

Domestically, we typically refer to US output as measured in money terms of US dollars, but a national-level GDP statistic doesn't take into account regional differences, like how higher oil prices are a positive for oil-producing regions but not for others, or how housing prices can vary substantially between states and between urban and rural areas. Measurement based on the common yardstick of money is a shortcut that is often useful and productive. But of course, what ultimately matters for people is not value as expressed in money terms, but rather the quantities of goods and services that can be consumed, along with hours worked.

New Rules for Workers in the Gig Economy?

US labor law divides workers into "employees," who are entitled to the coverage of certain laws like those relating to workers' compensation, overtime pay, and the right to unionize, and "independent contractors," who are not covered by these laws. But a variety of jobs in what is sometimes called the "gig economy" don't fall neatly into either category. Someone who drives for Uber on an ongoing basis, and thus makes their availability known via Uber's computer system, follows pricing and service guidelines laid down by Uber, and participates in Uber's various rating systems, doesn't quite seem like an "independent contractor." But given that this worker is essentially free to set their own hours, and is not overseen by a supervisor, it doesn't quite seem like a standard employer-employee relationship, either.

Rather than taking the round peg/square hole approach and trying to classify gig economy jobs as either "employees" or "independent contractors," it may be more useful to think about creating an additional legal category. Seth D. Harris and Alan B. Krueger offer some thoughts in "A Proposal for Modernizing Labor Laws for Twenty-First-Century Work: The 'Independent Worker,'" published in December 2015 by the Hamilton Project of the Brookings Institution. They propose a third legal category, which they call the "independent worker," to fall between the legal categories of employees and independent contractors. (In passing, it seems to me that there must be an alternative name for this group that would differentiate it more clearly from "independent contractors." I like "gig workers," myself, but I'm open to suggestions.)

Harris and Krueger point out that the current legal standard for distinguishing between "employees" and "independent contractors" involves nine different distinctions–and these distinctions are made in different ways in the Fair Labor Standards Act, the Employee Retirement Income Security Act (ERISA), in tax law, and in various court decisions about all of the above. The nine distinctions are:

"Role of work: Is the work performed integral to the employer's business?
Skills involved: Is the work not necessarily dependent on special skills?
Investment: Does the employer provide the necessary tools and/or equipment and bear the risk of loss from those investments?
Independent Business Judgment: Has the worker withdrawn from the competitive market to work for the employer?
Duration: Does the worker have a permanent or indefinite relationship with the employer?
Control: Does the employer set pay amount, work hours, and manner in which work is performed?
Benefits: Does the worker receive insurance, pension plan, sick days, or other benefits that suggest an employment relationship?
Method of Payment: Does the worker receive a guaranteed wage or salary as opposed to a fee per task?
Intent: Do the parties believe they have created an employer–employee relationship?"

It's not hard to imagine various work-and-pay relationships that cut across these distinctions in various ways. For example, in Canada there is a third category of "dependent contractors," who are contractors that get 80% of their income from a single firm, and as a result have access to some but not all of the standard employee legal protections.

Harris and Krueger define their proposed legal category of "independent workers" in this way:

"Independent workers operate in a triangular relationship: they provide services to customers identified with the help of intermediaries. The intermediaries create a communications channel, typically an "app," that customers use to identify themselves as needing a service—for example, a car ride, landscaping services, or food delivery. (An intermediary need not utilize the Internet to match independent workers and customers …) … The intermediary does not assign the customer to the independent worker; rather, the independent worker chooses or declines to serve the customer (sometimes within broadly defined limits). However, the intermediary may set certain threshold requirements for independent workers who are eligible to use its app, such as criminal background checks. The intermediary may also set the price (or at least an upper bound on the price) for the service provided by independent workers through its app. But the intermediary exercises no further control over how and whether a particular independent worker will serve a particular customer. The intermediary is typically rewarded for its services with a predetermined percentage of the fee paid by the customer to the independent worker. … The independent worker chooses when and whether to work at all. The relationship can be fleeting, occasional, or constant, at the discretion of the independent worker."

They estimate there are about 600,000 "independent workers" working with online intermediaries in the gig economy, which is about 0.4% of US employment. This number seems to be growing rapidly. They also mention a number of existing jobs that don't operate through on-line apps but seem to share many of the traits of "independent workers," discussing how many traditional taxi drivers (as opposed to Uber and Lyft drivers), temporary staffing agency employees, labor contractors, members who secure jobs through union hiring halls, outside sales employees, and (perhaps) direct sales employees "occupy the points of triangles with other economic actors."

Here's a quick summary (with more discussion in the paper) of the Harris-Krueger proposal for how "independent workers" would be treated under law:

In our proposal, independent workers — regardless of whether they work through an online or offline intermediary — would qualify for many, although not all, of the benefits and protections that employees receive, including the freedom to organize and collectively bargain, civil rights protections, tax withholding, and employer contributions for payroll taxes. Because it is conceptually impossible to attribute their work hours to any single intermediary, however, independent workers would not qualify for hours-based benefits, including overtime or minimum wage requirements. Further, because independent workers would rarely, if ever, qualify for unemployment insurance benefits given the discretion they have to choose whether to work through an intermediary, they would not be covered by the program or be required to contribute taxes to fund that program. However, intermediaries would be permitted to pool independent workers for purposes of purchasing and providing insurance and other benefits at lower cost and higher quality without the risk that their relationship will be transformed into an employment relationship.

Like any compromise choice, a new legal category like the Harris-Krueger proposal for "independent workers" is going to be somewhat unpopular with many parties. Many companies would prefer to treat their gig workers as independent contractors, to whom they have no additional legal responsibility. Some gig workers would prefer to have both their existing freedom of action and also the legal protections of employees. To resolve these issues, we can either go with the full-employment-for-lawyers approach and litigate the issues over and over in every new context in which they arise–an approach that is already underway–or we can settle on a compromise position. I don't have a strong opinion on whether the Harris-Krueger proposal for the legal status of "independent workers" is the right compromise. But it almost certainly beats smothering the gig economy in red tape and legal briefs.

Globalization Pauses: Will It Resume?

The volume of global trade rose more quickly than world economic growth during the 1980s, 1990s, and up to the global recession that started around 2008. This pattern of "globalization" sparked enormous numbers of books and articles. But during the last few years, the pattern of globalization has first halted, and may even have slightly reversed itself. Have we reached "peak trade," so that the era of rising globalization has ended? Or is the leveling out of the world trade/world GDP ratio just a pause, before globalization starts growing again? A VoxEU.org ebook called The Global Trade Slowdown: A New Normal?, edited by Bernard Hoekman, explores these questions in an overview essay and 19 mostly readable chapters.

To set the stage for the discussion, here are some figures. The first one (from Hoekman's introduction) shows patterns of exports, imports, and net trade. The top panel shows that levels of imports and exports in G7 countries (the US, Canada, Japan, Great Britain, France, Germany, and Italy) peaked back in early 2008, rebounded back to that level, but have shown a drop-off in early 2015. The bottom panel shows that exports and imports in the BRIICS countries (Brazil, Russia, India, Indonesia, China, and South Africa) rebounded to higher-than-2008 levels, but have also shown an actual decline in early 2015.

Here's a figure with a longer and different perspective, from the contribution to this volume by Hubert Escaith and Sébastien Miroudot. The shaded area shows the world trade/GDP ratio since 1970. In this figure trade is measured just by exports; because exports of one country are always imports for another country, if one measured trade by exports plus imports, the measures on the right-hand axis would be twice as high. The solid line measures the "trade-income elasticity," which is another way of saying: if you take the percentage change in trade and divide it by the percentage change in world income, what's the ratio? When trade is growing at the same percentage rate as the world economy, this measure will be equal to 1.
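The elasticity measure can be illustrated with a couple of lines of arithmetic. The growth rates below are illustrative numbers of my own, not figures from the Escaith and Miroudot chapter:

```python
# Trade-income elasticity: percentage change in world trade divided by
# percentage change in world income (both expressed here in percent).

def trade_income_elasticity(trade_growth_pct, income_growth_pct):
    """Ratio of trade growth to income growth."""
    return trade_growth_pct / income_growth_pct

# Trade growing twice as fast as income: elasticity of 2, so the
# trade/GDP ratio is rising -- the pre-2008 globalization pattern.
print(trade_income_elasticity(6, 3))  # 2.0

# Trade and income growing at the same rate: elasticity of 1, and the
# trade/GDP ratio is flat -- roughly the pattern of recent years.
print(trade_income_elasticity(3, 3))  # 1.0
```

An elasticity above 1 is what "globalization" looks like in this figure; an elasticity at or below 1 is the pause the ebook is trying to explain.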

There are two main categories of explanations for why the growth in trade as a share of GDP has paused, and in early 2015 even reversed a bit. One set of explanations is that global recession and slow recovery slowed down trade. Hoekman points out that the EU accounts for about one-third of all global trade, and China accounts for about 10% of global imports. The recent economic struggles of the EU and China's growth slowdown clearly play a role here, and several authors in the book argue that cyclical factors explain most or all of the slowdown of globalization (for examples, see the essay in this volume by Emine Boz, Matthieu Bussière, and Clément Marsilli, or the one by Patrice Ollivaud and Cyrille Schwellnus). The implication of this view is that when the global economy again picks up speed, the ratio of global trade to world output will start rising again.

The other set of reasons is that something else, something "structural" as economists put it, is causing a slowdown in globalization. Of course, even if some structural factors have caused a slowdown in the growth of globalization, it's possible that other structural factors could again accelerate globalization in the future. In his overview essay, Hoekman identifies four structural factors drawn from various essays that might–although each is controversial in its own way–have contributed to the slowdown of globalization in recent years.

1) One possibility is that as the world economy expands, there is over time a shift in what is being traded. For example, Douglas Irwin’s contribution to the symposium notes this pattern: “After World War II, manufactured goods accounted for about 40% of world trade, with agricultural goods and raw materials comprising most of the rest. Today, more than 80% of world trade is in manufactured goods. This fact probably accounts for the sensitivity of trade to production seen during boom periods, as well as sharp downturns such as the sharp contraction in world trade in 2009 that continues to receive analysis.” With manufactured goods now making up such a large share of world trade, the transition from agricultural to manufacturing trade is nearly complete, and is no longer pushing globalization forward as quickly.

2) It may be that the rapid growth of trade relative to world GDP from the 1980s up to about 2007 was part of a one-time transition, in which the economy of China in particular (as discussed in depth in the essay by Guillaume Gaulier, Gianluca Santoni, Daria Taglioni and Soledad Zignago), but also the economies of eastern Europe and some other places around the world, became integrated into the world economy. Now that they are by and large integrated, the ratio of trade to world GDP would tend to flatten out.

3) A different transition is that the world economy has been moving toward “global value chains,” in which production of goods is more fragmented across national lines. This change inflates global trade flows as they are usually measured. Say that Nation 1 imports $100 worth of goods, and uses those materials to make output worth $200. It exports that $200 to Nation 2, which uses those materials to make output worth $300. It exports those goods to Nation 3, which uses them to make goods worth $400. Notice that in this chain, each nation is adding only $100 in actual value to what it imported. However, the conventional trade statistics are based on the total value of what crosses national borders, not on value-added. As a result, goods that cross many borders pump up standard measures of trade flows well beyond the value actually being added. Conversely, there is some preliminary and less-than-conclusive evidence that the move to global value chains has leveled off, or perhaps even reversed itself a bit, which would help explain why globalization has leveled off. In their essay in this volume, Cristina Constantinescu, Aaditya Mattoo and Michele Ruta focus on global value chains.
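The arithmetic of the three-nation example can be sketched directly, using the hypothetical dollar figures from the paragraph above:

```python
# The text's hypothetical value chain: Nation 1 imports $100 of inputs,
# makes $200 of output, ships it to Nation 2, which makes $300 of output
# and ships it to Nation 3, which makes final goods worth $400.
border_crossings = [100, 200, 300]   # shipments: inputs into Nation 1, 1 -> 2, 2 -> 3
value_added_per_nation = 100         # each of the three nations adds $100

gross_trade = sum(border_crossings)              # what conventional statistics record
chain_value_added = 3 * value_added_per_nation   # value actually added within the chain

print(gross_trade)        # 600 -- measured trade flow across the chain
print(chain_value_added)  # 300 -- value added by the three nations
```

The same $100 of original inputs gets counted again at every border crossing, which is why the spread of value chains mechanically pushed up measured trade, and why a leveling-off of fragmentation would flatten it.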

4) Government actions during the Great Recession and its aftermath may have discouraged trade. Explicit measures of trade protectionism have not risen by much, but a number of countries have increased their incentives and subsidies for domestic firms in a way that could discourage imports.
In their essay “Crisis-era trade distortions cut LDC export growth by 5.5% per annum,” Simon J. Evenett and Johannes Fritz write: “[O]ur study breaks new ground by employing data on the trade potentially covered by trade-distorting domestic subsidies and export incentives. The impact on LDC exports of different classes of trade distortions was estimated and the total reduction in LDC export growth due to foreign trade distortions was computed for each of the years 2009-2013.”

So much for the structural factors that may have been contributing to the slowdown of globalization. What about structural factors that could cause the trade/world GDP ratio to start rising again? Here are four candidates.

1) Trade in services could rise in a way that generates a new wave of globalization. Hoekman writes in his overview essay (citations and footnotes omitted):

[I]n the future trade in services may expand significantly faster than trade in goods. Recent efforts by the OECD and the World Bank to collect information on the restrictiveness of trade policies for services show clearly that barriers to trade in services are often significant. In addition to explicit discrimination, differences in regulation across markets restrict trade. New vintage trade agreements such as the Canada-EU Comprehensive Economic and Trade Agreement or the Transatlantic Trade and Investment Partnership may result in a reduction in the average level of services trade costs. Unilateral actions by governments to enhance competition on services markets as an element of increasing productivity performance may also help foster greater trade in services. Services are more tradable than generally thought. But in practice, trade in services will often involve FDI [foreign direct investment] or the movement of service providers and/or buyers. These ‘modes of supply’ are not well measured. Indeed, sales of services by foreign affiliates are not regarded as trade in the national accounts, although they are regarded as trade by the WTO’s General Agreement on Trade in Services (GATS). As countries such as China shift towards greater reliance on domestic absorption, this is likely to generate greater demand for services and greater trade in services, including via FDI.

2) Instead of international trade being dominated by large firms, as it is today, the rise of information and communications technologies together with improved international logistics operations could allow a dramatic rise of international trade by small firms. In their essay in this volume, Usman Ahmed, Brian Bieron and Hanne Melin, economists who work for eBay, write about the rise of what they call the “micro-multinationals.”

Traditionally, SMEs [small and medium enterprises] have been limited by distance in terms of their ability to explore foreign markets, since most customers had to physically enter a business to transact. Reaching a customer in a different state, let alone in a different country, seemed like an impossible task for most SMEs. The internet has changed the calculus. eBay Marketplaces data demonstrate that 95% of US-based SMEs on the eBay platform sell to customers in foreign countries. In short, they export. This is in stark contrast to traditional businesses in the US, of which only about 4% engage in exporting.

3) A number of regions of the world economy have low levels of trade within the region, which offers substantial possibilities for expanding international trade. In this volume, the article by Ottavia Pesce, Stephen Karingi and Isabelle Gebretensaye focuses on the possibilities for expanding trade between countries in Africa, but parallel cases can be made for expanding trade in South America, south Asia, the Middle East, and elsewhere.

4) It may be that some of the structural factors that helped to drive globalization after the 1980s are not yet exhausted. For example, perhaps China will expand still further into global trade, or global value chains will continue to spread.

My own sense, for what it’s worth, is that a resumption of globalization seems likely, in the sense that I expect the ratio of trade/world GDP to start rising again in the next 5-10 years. It seems to me that national borders are still very much a barrier to international trade. But moving value across national borders is getting easier all the time, both because of how information and communications technology facilitates the coordinated movement of goods, and also in how it allows the direct buying and selling of information-related goods and services. As more of the world’s value-added is embodied in information and in services, I expect new opportunities to arise for international transactions to become more casual and unremarked. Now when I order something online from a US supplier, I often have no idea–nor do I care enough to inquire–in what US state the product was made or from where it was shipped. A few years down the road, I expect that consumers and firms all over the world will order both goods and services just as casually from other countries.

Homelessness in America: A Slow Decline

Each year the US Department of Housing and Urban Development conducts a “point-in-time” survey in which it seeks to count the number of homeless on one night in January. Because the point-in-time survey is conducted with the same methods each year, it offers a way of looking at trends in homelessness over time. The 2015 Annual Homeless Assessment Report (AHAR) to Congress was published in November 2015. Here’s the overall pattern:

Of course, even as the survey answers some questions, it raises others. For example, this measure is based on a single night, and so understates the number of people who find themselves homeless at some point during the year. Another part of the survey does seek to measure “chronic” homelessness, which is defined as someone with a disability who has been homeless for at least a year, or who has had at least four spells of homelessness in the last three years. Roughly one-sixth of the homeless counted in this survey are chronically homeless. Another issue is that the number of homeless people in shelters has remained much the same over the last decade. Thus, the decline in the number of homeless is due to a fall in the unsheltered homeless (for example, those who were spending that night in January under bridges, in cars, or in abandoned buildings). One suspects that, even with the best effort in the world, the estimated number of homeless in shelters is more accurate than the estimated number of unsheltered homeless.

The HUD report offers a lot of breakdowns of the homeless by state, metro area, individual or family status, and so on. My eye always jumps to the estimates that 22.6% of the homeless are under the age of 18, and another 9.4% are ages 18-24.
The National Alliance to End Homelessness also does an annual report, and The State of Homelessness in America in 2015 offers some context for the basic numbers in the HUD survey (much of it based on the HUD data), and some overview of policy efforts. The Bush administration had a “housing first” policy that focused on getting the homeless into housing first, and then seeking to address their addiction or health problems, and was widely credited with some success. The Obama administration announced its “Opening Doors” policy to end homelessness in 2010. I am admittedly more prone than most, and perhaps more prone than fair, to make judgments about energy and effort based on whether reports are being filed. But with that caveat duly noted, it’s troubling to me that the US Interagency Council on Homelessness last completed an annual report back in 2013.
To get a concrete sense of one evolution in homelessness policy, consider this figure from the National Alliance to End Homelessness, which shows the inventory of housing for homelessness programs. Notice that the biggest rise is in the category of “permanent supportive housing.” Conversely, the biggest fall is in “transitional housing,” although this is slightly offset by the recently added category of “rapid re-housing.” The underlying theme here is to recognize that there is a need for emergency and transitional housing, but to put more emphasis on moving people into a permanent housing arrangement. For example, one current policy goal is to use the expansion of permanent housing to end “chronic” homelessness by 2017.
The gradual decline in homelessness is to me a bit of unexpected good news. If you had asked me for a prediction of how homelessness would change during the Great Recession and its aftermath, with high rates of long-run unemployment and turmoil in many housing markets, I would have predicted a rising level. 

Fed Interest Rate Increases: Not When, But How

It has seemed likely for about a year now that the Federal Reserve would raise interest rates in the later part of 2015. After all, back in December 2012 the Fed announced that “this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent.” But when the unemployment rate fell below that level in April 2014, the Fed kept putting out complex and ambiguous statements, but wasn’t yet ready to move. Now the unemployment rate has dropped to 5.0%, according to the statistics released in November, and a rise in interest rates seems likely soon, whether it happens at the next meeting of the Fed Open Market Committee on December 15-16, or early in 2016.

For example, Alan Blinder, the eminent Princeton professor of economics who was also a member of the Federal Reserve Board of Governors from 1994 to 1996, recently put it this way in the Wall Street Journal:

“In all likelihood, the FOMC [Federal Open Market Committee] is heading for an eventual federal-funds rate near 3.5%. The key word is “eventual.” The Fed is in no hurry and will probably take three years or so. If so, the tightening pace would be roughly 100 basis points a year, which is less than half the average of the previous three tightening cycles (2004-06, 1994-95 and 1988-89). While the Fed hasn’t addressed this point, it’s also a good bet that the FOMC will not lock itself in to a fixed pace, as it did in 2004-06, when it raised the federal-funds rate 25 basis points at an amazing 17 meetings in a row. One hundred basis points a year translates to 25 basis points at every second meeting, on average. But the Fed won’t be that regular.”

My focus here is not on when or how much the Fed will raise interest rates, but on the mechanics of how the policy will be carried out. The standard way of raising interest rates before 2007 was, as every intro textbook explains, “open market operations.” However, the Fed’s quantitative easing policies have made open market operations obsolete, at least for the foreseeable future. Jane E. Ihrig, Ellen E. Meade, and Gretchen C. Weinbach explain why, and what will take its place, in “Monetary Policy 101: What’s the Fed’s Preferred Post-Crisis Approach to Raising Interest Rates?”, published in the Fall 2015 issue of the Journal of Economic Perspectives. (Full disclosure: I’ve been Managing Editor of JEP since the first issue in 1987.)

For a longer discussion of the old-style approach to monetary policy through open market operations, you can check the Ihrig, Meade, and Weinbach article, or an intro textbook, or the explanation at the Federal Reserve website. But in a nutshell, the standard quick-and-dirty intro econ description of the federal funds market in the old days before the Great Recession was that it involved very short-term loans, often literally overnight, from one bank to another. The Federal Reserve required that banks hold reserves, but those reserves paid no interest. Thus, banks would try to avoid holding excess reserves. But sometimes, as a result of a batch of large transactions at the end of a day, a healthy bank would find that it needed to boost its reserves a bit, and it would do so by borrowing in the federal funds market. When the Federal Reserve carried out its “open market operations”–that is, buying or selling government bonds with banks–it would aim at a certain target for the federal funds rate. For example, a higher federal funds rate would mean that banks were more concerned about needing to borrow at that rate, and so the banks would hold a larger cushion of reserves.

This basic story of how the federal funds market works no longer describes reality, for two main reasons. 1) As a result of “quantitative easing,” which involved the Federal Reserve buying assets from financial institutions including Treasury debt and mortgage-backed securities, US banks are holding literally trillions of dollars in “excess reserves” above and beyond what is legally required. In August 2007, banks had about $14 billion in reserves at the Fed; at the end of 2014, they had $2.6 trillion in reserves at the Fed. 2) In the past, the Fed paid no interest on reserves. Now it does. So banks have no particular incentive to avoid holding reserves as they did before.

Because banks are no longer worried about having enough reserves at the Fed (they already have lots of excess reserves and they are being paid interest on those reserves), the fundamental nature of the federal funds interest rate market has shifted. It’s no longer mostly about depository institutions lending to other depository institutions. Instead, the main lenders in the federal funds market have become certain government-sponsored enterprises that help to create mortgage-backed securities–especially the 12 Federal Home Loan Banks, but also Fannie Mae and Freddie Mac. The main borrowers in the federal funds market are not stand-alone banks, either. They are mainly bank holding companies and foreign companies. A key point here is that banks are now paid interest on their excess reserves by the Fed, but agencies like the Federal Home Loan Banks are not. Thus, as Ihrig, Meade, and Weinbach put it: “[M]ost transactions in the federal funds market today reflect arbitrage activity between banks that earn interest on reserves and nonbanks that do not …”
To sum up the situation here: When the Federal Reserve announces that it will raise interest rates, what it actually means in practice is that it will raise its target for the federal funds interest rate. But the market where that interest rate applies now involves relatively few actual US domestic banks. Instead, it mainly involves bank holding companies that receive interest on excess reserves at the Fed lending to Federal Home Loan Banks that don’t have access to interest from the Fed. In fact, most domestic banks don’t care that much about the federal funds interest rate, because they have plenty of extra reserves above the legal requirement and have little need to borrow at that rate.

So in this new situation, how will the Fed actually raise interest rates? Here’s an illustrative figure from Ihrig, Meade, and Weinbach. The line between “Today” and “In the future” is the Fed decision to “lift off” and raise interest rates. Two things change.

The Federal Reserve will at some point announce a higher target for the federal funds interest rate. But to accomplish that target, the primary policy tool for the Fed is going to be raising the “interest rate on excess reserves” held by banks, which will then be transmitted to the federal funds interest rate by the force of arbitrage. Ihrig, Meade, and Weinbach describe the dynamic in this way (reference to a figure omitted):

All else equal, an increase in the interest rate on excess reserves would be expected to put upward pressure on the federal funds rate because banks would have an incentive to borrow in the federal funds market at rates below the interest rate on excess reserves and place those balances at the Fed. Since the Fed began paying interest on reserves, the market federal funds interest rate has generally been below the interest rate on excess reserves. One might think that interest on excess reserves (IOER) should provide a floor for the federal funds interest rate because banks would not lend at rates below what they could receive at the Fed. However, this situation has arisen because, in addition to banks not needing to borrow actively from each other because of the high quantity of reserves already in the banking system, the nonbank lenders in the federal funds market have an incentive to lend reserves at any rate above zero because they are not eligible to earn the interest rate on excess reserves on the balances they keep at the Fed. As explained above, the nonbanks that are active in the market for federal funds are government-sponsored enterprises. Banks borrow from these nonbanks to earn the spread between the market interest rate at which they borrow funds and the interest rate they earn from the Fed by holding those funds as excess reserves.

A quick-and-dirty way to think of this connection is that if the Fed is paying banks more interest on their excess reserves, banks are going to charge higher interest when lending out their money elsewhere.
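A stylized sketch of the arbitrage described above, with hypothetical rates (not actual Fed settings): a bank borrows federal funds from a nonbank at a rate below the interest rate on excess reserves (IOER), parks the money at the Fed, and keeps the spread.

```python
def overnight_arbitrage_profit(principal, fed_funds_rate, ioer_rate, day_count=360):
    """One night's earnings from borrowing at the fed funds rate and holding
    the balance at the Fed to earn IOER. Rates are annualized; money markets
    conventionally use a 360-day year."""
    return principal * (ioer_rate - fed_funds_rate) / day_count

# Hypothetical: borrow $1 billion overnight at a 0.12% fed funds rate,
# earn 0.25% IOER on the resulting Fed balance.
profit = overnight_arbitrage_profit(1_000_000_000, 0.0012, 0.0025)
print(round(profit, 2))  # 3611.11 -- the overnight spread on $1 billion
```

Small per-night, but riskless and repeatable, which is why this arbitrage keeps the market federal funds rate tethered to (and, given the nonbank lenders, somewhat below) the IOER rate.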

The figure above also shows a rise in the ON RRP rate, which stands for “overnight reverse repurchase rate.” A repurchase agreement is a form of short-term lending and borrowing. What happens is that one party sells a financial security to another party, but as part of the deal agrees to repurchase that security at a specific price in the future. The pattern of payments here is just like a loan: that is, the party selling the financial security gets cash now, and repays that cash–plus a little extra–in the future. In an “overnight” repurchase agreement, the repurchase happens the next day. Here’s a description of reverse repurchase agreements from the Federal Reserve Bank of New York, which would be conducting these operations.

A reverse repurchase agreement, also called a “reverse repo” or “RRP,” is an open market operation in which the Desk [the New York Fed trading desk] sells a security to an eligible RRP counterparty with an agreement to repurchase that same security at a specified price at a specific time in the future. The difference between the sale price and the repurchase price, together with the length of time between the sale and purchase, implies a rate of interest paid by the Federal Reserve on the cash invested by the RRP counterparty.
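The implied interest rate described in that passage can be computed from the two prices. The prices below and the 360-day money-market convention are illustrative assumptions, not actual Fed operations:

```python
def implied_repo_rate(sale_price, repurchase_price, days, day_count=360):
    """Annualized interest rate implied by selling a security and agreeing
    to repurchase it at a higher price after the given number of days."""
    return (repurchase_price - sale_price) / sale_price * (day_count / days)

# Hypothetical overnight trade: the Fed sells a security for $10,000,000
# and agrees to repurchase it the next day for $10,000,070.
rate = implied_repo_rate(10_000_000, 10_000_070, days=1)
print(round(rate * 100, 3))  # 0.252 -- annualized rate, in percent
```

The $70 gap between the two prices plays exactly the role of one night's interest on a $10 million loan to the Fed.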

Thus, an overnight reverse repurchase agreement is essentially a way for financial institutions to lend funds to the Federal Reserve in the short term, through the mechanism of the Fed first selling a security and then repurchasing it. A key point here is that while only banks with reserves at the Fed get paid interest on those reserves by the Fed, a much larger group of financial institutions can participate in ON RRP arrangements with the Fed. Ihrig, Meade, and Weinbach write:

[T]he institutions that are eligible to participate in the Fed’s overnight reverse repurchase operations include about two dozen banks as well as a large number of money market funds under the management of 29 different firms, 22 primary dealers, and 13 government-sponsored enterprises (including 10 separate Federal Home Loan Banks). A full list of the counterparties that are eligible to participate in the Fed’s reverse repurchase operations appears at

However, the Fed is going to try to carry out this RRP lending only when strictly needed to raise interest rates, and to phase it out in between, so that the Fed doesn’t become a major and permanent player in these markets.

The main concern raised by the Federal Open Market Committee in using an overnight reverse repurchase agreement facility is that a large and persistent program could permanently alter patterns of borrowing and lending in repo markets and money markets as a whole—a concern the Committee has referred to as increasing the Federal Reserve’s role or size of its “footprint” in money markets. Keep in mind that the Fed’s operations in financial markets before the crisis were generally quite small and were aimed at affecting conditions in the federal funds market, a relatively small market. A large overnight reverse repurchase agreement facility could potentially expand the Federal Reserve’s role in financial intermediation and reshape the financial industry over time in ways that are difficult to anticipate in advance. In addition, in times of stress in financial markets, demand for a safe and liquid central bank asset might increase sharply, and the Fed’s counterparties could shift cash away from financial and nonfinancial corporations in the private sector and place it at the Fed instead, potentially causing or exacerbating disruptions in the availability of funds in money markets. To mitigate these concerns, the Federal Open Market Committee plans to use its overnight reverse repurchase agreement facility only to the extent necessary to support short-term interest rates, and it will phase the facility out when it is no longer needed

How accurately will the Federal Reserve be able to control the federal funds interest rate with this combination of raising the interest rate on excess reserves and operating an overnight reverse repurchase facility? Will there be some unexpected or unwanted side-effects? The honest answer is that we can’t be sure. The Fed has been experimenting at smaller scales and gearing up in various ways, but we won’t know how it works until it starts happening over the next few years. Blinder offers one more reason for concern in his Wall Street Journal piece:

[H]ow will financial markets react on Dec. 16? Markets seem to have built a 25-basis-point rate increase into pricing and so shouldn’t react much when it actually happens. But markets have proven themselves to be fickle and frequently surprising. Remember: Some bond traders were teenagers the last time the Fed raised rates.

For a discussion of these new issues in how monetary policy will be carried out from about a year ago, as these policies were taking shape, see my post on “The Two New Tools of Monetary Policy” (November 10, 2014).

What if Government Paid Kidney Donors?

When economists hear of a “shortage,” they immediately ask whether a price mechanism is being allowed to adjust in a way that would bring quantity demanded and quantity supplied into balance. Thus, when economists hear that there is a shortage of kidneys for purposes of transplantation, they wonder whether a price mechanism–that is, paying kidney donors–could save lives and money. A group of doctors takes on this question in “A Cost-Benefit Analysis of Government Compensation of Kidney Donors,” by P. J. Held, F. McCormick, A. Ojo and J. P. Roberts, which is now available as an online pre-publication version at the American Journal of Transplantation.

One concern about paying for kidney donations is that it would be a mechanism in which rich people could exploit the short-term financial needs of poor people. With this concern in mind, the authors suggest that instead of anything looking like a “market” in buying and selling kidneys, the government should just pay kidney donors directly. Here’s how they describe it:

This paper … provides a comprehensive cost-benefit analysis of … moving from our current kidney procurement system in which compensation of donors is legally prohibited to one in which the government (not private individuals) compensates living kidney donors $45 000, and deceased donors $10 000. Such compensation would be considered an expression of appreciation by society for someone who has given the gift of life to another. It could include an insurance policy against any health problems that might develop in the future as a result of the donation, including disability and death. Compensation for living donors could be paid in a delayed form, such as tax credits or health insurance, so people who are desperate for cash would not be tempted to sell a kidney. Compensation for deceased donors would be paid to their estate. All other aspects of the kidney procurement and allocation process would continue exactly as they are under the current system. In particular, living donors would continue to be carefully screened and informed of possible hazards associated with kidney donation. Kidneys would be allocated as the organs from deceased donors are now—by the federally funded and managed Organ Procurement and Transplant Network (currently administered under contract by United Network for Organ Sharing).

In this proposal, all kidney donors would be paid in a transparent and up-front manner, and those with high incomes would have no greater ability to jump the queue for a kidney transplant. Indeed, if enough kidneys were donated so that there was no shortage, pressures for black-market kidney purchases would be greatly diminished. What might such a plan accomplish? Held, McCormick, Ojo and Roberts summarize their estimates of costs and benefits in this way:

From 5000 to 10 000 kidney patients die prematurely in the United States each year, and about 100 000 more suffer the debilitating effects of dialysis, because of a shortage of transplant kidneys. To reduce this shortage, many advocate having the government compensate kidney donors. This paper presents a comprehensive cost-benefit analysis of such a change. It considers not only the substantial savings to society because kidney recipients would no longer need expensive dialysis treatments—$1.45 million per kidney recipient—but also estimates the monetary value of the longer and healthier lives that kidney recipients enjoy—about $1.3 million per recipient. These numbers dwarf the proposed $45 000-per-kidney compensation that might be needed to end the kidney shortage and eliminate the kidney transplant waiting list. From the viewpoint of society, the net benefit from saving thousands of lives each year and reducing the suffering of 100 000 more receiving dialysis would be about $46 billion per year, with the benefits exceeding the costs by a factor of 3. In addition, it would save taxpayers about $12 billion each year.

The paper (with its on-line appendices) walks through the basis for these estimates in a step-by-step manner. I’m no expert in these numbers, but from what I’ve seen, their estimates are all well within the range of plausibility. Let me just emphasize a few points that come up in their discussion:
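As a rough back-of-the-envelope check, the headline per-recipient figures quoted above can be set side by side with the proposed compensation. This only compares benefits to the donor payment; the paper's full analysis also nets out surgery and other transplant costs:

```python
# Headline per-recipient figures quoted from Held, McCormick, Ojo, and Roberts.
dialysis_savings = 1_450_000   # avoided dialysis costs per kidney recipient
health_benefit = 1_300_000     # estimated value of a longer, healthier life
compensation = 45_000          # proposed government payment to a living donor

benefit_per_recipient = dialysis_savings + health_benefit
print(benefit_per_recipient)                        # 2750000
print(round(benefit_per_recipient / compensation))  # 61 -- benefits dwarf the payment
```

Even before counting any other costs or benefits, the proposed $45,000 payment is a small fraction of the per-recipient gains, which is why the authors' overall benefit-cost ratio of about 3 survives even generous allowances for transplant costs.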

One advantage of having a greater number of kidneys is that organ recipients wouldn’t have to wait five years for a kidney. They would be younger and healthier when receiving a kidney, in part because they wouldn’t have been going through dialysis for years. Thus, the beneficial health effects of receiving a kidney transplant would be on average higher with a greater number of donated kidneys.

Government currently ends up paying for a lot of health care costs related to kidney disease through Medicare and Medicaid. In particular, people with kidney failure are eligible for Medicare to pay for their kidney dialysis even if they have not reached age 65. Because kidney transplants eliminate the need for dialysis–and also reduce the need for other health care services–taxpayers would come out ahead with a government kidney-buying program. In other words, even leaving aside the nonmonetary benefits of longer life expectancy and better health for kidney donor recipients, this proposal would be a money-saver for government.

The concern that paying for kidneys would exploit the poor as a group is overstated, or even backwards. Held, McCormick, Ojo and Roberts put the case this way:

[T]he present system, in which compensation of kidney donors is legally prohibited, has resulted in a huge shortage of transplant kidneys that seriously harms all transplant candidates—especially the poor, and especially poor African Americans, because they are considerably overrepresented on the kidney waiting list due to the generally worse state of their health. In contrast, if the government compensated kidney donors, it would greatly increase the availability of transplant kidneys, making all transplant candidates, especially the poor, much better off. Indeed, the poor would enjoy the greatest net benefit because they would gain the $1.3 million value of a longer and healthier life, but almost all of the costs of transplantation for the poor person would be borne by the taxpayer through Medicare and Medicaid. So the current prohibition on compensating kidney donors, which is supposedly intended to keep the poor from being exploited, is in fact seriously harming them. And having the government compensate kidney donors would be an enormous boon for the poor.

Want more discussion of kidney and organ transplants? Along with the paper, one starting point is my post “Selling a Kidney: Would the Option Necessarily be Beneficial?” (March 12, 2014). Also, the Journal of Economic Perspectives published a “Symposium on Organ Transplants” back in the Summer 2007 issue with two Nobel laureates among the authors. (Full disclosure: I’ve been the Managing Editor of JEP since 1987.) The articles are:

Interested in other cases where the question arises of whether to pay those who donate something from their body for medical purposes? In earlier posts, I've offered some discussion of the US system of "Volunteers for Blood: Paying for Plasma" (May 16, 2014), and of "The Human Breast Milk Market" (August 24, 2015).

Cap and Trade: Lessons of Experience

A key question for a number of policy-makers, especially those following the current negotiations in Paris about climate change issues, is whether a cap-and-trade approach might be a useful tool for reducing national emissions of carbon and other greenhouse gases in a cost-effective way. For example, President Xi Jinping announced a couple of months ago that China will launch a national CO2 cap-and-trade system covering key industries in 2017. In the United States, the American Clean Energy and Security Act of 2009 (more commonly known as the Waxman-Markey bill) was a cap-and-trade approach to reducing US carbon emissions. The bill passed the House, but was never brought up for discussion or a vote in the Senate.

For the uninitiated, a cap-and-trade approach to reducing pollution sets a total amount of pollution that can be emitted, and then divides up that quota among existing emitters of that pollution. The kicker is that companies can trade–that is, buy and sell–portions of their pollution quota. As a result, every company that is emitting pollution has an incentive to seek out ways of emitting less for each unit of production, because it can then either expand production, or sell the extra pollution quota to a different firm that wants to expand production.

The potential advantage of this approach is that instead of requiring every pollution emitter to follow the same rule, those emitters who can reduce pollution most cheaply will have an incentive to do so. Moreover, the pollution quotas can also be gradually phased down over time, so that a permit which allows its holder to emit, say, one unit of pollution in a given year will only allow it to emit 95% of that amount in the next year, and lower amounts into the future. The challenge is that this approach to pollution reduction may sound unwieldy or unworkable.
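The cost-saving logic is easy to see in a toy model. The sketch below uses three hypothetical firms with made-up baseline emissions and abatement costs; it compares an inflexible rule (every firm cuts the same share) against the cap-and-trade outcome, where permits change hands until the cheapest abaters do the cutting.

```python
# Illustrative sketch (hypothetical numbers): why trading permits lowers
# the total cost of hitting a fixed emissions cap. Each firm has baseline
# emissions and a constant per-unit abatement cost; all figures are made up.

firms = {              # name: (baseline emissions, cost per unit abated)
    "A": (100, 5.0),   # cheap abater
    "B": (100, 20.0),  # mid-cost abater
    "C": (100, 50.0),  # expensive abater
}

cap = 210                                            # total allowed emissions
baseline = sum(e for e, _ in firms.values())
required = baseline - cap                            # units that must be cut

# Inflexible "command-and-control" rule: every firm cuts the same share.
share = required / baseline
uniform_cost = sum(e * share * c for e, c in firms.values())

# Cap-and-trade outcome: expensive abaters buy permits from cheap ones,
# so in equilibrium the cheapest abaters do the cutting (greedy by cost).
remaining, trade_cost = required, 0.0
for e, c in sorted(firms.values(), key=lambda f: f[1]):
    cut = min(e, remaining)
    trade_cost += cut * c
    remaining -= cut
    if remaining == 0:
        break

print(f"uniform mandate cost: {uniform_cost:.0f}")
print(f"cap-and-trade cost:   {trade_cost:.0f}")
```

With these particular made-up numbers the flexible approach hits the same cap at a fraction of the cost, because all 90 units of abatement come from the firm that can cut cheapest; real-world savings like the 20% and 40-47% figures below are smaller, since actual marginal costs differ less starkly and rise as firms cut more.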

Pragmatists who want to know when cap-and-trade has worked better or worse might begin with the essay by Richard Schmalensee and Robert N. Stavins, "Lessons Learned from Three Decades of Experience with Cap-and-Trade," which is available as a November 2015 discussion paper from Resources for the Future (DP 15-51), and will appear in a future issue of the Review of Environmental Economics and Policy. Schmalensee and Stavins offer an overview, with references to the underlying research, of real-world cap-and-trade programs addressing air pollution issues. I'll append their summary table of the seven main programs they discuss at the end of this post, although it may be a little hard to read in the blog format. But one takeaway is that the successes of cap-and-trade approaches to this point have involved lead, sulfur dioxide, and nitrogen oxides–but not the carbon emissions associated with an elevated risk of climate change.

Here are three well-researched success stories of a cap-and-trade approach in non-carbon contexts:

1) Back in the 1970s, there was a decision to reduce lead emissions from gasoline by 90%. This was done with a cap-and-trade program between gasoline refineries, involving pollution permits for lead that shrank in size over time. The flexibility from using cap-and-trade meant that the desired reduction in lead emissions was achieved at 20% lower cost than under an inflexible "command-and-control" program that would have just ordered each individual refinery to reduce emissions. The program ended in 1987.

2) In the 1990s, there was a decision to reduce sulfur dioxide emissions from fossil-fuel electrical generating plants. This was done with a cap-and-trade program. As they report: "SO2 emissions from electric power plants decreased 36 percent between 1990 and 2004, even though electricity generation from coal-fired power plants increased 25 percent over the same period." It's hard to estimate the cost savings exactly, because you have to estimate what would have happened under a different set of regulations, but the range of estimates suggests that the pollution reduction happened with costs at least 15% lower, and maybe a lot more than 15% lower.

3) Emissions of nitrogen oxides are linked to ground-level ozone, otherwise known as smog. Starting in 1999, eleven northeastern states and the District of Columbia set up a regional cap-and-trade system to reduce nitrogen oxide emissions from about 1,000 electric generating and industrial units. Overall emissions fell by almost three-quarters, and one study found that the cost of the reduction was 40-47% lower with a cap-and-trade approach than it would have been with a less flexible set of regulations.

These examples offer what Schmalensee and Stavins at one point call "proof of concept," which in this case means that cap-and-trade can be a useful tool for cost-effective pollution reduction. However, the empirical evidence on whether it is a useful approach for reducing carbon emissions is not yet clear. Here are three examples of unclear or incomplete results.

1) Nine northeastern U.S. states have joined the Regional Greenhouse Gas Initiative (RGGI), which seeks to reduce carbon dioxide emissions from power plants. But the "cap" in this particular cap-and-trade approach was set at a level that didn't take into account how the Great Recession would reduce demand for electricity, or how the availability of lower-cost natural gas would reduce carbon emissions. As a result, actual emissions have been below the level set by the cap, with no need for trading at all.
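The RGGI experience illustrates a general point about permit pricing: if the cap is set above what firms would have emitted anyway, the marginal permit has no value and the price collapses toward zero. A minimal sketch, using an entirely stylized linear demand curve for permits (the function and its parameters are my own illustration, not from RGGI data):

```python
# Minimal sketch (hypothetical demand curve): why a cap set above
# business-as-usual (BAU) emissions yields a permit price of zero.
# Firms value permits only up to their unconstrained emissions; if the
# cap exceeds that level, the marginal permit is worthless.

def clearing_price(cap, bau_emissions, max_willingness_to_pay=50.0):
    """Stylized linear demand for permits: the price falls from the
    maximum willingness to pay (at zero permits available) down to
    zero once available permits reach BAU emissions."""
    if cap >= bau_emissions:
        return 0.0                      # slack cap: permits are free
    return max_willingness_to_pay * (1 - cap / bau_emissions)

print(clearing_price(cap=150, bau_emissions=100))  # slack, RGGI-like cap
print(clearing_price(cap=80, bau_emissions=100))   # binding cap
```

The same mechanism explains the EU ETS price collapse discussed below: generous allocation plus outside offsets effectively loosened the cap relative to actual emissions.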

2) California passed a bill in 2006 to reduce the state's greenhouse gas emissions using a cap-and-trade approach. However, implementation of the law only started in 2013, applying at first just to the electrical power and manufacturing sectors, and expanded to cover fuels in 2015. There isn't yet any evidence on its effects over time.

3) The world's biggest carbon trading system is the Emissions Trading System in the European Union. It was adopted in 2003, and started operating in 2005. As Schmalensee and Stavins report, "the EU ETS covers about half of EU CO2 emissions in 31 countries. The 11,500 regulated emitters include electricity generators and large industrial sources. … The program does not cover most sources in the transportation, commercial, or residential sectors …" However, the price for emitting carbon in this system has been very low: indeed, it collapsed all the way to zero in 2007, and in recent years has been in the range of €5 to €10, which isn't enough to drive the necessary reductions in carbon emissions. There are a lot of issues here in the design of the program, but basically, if you give out lots of permits for emitting carbon, and also allow lots of ways of meeting the rules for emissions reductions (like letting carbon emitters pay for reducing carbon somewhere else in the world, rather than actually cutting their own emissions), then the system may not function well.

The design of a cap-and-trade system clearly matters a lot, as does the type of environmental problem it is addressing, along with the economic and political context in which it is operating. Schmalensee and Stavins offer a useful overview of these very specific issues. My own sense is that when it comes to carbon emissions, these practical details make it hard to write legislation: indeed, the Waxman-Markey bill back in 2009 had eventually bloated to 1,200 pages of tall weeds in which the special interests could lurk. Those who would prefer to think about a carbon tax as a way of reducing carbon emissions, rather than a cap-and-trade approach, might be interested in this post on "Carbon Tax: Practicalities of Cutting a Deal" (August 18, 2015).

Those who are interested in this subject might also want to check out the Winter 2013 symposium in the Journal of Economic Perspectives about "Trading Pollution Allowances." The four papers in that symposium were:

Finally, here\’s the Schmalensee-Stavins summary table of the seven cap-and-trade programs in the recent discussion paper: