Putting Drug Policy Tradeoffs on the Table

It feels as if the tradeoffs involved in anti-drug policies are now up for discussion, in a way that they weren't 20 or 30 years ago. One signal is that the United Nations convened a special session about drugs back in 1998, with an underlying theme of prohibitionism. Next week, the UN will convene another special session about drugs, and the tone may sound rather different. For a sense of how the argument is evolving, a useful starting point is "Public health and international drug policy" from the Johns Hopkins–Lancet Commission on Drug Policy and Health, published at the website of the Lancet on March 24, 2016. "The Johns Hopkins–Lancet Commission, cochaired by Professor Adeeba Kamarulzaman of the University of Malaya and Professor Michel Kazatchkine, the UN Special Envoy for HIV/AIDS in Eastern Europe and Central Asia, is composed of 22 experts from a wide range of disciplines and professions in low-income, middle-income, and high-income countries."

Their report harks back to the tone of the UN discussions about drug policy in 1998 (with footnotes and references to figures omitted here and throughout):

The previous UN General Assembly Special Session (UNGASS) on drugs in 1998—convened under the theme, “A drug-free world—we can do it!”—endorsed drug-control policies with the goal of prohibiting all use, possession, production, and trafficking of illicit drugs. This goal is enshrined in national laws in many countries. In pronouncing drugs a “grave threat to the health and wellbeing of all mankind”, the 1998 UNGASS echoed the foundational 1961 convention of the international drug-control regime, which justified eliminating the “evil” of drugs in the name of “the health and welfare of mankind”. But neither of these international agreements refers to the ways in which pursuing drug prohibition might affect public health. The war on drugs and zero-tolerance policies that grew out of the prohibitionist consensus are now being challenged on multiple fronts, including their health, human rights, and development impact. … The disconnect between drug-control policy and health outcomes is no longer tenable or credible.

The basic message here is simple enough: the goal of anti-drug policy is to improve public health. Thus, when evaluating anti-drug policy, it is reasonable to take into account both how effective it is in reducing drug use and improving health, and also how the enforcement effort itself may be adversely affecting health. Murders by drug cartels are one of the more obvious examples, but the Commission quotes a provocative comment from "former UN Secretary-General Kofi Annan, 'Drugs have destroyed many people, but wrong policies have destroyed many more'."

Here are some of the tradeoffs of anti-drug policy as laid out by the Johns Hopkins–Lancet Commission. As a starting point, the gains from existing prohibitionist policies typically need to be phrased in terms of "well, maybe they discouraged drug use from getting a lot bigger," because it's hard to demonstrate that drug use has been falling in more than a modest way.

In 1998, when the UN member states declared their commitment to a drug-free world, the UN estimated that 8 million people had used heroin in the previous year worldwide, about 13 million had used cocaine, about 30 million had used amphetamine-type substances (ATS), and more than 135 million were “abusers”—that is, users—of cannabis. When countries came together after 10 years to review progress towards a drug-free world in 2008, the UN estimated that 12 million people used heroin, 16 million used cocaine, almost 34 million used ATS, and over 165 million used cannabis in the previous year. The worldwide area used for opium poppy cultivation was estimated at about 238 000 hectares in 1998 and 235 700 hectares in 2008—a small decline. Prohibition as a policy had clearly failed. … North America continues to have by far the highest rates of drug consumption and drug-related death and morbidity of any region in the world, and drug policy in this region tends to influence global debates heavily. Between 2002 and 2013, heroin-related overdose deaths quadrupled in the USA, and deaths associated with prescription opioid overdose quadrupled from 1999 to 2010.

One of the most obvious tradeoffs of anti-drug policy has been gang violence. It's hard to measure this in any precise way, but the report cites evidence that in the Americas, about 30% of all homicides involve criminal groups and gangs, compared with about 1% in Europe or Asia. In Mexico, the rise in homicide rates after 2006 has been so extreme–from a national rate of 11 per 100,000 to a rate of over 80 per 100,000 in the most heavily affected locations–that it actually reduced average life expectancies for the entire country. Of course, just looking at murders leaves out other violence, including sexual assault. About 2% of Mexico's population is displaced from their homes by violence and risk of more violence. Colombia, Guatemala, and others have experienced a sharp rise in violence as well. Much of this is drug-related.

The illegality of drug use means that those who inject illegal drugs are likely to share needles, which in turn raises the rates of infection for HIV, hepatitis, tuberculosis, and other illnesses. One estimate is that outside Africa, 30% of cases of HIV infection are caused by unsafe drug injections. "A landmark US study showed that over half of people who inject drugs were infected with HCV during their first year of injecting."

Illegality means that drugs are more likely to be taken by unsafe methods and in unsafe dosages–and when overdoses occur, the ability to get medical help may be quite limited. The Commission notes:

Drug overdose should be an urgent priority in drug policy and harm-reduction efforts. Overdose can be immediately lethal and can also leave people with debilitating morbidity and injury, including from cerebral hypoxia. … In 2014, WHO estimated that about 69 000 people worldwide died annually from opioid overdose, but that estimate might not have captured the substantial increase in opioid overdose deaths especially in North America since 2010. In the EU, drug overdoses account for 3·4% of deaths among people aged 15–39 years.

The illegality of drug use boosts prison populations around the world. "[P]eople convicted of drug crimes make up about 21% of incarcerated people worldwide. Possession of drugs for individual use was the most frequently reported crime globally. … [D]rug-possession offences constituted 83% of drug offences reported worldwide." The evidence that incarceration for possession or use of drugs deters use in any substantial way is weak. But incarceration does tend to reinforce other social inequalities: in the US, for example, African-Americans are disproportionately affected by drug-related incarceration. In many cases, young people and women who are low-level carriers of drugs end up with significant sentences. Prison is of course a place where additional drug use and violence are common. Those who aren't in any way involved personally in the drug business, but who live in communities where the rates of incarceration are high, find themselves bearing high costs, too.

When it comes to drugs, most of us are not pure prohibitionists at heart. We regularly consume caffeine through the workday, and occasionally alcohol after the workday. Maybe we don't use nicotine ourselves, but we don't see a compelling reason why our friends who like a nicotine hit now and again should be locked up. A growing number of Americans live in states–Washington, Colorado, Oregon, and Alaska–where recreational use of marijuana is legal. Some countries like Uruguay are experimenting with legalization of marijuana, as well.

On the other side, most of us are not pure libertarians when it comes to drugs, either. Rules about age limits, time and place of use, and intoxication while driving or just walking down the street can make some sense. A country which gives serious consideration to limiting the size and availability of sugared soft drinks is unlikely to take a hands-off attitude to drug use.

When it comes to policy proposals, the Commission is essentially arguing that reducing the costs of anti-drug policies should matter, too. Without endorsing all of these steps myself, here are some of the recommendations:

  • Decriminalise minor, non-violent drug offences— use, possession, and petty sale—and strengthen health and social-sector alternatives to criminal sanctions.
  • Reduce the violence and other harms of drug policing, including phasing out the use of military forces in drug policing, better targeting of policing on the most violent armed criminals, allowing possession of syringes, not targeting harm-reduction services to boost arrest totals, and eliminating racial and ethnic discrimination in policing.
  • Ensure easy access to harm-reduction services for all who need them as a part of responding to drugs, in doing so recognising the effectiveness and cost-effectiveness of scaling up and sustaining these services. OST [opioid substitution therapy], NSP [needle and syringe programmes], supervised injection sites, and access to naloxone—brought to a scale adequate to meet demand—should all figure in health services … 
  • Efforts to address drug-crop production need to take health into account. Aerial spraying of toxic herbicides should be stopped, and alternative development programmes should be part of integrated development strategies …
  • Although regulated legal drug markets are not politically possible in the short term in some places, the harms of criminal markets and other consequences of prohibition catalogued in this Commission will probably lead more countries (and more US states) to move gradually in that direction—a direction we endorse. 

Hat tip: I ran across a mention of the Johns Hopkins–Lancet Commission on Drug Policy and Health in a post by Emily Skarbek at the Econlog website.

The Relationship Between Household Worth and Personal Saving

Household net worth can move substantially within a few years–say, when the stock market or real estate prices have a substantial rise or fall. When household worth rises, people tend to save less; conversely, when household worth falls, people tend to save more. The Federal Reserve Bank of New York offers a nice illustration of this relationship in the April 2016 edition of its monthly publication, "U.S. Economy in a Snapshot."

The horizontal axis of this figure shows household net worth for the US economy as a whole, expressed as a share of total disposable income in the US economy at the time. Thus, total household net worth in the last few decades has ranged from about 450-650% of disposable income at any given time. The vertical axis measures the personal saving rate. The blue diamonds show quarterly data on these two variables from 1983 to 2005, and the blue line shows a best-fit curve for the relationship between these variables over this time.
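The mechanics of such a best-fit curve are easy to reproduce. Here's a minimal Python sketch, using made-up quarterly observations as stand-ins for the New York Fed's actual data, that fits a simple downward-sloping relationship of the saving rate to the net worth/income ratio:

    # Minimal sketch: fit a best-fit line of the personal saving rate on the
    # household net worth/disposable income ratio. The data points below are
    # illustrative stand-ins, not the New York Fed's actual quarterly series.
    import numpy as np

    net_worth_ratio = np.array([480, 510, 540, 580, 620, 643])  # % of disposable income
    saving_rate = np.array([9.0, 8.0, 6.5, 5.5, 4.5, 3.8])      # personal saving rate, %

    # A first-degree polynomial (a straight line) is the simplest version of
    # the best-fit curve shown in the figure.
    slope, intercept = np.polyfit(net_worth_ratio, saving_rate, 1)

    # Predicted saving rate if the net worth ratio falls back to 500%.
    predicted = slope * 500 + intercept
    print(f"slope: {slope:.3f} points of saving rate per point of net worth ratio")
    print(f"predicted saving rate at a 500% net worth ratio: {predicted:.1f}%")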

What has happened since 2006 is shown by the red points, and the line connecting them. In the first quarter of 2006, a combination of high stock prices and soaring real estate values had driven the ratio of household worth/disposable income up to 643%, while the personal saving rate had fallen to 3.8%. However, the combination of the drop in the stock market and the fall in housing prices brought the household worth/disposable income ratio down to about 500%. The corresponding change in the personal saving rate pretty much matched what would have been expected from the earlier data.

As the stock market and housing prices recovered, so did household net worth. As the points defining the red line moved out to the right, personal saving rates at first seemed on their way to falling again. But in the last couple of years, the personal saving rate has stayed moderately but noticeably higher than would have been predicted from the earlier data based on household worth.

Obviously, lots of factors affect the personal saving rate other than just household net worth. For example, the New York Fed writes: "Restrained access to credit, continued high demand for precautionary saving, and increased concentration of wealth at the top of the income distribution may be potential explanations for this recent pattern." These potential causes will be researched and debated. But in the meantime, the tendency of people to save more than expected as their net worth rises is one factor that has contributed to the sluggishness of the economic upswing in the last few years.

Saving Global Fisheries With Property Rights

Economists often use over-fishing as an example of the "tragedy of the commons," a situation in which each individual who exploits a common resource benefits individually, but no one has an individual incentive to trim back on exploiting the resource in the name of preserving its long-run health. An alternative is to use tradeable property rights for catching fish, which leads to a situation in which those who are doing the fishing have some incentive both to restrain themselves and to monitor others.

A group of 12 authors shows what is at stake in their essay "Global fishery prospects under contrasting management regimes," which was published on-line in the Proceedings of the National Academy of Sciences on February 26, 2016. The authors are Christopher Costello, Daniel Ovando, Tyler Clavelle, Kent Strauss, Ray Hilborn, Michael C. Melnychuk, Trevor A. Branch, Steven D. Gaines, Cody S. Szuwalski, Reniel B. Cabral, Douglas N. Rader, and Amanda Leland.

\”What would extensive fishery reform look like? In addition, what would be the benefits and trade-offs of implementing alternative approaches to fisheries management on a worldwide scale? To find out, we assembled the largest-of-its-kind database and coupled it to state-of-the-art bioeconomic models for more than 4,500 fisheries around the world. We find that, in nearly every country of the world, fishery recovery would simultaneously drive increases in food provision, fishery profits, and fish biomass in the sea. Our results suggest that a suite of approaches providing individual or communal access rights to fishery resources can align incentives across profit, food, and conservation so that few tradeoffs will have to be made across these objectives in selecting effective policy interventions. …

\”Current status is highly heterogeneous—the median fishery is in poor health (overfished,with further overfishing occurring), although 32% of fisheries are in good biological, although not necessarily economic, condition. Our business-as-usual scenario projects further divergence and continued collapse for many of the world’s fisheries. Applying sound management reforms to global fisheries in our dataset could generate annual increases exceeding 16 million metric tons (MMT) in catch, $53 billion in profit, and 619 MMT in biomass relative to business as usual. We also find that, with appropriate reforms, recovery can happen quickly, with the median fishery taking under 10 y to reach recovery targets. Our results show that commonsense reforms to fishery management would dramatically improve overall fish abundance while increasing food security and profits. …

\”We examined three approaches to future fishery management: (1) business-as-usual management (BAU) (for which status quo management is used for projections), (2) fishing to maximize long-term catch (FMSY), and (3) rights-based fishery management (RBFM), where economic value is optimized. The latter approach, in which catches are specifically chosen to maximize the long-term sustainable economic value of the fishery, has been shown to increase product prices (primarily due to increased quality and market timing) and reduce fishing costs (primarily due to a reduced race to fish); these are reflected in the model. In all scenarios, we account for the fact that fish prices will change in response to levels of harvest.\”

Here's a flavor of their results in an interestingly multidimensional graph. The vertical axis is based on a measure of the biomass of the catch divided by the maximum sustainable yield. If this ratio is 80% or higher, then the fishery is taken to be sustainably managed; if it's less than 80%, then overfishing is occurring. Thus, these estimates show that in a business-as-usual system, the share of fisheries that are being sustainably managed will continue to decline, and to experience financial losses. In contrast, the rights-based fishing management techniques at the top do the most to keep fisheries sustainable, and also have the biggest harvests (shown by the diameter of the circles) and the highest profits (shown by the blue color).
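To make the 80% threshold concrete, here is a small Python sketch applying the classification rule described above. The fishery names and ratios are hypothetical examples of my own, not entries from the paper's database:

    # Classify fisheries by the ratio described above: biomass relative to the
    # maximum-sustainable-yield benchmark. Ratios of 80% or higher count as
    # sustainably managed; below 80% indicates overfishing.
    # The fisheries and ratios here are hypothetical illustrations.
    fisheries = {
        "Fishery A": 1.10,
        "Fishery B": 0.85,
        "Fishery C": 0.45,
        "Fishery D": 0.78,
    }

    THRESHOLD = 0.80

    for name, ratio in fisheries.items():
        status = "sustainably managed" if ratio >= THRESHOLD else "overfished"
        print(f"{name}: ratio {ratio:.2f} -> {status}")

    share_sustainable = sum(r >= THRESHOLD for r in fisheries.values()) / len(fisheries)
    print(f"Share of fisheries sustainably managed: {share_sustainable:.0%}")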

All the Job Growth is in "Alternative" Jobs

There's a widespread sense that fewer American jobs involve an ongoing connection to an employer, and a larger share are in some sense temporary or on-call. "Gig economy" jobs with companies like Uber are the most prominent recent example of this concern, but while the issue seems potentially much broader, hard data is lacking. The US Bureau of Labor Statistics has sometimes conducted a Contingent Worker Survey, but for budgetary reasons, that survey hasn't been conducted since 2005.
The Secretary of the US Department of Labor, Thomas Perez, recently announced that the survey would be done again in May 2017. But in the meantime, efforts to spell out "How Many in the Gig Economy?" (February 16, 2016) have typically used a wide variety of definitions and partial data sources, making it hard to reach clear conclusions.

Lawrence F. Katz and Alan B. Krueger took on this challenge head-on. The RAND Corp. conducts research using the American Life Panel, "a nationally representative, probability-based panel of over 6000 members ages 18 and older." Katz and Krueger contracted with RAND to include a set of questions about contingent workers in the October-November 2015 American Life Panel Survey. The questions were based on those used in 1995 and 2005 by the US Bureau of Labor Statistics. The first round of results from this research appear in a working paper, "The Rise and Nature of Alternative Work Arrangements in the United States, 1995-2015," which was published online on March 29, 2016.

The headline finding is that the share of US workers in "alternative" arrangements didn't rise much from 1995 to 2005, but did indeed rise substantially from 2005 to 2015. However, at least so far, only a small share of that increase is due to on-line gig economy jobs like Uber. Katz and Krueger write (citations omitted):

A comparison of our survey results from the 2015 RPCWS [RAND-Princeton Contingent Worker Survey] to the 2005 BLS CWS [Bureau of Labor Statistics Contingent Worker Survey] indicates that the percentage of workers engaged in alternative work arrangements – defined as temporary help agency workers, on-call workers, contract company workers, and independent contractors or freelancers – rose from 10.1 percent in February 2005 to 15.8 percent in late 2015. This increase is particularly noteworthy given that the BLS CWS showed hardly any change in the percent of workers engaged in alternative work arrangements from 1995 to 2005. We further find that about 0.5 percent of workers indicate that they are working through an online intermediary, such as Uber or Task Rabbit … Thus, the online gig workforce is relatively small compared to other forms of alternative work arrangements, although it is growing very rapidly. … The General Accounting Office (2015) analyzes data from the General Social Survey and CWS and finds that an expansive definition of alternative work arrangements, which includes part-time employees, increased from 35.3 to 40.4 percent of employment from 2006 to 2010.

Future work on this data will dig into details about wages, total earnings, and work hours for these alternative workers. But some striking patterns emerge even from this first cut at the data. For starters, as Katz and Krueger write: "A striking implication of these estimates is that all of the net employment growth in the U.S. economy from 2005 to 2015 appears to have occurred in alternative work arrangements."
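The arithmetic behind that implication is easy to check. Here's a back-of-the-envelope sketch in Python: the 10.1% and 15.8% shares come from the paper, while the total employment levels are rough illustrative round numbers of my own, not Katz and Krueger's exact figures:

    # Back-of-the-envelope check of the claim that essentially all net
    # employment growth from 2005 to 2015 occurred in alternative work
    # arrangements. The shares are from Katz and Krueger; the total employment
    # levels are rough illustrative round numbers, not their exact figures.
    total_2005 = 140.0  # million workers (illustrative)
    total_2015 = 149.0  # million workers (illustrative)

    alt_2005 = 0.101 * total_2005   # 10.1% share in February 2005
    alt_2015 = 0.158 * total_2015   # 15.8% share in late 2015

    growth_total = total_2015 - total_2005
    growth_alt = alt_2015 - alt_2005
    growth_traditional = growth_total - growth_alt

    print(f"Total employment growth:        {growth_total:+.1f} million")
    print(f"Alternative-arrangement growth: {growth_alt:+.1f} million")
    print(f"Traditional-arrangement growth: {growth_traditional:+.1f} million")
    # With these round numbers, traditional employment is roughly flat or even
    # slightly shrinking, so alternative arrangements account for all net growth.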

This change toward alternative work arrangements is widespread across industries. It also seems to be occurring with roughly equal force across the income distribution, although the "alternative" workers at the low end of the income distribution are more likely to be in temporary help agency jobs and on-call jobs, while the alternative workers at the high end of the income distribution are more likely to be independent consultants and freelancers.

But more broadly, it appears that those who have been looking for jobs in the last decade or so, or who are looking now, are much more likely to find that the jobs on offer involve a fundamentally different set of employment relationships compared to the common jobs of the 20th century–much less likely to involve an ongoing relationship with an employer.

Global Capital Flows: Why No Crisis (So Far) This Time?

Here's the puzzle. Outflows of international capital from emerging markets around the world played a key role in causing financial shocks and steep recessions all over the globe in the late 1990s, including the east Asian financial crisis affecting countries like Korea, Thailand, and Indonesia in 1997-98; Latin American countries like Brazil (1999) and Argentina (2001); along with an assortment of other places like Ukraine (1998) and Turkey (2000). But even larger outflows of international capital from emerging markets have been occurring in the last few years, with much less dire economic consequences. What changed? The World Economic Outlook published by the IMF in April 2016 devotes a chapter to this subject: "Chapter 2: Understanding the Slowdown in Capital Flows to Emerging Markets."

Here's a figure that sets the stage by showing net capital inflows to emerging market economies since 1980. You can see the drop in capital inflows in the late 1990s, surrounded by various crises. The more recent drop was accompanied by some crises during the worst of the global recession of 2008 and its immediate aftermath, but not much since then. (The recent external financial crises in this data are Ukraine in 2014 and Albania in 2015.)

What changed? In the late 1990s, the destructive economic dynamic sometimes evolved like this. Banks and large companies in a country like Thailand would borrow in US dollars. They would convert those US dollars to the local currency–say, Thai baht–and then lend and spend that currency in Thailand. This process of borrowing in US dollars and lending in Thai baht seemed fairly safe, because government policy had been to keep exchange rates fixed (or close to it) over time. But when international capital inflows went into reverse, exchange rates declined. As a result, the big banks and companies that had borrowed in US dollars and loaned in Thai baht could not repay those US dollar loans. The government had often in some way guaranteed the stability of the banking system, but didn't hold much in the way of US dollar reserves, so the government finances ended up in trouble, too.
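A worked example makes the currency mismatch concrete. The sketch below uses made-up round numbers–a hypothetical $100 million loan and an exchange rate that halves in value–rather than actual Thai figures:

    # Illustrative currency-mismatch arithmetic for a bank that borrows in US
    # dollars and lends in local currency. All numbers are hypothetical.
    usd_borrowed = 100.0        # million US dollars owed to foreign lenders
    rate_at_borrowing = 25.0    # baht per dollar when the loan was taken out
    rate_after_crisis = 50.0    # baht per dollar after the currency depreciates

    # The bank converted the dollars into baht and lent them domestically.
    baht_assets = usd_borrowed * rate_at_borrowing   # 2,500 million baht

    # Repaying the dollar debt after depreciation takes twice as many baht.
    baht_needed_to_repay = usd_borrowed * rate_after_crisis  # 5,000 million baht

    shortfall = baht_needed_to_repay - baht_assets
    print(f"Baht raised from the loan:   {baht_assets:,.0f} million")
    print(f"Baht needed to repay:        {baht_needed_to_repay:,.0f} million")
    print(f"Shortfall from depreciation: {shortfall:,.0f} million baht")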

In the last five years, many elements of this interaction have changed, at least in part because all the players learned from the experiences of the late 1990s. Here are some examples from the IMF report:

A rising share of the original international capital inflows isn't in the form of borrowed money and debt, but instead comes in the form of stock market investments or foreign direct investment. These other kinds of investments are more flexible: an inability to repay debt causes legal consequences and even bankruptcy, but a drop in the price of stock market investments means only a loss in asset values.

Many of the big players among emerging markets no longer need to borrow almost exclusively in US dollars; instead, they can now do a lot of borrowing in their own currency. As a result, they aren't exposed to the risk of exchange rate shifts. Here's a figure showing the rising ability of government and nongovernment borrowers in emerging markets to borrow in their own currency.

Many of the key emerging markets no longer fix their exchange rate. What used to happen back in the 1980s and 1990s was that governments would claim to have a fixed exchange rate, which actually was a situation where the exchange rate was fixed for a time and then took a huge jump. Ongoing exchange rate adjustments mean that the players in the global economy are more likely to take the risks of exchange rate shifts into account, and the adjustments can be more gradual.

Many of the key emerging markets have built up substantial US dollar foreign reserves. Thus, if there is a sharp decline in capital inflows or an actual outflow, they can soften the effect of this change–at least for a time–by drawing down these foreign reserves.

The IMF report mentions some other factors, as well. But the central point is a bit of good news for the global economy: As emerging market economies are playing a larger role in global economic output, they are also becoming more integrated into the global financial system in more flexible ways. As a result, the cycles of inflows and outflows of international capital that caused severe disruptions and financial crises back in the 1980s and 1990s have had less power to do so in the last five years.

The EU Carbon Trading Market: A Decade of Experience

It's common to hear proposals that the US should adopt a "cap-and-trade" system for reducing climate emissions. Indeed, back in 2009 the House of Representatives passed the American Clean Energy and Security Act, the so-called "Waxman-Markey" act, which would have implemented such a law, but the legislation died in the US Senate without ever being brought to a vote. In the meantime, the European Union has now had a cap-and-trade policy in place for 10 years. How well is it working?

The Winter 2016 issue of the Review of Environmental Economics and Policy has a three-paper symposium on the subject of "The EU Emissions Trading System: Research Findings and Needs." (REEP is not freely available on-line, but many readers will have access through library subscriptions.) The three papers are:

  • A. Denny Ellerman, Claudio Marcantonini, and Aleksandar Zaklan, "The European Union Emissions Trading System: Ten Years and Counting" (pp. 89-107)
  • Beat Hintermann, Sonja Peterson, and Wilfried Rickels, "Price and Market Behavior in Phase II of the EU ETS: A Review of the Literature" (pp. 108-128)
  • Ralf Martin, Mirabelle Muûls, and Ulrich J. Wagner, "The Impact of the European Union Emissions Trading Scheme on Regulated Firms: What Is the Evidence after Ten Years?" (pp. 129-148)

The paper by Ellerman, Marcantonini, and Zaklan offers an overview of the EU carbon trading market, while the other papers drill down into more specific issues. Ellerman et al. describe the EU Emissions Trading System (ETS) in this way (footnotes and citations omitted):

The EU ETS is a classic cap-and-trade system. As of 2014, the EU ETS covered approximately 13,500 stationary installations in the electric utility and major industrial sectors and all domestic airline emissions in the EU's twenty-eight member states, plus three members of the closely associated European Economic Area: Norway, Iceland, and Liechtenstein. Approximately two billion tons of carbon dioxide (CO2) and some other greenhouse gases (GHGs) are included in the system, together accounting for about 4 percent of global GHG emissions in 2014. Aside from its sheer size in terms of geographic scope, number of included sources, and value of allowances, another distinguishing feature of the EU ETS is its implementation through a multinational framework, namely the EU, rather than through the action of a single state or national government, as assumed in most theory and as has been the case for most other cap-and-trade systems.

They discuss a variety of the nuts-and-bolts details of the system. For example, should the permits to emit carbon be given to existing firms, or auctioned? Economic theory tends to favor auctioning, but the horse-trading of politics tends to favor giving the permits away, which is mainly what happened. How will the total carbon emissions over time be set? Will EU emitters be allowed to pay for steps that would reduce carbon emissions elsewhere in the world, and then use reductions to offset part of any required reduction in their own direct carbon emissions?

I won't try to describe the ins-and-outs of the EU system in this blog post. It's gone through three "phases" and a bunch of other rule changes during its single decade of existence. But two graphs give a sense of its main themes–and the difficulties in evaluating its performance. The first graph shows carbon emissions in the EU while the carbon trading system has been in effect.

Emissions are clearly down. However, it's not easy to separate out the effects of the emissions trading system from other policies that would have the effect of reducing carbon emissions, including other environmental regulations and taxes, efforts at energy conservation, subsidies for non-carbon forms of energy, and the like. In fact, a look at the price of carbon in the ETS suggests that it might not be having much of a role.

Notice that the price of allowances during "phase I" of the system fell all the way to zero: in other words, the allowed emissions of carbon were considerably above what was actually being emitted, so there was no additional cost for emitting. During phases II and III, the cost of emitting carbon has been at around €5 per ton of emissions, which is a lot lower than the price of about €30 per ton that is often recommended for making a realistic and useful dent in carbon emissions over time. To be blunt about it, the price data suggests that perhaps the EU ETS hasn't had much effect in reducing emissions, and may not have much effect looking forward.

Should the low price for carbon emissions be interpreted as a success story for an emissions trading system? After all, emissions are indeed down. Or should it be interpreted as a failure for an emissions trading system, and perhaps a sign that political pressures have led to a system where it has little effect? After all, emissions may be falling for other reasons and the current price of €5 just isn't very high. Ellerman, Marcantonini, and Zaklan describe the resulting debates in this way:

The great surprise of the second phase of the EU ETS was that, as phase III started in 2013, the price paid to emit carbon was less than €5, not the €30 or more that had been indicated by 2013 futures prices in 2008 and that was generally expected at that time. This development has created a lively debate about the future of the EU ETS and its role in climate policy. This debate can be summarized as being between those who view the current, much-lower-than-expected price as indicating serious flaws in the EU ETS and those who argue that the low price shows that the system is working exactly as it should given all that has happened since 2008 (i.e., reduced expectations for economic growth in the Eurozone, increased electricity generation from renewable sources, the significant use of offsets), including the possibility that abatement may be cheaper than initially expected. Fundamentally, this debate reflects differing views of the objectives of climate policy itself: whether the objective is solely to reduce GHG emissions or also (and perhaps principally) to transform the European energy system. Although no one is suggesting that emissions have exceeded the cap, or that they will do so, current prices do not seem likely to lead to the kind of technological transformation that would greatly reduce Europe's reliance on fossil fuels.

The other papers in the symposium make a case that the EU ETS policy should be regarded more as a partial success than as a partial failure. For example, Hintermann, Peterson, and Rickels write:

Although the EU ETS has been criticized from an environmental perspective because of its rather low carbon price signal, we believe that the policy has actually performed quite well from an economic point of view. It introduced a single price for emissions, which were previously a free public “bad,” and it correctly reflected the substantial oversupply of allowances in both phase I and phase II through a significant price drop. Moreover, the nonzero price toward the end of phase II, despite a nonbinding cap for the phase, reflected expectations of a cap on overall emissions that is binding in the long term, given the opportunity to bank allowances. … 

As is currently being discussed in the EU, the low allowance price could be counteracted through the use of new mechanisms such as price floors or strategic allowance reserves. However, a more direct (and environmentally beneficial) approach would be to tighten the cap, for example, by adjusting the rate at which it is decreased after 2020; because of banking, this should affect the price even today.  We would argue that the fact that allowance prices turned out to be lower than anticipated (and thus EU climate policy was cheaper than expected) should actually be interpreted as good news rather than a problem. After all, the main economic argument in favor of an emission allowance market is that it delivers a particular emissions goal at least cost.

Martin, Muûls, and Wagner focus on the specific sectors–energy and industry–that were directly regulated by the cap-and-trade arrangements under the ETS. They argue that there is little evidence the emissions trading system hurt these industries, and some evidence that it did accelerate their reduction in emissions and stimulate their innovation in cleaner-energy alternatives.

It seems to me that the best case one can make for the EU ETS after its first decade is along the lines of "it's a start." Sure, the prices are very low, but they aren't zero. Ellerman et al. emphasize that the ETS is set up in a way that, over the long run, it will become a binding constraint forcing carbon emissions to decline. They write:

Absent a decision by the EU to abandon the program, which would require a super-majority, the EU ETS will march on with a continually declining cap, which, under all likely scenarios, will create continuing scarcity, thus virtually guaranteeing that a carbon price will be a permanent feature of the European economic landscape.

Right now, the political pressures to change the rules in such a way as to allow a lower carbon price are quite muted. It will be interesting to see how the political pressures play out a few years down the road when the carbon constraint in the ETS starts to push the price of the allowances higher, up to €30 per ton or perhaps considerably more.
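As a rough illustration of how a continually declining cap eventually bites, here is a short Python sketch. The 1.74% annual linear reduction factor is, as I understand it, the rate used in phase III of the EU ETS; the base cap level is a made-up round number for illustration:

    # Sketch of a linearly declining emissions cap. The 1.74% annual linear
    # reduction factor matches (as I understand it) the rate used in phase III
    # of the EU ETS; the 2,000 million-ton base cap is an illustrative number.
    base_cap = 2000.0          # million tons of CO2, illustrative
    reduction_factor = 0.0174  # fraction of the base cap removed each year

    # Linear decline: the same tonnage is cut from the cap every year.
    caps = {year: base_cap - reduction_factor * base_cap * (year - 2013)
            for year in (2013, 2020, 2030)}

    for year, cap in caps.items():
        print(f"{year}: cap = {cap:,.0f} million tons ({cap / base_cap:.0%} of base)")

With these illustrative numbers, the cap shrinks to roughly 70% of its base level by 2030, which conveys why scarcity–and hence a positive carbon price–is built into the system's design.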

What is a "Good Job?"

On the surface, it's easy to sketch what a "good job" means: having a job in the first place, along with good pay and access to benefits like health insurance. But that quick description is far from adequate, for several interrelated reasons. When most of us think about a "good job," we have more than the paycheck in mind. Jobs can vary a lot in working conditions and predictability of hours. Jobs also vary according to whether the job offers a chance to develop useful skills and a chance for a career path over time. In turn, the extent to which a worker develops skills at a given job will affect whether that worker is a replaceable cog who can expect only minimal pay increases over time, or whether the worker will be in a position to get pay raises–or have options to be a leading candidate for jobs with other employers.

A majority of Americans do not consider themselves to be "engaged" with their jobs. According to Gallup polling: "The percentage of U.S. workers in 2015 who Gallup considered engaged in their jobs averaged 32%. The majority (50.8%) of employees were 'not engaged,' while another 17.2% were 'actively disengaged.' … Employee engagement entered a rather static state in 2015 and has not experienced large year-over-year improvements in Gallup's 15-year history of measuring and tracking the metric. Employee engagement has consistently averaged less than 33%."

U.S. Employee Engagement, 2011-2015, monthly

What makes a "good job" or an engaging job? The classic research on this seems to come from the Job Characteristics Theory put forward by Greg R. Oldham and J. Richard Hackman in a series of papers written in the 1970s: for an overview, a useful starting point is their 1980 book Work Redesign. Here, I'll focus on their 2010 article in the Journal of Organizational Behavior summarizing some findings from this line of research over time, "Not what it was and not what it will be: The future of job design research" (31: pp. 463–479).

Oldham and Hackman point out that from the time when Adam Smith described the making of pins back in the eighteenth century, up through when Frederick W. Taylor led a wave of industrial engineers doing time-and-motion studies of workplace activities in the early 20th century, and up through the assembly line as viewed by companies like General Motors and Ford, the concept of job design focused on the division of labor. In my own view, the job design efforts of this period tended to view workers as robots that carried out a specified set of physical tasks, and the problem was how to make those worker-robots more effective.

Whatever the merits of this view for its place and time, it has clearly become outdated in the last half-century or so. Even in assembly-line work, companies like Toyota that cross-trained workers for a variety of different jobs, including on-the-spot quality control, developed much higher productivity than their US counterparts. And for the swelling numbers of service-related and information-related jobs, the idea of an extreme division of labor, micro-managed at every stage, often seemed somewhere between irrelevant and counterproductive. When worker motivation matters, the question of how to design a \”good job\” has a different focus.

By the 1960s, Frederick Herzberg was arguing that jobs often need to be enriched, rather than simplified. In the 1970s, Oldham and Hackman developed their Job Characteristics Theory, which they describe in the 2010 article like this:

We eventually settled on five "core" job characteristics: Skill variety (i.e., the degree to which the job requires a variety of different activities in carrying out the work, involving the use of a number of different skills and talents of the person), task identity (i.e., the degree to which the job requires doing a whole and identifiable piece of work from beginning to end), task significance (i.e., the degree to which the job has a substantial impact on the lives of other people, whether those people are in the immediate organization or the world at large), autonomy (i.e., the degree to which the job provides substantial freedom, independence, and discretion to the individual in scheduling the work and in determining the procedures to be used in carrying it out), and job-based feedback (i.e., the degree to which carrying out the work activities required by the job provides the individual with direct and clear information about the effectiveness of his or her performance).

Each of the first three of these characteristics, we proposed, would contribute to the experienced meaningfulness of the work. Having autonomy would contribute to jobholders' felt responsibility for work outcomes. And built-in feedback, of course, would provide direct knowledge of the results of the work. When these three psychological states were present—that is, when jobholders experienced the work to be meaningful, felt personally responsible for outcomes, and had knowledge of the results of their work—they would become internally motivated to perform well. And, just as importantly, they would not be able to give themselves a psychological pat on the back for performing well if the work were devoid of meaning, or if they were merely following someone else's required procedures, or if doing the work generated no information about how well they were performing.

Of course, not everyone at all stages of life is looking for a job that is wrapped up with a high degree of motivation. At some times and places, all people want is a steady paycheck. Thus, Oldham and Hackman added two sets of distinctions between people:

So we incorporated two individual differences into our model—growth need strength (i.e., the degree to which an individual values opportunities for personal growth and development at work) and job-relevant knowledge and skill. Absent the former, a jobholder would not seek or respond to the internal "kick" that comes from succeeding on a challenging task, and without the latter the jobholder would experience more failure than success, never a motivating state of affairs.

There has been a considerable amount of follow-up work on this approach: for an overview, interested readers might begin with the other essays in the same 2010 issue of the Journal of Organizational Behavior that contains the Oldham-Hackman essay. Their overview of this work emphasizes a number of ways in which the typical job has evolved during the last 40 years. They describe the change in this way:

It is true that many specific, well-defined jobs continue to exist in contemporary organizations. But we presently are in the midst of what we believe are fundamental changes in the relationships among people, the work they do, and the organizations for which they do it. Now individuals may telecommute rather than come to the office or plant every morning. They may be responsible for balancing among several different activities and responsibilities, none of which is defined as their main job. They may work in temporary teams whose membership shifts as work requirements change. They may be independent contractors, managing simultaneously temporary or semi-permanent relationships with multiple enterprises. They may serve on a project team whose other members come from different organizations—suppliers, clients or organizational partners. They may be required to market their services within their own organizations, with no single boss, no home organizational unit, and no assurance of long-term employment. Even managers are not immune to the changes. For example, they may be members of a leadership team that is responsible for a large number of organizational activities rather than occupy a well-defined role as the sole leader of any one unit or function.

In their essay, Oldham and Hackman run through a number of ways in which jobs have evolved that they did not expect or undervalued back in the 1970s. For example, they argue that the opportunities for enrichment in front-line jobs are larger than they expected, that they undervalued the social aspects of jobs, and that they didn't anticipate the "job crafting" phenomenon in which jobs are shaped by workers and employers rather than being firmly specified. They point out that although working in teams has become widespread, employers and workers are not always clear on the different kinds of teams that are possible: for example, "surgical teams" led by one person with support; "co-acting teams" in which people act individually, but have little need to interact face-to-face; "face-to-face teams" that meet regularly as a group to combine expertise; "distributed teams" that can draw on a very wide level of expertise when needed, but don't have a lot of interdependence or a need to meet with great regularity; and even "sand dune" teams that are constantly remaking and re-forming themselves with changing memberships and management.

When you start thinking about "good jobs" in these broader terms, the challenge of creating good jobs for a 21st century economy becomes more complex. A good job has what economists have called an element of "gift exchange," which means that a motivated worker stands ready to offer some extra effort and energy beyond the bare minimum, while a motivated employer stands ready to offer their workers at all skill levels some extra pay, training, and support beyond the bare minimum. A good job has a degree of stability and predictability in the present, along with prospects for growth of skills and corresponding pay raises in the future. We want good jobs to be available at all skill levels, so that there is a pathway in the job market for those with little experience or skill to work their way up. But in the current economy, the average time spent at a given job is declining and on-the-job training is in decline.

I certainly don't expect that we will ever reach a future in which jobs will be all about deep internal fulfillment, with a few giggles and some comradeship tossed in. As my wife and I remind each other when one of us has an especially tough day at the office, there's a reason they call it "work," which is closely related to the reason that you get paid for doing it.

But with the unemployment rate now under 5%, the main issue in the workforce isn't a raw lack of jobs–as it was in the depths of the Great Recession–but instead is about how to encourage the economy to develop more good jobs. I don't have a well-designed agenda to offer here. But what's needed goes well beyond our standard public arguments about whether firms should be required to offer certain minimum levels of wages and benefits.

The Exam Time/Dead Grandmother Syndrome

Here's an oldie-but-a-goodie that I'm sure some readers have already seen over the years, a satirical piece about the extremely high death rates of the grandmothers of college students during exam period. Mike Adams writes about "The Dead Grandmother/Exam Syndrome" in the November/December 1999 issue of the Annals of Improbable Research.

In true social-science fashion, Adams first provides background, then establishes the facts, discusses lines of causality, then proposes a solution.

Background

\”In my travels I found that a similar phenomenon is known in other countries. In England it is called the “Graveyard Grannies” problem, in France the “Chere Grand’mere,” while in Bulgaria it is inexplicably known as “The Toadstool Waxing Plan” (I may have had some problems here with the translation. Since the revolution this may have changed anyway.) Although the problem may be international in scope it is here in the USA that it reaches its culmination, so it is only fitting that the first warnings originate here also. The basic problem can be stated very simply: A student’s grandmother is far more likely to die suddenly just before the student takes an exam, than at any other time of year.\”

Facts

\”For over twenty years I have collected data on this supposed relationship … [W]hen no exam is imminent the family death rate per 100 students (FDR) is low and is not related to the student’s grade in the class. The effect of an upcoming exam is unambiguous. The mean FDR jumps from 0.054 with no exam, to 0.574 with a mid-term, and to 1.042 with a final, representing increases of 10-fold and 19-fold, respectively. … [T]he changes are strongly grade dependent … Overall, a student who is failing a class and has a final coming up is more than 50 times more likely to lose a family member than is an A student not facing any exams.\” 

Of course, the averages cannot capture the extreme cases, like one member of the baseball team "who tragically lost at least one grandmother every semester for four years."

Causality

\”Only one conclusion can be drawn from these data. Family members literally worry themselves to death over the outcome of their relatives’ performance on each exam. Naturally, the worse the student’s record is, and the more important the exam, the more the family worries; and it is the ensuing tension that presumably causes premature death. Since such behavior is most likely to result in high blood pressure, leading to stroke and heart attacks, this would also explain why these deaths seem to occur so suddenly, with no warning and usually immediately prior to the exam. It might also explain the disproportionate number of grandmothers in the victim pool, since they are more likely to be susceptible to strokes. This explanation, however, does not explain why grandfathers are seldom affected, and clearly there are other factors involved that have not been identified. Nonetheless, there is considerable comfort to be had in realizing that these results indicate that the American family is obviously still close-knit and deeply concerned about the welfare of individual members, perhaps too much so.\”

Solutions

Adams evaluates the merits of three different solutions to saving the grandmothers:
1) Stop giving exams.
2) Allow only orphans to enroll at universities.
3) Have students lie to their families.

Those who want more detail on this health scourge are encouraged to check Adams's paper. It has actual tables and figures, so it must be true.

Context on Corporate Profits

High US corporate profits are in the news, in part because they are the subject of a recent cover story in the Economist magazine. Here's some longer-term context, and a few reflections.

As a starting point, here's basic data on corporate profits divided by GDP for the period since World War II. The red line on top is profits before tax; the blue line at the bottom is profits after tax. The recent rise in profits is clear. But it's also interesting to note that the 1980s and 1990s were a period of relatively low corporate profits, while profits were higher from the 1950s through the 1970s.
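For readers who want to reproduce this kind of series, here's a minimal Python sketch using pandas_datareader to pull data from FRED. The series codes CP (after-tax corporate profits) and GDP (nominal GDP) are my assumption about the relevant FRED series, so anyone replicating the figure exactly should verify them against the source:

    # Sketch: corporate profits as a share of GDP, pulled from FRED.
    # CP  = Corporate Profits After Tax (quarterly, nominal)
    # GDP = Gross Domestic Product (quarterly, nominal)
    # These series codes are my assumption about the relevant FRED series;
    # check them against the figure's source before relying on the output.
    from pandas_datareader import data as pdr

    start, end = "1947-01-01", "2016-01-01"
    profits = pdr.DataReader("CP", "fred", start, end)
    gdp = pdr.DataReader("GDP", "fred", start, end)

    ratio = (profits["CP"] / gdp["GDP"]) * 100  # after-tax profits, % of GDP

    print("Average after-tax profit share of GDP:")
    print(f"  1950-1979: {ratio.loc['1950':'1979'].mean():.1f}%")
    print(f"  1980-1999: {ratio.loc['1980':'1999'].mean():.1f}%")
    print(f"  2000-2015: {ratio.loc['2000':'2015'].mean():.1f}%")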

There are a bunch of different ways to adjust profits to get a more nuanced number, but the same basic pattern over time tends to show through of higher profits from the 1950s to the 1970s, lower profits in the 1980s and 1990s, and generally higher profits in the 2000s–with a downward blip in profits during the Great Recession. Here are a few thoughts about this pattern.

1) It's common for those pointing to high profits to assert that they are closely linked to higher levels of income inequality. From a long-run perspective, the connection isn't at all obvious. After all, inequality was much lower back in the 1950s, 1960s, and 1970s. The rise in inequality started in the 1970s, but corporate profits dropped to lower levels in the 1980s and the 1990s.

2) As the figure shows, the gap between before-tax and after-tax profits was larger back in the 1950s, 1960s, and 1970s. Here's a figure showing that corporate taxes have become relatively smaller as a share of GDP over time. But much of the drop in corporate tax revenues as a share of the economy happened back from the 1950s through the 1970s, when corporate profits were fairly high. The recent rise in corporate profits since about 2000 is apparent both in pre-tax and in post-tax corporate profits. It's not a creation of the corporate tax system.

3) There's a tendency in public discourse to treat "profits" as essentially a synonym for "loot and plunder." For economists, profits are instead a signal conveying information. The problem lies in interpreting that information! Ideally, high profits are a signal that what is being produced by the firm is highly valued by consumers, and so it's a good time for firms to invest, expand output, hire more workers, and give raises to existing workers. Indeed, high profits provide the finance to help those steps along. High profits in the 1950s and 1960s, for example, were accompanied by (mostly) low unemployment and solid expansions of jobs and wages.

4) In contrast, the recent wave of high profits, especially since the Great Recession, doesn't seem to be combining with high investment, strong expansions in output and wages, and so on. There's some controversy on this point. For example, it may be that what we measure as "investment" in the US economy is conceptually outdated, because 21st century firms may not be increasing investment in machinery and equipment, but are instead investing in intangible capabilities to provide new services in ways that conventional statistics don't capture. Unemployment rates have fallen more slowly than hoped, but they are now below 5%. Wages haven't risen as hoped, but there are some preliminary signs that they may be starting to do so.

5) In one way or another, profits do eventually flow back to the rest of the economy, but the mechanisms through which this happens have changed over time. For example, the high-profit companies of the 1950s and 1960s also tended to pay high rates of dividends to shareholders. Now, high-profit companies are more likely to use profits to engage in share buy-backs.

6) If you look at the distribution of profits across US companies, the distribution of profits has become more unequal: that is, the companies with the highest profits are also getting a higher share of the profits. For discussion, see "Greater Inequality of Returns Across US Firms" (October 22, 2015). We also know that a major wave of mergers and acquisitions is underway. The Economist cover story reports that concentration within industries is rising: that is, the share of sales going to the top handful of firms is rising. We know that there has been a decline in startup rates for new US firms, and that the share of workers with jobs at young companies is dropping. All of this paints a picture of a group of established firms that are making substantially higher profits. After all, the high corporate profits from the 1950s through the 1970s were in some part due to enormous and successful US corporations that for much of this period faced only limited global competition.

7) Like so many economic issues, the meaning of high corporate profits will be clarified over the next year or two with the arrival of additional data. If the next few years see growth in investment and wages, together with a sag in profits, and perhaps a wave of money flowing back to investors as part of share buy-backs and merger and acquisition deals, then these last few years will look like a transitional period after the Great Recession. But if profits continue to remain high, then other explanations become more likely. Some of the higher profits may be due to weakened competition between producers. A related theory is that many of the high-profit companies are technology companies where startups can be risky and have high costs, but when a company succeeds, it has built a community of appreciative users in such a way that the profits can be extraordinary and long-lasting.