Maybe Too Big To Fail, but Not Too Big to Suffer

Which financial institutions are "too big to fail"? According to a report from the Financial Stability Board, a working group of governments and central banks that tries to facilitate international cooperation on these issues, here's the list as of November 2012.

Ready for a nice bowl of acronym soup? This list is actually the "global systemically important banks," known as the G-SIBs, which are a subcategory of the "global systemically important financial institutions," or G-SIFIs. Already finalized, as the Financial Stability Board (FSB) explains, are guidelines for the "domestic systemically important banks," the D-SIBs, which national governments are expected to implement by 2016. Meanwhile, the International Association of Insurance Supervisors (IAIS) has proposed a method of deciding which insurers are "global systemically important insurers," the G-SIIs. The FSB and the International Organization of Securities Commissions (IOSCO) are now working on a method to identify the systemically important non-bank, non-insurance financial institutions (no acronym yet available).

Meanwhile, the Financial Stability Oversight Council (FSOC) within the U.S. Department of the Treasury is working on its own lists. In its 2012 annual report, it designated eight systemically important "financial market utilities"–that is, firms that are intimately involved in carrying out various financial transactions. Here's the list: the Clearing House Payments Company, CLS Bank International, the Chicago Mercantile Exchange, the Depository Trust Company, Fixed Income Clearing Corporation, ICE Clear Credit, National Securities Clearing Corporation, and the Options Clearing Corporation. (And I freely admit that I have only a fuzzy idea of what several of those companies actually do.)

In addition, the Dodd-Frank legislation presumes that U.S. banks are systemically important if their holdings exceed $50 billion in consolidated assets. David Luttrell, Harvey Rosenblum, and Jackson Thies explain these points, along with a nice overview of many broader issues, in their staff paper on "Understanding the Risks Inherent in Shadow Banking: A Primer and Practical Lessons Learned," written for the Dallas Fed. For perspective, they offer a list of the largest U.S. bank holding companies, all of which comfortably exceed the $50 billion benchmark for consolidated assets.

This litany of who is "systemically important" feels disturbingly long, and it's only getting longer. But ultimately, it's a good thing to have such lists–at least if they lead to policy changes. Once you have admitted that a number of financial institutions are too big to fail, because their failure would lead to too great a disruption in financial markets, and once you have then made a commitment that government should not bail out such institutions, what policy prescription follows?

The proposal from the Financial Stability Board is that the G-SIBs (global systemically important banks, of course) should face a different set of regulatory rules. As the Dallas Fed economists explain, these could include "higher capital requirements, supervisory expectations for risk management functions, data aggregation capabilities, risk governance, and internal controls." There are two difficulties with this approach. First, it may not work. After all, a considerable regulatory apparatus in the U.S. did not prevent the financial crashes of 2007-2009. And second, it may work with undesired side effects. In particular, if there are heavy rules on one set of regulated financial institutions, then there will be a tendency for financial activities to flow to less-regulated financial institutions. As the authors put it: "If regulation constrains commercial banks’ risk taking, many questionable assets may simply migrate to less-regulated entities."

I don't oppose regulating the SIFIs (that would be "systemically important financial institutions") more heavily. But it's important to be clear on the limits of this approach. After all, it's not that these institutions are big, but rather that they are so tightly interconnected with other institutions. As Luttrell, Rosenblum, and Thies explain: "TBTF [that would be "too big to fail"] is not just about bigness; it also includes “too many to fail” and “too opaque to regulate.”"

It seems to me that the key here is to remember that maybe some institutions are too big to fail, but they aren't too big to suffer! In particular, they aren't too big to have their top managers booted out–without bonuses. They aren't too big to have their shareholders wiped out, and the company handed over to bondholders–who are then likely to end up taking losses as well. One task of financial regulators should be to design and pre-plan an "orderly resolution," as they call it. The trick is to devise ways so that if these systemically important firms run into financial difficulties, their tasks and external obligations will not be much disrupted, for the sake of financial stability, but those who invest in those firms and those who manage them will face costs.

Why Has Health Information Technology Been Ineffective?

Health information technology is one of the methods often proposed to help rein in rising health care costs. The underlying story is plausible: greater efficiency in dealing with the provision of care and the paperwork burden of medicine, and greater safety for patients as providers can be aware of past medical histories and ongoing treatments. However, at least so far, health information technology hasn’t done much to reduce costs.  Arthur L. Kellermann and Spencer S. Jones ask “What It Will Take to Achieve the As-Yet-Unfulfilled Promises of Health Information Technology” in the first issue of Health Affairs for 2013 (pp. 63-68). (This journal is not freely available on-line, but many academic readers will have access through library subscriptions.)
Back in 2005, a group of RAND researchers forecast that rapid adoption of health information technology could save $81 billion annually. Kellermann and Jones essentially ask: Why hasn’t this vision come to pass?  Here are some of their answers (as usual, footnotes are omitted).
Health providers and patients have been slow to adopt information technology. “The most recent data suggest that approximately 40 percent of physicians and 27 percent of hospitals are using at least a “basic” electronic health record. … Uptake of health IT by patients is even worse.”
Existing health information technology systems don't interconnect. “Are modern health IT systems interconnected and interoperable? The answer to this question, quite clearly, is no. The health IT systems that currently dominate the market are not designed to talk to each other. … As a result, the current generation of electronic health records function less as an “ATM cards,” allowing a patient or provider to access needed health information anywhere at any time, than as “frequent flyer cards” intended to enforce brand loyalty to a particular health care system.”
Health care providers dislike the existing information technology systems. “Considering the theoretical benefits of health IT, it is remarkable how few fans it has among health care professionals. The lack of enthusiasm might be attributed, in part, to the sobering results of studies showing that in many cases health IT has failed to deliver promised gains in productivity and patient safety. An even more plausible cause is that few IT vendors make products that are easy to use. As a result, many doctors and nurses complain that health IT systems slow them down.”
Existing health information technology can raise costs. On this point, the authors cite a New York Times article from last fall by Reed Abelson, Julie Creswell, and Griff Palmer. (Full disclosure: Reed Abelson was a friend of mine back in college days.) The NYT story reports: "[T]he move to electronic health records may be contributing to billions of dollars in higher costs for Medicare, private insurers and patients by making it easier for hospitals and physicians to bill more for their services, whether or not they provide additional care. Hospitals received $1 billion more in Medicare reimbursements in 2010 than they did five years earlier, at least in part by changing the billing codes they assign to patients in emergency rooms, according to a New York Times analysis of Medicare data from the American Hospital Directory. Regulators say physicians have changed the way they bill for office visits similarly, increasing their payments by billions of dollars as well."

Kellermann and Jones end with a plea that health information technology systems should be built on principles of interoperability, ease of use, and patient-centeredness. I have no disagreement with the principles, but I would note that even within individual companies, it has often proven quite time-consuming and difficult to integrate information technology into operations in a full and productive way. Thus, it's no surprise to me that the health care industry has faced a number of stumbling blocks. I’ve heard anecdotal stories of doctors spending inordinate amounts of time clicking through menus on some IT system, trying to figure out which boxes to check to best represent a diagnosis and a course of care. I’ve heard that some doctors, as they master the system, find that it becomes easier to bill for many separate small services that they wouldn’t have previously bothered to write up.
It seems that it should be possible for the big health care finance operations, both public and private, to get together and hammer out a basic flexible framework for health care information technology. But it doesn't seem to be happening.

A Future of Low Returns?

Through the dismal stock market performance and low interest rates of the last few years, many savers and investors have held on to the hope that, when the economy eventually recovered, their financial investments would start earning the sorts of returns that they did through most of the second half of the twentieth century. However, in the Credit Suisse Global Investment Returns Yearbook 2013, Elroy Dimson, Paul Marsh, and Mike Staunton pour a few buckets of cold water on these hopes in their essay, "The low-return world." (Thanks to Larry Willmore and his "Thought du Jour" blog for the pointer to this report.)

Here are a few of their figures showing the investment world as many of us summarize it in our minds–based of course on evidence from recent decades, and with something of a U.S. focus. Thus, here's a figure showing returns on equity and on debt for a number of markets around the world. The bars on the left show a time frame since 1950; the ones on the right show a time frame since 1980. Bonds have done comparatively better since 1980, because declining interest rates have meant that bonds purchased in the past–at the higher rates then prevailing–were worth more.
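The inverse link between interest rates and the value of previously issued bonds is easy to check with a back-of-the-envelope bond-pricing calculation. The sketch below is purely illustrative: the coupon rate, yields, and maturity are numbers I made up, not anything from the Yearbook.

```python
# Hypothetical illustration: why falling market interest rates raise the value
# of bonds bought earlier at higher coupon rates. All numbers are made up.

def bond_price(face, coupon_rate, market_rate, years):
    """Present value of a bond paying an annual coupon plus face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond with a 6% coupon, issued when market rates were also 6%,
# trades at roughly its face value...
print(round(bond_price(1000, 0.06, 0.06, 10), 2))   # ~1000.00
# ...but if market rates later fall to 2%, that same bond is worth much more.
print(round(bond_price(1000, 0.06, 0.02, 10), 2))   # ~1359.30
```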

Here's a figure showing U.S. experience in the long run. The bars on the far left show how annual returns on equity have outstripped those for bonds and for bills from 1900-2012. The middle bars show a similar pattern, a bit less extreme, for the last half-century from 1963-2012. The bars on the far right show the 21st century experience from 2000-2012. Stocks have offered almost no return at all; neither have bills. Bonds have done fairly well, given the steady fall in interest rates, but as interest rates have headed toward zero, the gains from bonds seem sure to diminish, too.

 
Dimson, Marsh, and Staunton have some grim news for those waiting for a bounceback to 20th century levels of returns: "[M]any investors seem to be in denial, hoping markets will soon revert to “normal.” Target returns are too high, and many asset managers still state that their long-run performance objective is to beat inflation by 6%, 7%, or even 8%. Such aims are unrealistic in today’s low-return world. … The high equity returns of the second half of the 20th century were not normal; nor were the high bond returns of the last 30 years; and nor was the high real interest rate since 1980. While these periods may have conditioned our expectations, they were exceptional."

In thinking about bond prices in the future, they point out that central banks around the world are pledging to keep interest rates low for at least the next few years. There is little reason to think that real interest rates on bonds will rise any time soon.

In terms of equities, they restate the well-accepted insight that over time, equities will be priced in a way that will provide a higher return than bonds, to make up for the higher volatility and risk of buying equities. But how much more will stocks pay than bonds? How high will this "equity premium" be? They write:

"Until a decade ago, it was widely believed that the annualized equity premium relative to bills was over 6%. This was strongly influenced by the Ibbotson Associates Yearbook. In early 2000, this showed a historical US equity premium of 6¼% for the period 1926–99. Ibbotson’s US statistics appeared in numerous textbooks and were applied worldwide to the future as well as the past. It is now clear that this figure is too high as an estimate of the prospective equity premium. First, it overstates the long-run premium for the USA. From 1900–2012, the premium was a percentage point lower at 5.3%, as the early years of both the 20th and 21st centuries were relatively disappointing for US equities. Second, by focusing on the USA – the world’s most successful economy during the 20th century – even the 5.3% figure is likely to be an upwardly biased estimate of the experience of equity investors worldwide. … To assume that savers can confidently expect large wealth increases from investing over the long term in the stock market – in essence, that the investment conditions of the 1990s will return – is delusional."

Here are their estimates for long-run returns on stocks and bonds over the next few decades. Essentially, they forecast real stock returns in the range of 3-4% per year, and real bond returns in the range of 1/2-1% per year–well below the expectations many people are carrying around from their experiences in the 1980s and 1990s.
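To get a feel for how much the gap between these forecasts and late-20th-century returns matters for a saver, a quick compounding calculation helps. The 3.5% and 7% rates and the 30-year horizon below are illustrative choices of mine, not figures from the Yearbook.

```python
# Hypothetical illustration of how different real returns compound over a
# long saving horizon. Rates and horizon are illustrative, not the authors'.

def future_value(real_return, years, annual_saving=1.0):
    """Real value of saving one unit at the start of each year for `years` years."""
    value = 0.0
    for _ in range(years):
        value = (value + annual_saving) * (1 + real_return)
    return value

years = 30
for r in (0.035, 0.07):
    print(f"{r:.1%} real return over {years} years: {future_value(r, years):.0f} units")
# Roughly 53 units at 3.5% versus roughly 101 units at 7%: the same stream of
# contributions ends up worth about half as much in the low-return scenario.
```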

If their predictions of a low-return world over the next few decades hold up, it will impose heavy difficulties on those saving for the future–not just those doing individual saving, but also pension plans. Moreover, it can be hazardous to a solid economic recovery. They write: "Today’s low-return world is imposing stresses on investors. … For how long are low returns bearable? For investors, we fear that the answer is “as long as it takes.” While a low-return world imposes stresses on investors and savers in an over-leveraged world recovering from a deep financial crisis, it provides essential relief for borrowers. The danger here is that if this continues too long, it creates “zombies” – businesses kept alive by low interest rates and a reluctance to write off bad loans. This can suppress creative destruction and rebuilding, and can prolong the downturn."

For those who are already retired, or on the verge, prudence suggests that you base your spending plans during retirement on these kinds of low returns. For those still saving for retirement, prudence suggests trying to save more. For those trying to figure out pension plan funding, the fund may well be worse off than you think. If rates of return on equities and bonds do rebound well above these projections, it will be a happy surprise for today's savers and investors. But the prudent don't base their plans on the hope of happy surprises.

Global R&D: An Overview

Reinhilde Veugelers investigates the patterns of worldwide research and development spending in a Policy Contribution for the European think-tank Bruegel: "The World Innovation Landscape: Asia Rising?" The results are a useful reminder that thinking of China and the rest of the emerging economies as depending on low wages to drive their economic growth is so very 20th century. In this century, China in particular is staking its future economic growth on R&D spending and innovation.

As a starting point, consider snapshots of global R&D spending in 1999 and in 2009. The share of global R&D spending done by the United States, the European Union, and Japan is falling, while the share done by China, South Korea, and other emerging economies is rising.

Looking at individual countries, China has now outstripped Japan for second place in global R&D spending, and China's R&D spending is similar to that of Germany, France, and Italy combined.
Veugelers looks more closely at the group of seven countries that together account for about 71% of global R&D spending. Here is how much they spend on R&D as a percentage of their economies: by this measure, the U.S. economy does not especially stand out.

Veugelers also takes a more detailed look at government and private R&D spending in these seven countries, and finds some intriguing differences in priorities. For example, more than half of all U.S. government R&D spending goes to defense, a far higher share than in any of these other countries. The U.S. government also commits a larger share of its R&D budget to health. South Korea and Germany stand out for having a greater share of their government R&D focused on industrial production and technology. And a number of countries devote a larger share of their government R&D to the "Other" category, which includes "earth and space, transport, telecommunications, agriculture, education, culture, political systems."

In the private sector, a greater share of the U.S. private-sector R&D happens in the services sector, while the private sector in other countries focuses more of its R&D spending on machinery and equipment.

It's worth remembering that there's a lot more to innovation than just research and development spending. Yes, a nation that is the home of new innovation probably receives a disproportionate share of the gains in productivity as a result. But innovative discoveries can flow across national borders. The true economic gains from innovation are not from a discovery in a laboratory, but rather from the economic flexibility that translates the discovery into new products and jobs.

Why Are U.S. Firms Holding $5 Trillion in Cash?

One of the puzzles and frustrations of the sluggish economy since the official end of the Great Recession in June 2009 is that U.S. firms are holding enormous amounts of cash–about $5 trillion in 2011. A considerable number of pixels have been spent wondering why corporations seem so reluctant to spend, and how we might entice them to do so. But I had not known that the trend toward corporations holding more in cash very much predates the Great Recession; indeed, it was already apparent back in the 1990s. Thus, along with thinking about why events of the last few years have led corporations to hold more cash, we should be thinking about influences over the last couple of decades.

Juan M. Sánchez and Emircan Yurdagul lay out the issues in "Why Are Corporations Holding So Much Cash?" written for the Regional Economist magazine published by the Federal Reserve Bank of St. Louis. The first graph shows "cash and short-term investments," which include all securities transferable to cash. The second graph focuses just on nonfinancial, nonutility firms–thus leaving out banks, insurance companies, regulated power companies, and the like. There are some differences in timing, but the overall upward pattern is clear.

Of course, one reason for the rise in cash assets could just be overall growth of the economy. Thus, an alternative measure is to look at the ratio of cash to net assets of the firm. By this measure, which is probably more revealing, the growth of cash starts off around 1995, accelerates in the early 2000s, takes a step back in the Great Recession, and now has rebounded.

Broadly speaking, there are two reasons for firms to hold more cash: precautionary motives and repatriation taxes. Precautionary motives refer to the notion that firms operate in a situation of uncertainty, including uncertainty about what stresses or opportunities might arise, and whether they will be able to get a loan on favorable terms when they want it. Cash offers flexibility. The authors explain repatriation taxes this way: "[T]axes due to the U.S. government from corporations operating abroad are determined by the difference between the taxes already paid abroad and the taxes that U.S. tax rates would imply. Importantly, such taxation only takes place when earnings are repatriated. Therefore, firms may have incentives to keep foreign earnings abroad." To put it a little differently, if firms are thinking that they may wish to reinvest foreign profits in foreign operations, then the tax code gives them an incentive not to repatriate those profits.
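A stylized example makes the repatriation incentive concrete. The tax rates and profit figure below are invented for illustration; they are not drawn from the Sánchez and Yurdagul article.

```python
# Hypothetical sketch of the repatriation-tax incentive described above.
# The rates and profit amount are invented for illustration only.

foreign_profit = 100.0      # profit earned (and taxed) abroad
foreign_tax_rate = 0.15     # assumed foreign corporate tax rate
us_tax_rate = 0.35          # assumed U.S. statutory rate

foreign_tax_paid = foreign_profit * foreign_tax_rate
# Extra U.S. tax is owed only if and when the profit is brought home, and only
# on the difference between the U.S. rate and the tax already paid abroad.
repatriation_tax = foreign_profit * us_tax_rate - foreign_tax_paid

print(f"Tax already paid abroad: {foreign_tax_paid:.0f}")             # 15
print(f"Extra U.S. tax due on repatriation: {repatriation_tax:.0f}")  # 20
# Keeping the earnings abroad defers that extra 20, which is the incentive
# to hold foreign profits (often as cash) outside the United States.
```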

Both of these factors surely make some difference, but Sánchez and Yurdagul also seek some insight into the question by looking at which industries, and which size firms, are more likely to be increasing their cash holdings. For example, information technology firms and firms focused on tech and hardware equipment are holding noticeably more in cash, although since the end of the housing bubble, building product companies have been sitting on a lot more cash, too.

When looking by size of firm, it's smaller firms that hold more cash. In part, this surely reflects that smaller firms are more vulnerable to ups and downs, and less likely to be confident that they have access to a loan when they want it. But interestingly, it's the second-smallest quintile of firms, not the smallest, that has seen the biggest rise in cash/asset ratios.

Ultimately, these arguments made me less confident that any particular policy was going to shake loose a big share of the $5 trillion cash hoard of U.S. firms. It's not quite clear to me why smallish-sized tech firms feel the need to hold so much cash, but they clearly have felt that way for a couple of decades now. In a globalizing economy, more U.S. firms are going to have foreign operations and to be focused on expanding those operations, so it's not clear to me that a much larger share of those foreign profits will be repatriated. The cash hoard of U.S. firms isn't just, or even mainly, a post-recession phenomenon.

Reusable Grocery Bags Can Kill (Unless Washed)

One recent local environmental cause, especially popular in California, has been to ban or tax plastic grocery bags. The expressed hope is that shoppers will instead carry reusable grocery bags back and forth to the grocery store, and that plastic bags will be less likely to end up in landfills, or blowing across hillsides, or floating in water. The problem is that almost no one ever washes their reusable grocery bags. Reusable grocery bags often carry raw meat, unseparated from other foods, and are often stored for convenience in the trunk of cars that sit outside in the sun. In short, reusable grocery bags can be a friendly breeding environment for E. coli bacteria, which can cause severe illness and even death.

Jonathan Klick and Joshua D. Wright tell this story in "Grocery Bag Bans and Foodborne Illness," published as a research paper by the Institute for Law and Economics at the University of Pennsylvania Law School. As their primary example, they look at E. coli infections in San Francisco County after it adopted an ordinance severely limiting the use of plastic bags by grocery stores.

As one piece of evidence, here's a figure showing the number of emergency room visits in San Francisco County related to E. coli for the 10 quarters before and after the enactment of the ordinance, where zero on the horizontal axis is the date the ordinance went into effect. (The shaded area around the line is a 95% statistical confidence interval.) Clearly, there is a discontinuous jump in the number of emergency room visits.

For comparison, here's the same measure of emergency room visits related to E. coli infections for the other counties in the San Francisco Bay Area in the 10 quarters before and after the ordinance was enacted. These counties don't see a jump.
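The logic of this before-and-after comparison, with the other Bay Area counties serving as a control group, is essentially a difference-in-differences calculation. Here is a minimal sketch of that logic; the visit counts are invented placeholders, not numbers from the Klick and Wright paper.

```python
# Minimal difference-in-differences sketch of the comparison described above.
# The visit counts are invented placeholders, NOT data from the paper.

er_visits = {
    # (group, period): average quarterly E. coli-related ER visits
    ("san_francisco", "before"): 10.0,
    ("san_francisco", "after"):  14.0,
    ("other_bay_area", "before"): 30.0,
    ("other_bay_area", "after"):  31.0,
}

change_sf = er_visits[("san_francisco", "after")] - er_visits[("san_francisco", "before")]
change_other = er_visits[("other_bay_area", "after")] - er_visits[("other_bay_area", "before")]

did_estimate = change_sf - change_other
print(f"Change in San Francisco: {change_sf:+.1f} visits per quarter")
print(f"Change in comparison counties: {change_other:+.1f} visits per quarter")
print(f"Difference-in-differences estimate: {did_estimate:+.1f} visits per quarter")
# A positive estimate corresponds to a jump in the San Francisco figure that
# does not appear in the other Bay Area counties.
```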

Klick and Wright flesh out this finding in a variety of ways. They look at other measures of foodborne illness, like salmonella and campylobacter, and also find a rise associated with the ordinance limiting plastic bags. They look at other cities in California that have enacted such bans, and although it is harder for them to track health effects for individual cities–because the health statistics are collected at the county level–they find negative health effects as well. (The city of San Francisco is consolidated with the county of San Francisco.)

Overall, they find that San Francisco typically experiences about 12 deaths per year from intestinal infections, and that the restrictions on plastic bags probably led to another 5-6 deaths per year in that city–plus, of course, the personal and social costs of some dozens of additional hospitalizations. With these costs taken into account, restrictions on plastic bags stop looking like a good idea.

Of course, a possible response to this problem is to launch a public information campaign to encourage people to wash their reusable grocery bags. But that response then leads to two other issues. First, if public information campaigns can be effective on the grocery bag issue, the campaign could simply focus on the need to dispose of plastic bags properly and recycle them where possible–without a need to ban them. The argument for an ordinance sharply limiting the use of plastic grocery bags is, in effect, based on an implicit assumption that public information campaigns about grocery bags don't work well. Second, if reusable grocery bags are washed after each use, then any cost-benefit analysis of their use would have to take into account the costs of water and detergent use. I have no idea if these costs alone would outweigh the benefits of reusable grocery bags, but it might be a near thing.

Like most economists, I have a mental file drawer of "good intentions aren't enough" stories. The push to ban plastic grocery bags is one more example.

Added February 14, 2013:  For a short memo challenging the findings of this study from the San Francisco Department of Public Health, see here.

Winter 2013 Journal of Economic Perspectives

The Winter 2013 issue of my own Journal of Economic Perspectives is now available on-line. Like all issues of JEP back to the first issue in 1987, it is freely available here, courtesy of the American Economic Association. I will probably put up some posts about individual articles in the next week or so, but here is an overview. The issue has two symposia of four papers each: one on the economics of patents and the other on tradeable pollution allowances. There are a couple of individual papers, one on the empirical work about prospect theory 30 years after that theory was formulated, and the other a look back at the famous RAND health insurance experiment. My own "Recommendations for Further Reading" rounds out the issue. Here are abstracts for the papers, with links to the text.

Symposium on Patents

"The Case against Patents," by Michele Boldrin and David K. Levine

The case against patents can be summarized briefly: there is no empirical evidence that they serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity. Both theory and evidence suggest that while patents can have a partial equilibrium effect of improving incentives to invent, the general equilibrium effect on innovation can be negative. A properly designed patent system might serve to increase innovation at a certain time and place. Unfortunately, the political economy of government-operated patent systems indicates that such systems are susceptible to pressures that cause the ill effects of patents to grow over time. Our preferred policy solution is to abolish patents entirely and to find other legislative instruments, less open to lobbying and rent seeking, to foster innovation when there is clear evidence that laissez-faire undersupplies it. However, if that policy change seems too large to swallow, we discuss in the conclusion a set of partial reforms that could be implemented.
Full-Text Access | Supplementary Materials

"Patents and Innovation: Evidence from Economic History," by Petra Moser

What is the optimal system of intellectual property rights to encourage innovation? Empirical evidence from economic history can help to inform important policy questions that have been difficult to answer with modern data: For example, does the existence of strong patent laws encourage innovation? What proportion of innovations is patented? Is this share constant across industries and over time? How does patenting affect the diffusion of knowledge? How effective are prominent mechanisms, such as patent pools and compulsory licensing, that have been proposed to address problems with the patent system? This essay summarizes results of existing research and highlights promising areas for future research.
Full-Text Access | Supplementary Materials

"The New Patent Intermediaries: Platforms, Defensive Aggregators, and Super-Aggregators," by Andrei Hagiu and David B. Yoffie

The patent market consists mainly of privately negotiated, bilateral transactions, either sales or cross-licenses, between large companies. There is no eBay, Amazon, New York Stock Exchange, or Kelley's Blue Book equivalent for patents, and when buyers and sellers do manage to find each other, they usually negotiate under enormous uncertainty: prices of similar patents vary widely from transaction to transaction and the terms of the transactions (including prices) are often secret and confidential. Inefficient and illiquid markets, such as the one for patents, generally create profit opportunities for intermediaries. We begin with an overview of the problems that arise in patent markets, and how traditional institutions like patent brokers, patent pools, and standard-setting organizations have sought to address them. During the last decade, a variety of novel patent intermediaries has emerged. We discuss how several online platforms have started services for buying and selling patents but have failed to gain meaningful traction. And new intermediaries that we call defensive patent aggregators and superaggregators have become quite influential and controversial in the technology industries they touch. The goal of this paper is to shed light on the role and efficiency tradeoffs of these new patent intermediaries. Finally, we offer a provisional assessment of how the new patent intermediary institutions affect economic welfare.
Full-Text Access | Supplementary Materials

"Of Smart Phone Wars and Software Patents," by Stuart Graham and Saurabh Vishnubhakat

Among the main criticisms currently confronting the US Patent and Trademark Office are concerns about software patents and what role they play in the web of litigation now proceeding in the smart phone industry. We will examine the evidence on the litigation and the treatment by the Patent Office of patents that include software elements. We present specific empirical evidence regarding the examination by the Patent Office of software patents, their validity, and their role in the smart phone wars. More broadly, this article discusses the competing values at work in the patent system and how the system has dealt with disputes that, like the smart phone wars, routinely erupt over time, in fact dating back to the very founding of the United States. The article concludes with an outlook for systematic policymaking within the patent system in the wake of major recent legislative and administrative reforms. Principally, the article highlights how the US Patent Office acts responsibly when it engages constructively with principled criticisms and calls for reform, as it has during the passage and now implementation of the landmark Leahy-Smith America Invents Act of 2011.
Full-Text Access | Supplementary Materials

Symposium on Tradeable Pollution Allowances

"Markets for Pollution Allowances: What Are the (New) Lessons?" by Lawrence H. Goulder

About 45 years ago a few economists offered the novel idea of trading pollution rights as a way of meeting environmental goals. Such trading was touted as a more cost-effective alternative to traditional forms of regulation, such as specific technology requirements or performance standards. The principal form of trading in pollution rights is a cap-and-trade system, whose essential elements are few and simple: first, the regulatory authority specifies the cap—the total pollution allowed by all of the facilities covered by the regulatory program; second, the regulatory authority distributes the allowances, either by auction or through free provision; third, the system provides for trading of allowances. Since the 1980s the use of cap and trade has grown substantially. In this overview article, I consider some key lessons about when cap-and-trade programs work well, when they perform less effectively, how they work compared with other policy options, and how they might need to be modified to address issues that had not been anticipated.
Full-Text Access | Supplementary Materials

"The SO2 Allowance Trading System: The Ironic History of a Grand Policy Experiment," by Richard Schmalensee and Robert N. Stavins

Two decades have passed since the Clean Air Act Amendments of 1990 launched a grand experiment in market-based environmental policy: the SO2 cap-and-trade system. That system performed well but created four striking ironies: First, by creating this system to reduce SO2 emissions to curb acid rain, the government did the right thing for the wrong reason. Second, a substantial source of this system's cost-effectiveness was an unanticipated consequence of earlier railroad deregulation. Third, it is ironic that cap-and-trade has come to be demonized by conservative politicians in recent years, as this market-based, cost-effective policy innovation was initially championed and implemented by Republican administrations. Fourth, court decisions and subsequent regulatory responses have led to the collapse of the SO2 market, demonstrating that what the government gives, the government can take away.
Full-Text Access | Supplementary Materials

"Carbon Markets 15 Years after Kyoto: Lessons Learned, New Challenges," by Richard G. Newell, William A. Pizer and Daniel Raimi

Carbon markets are substantial and they are expanding. There are many lessons from market experiences over the past eight years: there should be fewer free allowances, better management of market-sensitive information, and a recognition that trading systems require adjustments that have consequences for market participants and market confidence. Moreover, the emerging market architecture features separate emissions trading systems serving distinct jurisdictions and a variety of other types of policies exist alongside the carbon markets. This situation is in sharp contrast to the top-down, integrated global trading architecture envisioned 15 years ago by the designers of the Kyoto Protocol and raises a suite of new questions. In this new architecture, jurisdictions with emissions trading have to decide how, whether, and when to link with one another. Stakeholders and policymakers must confront how to measure the comparability of efforts among markets as well as relative to a variety of other policy approaches. International negotiators must in turn work out a global agreement that can accommodate and support increasingly bottom-up approaches to carbon markets and climate change mitigation.
Full-Text Access | Supplementary Materials

"Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis," by Karen Fisher-Vanden and Sheila Olmstead

This paper seeks to assess the current status of water quality trading and to identify possible problems and solutions. Water pollution permit trading programs have rarely been comprehensively described and analyzed in the peer-reviewed literature. Including active programs and completed or otherwise inactive programs, we identify approximately three dozen initiatives. We describe six criteria for successful pollution trading programs and consider how these apply to standard water quality problems, as compared to air quality. We then highlight some important issues to be resolved if current water quality trading programs are to function as the "leading edge" of a new frontier in cost-effective pollution permit trading in the United States.
Full-Text Access | Supplementary Materials

Individual articles

"Thirty Years of Prospect Theory in Economics: A Review and Assessment," by Nicholas C. Barberis

In 1979, Daniel Kahneman and Amos Tversky published a paper in Econometrica titled "Prospect Theory: An Analysis of Decision under Risk." The paper presented a new model of risk attitudes called "prospect theory," which elegantly captured the experimental evidence on risk taking, including the documented violations of expected utility. More than 30 years later, prospect theory is still widely viewed as the best available description of how people evaluate risk in experimental settings. However, there are still relatively few well-known and broadly accepted applications of prospect theory in economics. One might be tempted to conclude that, even if prospect theory is an excellent description of behavior in experimental settings, it is less relevant outside the laboratory. In my view, this lesson would be incorrect. Over the past decade, researchers in the field of behavioral economics have put a lot of thought into how prospect theory should be applied in economic settings. This effort is bearing fruit. A significant body of theoretical work now incorporates the ideas in prospect theory into more traditional models of economic behavior, and a growing body of empirical work tests the predictions of these new theories. I am optimistic that some insights of prospect theory will eventually find a permanent and significant place in mainstream economic analysis.
Full-Text Access | Supplementary Materials

"The RAND Health Insurance Experiment, Three Decades Later," by Aviva Aron-Dine, Liran Einav and Amy Finkelstein

Between 1974 and 1981, the RAND health insurance experiment provided health insurance to more than 5,800 individuals from about 2,000 households in six different locations across the United States, a sample designed to be representative of families with adults under the age of 62. More than three decades later, the RAND results are still widely held to be the "gold standard" of evidence for predicting the likely impact of health insurance reforms on medical spending, as well as for designing actual insurance policies. On cost grounds alone, we are unlikely to see something like the RAND experiment again. In this essay, we reexamine the core findings of the RAND health insurance experiment in light of the subsequent three decades of work on the analysis of randomized experiments and the economics of moral hazard. First, we re-present the main findings of the RAND experiment in a manner more similar to the way they would be presented today. Second, we reexamine the validity of the experimental treatment effects. Finally, we reconsider the famous RAND estimate that the elasticity of medical spending with respect to its out-of-pocket price is -0.2. We draw a contrast between how this elasticity was originally estimated and how it has been subsequently applied, and more generally we caution against trying to summarize the experimental treatment effects from nonlinear health insurance contracts using a single price elasticity.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

Economic Bounceback by 2014, Says CBO

After recessions, the U.S. economy has typically had a period of bounce-back growth, which then catches the economy up to the path of "potential GDP" it would have been on prior to the recession. One of the imponderable questions in the aftermath of the Great Recession that lasted from 2007-2009 was when–or if–this bounceback growth would arrive. The nonpartisan Congressional Budget Office offers a prediction in its just-released "The Budget and Economic Outlook: Fiscal Years 2013 to 2023" that "economic activity will expand slowly in 2013 but will increase more rapidly in 2014." In other words, the long-delayed period of bounceback growth is now at least visible on the horizon. Here's the forecast in pictures.

"Potential GDP" refers to how much an economy could produce with full employment of workers and productive capacity. The blue line shows growth in potential GDP; the gray line shows the actual course of the economy during the recession and its aftermath. Notice the catch-up growth, bringing the economy back to potential GDP.

You can also see the catch-up growth in the CBO predictions for the annual growth rate of GDP.

As the catch-up growth arrives, the CBO is also predicting that the unemployment rate will drop briskly.

In addition, some measures of the grim economy will start to reverse themselves. For example, real investment in residential housing (that is, in building and remodeling houses), after several years of negative growth, has moved back to positive territory.

The underlying story here is that when a recession is accompanied by a financial crisis, the economic bounceback can be painfully slow. When households and firms across the economy are all feeling that they have borrowed too much, and need to get their financial houses back in order, it takes time. The U.S. recession has actually been somewhat shallower and less prolonged than the experience in many other countries when they experienced the double-whammy of financial crisis and recession, as I discussed here. But by fits and starts, the economic bounceback process does eventually work its way forward.

Email Spam Declines? Or Just Migrates?

In the last few years, 80-90% of all e-mail traffic has been spam. This imposes a considerable cost in terms of computer security and people's time. In the Summer 2012 issue of my own Journal of Economic Perspectives, Justin M. Rao and David H. Reiley discuss "The Economics of Spam" and conservatively estimate social costs to businesses and consumers of about $20 billion per year.
But in 2012, it looks as if the tide may be turning against e-mail spam, at least a bit. 

Some evidence comes from monitoring of spam done by Kaspersky Lab, a seller of information technology security services. In particular, Darya Kudkova has written the "Kaspersky Security Bulletin: Spam Evolution 2012." The first bar chart, from the Economist magazine, relies on Kaspersky Lab data to show monthly patterns of spam from 2006 up through 2012. The second bar chart, from Kudkova's report, shows monthly spam patterns just during 2012.

The Rao and Reiley paper in JEP offers an extended discussion of how the spam wars have evolved over time. (Here is my post on this paper from last August.) As an example, they describe a study in which a group attempted to send 345 million spam e-mails, but three-quarters were blocked when the server was blacklisted. The 82 million e-mails that escaped the blacklist then had to run the normal gauntlet of anti-spam software, and ultimately there were just 28 purchases.
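Spelling out the arithmetic of that funnel shows how thin the margins are for spammers. The short calculation below simply restates the figures quoted above from the study that Rao and Reiley describe.

```python
# The spam "funnel" described above, restated as arithmetic.
# The figures are the ones quoted from the study Rao and Reiley discuss.

attempted = 345_000_000          # spam e-mails the group tried to send
escaped_blacklist = 82_000_000   # e-mails that got past the server blacklist
purchases = 28                   # resulting purchases

print(f"Share blocked at the blacklist: {1 - escaped_blacklist / attempted:.0%}")  # ~76%
print(f"Purchases per attempted e-mail: {purchases / attempted:.1e}")              # ~8.1e-08
print(f"E-mails attempted per purchase: {attempted / purchases:,.0f}")             # ~12.3 million
# The campaign only pays off because the cost of sending each message is
# close to zero.
```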

However, the Kaspersky report suggests that even better anti-spam software has been the main driver of the decline. "This continual and considerable decrease in spam volumes is unprecedented…. The main reason behind the decrease in spam volume is the overall heightened level of anti-spam protection. To begin with, spam filters are now in place on just about every email system, even free ones, and the spam detection level typically bottoms out at 98%. Next, many email providers have introduced mandatory DKIM signature policies (digital signatures that verify the domain from which emails are sent)."

The other big change mentioned in the report is that, partly as a result of the improvements in shutting down spam on e-mail, the spammers are trying to use other pathways to your credit cards.  The Kaspersky report comments:

"When anti-spam experts answer questions about what needs to be done in order to reduce the amount of spam, in addition to anti-spam legislation, quality filters and user education, one factor that is always mentioned is inexpensive advertising on legal platforms. With the emergence of Web 2.0, advertising opportunities on the Internet have skyrocketed: banners, context-based advertising, and ads on social networks and blogs. Ads in legal advertising venues are not as irritating for users on the receiving end, they aren’t blocked by spam filters, and emails are sent to target audiences who have acknowledged a potential interest in the goods or services being promoted. Furthermore, when advertisers are after at least one user click, legal advertising can be considerably less costly than advertising through spam.

"Based on the results from several third-party studies, we have calculated that at an average price of $150 per 1 million spam emails sent, the final CPC (cost per click, the cost of one user using the link in the message) is a minimum of $4.45. Yet the same indicator for Facebook is just $0.10. That means that, according to our estimates, legal advertising is more effective than spam. Our conclusion has been indirectly confirmed by the fact that the classic spam categories (such as fake luxury goods, for example) are now switching over to social networks. We have even found some IP addresses for online stores advertising on Facebook that were previously using spam."

"Advertisers have also been drawn to yet another means of legal Internet promotion: coupon services, or group discount websites where users can purchase so-called coupons. These services appeared several years ago. After a user buys a coupon, he/she presents it when purchasing a product or service and receives a discount. In 2012, coupon services gained a lot of popularity. Many companies around the world are striving to grow their client base, and in turn, clients receive generous offers. … The popularity of coupon services has made the migration of advertisers from spam to other platforms more noticeable. At the same time, the prevalence of coupon services has had an impact on spam. Malicious users have started to copy emails from major coupon services, using the originals to advertise their own goods or services, or to lure users to a malicious website."

 In other words, the economics of sending and screening emails, together with the economics of online advertising, is tipping the balance a bit for spammers. Getting people to click on offers in random emails is becoming more costly; getting people to click on random advertisements is becoming easier. Before you send a credit card number, be sure you know who is at the other end.
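One way to see the shift Kaspersky describes is to turn its cost-per-click figures into implied response rates. The calculation below uses only the numbers quoted above; treat it as rough arithmetic of mine rather than anything computed in the report itself.

```python
# Implied response rates from the Kaspersky cost figures quoted above.

cost_per_million_spam = 150.0    # dollars to send one million spam e-mails
spam_cpc = 4.45                  # reported minimum cost per click via spam
facebook_cpc = 0.10              # reported cost per click via Facebook ads

clicks_per_million_spam = cost_per_million_spam / spam_cpc
print(f"Implied clicks per million spam e-mails: {clicks_per_million_spam:.0f}")  # ~34
print(f"Implied click-through rate: {clicks_per_million_spam / 1_000_000:.5%}")   # ~0.00337%
print(f"Cost per click, spam vs. Facebook: {spam_cpc / facebook_cpc:.1f}x")       # 44.5x
# At these prices, a legitimate ad click costs a small fraction of a spam click,
# which is consistent with spammers migrating to other channels.
```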

Checkerboard Puzzle, Moore's Law, and Growth Prospects

My father the mathematician first posed the checkerboard puzzle to me back in grade-school, perhaps on some rainy Saturday. His version of the story went something like this:

The jester performs a great deed, and the king asks him how he would like to be rewarded. The jester is aware that the king is a highly volatile individual, and if the jester asks for too much, the king might just kill him then and there. The jester also knows that the king views his promise as sacred, so if the king says "yes" to the jester's proposal, then the king will honor that promise. So in a way, the jester's problem is how to ask for a lot, but have the king at least initially think it's not very much, so that the king will give his consent.

So the jester clowns around a bit and then says: "Here's all I want. Take this checkerboard. On the first square, put one piece of gold. On the second square, two pieces. On the third square, four pieces, and on the fourth square, 8 pieces. Double the amount on each square until you reach the end of the checkerboard."

In the story, the king laughs at this comic proposal and says, "Your great deed was so wonderful, I would have happily done much more than this! I grant your request!"

But of course, when the king starts hauling up gold pieces from the treasury, he will discover that the final square of the checkerboard requires 2 raised to the 63rd power gold pieces, which is about 9 quintillion (a 9 followed by 18 zeros).
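The arithmetic is easy to check with a few lines of code; here is a minimal sketch that tallies the doubling square by square.

```python
# Checking the jester's request: one gold piece on the first square, then
# doubling on each of the checkerboard's 64 squares.

total = 0
for square in range(64):
    pieces = 2 ** square      # pieces on this square: 1, 2, 4, 8, ...
    total += pieces

print(f"Pieces on the final square: {pieces:,}")   # 9,223,372,036,854,775,808
print(f"Pieces on all 64 squares:   {total:,}")    # 18,446,744,073,709,551,615

# By contrast, the first half of the board is almost harmless: squares 1-32
# together hold only 2**32 - 1 = 4,294,967,295 pieces.
```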

I've had some sense of the power of exponential growth ever since. But what I hadn't thought about is the interaction of Moore's Law and economic growth. Moore's Law is of course named for Gordon Moore, one of the founders of Intel, who noticed this pattern back in 1965. Back in the 1970s, he wrote a paper that contained the following graph showing how much it cost to produce a computer chip with a certain number of components. Here's his figure. Notice that the number of components on the horizontal axis and the cost figures on the vertical axis are both graphed on logarithmic scales (specifically, each step up the axis is a change by a factor of 10). The key takeaway was that the number of transistors ("components") on an integrated circuit was doubling about every two years, making computing power much cheaper and faster.
[Figure caption: This chart from Intel co-founder Gordon Moore's seminal 1965 paper showed the cost of transistors decreased with new manufacturing processes even as the number of transistors on a chip increased.]

Ever since I started reading up on Moore's law in the early 1980s, there have been predictions in the trade press that it will soon reach technological limits and come to an end. But Moore's law marches on: indeed, the research and innovation targets at Intel and other chip-makers are defined in terms of making sure that Moore's law continues to hold for at least awhile longer. Stephen Shankland offers a nice accessible overview of the current situation in an October 15, 2012, essay on CNET: "Moore's Law: The rule that really matters in tech." (The Gordon Moore graph above is copied from Shankland's essay.)

As Shankland writes: "To keep up with Moore's Law, engineers must keep shrinking the size of transistors. Intel, the leader in the race, currently uses a manufacturing process with 22-nanometer features. That's 22 billionths of a meter, or roughly a 4,000th the width of a human hair." He cites a variety of industry and research experts to the effect that Moore's law has at least another decade to run–and remember, a decade of doubling every two years means five more doublings!
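As a rough way to connect those feature sizes to the doubling arithmetic: if transistor counts double while chip area stays roughly fixed, linear feature sizes shrink by about a factor of the square root of two per doubling. The sketch below is my own back-of-the-envelope illustration of that simplifying assumption, starting from the 22-nanometer figure Shankland mentions.

```python
# Back-of-the-envelope sketch: five more doublings of transistor count,
# assuming (simplistically) a fixed chip area, imply linear features shrinking
# by a factor of sqrt(2) per doubling. Illustrative only.
import math

feature_nm = 22.0     # the 22-nanometer process mentioned by Shankland
doublings = 5         # roughly a decade at one doubling every two years

density_factor = 2 ** doublings              # 32x more transistors
linear_shrink = math.sqrt(2) ** doublings    # ~5.7x smaller linear features

print(f"Transistor density after {doublings} doublings: {density_factor}x")
print(f"Implied feature size: {feature_nm / linear_shrink:.1f} nm")   # ~3.9 nm
```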

It's hard to wrap one's mind around what it means to say that the power of microchip technology will increase by a factor of 32 (doubling five times) in the next 10 years. A characteristically intriguing survey essay from the January 10 issue of the Economist on the future of innovation uses the checkerboard analogy to think about the potential effects of Moore's law. Here's a comment from the Economist essay:

Ray Kurzweil, a pioneer of computer science and a devotee of exponential technological extrapolation, likes to talk of “the second half of the chess board”. There is an old fable in which a gullible king is tricked into paying an obligation in grains of rice, one on the first square of a chessboard, two on the second, four on the third, the payment doubling with every square. Along the first row, the obligation is minuscule. With half the chessboard covered, the king is out only about 100 tonnes of rice. But a square before reaching the end of the seventh row he has laid out 500m tonnes in total—the whole world’s annual rice production. He will have to put more or less the same amount again on the next square. And there will still be a row to go.

Erik Brynjolfsson and Andrew McAfee of MIT make use of this image in their e-book “Race Against the Machine”. By the measure known as Moore’s law, the ability to get calculations out of a piece of silicon doubles every 18 months. That growth rate will not last for ever; but other aspects of computation, such as the capacity of algorithms to handle data, are also growing exponentially. When such a capacity is low, that doubling does not matter. As soon as it matters at all, though, it can quickly start to matter a lot. On the second half of the chessboard not only has the cumulative effect of innovations become large, but each new iteration of innovation delivers a technological jolt as powerful as all previous rounds combined."

Now, it's of course true that doubling the capacity of computer chips doesn't translate in a direct way into a higher standard of living: there are many steps from one to the other. But my point here is to note that many of us (myself included) have been thinking about the changes in electronics technology a little too much like the king in the checkerboard story: that is, we think of something doubling a few times, even 10 or 20 times, and we know it's a big change, but it somehow seems within our range of comprehension.

But when something has already been doubling every 18 months or two years for a half-century–and it is continuing to double!–the absolute size of each additional doubling is starting to get very large. I lack the imagination to conceive of what will be done with all this cheap computing power in terms of health care, education, industrial processes, communication, transportation, entertainment, food, travel, design, and more. But I suspect that these enormous repeated doublings, as Moore's law marches forward in the next decade and drives computing speeds up and prices down, will transform lives and industries in ways that we are only just starting to imagine.