Social Welfare Programs and Incentives to Work

There's a fundamental conflict between helping those in need and encouraging self-support. I sometimes say that if you give a person a fish every day, you remove that person's incentive to learn to fish. But if you vow not to give them a fish, they may starve while learning to fish.

C. Eugene Steuerle explores the current state of this conflict in "Labor Force Participation, Taxes, and the Nation's Social Welfare System," testimony given to the Committee on Oversight and Government Reform of the U.S. House of Representatives on February 14, 2013. As a starting point, focus first on the support that we give to those in need. Steuerle writes:

"Figures 1 and 2 display the benefits available to a single mother with two children in 2011 under these two cases. The first case, what I call the "universal" case, shows the benefits available to anyone whose income was low enough to qualify for them, namely nutrition assistance and tax benefits. The second case adds to those benefits narrower assistance—TANF and housing subsidies and supplements to nutrition assistance—that is available to some households but not to others based on availability, time limits, and other criteria. Because health reform will soon alter the delivery of health benefits in an important way, in both cases I assume that the provisions of the Affordable Care Act are in effect."

It's useful to remember that these graphs do not refer to cash benefits, and they represent averages that will vary across families. For example, Medicaid benefits are received not in cash but in the form of access to health care services, and the amount will vary from year to year, depending on health. I find the details of these figures interesting for what they reveal about the size of spending and support from different programs and the income range over which programs operate. For example, the figures highlight that SNAP, more commonly known as "food stamps," is a substantially larger program than TANF, more commonly known as "welfare." The figures also show the relatively large size of health care benefits like Medicaid, CHIP, and the "exchange subsidy" compared with other forms of benefits, reflecting a pattern that as a society we are willing to pay large health care bills for those with low incomes, or to give them food stamps, but we are less willing to give them cash benefits.

But the main point that Steuerle emphasizes is in the overall hump shape of the curves: that is, more support for those at lower incomes, and then declining support as income rises. This pattern makes perfect sense: more fish for those with very low incomes, less fish as people learn to fish and bring in their own income. But it also means that those with low incomes face what economists call a "negative income tax."

A "positive" income tax is the usual tax in which, as you earn additional income, the government taxes away a percentage. A "negative" income tax arises when, as you earn additional income, the government phases out benefits it would otherwise have provided. Both kinds of taxes have the same effect on incentives: when you earn an additional marginal dollar of income, you take home less than a dollar after taxes. When social programs phase out quickly as income rises, a situation can arise where earning an additional dollar of income means losing 50 cents or more in benefits–thus greatly reducing the incentive to work.
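To make the arithmetic concrete, here is a minimal sketch of how a statutory tax rate and a benefit phase-out combine into a single effective marginal rate. The 15% statutory rate and 50% phase-out rate are illustrative assumptions, not Steuerle's actual figures.

```python
# Illustrative effective marginal tax rate: a "positive" statutory tax
# plus the "negative" tax implicit in a benefit phase-out.
# Both rates below are assumed for illustration.

STATUTORY_RATE = 0.15   # assumed combined income + payroll tax rate
PHASE_OUT_RATE = 0.50   # assumed benefits lost per extra dollar earned

def effective_marginal_rate(statutory, phase_out):
    """Share of an extra earned dollar lost to taxes plus benefit reductions."""
    return statutory + phase_out

def take_home(extra_dollar=1.0):
    """What remains of a marginal dollar after both kinds of 'tax'."""
    return extra_dollar * (1 - effective_marginal_rate(STATUTORY_RATE, PHASE_OUT_RATE))

print(f"Effective marginal rate: {effective_marginal_rate(STATUTORY_RATE, PHASE_OUT_RATE):.0%}")
print(f"Take-home from an extra $1.00 earned: ${take_home():.2f}")
```

Under these assumed rates, a worker keeps only 35 cents of each additional dollar earned, even though the statutory tax alone is modest.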

Here are the effective marginal tax rates as Steuerle calculates them. That is, adding together both the "positive" tax rates of federal income taxes, state taxes, and payroll taxes for Social Security and Medicare, and the implicit "negative" tax rates from the phase-out of social programs, what is the effective tax rate on a marginal dollar of income as income rises? Notice how the phase-out of social programs–that is, how their support declines as earned income rises–leads to a spike in the overall "effective" marginal tax rates that people experience at around $10,000-$15,000 in earned income.

Of course, if you're someone who doesn't believe that marginal tax rates affect work effort, then this sort of chart won't bother you. Personally, I'm concerned about the effects of marginal tax rates on incentives not just at the top of the income scale, and not just at the bottom of the income scale, but at all income levels.

Those interested in this subject might also see my post of November 16, 2012, based on a report from the Congressional Budget Office, on "Marginal Tax Rates on the Poor and Lower Middle Class."

Taking Apprenticeships Seriously

The United States puts a heavy emphasis on a college degree as the path to economic and social success, and thus it\’s a familiar pledge of politicians that a higher share of the population will attend college. For example, in a speech to Congress on February 24, 2009, President Obama 
set a goal that "by 2020, America will once again have the highest proportion of college graduates in the world."

But this emphasis on college has two difficulties: 1) as a society, we don't actually mean it; and 2) it probably isn't an appropriate goal, anyway. After all, if we really supported a widespread expansion of college education, we would do considerably more than pump up the loans available to students. We would be figuring out how current colleges can expand their enrollments, and starting a new wave of colleges and universities–and figuring out how to keep these options affordable to students. Instead, the U.S. has lost its lead as the country with the highest proportion of college graduates in the world.

Moreover, a four-year college degree just isn't going to be right for everyone. Think about students who managed to finish high school, but were in the bottom third or bottom quarter of the class. For many of these students, their interactions with the educational system have not been happy ones, and the notion that their life plan should start with yet another four years of education is likely to be met with hard-earned dislike and disbelief.

So what's the alternative for these students, in a U.S. economy that places considerable value on skilled labor? Betty Joyce Nash offers one angle on these issues in "Journey to Work: European Model Combines Education with Vocation," in the Fourth Quarter issue of Region Focus, published by the Federal Reserve Bank of Richmond. She writes:

"In the United States, vocational education has been disparaged by some as a place for students perceived as unwilling or unable. The United States still largely champions college as the route to higher lifetime wages and the flexibility to retool skills in times of economic change. Yet just 58 percent of the 53 percent of college-goers in 2004 who started at four-year institutions finished within six years. Moreover, 25 percent of those who enter two-year community colleges don't finish. Only about 28 percent of U.S. adults over age 25 actually have a bachelor's. What about the rest? What's their path to the workplace? It may be unrealistic to expect everyone to finish college, but most students will need more than a high school education as jobs become more complex."

Nash focuses her discussion on apprenticeships and vocational education, and as is common in these kinds of arguments, she focuses some attention on practices in Germany and Switzerland. Thus:

"Germany and Switzerland educate roughly 53 percent and 66 percent of students, respectively, in a system that combines apprenticeships with classroom education — the dual system. This approach brings young people into the labor force more quickly and easily. Unemployment for those in Switzerland between the ages of 15 and 24 in 2011 was 7.7 percent; in Germany, 8.5 percent. In the United States that year, the rate was 17.3 percent, down from 18.4 percent the previous year. (A 10 percent higher rate of participation in vocational education in selected Organization for Economic Cooperation and Development countries led to a 2 percent lower youth unemployment rate in 2011, according to economist Eric Hanushek of Stanford University.)"

Here's a bit more detail on Switzerland and on Germany:

"At ages 15 to 16, in Switzerland, about two-thirds of every cohort enter apprenticeships, [Stefan C.] Wolter notes. Apprentices in fields from health care to hairdressing to engineering attend vocational school at least one day a week for general education and theoretical grounding for roughly three years. On other days, they apprentice under the supervision of a seasoned employee. What makes the system work so well is firm participation, which is relatively strong. "If you exclude the one-person companies and the businesses that cannot train, about 40 percent of companies that could train do train," Wolter says. …"

"In Germany, about 25 percent of students go to university, and apprenticeships employ another 53 percent. At 16, they sign on for a three-year stint in one of 350 occupations. Another 15 percent may attend vocational schools. Those who are less qualified take a full-time vocational course or temporary job until they land an apprenticeship. About one-quarter of German employers participate. …"

"Other western European countries use variations of the Swiss and German model. Belgium, Finland, Sweden, and the Netherlands train most vocational students in school programs, while Germany, Switzerland, Austria, and Denmark have large school-and-work programs. The United States is an outlier: By international standards and official definitions, it has virtually no vocational education and training program."

From a U.S. perspective, it's hard to think clearly about how this kind of widespread use of apprenticeships and vocational school would even work. Half or two-thirds of 16-year-old students involved in paid internships? A quarter or a third of all employers providing a large number of such internships as part of their regular business model? Internships across a wide array of professions, both blue- and white-collar? My American mind boggles. But given that a four-year college degree is demonstrably not a good fit for many young Americans, it's past time to take some of the alternatives more seriously.

I've posted from time to time about the merits of apprenticeships and various alternative credentials.

For examples, see this post from October 18, 2011, on "Apprenticeships for the U.S. Economy"; this post from November 3, 2011, on "Recognizing Non-formal and Informal Learning"; and this post from January 16, 2012, on "Certificate Programs for Labor Market Skills."

Obesity and Healthy Snacks

The rise in American rates of obesity can be traced back to what seems like a fairly small rise in daily calories consumed. I learned this lesson from an article on the causes of obesity about 10 years back in my own Journal of Economic Perspectives. In "Why Have Americans Become More Obese?" David M. Cutler, Edward L. Glaeser, and Jesse M. Shapiro wrote that the "10- to 12-pound increase in median weight we observe in the past two decades requires a net caloric imbalance of about 100 to 150 calories per day. These calorie numbers are strikingly small. One hundred and fifty calories per day is three Oreo cookies or one can of Pepsi. It is about a mile and a half of walking."
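As a rough check on that logic, here is a back-of-the-envelope sketch: if maintenance calories scale roughly in proportion to body weight (a deliberate simplification, and the baseline weight and calorie figures below are my assumptions, not from the article), then a 10- to 12-pound higher steady-state weight corresponds to a daily surplus in roughly the range the authors cite.

```python
# Back-of-the-envelope check: how large a daily caloric surplus sustains
# a 10-12 pound higher steady-state weight? Assumes (loudly) that
# maintenance calories scale proportionally with body weight -- a
# simplification, not the authors' actual model.

baseline_weight_lbs = 165   # assumed median adult weight
maintenance_cals = 2200     # assumed daily calories to hold that weight

def extra_daily_calories(weight_gain_lbs):
    """Daily surplus needed to maintain the heavier steady-state weight."""
    return maintenance_cals * weight_gain_lbs / baseline_weight_lbs

for gain in (10, 12):
    print(f"{gain} lbs heavier -> about {extra_daily_calories(gain):.0f} extra calories/day")
```

With these assumed figures, the answer comes out around 130-160 calories per day, consistent with the "three Oreos or one can of Pepsi" framing.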

Elizabeth Frazao, Hayden Stewart, Jeffrey Hyman, and Andrea Carlson apply a similar logic in "Gobbling Up Snacks: Cause or Potential Cure for Childhood Obesity?" which appears in the December 2012 issue of Amber Waves, published by the Economic Research Service at the U.S. Department of Agriculture.

When I was a child, my mother had a clear-cut policy on before-dinner snacks: I was allowed to eat all the raw carrots I wanted. That parental philosophy appears to have gone out of date. The USDA economists explain: "Consumption of snacks among children has increased markedly over the last 35 years. In the late 1970s, American children consumed an average of only one snack a day. Today, they are consuming nearly three snacks per day. As a result, daily calories from children's snacks have increased by almost 200 calories over the period." They propose that a shift to healthier snacks could play a useful role in cutting childhood obesity.

As a starting point, here's a table showing the calorie count for some fruit and vegetable snacks, compared with some commonly consumed snack foods. As a parent of three, I confess that I can't see my children snacking on broccoli florets. But grapes, strawberries, cantaloupe, or apples are all possibilities, and even among the snack foods, some choices like popsicles or fruit rolls are at least better than the alternatives.

One common reaction to this sort of list is that the healthy snacks cost too much. It is true that cookies, crackers, and chips make a relatively cheap snack, and some of the healthier choices, like tangerines, grape tomatoes, and strawberries, are costlier. But some of the unhealthy choices like muffins and Danish are also on the costly side, while bananas, oranges, and those good old carrot sticks are not especially pricey. Here are their calculations of cost per snack.

 Generalizing wildly from personal experience, as social scientists are wont to do, it feels to me as if a cultural shift has occurred about a \”suitable\” snack. Two of my children were, at different times, in soccer leagues where the parents organized themselves to make sure that each child got a \”treat\” of a cookie or chips and a large sugared drink after each game. When I coached a different youth soccer team, I brought bags of orange and apple slices for the players. As far as I could tell, the kids were equally happy with the oranges–at least once it was clear that cookies and chips would not be forthcoming.

Consuming healthier (and fewer) snacks could make a real difference to child obesity–and it makes sense for adults, too.

Maybe Too Big To Fail, but Not Too Big to Suffer

Which financial institutions are "too big to fail"? According to a report from the international Financial Stability Board, a working group of governments and central banks that tries to facilitate international cooperation on these issues, here's the list as of November 2012.

Ready for a nice bowl of acronym soup? This list is actually the "global systemically important banks," known as the G-SIBs, which are a subcategory of the "global systemically important financial institutions," or G-SIFIs. Already finalized, as the Financial Stability Board (FSB) explains, are guidelines for the "domestic systemically important banks," the D-SIBs, which national governments are expected to implement by 2016. Meanwhile, the International Association of Insurance Supervisors (IAIS) has proposed a method of deciding which firms are "global systemically important insurers," the G-SIIs. The FSB and the International Organization of Securities Commissions (IOSCO) are now working on a method to identify the systemically important non-bank, non-insurance financial institutions (no acronym yet available).

Meanwhile, the Financial Stability Oversight Council (FSOC) within the U.S. Department of the Treasury is working on its own lists. In its 2012 annual report, it designated eight systemically important "financial market utilities"–that is, firms that are intimately involved in carrying out various financial transactions. Here's the list: the Clearing House Payments Company, CLS Bank International, the Chicago Mercantile Exchange, the Depository Trust Company, Fixed Income Clearing Corporation, ICE Clear Credit, National Securities Clearing Corporation, and the Options Clearing Corporation. (And I freely admit that I have only a fuzzy idea of what several of those companies actually do.)

In addition, the Dodd-Frank legislation presumes that U.S. banks are systemically important if their holdings exceed $50 billion in consolidated assets. David Luttrell, Harvey Rosenblum, and Jackson Thies explain these points, along with a nice overview of many broader issues, in their staff paper on "Understanding the Risks Inherent in Shadow Banking: A Primer and Practical Lessons Learned," written for the Dallas Fed. For perspective, they offer a list of the largest U.S. bank holding companies, all of which comfortably exceed the $50 billion benchmark for consolidated assets.

This litany of who is "systemically important" feels disturbingly long, and it's only getting longer. But ultimately, it's a good thing to have such lists–at least if they lead to policy changes. Once you have admitted that a number of financial institutions are too big to fail, because their failure would lead to too great a disruption in financial markets, and once you have then made a commitment that government should not bail out such institutions, what policy prescription follows?

The proposal from the Financial Stability Board is that the G-SIBs (global systemically important banks, of course) should face a different set of regulatory rules. As the Dallas Fed economists explain, these could include "higher capital requirements, supervisory expectations for risk management functions, data aggregation capabilities, risk governance, and internal controls." There are two difficulties with this approach. First, it may not work. After all, a considerable regulatory apparatus in the U.S. did not prevent the financial crash of 2007-2009. And second, it may work, but with undesired side effects. In particular, if there are heavy rules on one set of regulated financial institutions, then there will be a tendency for financial activities to flow to less-regulated institutions. "If regulation constrains commercial banks' risk taking, many questionable assets may simply migrate to less-regulated entities."

I don't oppose regulating the SIFIs (that would be "systemically important financial institutions") more heavily. But it's important to be clear on the limits of this approach. After all, it's not just that these institutions are big, but that they are so tightly interconnected with other institutions. As Luttrell, Rosenblum, and Thies explain: "TBTF [that would be 'too big to fail'] is not just about bigness; it also includes 'too many to fail' and 'too opaque to regulate.'"

It seems to me that the key here is to remember that some institutions may be too big to fail, but they aren't too big to suffer! In particular, they aren't too big to have their top managers booted out–without bonuses. They aren't too big to have their shareholders wiped out, and the company handed over to bondholders–who are then likely to end up taking losses as well. One task of financial regulators should be to design and pre-plan what they call an "orderly resolution." The trick is to devise ways so that if these systemically important firms run into financial difficulties, their tasks and external obligations will not be much disrupted, for the sake of financial stability, but those who invest in and manage those firms will face costs.

Why Has Health Information Technology Been Ineffective?

Health information technology is one of the methods often proposed to help rein in rising health care costs. The underlying story is plausible: greater efficiency in dealing with the provision of care and the paperwork burden of medicine, and greater safety for patients as providers can be aware of past medical histories and ongoing treatments. However, at least so far, health information technology hasn’t done much to reduce costs.  Arthur L. Kellermann and Spencer S. Jones ask “What It Will Take to Achieve the As-Yet-Unfulfilled Promises of Health Information Technology” in the first issue of Health Affairs for 2013 (pp. 63-68). (This journal is not freely available on-line, but many academic readers will have access through library subscriptions.)
Back in 2005, a group of RAND researchers forecast that rapid adoption of health information technology could save $81 billion annually. Kellermann and Jones essentially ask: Why hasn’t this vision come to pass?  Here are some of their answers (as usual, footnotes are omitted).
Health providers and patients have been slow to adopt information technology. “The most recent data suggest that approximately 40 percent of physicians and 27 percent of hospitals are using at least a “basic” electronic health record. … Uptake of health IT by patients is even worse.”
Existing health information technology systems don't interconnect. "Are modern health IT systems interconnected and interoperable? The answer to this question, quite clearly, is no. The health IT systems that currently dominate the market are not designed to talk to each other. … As a result, the current generation of electronic health records function less as an "ATM card," allowing a patient or provider to access needed health information anywhere at any time, than as "frequent flyer cards" intended to enforce brand loyalty to a particular health care system."
Health care providers dislike the existing information technology systems. "Considering the theoretical benefits of health IT, it is remarkable how few fans it has among health care professionals. The lack of enthusiasm might be attributed, in part, to the sobering results of studies showing that in many cases health IT has failed to deliver promised gains in productivity and patient safety. An even more plausible cause is that few IT vendors make products that are easy to use. As a result, many doctors and nurses complain that health IT systems slow them down."
Existing health information technology can raise costs. On this point, the authors cite a New York Times article from last fall by Reed Abelson, Julie Creswell, and Griff Palmer. (Full disclosure: Reed Abelson was a friend of mine back in college days.) The NYT story reports: "[T]he move to electronic health records may be contributing to billions of dollars in higher costs for Medicare, private insurers and patients by making it easier for hospitals and physicians to bill more for their services, whether or not they provide additional care. Hospitals received $1 billion more in Medicare reimbursements in 2010 than they did five years earlier, at least in part by changing the billing codes they assign to patients in emergency rooms, according to a New York Times analysis of Medicare data from the American Hospital Directory. Regulators say physicians have changed the way they bill for office visits similarly, increasing their payments by billions of dollars as well."

Kellermann and Jones end with a plea that health information technology systems should be built on principles of interoperability, ease of use, and patient-centeredness. I have no disagreement with the principles, but I would note that even within individual companies, it has often proven quite time-consuming and difficult to integrate information technology into operations in a full and productive way. Thus, it's no surprise to me that the health care industry has faced a number of stumbling blocks. I've heard anecdotal stories of doctors spending inordinate amounts of time clicking through menus on some IT system, trying to figure out which boxes to check to best represent a diagnosis and a course of care. I've heard that some doctors, as they master the system, find that it becomes easier to bill for many separate small services that they wouldn't previously have bothered to write up.
It seems that it should be possible for the big health care finance operations, both public and private, to get together and hammer out a basic flexible framework for health care information technology. But it doesn't seem to be happening.

A Future of Low Returns?

Through the dismal stock market performance and low interest rates of the last few years, many savers and investors have held on to the hope that, when the economy eventually recovered, their financial investments would start earning the sorts of returns that they did through most of the second half of the twentieth century. However, in the Credit Suisse Global Investment Returns Yearbook 2013, Elroy Dimson, Paul Marsh, and Mike Staunton pour a few buckets of cold water on these hopes in their essay, "The low-return world." (Thanks to Larry Willmore and his "Thought du Jour" blog for the pointer to this report.)

Here are a few of their figures showing the investment world as many of us summarize it in our minds–based, of course, on evidence from recent decades, and with something of a U.S. focus. Thus, here's a figure showing returns on equity and on debt for a number of markets around the world. One set of bars shows a time frame since 1950; the other shows a time frame since 1980. Bonds have done comparatively better since 1980, because declining interest rates have meant that bonds purchased in the past–at the higher rates then prevailing–were worth more.

Here's a figure showing U.S. experience in the long run. The bars on the far left show how annual returns on equity have outstripped those for bonds and for bills from 1900-2012. The middle bars show a similar pattern, a bit less extreme, for the last half-century from 1963-2012. The bars on the far right show the 21st-century experience from 2000-2012. Stocks have offered almost no return at all; neither have bills. Bonds have done fairly well, given the steady fall in interest rates, but as interest rates have headed toward zero, the gains from bonds seem sure to diminish, too.

 
Dimson, Marsh, and Staunton have some grim news for those waiting for a bounceback to 20th-century levels of returns: "[M]any investors seem to be in denial, hoping markets will soon revert to "normal." Target returns are too high, and many asset managers still state that their long-run performance objective is to beat inflation by 6%, 7%, or even 8%. Such aims are unrealistic in today's low-return world. … The high equity returns of the second half of the 20th century were not normal; nor were the high bond returns of the last 30 years; and nor was the high real interest rate since 1980. While these periods may have conditioned our expectations, they were exceptional."

In thinking about bond prices in the future, they point out that central banks around the world are pledging to keep interest rates low for at least the next few years. There is little reason to think that real interest rates on bonds will rise any time soon.

In terms of equities, they restate the well-accepted insight that over time, equities will be priced in a way that provides a higher return than bonds, to make up for the higher volatility and risk of buying equities. But how much more will stocks pay than bonds? How high will this "equity premium" be? They write:

"Until a decade ago, it was widely believed that the annualized equity premium relative to bills was over 6%. This was strongly influenced by the Ibbotson Associates Yearbook. In early 2000, this showed a historical US equity premium of 6¼% for the period 1926–99. Ibbotson's US statistics appeared in numerous textbooks and were applied worldwide to the future as well as the past. It is now clear that this figure is too high as an estimate of the prospective equity premium. First, it overstates the long-run premium for the USA. From 1900–2012, the premium was a percentage point lower at 5.3%, as the early years of both the 20th and 21st centuries were relatively disappointing for US equities. Second, by focusing on the USA – the world's most successful economy during the 20th century – even the 5.3% figure is likely to be an upwardly biased estimate of the experience of equity investors worldwide. … To assume that savers can confidently expect large wealth increases from investing over the long term in the stock market – in essence, that the investment conditions of the 1990s will return – is delusional."

Here are their estimates for long-run returns on stocks and bonds over the next few decades. Essentially, they forecast real stock returns in the range of 3-4% per year, and real bond returns in the range of 1/2-1% per year–well below the expectations many people are carrying around from their experiences in the 1980s and 1990s.
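To see what such a gap means for a saver, here is a quick compounding sketch comparing an assumed 3.5% real return with the 6.5% that older expectations might suggest. The $10,000 annual contribution and 30-year horizon are illustrative assumptions, not figures from the yearbook.

```python
# Compare the real value of a retirement nest egg under two assumed
# annual real returns. Contribution size and horizon are illustrative.

def future_value(annual_saving, real_return, years):
    """Real value of a stream of year-end contributions, compounded annually."""
    value = 0.0
    for _ in range(years):
        value = value * (1 + real_return) + annual_saving
    return value

for r in (0.035, 0.065):
    fv = future_value(annual_saving=10_000, real_return=r, years=30)
    print(f"30 years of $10,000/yr at {r:.1%} real -> ${fv:,.0f}")
```

Under these assumptions, the lower return leaves the saver with roughly $516,000 rather than roughly $864,000 in real terms, which is why the difference between a 3-4% world and a 6-8% world matters so much for retirement planning.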

If their predictions of a low-return world over the next few decades hold up, it will impose heavy difficulties on those saving for the future–not just those doing individual saving, but also pension plans. Moreover, it can be hazardous to a solid economic recovery. They write: "Today's low-return world is imposing stresses on investors. … For how long are low returns bearable? For investors, we fear that the answer is "as long as it takes." While a low-return world imposes stresses on investors and savers in an over-leveraged world recovering from a deep financial crisis, it provides essential relief for borrowers. The danger here is that if this continues too long, it creates "zombies" – businesses kept alive by low interest rates and a reluctance to write off bad loans. This can suppress creative destruction and rebuilding, and can prolong the downturn."

For those who are already retired, or on the verge, prudence suggests that you base your spending plans during retirement on these kinds of low returns. For those still saving for retirement, prudence suggests trying to save more. For those trying to figure out pension plan funding, the fund may well be worse off than you think. If rates of return on equities and bonds do rebound well above these projections, it will be a happy surprise for today's savers and investors. But the prudent don't base their plans on the hope of happy surprises.

Global R&D: An Overview

Reinhilde Veugelers investigates the patterns of worldwide research and development spending in a Policy Contribution for the European think-tank Bruegel: "The World Innovation Landscape: Asia Rising?" The results are a useful reminder that thinking of China and the rest of the emerging economies as depending on low wages to drive their economic growth is so very 20th century. In this century, China in particular is staking its future economic growth on R&D spending and innovation.

As a starting point, consider snapshots of global R&D spending in 1999 and in 2009. The share of global R&D spending done by the United States, the European Union, and Japan is falling, while the share done by China, South Korea, and other emerging economies is rising.

Looking at individual countries, China has now outstripped Japan for second place in global R&D spending, and China\’s R&D spending is similar to that of Germany, France, and Italy combined. 
Veugelers looks more closely at the group of seven countries that together account for about 71% of global R&D spending. Here is how much they spend on R&D as a percentage of their economies: by this measure, the U.S. economy does not especially stand out.

Veugelers also takes a more detailed look at government and private R&D spending in these seven countries, and finds some intriguing differences in priorities. For example, more than half of all U.S. government R&D spending goes to defense, a far higher share than in any of these other countries. The U.S. government also commits a larger share of its R&D budget to health. South Korea and Germany stand out for having a greater share of their government R&D focused on industrial production and technology. And a number of countries devote a larger share of their government R&D to the "Other" category, which includes "earth and space, transport, telecommunications, agriculture, education, culture, political systems."

In the private sector, a greater share of the U.S. private-sector R&D happens in the services sector, while the private sector in other countries focuses more of its R&D spending on machinery and equipment.

It's worth remembering that there's a lot more to innovation than just research and development spending. Yes, a nation that is the home of new innovation probably receives a disproportionate share of the gains in productivity as a result. But innovative discoveries can flow across national borders. The true economic gains from innovation are not from a discovery in a laboratory, but rather from the economic flexibility that translates the discovery into new products and jobs.

Why Are U.S. Firms Holding $5 Trillion in Cash?

One of the puzzles and frustrations of the sluggish economy since the official end of the Great Recession in June 2009 is that U.S. firms are holding enormous amounts of cash–about $5 trillion in 2011. A considerable number of pixels have been spent wondering why corporations seem so reluctant to spend, and how we might entice them to do so. But I had not known that the trend toward corporations holding more in cash very much predates the Great Recession; indeed, it was already apparent back in the 1990s. Thus, along with thinking about why events of the last few years have led corporations to hold more cash, we should be thinking about influences over the last couple of decades.

Juan M. Sánchez and Emircan Yurdagul lay out the issues in "Why Are Corporations Holding So Much Cash?" written for the Regional Economist magazine published by the Federal Reserve Bank of St. Louis. The first graph shows "cash and short-term investments," which include all securities transferable to cash. The second graph focuses just on nonfinancial, nonutility firms–thus leaving out banks, insurance companies, regulated power companies, and the like. There are some differences in timing, but the overall upward pattern is clear.

Of course, one reason for the rise in cash assets could just be overall growth of the economy. Thus, an alternative measure is to look at the ratio of cash to net assets of the firm. By this measure, which is probably more revealing, the growth of cash starts off around 1995, accelerates in the early 2000s, takes a step back in the Great Recession, and now has rebounded.

Broadly speaking, there are two reasons for firms to hold more cash: precautionary motives and repatriation taxes. Precautionary motives refer to the notion that firms operate in a situation of uncertainty, including uncertainty about what stresses or opportunities might arise, and whether they will be able to get a loan on favorable terms when they want it. Cash offers flexibility. The authors explain repatriation taxes this way: "[T]axes due to the U.S. government from corporations operating abroad are determined by the difference between the taxes already paid abroad and the taxes that U.S. tax rates would imply. Importantly, such taxation only takes place when earnings are repatriated. Therefore, firms may have incentives to keep foreign earnings abroad." To put it a little differently, if firms are thinking that they may wish to reinvest foreign profits in foreign operations, then the tax code gives them an incentive not to repatriate those profits.
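The repatriation incentive is simple arithmetic. Here is a minimal sketch of the mechanism the authors describe; the rates and dollar amounts are invented for illustration and do not come from the article:

```python
# Hypothetical illustration of the repatriation tax incentive described above.
# All rates and amounts are invented for the example, not taken from the article.

def repatriation_tax(foreign_earnings, foreign_rate, us_rate):
    """U.S. tax owed when foreign earnings are brought home: the U.S.
    liability minus a credit for taxes already paid abroad (no refund
    if the foreign rate exceeds the U.S. rate)."""
    residual = (us_rate - foreign_rate) * foreign_earnings
    return max(residual, 0.0)

# A firm earns $100 million abroad, taxed there at 15%; suppose the U.S. rate is 35%.
tax_if_repatriated = repatriation_tax(100.0, 0.15, 0.35)  # in $ millions
print(tax_if_repatriated)  # 20.0 -- extra tax triggered only by bringing the money home
```

Because the $20 million bill is due only upon repatriation, a firm planning to reinvest abroad anyway can defer it indefinitely by leaving the cash overseas.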

Both of these factors surely make some difference, but Sánchez and Yurdagul also seek some insight into the question by looking at which industries, and which size firms, are more likely to be increasing their cash holdings. For example, information technology firms and firms focused on tech and hardware equipment are holding noticeably more in cash, although since the end of the housing bubble, building product companies have been sitting on a lot more cash, too.

When looking by size of firm, it\’s smaller firms that hold more cash. In part, this surely reflects that smaller firms are more vulnerable to ups and downs, and less likely to be confident that they have access to a loan when they want it. But interestingly, it\’s the second-smallest quintile of firms, not the smallest, that has seen the biggest rise in cash/asset ratios.

Ultimately, these arguments made me less confident that any particular policy was going to shake loose a big share of the $5 trillion cash hoard of U.S. firms. It's not quite clear to me why smallish-sized tech firms feel the need to hold so much cash, but they clearly have felt that way for a couple of decades now. In a globalizing economy, more U.S. firms are going to have foreign operations and to be focused on expanding those operations, so it's not clear to me that a much larger share of those foreign profits will be repatriated. The cash hoard of U.S. firms isn't just, or even mainly, a post-recession phenomenon.

Reusable Grocery Bags Can Kill (Unless Washed)

One recent local environmental cause, especially popular in California, has been to ban or tax plastic grocery bags. The expressed hope is that shoppers will instead carry reusable grocery bags back and forth to the grocery store, and that plastic bags will be less likely to end up in landfills, or blowing across hillsides, or floating in water. The problem is that almost no one ever washes their reusable grocery bags. Reusable grocery bags often carry raw meat, unseparated from other foods, and are often stored for convenience in the trunk of cars that sit outside in the sun. In short, reusable grocery bags can be a friendly breeding environment for E. coli bacteria, which can cause severe illness and even death.

Jonathan Klick and Joshua D. Wright tell this story in "Grocery Bag Bans and Foodborne Illness," published as a research paper by the Institute for Law and Economics at the University of Pennsylvania Law School. As their primary example, they look at E. coli infections in San Francisco County after it adopted an ordinance severely limiting the use of plastic bags by grocery stores.

As one piece of evidence, here's a figure showing the number of emergency room visits in San Francisco County related to E. coli for the 10 quarters before and after the enactment of the ordinance, where zero on the horizontal axis is the date the ordinance went into effect. (The shaded area around the line is a 95% statistical confidence interval.) Clearly, there is a discontinuous jump in the number of emergency room visits.

For comparison, here's the same measure of emergency room visits related to E. coli infections for the other counties in the San Francisco Bay Area in the 10 quarters before and after the ordinance was enacted. These counties don't see a jump.

Klick and Wright flesh out this finding in a variety of ways. They look at other measures of foodborne illness, like salmonella and campylobacter, and also find a rise associated with the ordinance limiting plastic bags. They look at other cities in California that have enacted such bans, and although it is harder for them to track health effects for individual cities–because the health statistics are collected at the county level–they find negative health effects as well. (The city of San Francisco is consolidated with the county of San Francisco.)

Overall, they find that San Francisco typically experiences about 12 deaths per year from intestinal infections, and that the restrictions on plastic bags probably led to another 5-6 deaths per year in that city–plus, of course, the personal and social costs of some dozens of additional hospitalizations. With these costs taken into account, restrictions on plastic bags stop looking like a good idea.

Of course, a possible response to this problem is to launch a public information campaign to encourage people to wash their reusable grocery bags. But that response then leads to two other issues. First, if public information campaigns can be effective on the grocery bag issue, the campaign could simply focus on the need to dispose of plastic bags properly and recycle them where possible–without a need to ban them. The argument for an ordinance sharply limiting the use of plastic grocery bags is, in effect, based on an implicit assumption that public information campaigns about grocery bags don't work well. Second, if reusable grocery bags are washed after each use, then any cost-benefit analysis of their use would have to take into account the costs of water and detergent use. I have no idea if these costs alone would outweigh the benefits of reusable grocery bags, but it might be a near thing.

Like most economists, I have a mental file drawer of "good intentions aren't enough" stories. The push to ban plastic grocery bags is one more example.

Added February 14, 2013:  For a short memo challenging the findings of this study from the San Francisco Department of Public Health, see here.

Winter 2013 Journal of Economic Perspectives

The Winter 2013 issue of my own Journal of Economic Perspectives is now available on-line. Like all issues of JEP back to the first issue in 1987, it is freely available here, courtesy of the American Economic Association. I will probably put up some posts about individual articles in the next week or so, but here is an overview. The issue has two symposia of four papers each: one on the economics of patents and the other on tradeable pollution allowances. There are a couple of individual papers, one on the empirical work about prospect theory 30 years after that theory was formulated, and the other a look back at the famous RAND health insurance experiment. My own "Recommendations for Further Reading" rounds out the issue. Here are abstracts for the papers, with links to the text.

Symposium on Patents

"The Case against Patents," by Michele Boldrin and David K. Levine

The case against patents can be summarized briefly: there is no empirical evidence that they serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity. Both theory and evidence suggest that while patents can have a partial equilibrium effect of improving incentives to invent, the general equilibrium effect on innovation can be negative. A properly designed patent system might serve to increase innovation at a certain time and place. Unfortunately, the political economy of government-operated patent systems indicates that such systems are susceptible to pressures that cause the ill effects of patents to grow over time. Our preferred policy solution is to abolish patents entirely and to find other legislative instruments, less open to lobbying and rent seeking, to foster innovation when there is clear evidence that laissez-faire undersupplies it. However, if that policy change seems too large to swallow, we discuss in the conclusion a set of partial reforms that could be implemented.
Full-Text Access | Supplementary Materials

"Patents and Innovation: Evidence from Economic History," by Petra Moser

What is the optimal system of intellectual property rights to encourage innovation? Empirical evidence from economic history can help to inform important policy questions that have been difficult to answer with modern data: For example, does the existence of strong patent laws encourage innovation? What proportion of innovations is patented? Is this share constant across industries and over time? How does patenting affect the diffusion of knowledge? How effective are prominent mechanisms, such as patent pools and compulsory licensing, that have been proposed to address problems with the patent system? This essay summarizes results of existing research and highlights promising areas for future research.
Full-Text Access | Supplementary Materials

"The New Patent Intermediaries: Platforms, Defensive Aggregators, and Super-Aggregators," by Andrei Hagiu and David B. Yoffie

The patent market consists mainly of privately negotiated, bilateral transactions, either sales or cross-licenses, between large companies. There is no eBay, Amazon, New York Stock Exchange, or Kelley's Blue Book equivalent for patents, and when buyers and sellers do manage to find each other, they usually negotiate under enormous uncertainty: prices of similar patents vary widely from transaction to transaction and the terms of the transactions (including prices) are often secret and confidential. Inefficient and illiquid markets, such as the one for patents, generally create profit opportunities for intermediaries. We begin with an overview of the problems that arise in patent markets, and how traditional institutions like patent brokers, patent pools, and standard-setting organizations have sought to address them. During the last decade, a variety of novel patent intermediaries has emerged. We discuss how several online platforms have started services for buying and selling patents but have failed to gain meaningful traction. And new intermediaries that we call defensive patent aggregators and superaggregators have become quite influential and controversial in the technology industries they touch. The goal of this paper is to shed light on the role and efficiency tradeoffs of these new patent intermediaries. Finally, we offer a provisional assessment of how the new patent intermediary institutions affect economic welfare.
Full-Text Access | Supplementary Materials

"Of Smart Phone Wars and Software Patents," by Stuart Graham and Saurabh Vishnubhakat

Among the main criticisms currently confronting the US Patent and Trademark Office are concerns about software patents and what role they play in the web of litigation now proceeding in the smart phone industry. We will examine the evidence on the litigation and the treatment by the Patent Office of patents that include software elements. We present specific empirical evidence regarding the examination by the Patent Office of software patents, their validity, and their role in the smart phone wars. More broadly, this article discusses the competing values at work in the patent system and how the system has dealt with disputes that, like the smart phone wars, routinely erupt over time, in fact dating back to the very founding of the United States. The article concludes with an outlook for systematic policymaking within the patent system in the wake of major recent legislative and administrative reforms. Principally, the article highlights how the US Patent Office acts responsibly when it engages constructively with principled criticisms and calls for reform, as it has during the passage and now implementation of the landmark Leahy-Smith America Invents Act of 2011.
Full-Text Access | Supplementary Materials

Symposium on Tradeable Pollution Allowances

"Markets for Pollution Allowances: What Are the (New) Lessons?" by Lawrence H. Goulder

About 45 years ago a few economists offered the novel idea of trading pollution rights as a way of meeting environmental goals. Such trading was touted as a more cost-effective alternative to traditional forms of regulation, such as specific technology requirements or performance standards. The principal form of trading in pollution rights is a cap-and-trade system, whose essential elements are few and simple: first, the regulatory authority specifies the cap—the total pollution allowed by all of the facilities covered by the regulatory program; second, the regulatory authority distributes the allowances, either by auction or through free provision; third, the system provides for trading of allowances. Since the 1980s the use of cap and trade has grown substantially. In this overview article, I consider some key lessons about when cap-and-trade programs work well, when they perform less effectively, how they work compared with other policy options, and how they might need to be modified to address issues that had not been anticipated.
Full-Text Access | Supplementary Materials

"The SO2 Allowance Trading System: The Ironic History of a Grand Policy Experiment," by Richard Schmalensee and Robert N. Stavins

Two decades have passed since the Clean Air Act Amendments of 1990 launched a grand experiment in market-based environmental policy: the SO2 cap-and-trade system. That system performed well but created four striking ironies: First, by creating this system to reduce SO2 emissions to curb acid rain, the government did the right thing for the wrong reason. Second, a substantial source of this system's cost-effectiveness was an unanticipated consequence of earlier railroad deregulation. Third, it is ironic that cap-and-trade has come to be demonized by conservative politicians in recent years, as this market-based, cost-effective policy innovation was initially championed and implemented by Republican administrations. Fourth, court decisions and subsequent regulatory responses have led to the collapse of the SO2 market, demonstrating that what the government gives, the government can take away.
Full-Text Access | Supplementary Materials

"Carbon Markets 15 Years after Kyoto: Lessons Learned, New Challenges," by Richard G. Newell, William A. Pizer and Daniel Raimi

Carbon markets are substantial and they are expanding. There are many lessons from market experiences over the past eight years: there should be fewer free allowances, better management of market-sensitive information, and a recognition that trading systems require adjustments that have consequences for market participants and market confidence. Moreover, the emerging market architecture features separate emissions trading systems serving distinct jurisdictions and a variety of other types of policies exist alongside the carbon markets. This situation is in sharp contrast to the top-down, integrated global trading architecture envisioned 15 years ago by the designers of the Kyoto Protocol and raises a suite of new questions. In this new architecture, jurisdictions with emissions trading have to decide how, whether, and when to link with one another. Stakeholders and policymakers must confront how to measure the comparability of efforts among markets as well as relative to a variety of other policy approaches. International negotiators must in turn work out a global agreement that can accommodate and support increasingly bottom-up approaches to carbon markets and climate change mitigation.
Full-Text Access | Supplementary Materials

"Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis," by Karen Fisher-Vanden and Sheila Olmstead

This paper seeks to assess the current status of water quality trading and to identify possible problems and solutions. Water pollution permit trading programs have rarely been comprehensively described and analyzed in the peer-reviewed literature. Including active programs and completed or otherwise inactive programs, we identify approximately three dozen initiatives. We describe six criteria for successful pollution trading programs and consider how these apply to standard water quality problems, as compared to air quality. We then highlight some important issues to be resolved if current water quality trading programs are to function as the "leading edge" of a new frontier in cost-effective pollution permit trading in the United States.
Full-Text Access | Supplementary Materials

Individual articles

"Thirty Years of Prospect Theory in Economics: A Review and Assessment," by Nicholas C. Barberis

In 1979, Daniel Kahneman and Amos Tversky published a paper in Econometrica titled "Prospect Theory: An Analysis of Decision under Risk." The paper presented a new model of risk attitudes called "prospect theory," which elegantly captured the experimental evidence on risk taking, including the documented violations of expected utility. More than 30 years later, prospect theory is still widely viewed as the best available description of how people evaluate risk in experimental settings. However, there are still relatively few well-known and broadly accepted applications of prospect theory in economics. One might be tempted to conclude that, even if prospect theory is an excellent description of behavior in experimental settings, it is less relevant outside the laboratory. In my view, this lesson would be incorrect. Over the past decade, researchers in the field of behavioral economics have put a lot of thought into how prospect theory should be applied in economic settings. This effort is bearing fruit. A significant body of theoretical work now incorporates the ideas in prospect theory into more traditional models of economic behavior, and a growing body of empirical work tests the predictions of these new theories. I am optimistic that some insights of prospect theory will eventually find a permanent and significant place in mainstream economic analysis.
Full-Text Access | Supplementary Materials

"The RAND Health Insurance Experiment, Three Decades Later," by Aviva Aron-Dine, Liran Einav and Amy Finkelstein

Between 1974 and 1981, the RAND health insurance experiment provided health insurance to more than 5,800 individuals from about 2,000 households in six different locations across the United States, a sample designed to be representative of families with adults under the age of 62. More than three decades later, the RAND results are still widely held to be the "gold standard" of evidence for predicting the likely impact of health insurance reforms on medical spending, as well as for designing actual insurance policies. On cost grounds alone, we are unlikely to see something like the RAND experiment again. In this essay, we reexamine the core findings of the RAND health insurance experiment in light of the subsequent three decades of work on the analysis of randomized experiments and the economics of moral hazard. First, we re-present the main findings of the RAND experiment in a manner more similar to the way they would be presented today. Second, we reexamine the validity of the experimental treatment effects. Finally, we reconsider the famous RAND estimate that the elasticity of medical spending with respect to its out-of-pocket price is -0.2. We draw a contrast between how this elasticity was originally estimated and how it has been subsequently applied, and more generally we caution against trying to summarize the experimental treatment effects from nonlinear health insurance contracts using a single price elasticity.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials