How Many Deaths from Mistakes in US Health Care?

Back in 1999, the Institute of Medicine (part of the National Academies of Science) estimated in its report To Err is Human that in 1997 at least 44,000 and as many as 98,000 patients died in hospitals as the result of medical errors that could have been prevented. Current estimates are higher, as Thomas R. Krause points out in "Department of Measurement: Scorecard Needed" in the Milken Institute Review (Fourth Quarter 2015, pp. 91-94). Krause writes:

"You've seen the astounding numbers: hundreds of thousands of Americans die each year due to medical treatment errors. Indeed, the median credible estimate is 350,000, more than U.S. combat deaths in all of World War II. If you measure the “value of life” the way economists and federal agencies do it – that is, by observing how much individuals voluntarily pay in daily life to reduce the risk of accidental death – those 350,000 lives represent a loss exceeding $3 trillion, or one-sixth of GDP. But when decades pass and little seems to change, even these figures lose their power to shock, and the public is inclined to focus its outrage on apparently more tractable problems."

In case you're one of the vast majority who actually haven't seen those estimates, or at least haven't mentally registered that they exist, here are a couple of the more recent underlying sources.

The Agency for Healthcare Research and Quality (part of the US Department of Health and Human Services) published in May 2015 the 2014 National Healthcare Quality and Disparities Report. Here are some good news/bad news statistics from the report:

From 2010 to 2013, the overall rate of hospital-acquired conditions declined from 145 to 121 per 1,000 hospital discharges. This decline is estimated to correspond to 1.3 million fewer hospital-acquired conditions, 50,000 fewer inpatient deaths, and $12 billion savings in health care costs. Large declines were observed in rates of adverse drug events, healthcare-associated infections, and pressure ulcers.

The good news is 50,000 fewer deaths, along with health improvements and money saved. The bad news is that the rate of hospital-acquired conditions basically fell from one in every seven patients to one in every eight. Sure, hospital-acquired conditions will never fall to zero. But it certainly looks to me as if at least tens of thousands of lives were being lost each year because that rate had not been reduced, and that tens of thousands more could be saved by reducing the rate further. For another analysis in a different setting, here's a 2014 US government study about adverse and preventable effects of care in nursing care facilities.
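As a back-of-the-envelope check on that "one in seven to one in eight" framing, here is the conversion from the rates quoted in the AHRQ report; the arithmetic is my own, not from the report itself:

```python
# Hospital-acquired condition (HAC) rates from the 2014 AHRQ report,
# expressed per 1,000 hospital discharges.
rate_2010 = 145  # per 1,000 discharges in 2010
rate_2013 = 121  # per 1,000 discharges in 2013

# Convert each rate to "one condition per N discharges".
one_in_2010 = 1000 / rate_2010
one_in_2013 = 1000 / rate_2013

print(f"2010: one HAC per {one_in_2010:.1f} discharges")  # ~6.9, about one in seven
print(f"2013: one HAC per {one_in_2013:.1f} discharges")  # ~8.3, about one in eight
```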

John T. James published "A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care" in the Journal of Patient Safety (September 2013, pp. 122-128). James reviews four studies of quality of care that focus on relatively small numbers of patients (three of the studies cover fewer than 1,000 patient records each; the other covers 2,300). He uses a software package called the Global Trigger Tool to flag cases where preventable errors might have occurred, and then those cases are examined by physicians. James describes the process this way:

The GTT depends on systematic review of medical records by persons trained to find specific clues or triggers suggesting that an adverse event has taken place. For example, triggers might include orders to stop a medication, an abnormal lab result, or prescription of an antidote medication such as naloxone. As a final step, the examination of the record must be validated by 1 or more physicians. As will be shown shortly, the methods used to find adverse events in hospital medical records target primarily errors of commission and are much less likely to find harm from errors of omission, communication, context, or missed diagnosis.

Projecting from four small studies to national patterns is obviously a little dicey, but for what it's worth, James finds:

Using a weighted average of the 4 studies, a lower limit of 210,000 deaths per year was associated with preventable harm in hospitals. Given limitations in the search capability of the Global Trigger Tool and the incompleteness of medical records on which the Tool depends, the true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year. Serious harm seems to be 10- to 20-fold more common than lethal harm.

My reactions to this body of evidence on the prevalence and costs of mistakes in the US health care system can be summarized in two bits of skepticism and one burst of outrage.

It seems sensible to be skeptical about the largest estimates of the size of the problem. There are obviously issues in deciding what was "preventable" or a "mistake."

The other bit of skepticism is that seeking to reduce the problem of medical errors is harder than it might at first sound. For example, Christine K. Cassel, Patrick H. Conway, Suzanne F. Delbanco, Ashish K. Jha, Robert S. Saunders, and Thomas H. Lee wrote about some efforts to measure and set guidelines for health care in "Getting More Performance from Performance Measurement," which appeared in the New England Journal of Medicine on December 4, 2014. They point out that there are often literally hundreds of measures of quality of care, some important, some not, and many that turn out to be useless or even harmful.

Many observers fear that a proliferation of measures is leading to measurement fatigue without commensurate results. An analysis of 48 state and regional measure sets found that they included more than 500 different measures, only 20% of which were used by more than one program. Similarly, a study of 29 private health plans identified approximately 550 distinct measures, which overlapped little with the measures used by public programs. Health care organizations are therefore devoting substantial resources to reporting their performance to regulators and payers; one northeastern health system, for instance, uses 1% of its net patient-service revenue for that purpose. Beyond the problem of too many measures, there is concern that programs are not using the right ones. Some metrics capture health outcomes or processes that have major effects on overall health, but others focus on activities that may have minimal effects. …

Unfortunately, for every instance in which performance initiatives improved care, there were cases in which our good intentions for measurement simply enraged colleagues or inspired expenditures that produced no care improvements. One example of a measurement effort that had unintended consequences was the CMS quality measure for community-acquired pneumonia. This metric assessed whether providers administered the first dose of antibiotics to a patient within 6 hours after presentation, since analyses of Medicare databases had shown that an interval exceeding 4 hours was associated with increased in-hospital mortality. But the measure led to inappropriate antibiotic use in patients without community-acquired pneumonia, had adverse consequences such as Clostridium difficile colitis, and did not reduce mortality. The measure therefore lost its endorsement by the National Quality Forum in 2012, and CMS removed it from its Hospital Inpatient Quality Reporting and Hospital Compare programs.

But even after acknowledging that quantifying death and injury caused by health care mistakes is an inexact process, and fixing it isn't simple, the sheer scale of the issue remains.

The US economy will spend about $3 trillion this year on health care. As Krause noted at the start, the loss of 350,000 lives from preventable errors, if we value a life at about $9 million as is commonly done by federal regulators, means that the total cost of deaths from health care mistakes is about $3 trillion. On one side, perhaps this total is overstated. On the other side, it includes only the costs of deaths, not the health costs from serious but nonlethal harms (which James estimates are 10 or 20 times as common), and not the costs of resources used by the health care system in seeking to deal with mistakes already made.
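The arithmetic behind that comparison is simple enough to check. The 350,000 deaths and the $9 million value of a statistical life are the figures cited above; the roughly $18 trillion GDP figure is my own approximation for 2015:

```python
deaths = 350_000   # Krause's median credible estimate of annual deaths from medical error
vsl = 9e6          # value of a statistical life used by federal regulators (~$9 million)
gdp = 18e12        # approximate 2015 US GDP (my assumption, ~$18 trillion)

total_loss = deaths * vsl
print(f"Implied annual loss: ${total_loss / 1e12:.2f} trillion")  # ~$3.15 trillion
print(f"Share of GDP: {total_loss / gdp:.1%}")                    # ~17.5%, about one-sixth
```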

There is considerable public debate over how to make sure all Americans have health insurance. But the issue of the enormous costs of the US health care system doesn't get the same airtime. Sure, there are arguments over how much or why the rate of growth of US health care spending has changed. In the meantime, the US continues to vastly outspend other countries. For example, here's a figure from the OECD showing health care spending as a share of GDP, with US spending 50% higher than any other country and roughly double the OECD average. Based on this data, the US is spending about $8,500 per person per year on health care, while Canada and Germany are spending about $4,400 per person per year, and the United Kingdom and Japan are spending about $3,300 per person per year.
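For a rough sense of the gap, here are the ratios implied by those per-person figures (the dollar amounts are the approximate numbers quoted above, read off the OECD data):

```python
# Approximate annual health care spending per person (US dollars),
# as cited above from the OECD data.
spending = {
    "United States": 8500,
    "Canada": 4400,
    "Germany": 4400,
    "United Kingdom": 3300,
    "Japan": 3300,
}

us = spending["United States"]
for country, amount in spending.items():
    if country != "United States":
        # ~1.9x Canada and Germany, ~2.6x the United Kingdom and Japan
        print(f"US spends {us / amount:.1f}x what {country} spends per person")
```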

I understand the reasons why high US health care spending doesn't buy health. But it's a bitter irony indeed that the extremely high levels of US health care spending are actually causing at least tens of thousands, and quite possibly hundreds of thousands, of deaths each year.

Calibrating the Hype about Online Higher Education

"Massive open online courses" (MOOCs) and other aspects of online higher education were white-hot a few years ago, but I'd say that they have cooled off to only red-hot. Two economists who have also been college presidents, Michael S. McPherson and Lawrence S. Bacow, discuss the current state of play and offer some insights in "Online Higher Education: Beyond the Hype Cycle," appearing in the Fall 2015 issue of the Journal of Economic Perspectives. Here are some points that caught my eye.

About one-quarter of higher education students took an online course in 2013, and about one-ninth of higher education students took all of their courses online that year. 

"The US Department of Education recently began to conduct its own survey of online education as part of its Integrated Post-Secondary Education Data System (IPEDS), with full coverage of the roughly 4,900 US institutions of higher education. As shown in Table 1, IPEDS data indicates that as of 2013, about 26 percent of all students took at least one course that was entirely online, and about 11 percent received all of their education online."

When it comes to the possibility of education technologies that can operate at large scale with near-zero marginal costs, there's a history of overoptimism. Here's a quick sketch of promises about educational radio, and then educational television.

"Berland (1992), citing a popular commentator named Waldeman Kaempffert writing in 1924, reported that “there were visions of radio producing ‘a super radio orchestra’ and ‘a super radio university’ wherein ‘every home has the potentiality of becoming an extension of Carnegie Hall or Harvard University.’” Craig (2000) reports that “the enthusiasm for radio education during the early days of broadcasting was palpable. Many universities set up broadcast stations as part of their extension programs and in order to provide their engineering and journalism students with experience in radio. By 1925 there were 128 educational stations across the country, mostly run by tertiary institutions” (p. 2831). The enthusiasm didn’t last—by 1931 the number of educational stations was down to 49, most low-powered (p. 2839). This was in part the result of cumbersome regulation, perhaps induced by commercial interests; but the student self-control problem … likely played a role as well. As NBC representative Janice Waller observed, “Even those listeners who clamored for educational programs, Waller found, secretly preferred to listen to comedians such as Jack Benny. These “intellectually dishonest” people “want to appear very highbrow before for their friends . . . but down inside, and within the confines of their own homes, they are, frankly, bored if forced to listen to the majority of educational programs” (as quoted in Craig 2000, pp. 2865–66).

"The excitement in the late 1950s about educational television outshone even the earlier enthusiasm for radio. An article by Schwarzwalder (1959, pp. 181–182) has an eerily familiar ring: “Educational Television can extend teaching to thousands, hundreds of thousands and, potentially, even millions. . . . As Professor Siepman wrote some weeks ago in The New York Times, ‘with impressive regularity the results come in. Those taught by television seem to do at least as well as those taught in the conventional way.’ . . . The implications of these facts to a beleaguered democracy desperately in need of more education for more of its people are immense. We shall ignore these implications at our national peril.” Schwarzwalder goes on to claim that any subject, including physics, manual skills, and the arts can be taught by television, and even cites experiments that show “that the discussion technique can be adapted to television.”"

The Internet offers the possibility not just of widespread distribution of education material, but also of interactive content. But if the content is to be richly interactive–that is, more than just a short multiple-choice quiz inserted into the recorded material–the costs of design and production could be very substantial. 

"Richly interactive online instruction is obviously much more expensive than Internet-delivered television. The development costs for Carnegie Mellon’s sophisticated but far from fully computer-adaptive courses in statistics and other fields have been estimated at about $1 million each (Parry 2009). Although future technical developments will reduce the costs of providing a course of a fixed level of quality over time, those future technical developments will also encourage the provision of additional features. Universities can invest in improving the production values of such television programs at the margin in ways that range from multiple camera angles to the incorporation of sophisticated graphics and live location video. Many interactive courses could also conceivably benefit from regular updating based on recent events or scholarship … Our point is that while online courses offer the potential for constant modification and updates, realizing this potential may in fact be expensive, leading to less-frequent updates than for traditionally taught subjects. … Those who foresee the widespread adoption of adaptive learning technology often underestimate the cost of producing it. Stanford President John Hennessey, in a recent lecture to the American Council of Education, estimated that the cost of producing a first-rate highly interactive digital course to be in the millions of dollars (Jaschik 2015). Few individual institutions have the resources to make such investments. Furthermore, while demand may be substantial enough to support such investments for basic introductory courses in fields that easily lend themselves to such instruction, it is unlikely that anyone will invest in the creation of such courses for upper-level courses unless they can be adopted at scale."

There's no guarantee that online tools will reduce the costs of higher education. One possibility is that well-endowed universities use online higher education as a way to drive up costs–since these schools often compete to provide a high-end experience. For example, expensive schools might "flip the classroom" by paying for both a rich and interactive online course, and then also hiring enough faculty members (not graduate students!) to staff a large number of discussion and problem-solving sections.

Indeed, there is a real chance that at least in selective higher education, technology will actually be used to raise rather than lower cost. There are obvious ways to use online materials to complement rather than to substitute for in-person instruction. Flipping the classroom, as we will explain further, is one. Instructors can also import highly produced video material—either purchased or homemade— to complement their classes, and there could easily emerge a market in modular lessons aimed at allowing students to extend material farther or to get a second take on a difficult set of concepts. If individual faculty members are authorized to make these choices, and universities agree to subsidize expensive choices, costs seem likely to rise. 

Conversely, schools that are lower-ranked and with fewer financial resources may be pushed to focus on implementing a low-cost and mostly online curriculum. 

Broad-access unselective institutions are already among the largest users of online instruction. These institutions are responsible for the education of many students—at least half of all those enrolled in postsecondary education—and they disproportionately educate lower-income students and students of color. Enabling technological advances to support improvement in the educational success of these institutions at manageable cost is an important goal, arguably the most important goal for using technology to improve American higher education. (Of course, the implications of these technologies for global learning would be potentially gigantic.) There is especially high potential for online education to cater to the large number of nontraditional students, which includes adult learners and those who have a very high opportunity cost of attending college whether at the undergraduate or graduate level. For this group of students, asynchronous online learning can be a godsend. Opportunities surely exist for technology to penetrate this market further, and quality is likely to improve as faculty and others figure out how to take better advantage of new educational technology. As the technology improves and as more institutions adopt it, more of these students are likely to receive all or at least some of their education online. 

Yet this great opportunity is accompanied by considerable risk. It is all too easy to envision legislators who see a chance to cut state-level or national-level spending that supports higher education by imposing cheap and ineffective online instruction on institutions whose students lack the voice and political influence to demand quality. It’s equally easy to imagine for-profit institutions proffering online courses in a way that takes advantage of populations with little experience with college in a marketplace where reliable information is scarce.

(Full disclosure: I've been Managing Editor of the Journal of Economic Perspectives since 1987. All JEP articles from the current issue going back to the first issue are freely available online courtesy of the publisher, the American Economic Association.)

Overconfidence: The Ancient Evil

Here's how Ulrike Malmendier and I started our short introduction to a three-paper symposium in the Fall 2015 issue of the Journal of Economic Perspectives on the subject of overconfidence.

Economists have been concerned about issues of overconfidence at least since Adam Smith (1776, Book I, Chapter X), who wrote in The Wealth of Nations: “The over-weening conceit which the greater part of men have of their own abilities, is an ancient evil remarked by the philosophers and moralists of all ages.” Titans of modern economics have had similar reactions to the “ancient evil.” Daniel Kahneman recently told an interviewer that if he had a magic wand that could eliminate one human bias, he would do away with overconfidence. As Shariatmadari (2015) reports: “Not even he [Kahneman] believes that the various flaws that bedevil decision-making can be successfully corrected. The most damaging of these is overconfidence: the kind of optimism that leads governments to believe that wars are quickly winnable and capital projects will come in on budget despite statistics predicting exactly the opposite.” Kahneman argues that overconfidence “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things.” 

Evidence concerning the prevalence of overconfidence is widespread and robust. Some of the results have even become fairly well-known in popular culture, like the findings that most drivers believe they are safer than a typical driver, or that the unskilled tend to overestimate their abilities. The finding about driver overconfidence stems from a Svenson (1981) study, a lab experiment using undergraduate students as subjects, which found that 83 percent of American subjects believed that they were in the top 30 percent in terms of driving safety. The finding about overestimation of ability comes from a Kruger and Dunning (1999) study, which reports: “[P]articipants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd.” Overconfidence on both sides of a conflict may be linked to the willingness to fight a war (Wrangham 1999; Johnson 2004) or to the conditions that lead a strike to occur (Neale and Bazerman 1985). 

The three papers in the symposium ask about the economics of overconfident consumers, overconfident CEOs, and overconfident investors.

Michael Grubb points out in "Overconfident Consumers in the Marketplace" that if firms know that a substantial share of consumers are overconfident, they will set prices and contract terms accordingly. For example, consumers may be overconfident about future actions. Sure, they will send in that mail-in rebate. Sure, they won't have a car accident, so choosing car insurance with a REALLY high deductible makes sense. Sure, they will subscribe to something and put it on "auto-renew," because they will remember to cancel it when ready. Grubb points out that in some of these situations, competition by firms to take advantage of the overconfident can mean great deals for those who are not overconfident. Conversely, steps by government to protect the overconfident from themselves can, in some cases, lead to higher costs for others.

Ulrike Malmendier and Geoffrey Tate discuss "Behavioral CEOs: The Role of Managerial Overconfidence." They suggest a number of ways of measuring the overconfidence of CEOs. One example is based on the insight that CEOs should want to cash in their stock options when they have a chance for a good gain, because it gives them a chance to diversify their wealth by investing it somewhere else. However, an overconfident CEO will hold stock options right up to the expiration date before cashing them in, out of a belief that the company is doing better than the market recognizes and for that reason the stock options will keep rising in value. Using measures of when CEOs cash in their stock options, along with estimates of how much they would receive for cashing in their options and various back-of-the-envelope estimates for risk and diversification, they find that about 40% of CEOs qualify as overconfident. It turns out that those CEOs are more likely to use the company's cash, or borrowed money, to make big investments and acquisitions. They also point out that some companies facing the need for a tough transition might prefer an overconfident CEO to manage the transition. Indeed, you can offer an overconfident CEO less in stock options, because the overconfident CEO will tend to believe that those options will be worth more.

Kent Daniel and David Hirshleifer study "Overconfident Investors, Predictable Returns, and Excessive Trading." They argue that the volume of trading in financial markets is very high, and that there are a number of ways, well-known for decades, in which stock market returns are somewhat predictable. They suggest that overconfidence among investors helps to explain the eagerness to trade, and the willingness to believe that you can choose a financial adviser who will make you rich. They also suggest links between overconfidence and predictabilities in stock market pricing related to momentum in stock prices, firms with a low ratio of book-to-market value, small firms outperforming larger firms, and others. As they point out, overconfidence is reinforced by "self-attribution" bias, which is the common mental posture that whatever goes well was due to your own skill, while whatever goes poorly is due to bad luck.

Overconfidence shouldn't be treated as a giant all-purpose explanation for less-than-rational behavior by consumers, CEOs, and investors. There are lots of ways, in different situations and contexts, that people make less-than-rational decisions. But overconfidence plays a central role in the persistence of lots of less-than-rational behavior. If people were actually rational, most of us (myself very much included) would be pretty humble about our abilities to assess situations, draw inferences, and make decisions. Overconfidence explains why most of us, despite being drenched in the cold rain of reality on a regular basis, still manage to believe so strongly in our own points of view and our own ways of making decisions.

And one can indeed argue, with Adam Smith and Daniel Kahneman, that a world in which overconfidence was substantially diminished would be a more productive and pleasant place.

Journal of Economic Perspectives, Fall 2015 issue, Available Online

Since 1986, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which about four years ago made the decision–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. The journal's website is here. I'll start here with the Table of Contents for the just-released Fall 2015 issue. Below are abstracts and direct links for all of the papers. I will probably blog about some of the individual papers in the next week or two, as well.

Here are abstracts and links to the articles.

Symposium on Overconfidence

"On the Verges of Overconfidence," by Ulrike Malmendier and Timothy Taylor

This symposium provides several examples of overconfidence in certain economic contexts. Michael Grubb looks at \”Overconfident Consumers in the Marketplace.\” Ulrike Malmendier and Geoffrey Tate consider \”Behavioral CEOs: The Role of Managerial Overconfidence.\” Kent Daniel and David Hirshleifer discuss \”Overconfident Investors, Predictable Returns, and Excessive Trading.\” A number of insights and lessons emerge for our understanding of markets, public policy, and welfare. How do firms take advantage of consumer overconfidence? Might government attempts to rule out such practices end up providing benefits to some consumers but imposing costs on others? How are empirical measures of CEO overconfidence related to investment and the capital structure of firms? Can overconfidence among at least some investors help to explain prominent anomalies in stock markets like high levels of trading volume and certain predictable patterns in stock market returns?
Full-Text Access | Supplementary Materials

"Overconfident Consumers in the Marketplace," by Michael D. Grubb
The term overconfidence is used broadly in the psychology literature, referring to both overoptimism and overprecision. Overoptimistic individuals overestimate their own abilities or prospects. In contrast, overprecise individuals place overly narrow confidence intervals around forecasts, thereby underestimating uncertainty. These biases can lead consumers to misforecast their future product usage, or to overestimate their abilities to navigate contract terms. In consequence, consumer overconfidence causes consumers to systematically misweight different dimensions of product quality and price. Poor choices based on biased estimates of a product's expected costs or benefits are the result. For instance, overoptimism about self-control is a leading explanation for why individuals overpay for gym memberships that they underutilize. Similarly, overprecision is a leading explanation for why individuals systematically choose the wrong calling plans, racking up large overage charges for exceeding usage allowances in the process. Beyond these market effects of overconfidence, this paper addresses three additional questions: What will firms do to exploit consumer overconfidence? What are the equilibrium welfare consequences of consumer overconfidence for consumers, firms, and society? And what are the implications of consumer overconfidence for public policy?
Full-Text Access | Supplementary Materials

"Behavioral CEOs: The Role of Managerial Overconfidence," by Ulrike Malmendier and Geoffrey Tate
In this paper, we provide a theoretical and empirical framework that allows us to synthesize and assess the burgeoning literature on CEO overconfidence. We also provide novel empirical evidence that overconfidence matters for corporate investment decisions in a framework that explicitly addresses the endogeneity of firms' financing constraints.
Full-Text Access | Supplementary Materials

"Overconfident Investors, Predictable Returns, and Excessive Trading," by Kent Daniel and David Hirshleifer
The last several decades have witnessed a shift away from a fully rational paradigm of financial markets toward one in which investor behavior is influenced by psychological biases. Two principal factors have contributed to this evolution: a body of evidence showing how psychological bias affects the behavior of economic actors; and an accumulation of evidence that is hard to reconcile with fully rational models of security market trading volumes and returns. In particular, asset markets exhibit trading volumes that are high, with individuals and asset managers trading aggressively, even when such trading results in high risk and low net returns. Moreover, asset prices display patterns of predictability that are difficult to reconcile with rational expectations-based theories of price formation. In this paper, we discuss the role of overconfidence as an explanation for these patterns.
Full-Text Access | Supplementary Materials

Symposium on the Future of Retail

"The Ongoing Evolution of US Retail: A Format Tug-of-War," by Ali Hortaçsu and Chad Syverson
The past 15-20 years have seen substantial and visible changes in the way US retail business is conducted. Explanations about what is happening in the retail sector have been dominated by two powerful and not fully consistent narratives: a prediction that retail sales will migrate online and physical retail will be virtually extinguished, and a prediction that future shoppers will almost all be heading to giant physical stores like warehouse clubs and supercenters. Although online retail will surely continue to be a force shaping the sector going forward and may yet emerge as the dominant mode of commerce in the retail sector in the United States, its time for supremacy has not yet arrived. We discuss evidence indicating that the warehouse clubs/supercenter format has had a greater effect on the shape of retail over the past 15-20 years. We begin with an overview of the retail sector as a whole, which over the long term has been shrinking as a share of total US economic activity and in terms of relative employment share. The retail sector has experienced stronger-than-average productivity growth, but this has not been accompanied by commensurate wage growth. After discussing the important e-commerce and warehouse clubs/supercenters segments, we look more broadly at changes across the structure of the retail sector, including scale, concentration, dynamism, and degree of urbanization. Finally, we consider the likely future course of the retail sector.
Full-Text Access | Supplementary Materials

"Adolescence and the Path to Maturity in Global Retail," by Bart J. Bronnenberg and Paul B. Ellickson
We argue that, over the past several decades, the adoption and diffusion of \”modern retailing technology\” represents a substantial advance in productivity, providing greater product variety, enhanced convenience, and lower prices. We first describe modern retailing, highlighting the role of modern formats, scale (often transcending national boundaries), and increased coordination with upstream and downstream partners in production and distribution. In developed markets, the transition to modern retailing is nearly complete. In contrast, many low-income and emerging markets continue to rely on traditional retail formats, that is, a collection of independent stores and open air markets supplied by small-scale wholesalers, although modern retail has begun to spread to these markets as well. E-commerce is a notable exception: the penetration of e-commerce in China and several developing nations in Asia has already surpassed that of high-income countries for some types of consumer goods. To understand the forces governing the adoption of modern technology and the unique role of e-commerce, we propose a framework that emphasizes the importance of scale and coordination in facilitating the transition from traditional to modern retailing. We conclude with some conjectures regarding the likely impact of increased retail modernization for the developing world.
Full-Text Access | Supplementary Materials

Symposium on Online Higher Education

\”Online Higher Education: Beyond the Hype Cycle,\” by Michael S. McPherson and Lawrence S. Bacow
When two Silicon Valley start-ups, Coursera and Udacity, embarked in 2012 on a bold effort to supply college-level courses for free over the Internet to learners worldwide, the notion of the Massively Open Online Course (MOOC) captured the nation\’s attention. Although MOOCs are an interesting experiment with a role to play in the future of higher education, they are a surprisingly small part of the online higher education scene. We believe that online education, at least online education that begins to take full advantage of the interactivity offered by the web, is still in its infancy. We begin by sketching out the several faces of online learning—asynchronous, partially asynchronous, the flipped classroom, and others—as well as how the use of online education differs across the spectrum of higher education. We consider how the growth of online education will affect cost and convenience, student learning, and the role of faculty and administrators. We argue that the spread of online education through higher education is likely to be slower than many commentators expect. We hope that online education will bring substantial benefits. But less-attractive outcomes are also possible if, for instance, legislators use the existence of online education as an excuse for sharp cuts in higher education budgets that lead to lower-quality education for many students, at the same time that richer, more selective schools are using online education as one more weapon in the arms race dynamic that is driving costs higher.
Full-Text Access | Supplementary Materials

\”How Economics Faculty Can Survive (and Perhaps Thrive) in a Brave New Online World,\” by Peter Navarro
The academy in which we toil is moving rapidly towards a greater role for online delivery of higher education, and both fans and skeptics offer strong reasons to believe this technological shock will have substantial disruptive effects on faculty. How can we as economic educators continue to provide sufficient value-added to justify our role in a world where much of what we now do is effectively being automated and commoditized? In this brave new online world, many successful and resilient faculty will add value (and differentiate their product) not by producing costly and elaborate multimedia lectures in which they become a superstar professor-celebrity, but rather through careful, clever, and innovative choices regarding both the adoption of the online content of other providers and the forms of online interactions they integrate into their course designs. Possible forms of faculty-to-student and student-to-student interactions run the digital gamut from discussion boards and electronic testing to peer assessments, games and simulations, and virtual office hours. This article explores basic descriptive and prescriptive questions economic educators and their administrators are likely to face as the online education tide rises. For example, how much does it cost to develop online content and how much time does it take? What are the key \”ingredients\” for a pedagogically sound online course? Throughout, I will draw on both the extant literature as well as my own experience at the University of California, Irvine, where the online evolution is advancing rapidly. [This article is available for download in audio (MP3) format from the journal website.]
Full-Text Access | Supplementary Materials

Articles

\”Rewriting Monetary Policy 101: What\’s the Fed\’s Preferred Post-Crisis Approach to Raising Interest Rates?\” by Jane E. Ihrig, Ellen E. Meade and Gretchen C. Weinbach
For many years prior to the global financial crisis, the Federal Open Market Committee set a target for the federal funds rate and achieved that target through small purchases and sales of securities in the open market. In the aftermath of the financial crisis, with a superabundant level of reserve balances in the banking system having been created as a result of the Federal Reserve\’s large-scale asset purchase programs, this approach to implementing monetary policy will no longer work. This paper provides a primer on the Fed\’s implementation of monetary policy. We use the standard textbook model to illustrate why the approach used by the Federal Reserve before the financial crisis to keep the federal funds rate near the Federal Open Market Committee\’s target will not work in current circumstances, and explain the approach that the Committee intends to use instead when it decides to begin raising short-term interest rates.
Full-Text Access | Supplementary Materials

\”Household Surveys in Crisis,\” by Bruce D. Meyer, Wallace K. C. Mok and James X. Sullivan
Household surveys, one of the main innovations in social science research of the last century, are threatened by declining accuracy due to reduced cooperation of respondents. While many indicators of survey quality have steadily declined in recent decades, the literature has largely emphasized rising nonresponse rates rather than other potentially more important dimensions to the problem. We divide the problem into rising rates of nonresponse, imputation, and measurement error, documenting the rise in each of these threats to survey quality over the past three decades. A fundamental problem in assessing biases due to these problems in surveys is the lack of a benchmark or measure of truth, leading us to focus on the accuracy of the reporting of government transfers. We provide evidence from aggregate measures of transfer reporting as well as linked microdata. We discuss the relative importance of misreporting of program receipt and conditional amounts of benefits received, as well as some of the conjectured reasons for declining cooperation and for survey errors. We end by discussing ways to reduce the impact of the problem including the increased use of administrative data and the possibilities for combining administrative and survey data.
Full-Text Access | Supplementary Materials

\”Seven Centuries of European Economic Growth and Decline,\” by Roger Fouquet and Stephen Broadberry
This paper investigates very long-run preindustrial economic development. New annual GDP per capita data for six European countries over the last seven hundred years paint a clearer picture of the history of European economic development. We confirm that sustained growth has been a recent phenomenon, but reject the argument that there was no long-run growth in living standards before the Industrial Revolution. Instead, the evidence demonstrates the existence of numerous periods of economic growth before the nineteenth century—periods of unsustained, but rising GDP per capita. We also show that many of the economies experienced substantial economic decline. Thus, rather than being stagnant, pre-nineteenth century European economies experienced a great deal of change. Finally, we offer some evidence that, from the nineteenth century, these economies increased the likelihood of being in a phase of economic growth and reduced the risk of being in a phase of economic decline.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

The Window Tax: A Tale of Excess Burden

For economists, the \”excess burden\” of a tax refers to the idea that the cost of a tax isn\’t just the amount of money collected–it\’s also the ways in which taxpayers alter their behavior because the tax has changed their incentives. A moderately well-known classroom and textbook example is the \”window tax,\” first imposed in England in 1696 by King William III, and not definitively repealed until 1851. The excess burden of the window tax was that lower-income people ended up living in rooms with few or no windows.

Wallace E. Oates and Robert M. Schwab review the history of the window tax and provide actual estimates of how it affected the number of windows per house in their article, \”The Window Tax: A Case Study in Excess Burden,\” which appeared in the Winter 2015 issue of the Journal of Economic Perspectives (where I have toiled in the fields as Managing Editor since 1987). The article popped back into my mind earlier this week when I learned that Oates, a highly distinguished economist based at the University of Maryland since 1979, died last week. One of Oates\’s specialties was the area of local public finance, and his 1972 book Fiscal Federalism is a classic of that subfield.

Here are some facts about the historical window tax, courtesy of Oates and Schwab.

  • William III intended it as a temporary tax, just to help out with the overhang of costs from the Glorious Revolution of 1688 and the most recent war with France. But it ended up lasting 150 years. 
  • \”An important feature of the tax was that it was levied on the occupant, not the owner of the dwelling. Thus, the renter, not the landlord, paid the tax. However, large tenement buildings in the cities, each with several apartments, were an exception. They were charged as single residences with the tax liability resting on the landlord. This led to especially wretched conditions for the poor in the cities, as landlords blocked up windows and constructed tenements without adequate light and ventilation …\”
  • The window tax was thought of as an improvement on the \”hearth tax\” that Charles II had imposed in 1662. \”The tax was very unpopular in part because of the intrusive character of the assessment process. The `chimney-men\’ (as the assessors and tax collectors were called) had to enter the house to count the number of hearths and stoves, and there was great resentment against this invasion of the sanctity of the home. The window tax, in contrast, did not require access to the interior of the dwelling: the “window peepers” could count windows from the outside, thus simplifying the assessment procedure and obviating the need for an invasion of the interior.\”
  • The window tax was intended as a visible measure of ability to pay: that is, a high-income person would live in a place with more windows than a low-income person. But at the time, it was widely recognized that windows were a very imperfect proxy for wealth. Adam Smith wrote about this problem of window tax in 1776 in The Wealth of Nations: “A house of ten pounds rent in the country may have more windows than a house of five hundred pounds rent in London; and though the inhabitant of the former is likely to be a much poorer man than that of the latter, yet so far as his contribution is regulated by the window-tax, he must contribute more to the support of the state.”
  • When the rates on the window tax went up, it was common for owners of homes and apartments to block or build over many or all of their windows. The results on human well-being were severe. \”A series of studies by physicians and others found that the unsanitary conditions resulting from the lack of proper ventilation and fresh air encouraged the propagation of numerous diseases such as dysentery, gangrene, and typhus. … A series of petitions to Parliament resulted in the designation of commissioners and committees to study the problems of the window tax in the first half of the 19th century. In 1846, medical officers petitioned Parliament for the abolition of the window tax, pronouncing it to be `most injurious to the health, welfare, property, and industry of the poor, and of the community at large\’.\”
  • Here\’s Charles Dickens writing in 1850 about the window tax in Household Words, a magazine that he published for a number of years: “The adage ‘free as air’ has become obsolete by Act of Parliament. Neither air nor light have been free since the imposition of the window-tax. We are obliged to pay for what nature lavishly supplies to all, at so much per window per year; and the poor who cannot afford the expense are stinted in two of the most urgent necessities of life.” 

Oates and Schwab combine data on the number of windows in a sample of houses in Shropshire with economic theory about how households respond to taxes to generate an admittedly rough estimate: on average, collecting a certain amount of money through the window tax created an excess burden–in terms of the costs of living in a place with fewer windows–equal to an additional 62% of the value of the tax.
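For readers who like to see what a 62% excess burden implies, here is a back-of-the-envelope sketch in Python. The function name and the £100 figure are my own illustration, not numbers from the Oates and Schwab article:

```python
# Illustrative arithmetic only: the estimated excess burden of the window
# tax adds roughly 62% on top of the revenue collected, in the form of the
# welfare cost of living in darker, less-ventilated rooms.
def total_burden(revenue, excess_burden_rate=0.62):
    """Revenue collected plus the deadweight loss the tax induces."""
    return revenue * (1 + excess_burden_rate)

# A tax collecting 100 pounds imposes a total burden of about 162 pounds:
print(round(total_burden(100), 2))  # 162.0
```

The point of the calculation is that the true cost of the tax to society was substantially larger than the amount that reached the treasury.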

Oates and Schwab ask why the window tax lasted so long, given its many problems, and offer an appropriately cynical answer: \”Perhaps the lesson here is that when governments need to raise significant revenue, even a very bad tax can survive for a very long time.\”

I didn\’t know Oates personally, but I had one other job-related interaction with him. Along with his work in local public finance, Oates was also well-known as an environmental economist. His 1975 book The Theory of Environmental Policy (written with William Baumol) was highly influential in setting the direction of what at the time was a fairly new and growing field. In 1995, Oates was a co-author in one of the most downloaded and cited exchanges the JEP has ever published on the subject of what is sometimes called the \”Porter hypothesis.\”

Michael Porter made the argument–bolstered by a large number of case studies–that when environmental goals are set in a strict way, but firms are allowed flexibility in how to achieve those goals in the context of a competitive market environment, firms often become quite innovative in meeting those environmental goals. Indeed, Porter argued that in a substantial number of cases, the innovations induced by the tough new environmental rules save enough money so that the rules end up imposing no economic costs at all. In the Fall 1995 Journal of Economic Perspectives, Michael E. Porter and Claas van der Linde make their case in \”Toward a New Conception of the Environment-Competitiveness Relationship,\” (9:4, 97-118). The authorial team of Karen Palmer, Wallace E. Oates, and Paul R. Portney respond in \”Tightening Environmental Standards: The Benefit-Cost or the No-Cost Paradigm?\” (9:4, 119-132). Oates and his co-authors took the position that while the costs of complying with environmental regulations do often turn out to be lower than industry predictions that were made when the rule was under discussion, it goes too far to say that environmental rules usually or generally don\’t impose costs. I wrote about some more recent evidence on this dispute in \”Environmental Protection and Productivity Growth: Seeking the Tradeoff\” (January 8, 2015).

Land Use: Regulations vs. Negotiations

Here\’s a thought experiment. Imagine two cities. In one city, plots of land are rectangular and divided up in a grid. In the other city, plots of land are defined by what is called the \”metes and bounds\” system, in which the property line is described by a series of boundaries that could include a stream, a large rock, a tree, a road, a wall, an existing building–and then with straight lines (\”metes\”) drawn between these boundaries.  In which city would you expect land values to be higher, and development to proceed faster, and why?

The answer to this question, along with a number of other intriguing insights about urban planning, emerge from an article by Roderick M. Hills Jr. and David Schleicher, \”Can `Planning\’ Deregulate Land Use?\” which appears in Regulation magazine (Fall 2015, pp. 36-41).  Here\’s their answer:

Dean Lueck (University of Arizona) and Gary Libecap (University of California, Santa Barbara) have confirmed empirically that the existence of easily ascertained boundaries can substantially increase property values. Comparing “metes and bounds” lots (the boundaries of which are defined by irregular and individualized lines often following natural boundaries like hills, trees, and streams) with “rectangles and squares” (a system that originally allocated properties in standardized rectangular and square lots), Lueck and Libecap determined that “rectangles and squares” demarcation results in property values that are roughly 30 percent higher than “metes and bounds”—an effect persisting 200 years after the original demarcation. Lots defined by rectangles and squares attracted more population, urbanized more quickly, and—despite the property owners’ power to customize their lots after the initial demarcation—retained their geometrical boundaries long after they were initially demarcated. 

Manhattan’s street grid suggests a similar effect of simple, uniform, and rectangular lot lines. In 1811, when New York was a city of just under 100,000 people residing almost entirely south of Houston Street, the state legislature authorized three street commissioners to create a uniform street grid covering almost all of Manhattan Island. With few deviations, the result was a uniform grid of numbered streets and avenues that, according to the commissioners, would lower the cost of construction because “strait-sided and right-angled houses” were cheaper and easier to build. The grid had another advantage: rectangular lots made property rights easy to ascertain, reducing property disputes between neighbors and making it easier for outsiders to buy, sell, and improve real estate. Trevor O’Grady, a post-doctoral fellow at Harvard University, has confirmed that properties on gridded blocks are both more valuable and more densely developed than properties with irregular boundaries in non-gridded areas.

This finding may seem unsurprising, but it has a provocative lesson for current urban planning. In many urban areas, exactly what you are allowed to build on a plot of land is not altogether clear, and it can often be more about political negotiations than about concrete and drywall. Want to build an apartment building or a condo complex? Prepare for a negotiation over height, parking spaces, the design of street-level entrances, and the inclusion of units designated as \”affordable.\” Want to build an office building? City Hall will probably only give you a building permit after a negotiation over many features of the building, potentially including height, whether mass transit or public space is included in the design, parking spaces, the mix of retail and office space, and a number of other features.

In short, the two-dimensional square of land owned by a property developer may be well-defined by right angles. But the property rights concerning the three-dimensional box of what can be built on that plot of land are often not well-defined. Instead, a system of metaphorical \”metes and bounds\” includes many requirements that are negotiated anew during any given project.

Like everything else in economics, there are competing theories and a tradeoff at work here. On one side, there\’s an argument that bargaining over each individual project–typically in a process that pits the would-be developer against those already in the neighborhood–helps to assure that new projects are blended into the existing neighborhood and fit with a broader public vision. Hills and Schleicher admit some truth in this argument, but also offer two strong counterarguments. In the context of thinking about requirements for affordable housing, they write:

In two important respects, this bargaining process defeats the very goal of land-use deregulation necessary for a lasting solution to the housing affordability crisis. First, individualized bargains increase the chance that NIMBY neighbors will hijack local land to exclude housing because the individualized bargaining process provides no way for politicians representing different parts of the city to strike bargains allocating locally unwanted land uses across neighborhoods. … Local legislatures tend to be too disorganized to enforce complex deals for allocating locally undesirable land uses (LULUs) across legislators’ districts. The result is excessive zoning restrictions created by a universal coalition against LULUs: each member votes for every other member’s proposals to keep LULUs out in expectation that every other member will do the same…. 

Second, individualized bargaining raises the costs of knowing what development rights one buys with the purchase of a lot. Those costs drive away real estate investors, depriving the city of capital sorely needed to remedy a desperate housing shortage. …  Comprehensive plans also promote transparency and marketability for use rights in much the same way that a simple grid system promotes a market in possession rights. By making it simple for outsiders to see what they are buying when they purchase a parcel, such plans attract more buyers, thereby increasing investment in housing. In these two respects, centralized and rigid planning can actually be a libertarian reform, while ostensibly more flexible bargaining can lead to the strangulation of a city’s housing supply.

This insight is what leads to their seemingly paradoxical title: \”Can `Planning\’ Deregulate Land Use?\” General plans that specify what you can do with your land \”as-of-right\”–that is, without anyone being able to block you–may be a useful step toward avoiding regulation-by-negotiation.

Their lesson is broader than just land use planning. Many government policies can be implemented through general rules, or through a series of negotiations and exceptions and discretionary decisions. In specific situations, one can often make the case that the general rule doesn\’t quite work, at least not this time, and so tweaking and bending and amending the rule makes sense just this time. But if every decision comes down to a negotiation, then the general rule itself lacks force. Moreover, the outcome of real-world negotiations won\’t be decided by an All-Knowing and Benevolent Social Planner. Actual negotiations will be a messy business of who can hire the most high-powered lawyers, spend the most money on publicity, turn out the neighborhood crowds, and cut deals with local politicians and well-placed decision-makers for their support. As Hills and Schleicher conclude:

Paradoxically, sometimes rigidity and centralization of a general framework for buying and selling land is more market-friendly than the bargaining free-for-all that keeps everyone guessing—and paying lobbyists to improve the odds of their guesses. 

Snapshots of Well-Being Across High-Income Countries

Well-being is a multidimensional concept, for countries as well as for individuals. Thus, in the recent OECD report \”How\’s Life 2015: Measuring Well-Being,\” the emphasis goes well beyond basic one-dimensional measures like per capita GDP and offers comparisons across a range of measures of well-being for the countries that are members of the OECD.

The OECD has 34 member countries, including the US and Canada, many countries across western and eastern Europe, Japan and Australia, and a few others like Korea, Israel, Mexico, and Chile. Thus, it would be fair to say that it includes mostly the high-income countries of the world. The OECD is a combination think-tank and forum: it collects a wide range of data and publishes reports, but it has no power beyond its own reputation for accuracy and even-handedness.

In the calculations shown here, each measure of well-being is expressed in \”standard deviations,\” which, in case your statistics is a little rusty, measure how far a value is from the average. If you plot each of these statistics for the 34 countries of the OECD, most of the countries will tend to bunch fairly near the average value, while a few countries will be higher or lower. The OECD notes that for these statistics, about two-thirds of the 34 countries will be between -1 and +1 standard deviations of the mean value. Thus, about five countries will be above +1 and another five will be below -1.

So when you read these charts for various countries, they are basically telling you if a country is pretty close to the average for the high-income countries of the world, or whether it really stands out from the average.
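For the statistically rusty, the standardization behind these charts can be sketched in a few lines of Python. This is a minimal illustration of the idea with made-up numbers, not the OECD's actual procedure or data:

```python
# Express each country's value in standard-deviation units:
# (value - cross-country average) / (standard deviation across countries).
def standardize(values):
    """Convert raw country values to standard-deviation units."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Five hypothetical countries; the middle one sits exactly at the average,
# so its standardized score is 0.
scores = standardize([2.0, 3.0, 4.0, 5.0, 6.0])
print(scores[2])  # 0.0
```

A country scoring +2 on this scale, as the US does on household income and financial wealth, is an extreme outlier relative to its peers.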

Here\’s what the US looks like in this framework. For household income and financial wealth, the US is an extreme outlier in comparison to the OECD peer group, with values above 2 standard deviations. The US also stands out in earnings, rooms per person, housing affordability, educational attainment, and perceived health. However, the US stands out in a negative way in categories like time off, adult skills, and deaths due to assault.

How do these configurations appear for some other prominent OECD economies? In some ways, the US stands out as a country of fairly extreme values, both positive and negative. In contrast, Germany in comparison to the other OECD countries stands out on household income (although clearly well below the US level) as well as employment, job security, and water quality. Germany has a number of other above-average categories, and no categories that are dramatically below the average.

What about Japan? Japan is high on financial wealth but about average on income–on those two categories the opposite of Germany\’s pattern. Japan is especially high in job security, life expectancy, cognitive skills at 15, and adult skills. Japan performs poorly in perceived health (which is quirky, given the life expectancy numbers) as well as in basic sanitation, voter turnout, and life satisfaction.

The chart for France made me smile. Overall, France is even closer to the OECD average on most of these measures than Germany. France stands out a bit on household income, voter turnout, and life expectancy. But by these measures, the most striking difference for France is the large amount of time off!

The standard of living in Scandinavian countries like Sweden and Denmark has been getting some attention lately in US political conversations. Here\’s the snapshot for Sweden. Again, Sweden is closer to the average for the 34 OECD countries than the US, but it especially stands out in employment, perceived health, working hours, and water and air quality. It\’s interesting to me that while the US is roughly average on cognitive skills at age 15 but substantially below-average on adult skills, Sweden is a little behind on cognitive skills at age 15 and substantially ahead on adult skills. This pattern suggests that Sweden knows something about helping young adults acquire skills as they move into the workforce. It\’s also interesting that Sweden is a little below average on job security, a little above average on long-term unemployment, and average on time off.

Finally, if you want a picture of a society in trouble, consider the graph for Greece. Compared to the OECD average, Greece lags dramatically in employment, earnings, job security, long-term unemployment, rooms per person, housing affordability, cognitive skills at 15, social support, water quality, and life satisfaction.

The report offers graphs like this for all 34 of the OECD countries, and then also considerably more discussion of how each variable is measured and how each variable compares across countries.

Update on the National School Lunch Program

\”On a typical schoolday in October 2014, over 30 million U.S. schoolchildren and teens took their trays through the lunch line. Seventy-two percent of these students received their meals for free or paid a reduced price, and the remaining 28 percent purchased the full-price lunch.\” However, the number of children receiving a free lunch is rising, while the number purchasing a school lunch is falling. Katherine Ralston and Constance Newman take \”A Look at What’s Driving Lower Purchases of School Lunches,\” in Amber Waves, published by the US Department of Agriculture (October 5, 2015).

Here are some facts to organize the discussion. First, here\’s a figure showing total number of students getting school lunch over time. The number receiving free lunches has risen substantially; the number paying for lunch has dropped.

Another angle on the same data is to look at the proportion of students in each category rather than total numbers. About 60% of all students are provided a lunch at school. The share of students eligible for a free lunch who actually get one is about 90%. The share of students who would need to pay for their own lunch, and who are buying the school lunch, has fallen in the last few years.

The National School Lunch Program cost $11.6 billion in 2012, according to a USDA fact sheet. Why are paid lunches declining? Perhaps the obvious explanation is the 2007-2009 recession and its aftermath. It seems plausible that a number of families who weren\’t eligible for free lunches were concerned about saving some money, and started sending their children to school with a home-packed lunch instead. But this answer seems incomplete, because the program has been tweaked in a number of ways in recent years.

For example, Ralston and Newman explain:

In 2010, Congress passed the Healthy, Hunger-Free Kids Act. The Act addressed concerns about the nutritional quality of children’s diets, school meals, and competitive foods available in schools (those not part of the school meal, such as a la carte items or foods and drinks sold in vending machines). … In implementing the Act, USDA promulgated rules requiring lunches to include minimum servings per week of specific categories of vegetables, including dark green and red/orange vegetables, as well as changes to increase whole grains while limiting calories and sodium. … These rules took effect starting with school year 2012-13. Some school lunch standards were gradually phased in. …  The updated standards set a ceiling on total calories per average lunch in addition to existing minimum calorie requirements, with upper restrictions ranging from 650 kilocalories (kcal) for grades K-5 to 850 kcal for high schools. Total sodium levels for average lunches offered were also limited for the first time to 1,230 milligrams (mg) (grades K-5), 1,360 mg (grades 6-8), and 1,420 mg (grades 9-12) per average lunch by July 1, 2014, with intermediate and final targets scheduled for school years 2017-18 and 2022-23. 

It\’s easy to find surveys of school administrators who cheerily praise these new rules. As someone with three children in public schools, my anecdotal evidence is that not all children are pleased with the changes. Also, I think the kinds of concerns over what children eat for lunch that motivated the passage of the 2010 act are also leading some families to believe that a home-packed lunch will feed their children better. In addition to the menu changes, the law has also led many schools to raise the price for school lunches. Again, Ralston and Newman explain:

The Paid Lunch Equity provision requires districts to work towards making the revenue from paid lunches to equal the difference between the reimbursement rates for free lunches and paid lunches. For example, in school year 2014-15, the reimbursement rate for free lunches, including an additional $0.06 for compliance with updated meal standards, was $3.04 and the reimbursement for paid lunches, together with the additional 6 cents, was $0.34. The difference of $2.70 would represent the “equity” price. A district charging $2.00 for a paid lunch would be required to obtain an additional $0.70 per meal, on average, by gradually raising prices or adding non-Federal funds to make up the difference over time. Until the gap is closed, districts must increase average revenue per lunch, through prices or other non-Federal sources, by 2 percent plus the rate of inflation, with minimum increases capped at 10 cents in a given year, with exemptions under certain conditions.
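The "equity" arithmetic in the quoted example works out as follows. This is a toy sketch in Python using only the numbers quoted above; the variable names are mine, for illustration:

```python
# Paid Lunch Equity arithmetic, using the school year 2014-15 figures quoted above.
free_reimbursement = 3.04   # free-lunch reimbursement, including the $0.06 standards bonus
paid_reimbursement = 0.34   # paid-lunch reimbursement, including the same $0.06

equity_price = free_reimbursement - paid_reimbursement  # the target "equity" price

current_price = 2.00        # the example district's paid-lunch price
gap = equity_price - current_price  # extra revenue needed per meal, on average

print(f"Equity price: ${equity_price:.2f}, gap to close: ${gap:.2f}")
# Equity price: $2.70, gap to close: $0.70
```

The gap need not be closed at once: as the quote notes, districts close it gradually through annual increases of 2 percent plus inflation, capped at 10 cents per year.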

Higher prices for paid lunches run the risk of reducing participation. A nationally representative study from school year 2005-06 found that a 10-percent increase in lunch price was associated with a decline of 1.5 percentage points in the participation rate of paid lunches, after controlling for other characteristics of the meal and the school foodservice operation. Another nationally representative survey conducted in 2012 found that lunch prices rose 4.2 percent in elementary schools and 3.3 percent in middle and high schools, on average, between school years 2010-11 and 2011-12. Applying the earlier results on effects of differences in lunch prices on paid-lunch participation rates, these price increases would be expected to lead to declines in participation rates of 0.6 percentage points for elementary school and 0.5 percentage points for middle and high school. These estimates suggest that price increases related to the Paid Lunch Equity provision could have contributed modestly to the decline in participation rates for paid lunches.
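The extrapolation in the quoted passage can be reproduced directly: the quoted estimate of 1.5 percentage points per 10 percent price increase implies a sensitivity of 0.15 points per percent. A back-of-the-envelope sketch, not the study's full model:

```python
# Back-of-the-envelope participation effect, using the estimates quoted above:
# a 10% price increase is associated with a 1.5-percentage-point decline in
# paid-lunch participation, i.e. roughly 0.15 points per 1% of price change.
def participation_decline(price_increase_pct, sensitivity=1.5 / 10):
    """Estimated decline in paid-lunch participation rate, in percentage points."""
    return price_increase_pct * sensitivity

elementary = participation_decline(4.2)    # avg. price rise, 2010-11 to 2011-12
middle_high = participation_decline(3.3)
print(round(elementary, 1), round(middle_high, 1))  # 0.6 0.5
```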

Other changes to the school lunch program are just beginning to be phased in. In the current school year, the "Smart Snacks in School" rules kicked in, requiring that "competitive" foods sold in schools along with school lunches "must meet limits on calories, total and saturated fat, trans-fat, sugar, and sodium and contribute to servings of healthy food groups."

After a few years of pilot programs, the eligibility rules for free school lunches are being eased. "Overall NSLP participation may also be helped by the Community Eligibility Provision (CEP), a new option that allows schools in low-income areas to offer school meals at no charge to all students. Under CEP, a district may offer all meals at no charge in any school where 40 percent or more of students are certified for free meals without an application … An evaluation of seven early adopting States found that CEP increased student participation in NSLP by 5 percent relative to comparable schools that did not participate in CEP. The increase in overall participation associated with CEP may result not only from the expansion of free lunches, but also from reduced stigma and faster moving lunch lines due to the elimination of payments."

Like a lot of middle-class families, we use the school lunch program as a convenience. Our children take home-packed lunches most days, but some days the lunches never quite get made. My sense is that the nutritional value of the lunches our children take to school is considerably better than what they eat when they buy a school lunch (remembering that what they actually eat is not the same as what the school tries to serve them). But for a lot of low-income families, the school lunch program is a nutritional lifeline. The poverty rate for children in the United States (21% in 2014) is considerably higher than for other age groups.

I'm sympathetic to the notion that the food served in schools should be healthier. But as a parent, I've learned that serving healthier food to children is comparatively easy. Having children eat that food is harder. And having children learn healthy habits related to food and diet can be harder still.

The Trade Facilitation Agenda

The most common way of talking about "barriers to trade" between countries has often involved measuring taxes on imports ("tariffs") or quantitative limits on imports ("quotas"). But import tariffs and quotas have been reduced over time, and the focus of many new trade agreements, along with the World Trade Organization, is "trade facilitation," which means taking steps to reduce the costs of international trade. Some of these costs involve transportation and communications infrastructure, but a number of the changes also involve administrative practices like the paperwork and time lags needed to get through customs.

Back in December 2013, the trade negotiators at the World Trade Organization signed off on the Trade Facilitation Agreement, the first multilateral trade agreement concluded since the establishment of the World Trade Organization in 1995. The agreement legally comes into force if or when two-thirds of the WTO member countries formally accept it. So far, 18 have done so, so there's some distance to go. In its World Trade Report 2015, subtitled "Speeding up trade: benefits and challenges of implementing the WTO Trade Facilitation Agreement," the WTO lays out the potential gains and challenges. The WTO writes:

While trade agreements in the past were about “negative” integration – countries lowering tariff and non-tariff barriers – the WTO Trade Facilitation Agreement (TFA) is about positive integration – countries working together to simplify processes, share information, and cooperate on regulatory and policy goals. … The TFA represents a landmark achievement for the WTO, with the potential to increase world trade by up to US$ 1 trillion per annum.

How big are the costs of trading internationally? The WTO writes:

Based on the available evidence, trade costs remain high. Based on the Arvis et al. (2013) database, trade costs in developing countries in 2010 were equivalent to applying a 219 per cent ad valorem tariff on international trade. This implies that for each dollar it costs to manufacture a product, another US$ 2.19 will be added in the form of trade costs. Even in high-income countries, trade costs are high, as the same product would face an additional US$ 1.34 in cost.
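The ad valorem framing translates mechanically into landed costs. Here is a small sketch using the two tariff-equivalent rates quoted above; the function name is mine, for illustration:

```python
# The ad valorem framing of trade costs quoted above: a tariff-equivalent rate
# of t percent means each $1 of manufacturing cost picks up $t/100 in trade
# costs on its way across borders.
def landed_cost(manufacture_cost, ad_valorem_pct):
    """Total cost per unit once trade costs are added to manufacturing cost."""
    return manufacture_cost * (1 + ad_valorem_pct / 100)

print(landed_cost(1.00, 219))  # developing countries, 2010: $3.19 per $1 manufactured
print(landed_cost(1.00, 134))  # high-income countries: $2.34 per $1 manufactured
```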

Here's a figure showing how these trade costs vary across types of countries. The report has discussion of variation by sector of industry as well.

There is already widespread recognition that these costs are hindering trade, and so the trade facilitation agenda is already spreading rapidly through regional and bilateral trade agreements. This figure shows the rise in the number of regional trading agreements, and also emphasizes that almost all of those agreements have trade facilitation components. Indeed, a defining characteristic of international trade in the 21st century is that it involves global value chains, in which the chain of production is divided up across multiple countries (for more detail, see here, here, or here). In effect, many regional trading agreements are seeking to facilitate these global value chains by reducing the costs of trade.

For a taste of what specifically is meant by "trade facilitation" in these agreements, here's a list of the trade facilitation provisions that are most common in regional trade agreements. Of course, the WTO report has details about what each of these categories means.

Shipping goods and services across global distances and multiple national borders is never going to be quite as simple as dealing with a nearby provider who is operating within the same borders. How much can the trade facilitation agenda reduce the kinds of costs given above? Here's the WTO summary:

Trade costs are high, particularly in developing countries. Full implementation of the Trade Facilitation Agreement (TFA) will reduce global trade costs by an average of 14.3 per cent. African countries and least-developed countries (LDCs) are expected to see the biggest average reduction in trade costs. … Computable general equilibrium (CGE) simulations predict export gains from the TFA of between US$ 750 billion and well over US$ 1 trillion dollars per annum, depending on the implementation time-frame and coverage. Over the 2015-30 horizon, implementation of the TFA will add around 2.7 per cent per year to world export growth and more than half a per cent per year to world GDP growth. … Gravity model estimates suggest that the trade gains from the TFA could be even larger, with increases in global exports of between US$ 1.1 trillion and US$ 3.6 trillion depending on the extent to which the provisions of the TFA are implemented.

There are some other benefits to the trade facilitation agenda, as well. For example, reforming the legal and regulatory processes around customs, and reducing delays, means that there is less reason to pay bribes to speed the process along, and thus reduces corruption. The WTO writes:

Trade-related corruption is positively affected by the time spent to clear customs procedures. Shepherd (2010) shows that a 10 per cent increase in trade time leads to a 14.5 per cent fall in bilateral trade in a low-corruption country, and to a 15.3 per cent fall in a country with high levels of corruption. By reducing the time required to move goods across borders, trade facilitation is therefore a useful instrument for anticorruption efforts at the border.
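Read as a simple linear relationship, the Shepherd (2010) numbers quoted above can be sketched as follows. This is a rough extrapolation for illustration only, not the paper's actual specification, and the function name is mine:

```python
# The estimates quoted above: a 10% increase in time to clear customs is
# associated with a 14.5% fall in bilateral trade in a low-corruption country,
# and a 15.3% fall in a high-corruption country. Extrapolating linearly:
def trade_fall_pct(time_increase_pct, high_corruption=False):
    """Estimated percent fall in bilateral trade for a given rise in trade time."""
    fall_per_ten_pct = 15.3 if high_corruption else 14.5
    return time_increase_pct * fall_per_ten_pct / 10

print(trade_fall_pct(10))        # low-corruption country: 14.5
print(trade_fall_pct(10, True))  # high-corruption country: 15.3
```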

More broadly, steps to facilitate trade across borders by simplifying paperwork, improving infrastructure, and reducing delays will often be quite useful for domestic production chains, not just for international trade.  Thus, lots of organizations are pushing the trade facilitation agenda, not just the WTO. As one example, the report notes:

The World Bank is also active in the trade facilitation area. In fiscal year 2013, for example, the World Bank spent approximately US$ 5.8 billion on trade facilitation projects, including customs and border management and streamlining documentary requirements, as well as trade infrastructure investment, port efficiency, transport security, logistics and transport services, regional trade facilitation and trade corridors or transit and multimodal transport. The Bank is also involved in analytical work such as the Trade and Transport Facilitation Assessment which “is a practical tool to identify the obstacles to the fluidity of trade supply chains.”

How Tight is the US Labor Market?

The US unemployment rate was 5.1% in August and September. This rate is low by the standards of recent decades, but concerns remain over the extent to which it fails to reflect those who were long-term unemployed and have dropped out of looking for a job, and who thus are no longer officially counted in the ranks of the unemployed.

Alan B. Krueger tackles this and related issues in "How Tight Is the Labor Market?", which was delivered as the 2015 Martin Feldstein Lecture at the National Bureau of Economic Research on July 22, 2015. An edited version of the talk is here; if you would like to watch the lecture and see the PowerPoint slides, you can do so here. (Full disclosure: Alan was Editor of the Journal of Economic Perspectives, and thus was my boss, from 1996-2002.) Short answer: the long-term unemployed dropping out of the labor market do contribute modestly to a lower labor force participation rate and a lower unemployment rate. However, if one focuses on short-term unemployment levels, the labor market is tight enough that it is leading to higher wages in much the same way as in previous decades.

A few figures set the stage. I'll use versions generated by the ever-useful FRED website run by the Federal Reserve Bank of St. Louis, which has the advantage of updating the figures a bit from the ones provided in Krueger's talk last summer. For starters, the US unemployment rate has now dropped dramatically, back to levels that are relatively low in the context of recent decades.

However, if one focuses on the share of the unemployed who are long-term unemployed, defined as those without a job and still looking for one after at least 27 weeks, the picture isn't as rosy. Although the share of the unemployed who are long-term unemployed has declined, it still remains at relatively high levels by the standards of recent decades. To describe this pattern in another way, those who are long-term unemployed have found it harder to get back into employment than those who were unemployed for less than 27 weeks.
In addition, the official unemployment statistics only count someone as unemployed if they are out of a job and actively looking for work. This definition of unemployment makes some sense: for example, it would be silly to count a 75-year-old retiree or a married spouse staying home by choice as "unemployed." The labor force participation rate measures the share of adults who are "in the labor force," which means that they either have a job or are out of a job and looking for one. This rate has been generally declining since the late 1990s. There are a number of possible reasons for this decline: for example, the baby boomer generation is retiring in force, and more retirees means a lower labor force participation rate; more young adults are continuing to attend school into their 20s, and thus aren't counted as being in the labor force; and some of those who were long-term unemployed have given up looking for work, and are no longer counted in the unemployment statistics even though they would still prefer to be employed.
Krueger slices and dices this topic from several directions, but a lot of his recent work has focused on the issue of the long-term unemployed. Those who are long-term unemployed tend to become disconnected from the labor market over time. Their job search activity gradually diminishes, and employers are less likely to give interviews to those whose resumes show long-term unemployment. For illustration, here's a figure from Krueger on the probability of an unemployed worker finding a job based on how long the worker has been unemployed.

[Figure 5: probability that an unemployed worker finds a job, by duration of unemployment]

Krueger writes: "A variety of evidence points to the long-term unemployed being on the margins of the labor market, with many on the verge of withdrawing from searching for a job altogether. As a result, the long-term unemployed exert less downward pressure on wages than do the short-term unemployed. They are increasingly likely to transition out of the labor force, which is a loss of potential for our economy and, more importantly, a personal tragedy for millions of workers and their families." By Krueger's calculation, about half of the decline in the share of the long-term unemployed is due to that group dropping out of the labor force altogether. Of the decline in the labor force participation rate, most of it is due to a larger share of retirees in the population and young adults being more likely to remain in school, but on Krueger's estimates about one percentage point of the decline is due to the long-term unemployed leaving the labor market and no longer looking for work. (I've discussed the decline in labor force participation rates a number of times on this blog before: for example, here, here and here, or for some international comparisons, see here.)

A different way to measure the tightness of the labor market is to stop parsing the job statistics, and instead look at the patterns of unemployment and wages, what economists call a Phillips curve. In general, one might expect that higher unemployment would mean less pressure for wages to rise, and the reverse for lower unemployment. One sometimes hears an argument that real wages haven't been rising recently in the way one should expect if unemployment is genuinely low (as opposed to just appearing low because workers have dropped out of the labor force).

Krueger argues that the patterns of wage changes and unemployment are roughly what one should expect. He focuses only on short-term unemployment (that is, unemployment lasting less than 27 weeks), on the grounds that the long-term unemployed are more likely to be detached from the labor force and thus will exert less pressure on wages. Increases in real wages are measured with the Employment Cost Index data collected by the US Bureau of Labor Statistics, subtracting inflation as measured by the Personal Consumption Expenditures price index. In the figure below, the solid line shows the relationship between short-term unemployment and changes in real wages for the period from 1976-2008. (The dashed lines show the statistical confidence intervals on either side of this line.) The points labelled in blue are for the years since 2008. From 2009-2011, the points line up almost exactly on the relationship predicted from earlier data. For 2012-2014, the points are below the predicted relationship, although still comfortably within the range of past experience (as shown by the confidence intervals). For the first quarter of 2015, the point is above the historical prediction.

[Figure 10: real wage growth plotted against the short-term unemployment rate, with the fitted 1976-2008 line, confidence intervals, and post-2008 points in blue]

This pattern suggests that since 2008, the relationship between unemployment rates and wage increases hasn't changed much. To put it another way, the low unemployment rates now being observed are a meaningful statistic, not just covering up for workers exiting the labor market, because they are tending to push up wages in pretty much the same way as they have in the past.
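The Phillips-curve exercise described above can be sketched in a few lines: regress real wage growth on the short-term unemployment rate and check the sign of the slope. The data points below are made up for illustration only; they are not the ECI/PCE series Krueger uses:

```python
# A minimal sketch of the Phillips-curve fit described above. The numbers are
# hypothetical, chosen only to illustrate the expected inverse relationship.
import numpy as np

short_term_unemp = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])  # hypothetical, percent
real_wage_growth = np.array([1.8, 1.5, 1.1, 0.9, 0.6, 0.2])  # hypothetical, percent

# Fit the "solid line": real wage growth as a linear function of
# short-term unemployment.
slope, intercept = np.polyfit(short_term_unemp, real_wage_growth, 1)
print(f"slope: {slope:.2f}")  # negative slope: tighter labor market, faster wage growth
```

With real data, the interesting question is whether the post-2008 points fall within the confidence band around this fitted line, which is exactly what Krueger's figure examines.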