Uber: What are the Real Economic Gains?

A common accusation against Uber and other web-facilitated car-hire services is that what looks like a competitive advantage arises only because they operate under a different and more lax set of rules than regular taxicabs. In other words, the newfangled service looks great until you find yourself in an unsafe and undermaintained vehicle with an untrained or underinsured driver. In "The Social Costs of Uber," Brishen Rogers points out two sources of genuine economic gains from Uber and similar firms (The University of Chicago Law Review Dialogue, 2015, 82: pp. 85-102). He also describes the evolving negotiations over rules that Uber and other companies seem sure to face.

A company like Uber offers two sources of genuine economic gains: reduced search costs for both passengers and drivers, and gains from horizontal and vertical integration. Here's Rogers on the mess that search costs on the part of both drivers and passengers create for conventional taxicab markets, and how Uber addresses them (with footnotes omitted).

"[B]oth regulated and deregulated taxi sectors suffer from high search costs. Riders have difficulty finding empty cabs when needed. Taxis therefore tend to congregate in spaces of high demand, such as airports and hotels. Deregulation arguably made this worse. Since supply went up, cab drivers had even greater incentives to stay in high-demand areas, and yet they had to raise fares to stay afloat.

High search costs and low effective supply may also reduce demand for cabs in two ways. First, if consumers have difficulty finding cabs because cabs are scarce, they may tend not to search in the first place. Second, high search costs may create a vicious cycle for phone-dispatched cabs. Riders who get tired of waiting for a dispatched cab may simply hail another on the street; drivers en route to a rider may also decide to take another fare from the street, rationally estimating that the rider who called may have already found another car. In some cities, the result is that dispatched cabs may never arrive—full stop.

Uber has basically eradicated search costs. Rather than calling a dispatcher and waiting, or standing on the street, users can hail a car from indoors and watch its progress toward their location. Drivers also cannot poach one another’s pre-committed fares. This is a real boon for consumers who don’t like long waits or uncertainty—which is to say everyone. Uber can also advise drivers on when to enter and exit the market—for example, by encouraging part-time drivers to work a few hours on weekend nights."

The article cites some evidence from a few years back in San Francisco that fewer than half of the attempts to dispatch a cab to a certain address ended up with a cab actually arriving.

For economists, "vertical integration" refers to whether the successive steps along the chain of production, from start to finish, are carried out by a single economic actor or split among many. In contrast, "horizontal integration" refers to whether a particular stage of the production process is concentrated among a few actors or spread across many. Rogers argues that the taxicab industry has evolved in ways that don't involve much vertical or horizontal integration, and that Uber and other ride-sharing services are creating efficiency gains by bringing greater integration along both dimensions. Rogers writes:

Uber is also extremely important for another reason that has received little attention: it is encouraging vertical and horizontal integration in the car-hire sector. … In Chicago, for example, medallion owners often lease their operating rights to management companies; management companies in turn purchase or lease cars and outfit them as required per local regulations; drivers then lease those cars from management companies on a weekly, daily, or even hourly basis. Other cities have different licensing systems, but any licensing system that does not mandate owner operation or direct employment of drivers will encourage similar vertical fragmentation. Taxi companies will rationally (and lawfully) lease cars to drivers rather than employ drivers in order to avoid the costs associated with employment, which include minimum wage laws, unemployment and workers’ compensation taxes, and possible unionization. Uber is now reducing such vertical fragmentation, since it has a direct contractual relationship with its drivers. It is also integrating the sector horizontally as it gains market share within cities. Meanwhile, the company is compiling a massive database of driver and rider behavior. Those data are essential to Uber’s price-setting and market-making functions but would be all-but-impossible to compile in a fragmented industry. 

In short, the economics behind Uber and other ride-sharing services suggests the possibility of substantial and real economic gains. Rogers quickly mentions some other gains, as well: "For example, Uber reduces consumers’ incentives to purchase automobiles, almost certainly saving them money and reducing environmental harms. As consumers buy fewer cars, Uber also opens up the remarkable possibility of converting parking spaces to new and environmentally sound uses. Uber may also reduce drunk driving and other accidents."

But even if Uber isn't just a case of those who can sidestep existing regulations gaining a cost advantage, it is nonetheless true that Uber, like any company providing a service to the public, is going to find itself facing some rules and regulations. For example, basic checks on driver competence, as well as rules about vehicle safety and appropriate insurance, seem to be on their way.

What is perhaps more interesting is that the web-enabled car-hire model raises some questions that didn't arise in the same way in the previous taxicab industry.

For example, there are a combination of old and new concerns about discrimination. The old concern is that taxis may not be available for hire in certain neighborhoods, or drivers may not pick up riders from certain racial or ethnic groups. A web-connected car-hire service seems likely to reduce this problem. The new concern is that Uber riders are expected to evaluate drivers. What if such evaluations carry a dose of racial/ethnic or gender prejudice?

Another issue is whether Uber drivers should be treated as "employees." Rogers doubts that Uber drivers will ultimately be treated in this way, and points to similar cases involving whether FedEx drivers are employees. He writes:

The most analogous recent cases, in which courts have split, involve FedEx drivers. Those that found for the workers have noted, for example, that FedEx requires uniforms and other trade dress, that it requires drivers to show up at sorting facilities at designated times each day, and that it requires them to deliver packages every day. Uber drivers are different in each respect. They use their own cars, need not wear uniforms, and most importantly they work whatever hours they please.

But ultimately, as these kinds of regulations are discussed and debated, the very success of Uber and similar services is likely to help in enacting and enforcing certain standards. As Rogers notes: "These developments could make it relatively simple to ensure that Uber complies with the law and plays its part in advancing public goals. The reason is simple: as scholars have documented, large, sophisticated firms can detect and root out internal legal violations—and otherwise alter employees’ and contractors’ behavior—far more easily than public authorities or outside private attorneys."

In other words, Uber and similar companies are not going to be both enormous commercial successes and untouched by regulatory concerns. Instead, Uber's huge and growing database of drivers, fares, prices, times of day, locations, accidents, and evaluations of drivers by passengers and of passengers by drivers will all tend to provide information that can be used to monitor what happens and to motivate improvements where needed. Moreover, if enough potential customers or drivers are discontented with Uber and the existing web-enabled car-hire companies, the barriers to entry for other firms to start up Uber-like companies on a city-by-city basis are not very high. As Rogers writes:

Moreover, it is not clear that Uber’s position at the top of the ride-sharing sector is stable. While Uber’s app is revolutionary, it is also easy to replicate. Uber already faces intense competition from Lyft and other ride-sharing companies, competition that should only become more intense given Uber’s repeated public relations disasters. While Uber’s success relies in part on network effects—more riders and drivers enable a more efficient market—the switching costs for riders and drivers appear to be fairly minimal. Uber may become the Myspace or Netscape of ride sharing—that is, a pioneer that could not maintain its market position. Concerns about monopoly therefore seem premature.   

Those interested in this subject might also want to check out an earlier post on "Who are the Uber Drivers?" (February 18, 2015).

How Many Deaths from Mistakes in US Health Care?

Back in 1999, the Institute of Medicine (part of the National Academies of Science) estimated in its report To Err is Human that in 1997 at least 44,000, and as many as 98,000, patients died in hospitals as the result of medical errors that could have been prevented. Current estimates are higher, as Thomas R. Krause points out in "Department of Measurement: Scorecard Needed" in the Milken Institute Review (Fourth Quarter 2015, pp. 91-94). Krause writes:

"You've seen the astounding numbers: hundreds of thousands of Americans die each year due to medical treatment errors. Indeed, the median credible estimate is 350,000, more than U.S. combat deaths in all of World War II. If you measure the “value of life” the way economists and federal agencies do it – that is, by observing how much individuals voluntarily pay in daily life to reduce the risk of accidental death – those 350,000 lives represent a loss exceeding $3 trillion, or one-sixth of GDP. But when decades pass and little seems to change, even these figures lose their power to shock, and the public is inclined to focus its outrage on apparently more tractable problems."

In case you're one of the vast majority who actually haven't seen those estimates, or at least haven't mentally registered that they exist, here are a couple of the more recent underlying sources.

The Agency for Healthcare Research and Quality (part of the US Department of Health and Human Services) published in May 2015 the 2014 National Healthcare Quality and Disparities Report. Here are some good news/bad news statistics from the report:

From 2010 to 2013, the overall rate of hospital-acquired conditions declined from 145 to 121 per 1,000 hospital discharges. This decline is estimated to correspond to 1.3 million fewer hospital-acquired conditions, 50,000 fewer inpatient deaths, and $12 billion savings in health care costs. Large declines were observed in rates of adverse drug events, healthcare-associated infections, and pressure ulcers.

The good news is 50,000 fewer deaths, along with health improvements and money saved. The bad news is that the rate of hospital-acquired conditions basically fell from about one for every seven hospital discharges to about one for every eight. Sure, hospital-acquired conditions will never fall to zero. But it certainly looks to me as if at least tens of thousands of lives were being lost each year because that rate had not been reduced, and as if tens of thousands of additional lives could be saved by reducing the rate further. For another analysis in a different setting, here's a 2014 US government study about adverse and preventable effects of care in nursing care facilities.
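
To see what those rates mean in everyday terms, here is a quick back-of-the-envelope conversion (my arithmetic, not the report's):

```latex
\frac{145}{1{,}000} \approx \frac{1}{6.9} \quad \text{(roughly one hospital-acquired condition for every seven discharges in 2010)},
\qquad
\frac{121}{1{,}000} \approx \frac{1}{8.3} \quad \text{(roughly one for every eight discharges in 2013)}
```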

John T. James published "A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care" in the Journal of Patient Safety (September 2013, pp. 122-128). James reviews four studies of quality of care that focus on relatively small numbers of patients (three of the studies cover fewer than 1,000 patient records each; the other covers about 2,300). The studies use a method called the Global Trigger Tool to flag cases where preventable errors might have occurred, and the flagged cases are then examined by physicians. James describes the process this way:

The GTT depends on systematic review of medical records by persons trained to find specific clues or triggers suggesting that an adverse event has taken place. For example, triggers might include orders to stop a medication, an abnormal lab result, or prescription of an antidote medication such as naloxone. As a final step, the examination of the record must be validated by 1 or more physicians. As will be shown shortly, the methods used to find adverse events in hospital medical records target primarily errors of commission and are much less likely to find harm from errors of omission, communication, context, or missed diagnosis.

Projecting from four small studies to national patterns is obviously a little dicey, but for what it's worth, James finds:

Using a weighted average of the 4 studies, a lower limit of 210,000 deaths per year was associated with preventable harm in hospitals. Given limitations in the search capability of the Global Trigger Tool and the incompleteness of medical records on which the Tool depends, the true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year. Serious harm seems to be 10- to 20-fold more common than lethal harm.

My reactions to this body of evidence on the prevalence and costs of mistakes in the US health care system can be summarized in two bits of skepticism and one burst of outrage.

It seems sensible to be skeptical about the largest estimates of the size of the problem. There are obviously issues in deciding what was "preventable" or a "mistake."

The other bit of skepticism is that seeking to reduce the problem of medical errors is harder than it might at first sound. For example, Christine K. Cassel, Patrick H. Conway, Suzanne F. Delbanco, Ashish K. Jha, Robert S. Saunders, and Thomas H. Lee wrote about some efforts to measure and set guidelines for health care in "Getting More Performance from Performance Measurement," which appears in the New England Journal of Medicine on December 4, 2014. They point out that there are often literally hundreds of measures of quality of care, some important, some not, and many that turn out to be useless or even harmful.

Many observers fear that a proliferation of measures is leading to measurement fatigue without commensurate results. An analysis of 48 state and regional measure sets found that they included more than 500 different measures, only 20% of which were used by more than one program. Similarly, a study of 29 private health plans identified approximately 550 distinct measures, which overlapped little with the measures used by public programs. Health care organizations are therefore devoting substantial resources to reporting their performance to regulators and payers; one northeastern health system, for instance, uses 1% of its net patient-service revenue for that purpose. Beyond the problem of too many measures, there is concern that programs are not using the right ones. Some metrics capture health outcomes or processes that have major effects on overall health, but others focus on activities that may have minimal effects. …

Unfortunately, for every instance in which performance initiatives improved care, there were cases in which our good intentions for measurement simply enraged colleagues or inspired expenditures that produced no care improvements. One example of a measurement effort that had unintended consequences was the CMS quality measure for community-acquired pneumonia. This metric assessed whether providers administered the first dose of antibiotics to a patient within 6 hours after presentation, since analyses of Medicare databases had shown that an interval exceeding 4 hours was associated with increased in-hospital mortality. But the measure led to inappropriate antibiotic use in patients without community-acquired pneumonia, had adverse consequences such as Clostridium difficile colitis, and did not reduce mortality. The measure therefore lost its endorsement by the National Quality Forum in 2012, and CMS removed it from its Hospital Inpatient Quality Reporting and Hospital Compare programs.

But even after acknowledging that quantifying death and injury caused by health care mistakes is an inexact process, and fixing it isn't simple, the sheer scale of the issue remains.

The US economy will spend about $3 trillion this year on health care. As Krause noted at the start, the loss of 350,000 lives from preventable errors, if we value a life at about $9 million as is commonly done by federal regulators, means that the total cost of deaths from health care mistakes is about $3 trillion. On one side, perhaps this total is overstated. On the other side, it includes only the costs of deaths, not the health costs of serious but nonlethal harms (which James estimates are 10 or 20 times as common), and not the costs of resources used by the health care system in seeking to deal with mistakes already made.
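
For the record, here is the arithmetic behind that total (my calculation, using the value-of-life figure above):

```latex
350{,}000 \ \text{deaths} \times \$9 \ \text{million per death} \approx \$3.15 \ \text{trillion}
```

which squares with Krause's statement that the loss exceeds $3 trillion, or roughly one-sixth of GDP.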

There is considerable public debate over how to make sure all Americans have health insurance. But the issue of the enormous costs of the US health care system doesn't get the same airtime. Sure, there are arguments over how much or why the rate of growth of US health care spending has changed. In the meantime, the US continues to vastly outspend other countries. For example, here's a figure from the OECD showing health care spending as a share of GDP, with the US about 50% higher than any other country and roughly double the OECD average. Based on these data, the US is spending about $8,500 per person per year on health care, while Canada and Germany are spending about $4,400 per person per year, and the United Kingdom and Japan are spending about $3,300 per person per year.

I understand the reasons why high US health care spending doesn't buy better health. But it's a bitter irony indeed that the extremely high levels of US health care spending are actually causing at least tens of thousands, and quite possibly hundreds of thousands, of deaths each year.

Calibrating the Hype about Online Higher Education

"Massive open online courses" (MOOCs) and other aspects of online higher education were white-hot a few years ago, but I'd say that they have cooled off to only red-hot. Two economists who have also been college presidents, Michael S. McPherson and Lawrence S. Bacow, discuss the current state of play and offer some insights in "Online Higher Education: Beyond the Hype Cycle," appearing in the Fall 2015 issue of the Journal of Economic Perspectives. Here are some points that caught my eye.

About one-quarter of higher education students took an online course in 2013, and about one-ninth of higher education students took all of their courses online that year.

"The US Department of Education recently began to conduct its own survey of online education as part of its Integrated Post-Secondary Education Data System (IPEDS), with full coverage of the roughly 4,900 US institutions of higher education. As shown in Table 1, IPEDS data indicates that as of 2013, about 26 percent of all students took at least one course that was entirely online, and about 11 percent received all of their education online."

When it comes to the possibility of education technologies that can operate at large scale with near-zero marginal costs, there's a history of overoptimism. Here's a quick sketch of promises about educational radio, and then educational television.

"Berland (1992), citing a popular commentator named Waldeman Kaempffert writing in 1924, reported that “there were visions of radio producing ‘a super radio orchestra’ and ‘a super radio university’ wherein ‘every home has the potentiality of becoming an extension of Carnegie Hall or Harvard University.’” Craig (2000) reports that “the enthusiasm for radio education during the early days of broadcasting was palpable. Many universities set up broadcast stations as part of their extension programs and in order to provide their engineering and journalism students with experience in radio. By 1925 there were 128 educational stations across the country, mostly run by tertiary institutions” (p. 2831). The enthusiasm didn’t last—by 1931 the number of educational stations was down to 49, most low-powered (p. 2839). This was in part the result of cumbersome regulation, perhaps induced by commercial interests; but the student self-control problem … likely played a role as well. As NBC representative Janice Waller observed, “Even those listeners who clamored for educational programs, Waller found, secretly preferred to listen to comedians such as Jack Benny. These “intellectually dishonest” people “want to appear very highbrow before their friends . . . but down inside, and within the confines of their own homes, they are, frankly, bored if forced to listen to the majority of educational programs” (as quoted in Craig 2000, pp. 2865–66).

"The excitement in the late 1950s about educational television outshone even the earlier enthusiasm for radio. An article by Schwarzwalder (1959, pp. 181–182) has an eerily familiar ring: “Educational Television can extend teaching to thousands, hundreds of thousands and, potentially, even millions. . . . As Professor Siepman wrote some weeks ago in The New York Times, ‘with impressive regularity the results come in. Those taught by television seem to do at least as well as those taught in the conventional way.’ . . . The implications of these facts to a beleaguered democracy desperately in need of more education for more of its people are immense. We shall ignore these implications at our national peril.” Schwarzwalder goes on to claim that any subject, including physics, manual skills, and the arts can be taught by television, and even cites experiments that show “that the discussion technique can be adapted to television.”"

The Internet offers the possibility not just of widespread distribution of educational material, but also of interactive content. But if the content is to be richly interactive--that is, more than just a short multiple-choice quiz inserted into the recorded material--the costs of design and production could be very substantial.

"Richly interactive online instruction is obviously much more expensive than Internet-delivered television. The development costs for Carnegie Mellon’s sophisticated but far from fully computer-adaptive courses in statistics and other fields have been estimated at about $1 million each (Parry 2009). Although future technical developments will reduce the costs of providing a course of a fixed level of quality over time, those future technical developments will also encourage the provision of additional features. Universities can invest in improving the production values of such television programs at the margin in ways that range from multiple camera angles to the incorporation of sophisticated graphics and live location video. Many interactive courses could also conceivably benefit from regular updating based on recent events or scholarship … Our point is that while online courses offer the potential for constant modification and updates, realizing this potential may in fact be expensive, leading to less-frequent updates than for traditionally taught subjects. … Those who foresee the widespread adoption of adaptive learning technology often underestimate the cost of producing it. Stanford President John Hennessey, in a recent lecture to the American Council of Education, estimated that the cost of producing a first-rate highly interactive digital course to be in the millions of dollars (Jaschik 2015). Few individual institutions have the resources to make such investments. Furthermore, while demand may be substantial enough to support such investments for basic introductory courses in fields that easily lend themselves to such instruction, it is unlikely that anyone will invest in the creation of such courses for upper-level courses unless they can be adopted at scale."

There's no guarantee that online tools will reduce the costs of higher education. One possibility is that well-endowed universities use online higher education as a way to drive up costs--since these schools often compete to provide a high-end experience. For example, expensive schools might "flip the classroom" by paying for a rich and interactive online course, and then also hiring enough faculty members (not graduate students!) to staff a large number of discussion and problem-solving sections.

Indeed, there is a real chance that at least in selective higher education, technology will actually be used to raise rather than lower cost. There are obvious ways to use online materials to complement rather than to substitute for in-person instruction. Flipping the classroom, as we will explain further, is one. Instructors can also import highly produced video material—either purchased or homemade— to complement their classes, and there could easily emerge a market in modular lessons aimed at allowing students to extend material farther or to get a second take on a difficult set of concepts. If individual faculty members are authorized to make these choices, and universities agree to subsidize expensive choices, costs seem likely to rise. 

Conversely, schools that are lower-ranked and with fewer financial resources may be pushed to focus on implementing a low-cost and mostly online curriculum. 

Broad-access unselective institutions are already among the largest users of online instruction. These institutions are responsible for the education of many students—at least half of all those enrolled in postsecondary education—and they disproportionately educate lower-income students and students of color. Enabling technological advances to support improvement in the educational success of these institutions at manageable cost is an important goal, arguably the most important goal for using technology to improve American higher education. (Of course, the implications of these technologies for global learning would be potentially gigantic.) There is especially high potential for online education to cater to the large number of nontraditional students, which includes adult learners and those who have a very high opportunity cost of attending college whether at the undergraduate or graduate level. For this group of students, asynchronous online learning can be a godsend. Opportunities surely exist for technology to penetrate this market further, and quality is likely to improve as faculty and others figure out how to take better advantage of new educational technology. As the technology improves and as more institutions adopt it, more of these students are likely to receive all or at least some of their education online. 

Yet this great opportunity is accompanied by considerable risk. It is all too easy to envision legislators who see a chance to cut state-level or national-level spending that supports higher education by imposing cheap and ineffective online instruction on institutions whose students lack the voice and political influence to demand quality. It’s equally easy to imagine for-profit institutions proffering online courses in a way that takes advantage of populations with little experience with college in a marketplace where reliable information is scarce.

(Full disclosure: I've been Managing Editor of the Journal of Economic Perspectives since 1987. All JEP articles from the current issue going back to the first issue are freely available online courtesy of the publisher, the American Economic Association.)

Overconfidence: The Ancient Evil

Here's how Ulrike Malmendier and I started our short introduction to a three-paper symposium in the Fall 2015 issue of the Journal of Economic Perspectives on the subject of overconfidence.

Economists have been concerned about issues of overconfidence at least since Adam Smith (1776, Book I, Chapter X), who wrote in The Wealth of Nations: “The over-weening conceit which the greater part of men have of their own abilities, is an ancient evil remarked by the philosophers and moralists of all ages.” Titans of modern economics have had similar reactions to the “ancient evil.” Daniel Kahneman recently told an interviewer that if he had a magic wand that could eliminate one human bias, he would do away with overconfidence. As Shariatmadari (2015) reports: “Not even he [Kahneman] believes that the various flaws that bedevil decision-making can be successfully corrected. The most damaging of these is overconfidence: the kind of optimism that leads governments to believe that wars are quickly winnable and capital projects will come in on budget despite statistics predicting exactly the opposite.” Kahneman argues that overconfidence “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things.” 

Evidence concerning the prevalence of overconfidence is widespread and robust. Some of the results have even become fairly well-known in popular culture, like the findings that most drivers believe they are safer than a typical driver, or that the unskilled tend to overestimate their abilities. The finding about driver overconfidence stems from a Svenson (1981) study, a lab experiment using undergraduate students as subjects, which found that 83 percent of American subjects believed that they were in the top 30 percent in terms of driving safety. The finding about overestimation of ability comes from a Kruger and Dunning (1999) study, which reports: “[P]articipants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd.” Overconfidence on both sides of a conflict may be linked to the willingness to fight a war (Wrangham 1999; Johnson 2004) or to the conditions that lead a strike to occur (Neale and Bazerman 1985). 

The three papers in the symposium ask about the economics of overconfident consumers, overconfident CEOs, and overconfident investors.

Michael Grubb points out in "Overconfident Consumers in the Marketplace" that if firms know that a substantial share of consumers are overconfident, they will set prices and contract terms accordingly. For example, consumers may be overconfident about their future actions. Sure, they will send in that mail-in rebate. Sure, they won't have a car accident, so choosing car insurance with a REALLY high deductible makes sense. Sure, they will subscribe to something and put it on "auto-renew," because they will remember to cancel it when ready. Grubb points out that in some of these situations, competition by firms to take advantage of the overconfident can mean great deals for those who are not overconfident. Conversely, steps by government to protect the overconfident from themselves can, in some cases, lead to higher costs for others.

Ulrike Malmendier and Geoffrey Tate discuss "Behavioral CEOs: The Role of Managerial Overconfidence." They suggest a number of ways of measuring the overconfidence of CEOs. One example is based on the insight that CEOs should want to cash in their stock options when they have a chance for a good gain, because it gives them a chance to diversify their wealth by investing it somewhere else. However, an overconfident CEO will hold stock options right up to the expiration date before cashing them in, out of a belief that the company is doing better than the market recognizes and that for this reason the stock options will keep rising in value. Using measures of when CEOs cash in their stock options, along with estimates of how much they would receive for cashing in their options and various back-of-the-envelope estimates for risk and diversification, they find that about 40% of CEOs qualify as overconfident. It turns out that those CEOs are more likely to use the company's cash, or borrowed money, to make big investments and acquisitions. They also point out that some companies facing the need for a tough transition might prefer an overconfident CEO to manage the transition. Indeed, a company can offer an overconfident CEO less in stock options, because the overconfident CEO will tend to believe that those options will be worth more.
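
For readers who like to see the logic written down, here is a toy sketch of the kind of option-holding classification rule described above. It is purely illustrative: the function name, data fields, and cutoff values are my own hypothetical choices, not Malmendier and Tate's actual measure, data, or thresholds.

```python
# Illustrative sketch only: a toy version of the option-holding rule described
# in the text, not Malmendier and Tate's actual procedure. All thresholds here
# are hypothetical.
from dataclasses import dataclass

@dataclass
class OptionGrant:
    years_to_expiration: int  # years remaining when the CEO finally exercised
    pct_in_the_money: float   # gain relative to the exercise price (0.7 = 70% in the money)

def looks_overconfident(grants: list[OptionGrant],
                        late_exercise_years: int = 1,
                        deep_in_money: float = 0.5) -> bool:
    """Flag a CEO who held options that were already deep in the money until
    just before expiration, instead of exercising earlier and diversifying."""
    return any(
        g.years_to_expiration <= late_exercise_years and g.pct_in_the_money >= deep_in_money
        for g in grants
    )

# Example: a CEO who waited until the final year to exercise a 70%-in-the-money grant
print(looks_overconfident([OptionGrant(years_to_expiration=1, pct_in_the_money=0.70)]))  # True
```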

Kent Daniel and David Hirshleifer study "Overconfident Investors, Predictable Returns, and Excessive Trading." They argue that the volume of trading in financial markets is very high, and that there are a number of ways, well-known for decades, in which stock market returns are somewhat predictable. They suggest that overconfidence among investors helps to explain the eagerness to trade, and the willingness to believe that you can choose a financial adviser who will make you rich. They also suggest links between overconfidence and predictable patterns in stock market pricing related to momentum in stock prices, firms with a low ratio of book-to-market value, small firms outperforming larger firms, and others. As they point out, overconfidence is reinforced by "self-attribution" bias, the common mental posture that whatever goes well was due to your own skill, while whatever goes poorly was due to bad luck.

Overconfidence shouldn't be treated as a giant all-purpose explanation for less-than-rational behavior by consumers, CEOs, and investors. There are lots of ways, in different situations and contexts, in which people make less-than-rational decisions. But overconfidence plays a central role in the persistence of lots of less-than-rational behavior. If people were actually rational, most of us (myself very much included) would be pretty humble about our abilities to assess situations, draw inferences, and make decisions. Overconfidence explains why most of us, despite being drenched in the cold rain of reality on a regular basis, still manage to believe so strongly in our own points of view and our own ways of making decisions.

And one can indeed argue, with Adam Smith and Daniel Kahneman, that a world in which overconfidence was substantially diminished would be a more productive and pleasant place.

Journal of Economic Perspectives, Fall 2015 issue, Available Online

Since 1986, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which about four years ago made the decision--much to my delight--that the journal would be freely available online, from the current issue back to the first issue in 1987. The journal's website is here. I'll start with the Table of Contents for the just-released Fall 2015 issue. Below are abstracts and direct links for all of the papers. I will probably blog about some of the individual papers in the next week or two, as well.

Here are abstracts and links to the articles.

Symposium on Overconfidence

"On the Verges of Overconfidence," by Ulrike Malmendier and Timothy Taylor
This symposium provides several examples of overconfidence in certain economic contexts. Michael Grubb looks at "Overconfident Consumers in the Marketplace." Ulrike Malmendier and Geoffrey Tate consider "Behavioral CEOs: The Role of Managerial Overconfidence." Kent Daniel and David Hirshleifer discuss "Overconfident Investors, Predictable Returns, and Excessive Trading." A number of insights and lessons emerge for our understanding of markets, public policy, and welfare. How do firms take advantage of consumer overconfidence? Might government attempts to rule out such practices end up providing benefits to some consumers but imposing costs on others? How are empirical measures of CEO overconfidence related to investment and the capital structure of firms? Can overconfidence among at least some investors help to explain prominent anomalies in stock markets like high levels of trading volume and certain predictable patterns in stock market returns?
Full-Text Access | Supplementary Materials

"Overconfident Consumers in the Marketplace," by Michael D. Grubb
The term overconfidence is used broadly in the psychology literature, referring to both overoptimism and overprecision. Overoptimistic individuals overestimate their own abilities or prospects. In contrast, overprecise individuals place overly narrow confidence intervals around forecasts, thereby underestimating uncertainty. These biases can lead consumers to misforecast their future product usage, or to overestimate their abilities to navigate contract terms. In consequence, consumer overconfidence causes consumers to systematically misweight different dimensions of product quality and price. Poor choices based on biased estimates of a product's expected costs or benefits are the result. For instance, overoptimism about self-control is a leading explanation for why individuals overpay for gym memberships that they underutilize. Similarly, overprecision is a leading explanation for why individuals systematically choose the wrong calling plans, racking up large overage charges for exceeding usage allowances in the process. Beyond these market effects of overconfidence, this paper addresses three additional questions: What will firms do to exploit consumer overconfidence? What are the equilibrium welfare consequences of consumer overconfidence for consumers, firms, and society? And what are the implications of consumer overconfidence for public policy?
Full-Text Access | Supplementary Materials

"Behavioral CEOs: The Role of Managerial Overconfidence," by Ulrike Malmendier and Geoffrey Tate
In this paper, we provide a theoretical and empirical framework that allows us to synthesize and assess the burgeoning literature on CEO overconfidence. We also provide novel empirical evidence that overconfidence matters for corporate investment decisions in a framework that explicitly addresses the endogeneity of firms' financing constraints.
Full-Text Access | Supplementary Materials

"Overconfident Investors, Predictable Returns, and Excessive Trading," by Kent Daniel and David Hirshleifer
The last several decades have witnessed a shift away from a fully rational paradigm of financial markets toward one in which investor behavior is influenced by psychological biases. Two principal factors have contributed to this evolution: a body of evidence showing how psychological bias affects the behavior of economic actors; and an accumulation of evidence that is hard to reconcile with fully rational models of security market trading volumes and returns. In particular, asset markets exhibit trading volumes that are high, with individuals and asset managers trading aggressively, even when such trading results in high risk and low net returns. Moreover, asset prices display patterns of predictability that are difficult to reconcile with rational expectations-based theories of price formation. In this paper, we discuss the role of overconfidence as an explanation for these patterns.
Full-Text Access | Supplementary Materials

Symposium on the Future of Retail

"The Ongoing Evolution of US Retail: A Format Tug-of-War," by Ali Hortaçsu and Chad Syverson
The past 15-20 years have seen substantial and visible changes in the way US retail business is conducted. Explanations about what is happening in the retail sector have been dominated by two powerful and not fully consistent narratives: a prediction that retail sales will migrate online and physical retail will be virtually extinguished, and a prediction that future shoppers will almost all be heading to giant physical stores like warehouse clubs and supercenters. Although online retail will surely continue to be a force shaping the sector going forward and may yet emerge as the dominant mode of commerce in the retail sector in the United States, its time for supremacy has not yet arrived. We discuss evidence indicating that the warehouse clubs/supercenter format has had a greater effect on the shape of retail over the past 15-20 years. We begin with an overview of the retail sector as a whole, which over the long term has been shrinking as a share of total US economic activity and in terms of relative employment share. The retail sector has experienced stronger-than-average productivity growth, but this has not been accompanied by commensurate wage growth. After discussing the important e-commerce and warehouse clubs/supercenters segments, we look more broadly at changes across the structure of the retail sector, including scale, concentration, dynamism, and degree of urbanization. Finally, we consider the likely future course of the retail sector.
Full-Text Access | Supplementary Materials

"Adolescence and the Path to Maturity in Global Retail," by Bart J. Bronnenberg and Paul B. Ellickson
We argue that, over the past several decades, the adoption and diffusion of "modern retailing technology" represents a substantial advance in productivity, providing greater product variety, enhanced convenience, and lower prices. We first describe modern retailing, highlighting the role of modern formats, scale (often transcending national boundaries), and increased coordination with upstream and downstream partners in production and distribution. In developed markets, the transition to modern retailing is nearly complete. In contrast, many low-income and emerging markets continue to rely on traditional retail formats, that is, a collection of independent stores and open air markets supplied by small-scale wholesalers, although modern retail has begun to spread to these markets as well. E-commerce is a notable exception: the penetration of e-commerce in China and several developing nations in Asia has already surpassed that of high-income countries for some types of consumer goods. To understand the forces governing the adoption of modern technology and the unique role of e-commerce, we propose a framework that emphasizes the importance of scale and coordination in facilitating the transition from traditional to modern retailing. We conclude with some conjectures regarding the likely impact of increased retail modernization for the developing world.
Full-Text Access | Supplementary Materials

Symposium on Online Higher Education

"Online Higher Education: Beyond the Hype Cycle," by Michael S. McPherson and Lawrence S. Bacow
When two Silicon Valley start-ups, Coursera and Udacity, embarked in 2012 on a bold effort to supply college-level courses for free over the Internet to learners worldwide, the notion of the Massively Open Online Course (MOOC) captured the nation's attention. Although MOOCs are an interesting experiment with a role to play in the future of higher education, they are a surprisingly small part of the online higher education scene. We believe that online education, at least online education that begins to take full advantage of the interactivity offered by the web, is still in its infancy. We begin by sketching out the several faces of online learning—asynchronous, partially asynchronous, the flipped classroom, and others—as well as how the use of online education differs across the spectrum of higher education. We consider how the growth of online education will affect cost and convenience, student learning, and the role of faculty and administrators. We argue that the spread of online education through higher education is likely to be slower than many commenters expect. We hope that online education will bring substantial benefits. But less-attractive outcomes are also possible if, for instance, legislators use the existence of online education as an excuse for sharp cuts in higher education budgets that lead to lower-quality education for many students, at the same time that richer, more selective schools are using online education as one more weapon in the arms race dynamic that is driving costs higher.
Full-Text Access | Supplementary Materials

"How Economics Faculty Can Survive (and Perhaps Thrive) in a Brave New Online World," by Peter Navarro
The academy in which we toil is moving rapidly towards a greater role for online delivery of higher education, and both fans and skeptics offer strong reasons to believe this technological shock will have substantial disruptive effects on faculty. How can we as economic educators continue to provide sufficient value-added to justify our role in a world where much of what we now do is effectively being automated and commoditized? In this brave new online world, many successful and resilient faculty will add value (and differentiate their product) not by producing costly and elaborate multimedia lectures in which they become a superstar professor-celebrity, but rather through careful, clever, and innovative choices regarding both the adoption of the online content of other providers and the forms of online interactions they integrate into their course designs. Possible forms of faculty-to-student and student-to-student interactions run the digital gamut from discussion boards and electronic testing to peer assessments, games and simulations, and virtual office hours. This article explores basic descriptive and prescriptive questions economic educators and their administrators are likely to face as the online education tide rises. For example, how much does it cost to develop online content and how much time does it take? What are the key "ingredients" for a pedagogically sound online course? Throughout, I will draw on both the extant literature as well as my own experience at the University of California, Irvine, where the online evolution is advancing rapidly. [This article is available for download in audio (MP3) format from the journal website.]
Full-Text Access | Supplementary Materials

Articles

"Rewriting Monetary Policy 101: What's the Fed's Preferred Post-Crisis Approach to Raising Interest Rates?" by Jane E. Ihrig, Ellen E. Meade and Gretchen C. Weinbach
For many years prior to the global financial crisis, the Federal Open Market Committee set a target for the federal funds rate and achieved that target through small purchases and sales of securities in the open market. In the aftermath of the financial crisis, with a superabundant level of reserve balances in the banking system having been created as a result of the Federal Reserve's large-scale asset purchase programs, this approach to implementing monetary policy will no longer work. This paper provides a primer on the Fed's implementation of monetary policy. We use the standard textbook model to illustrate why the approach used by the Federal Reserve before the financial crisis to keep the federal funds rate near the Federal Open Market Committee's target will not work in current circumstances, and explain the approach that the Committee intends to use instead when it decides to begin raising short-term interest rates.
Full-Text Access | Supplementary Materials

"Household Surveys in Crisis," by Bruce D. Meyer, Wallace K. C. Mok and James X. Sullivan
Household surveys, one of the main innovations in social science research of the last century, are threatened by declining accuracy due to reduced cooperation of respondents. While many indicators of survey quality have steadily declined in recent decades, the literature has largely emphasized rising nonresponse rates rather than other potentially more important dimensions to the problem. We divide the problem into rising rates of nonresponse, imputation, and measurement error, documenting the rise in each of these threats to survey quality over the past three decades. A fundamental problem in assessing biases due to these problems in surveys is the lack of a benchmark or measure of truth, leading us to focus on the accuracy of the reporting of government transfers. We provide evidence from aggregate measures of transfer reporting as well as linked microdata. We discuss the relative importance of misreporting of program receipt and conditional amounts of benefits received, as well as some of the conjectured reasons for declining cooperation and for survey errors. We end by discussing ways to reduce the impact of the problem including the increased use of administrative data and the possibilities for combining administrative and survey data.
Full-Text Access | Supplementary Materials

"Seven Centuries of European Economic Growth and Decline," by Roger Fouquet and Stephen Broadberry
This paper investigates very long-run preindustrial economic development. New annual GDP per capita data for six European countries over the last seven hundred years paint a clearer picture of the history of European economic development. We confirm that sustained growth has been a recent phenomenon, but reject the argument that there was no long-run growth in living standards before the Industrial Revolution. Instead, the evidence demonstrates the existence of numerous periods of economic growth before the nineteenth century—periods of unsustained, but raising GDP per capita. We also show that many of the economies experienced substantial economic decline. Thus, rather than being stagnant, pre-nineteenth century European economies experienced a great deal of change. Finally, we offer some evidence that, from the nineteenth century, these economies increased the likelihood of being in a phase of economic growth and reduced the risk of being in a phase of economic decline.

Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

The Window Tax: A Tale of Excess Burden

For economists, the "excess burden" of a tax refers to the idea that the cost of a tax isn't just the amount of money collected--it's also the ways in which taxpayers alter their behavior because the tax has changed their incentives. A moderately well-known classroom and textbook example is the "window tax," first imposed in England in 1696 by King William III, and not definitively repealed until 1851. The excess burden of the window tax was that lower-income people ended up living in rooms with few or no windows.

Wallace E. Oates and Robert M. Schwab review the history of the window tax and provide actual estimates of how it affected the number of windows per house in their article, "The Window Tax: A Case Study in Excess Burden," which appeared in the Winter 2015 issue of the Journal of Economic Perspectives (where I have toiled in the fields as Managing Editor since 1987). The article popped back into my mind earlier this week when I learned that Oates, a highly distinguished economist based at the University of Maryland since 1979, died last week. One of Oates's specialties was local public finance, and his 1972 book Fiscal Federalism is a classic of that subfield.

Here are some facts about the historical window tax, courtesy of Oates and Schwab.

  • William III intended it as a temporary tax, just to help out with the overhang of costs from the Glorious Revolution of 1688 and the most recent war with France. But it ended up lasting 150 years. 
  • "An important feature of the tax was that it was levied on the occupant, not the owner of the dwelling. Thus, the renter, not the landlord, paid the tax. However, large tenement buildings in the cities, each with several apartments, were an exception. They were charged as single residences with the tax liability resting on the landlord. This led to especially wretched conditions for the poor in the cities, as landlords blocked up windows and constructed tenements without adequate light and ventilation …"
  • The window tax was thought of as an improvement on the "hearth tax" that Charles II had imposed in 1662. "The tax was very unpopular in part because of the intrusive character of the assessment process. The 'chimney-men' (as the assessors and tax collectors were called) had to enter the house to count the number of hearths and stoves, and there was great resentment against this invasion of the sanctity of the home. The window tax, in contrast, did not require access to the interior of the dwelling: the “window peepers” could count windows from the outside, thus simplifying the assessment procedure and obviating the need for an invasion of the interior."
  • The window tax was intended as a visible measure of ability to pay: that is, a high-income person would live in a place with more windows than a low-income person. But at the time, it was widely recognized that windows were a very imperfect proxy for wealth. Adam Smith wrote about this problem with the window tax in 1776 in The Wealth of Nations: “A house of ten pounds rent in the country may have more windows than a house of five hundred pounds rent in London; and though the inhabitant of the former is likely to be a much poorer man than that of the latter, yet so far as his contribution is regulated by the window-tax, he must contribute more to the support of the state.”
  • When the rates on the window tax went up, it was common for owners of homes and apartments to block or build over many or all of their windows. The results for human well-being were severe. "A series of studies by physicians and others found that the unsanitary conditions resulting from the lack of proper ventilation and fresh air encouraged the propagation of numerous diseases such as dysentery, gangrene, and typhus. … A series of petitions to Parliament resulted in the designation of commissioners and committees to study the problems of the window tax in the first half of the 19th century. In 1846, medical officers petitioned Parliament for the abolition of the window tax, pronouncing it to be 'most injurious to the health, welfare, property, and industry of the poor, and of the community at large.'"
  • Here's Charles Dickens writing in 1850 about the window tax in Household Words, a magazine that he published for a number of years: “The adage ‘free as air’ has become obsolete by Act of Parliament. Neither air nor light have been free since the imposition of the window-tax. We are obliged to pay for what nature lavishly supplies to all, at so much per window per year; and the poor who cannot afford the expense are stinted in two of the most urgent necessities of life.”

Oates and Schwab combine data on the number of windows in a sample of houses in Shropshire with economic theory about household behavior when confronted with taxes to generate an admittedly rough estimate: on average, collecting a certain amount of money through the window tax created an excess burden--in terms of the costs of living in a place with fewer windows--equal to an additional 62% of the value of the tax.
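
To make that estimate concrete, here is what a 62% excess burden implies in round numbers (an illustrative calculation of mine, not figures from the paper):

```latex
\text{tax revenue collected} = \pounds 100
\;\Rightarrow\;
\text{additional welfare loss from foregone windows} \approx 0.62 \times \pounds 100 = \pounds 62
```

so the full burden on window-tax payers comes to roughly £162 for every £100 that actually reached the government.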

Oates and Schwab ask why the window tax lasted so long, given its many problems, and offer an appropriately cynical answer: "Perhaps the lesson here is that when governments need to raise significant revenue, even a very bad tax can survive for a very long time."

I didn't know Oates personally, but I had one other job-related interaction with him. Along with his work in local public finance, Oates was also well-known as an environmental economist. His 1975 book, The Theory of Environmental Policy (written with William Baumol), was highly influential in setting the direction of what was at the time a fairly new and growing field. In 1995, Oates was a co-author in one of the most downloaded and cited exchanges the JEP has ever published, on the subject of what is sometimes called the "Porter hypothesis."

Michael Porter made the argument--bolstered by a large number of case studies--that when environmental goals are set in a strict way, but firms are allowed flexibility in how to achieve those goals in the context of a competitive market environment, firms often become quite innovative in meeting those environmental goals. Indeed, Porter argued that in a substantial number of cases, the innovations induced by the tough new environmental rules save enough money so that the rules end up imposing no economic costs at all. In the Fall 1995 Journal of Economic Perspectives, Michael E. Porter and Claas van der Linde make their case in "Toward a New Conception of the Environment-Competitiveness Relationship" (9:4, pp. 97-118). The authorial team of Karen Palmer, Wallace E. Oates, and Paul R. Portney responds in "Tightening Environmental Standards: The Benefit-Cost or the No-Cost Paradigm?" (9:4, pp. 119-132). Oates and his co-authors took the position that while the costs of complying with environmental regulations do often turn out to be lower than the industry predictions made when a rule was under discussion, it goes too far to say that environmental rules usually or generally don't impose costs. I wrote about some more recent evidence on this dispute in "Environmental Protection and Productivity Growth: Seeking the Tradeoff" (January 8, 2015).

Land Use: Regulations vs. Negotiations

Here’s a thought experiment. Imagine two cities. In one city, plots of land are rectangular and divided up in a grid. In the other city, plots of land are defined by what is called the “metes and bounds” system, in which the property line is described by a series of boundaries that could include a stream, a large rock, a tree, a road, a wall, an existing building–and then with straight lines (“metes”) drawn between these boundaries. In which city would you expect land values to be higher, and development to proceed faster, and why?

The answer to this question, along with a number of other intriguing insights about urban planning, emerges from an article by Roderick M. Hills Jr. and David Schleicher, “Can `Planning’ Deregulate Land Use?” which appears in Regulation magazine (Fall 2015, pp. 36-41). Here’s their answer:

Dean Lueck (University of Arizona) and Gary Libecap (University of California, Santa Barbara) have confirmed empirically that the existence of easily ascertained boundaries can substantially increase property values. Comparing “metes and bounds” lots (the boundaries of which are defined by irregular and individualized lines often following natural boundaries like hills, trees, and streams) with “rectangles and squares” (a system that originally allocated properties in standardized rectangular and square lots), Lueck and Libecap determined that “rectangles and squares” demarcation results in property values that are roughly 30 percent higher than “metes and bounds”—an effect persisting 200 years after the original demarcation. Lots defined by rectangles and squares attracted more population, urbanized more quickly, and—despite the property owners’ power to customize their lots after the initial demarcation—retained their geometrical boundaries long after they were initially demarcated. 

Manhattan’s street grid suggests a similar effect of simple, uniform, and rectangular lot lines. In 1811, when New York was a city of just under 100,000 people residing almost entirely south of Houston Street, the state legislature authorized three street commissioners to create a uniform street grid covering almost all of Manhattan Island. With few deviations, the result was a uniform grid of numbered streets and avenues that, according to the commissioners, would lower the cost of construction because “strait-sided and right-angled houses” were cheaper and easier to build. The grid had another advantage: rectangular lots made property rights easy to ascertain, reducing property disputes between neighbors and making it easier for outsiders to buy, sell, and improve real estate. Trevor O’Grady, a post-doctoral fellow at Harvard University, has confirmed that properties on gridded blocks are both more valuable and more densely developed than properties with irregular boundaries in non-gridded areas.

This finding may seem unsurprising, but it has a provocative lesson for current urban planning. In many urban areas, exactly what you are allowed to build on a plot of land is not altogether clear, and it can often be more about political negotiations than about concrete and drywall. Want to build an apartment building or a condo complex? Prepare for a negotiation over height, parking spaces, the design of street-level entrances, and the inclusion of units designated as “affordable.” Want to build an office building? City Hall will probably only give you a building permit after a negotiation over many features of the building, potentially including height, whether mass transit or public space is included in the design, parking spaces, the mix of retail and office space, and a number of other features.

In short, the two-dimensional square of land owned by a property developer may be well-defined by right angles. But the property rights concerning the three-dimensional box of what can be built on that plot of land are often not well-defined. Instead, a system of metaphorical “metes and bounds” includes many requirements that are negotiated anew during any given project.

As with everything else in economics, there are competing theories and a tradeoff at work here. On one side, there’s an argument that bargaining over each individual project–typically in a process that pits the would-be developer against those already in the neighborhood–helps to assure that new projects are blended into the existing neighborhood and fit with a broader public vision. Hills and Schleicher admit some truth in this argument, but also offer two strong counterarguments. In the context of thinking about requirements for affordable housing, they write:

In two important respects, this bargaining process defeats the very goal of land-use deregulation necessary for a lasting solution to the housing affordability crisis. First, individualized bargains increase the chance that NIMBY neighbors will hijack local land to exclude housing because the individualized bargaining process provides no way for politicians representing different parts of the city to strike bargains allocating locally unwanted land uses across neighborhoods. … Local legislatures tend to be too disorganized to enforce complex deals for allocating locally undesirable land uses (LULUs) across legislators’ districts. The result is excessive zoning restrictions created by a universal coalition against LULUs: each member votes for every other member’s proposals to keep LULUs out in expectation that every other member will do the same…. 

Second, individualized bargaining raises the costs of knowing what development rights one buys with the purchase of a lot. Those costs drive away real estate investors, depriving the city of capital sorely needed to remedy a desperate housing shortage. …  Comprehensive plans also promote transparency and marketability for use rights in much the same way that a simple grid system promotes a market in possession rights. By making it simple for outsiders to see what they are buying when they purchase a parcel, such plans attract more buyers, thereby increasing investment in housing. In these two respects, centralized and rigid planning can actually be a libertarian reform, while ostensibly more flexible bargaining can lead to the strangulation of a city’s housing supply.

This insight is what leads to their seemingly paradoxical title: “Can `Planning’ Deregulate Land Use?” General plans that specify what you can do with your land “as-of-right”–that is, without anyone being able to block you–may be a useful step toward avoiding regulation-by-negotiation.

Their lesson is broader than just land use planning. Many government policies can be implemented through general rules, or through a series of negotiations and exceptions and discretionary decisions. In specific situations, one can often make the case that the general rule doesn’t quite work, at least not this time, and so tweaking and bending and amending the rule makes sense just this once. But if every decision comes down to a negotiation, then the general rule itself lacks force. Moreover, the outcome of real-world negotiations won’t be decided by an All-Knowing and Benevolent Social Planner. Actual negotiations will be a messy business of who can hire the most high-powered lawyers, spend the most money on publicity, turn out the neighborhood crowds, and cut deals with local politicians and well-placed decision-makers for their support. As Hills and Schleicher conclude:

Paradoxically, sometimes rigidity and centralization of a general framework for buying and selling land is more market-friendly than the bargaining free-for-all that keeps everyone guessing—and paying lobbyists to improve the odds of their guesses. 

Snapshots of Well-Being Across High-Income Countries

Well-being is a multidimensional concept, for countries as well as for individuals. Thus, the recent OECD report “How’s Life 2015: Measuring Well-Being” goes well beyond basic one-dimensional measures like per capita GDP and offers comparisons across a range of measures of well-being for the countries that are members of the OECD.

The OECD membership includes 34 countries: the US and Canada, many countries across western and eastern Europe, Japan and Australia, and a few other additions like Korea, Israel, Mexico, and Chile. Thus, it would be fair to say that the OECD includes mostly the high-income countries of the world. The OECD is a combination think-tank and forum: it collects a wide range of data and publishes reports, but it has no power beyond its own reputation for accuracy and even-handedness.

In the calculations shown here, each measure of well-being is expressed in “standard deviations,” which, in case your statistics is a little rusty, measure how far a value is from the average. If you plot each of these statistics for the 34 countries of the OECD, most of the countries will tend to bunch fairly near the average value, while a few countries will be higher or lower. The OECD notes that for these statistics, about two-thirds of the 34 countries will fall between -1 and +1 standard deviations from the mean value. Thus, about five countries will be above +1 and another five will be below -1.
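
To make the mechanics concrete, here is a minimal sketch in Python of how such standardized scores are computed. This is not the OECD’s own code, and the 34 country values are synthetic, generated only for illustration:

    # Minimal sketch, not the OECD's code: converting a well-being indicator
    # into standard-deviation units ("z-scores"), as in the report's charts.
    # The 34 country values are synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    values = rng.normal(loc=50, scale=10, size=34)  # hypothetical indicator, one value per OECD country

    z_scores = (values - values.mean()) / values.std()  # distance from the OECD average, in standard deviations

    share_within_one = np.mean(np.abs(z_scores) <= 1)   # share of countries between -1 and +1
    print(f"Share of countries within one standard deviation: {share_within_one:.0%}")

With roughly bell-shaped data, the printed share comes out near the two-thirds mark the OECD describes.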

So when you read these charts for various countries, they are basically telling you whether a country is pretty close to the average for the high-income countries of the world, or whether it really stands out from the average.

Here’s what the US looks like in this framework. For household income and financial wealth, the US is an extreme outlier in comparison to the OECD peer group, with values above +2 standard deviations. The US also stands out positively in earnings, rooms per person, housing affordability, educational attainment, and perceived health. However, the US stands out in a negative way in categories like time off, adult skills, and deaths due to assault.

How do these configurations appear for some other prominent OECD economies? In some ways, the US stands out as a country of fairly extreme values, both positive and negative. In contrast, Germany stands out relative to the other OECD countries on household income (although clearly well below the US level), as well as on employment, job security, and water quality. Germany has a number of other above-average categories, and no categories that are dramatically below the average.

What about Japan? Japan is high on financial wealth but about average on income–on those two categories, the opposite of Germany’s pattern. Japan is especially high in job security, life expectancy, cognitive skills at 15, and adult skills. Japan performs poorly in perceived health (which is quirky, given the life expectancy numbers), as well as in basic sanitation, voter turnout, and life satisfaction.

The chart for France made me smile. Overall, France is even closer to the OECD average on most of these measures than Germany is. France stands out a bit on household income, voter turnout, and life expectancy. But by these measures, the most striking difference for France is the large amount of time off!

The standard of living in Scandinavian countries like Sweden and Denmark has been getting some attention lately in US political conversations. Here’s the snapshot for Sweden. Again, Sweden is closer to the average for the 34 OECD countries than the US is, but it especially stands out in employment, perceived health, working hours, and water and air quality. It’s interesting to me that while the US is roughly average on cognitive skills at age 15 but substantially below average on adult skills, Sweden is a little behind on cognitive skills at age 15 and substantially ahead on adult skills. This pattern suggests that Sweden knows something about helping young adults acquire skills as they move into the workforce. It’s also interesting that Sweden is a little below average on job security, a little above average on long-term unemployment, and average on time off.

Finally, if you want a picture of a society in trouble, consider the graph for Greece. Compared to the OECD average, Greece lags dramatically in employment, earnings, job security, long-term unemployment, rooms per person, housing affordability, cognitive skills at 15, social support, water quality, and life satisfaction.

The report offers graphs like this for all 34 of the OECD countries, along with considerably more discussion of how each variable is measured and how it compares across countries.