Adam Smith as a Practical Development Economist

For modern economists, Adam Smith's classic The Wealth of Nations is a source of memorable concepts: the division of labor, the "invisible hand," relying on the self-interest of the butcher and baker to provide us with products we want, the insight that people of the same trade rarely get together without trying to come up with some contrivance for raising prices, the four canons of taxation, and many more.
But those actually reading the book, rather than cherry-picking snippets and quotations, will find that it is not just a book of theories and hypothetical examples. Instead, it is chock-full of historical and national episodes and arguments based on the evidence of the time. William Easterly takes on the task of rethinking Smith in this way in his essay, "Progress by consent: Adam Smith as development economist" (Review of Austrian Economics, published online September 9, 2019). As Easterly writes:

There is a curious notion in development economics that the field emerged out of nowhere right after World War II. I used to share that view … Apparently we believe that economists for decades and centuries had remarkably little curiosity about the dramatic development differences evident around them.

It took me embarrassingly long to acknowledge some obvious ancient history of development thinking, and some other development economists are apparently taking even longer. As long ago as 1776, Adam Smith wrote a book called the Wealth of Nations. It turns out, after many hours of careful reading, that the book is indeed about the Wealth of Nations.

Far from ignoring the wider world, Smith cited 164 different historical or contemporary
place names or names of ethnic groups. … The omissions … are rare and reflect information availability. Only Australia and New Zealand are left out altogether. Specific place names in Africa are limited to some places on the coast, but there are very important discussions of the African continent as a whole. The rest of the world is well covered … Smith has abundant coverage of future Third World places such as Peru, Mexico, Chile, Egypt, India, Africa, Central Asia, and China. Smith’s First World success stories are England, lowland Scotland, British North America, and Holland. The future Second
World is also covered in discussions of Russia and Eastern Europe. …

Smith used his widespread examples to test his preferred hypothesis to explain development. The hypothesis is not a surprise – development happens based on free trade and free markets, making possible the division of labor and gains from specialization. Many of his examples use natural variation in trade based on access to waterways, or proximity to prosperous towns or rich neighbors. So for example, England, Scotland, and Holland benefit from access to waterways, towns, and rich neighbors. Inland Africa suffers from the lack of all three. The Incas and the Aztecs had not enough trade for a
different reason – they lacked money as a means of exchange. China and India were intermediate development examples because they had large domestic markets and good interior water transport, but had refused to participate in international trade. Free institutions and moral norms that support individual choice and trade also matter. …

Adam Smith (and economists in general) is sometimes caricatured as believing that "greed is good." And of course, Smith is famous in part for emphasizing the power of self-interest to produce beneficial social outcomes, as if led by an invisible hand. But as Easterly emphasizes, Smith's vision of self-interest involved all parties having the power and freedom to make choices. Thus, Smith was a notable opponent of colonialism and slavery in his time, because those under colonial rule or enslaved were denied the power to make their own choices. Easterly explains:
One such idea that was widespread and influential for a couple centuries in Western intellectual history is that less developed people were unfit to have the same rights as more developed people. Underdevelopment equaled innate inferiority, which implied your inability to make wise choices for yourself. Advanced development equaled innate superiority, which included the ability to direct development for the inferior people. These ideas opened the door for the more developed, allegedly superior people to make choices for the less developed people. Europeans had the right to seize lands of American Indians because Europeans would make the wise choices that would develop the lands more. Slave-owners had the right to dictate to slaves unfit to make choices for themselves. Colonizers had the right to dictate to whole countries unfit to make choices for themselves.
 Many thinkers linked the idea that underdevelopment reflects innate inferiority to the right of the more developed to coerce the less developed. Adam Smith is notable in this debate because he argued more universally for individuals’ right to choose for themselves, and emphasized how these choices would serve both their own interests and those of society and the world as a whole. Smith shows how recognizing the right to choose and consent is essential for development to really be beneficial for all. The gains from trade can only occur if one party does not coerce the other, an idea that led Smith to fiercely criticize European conquest and colonization of non-Europeans. …
Slaves did not consent to have their production seized by their owner. Slaves are also not free to move to higher wage employments. Slaves have no incentive to work hard, and so lands worked by slaves "produce as little as possible." Likewise, slaves have no incentive to make laborsaving innovations on the job; these "have been the discoveries of freemen." (He confirms this empirically with a comparison of high-productivity Hungarian mines using free labor with low-productivity Turkish mines using slave labor in the same neighborhood.) … Smith's view here on African slavery was sufficiently notorious that Virginian pamphleteer Arthur Lee in 1764 complained Smith had "debased" the American slaveowning colonists into "monsters."
In other words, Smith is not arguing that self-interest which takes from others is economically (or socially) beneficial. Instead, he is pointing out the benefits that arise when the self-interest of buyers and sellers interacts in a shared moral context where all parties are able to make independent individual choices. Of course, this argument needs to be spelled out in detail in many ways, but that's why Smith wrote his book and Easterly wrote his essay.
To modern ears, Smith's formulation may seem obvious (even to those who disagree and see it as naive or misleading). But as Easterly points out, in his own time, Smith's view was distinctive:

Despite what seems like a natural synthesis in Smith of a pro-market argument with an
anti-colonial and anti-racist one, it is sad that this combination would rarely occur again
among economists or other intellectuals in the centuries and decades after Smith before
the modern era.

The Productivity Race: US vs. Germany vs. Japan

Over the long run of decades, essentially all of the gains in standard of living are due to higher levels of productivity. On average and over time, what the people of a society produce is going to be closely linked to what they can consume. In addition, the many and manifest problems of society are much easier to address in the context of an economy with rising productivity and economic growth, because an economy with flat productivity and zero growth is a zero-sum game, where helping one group always means imposing costs on others.
Martin Neil Baily, Barry P. Bosworth, and Siddhi Doshi search for "Lessons from Productivity Comparisons of Germany, Japan, and the United States" (International Productivity Monitor, Spring 2020, pp. 81-103). These are the three biggest high-income developed economies, and three of the four biggest economies in the world (China, of course, is the other). However, they differ in their experiences in recent decades, as well as in their institutions and cross-industry patterns.
As a starting point, here's a quick-and-basic measure of productivity: GDP per hour worked. Starting back in 1970, the US economy was way ahead on this measure of productivity: "Germany's aggregate productivity level was 0.72 relative to the United States in 1970, and Japan's aggregate productivity level was 0.40 relative to the US level in 1970." But Germany and Japan had more rapid productivity growth than the US in the 1980s and early 1990s, and Germany in fact caught up to US levels. However, since about 1995, the US has reasserted its lead with faster productivity growth than Germany and Japan.
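As a back-of-envelope illustration (my own arithmetic, not a calculation from the paper), one can ask what average annual growth advantage over the US those 1970 starting points would imply for a country that closed the gap by about 1995:

```python
# Back-of-envelope catch-up arithmetic using the relative levels cited
# above: Germany at 0.72 and Japan at 0.40 of the US level in 1970.
# If a country closes its gap over 25 years (1970 to 1995), its annual
# productivity growth must exceed the US rate by (1/level)^(1/25) - 1.
years = 1995 - 1970  # 25 years of hypothetical catch-up

for country, level in [("Germany", 0.72), ("Japan", 0.40)]:
    differential = (1.0 / level) ** (1 / years) - 1
    print(f"{country}: implied growth advantage of {differential:.2%} per year")
# Germany's implied advantage (~1.3% per year) is plausible; Japan's
# (~3.7% per year) helps explain why it never fully converged.
```

The comparison makes concrete why Japan, starting so much further behind, faced a much steeper climb than Germany did.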
As the authors look more deeply into these patterns, what do they see? 
1) The figure measures output per hour worked, but in Germany the average worker worked 1,363 hours in 2018, while the average US worker was on the job for 1,786 hours that year. (Take a moment to wrap your mind around that difference: the average German worker worked more than 400 fewer hours in 2018, which is more than 10 weeks less!) Indeed, Germany has been substantially reducing the average number of hours worked in recent years: for example, some German unions have negotiated the option of a 28-hour work week for workers who prefer it. A comparison of output per worker, rather than output per hour worked, would show a larger gap: "However, Germany has greatly reduced the number of hours worked per worker and so output per worker was only 73.7 per cent of the US level in 2017."
Japan has also reduced hours worked. In 1990, for example, the average Japanese worker was on the job 2,031 hours per year, compared with 1,833 hours for a US worker. But by 2018, the average Japanese worker was at 1,680 hours per year, lower than the US level of 1,786 hours.
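The hours gaps above are easy to verify directly; a quick sketch using the figures cited in the text:

```python
# Checking the hours-worked comparisons cited above (2018 figures,
# plus Japan's 1990 figure). The "weeks" conversion assumes a
# standard 40-hour work week, which is my assumption for illustration.
hours_2018 = {"Germany": 1363, "United States": 1786, "Japan": 1680}

gap_de = hours_2018["United States"] - hours_2018["Germany"]
print(f"US-Germany gap in 2018: {gap_de} hours")          # 423 hours
print(f"At a 40-hour week, that is {gap_de / 40:.1f} weeks")  # ~10.6 weeks

# Japan's decline in annual hours per worker from 1990 to 2018:
print(f"Japan's decline since 1990: {2031 - 1680} hours")  # 351 hours
```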
2) These differences in hours worked across countries may also imply something about levels of productivity. For the sake of argument, let's hypothesize that the decline in hours in Germany and Japan tended to be larger for workers with lower skills. If that is true, then the comparison of GDP per hour worked covers a broader range of US workers, compared against groups in Germany and Japan that lack the same proportion of low-skilled workers.
3) Of course, West and East Germany combined in the early 1990s, so what "Germany" means as an economy shifts at that time. But overall, the authors write: "The German economy caught up to the US level of productivity in the 1990s and has since remained close behind. Their economy lacks the innovative IT sector of the United States but has other advantages, including strong worker training. German GDP per capita is well below the US level, but that is because German workers have many fewer annual hours of work, and more leisure."
4) Of course, Japan had an economic meltdown for the ages in the early 1990s, from which its economy has arguably never fully recovered. The authors write: "In the 1990s that relative progress [for Japan] stalled out and GDP per hour worked fell further behind the levels achieved in both Germany and the United States. Increasing the level of competitive intensity and driving out low productivity small and large firms would help complete Japan's convergence to the productivity frontier. The Japanese manufacturing sector still has strong productivity performance, setting the frontier level of productivity in some industries, but its relative performance has declined. … The literature suggests Japan may have had difficulty with software development and the application of IT."
 
5) One can also do a breakdown of productivity by industry, and look for industries where productivity in one of these three countries seems especially low compared to the others. For the US, construction and utilities are two industries where low productivity stands out. The authors write: "Recent productivity growth in the United States has been very slow indeed. There are promising technologies on the horizon but so far the gains are not being realized. The results in this article point to problem industries such as construction and utilities where productivity growth is very low or negative. While it is likely that productivity measurement needs to be improved, there are also underlying problems associated with regulation and a lack of effective competition." I would add my own hobby-horses here for US productivity growth, which include an insufficient commitment to worker training and to research and development efforts.

US Transportation Infrastructure: Manage Supply or Demand?

Much of the public discussion of US transportation infrastructure proceeds from the belief that it faces a supply problem, which needs to be fixed by updating the old and building more of the new. Thus, the prescription is for spending more on roads, bridges, and mass transit. One common claim is that US transportation infrastructure is "crumbling" (to use a word that often arises in this context). Other claims are that additional transportation spending will reduce traffic congestion and improve economic growth.
All of these claims are highly disputable. Gilles Duranton, Geetika Nagpal, and Matthew A. Turner lay out the issues in their essay, "Transportation Infrastructure in the US." The essay was written for a conference held at the National Bureau of Economic Research last November. The conference proceedings are going to be published in a book, Economics of Infrastructure Investment, edited by Edward L. Glaeser and James M. Poterba. But for now, the working paper versions of the essays, and in some cases the galley proofs for the conference volume, are available online. The Duranton-Nagpal-Turner essay is currently available here, and also as NBER Working Paper #27254 (revised June 2020).

What about the claim that US highways are in declining condition?

There's something called the Highway Performance Monitoring System, in which the Federal Highway Administration collects annual data from the state highway authorities. There is also the International Roughness Index, which is measured by driving a vehicle over a road and recording its bumpiness. "As part of HPMS, state highway authorities measure IRI on every segment of the interstate highway system, more-or-less, every year." Looking at available data from 1992 to 2007: "The improvement in the condition of interstate highways has been almost monotonic."

What about the claim that US bridges are in declining condition?

There's a National Bridge Inventory. Looking at the data from 1990-2017, "the condition of bridges remained about the same, the number of bridges increased slowly, and bridge traffic increased modestly."

What about the claim that US mass transit is in declining condition?

The Federal Transit Administration maintains a National Transit Database. It shows that mass transit is heavily concentrated in a few cities. "New York accounts for about 40% of all transit rides in the entire country. Chicago is second, with 6%, followed by DC, Los Angeles, Boston and Philadelphia. In total, these six districts account for about 60% of all transit rides in the country. … The New York subway system carries about 71% of all subway riders and about 31% of all public transit riders in the whole country. … The stock of public transit motor buses is younger than it was a generation ago and about 30% larger, although ridership has been about constant. The mean age of a subway car stayed about the same from 1992 to 2017, but at more than 20 years old, this average car is quite old. Subways carry about twice as many riders as they did a generation ago."

In short, there is of course an ongoing need to update transportation infrastructure. But overall, the quality of US transportation infrastructure is not in decline. The authors write: "Massive increases in infrastructure are not required to reverse the decline of US transportation infrastructure. Not only is this infrastructure, for the most part, not deteriorating, much of it is in good condition or improving. … On average, most US transportation infrastructure is not crumbling, except (probably) for our subways."

But of course, one might argue that even if US transportation infrastructure is not literally getting worse, there might be large social gains from additional spending. For example, one might claim that more transportation spending will lead to improved economic growth or less traffic congestion.

On the issue of improved economic growth, Duranton, Nagpal, and Turner cite an array of evidence that improved transportation benefits those in close proximity. But much evidence suggests that transportation improvements lead to a geographic rearrangement of economic activity, not to additional gains for the area as a whole. There is a broader case, based on big-picture historical data, that long-run growth benefits when society has sufficient infrastructure investments in transportation, electricity, communication, and water/sewage. But investing in these areas doesn't offer a quick boost to economic growth rates.
On the issue of transportation infrastructure and congestion, the authors refer to the evidence that it\’s very hard to build your way out of congestion. For example, the six cities which account for most of US mass transit–New York, Chicago, Los Angeles, Boston, Washington, DC, and Philadelphia–all have extensive systems of both roads and mass transit. They also experience heavy traffic congestion. 
Overall, the authors suggest that gloominess about transportation infrastructure may not be primarily about its physical condition, but about displeasure over congestion. They write: 

The condition of infrastructure has, for the most part, improved over the past generation.
However, highways and subways per person have decreased, even as travel per
person has increased. Thus, while the condition of the infrastructure has improved or
stayed constant, it is serving much more demand, and so the speed of travel has decreased
and the experience of drivers and riders is worse. We speculate that the sentiment that infrastructure is deteriorating derives from the fact that users’ experiences are deteriorating
with increased congestion, and that this deterioration is largely independent of physical
condition.

As economists have argued for some time, people make choices when they commute: specifically, choices about the time, the route, and the mode (like whether to take a car or mass transit). When you build additional transportation capacity, some of those who were taking other times, other routes, and other modes will shift over, and the additional capacity quickly becomes congested, too. Thus, it's quite difficult to build one's way out of congestion: it would mean building enough capacity so that everyone who might choose to travel at a peak time can do so without hindrance–even if all those highway lanes and mass transit vehicles are near-empty at other times. The public policy answer to traffic congestion is instead to focus on the demand side–for example, by charging higher tolls and prices during peak-load times.
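The demand-side logic can be sketched with a toy linear demand model (all numbers here are illustrative and mine, not drawn from the essay):

```python
# Toy illustration of demand-side congestion management. Peak-hour
# trip demand falls as the toll rises; we solve for the toll that
# holds traffic exactly at the road's free-flow capacity. All
# parameters are made-up illustrative values.
BASE_DEMAND = 12000  # peak-hour trips demanded at a zero toll
SENSITIVITY = 800    # trips deterred per dollar of toll
CAPACITY = 8000      # trips per hour the road carries at free-flow speed

def peak_trips(toll):
    """Linear demand: trips demanded at a given toll (in dollars)."""
    return max(BASE_DEMAND - SENSITIVITY * toll, 0)

# With no toll, demand (12,000) exceeds capacity (8,000): congestion.
# The clearing toll shifts the excess to other times, routes, or modes.
clearing_toll = (BASE_DEMAND - CAPACITY) / SENSITIVITY
print(f"Toll that clears the peak: ${clearing_toll:.2f}")       # $5.00
print(f"Peak trips at that toll: {peak_trips(clearing_toll):.0f}")  # 8000
```

Adding a lane shows up in this sketch as a higher CAPACITY, which lowers the clearing toll but never eliminates the need for one so long as zero-price demand exceeds capacity.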

_______________

Here's the full Table of Contents for the Economics of Infrastructure Investment volume:

Table of Contents
Introduction: Edward L. Glaeser, James M. Poterba

1. Transportation Infrastructure in the US: Gilles Duranton, Geetika Nagpal, Matthew A. Turner
Comment: Stephen J. Redding

2. Measuring Infrastructure in BEA's National Economic Accounts: Jennifer Bennett, Robert Kornfeld, Daniel Sichel, David Wasshausen
Comment: Peter Blair Henry

3. Can America Reduce Highway Construction Costs? Evidence from the States: Leah Brooks, Zachary Liscow
Comment: Clifford Winston

4. Procurement Choices and Infrastructure Costs: Dejan Makovšek, Adrian Bridge
Comment: Shoshana Vasserman

5. Digital Infrastructure: Shane Greenstein
Comment: Catherine Tucker

6. When and How to Use Public-Private Partnerships in Infrastructure: Lessons from the International Experience: Eduardo Engel, Ronald D. Fischer, Alexander Galetovic
Comment: Keith Hennessey

7. A Fair Value Approach to Valuing Public Infrastructure Projects and the Risk Transfer in Public Private Partnerships: Deborah Lucas, Jorge Jimenez Montesinos
Comment: Richard Geddes

8. The Macroeconomic Consequences of Infrastructure Investment: Valerie A. Ramey
Comment: Jason Furman

The Boy, the Wolf, and the Case for Solar Radiation Management in Response to Climate Change

Richard Zeckhauser delivered a keynote address at the meetings of the Southern Economic Association last November, and an early online version, in advance of publication, is now available at the Southern Economic Journal website: "Three prongs for prudent climate policy," by Joseph E. Aldy and Richard Zeckhauser (first published April 29, 2020).

Zeckhauser offers this retelling of the boy and the wolf fable (footnotes omitted):

An extremely analytic boy lives in a village high in the mountains. The village used to have a traditional crop‐based economy, with a few scraggly sheep on the side. However, outside technology produced hybrid sheep that yield ample wool and mutton, and thrive at high altitudes. The village turned to mainly raising sheep, and ever more sheep.

When sheep were few, they could graze nearby the village. But when they became abundant, they had to graze well beyond where the villagers lived, namely in the outlands. The village had been warned that sheep without humans living nearby bring wolves. The boy has seen wolves and noticed that sheep have disappeared in unusual numbers. The boy's pleas for a shift back toward a less‐profitable crop economy have been ignored. Moreover, the boy is warning that more wolves are likely to take up residence nearby, and those wolves may reproduce. Changing the mix in the economy significantly back toward crops (a mitigation measure) will lower the wolf threat. As the first prong of defense, he recommended a 50% cutback in sheep, thus significantly reducing their attracting smell and their presence in the outlands.

The villagers heard the boy's message, and they responded, albeit in a woefully insufficient fashion. Whereas previously they had raised 20 sheep per family, they cut back to 18 and raised some vegetables. But evidently wolf numbers were still below equilibrium, and the losses to them increased.

Given the pace of sheep loss, and the unwillingness of the villagers to cut back sharply, the boy recommended that another layer of protective fencing (an adaptation technology) be erected in the outlands, even where the terrain is steep. This would be a second prong of the defense strategy. The village made a half‐hearted effort; after all, fencing is expensive, particularly in rocky and hilly domains. Some new fences were built, but the wolves readily evaded most of them. After a minor dip, the losses continued to rise.

Finally, the boy in desperation recommended that the village raise a hunting posse to search out and kill or scare away the wolves (an amelioration strategy), a third prong of defense. The villagers are very reluctant. They are farmers turned shepherds, not hunters. There may be riding accidents or gun accidents; indeed, a wolf may even turn on the posse. The village council votes against raising a posse. A few more fences are built, and the council implores the villagers to cut their flocks, and return to agriculture, but few villagers follow that course. Raising large numbers of hybrid sheep, even with the current 20% annual loss rate, is more profitable than growing crops.

As the annual loss rate climbs to 25%, the boy cries: "Please, can't we move forward on all three fronts? Someday the wolves will come to snatch our children, and that will end the world as we know it."

And so it is with climate change. We've been told, correctly, that the world is running out of time to curb its emission‐profligate ways. The world did little mitigation and ran out of the urgent time it was given. And matters have gotten worse, much worse. Emissions cutting, drastic emissions cutting, are still the recommended primary prong of our defense. Experience suggests, and economics reveals, that the magnitude of needed cutting will be almost impossible to achieve in the time available. Moreover, even if the prescribed level of mitigation is met, it may already be too late. A second prong of defense, adaptation, has received some discussion, but very little actual implementation. Adaptation would consist of such measures as building barriers to the ocean, restoring absorptive marshes, repositioning sensitive equipment from cellars to roofs, and preventing new construction in threatened areas. This analysis considers a third prong, amelioration through SRM to complement mitigation and adaptation.

Zeckhauser emphasizes several themes. The mitigation strategy for climate change has now been in the public eye at least since the 1992 "Earth Summit" in Rio de Janeiro. As we approach the three-decade anniversary of that summit, we can look at the data to find out how well the mitigation strategy is working. The first figure shows annual global carbon emissions from fossil fuels; the second figure shows atmospheric concentrations of carbon.
Zeckhauser is not expressing opposition to a mitigation policy. He is just pointing out a hard truth: mitigation is an arduous, difficult, and costly path. Given the politics and economics of the world in which we live, it is not guaranteed to succeed. Given the risks involved from high and rising emissions of carbon and other greenhouse gases, it makes sense to think about combining mitigation with advance planning for adaptation and amelioration, steps that are themselves costly and even risky. As the authors note:

Mitigation efforts, which will require immense efforts and vast expenditures, will likely take decades to even cut emissions in half. Large‐scale renewable power, for example, would likely require years of innovation and commercialization of large‐scale battery storage, and the hopeful development of nuclear fusion. …

On the choices between amelioration and adaptation (citations and footnotes omitted): 

The academic literature as well as the media have dedicated significantly less attention to amelioration than to emission mitigation. The most promising amelioration technology to date, in terms of feasibility and cost, as mentioned, is SRM [solar radiation management]. It would inject aerosols, most likely sulfur particles delivered by airplane, into the upper atmosphere to reflect back incoming solar energy. This would lower the temperature for a given accumulation of atmospheric GHGs [greenhouse gases]. This technology draws on research about the cooling impacts of introducing sulfur dioxide (SO2) into the atmosphere—from volcanic eruptions as well as the combustion of sulfur‐intensive coal and petroleum products. An SRM strategy would have side effects, perhaps extremely costly side effects.

Implementation of SRM on a scale sufficient to cool the planet will take considerable time and money, though a slight fraction of the costs of climate change … While scientists are quite confident of its efficacy, experiments are still needed to demonstrate the feasibility of implementation. To deliver the SO2 to the lower stratosphere will require a new type of plane, and planes take years to develop. Such change will require research on feasibility, safety, and governance, and it could take many years to achieve grudging acceptance of this technology and then move to actual implementation at any scale.

Adaptation will require considerable time and money as well. For example, if physical barriers are to be built to protect against rising sea levels and more intense storms, it will take years to figure out the engineering requirements, develop the plans, and secure the political will to produce the required resources. For example, the U.S. Army Corps of Engineers (2019) has identified a six‐mile long sea barrier with storm surge gates as a potential investment to protect New York City from climate change. It estimates that the wall would take 25 years to construct. Moving human activity away from the coasts will require decades and trillions of dollars. In short, the monies expended on adaptation will vastly exceed those required for SRM. That is true even if political realities prevent many worthwhile protective projects from being undertaken.

Zeckhauser focuses most of his talk on the difficult issues of risk and implementation in a strategy to ameliorate the effects of carbon emissions through solar radiation management. Here, I'll just mention that people in the modern world yearn for, and even expect, a techno-fix for difficult problems. A new coronavirus? Whip up a vaccine. Climate change? Whip up some low-cost and environmentally friendly solar and wind power. There's nothing wrong, of course, with investing in possible techno-fix solutions. But there is something wrong with not being willing to consider the risks that such solutions may not arrive soon or at all, or may be only partial answers, or may bring problems of their own, and with failing to make supplementary plans.

For a couple of previous posts on geoengineering, see: 

The Random Presence of Females for Discovering Discrimination

It can be difficult to prove the existence of discriminatory attitudes to the full satisfaction of a social scientist. Ideally, one would like a real-world random event or policy intervention to occur, so that one could compare the behavior of those who experienced the random event with those who did not. A couple of recent studies have found specific situations where this can be done.
Here's one example, analyzed by Marco Battaglini, Jorgen M. Harris, and Eleonora Patacchini in "Professional Interactions and Hiring Decisions: Evidence from the Federal Judiciary" (January 2020, NBER Working Paper 26726). They note that in US appellate courts, the usual pattern is that cases are heard by panels of three judges randomly assigned from the overall group. They also note that judges have pretty much a free hand in deciding whom to hire as clerks. An overview of their work reports:

Using data from the Judicial Yellow Book and a database of federal court cases, the researchers find that judges who hear more cases alongside female colleagues are 4 percentage points more likely to hire at least one female law clerk in the following year. They also examine whether exposure differentially affects judges who vary in meaningful ways — by gender, age, experience level, political affiliation, status, quality, and current law clerk staff composition. The effects of exposure to female judicial colleagues are larger for men, for judges whose current roster of law clerks is majority male, and for less-experienced judges. For example, male judges who hear more cases alongside female judges are 4.3 percentage points more likely to hire a female law clerk subsequently, while female judges who hear more cases alongside female judges are 1.6 percentage points more likely to hire a female law clerk subsequently.

Here's another example from a few years ago, this one from the venture capital industry, from research by Paul A. Gompers and Sophie Q. Wang, "And the children shall lead: Gender diversity and performance in venture capital" (2017, Harvard Business School Working Paper 17-103). They look at whether senior partners in venture capital firms have sons or daughters, which for this group we can treat as a random event. Then they look at how likely the firm is to hire women.

We find that the proportion of female hires increases by 1.93% if you replace a son with a daughter for the existing partners in a firm. Given that about 8.03% of the new hires are female, this suggests a 24% increase in the probability of hiring a senior female investor when a son is replaced with a daughter for the existing partners. … [W]e also show that improved gender diversity, induced by parenting more daughters, improves deal and fund performances.
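
The arithmetic behind the quoted 24% figure can be checked directly. Here is a minimal sketch in Python, using only the numbers from the quote above:

```python
# Numbers from the Gompers-Wang quote above.
baseline_female_share = 8.03   # percent of new hires who are female
effect_of_daughter = 1.93      # percentage-point increase from one extra daughter

# The proportional increase implied by the percentage-point effect:
relative_increase = effect_of_daughter / baseline_female_share
print(f"{relative_increase:.0%}")  # prints 24%
```

In other words, the 24% is the percentage-point effect expressed relative to the low baseline rate of female hiring.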

My guess is that not many judges or venture capitalists had previously been openly vowing not to hire women as clerks or as part of venture capital firms; indeed, I'm sure many of them had previously hired a few women. In addition, I doubt that these changes involved an open and conscious change-of-heart epiphany about hiring more women. I doubt that a lot of judges were muttering in their chambers: "Well, now that I've been on a panel with a female judge, I'll go ahead and hire more female law clerks." I doubt a lot of venture capitalists were muttering to themselves: "Well, now that I have a daughter, all of a sudden women job applicants seem more plausible to me." Instead, it seems to me that these kinds of studies demonstrate both the existence of an unconscious bias and a mechanism for attenuating that bias.

The studies also demonstrate that increasing contact can be a meaningful tool for reducing discriminatory attitudes. Kevin Lang and Ariella Kahn-Lang Spitzer make this point in the context of racial discrimination in their essay "Race Discrimination: An Economic Perspective" (Spring 2020, Journal of Economic Perspectives). They point out that racial discrimination potentially affects many areas: residential location, credit access, educational opportunity, hiring, criminal justice, medical treatment, and so on. Pushing back against discrimination in each of these areas is of course worthwhile, but some ways of pushing back may have broader leverage than others across a range of areas. They write:

[P]olicies to increase interracial contact—like limiting residential segregation—may offer a useful point of leverage. Residential and social segregation may lead to prejudice and taste-based discrimination. Pettigrew and Tropp (2006) provide a meta-analysis of 515 studies and conclude that there is strong support for “intergroup contact theory,” which proposes that contact tends to reduce prejudice. Some economists have contributed to our understanding of this topic. Carrell, Hoekstra, and West (2015), for example, found that having an additional black member in an Air Force squadron of roughly 35 people increased the probability of having a black roommate as a sophomore (usually not a freshman squadron member) by about one percentage point, or about 18 percent. Similarly, exposure to more black peers with high admissions scores increased the probability that whites reported that they had become more accepting of African Americans. Dahl, Kotsadam, and Rooth (2018) find similar positive effects on male attitudes towards female recruits from having been assigned to a squad with a woman member during boot camp in Norway. In particular, given the importance of networks in job search, social distance can directly increase racial disparities in employment (Loury 2000). These studies, together with the large literature outside economics, suggest a public interest in greater integration and reducing social distance across groups.

The (Semi-Official) End of the Longest US Economic Expansion

The longest economic expansion in US history (or at least back to 1854, before which time the data gets not just shaky but exceedingly shaky) ended in February at 128 months. It beat the record established during the 120-month expansion of the 1990s. Third place is the 106-month expansion of the 1960s, with the 92-month expansion of the 1980s in fourth place. These figures are from the Business Cycle Dating Committee of the National Bureau of Economic Research, which is to say it's from a group of academic economists who have an affiliation with a certain prestigious research institute.

(Specifically, the economists involved in this decision were Robert Hall, Stanford University (chair); Robert Gordon, Northwestern University; James Poterba, MIT and NBER President; Valerie Ramey, University of California, San Diego; Christina Romer, University of California, Berkeley; David Romer, University of California, Berkeley; James Stock, Harvard University; and Mark Watson, Princeton University).

It sometimes comes as a surprise to non-economists, but although the US government publishes a wide array of economic data, it does not attempt to pronounce on when a recession has started or ended. Given the political implications of setting such dates, the choice of leaving this task to an outside group is probably a wise one.  In a press release earlier this week, the NBER committee of economists announced that they had chosen February 2020 as the month for the peak of the previous business cycle, and explained why. The committee focuses on monthly data about domestic production and employment.

For domestic employment, there is the Current Employment Statistics (CES) survey of employers. Each month, "CES surveys approximately 145,000 businesses and government agencies, representing approximately 697,000 individual worksites." It does not include, for example, proprietors running their own firms, the self-employed, farm workers, or those employed by households. But the Total Nonfarm Payroll measure that it does cover includes about 80% of all workers. That measure peaks in February.
However, one problem with the CES data is that because it's collected from employers, it includes workers who are being paid but are on furlough and thus might be thought of as "unemployed." So for confirmation, the NBER committee also looked at the Current Population Survey, which surveys about 60,000 households each month. The advantage here is that the survey can ask whether a person is on furlough. The downside is that data from payrolls is a pretty solid measure of how many people are getting paid, while surveys that ask people may be subject to more measurement error. That said, the CPS data peaks in February, too.
On the production side, estimates of total US economic output like gross domestic product are not available on a monthly basis, but only quarterly; moreover, GDP numbers are first released as an "advance" estimate with preliminary data at the end of a given quarter, then updated several months later with second and third estimates as more data become available. For making a decision now about the month the recession started, reliable if less complete monthly data are needed. Thus, the NBER turns to measures of personal consumption expenditures and personal income. Personal consumption expenditures are part (about 70%) of total GDP; personal income is part of the gross national income calculation.
The data on real personal consumption expenditures are compiled by the Bureau of Economic Analysis from a variety of sources, including the Monthly Retail Trade Survey, but also other government agencies (like the Departments of Energy, Transportation, and Health and Human Services) as well as private associations and trade groups.
Instead of measuring production by expenditures, an alternative is to measure the income that people received for producing. The issue here is that one needs to measure income in a way that excludes transfer payments, using sources like the ongoing data collected for the Quarterly Census of Employment and Wages for income, and government data on payments for estimating transfers. The monthly data on income are thought of as a little more subject to later revision than the data on expenditures, but they also show a peak in February 2020.
These four measures are not always so synchronized in their timing. But in this case, it's of course not a surprise that the various measures are neatly aligned, because the pandemic that brought on the recession hit the economy with full force in March.
All definitions of "recession" are unofficial, but a definition one sometimes hears is that a recession is two consecutive quarters of declining economic activity. Thus, it may seem odd to have the NBER declaring a recession in June, with only three months of data available since February 2020. In response to such concerns, the NBER committee writes:

[I]n deciding whether to identify a recession, the committee weighs the depth of the contraction, its duration, and whether economic activity declined broadly across the economy (the diffusion of the downturn). The committee recognizes that the pandemic and the public health response have resulted in a downturn with different characteristics and dynamics than prior recessions. Nonetheless, it concluded that the unprecedented magnitude of the decline in employment and production, and its broad reach across the entire economy, warrants the designation of this episode as a recession, even if it turns out to be briefer than earlier contractions.

Finally, I'll add as a mignardise that the four longest economic expansions have all happened since the 1960s and three of the four longest happened since the 1980s. It's not much comfort for the current economic problems, but one of my best friends, Casual Empiricism, suggests that the US economy has become less recession-prone over time.

Some Economics of Tipping

Tipping is a puzzle for economists. The service has already been provided. In many cases, the restaurant diner or taxicab passenger will never see the provider of the service again. So why would a rational consumer pay extra? It seems clear that customs of tipping are embedded in psychology and social expectations. But given that tipping varies across times and countries, and that even consumers at a given time and place tip for some purchases but not others, the explanation is likely to be complex. Ofer H. Azar explores these questions and the limited evidence that is available in "The Economics of Tipping" (Spring 2020, Journal of Economic Perspectives, pp. 215-236).

I had not known this history of tipping: 

There is no agreement on just how the practice of tipping started. Hemenway (1993) suggests that tipping dates as far back as the Roman era and is probably even older. Segrave (1998) claims that tipping may have begun in the late Middle Ages when a master or lord of the manor could give a little extra money to a servant or laborer, whether from appreciation of a good deed or from compassion. Brenner (2001) attributes the tipping origins to 16th century England, where brass urns with the inscription “To Insure Promptitude” were placed first in coffeehouses and later in local pubs. People tipped in advance in order to get good service by putting money in these urns. Indeed, Schein, Jablonski, and Wohlfahrt (1984) and Brenner (2001) suggest that “tip” comes from the first three letters of “To Insure Promptitude,” but others suggest different stories. Hemenway (1993), for example, argues that “tip” may come from stipend, a version of the Latin “stips.”

I had also not known that the practice of tipping flowed from Europe to the United States, which is ironic, because the practice has faded in Europe while strengthening in the US–at least in certain industries. Azar writes:  
It seems that Europe exported the practice of tipping to the United States, when high-income Americans who traveled in Europe in the 19th century started tipping upon their return to the United States, to show that they had been abroad and were familiar with the European customs (Schein, Jablonski, and Wohlfahrt 1984). By 1895, the average tip in European restaurants was 5 percent of the bill, while in the United States a common tip was 10 percent. Segrave (1998) estimates that during the early 1910s, five million US workers—more than 10 percent of the labor force—had tip-taking occupations. The large extent of tipping gave some tipped employees relatively high income, and employers both in Europe and the United States sometimes tried to take these economic rents from the workers either by taking the tips, or by charging employees for the right to work and earn tips (Scott 1916; Segrave 1998; Azar 2004a).
By the early 20th century, even though the tipping custom had only just arrived in the United States, there were already attempts to abolish it. Some saw tipping as creating a servants’ class, part of a society where the tippers looked down upon the service providers. Gunton (1896) called tipping offensively un-American, because it was contrary to the spirit of American life of working for wages rather than fawning for favors. Some states passed laws against tipping, starting with Washington in 1909, but these laws were repealed after several years.

Over the years, the percentage tipped in the United States has gradually risen. The 10 percent tipping norm in restaurants in the late 19th century stayed for several decades (Hathaway 1928; Post 1937), but eventually increased to 15 percent (Post 1984). In her etiquette manual, Post (1997) writes, “It wasn’t long ago that 15 percent of the bill, excluding tax, was considered a generous tip in elegant restaurants. Now the figure is moving toward 20 percent for excellent service. In ordinary family-style restaurants 15 percent is still the norm.” Today, some travel guides refer to 15−25 percent as a tipping standard in restaurants (for example, https://www.lonelyplanet.com/news/2016/12/07/how-much-to-tip/). A similar pattern of increasing tip percentages is observed in taxi tipping, starting with 10 percent early in the 20th century (Hathaway 1928), rising in mid-century to 15 percent (Post 1984), and then by the end of the 20th century reaching 20 percent in large cities (Post 1997). Today, tipping norms differ around the globe. Tourist guidebooks often provide advice about the tipping norms in the country (for example, Star 1988). In Europe, where tipping originated and was common already hundreds of years ago, today tipping is generally less common and in much smaller magnitudes than in the United States. In many European restaurants, tipping takes the form of rounding up the restaurant bill a little, not adding 15−20 percent to the bill. Along with restaurant servers and taxicab drivers, some of the professions where tipping is at least relatively common include food delivery people, bartenders, and hair salon workers. …
Why does tipping persist, at least in certain settings? For example, why don't more restaurants move to a fixed service charge, or just raise their prices? Azar offers a number of ways in which the different players in a restaurant have reasons to prefer tipping and to facilitate it, for example, by leaving a line on the bill for the tip and even suggesting a precalculated amount.
From the customer's point of view, as Azar notes: "Many customers prefer the control of choosing a tip and have a positive feeling that they are showing generosity."
From the server's point of view, tips are a form of variable compensation. A server probably works a lot harder on Friday and Saturday than on Tuesday and Wednesday, but the hourly pay from the restaurant may not distinguish much or at all between days. It also seems clear that servers often get higher pay as a result of tips (and their implicit deal with diners) than they would get with pure hourly pay. Azar writes: "Servers earn more as a result [of tipping] and find that busy shifts where they have to work harder are rewarded with higher income. Service quality seems modestly higher."
For restaurant owners, in a setting where tipping is an established norm, attempts to move away from tipping run a risk of alienating those customers who like the feel of tipping and also of driving away their best servers, who will earn more at a restaurant with tips–especially if they can work Friday and Saturday nights. However, restaurant owners will also look at tips with a degree of longing. They will either try to find ways to share tips among all workers, including the kitchen workers, or to shift some of the tip money to their own pockets, or both. 
There's much more in the article itself. As Azar writes: "Perhaps the self-reinforcing social norm of tipping will be toppled eventually in the United States, but with more than a century of history, it seems unlikely to go quickly."
[Full disclosure: I've worked as Managing Editor of the Journal of Economic Perspectives since 1986, so I am perhaps predisposed to find the articles of great interest. However, this blog is an unpaid labor of love, not part of my job.]

Pregnancy-Related Mortality in the US

When it comes to maternal mortality during pregnancy, the United States not only lags behind other high-income countries, but has been getting worse. The National Academies of Sciences, Engineering, and Medicine tells the story in its report Birth Settings in America: Outcomes, Quality, Access, and Choice, which documents the rising trendline of US maternal mortality over time.

As part of putting this in (grim) perspective, the report notes (citations omitted):

In contrast, the rate of maternal mortality has consistently dropped in most high-resource countries over the past 25 years. Severe maternal morbidity has been increasing in the United States as well. It is estimated that for every woman who dies in childbirth, 70 more come close to dying. All told, more than 50,000 U.S. women each year suffer severe maternal morbidity or “near miss” mortality, and roughly 700 die, leaving partners and families to raise children while coping with a devastating loss. Like the rates of maternal mortality, U.S. rates of severe maternal morbidity are high relative to those in other high-resource countries. In this context, it is notable that some local efforts in the United States have shown progress in reducing rates of maternal mortality and morbidity. In California, for example, the California Maternal Quality Care Collaborative led an initiative that reduced rates of maternal mortality by 55 percent (from 2006 to 2013) …

The fundamental problem here doesn't seem to be an issue of too little overall spending, but rather one of misallocated spending. US overall healthcare spending is high, and costs of childbirth are a big part of that. The NAS report notes (citations omitted):

Childbirth is the most common reason U.S. women are hospitalized, and one of every four persons discharged from U.S. hospitals is either a childbearing woman or a newborn. As a result, childbirth is the single largest category of hospital-based expenditures for public payers in the country, and among the highest investments by large employers in the well-being of their employees. Cumulatively, this spending accounts for 0.6 percent of the nation’s entire gross domestic product, roughly one-half of which is paid for by state Medicaid programs.

The discussion in the report suggests that there is too little spent on prenatal care, perhaps some overuse of (costly) hospitals as a venue for births compared with other options, and an overuse of some costly care like C-sections.
A natural concern is how this might relate to infant mortality. However, US infant mortality has been falling over time. The two main concerns in this area seem to be a rise in the share of children born with low birthweights, together with large discrepancies across groups. A figure from a National Center for Health Statistics Data Brief in 2018 (a stacked bar chart showing singleton low, moderately low, and very low birthweight rates from 2006 through 2016) illustrates the pattern.
Here's the NAS report on infant mortality differences across groups (citations and references to figures omitted):

In contrast to maternal mortality, infant mortality in the United States has been declining over the past 20 years, and there are expanded opportunities for survival at increasing levels of prematurity and illness complexity. However, large disparities persist among racial/ethnic groups and between rural and urban populations. In 2017, infant mortality rates per 1,000 live births by race and ethnicity were as follows: non-Hispanic Black, 10.97 per 1,000; American Indian/Alaska Native, 9.21 per 1,000; Native Hawaiian or Other Pacific Islander, 7.64 per 1,000; Hispanic, 5.1 per 1,000; non-Hispanic White, 4.67 per 1,000; and Asian, 3.78 per 1,000. … Rates of preterm birth and low birthweight have increased since 2014, and as with other outcomes, show large disparities by race and ethnicity. Low-birthweight (less than 5.5 pounds at birth) and preterm babies are more at risk for many short and long-term health problems, such as infections, delayed motor and social development, and learning disabilities. About one-third of infant deaths in the United States are related to preterm birth … 

The NAS report does not offer a lot of clear recommendations for what should be done. The discussion of steps that could be taken for quality improvement (QI) is full of statements like: "While many QI initiatives have shown promising results, many current QI initiatives are underfunded." Do tell. But the report does offer some possible models to follow, like the California Maternal Quality Care Collaborative mentioned earlier.

Some Thoughts on Police Reform

Disputes over policing are of course not new. Andrea M. Headley and James E. Wright, II, look at "National Police Reform Commissions: Evidence-Based Practices or Unfulfilled Promises?" (Review of Black Political Economy, 2019, 46:4, pp. 277–305). As they point out, the 1931 National Commission on Law Observance and Enforcement (the “Wickersham Commission”) focused on problems like excessive police use of force, how the police should focus more on prevention of crime, and improvement of personnel standards for police hiring.

Headley and Wright look at later police commissions, including the Kerner Commission report in the aftermath of the riots of the late 1960s, the 2015 President’s Task Force on 21st Century Policing, and the 2018 report from the U.S. Commission on Civil Rights entitled “Police Use of Force: An Examination of Modern Policing Practices.” As they note, later commissions often revisit the topics from the 1931 commission, while adding some additional concerns like improving police-community relations, accountability, transparency, and diversity.

In short, the current controversies surrounding police departments are not new, which of course is part of what makes them frustrating. Headley and Wright call it a "wicked problem," and write: "Wicked problems are characterized by their complex nature, changing circumstances, lasting impact as well as the incomplete information regarding the issue—all of which pose difficulties for solving such problems."
Of course, commissions are often willing to write up a wish list of what the members think should be done. But does the existing research offer useful guidance about what might work? Lack of data has been a severe problem. For example, as the 2018 report of the US Civil Rights Commission notes, starting on the first page of the "Executive Summary":

While allegations that some police force is excessive, unjustified, and discriminatory continue and proliferate, current data regarding police use of force is insufficient to determine if instances are occurring more frequently. The public continues to hear competing narratives by law enforcement and community members, and the hard reality is that available national and local data is flawed and inadequate. A central contributing factor is the absence of mandatory federal reporting and standardized reporting guidelines. … [M]oreover, the data that are available is most frequently compiled by grassroots organizations, nonprofits, or media sources. Data are not only lacking regarding fatal police shootings, but data regarding all use of force are scant and incomplete.

The report then quotes Roland Fryer: "Data on lower level uses of force, which happen more frequently than officer-involved shootings, are virtually non-existent. This is due, in part, to the fact that most police precincts don’t explicitly collect data on use of force, and in part, to the fact that even when the data is hidden in plain view within police narrative accounts of interactions with civilians, it is exceedingly difficult to extract."

In a similar vein, Headley and Wright note:

Despite the breadth of studies surrounding police use of force, there are still questions that merit more attention. First, scholarship would be enhanced if we could identify why disparities in use of force exist and for which type of officers? For instance, are disparities present due to biases and discrimination on part of the officer? Are they present due to institutionalized racism or biases that are implicitly embedded in department practices, policies, or protocols? Or, are they present due to differences in the quantity and quality of civilian interactions with, and treatment toward, officers? Second, the literature on police use of force tells us very little about when force is not used. Thus, to be able to understand police use of force decision making more broadly, we need to also assess the interactions where force could have been used but was not. Getting at these nuances is key to enhancing the knowledge base on police use of force. To aid in addressing these questions, a national and comprehensive database of police–civilian interactions is warranted.

Headley and Wright also cite an array of evidence about "community-oriented policing" (COP), a broad term that usually includes "the provision of victim services, counseling, community organizing, and education; and the establishment of foot patrols, neighborhood teams/offices, and precinct stations." They note: "The research has generally shown COP positively affects community perceptions and attitudes and thus builds relations, whereas such strategies have very limited, if any, effects on reducing crime." Given that improving police-community relations is a goal in itself, COP policies may be worth pursuing, but perhaps with limited expectations about how much they will reduce crime.

They describe a knot of potential conflicts that can arise between the goal of increasing diversity and the hiring standards that have in some cases limited the hiring of a more diverse workforce. Headley and Wright comment:

Police departments across the country are realizing the need to expand their hiring pool while also acknowledging some of the harms that have been done to keep people of color and women out of policing (whether intentionally or not), which provides a fruitful area for future research to assess. For instance, Madison Police Department in Wisconsin has restructured its physical agility test and has increased the number of women police officers hired, whereas St. Paul Police Department (Minnesota) changed its written test requirements (which had disproportionately adverse impacts on applicants of color) to focus more on personal history and community engagement rather than situational testing. Going a step further, Colorado’s Peace Officer Standards and Training Board allows officers who have been arrested for criminal convictions to still be considered for law enforcement positions under certain criteria, whereas other police departments, such as the Burlington Police Department in Vermont, only require legal permanent residency or work authorizations instead of U.S. citizenship. These advances occurring in the police profession open a new door for researchers to conduct pre- and post-evaluations of recruitment and hiring initiatives particularly as it relates to long-term organizational culture and employee performance.

One of the go-to suggestions for any police reform is "better training." They pour a little chilled (if not quite cold) water on this suggestion:

Training has been one of the most commonly used ways to respond to crises in the policing profession in hopes to affect police behavior. Unfortunately, with the lack of consistency in training across police departments, scholarship has not rigorously or systematically been able to examine the impacts of various types of trainings. This is a huge gap in the existing scholarship that needs to be filled to move the practice of policing forward and improve outcomes.

Headley and Wright are not trying to provide a full overview of the literature. As they point out, many of the problems of policing fall under the heading of "culture": whatever explicit rules exist, they are filtered through the culture of police departments. They do not discuss the issue of police unions, which may in some cases be a substantial barrier to accountability, transparency, and shifts in culture. Katherine J. Bies makes this case in a 2017 essay in the Stanford Law and Policy Review, “Let the Sunshine In: Illuminating the Powerful Role Police Unions Play in Shielding Officer Misconduct.” She writes:

[D]uring the rise of police unions to political power in the 1970s, police unions lobbied for legislation that shrouded personnel files in secrecy and blocked public access to employee records of excessive force or other officer misconduct. Today, these officer misconduct confidentiality statutes continue to prohibit public disclosure of disciplinary records related to police shootings and other instances of excessive force. Moreover, as the failure of recent sunshine legislation demonstrates, police unions continue to challenge and deter today’s progressive reform efforts that would replace secrecy with accountability and transparency. This Note also argues that police unions are unparalleled in their ability to successfully advocate for policy proposals that conflict with traditional democratic values of accountability and transparency.

The problem of police reform of course involves a desire to minimize grievous abuses or excessive violence, but it's much more than that. The terrible cases in the headlines are the 10% of the iceberg showing above the waterline. The police need to be able to operate within their communities with some basic level of community support, but in many cities, police in their day-to-day interactions have already lost the trust of a substantial share of the public, and are in danger of losing the trust of many more.

Exploding US Unemployment Rates: A Peek Inside

US unemployment rates have risen faster and reached higher levels than at any time since the start of regular employment statistics in the late 1940s. The unemployment rate was 14.7% in April and then dropped unexpectedly (to me, at least!) to 13.3% in May. Even so, looking back over the last 75 years, the monthly unemployment rate has never risen this fast or reached a level this high.
The explosive rise in the unemployment rate has been accompanied by a sharper decline in jobs than the US economy has experienced in the last 75 years. In the data on total US employees, the number rises gradually over the decades, keeping pace with the US population, and drops during or just after recessions. But whether it's the Great Recession of 2007-9 or the severe double-dip recession of the early 1980s, the US economy has not seen a drop in total jobs this fast and severe. The total number of jobs was 151 million in March and 130 million in April, a drop of about 14% in a single month, before a gain of about 2.5 million total jobs in May.
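The size of that one-month drop can be verified from the figures just cited; a quick check in Python:

```python
# Total US employment figures cited above (in millions of jobs).
jobs_march = 151
jobs_april = 130

one_month_drop = (jobs_march - jobs_april) / jobs_march
print(f"{one_month_drop:.1%}")  # prints 13.9%, i.e. a drop of about 14%
```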
The key question about unemployment is whether there could be a quick bounceback. Are many of these employers poised to resume hiring? Are many of these workers poised to go back to work? One interesting tidbit of evidence here is the share of the unemployed who lost their jobs because of layoffs, which carries some implication that they could be readily rehired. Here's another striking figure. The share of "job losers on layoff" hovered around 8-15% of the total unemployed from the mid-1980s up to the start of the pandemic. 
One of the shifting labor market patterns in the last 30 years or so has been the disappearance of the "layoff." If you look back at recessions in the 1970s and 1980s, you see that the share of "job losers on layoff" rises during recessions, and then falls. It was a much more common pattern for factories and other employers to lay off workers and then to rehire those same workers. But when you look at the recessions of 1990-91, 2001, and 2007-9, you don't see much of a rise in layoffs. Instead, the chance that an unemployed worker was laid off with a plausible prospect of being rehired, rather than just let go, got lower and lower. For example, look how low the percentage falls in the years after the Great Recession. 
But the share of "job losers on layoff" just spiked to 78% in April and 73% in May, which implies that large numbers of the unemployed could conceivably be rehired quickly. Of course, a "layoff" could become an empty promise, where most of these workers are not rehired, and instead need to find new jobs in a socially distanced economy. 
I've also been struck by the difference between US and European unemployment data. When US unemployment was spiking to 14.7% in April, unemployment in the 27 countries of the European Union barely nudged up to 6.6% in April; for the subset of 19 countries in the euro zone, unemployment was 7.3% in April. Why did US unemployment spike to double European levels? The likely answer involves interactions between public policy and what is counted as "unemployment." 
One key policy choice is whether assistance to workers has been sent to them directly, say via unemployment insurance, or whether assistance to workers was funneled through employers, so that workers who were not necessarily going to work still kept receiving a (government-funded) paycheck from their employer. Jonathan Rothwell describes the difference in "The effects of COVID-19 on international labor markets: An update" (May 27, 2020, Brookings Institution). 
Here's a figure from Rothwell showing the change in workers getting unemployment benefits. Notice that it's way up in Canada, Israel, Ireland, and the US. But in France, Germany, Japan, and the Netherlands, there's essentially no rise in unemployment benefits. 
The reason is that in many countries, a number of workers are getting government assistance via their employers. In the unemployment stats for those countries, they are still counted as employed. Here's the figure from Rothwell: 
Another policy choice in the US has been to increase unemployment assistance substantially, so that it is closer to the actual pay that workers receive. Manuel Alcalá Kovalski and Louise Sheiner provide a quick background primer on "How does unemployment insurance work? And how is it changing during the coronavirus pandemic?" (Brookings Institution, April 7, 2020). As they write: 

Most state UI [Unemployment Insurance] systems replace about half of prior weekly earnings, up to some maximum. Before the expansion of UI during the coronavirus crisis, average weekly UI payments were $387 nationwide, ranging from an average of $215 per week in Mississippi to $550 per week in Massachusetts. … The CARES Act—a $2 trillion relief package aimed at alleviating the economic fallout from the COVID-19 pandemic—extends the duration of UI benefits by 13 weeks and increases payments by $600 per week through July 31st. This implies that maximum UI benefits will exceed 90 percent of average weekly wages in all states.

In other words, rather than keeping laid-off or furloughed workers on something close to their previous income via their employer, the US approach has been to support them via the unemployment insurance system. This has caused problems. For lower-wage US workers, the higher unemployment insurance payments cover a substantial part of their typical working income, in some cases more than 100% of their previous pay. They have a financial incentive not to return to work, even if their employer would like to re-open, until these benefits run out. Of course, other unemployed workers receiving these higher benefits may not have the option to return. In the meantime, other low-wage workers who have kept working in grocery stores, warehouses, delivery services, and from home are not receiving such payments at all. 
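The arithmetic behind the incentive problem can be sketched as follows. Assuming the roughly 50% base replacement rate described by Kovalski and Sheiner plus the flat $600 weekly CARES Act supplement, and ignoring state-specific benefit caps (the wage figures are illustrative round numbers, not official parameters):

```python
# Illustrative sketch of UI replacement rates under the CARES Act
# $600/week supplement. The 50% base replacement rate is the rough
# national pattern described in the text; real state UI systems
# also impose maximum weekly benefits, which are ignored here.
def replacement_rate(weekly_wage, base_fraction=0.5, supplement=600):
    """Fraction of prior weekly pay replaced by UI plus the supplement."""
    benefit = base_fraction * weekly_wage + supplement
    return benefit / weekly_wage

for wage in (500, 800, 1200, 2000):
    print(f"${wage}/week worker: {replacement_rate(wage):.0%} of prior pay")
```

A $500/week worker in this sketch receives 170% of prior pay while unemployed, a $1,200/week worker exactly 100%, and only at higher wages does the replacement rate fall well below prior earnings, which is the financial incentive the text describes.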
Given that the US policy choice was to funnel assistance to workers through the unemployment system, it's not a big shock that the unemployment rate rose so high, so fast. A near-term policy question is whether to extend the higher unemployment payments, perhaps by another six months. The Congressional Budget Office (June 4, 2020) has just released some estimates of the effects of that choice. CBO writes: 

Roughly five of every six recipients would receive benefits that exceeded the weekly amounts they could expect to earn from work during those six months. The amount, on average, that recipients spent on food, housing, and other goods and services would be closer to what they spent when employed than it would be if the increase in unemployment benefits was not extended. … In CBO’s assessment, the extension of the additional $600 per week would probably reduce employment in the second half of 2020, and it would reduce employment in calendar year 2021. The effects from reduced incentives to work would be larger than the boost to employment from increased overall demand for goods and services.

My own sense is that a blanket extension of the additional unemployment benefits is probably the politically easy choice. But the pragmatic choice would be to start thinking more carefully about how to structure these payments in a way that strikes a better balance between helping those who need it and preserving incentives to return to work. 

There is a sense in which the very high US unemployment rates both understate and overstate the condition of US labor markets. Unemployment rates, by definition, leave out those who are "out of the labor force," perhaps because added family responsibilities have made it too difficult to work, or because the bleak unemployment picture has made it difficult to seek a job. On the other side, some of the unemployed are hovering in place, ready and able to return to their previous employer, but receiving enhanced unemployment insurance payments in the meantime. 
Estimating these kinds of factors of course involves a bunch of judgment calls. But for an example of such analysis, Jason Furman and Wilson Powell III have written "The US unemployment rate is higher than it looks—and is still high if all furloughed workers returned" (Peterson Institute for International Economics, June 5, 2020). Furman and Powell look at the rise in the number of people "not at work for other reasons" and the rise in the number of people who are out of the labor force. They write: "Adjusting for these factors our 'realistic unemployment rate' was 17.1 percent in May, down from the April value but still higher than any other unemployment rate in over 70 years."
They also look at what the unemployment rate would be if those who say they are on layoff all returned to their jobs: "In total, an additional 14.5 million of the unemployed reported being on temporary layoff. If all of these people were immediately recalled back to work and the labor force adjusted accordingly—a very optimistic scenario—the 'full recall unemployment rate' would still be a very elevated 7.1 percent."
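The general shape of the Furman-Powell adjustment can be sketched as follows: add misclassified workers ("not at work for other reasons") to the count of unemployed, and add recent labor-force dropouts back to both the unemployed and the labor force. All the input figures below are illustrative round numbers in the ballpark of May 2020, not the authors' actual estimates, so the adjusted rate here only approximates their 17.1 percent:

```python
# Sketch of the adjustment logic described in the text. Inputs are in
# millions of people; figures are illustrative round numbers for
# May 2020, not Furman and Powell's actual estimates.
def adjusted_unemployment_rate(unemployed, labor_force,
                               misclassified, dropouts):
    """Unemployment rate (percent) after adding misclassified workers
    to the unemployed and dropouts back to the labor force."""
    u = unemployed + misclassified + dropouts
    lf = labor_force + dropouts
    return 100 * u / lf

official = 100 * 21.0 / 158.2  # roughly the official May 2020 rate
adjusted = adjusted_unemployment_rate(21.0, 158.2,
                                      misclassified=4.0, dropouts=4.0)
print(f"official: {official:.1f}%, adjusted: {adjusted:.1f}%")
```

Note that the dropouts raise both numerator and denominator, so the adjusted rate always exceeds the official one whenever either adjustment term is positive.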
Either way, the US economy is clearly in the midst of a recession. The question is whether it turns out to be a deep-at-the-start-but-short recession, or a deep-at-the-start-and-prolonged one. The eventual outcome is only partly about economic policy: the coronavirus and public health policy will also play a big role.