Reconsidering the "Washington Consensus" in the 21st Century

The “Washington consensus” has become a hissing and a buzzword over time. The usual implication is that free-market zealots in Washington, DC, told developing countries around the world that they would thrive if they followed free-market policies, but when developing countries tried out these policies, they were proven not to work. William Easterly, who has been a critic of the “Washington consensus” in the past, offers an update and some new thinking in “In Search of Reforms for Growth: New Stylized Facts on Policy and Growth Outcomes” (Cato Institute, Research Briefs #215, May 20, 2020). He summarizes some ideas from his NBER working paper of the same title (NBER Working Paper 26318, September 2019).

Before discussing what Easterly has to say, it’s perhaps useful to review how the “Washington consensus” terminology emerged. The name traces back to a 1989 seminar in which John Williamson tried to write down what he saw as the main steps that policy-makers in Washington, DC, thought were appropriate for countries in Latin America facing a debt crisis. As Williamson wrote in the resulting essay published in 1990:

No statement about how to deal with the debt crisis in Latin America would be complete without a call for the debtors to fulfill their part of the proposed bargain by “setting their houses in order,” “undertaking policy reforms,” or “submitting to strong conditionality.”
The question posed in this paper is what such phrases mean, and especially what they are generally interpreted as meaning in Washington. Thus the paper aims to set out what would be regarded in Washington as constituting a desirable set of economic policy reforms. … The Washington of this paper is both the political Washington of Congress and senior members of the administration and the technocratic Washington of the international financial institutions, the economic agencies of the US government, the Federal Reserve Board, and the think tanks. … Washington does not, of course, always practice what it preaches to foreigners.

Here’s how Williamson summed up the 10 reforms he listed in a follow-up essay in 2004:

  1. Fiscal Discipline. This was in the context of a region where almost all countries had run large deficits that led to balance of payments crises and high inflation that hit mainly the poor because the rich could park their money abroad.
  2. Reordering Public Expenditure Priorities. This suggested switching expenditure in a progrowth and propoor way, from things like nonmerit subsidies to basic health and education and infrastructure. It did not call for all the burden of achieving fiscal discipline to be placed on expenditure cuts; on the contrary, the intention was to be strictly neutral about the desirable size of the public sector, an issue on which even a hopeless consensus-seeker like me did not imagine that the battle had been resolved with the end of history that was being promulgated at the time. 
  3. Tax Reform. The aim was a tax system that would combine a broad tax base with moderate marginal tax rates. 
  4. Liberalizing Interest Rates. In retrospect I wish I had formulated this in a broader way as financial liberalization, stressed that views differed on how fast it should be achieved, and—especially—recognized the importance of accompanying financial liberalization with prudential supervision. 
  5. A Competitive Exchange Rate. I fear I indulged in wishful thinking in asserting that there was a consensus in favor of ensuring that the exchange rate would be competitive, which pretty much implies an intermediate regime; in fact Washington was already beginning to edge toward the two-corner doctrine which holds that a country must either fix firmly or else it must float “cleanly”. 
  6. Trade Liberalization. I acknowledged that there was a difference of view about how fast trade should be liberalized, but everyone agreed that was the appropriate direction in which to move. 
  7. Liberalization of Inward Foreign Direct Investment. I specifically did not include comprehensive capital account liberalization, because I did not believe that did or should command a consensus in Washington. 
  8. Privatization. As noted already, this was the one area in which what originated as a neoliberal idea had won broad acceptance. We have since been made very conscious that it matters a lot how privatization is done: it can be a highly corrupt process that transfers assets to a privileged elite for a fraction of their true value, but the evidence is that it brings benefits (especially in terms of improved service coverage) when done properly, and the privatized enterprise either sells into a competitive market or is properly regulated. 
  9. Deregulation. This focused specifically on easing barriers to entry and exit, not on abolishing regulations designed for safety or environmental reasons, or to govern prices in a non-competitive industry. 
  10. Property Rights. This was primarily about providing the informal sector with the ability to gain property rights at acceptable cost (inspired by Hernando de Soto’s analysis). 
There are really two main sets of complaints about the “Washington consensus” recommendations. One set is that a united DC-centered policy establishment was telling countries around the world what to do in an overly detailed and intrusive way. The other is that the recommendations weren’t showing meaningful results for improved economic growth in countries of Latin America, Africa, or elsewhere. As Easterly points out, these complaints were being voiced by the mid-1990s.

In response, the standard answers were that many of these countries had not actually adopted the list of 10 policy reforms. Moreover, the responses went, there is no instant-fix set of policies for raising economic growth, and these policies need to be maintained in place for years (or decades?) before their effects will be meaningful. And there the controversy (mostly) rested.

Easterly is (wisely) not seeking to refight the specific proposals of the Washington consensus. Instead, he is just pointing out some basic facts. The share of countries with extremely negative macroeconomic outcomes–like very high inflation, or very high black market premiums on the exchange rate for currency–diminished sharply in the 21st century, as compared to the 1980s and 1990s. Here are a couple of figures from Easterly’s NBER working paper:

These kinds of figures provide a context for the 1990 Washington consensus: for example, in the late 1980s and early 1990s, when between 25% and 40% of all countries in the world had inflation rates greater than 40%, getting that wildfire under control had a high level of importance.

Easterly also points out that as these extremely undesirable outcomes diminished in the 1990s, growth across the countries of Latin America and Africa improved in the 21st century. He thus offers this gentle reconsideration of the Washington consensus policies:

The new stylized facts seem most consistent with a position between complete dismissal and vindication of the Washington Consensus. … Even critics of the Washington Consensus might agree that extreme ranges of inflation, black market premiums, overvaluation, negative real interest rates, and repression of trade were undesirable. …

Despite these caveats, the new stylized facts are consistent with a more positive view of reform, compared to the previous consensus on doubting reform. The reform critics (including me) failed to emphasize the dangers of extreme policies in the previous reform literature or to note how common extreme policies were. Even if the reform movement was far from a complete shift to “free market policies,” it at least seems to have accomplished the elimination of the most extreme policy distortions of markets, which is associated with the revival of growth in African, Latin American, and other countries that had extreme policies. 

Is a Revolution in Biology-based Technology on the Way?

Sometimes, a person needs a change from feeling rotten about the pandemic and the economy. One needs a sense that, if not right away, the future holds some imaginative and exciting possibilities. A group at the McKinsey Global Institute–Michael Chui, Matthias Evers, James Manyika, Alice Zheng, and Travers Nisbet–have been working for about a year on their report, “The Bio Revolution: Innovations transforming economies, societies, and our lives” (May 2020). It’s got a last-minute text box about COVID-19, emphasizing the speed with which biomedical research has been able to move into action in looking for vaccines and treatments. But the heart of the report is that the authors looked at the current state of biotech, and came up with a list of about 400 “cases that are scientifically conceivable today and that could plausibly be commercialized by 2050. … Over the next ten to 20 years, we estimate that these applications alone could have direct economic impact of between $2 trillion and $4 trillion globally per year.”

For me, reports like this aren’t about the economic projections, which are admittedly shaky, but rather are a way of emphasizing the importance of increasing national research and development efforts across a spectrum of technologies. As the authors point out, the collapsing costs of sequencing and editing genes are reshaping what’s possible with biotech. Here are some of the possibilities they discuss.

When it comes to physical materials, the report notes that in the long run:

As much as 60 percent of the physical inputs to the global economy could, in principle, be produced biologically. Our analysis suggests that around one-third of these inputs are biological materials, such as wood, cotton, and animals bred for food. For these materials, innovations can improve upon existing production processes. For instance, squalene, a moisturizer used in skin-care products, is traditionally derived from shark liver oil and can now be produced more sustainably through fermentation of genetically engineered yeast. The remaining two-thirds are not biological materials—examples include plastics and aviation fuels—but could, in principle, be produced using innovative biological processes or be replaced with substitutes using bio innovations. For example, nylon is already being made using genetically engineered microorganisms instead of petrochemicals. To be clear, reaching the full potential to produce these inputs biologically is a long way off, but even modest progress toward it could transform supply and demand and economics of, and participants in, the provision of physical inputs.  …

Biology has the potential in the future to determine what we eat, what we wear, the products we put on our skin, and the way we build our physical world. Significant potential exists to improve the characteristics of materials, reduce the emissions profile of manufacturing and processing, and shorten value chains. Fermentation, for centuries used to make bread and brew beer, is now being used to create fabrics such as artificial spider silk. Biology is increasingly being used to create novel materials that can raise quality, introduce entirely new capabilities, be biodegradable, and be produced in a way that generates significantly less carbon emissions. Mushroom roots rather than animal hide can be used to make leather. Plastics can be made with yeast instead of petrochemicals. …

A significant share of materials developed through biological means are biodegradable and generate less carbon during manufacture and processing than traditional materials. New bioroutes are being developed to produce chemicals such as fertilizers and pesticides. …

A deeper understanding of human genetics offers potential for improvements in health care, where the social benefits go well beyond higher economic output. The report estimates that there are 10,000 human diseases caused by a single gene.

A new wave of innovation is under way that includes cell, gene, RNA, and microbiome therapies to treat or prevent disease, innovations in reproductive medicine such as carrier screening, and improvements to drug development and delivery.  Many more options are being explored and becoming available to treat monogenic (caused by mutations in a single gene) diseases such as sickle cell anemia, polygenic diseases (caused by multiple genes) such as cardiovascular disease, and infectious diseases such as malaria. We estimate between 1 and 3 percent of the total global burden of disease could be reduced in the next ten to 20 years from these applications—roughly the equivalent of eliminating the global disease burden of lung cancer, breast cancer, and prostate cancer combined. Over time, if the full potential is captured, 45 percent of the global disease burden could be addressed using science that is conceivable today. …

An estimated 700,000 deaths globally every year are the result of vector-borne infectious diseases. Until recently, controlling these infectious diseases by altering the genomes of the entire population of the vectors was considered difficult because the vectors reproduce in the wild and lose any genetic alteration within a few generations. However, with the advent of CRISPR, gene drives with close to 100 percent probability of transmission are within reach. This would offer a permanent solution to preventing most vector-borne diseases, including malaria, dengue fever, schistosomiasis, and Lyme disease.

The potential gains for agriculture as the global population heads toward 10 billion and higher seem pretty important, too.

Applications such as low-cost, high-throughput microarrays have vastly increased the amount of plant and animal sequencing data, enabling lower-cost artificial selection of desirable traits based on genetic markers in both plants and animals. This is known as marker-assisted breeding and is many times quicker than traditional selective breeding methods. In addition, in the 1990s, genetic engineering emerged commercially to improve the traits of plants (such as yields and input productivity) beyond traditional breeding. Historically, the first wave of genetically engineered crops has been referred to as genetically modified organisms (GMOs); these are organisms with foreign (transgenic) genetic material introduced. Now, recent advances in genetic engineering (such as the emergence of CRISPR) have enabled highly specific cisgenic changes (using genes from sexually compatible plants) and intragenic changes (altering gene combinations and regulatory sequences belonging to the recipient plant). Other innovations in this domain include using the microbiome of plants, soil, animals, and water to improve the quality and productivity of agricultural production; and the development of alternative proteins, including lab-grown meat, which could take pressure off the environment from traditional livestock and seafood.

More? Direct-to-consumer genetic testing is already a reality as a consumer product, but it will start to be combined with other goods and services based on your personal genetic profile: what vitamins and probiotics to take, meal services, cosmetics, whitening teeth, monitoring health, and more.

Pushing back against rising carbon emissions?

Genetically engineered plants can potentially store more CO2 for longer periods than their natural counterparts. Plants normally take in CO2 from the atmosphere and store carbon in their roots. The Harnessing Plant Initiative at the Salk Institute is using gene editing to create plants with deeper and more extensive root systems that can store more carbon than typical plants. These roots are also engineered to produce more suberin or cork, a naturally occurring carbon-rich substance found in roots that absorbs carbon, resists decomposition (which releases carbon back into the atmosphere), may enrich soil, and helps plants resist stress. When these plants die, they release less carbon back into the atmosphere than conventional plants. …

Algae, present throughout the biosphere but particularly in marine and freshwater environments, are among the most efficient organisms for carbon sequestration and photosynthesis; they are generally considered photosynthetically more efficient than terrestrial plants. Potential uses of microalgal biomass after sequestration could include biodiesel production, fodder for livestock, and production of colorants and vitamins. Using microalgae to sequester carbon has a number of advantages. They do not require arable land and are capable of surviving well in places that other crop plants cannot inhabit, such as saline-alkaline water, land, and wastewater. Because microalgae are tiny, they can be placed virtually anywhere, including cities. They also grow rapidly. Most important, their CO2 fixation efficiency has been estimated at ten to 50 times higher than that of terrestrial plants.

Using biotech to remediate earlier environmental damage or aid recycling?

One example is genetically engineered microbes that can be used to break down waste and toxins, and could, for instance, be used to reclaim mines. Some headway is being made in using microbes to recycle textiles. Processing cotton, for instance, is highly resource-intensive, and dwindling resources are constraining the production of petroleum-based fibers such as acrylic, polyester, nylon, and spandex. There is a great deal of waste, with worn-out and damaged clothes often thrown away rather than repaired. Less than 1 percent of the material used to produce clothing is recycled into new clothing, representing a loss of more than $100 billion a year. Los Angeles–based Ambercycle has genetically engineered microbes to digest polymers from old textiles and convert them into polymers that can be spun into yarns. Engineered microbes can also assist in the treatment of wastewater. In the United States, drinking water and wastewater systems account for between 3 and 4 percent of energy use and emit more than 45 million tons of GHG a year. Microbes—also known as microbial fuel cells—can convert sewage into clean water as well as generate the electricity that powers the process.

What about longer-run possibilities, still very much under research, that might bear fruit out beyond 2050?

  • “Biobatteries are essentially fuel cells that use enzymes to produce electricity from sugar. Interest is growing in their ability to convert easily storable fuel found in everyday sugar into electricity and the potential energy density this would provide. At 596 ampere hours per kilogram, the density of sugar would be ten times that of current lithium-ion batteries.”
  • “Biocomputers that employ biology to mimic silicon, including the use of DNA to store data, are being researched. DNA is about one million times denser than hard-disk storage; technically, one kilogram of DNA could store the entirety of the world’s data (as of 2016).”
  • Of course, if people are going to live in space or on other planets, biotech will be of central importance. 

If your ideas about the technologies of the future begin and end with faster computing power, you are not dreaming big enough.

A Wake-Up Call about Infections in Long-Term Care Facilities

Those who live in long-term care facilities are by definition more likely to be older and facing multiple health risks. Thus, it’s not unexpected that a high proportion of those dying from the coronavirus live in long-term care facilities. But the problem of infections and deaths in long-term care facilities predates the coronavirus pandemic, and will likely outlast it, too. Here’s some text from the Centers for Disease Control website:

Nursing homes, skilled nursing facilities, and assisted living facilities (collectively known as long-term care facilities, LTCFs) provide a variety of services, both medical and personal care, to people who are unable to manage independently in the community. Over 4 million Americans are admitted to or reside in nursing homes and skilled nursing facilities each year and nearly one million persons reside in assisted living facilities. Data about infections in LTCFs are limited, but it has been estimated in the medical literature that:

  • 1 to 3 million serious infections occur every year in these facilities.
  • Infections include urinary tract infection, diarrheal diseases, antibiotic-resistant staph infections and many others.
  • Infections are a major cause of hospitalization and death; as many as 380,000 people die of the infections in LTCFs every year.

If you’re a number-curious person like me, you immediately think, “Where does that estimate of 380,000 deaths come from?” A bit of searching unearths that the 380,000 is from the National Action Plan to Prevent Health Care-Associated Infections, a title which has the nice ring of a program that is already well underway. But then you look at Phase Three: Long-Term Care Facilities, and it takes you to a report called “Chapter 8: Long-Term Care Facilities,” which is dated April 2013. The 2013 report reads:

More recent estimates of the rates of HAIs [health-care associated infections] occurring in NH/SNF [nursing home/skilled nursing facility] residents range widely from 1.4 to 5.2 infections per 1,000 resident-care days.2,3 Extrapolations of these rates to the approximately 1.5 million U.S. adults living in NHs/SNFs suggest a range from 765,000 to 2.8 million infections occurring in U.S. NHs/SNFs every year.4 Given the rising number of individuals receiving more complex medical care in NHs/SNFs, these numbers might underestimate the true magnitude of HAIs in this setting. Additionally, morbidity and mortality due to HAIs in LTCFs [long-term care facilities] are substantial. Infections are among the most frequent causes of transfer from LTCFs to acute care hospitals and 30-day hospital readmissions.5,6 Data from older studies conservatively estimate that infections in the NH/SNF population could account for more than 150,000 hospitalizations each year and a resultant $673 million in additional health care costs.5 Infections also have been associated with increased mortality in this population.4,7,8 Extrapolation based on estimates from older publications suggests that infections could result in as many as 380,000 deaths among NH/SNF residents every year.5
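The extrapolation in that passage is straightforward arithmetic, and it is easy to check. Here is a minimal sketch (assuming, as the quoted report does, roughly 1.5 million NH/SNF residents and rates applied over a full year):

```python
# Check of the report's extrapolation from infection rates to annual counts.
# Assumptions (from the quoted passage): ~1.5 million U.S. NH/SNF residents,
# and HAI rates of 1.4 to 5.2 infections per 1,000 resident-care days.
residents = 1_500_000
days_per_year = 365

for rate_per_1000_days in (1.4, 5.2):
    infections_per_year = rate_per_1000_days / 1000 * residents * days_per_year
    print(f"{rate_per_1000_days} per 1,000 resident-days -> "
          f"{infections_per_year:,.0f} infections per year")
# Output: ~766,500 and ~2,847,000 -- matching the report's stated range of
# "765,000 to 2.8 million infections" up to rounding.
```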

Because I am on a hunt for the source of the estimate of 380,000 deaths, I take a look at note 5, which refers to a 1991 study: Teresi JA, Holmes D, Bloom HG, Monaco C & Rosen S. Factors differentiating hospital transfers from long-term care facilities with high and low transfer rates. Gerontologist. Dec 1991; 31(6):795-806.  

So to summarize the bidding: here in 2020, in the midst of a pandemic in which infections are causing particular harm in nursing homes, the CDC website is quoting estimates of deaths from a study published in 2013, and the methodology for estimating those deaths relies on an extrapolation from a study published three decades ago, in 1991.

I’m sure there are many good people making substantial efforts to reduce infections in long-term care facilities, often at meaningful risk to their own health. But ultimately, the degree of success in reducing infections isn’t measured by good intentions or efforts: it’s measured by actual counts of infections and deaths. And when the CDC describes estimates of “serious infections” that vary by a factor of three, and estimates of deaths based on extrapolations from a 1991 study, it seems pretty clear that the statistics about infections in long-term care facilities are not well-measured or consistent over time.

This problem of infections in long-term care facilities will matter well beyond the pandemic. Populations are aging everywhere: in the United States, 3.8% of the population is currently over 80, but by 2050 that share will likely rise to 8.2%. The demand for long-term care is likely to rise accordingly, which in turn will raise difficult questions about where the workers and the financial support for such facilities will come from. Here, I would emphasize that it will take redoubled efforts if the future rise in the number of people in long-term care is not to be matched by a similar rise in the number of people subject to infections, including when (not if) future pandemics arrive.

The Bad News about the Big Jump in Average Hourly Wages

Average hourly wages in April 2020 were 7.9% higher than a year earlier, a very high jump. And as a moment’s reflection will suggest, this is actually part of the terrible news for the US labor market. Of course, it’s not true that the average hourly worker is getting a raise of 7.9%. Instead, the issue is that only workers who have jobs are included in the average. So the big jump in average hourly wages is actually telling us that a much higher proportion of workers with below-average wages have lost their jobs, so that the average wage of the hourly workers who still have jobs has risen.

Here are a couple of illustrative figures, taken from the always-useful US Economy in a Snapshot published monthly by the Federal Reserve Bank of New York. The blue line in this figure shows the rise in average hourly wages over the previous 12 months. The red line shows a measure of inflation, the Employment Cost Index. There has been lots of concern in the last few years about why wages were not rising more quickly, and as you can see, the increase in average earnings had pulled ahead of inflation rates in 2019.

However, the US economy is of course now experiencing a sharp fall in the number of hours worked. As the NY Fed notes, nonfarm payrolls fell by 20 million jobs in April, the largest fall in the history of this data series. Total weekly hours worked in the private sector fell by 14.9% in April compared with 12 months earlier. No matter how many checks and loans are handed out, the US economy will not have recovered until hours worked returns to normal levels.

You will sometimes hear statistics people talk about a “composition effect,” which just means that if you are comparing a group over time, you need to beware of the possibility that the composition of the group is changing. In this case, if you compare the average hourly earnings of the group that is working for hourly wages over time, you need to beware that the composition of that group may have systematically shifted in some way. Here, the bottom of the labor market has fallen out for hourly workers who had been receiving below-average hourly wages.
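To see the composition effect in miniature, consider a toy calculation (all numbers invented purely for illustration, not drawn from BLS data): if job losses fall mostly on low-wage workers, the average wage of those still employed rises even though nobody gets a raise.

```python
# Toy illustration of the composition effect; every number here is hypothetical.
low_wage, high_wage = 12.0, 30.0   # hourly wages of two groups of workers
n_low, n_high = 60, 40             # employment counts before the downturn

avg_before = (low_wage * n_low + high_wage * n_high) / (n_low + n_high)

# Suppose the downturn eliminates a third of the low-wage jobs and none of
# the high-wage jobs.
n_low_after = 40
avg_after = (low_wage * n_low_after + high_wage * n_high) / (n_low_after + n_high)

print(f"average hourly wage before: ${avg_before:.2f}")  # $19.20
print(f"average hourly wage after:  ${avg_after:.2f}")   # $21.00
# The measured average rises about 9%, even though no individual wage changed.
```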

There’s nothing nefarious about these statistics. The average hourly wage is a standard statistic published every month. The government statisticians just publish the numbers, as they should. It’s up to citizens to understand what they mean.

Interview with Emi Nakamura: Price Dynamics, Monetary and Fiscal Policy, and COVID-19 Adjustments

Douglas Clement at the Minneapolis Federal Reserve offers one of his characteristically excellent interviews, this one with Emi Nakamura, titled “On price dynamics, monetary policy, and this ‘scary moment in history’” (May 6, 2020, Federal Reserve Bank of Minneapolis). Here are a few of Nakamura’s comments that caught my eye, but there’s much more in the full interview.

On the current macroeconomic situation

It’s a scary moment in history. I thought the Great Recession that started in 2007 was going to be the big macroeconomic event of my lifetime, but here we are again, little more than a decade later. … More than other recessions, this particular episode feels like it fits into the classic macroeconomic framework of dividing things into “shocks” and “propagation”—mainly because in this case, it’s blindingly clear what the shock is and that it is completely unrelated to other forces in the economy. In the financial crisis, there was much more of a question as to whether things were building up in the previous decade—such as debt and a housing bubble—that eventually came to a head in the crisis. But here that’s clearly not the case.

Price rigidity at times of macroeconomic adjustment

You might think that it’s very easy to go out there and figure out how much rigidity there is in prices. But the reality was that at least until 20 years ago, it was pretty hard to get broad-based price data. In principle, you could go into any store and see what the prices were, but the data just weren’t available to researchers tabulated in a systematic way. …

Once macroeconomists started looking at data for this broad cross section of goods, it was obvious that pricing behavior was a lot more complicated in the real world than had been assumed. If you look at, say, soft drink prices, they change all the time. But the question macroeconomists want to answer is more nuanced. We know that Coke and Pepsi go on sale a lot. But is that really a response to macroeconomic phenomena, or is that something that is, in some sense, on autopilot or preprogrammed? Another question is: When you see a price change, is it a response, in some sense, to macroeconomic conditions? We found that, often, the price is simply going back to exactly the same price as before the sale. That suggests that the responsiveness to macroeconomic conditions associated with these sales was fairly limited. … 

One of the things that’s been very striking to me in the recent period of the COVID-19 crisis is that even with incredible runs on grocery products, when I order my online groceries, there are still things on sale. Even with a shock as big as the COVID shock, my guess is that these things take time to adjust. … The COVID-19 crisis can be viewed as a prime example of the kind of negative productivity shock that neoclassical economists have traditionally focused on. But an economy with price rigidity responds much less efficiently to that kind of an adverse shock than if prices and wages were continuously adjusting in an optimal way.

What does the market learn from Fed announcements of changes in monetary policy?

The basic challenge in estimating the effects of monetary policy is that most monetary policy announcements happen for a reason. For example, the Fed has just lowered interest rates by a historic amount. Obviously, this was not a random event. It happened because of this massively negative economic news. When you’re trying to estimate the consequences of a monetary policy shock, the big challenge is that you don’t really have randomized experiments, so establishing causality is difficult.

Looking at interest rate movements at the exact time of monetary policy announcements is a way of estimating the pure effect of the monetary policy action. … Intuitively, we’re trying to get as close as possible to a randomized experiment. Before the monetary policy announcement, people already know if, say, negative news has come out about the economy. The only new thing that they’re learning in these 30 minutes of the [time window around the monetary policy] announcement is how the Fed actually chooses to respond. Perhaps the Fed interprets the data a little bit more optimistically or pessimistically than the private sector. Perhaps their outlook is a little more hawkish on inflation. Those are the things that market participants are learning about at the time of the announcement. The idea is to isolate the effects of the monetary policy announcement from the effects of all the macroeconomic news that preceded it. Of course, you have to have very high-frequency data to do this, and most of this comes from financial markets. …

The results completely surprised us. The conventional view of monetary policy is that if the Fed unexpectedly lowers interest rates, this will increase expected inflation. But we found that this response was extremely muted, particularly in the short run. The financial markets seemed to believe in a hyper-Keynesian view of the economy. Even in response to a significant expansionary monetary shock, there was very little response priced into bond markets of a change in expected inflation. … 

But, then, we were presenting the paper in England, and I recall that Marco Bassetto asked us to run one more regression looking at how forecasts by professional forecasters of GDP growth responded to monetary shocks. The conventional view would be that an expansionary monetary policy shock would yield forecasts of higher growth. When we ran the regression, the results actually went in the opposite direction from what we were expecting! An expansionary monetary shock was actually associated with a decrease in growth expectations, not the reverse! … When Jay Powell or Janet Yellen or Ben Bernanke says, for example, “The economy is really in a crisis. We think we need to lower interest rates” … perhaps the private sector thinks they can learn something about the fundamentals of the economy from the Fed’s announcements. This can explain why a big, unexpected reduction in interest rates could actually have a negative, as opposed to a positive, effect on those expectations.
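As a concrete illustration of the high-frequency identification strategy Nakamura describes above, here is a minimal sketch in Python. The data are simulated stand-ins (the actual studies use tick-level interest rate futures and bond prices around FOMC announcements); the point is only the mechanics: measure the policy surprise as the jump in a short-rate instrument within the tight announcement window, then regress the same-window change in a market-based expectation on that surprise.

```python
import numpy as np

# Simulated stand-in data, one observation per policy announcement.
rng = np.random.default_rng(0)
n_announcements = 120

# Policy surprise: change in a near-term fed funds futures rate (percentage
# points) inside the ~30-minute window around the announcement.
surprise = rng.normal(0.0, 0.05, size=n_announcements)

# Same-window change in a market-implied measure of expected inflation
# (generated here with a small true response of 0.1 plus noise).
d_expected_inflation = 0.1 * surprise + rng.normal(0.0, 0.02, size=n_announcements)

# OLS slope: response of expected inflation per point of policy surprise.
# The identifying assumption: within the tight window, the announcement is the
# only systematic news, so the surprise is as good as randomly assigned.
slope, intercept = np.polyfit(surprise, d_expected_inflation, 1)
print(f"estimated response of expected inflation: {slope:.3f}")
```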

The Plucking Model of Unemployment

A feature emphasized by Milton Friedman is that the unemployment rate doesn’t really look like a series that fluctuates symmetrically around an equilibrium “natural rate” of unemployment. It looks more like the “natural rate” is a lower bound on unemployment and that unemployment periodically gets “plucked” upward from this level by adverse shocks. Certainly, the current recession feels like an example of this phenomenon.

Another thing we emphasize is that if you look at the unemployment series, it appears incredibly smooth and persistent. When unemployment starts to rise, on average, it takes a long time to get back to where it was before. This is something that isn’t well explained by the current generation of macroeconomic models of unemployment, but it’s clearly front and center in terms of many economists’ thinking about the policy responses [to COVID-19]. A lot of the policy discussions have to do with trying to preserve links between workers and firms, and my sense is the goal here is to avoid the kind of persistent changes in unemployment that we’ve seen in other recessions.

For more on Nakamura and her work, the Journal of Economic Perspectives has a couple of articles to offer.

What Do We Know about Progress Toward a COVID-19 Vaccine?

There seem to me a few salient facts about the search for a COVID-19 vaccine.

1) According to a May 11 count by the World Health Organization, there are now 8 vaccine candidates in clinical trials, and an additional 102 vaccines in pre-clinical evaluation. Seems like an encouragingly high number.

2) Influenza viruses are different from coronaviruses. We do have vaccines for many influenza viruses–that’s the “flu shot” many of us get each fall. But there has never been a vaccine developed for a coronavirus. The two previous outbreaks of a coronavirus–SARS (severe acute respiratory syndrome) in 2002-3 and MERS (Middle East respiratory syndrome) in 2012–both saw substantial efforts to develop such a vaccine, but neither one succeeded. Eriko Padron-Regalado discusses “Vaccines for SARS-CoV-2: Lessons from Other Coronavirus Strains” in the April 23 issue of Infectious Diseases and Therapy.

3) It’s not 100% clear to me why the previous efforts to develop a coronavirus vaccine for SARS or MERS failed. Some of the discussion seems to suggest that there wasn’t a strong commercial reason to develop such a vaccine. The SARS outbreak back in 2002-3 died out. While some cases of MERS still happen, they are relatively few and seem limited to Saudi Arabia and nearby areas in the Middle East. Thus, one possible answer for the lack of a previous coronavirus vaccine is a lack of effort–an answer which would not reflect well on those who provide funding and set priorities for biomedical research.

4) The other possible answer is that it may be hard to develop that first coronavirus vaccine, which is why dozens of previous efforts to do so with SARS and MERS failed. Padron-Regalado put it this way (boldface is mine): “In vaccine development, it is ideal that a vaccine provides long-term protection. Whether long-term protection can be achieved by means of vaccination or exposure to coronaviruses is under debate, and more information is needed in this regard.” A recent news story talked to researchers who tried to find a SARS vaccine, and summarizes their sentiments with comments like: “But there’s no guarantee, experts say, that a fully effective COVID-19 vaccine is possible. … “Some viruses are very easy to make a vaccine for, and some are very complicated,” says Adolfo García-Sastre, director of the Global Health and Emerging Pathogens Institute at the Icahn School of Medicine at Mount Sinai. “It depends on the specific characteristics of how the virus infects.” Unfortunately, it seems that COVID-19 is on the difficult end of the scale. … At this point, it’s not a given that even an imperfect vaccine is a slam dunk.”

5) At best, vaccines take time to develop, especially if you are thinking about giving them to a very wide population group with different ages, genetics, and pre-existing conditions.

So by all means, research on a vaccine for the novel coronavirus should proceed full speed ahead. In addition, even if we reach a point where the disease itself seems to be fading, that research should keep proceeding with the same urgency. That research may offer protection against a future wave of the virus in the next year or two. Or it may offer scientific insights that will help with a vaccine targeted at a future outbreak of a different coronavirus. 

But we can’t reasonably make current public policy about stay-at-home orders, business shutdowns, social distancing, or other such steps based on a hope or expectation that a coronavirus vaccine will be ready anytime soon.

A Century of Suffrage for American Women

In August 1920, the 19th Amendment to the US Constitution became law when it was ratified by the state of Tennessee. It concisely states: “The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of sex. Congress shall have power to enforce this article by appropriate legislation.”

The event raises two broad sets of questions. First, why would an in-group with voting power choose to weaken its power by extending that power to others? Second, how has the vote of women changed actual US political patterns? After all, there was some question back in 1920 about how many women would actually vote. If women as a group voted in pretty much the same patterns as their brothers, husbands, and fathers, would women’s suffrage leave the overall electoral results mostly unchanged?

Two papers in the Spring 2020 issue of the Journal of Economic Perspectives tackle these issues: one by Carolyn Moehling and Melissa Thomasson on the path to suffrage, and one by Elizabeth Cascio and Na’ama Shenhav on how women have voted in the century since.

Moehling and Thomasson trace the history of US women’s suffrage from the start of the country up through 1920, and link that history to various theories of political economy. As they write: “Theories of suffrage extension seek to explain why groups in power would choose to share this power with the disenfranchised. All of these theories predict that men extend the franchise to women when the benefits of doing so outweigh the costs, but they differ in the benefits and costs they consider.” The article contains numerous tidbits that were new to me.

For example, I had not known that New Jersey defined the right to vote in terms of property-owners at the start of the country, which allowed the relatively small numbers of women and non-white men who owned property to vote–and sometimes their votes even swung close elections. However, women in New Jersey lost the vote in 1807. I also had not known that when the Seneca Falls Convention of 1848 drafted its famous “Declaration of Sentiments” for women’s rights, the right to vote was the only element that was NOT unanimously adopted, out of a fear that the demand was too extreme and might make the movement look ridiculous.

As one would expect, the path to women’s suffrage in 1920 involved a number of interacting factors. Some of the important factors include:

  1. The sheer persistence of the suffrage movement over the decades. What was unthinkable even to some proponents in 1848 had become at least a thinkable controversy several decades later, after years of (mostly failed) demonstrations, referenda, and proposed legislation.
  2. Many western US states granted suffrage to women for state-level elections. Part of the reason seems to be that women were scarce in these jurisdictions. Another was a hope by certain politicians that women would be a reliable vote in favor of community-building efforts like schools and sanitation. A third was that in these states, women’s suffrage was not blocked by the state constitution, and thus could become law with a simple vote of the legislature.
  3. Especially after about 1890, the women’s suffrage movement became more strategic about forming coalitions. For example, large labor unions had growing numbers of women as members by the 1890s, and began to endorse women’s suffrage. The same was true of groups supporting farmers. Prohibition groups like the Women’s Christian Temperance Union, which had always included a number of women who did not support suffrage, became active in the pro-suffrage movement.
  4. By about 1910, states were taking the lead on women’s suffrage. In 1912, Theodore Roosevelt’s Bull Moose party endorsed women’s suffrage. Moehling and Thomasson: “The suffrage parade in New York City in 1915 involved an estimated 20,000 to 25,000 women, including 74 women on horseback, 57 marching bands, and 145 decorated automobiles (McCammon 2003, 791). These parades brought together suffrage supporters from across the political and economic spectrum and put this diversity on display to the public. The scale and spectacle of the parades led to coverage by the press, greatly expanding public awareness of the movement. The doldrums of the suffrage movement ended in 1910 when the state of Washington granted women full suffrage. California followed in 1911, and Arizona, Kansas, and Oregon followed in 1912. … With the exception of New Mexico, all of the states west of the Rocky Mountains extended full voting rights to women by 1914.”
  5. The prominent role of women in the World War I effort also helped shift support in favor of suffrage. 
One striking outcome of this history is that the 19th Amendment was passed by Congress on June 4, 1919, and the necessary states had ratified the amendment barely more than a year later, on August 18, 1920. It took a long time for women’s suffrage to happen, but when it did happen, it had a deep level of broader legitimacy because of that history, and because of the number of powerful political, economic, and social forces aligned in support.

What happened in the decades after women’s suffrage was adopted? Cascio and Shenhav piece together the story. They offer the reminder that as late as 1940, the prominent pollster George Gallup was quoted as saying: “How will [women] vote on election day? Just exactly as they were told the night before.” Of course, it wasn’t that simple.

They describe how the voting participation rates of women rose over time, and have been higher than men’s rates of voting since 1980. They also present evidence that in the last half-century or so, women have become more likely to identify with the Democratic Party. They argue that these patterns seem connected to rising rates of high school and then college graduation among women. They also argue that although partisan voting differences between men and women have become larger in recent decades, there is little evidence that this greater partisanship reflects a widening gap in policy preferences between men and women. Cascio and Shenhav summarize:

The female voter has come a long way since the passage of the first suffrage laws at the turn of the century and since the passage of the Nineteenth Amendment in 1920 extended the franchise (at least in principle) to women nationwide. We trace the evolution of the sex gap in voter turnout and partisanship over the last 80 years using a novel dataset of voter surveys. We find that women closed a 10 percentage point gap in voter turnout over the 40 years from 1940 to 1980 and over the next 40 years from 1980 to present gained more than a 4 percentage point advantage in turnout over men. Additionally, while women and men had similar patterns of party support in 1940, over the last half-century, a 12 percentage point sex gap has emerged in the probability of women and men identifying with the Democratic Party.

What accounts for these changes? We argue that the relative rise in women’s turnout is largely explained by the replacement of older, low-participation cohorts with younger, high-participation cohorts. Descriptively, we find that these cohort effects are associated with women’s differential response to increasing rates of high school graduation, with less explanatory power for rising rates of college attendance. In contrast, the rise in women’s support for Democrats appears to have been common to all cohorts. At least since the 1970s, this seems to be best explained by the trend towards greater polarization of political parties, as we find little evidence of any change in the gap in policy preferences across men and women.

Evolving Choices about Easing the Shutdown

I’ll just say up front that the question of exactly when to ease the stay-at-home orders and shutdown in response to the coronavirus is a hard one, and I don’t have a firm answer. But here’s a framework for thinking through the tradeoffs.

Here are a couple of figures from Joseph Pagliari in a short and aptly-titled essay, “No one has all the answers for COVID-19 policy: The trade-offs are evident, but the costs involved are ambiguous” (Chicago Booth Review, May 4, 2020).

The horizontal axis is the total length of the stay-at-home orders. The vertical axis measures human and economic costs. The dark blue line shows that the stay-at-home order at the start has a relatively big effect on reducing COVID-19 costs (including both health costs and costs of delivering health care). However, a short stay-at-home period of, say, a couple of weeks, would have imposed a relatively small level of “quarantine-related gross costs,” shown by the black line. As the stay-at-home period gets longer, the marginal benefits to improved health are somewhat lower, and the quarantine-related costs start rising. Government can take some actions with spending and lending to reduce these quarantine-related costs, so that “net costs” are the green line rather than the black line–but the costs still rise.

Now, combine the cost lines to get a picture of “total cost.” At the start of the pandemic, the total costs were mostly the COVID-19 costs. But at some point, the quarantine-related costs get large.
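As a numerical sketch of this framework, consider the toy model below. The functional forms and parameters are invented purely for illustration (Pagliari’s figures carry no numbers): COVID-19 costs that decline with the length of the shutdown, quarantine costs that rise and accelerate, and a U-shaped total cost with an interior minimum.

```python
import numpy as np

weeks = np.arange(0, 27)                      # candidate shutdown lengths
covid_cost = 100 * np.exp(-0.15 * weeks)      # declining COVID-19 costs (toy form)
quarantine_cost = 0.15 * weeks ** 2           # rising, accelerating quarantine costs
total_cost = covid_cost + quarantine_cost     # the U-shaped "total cost" line

best = weeks[np.argmin(total_cost)]
print(f"cost-minimizing shutdown length in this toy model: {best} weeks")
# The specific minimum is meaningless (the parameters are made up); the point of
# the figures is that both extremes -- no shutdown and an indefinite one -- are
# costlier than something in between, and where the minimum lies depends on
# curves we can only estimate roughly.
```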

Some comments:

1) These figures obviously have a high level of generality. There are no numbers on the axes! One might plausibly say that we aren’t at either the extreme left or extreme right of the diagram, but where we are in the middle is unclear. There’s also no presumption in the figure that the social goal should be to choose the length of the stay-at-home order where total costs are lowest, or where the dark-blue and light-blue lines cross. Any decision would have to place a value on the health benefits being received, and that value isn’t shown here. A substantial degree of humility is appropriate both for those imposing stay-at-home and shutdown orders and for those opposing them.

2) Some people object to having human and economic costs on the same scale. I strongly disagree. I remember the aftermath of the Great Recession, when monthly unemployment rates were 9% or higher for more than two years. If anyone had said at that time, “Well, these are just economic costs, not all that important compared to actual human costs,” they would have been pilloried–and rightly so. The unemployment rate for April 2020 was 14.7%, and it may well be higher in May. Dismissing this as “an economic cost” seems benighted.

It’s easy enough to find studies showing the human costs of an economic slowdown. As one example, a couple of foundations–the Well Being Trust and the Robert Graham Center–published a report on “Projected Deaths of Despair from COVID-19” (May 8, 2020). Making projections over the next decade, they write: “Across the nine different scenarios, the additional deaths of despair range from 27,644 (quick recovery, smallest impact of unemployment on deaths of despair) to 154,037 (slow recovery, greatest impact of unemployment on deaths of despair).” I don’t view these estimates as definitive, but they do illustrate that the quarantine costs are not just a matter of lower incomes, but are tightly connected to tens of thousands of additional deaths, as well.

Moreover, there are lots of health effects that these estimates don’t take into account, like the children who are missing meals because the schools are closed, or not receiving vaccinations because of the stay-at-home policy focus, or whose families are falling into poverty. From an international perspective, the United Nations published a “Policy Brief: The Impact of COVID-19 on Children” (April 15, 2020). It points out that 188 countries around the world have interrupted school, which affects 1.5 billion children. It also points out that the economic effects of the pandemic and the shelter-in-place rules are going to tip tens of millions of children around the world–most in countries that lack substantial social welfare programs–into extreme poverty. Rates of infant mortality and maternal mortality seem certain to rise. The tradeoffs of stay-at-home policies may not look the same across all countries of the world.

3) Some people will feel uncomfortable with the shape of the curves in the diagrams. Why do the COVID-19 health care costs fall, while the quarantine costs rise? Actually, the shape of the curves is based on the idea that the government has at least a decent sense of what it’s doing. A supporter of government actions presumably believes that an effort was made to choose approaches that started off with the biggest gains to health and the smallest costs to the rest of the economy. Critics of government actions might counter with claims that the gains to health have been modest, or that other policies could have achieved similar gains with smaller quarantine costs to the economy.

4) One reason why quarantine costs rise is that if the stay-at-home order had lasted only a week or so, then having people return to their previous jobs would have been relatively easy. But many parts of the United States are now in their ninth week of stay-at-home orders, with talk of another month or two to come. You can’t just put the economy on hold for a few months, and then flick it back on like a light switch. I’ve seen news stories talking about how, as government rules are loosened, workers can then return to their jobs. But after a few months, many of them will not have jobs to come back to. Moreover, my guess is that job openings and hires will plummet in the next few months, as firms struggle to find their feet.

5) The diagrams above suggest a very real possibility that it will be sensible social policy to dramatically ease the stay-at-home and lockdown policies while the number of cases and deaths from coronavirus remains substantial. I sometimes read and hear comments from people that we need to keep the stay-at-home and lockdown policies in place “until there is a vaccine,” or “until there is a treatment,” or “until we have universal testing and contact tracing.” I’m sympathetic to the all-American desire for a techno-fix that makes all this go away. But the most optimistic timelines I’ve seen for a vaccine or a sure-fire treatment are measured in months, while the pessimistic timelines are measured in years or “never.” I’d like to see more testing, but rearranging our lives, work, schedules, and personal interactions based on a series of test results isn’t going to be simple. Those who want to wait for a techno-fix need to face the question of whether they are perhaps willing to wait years, and what health and financial costs society and the economy will bear in the meantime. Personally, I am quite confident that the right policy is not to keep the stay-at-home and shutdown policies until the COVID-19 line hits zero, while ignoring other costs being imposed.

6) Political figures often seem to have a habit of re-fighting the previous battle and becoming stuck in place, which would be an unproductive approach here. Say that you have been arguing for a shutdown, or in fact have been implementing a shutdown, and have been receiving lots of criticism for doing so. There’s a natural human/political tendency to form sides–those favoring shutdown and those opposed. People on your side support each other. Phrases like “blood on your hands” get used. Even though the situation is evolving, sides get stuck in place. This would be deeply counterproductive. When the time comes to end the shutdown, it doesn’t mean that it was wrong to start it in the first place, and it’s not some admission that “the anti-shutdown forces were right all along.” It just means that knowledge and conditions have evolved.

7) Although our knowledge about the coronavirus remains frustratingly inconclusive, we do seem to know a few things. For example, it seems clear that the high-risk scenarios for transmission involve indoor settings where people are closely spaced and where there is also lots of talking, singing, or yelling. Examples include workplaces with people in close quarters, especially if they involve needing to yell over loud sounds; big group events including weddings, funerals, bars, concerts, and church services; and institutional living from nursing homes to prisons. Conversely, the transmission risks seem quite low for those who are outdoors, or loosely spaced, or walking by people without talking. We also know that the health risks are much greater for those who are elderly or whose systems are immuno-compromised in some way, and the risks are near-zero for children and for healthy younger people. We know that you can help to protect yourself by taking Lady Macbeth as your role model for hand-washing, and that you can help to protect others by pulling something over your mouth, especially if you’ve got a cough. This knowledge has some obvious implications, like placing a very heavy emphasis on limiting transmission in nursing homes.

8) Many people, including me, are not great at thinking about risk, and perhaps especially about low-level risks. But in a broad sense, we should be able to agree that activities with low risks of transmitting the virus should be treated differently than activities with a high risk. I sometimes read and hear comments that the stay-at-home and lockdown rules should be intensified in every direction, usually backed up by a what-if story: “Sure, it’s just some teenagers playing tennis, but one of them might be a carrier who breathes on the ball, and then the other person touches the ball and their own mouth, and then that other person takes the virus to their grandparents in the nursing home, and then dozens of people die.” It could happen, of course. But you can draw up a doomsday scenario for pretty much every setting in which someone leaves their home for any purpose, and whether there’s a pandemic or not, the costs of strict rules that seek to eliminate every low-probability doomsday scenario are just too high.

9) James H. Stock has written a useful paper, “Reopening the Coronavirus-Closed Economy” (May 2020, Hutchins Center Working Paper #60, Brookings Institution), with some general guidelines. Jim emphasizes four points:

  • Non-economic NPIs play a critical role in getting people back to work. There are important non-pharmaceutical interventions that, while individually limited, collectively hold the potential to substantially reduce the spread of the virus. These include social distancing; testing, contact tracing and quarantine; wearing masks; and having adequate personal protective equipment for workers in jobs that are unavoidably high-contact. None are a silver bullet, but collectively they can reduce the probability of transmission outside the workplace and thereby make room for getting people back to work and back to something more closely resembling normal economic activity.
  • Low-contact, high-value workplaces should be reopened quickly, and returning workers must feel safe. Many jobs are either low-contact or can be made so by suitable modifications of the workplace. In some cases, those modifications are low cost, like encouraging work-from-home, while in other cases they might entail some productivity reductions to facilitate worker distancing at work. When coupled with low-contact forms of transport to work, such jobs can be reopened quickly.
  • Some high-contact activities might need to be suspended indefinitely. Certain high-contact activities might require a hiatus until a vaccine and/or effective treatment is developed. These include both economic activities (for example, live fans at professional sports) and activities with less or no economic component.
  • Avoid a second dip that could induce severe long-term damage to workers and the economy. While reopening the economy is urgently needed, doing so in a way that leads to a second wave of deaths and a subsequent second shutdown could result in damage that is lasting and profound. Such damage has largely been avoided to date because of federal fiscal support and aggressive actions of the Federal Reserve. There are reasons to be pessimistic, however, that these levels of support would either be available or as effective in a second wave of deaths and closings, which could lead to those temporarily unemployed now becoming long-term unemployed without a job to return to, waves of bankruptcies, and severe strains on credit markets.

Florence Nightingale: Innovator in Statistics and Data Presentation

I learned as a child about Florence Nightingale (1820-1910) as the founder of the modern profession of nursing, and probably the single person who did the most to make it socially acceptable for women from middle- and upper-class backgrounds to become nurses. Her name became eponymous: referring to someone as "a Florence Nightingale" was a way of saying that the person was a perfect nurse. For more than a century, the International Committee of the Red Cross has given an award in her name for "exceptional courage and devotion to the wounded, sick or disabled or to civilian victims of a conflict or disaster" or "exemplary services or a creative and pioneering spirit in the areas of public health or nursing education."
What I had not learned about Nightingale as a child was that she was also an early innovator in applying statistical analysis to health data, and in the graphic presentation of data. Indeed, she was the first female member of Britain's Royal Statistical Society. Noel-Ann Bradshaw provides a nice overview of this story in "Florence Nightingale (1820–1910): An Unexpected Master of Data," in Patterns magazine (May 2020).
Nightingale's work in statistics and data followed after her legendary work in the Crimean War. (For background here, I draw on the article about Nightingale from the History.com editors, updated April 17, 2020.) When she arrived at the main British base hospital in Constantinople in 1854, she found that the "hospital sat on top of a large cesspool, which contaminated the water and the hospital building itself. Patients lay in their own excrement on stretchers strewn throughout the hallways. Rodents and bugs scurried past them. The most basic supplies, such as bandages and soap, grew increasingly scarce as the number of ill and wounded steadily increased. Even water needed to be rationed. More soldiers were dying from infectious diseases like typhoid and cholera than from injuries incurred in battle."
Nightingale dramatically revamped hygiene, food, laundry, and nursing practices. The hospital's death rate fell by two-thirds. Upon her return to England in 1856, she was greeted as a hero. She wrote an 830-page report, "Notes on Matters Affecting the Health, Efficiency and Hospital Administration of the British Army." Queen Victoria supported her in establishing the Royal Commission for the Health of the Army in 1857, where she worked with leading statisticians of the day.
Bradshaw presents several examples of Nightingale's data presentations. For example, Bradshaw writes:

She became fascinated that the mortality rate among soldiers stationed at home was higher than the mortality rate of ordinary British men, despite soldiers being healthier at the start of their careers. She used data to examine the cause, concluding that the problem was poor sanitation and over-crowding of military barracks, encampments, and hospitals that exacerbated the spread of disease. She drew many graphs depicting this, including Figure 1, which shows five circles filled with hexagons representing the space between people. The first three circles show how closely packed the army would be in the Quartermaster General’s camp plans, while the last two circles show how densely packed the inner city of London currently was and the population of London in general. This comparison made it obvious to anyone that the Quartermaster General’s proposition for encampment was going to be problematic given how unhealthy densely populated areas of London were.

Here's another example, from Bradshaw:

She [Nightingale] went on to forecast the efficiency of the army if the soldiers were as healthy as the rest of the men in the UK. This graph was way ahead of its time (Figure 2). On the left she displayed the current situation, showing the effectiveness of the British Army in terms of the numbers who were ill, invalided, etc. On the right she graphed the potential effectiveness of the army if the soldiers were as healthy as the general male population. By forecasting this potential effectiveness, she emphasized how the army at rest were experiencing higher degrees of mortality than the general male population.


Perhaps the most famous of Nightingale's figures is what is sometimes called the "rose" diagram. Each wedge represents deaths in a month. The red part of the wedge is deaths from wounds; the blue part is deaths from infectious disease; and the total is deaths from all causes. The circle on the right is April 1854 to March 1855, while the circle on the left is the following year, from April 1855 to March 1856.
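For readers curious about the mechanics, this kind of "rose" (polar area) diagram is straightforward to reproduce with modern tools. Below is a minimal sketch in Python with matplotlib, using invented monthly counts rather than Nightingale's actual tables. The key design choice, which Nightingale herself adopted, is that the area of each wedge, not its radius, is proportional to the count, so each radius is the square root of its count.

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented monthly death counts for one year, for illustration only.
    months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep",
              "Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
    disease = np.array([150, 230, 310, 420, 560, 700, 650, 480, 520, 610, 400, 300])
    wounds = np.array([20, 30, 45, 80, 120, 90, 60, 50, 70, 40, 30, 25])

    # One wedge per month. Area should be proportional to the count: a wedge of
    # angle w and radius r has area (w/2) * r**2, so use r = sqrt(count).
    theta = np.linspace(0.0, 2 * np.pi, len(months), endpoint=False)
    width = 2 * np.pi / len(months)

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    ax.bar(theta, np.sqrt(disease), width=width, color="steelblue",
           alpha=0.7, label="deaths from disease")
    ax.bar(theta, np.sqrt(wounds), width=width, color="firebrick",
           alpha=0.7, label="deaths from wounds")
    ax.set_xticks(theta)
    ax.set_xticklabels(months)
    ax.set_yticklabels([])  # radial tick values are not meaningful here
    ax.legend(loc="lower right")
    plt.show()

Nightingale measured each cause from the center of the circle, which is why this sketch overlays the two series rather than stacking them.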
Exercises like these also made Nightingale an outspoken advocate for improved and regularized methods of collecting health statistics–a lesson which is self-evidently still being learned today during the coronavirus pandemic.
I should be clear that Nightingale's work in statistics and data presentation has been well-known for a long time–just not by me. Indeed, there is an award given to a prominent female statistician every other year by the Committee of Presidents of Statistical Societies and the Caucus for Women in Statistics, called the Florence Nightingale David Award. F.N. David's (1909-1993) parents were friends with the nurse, and named their daughter after her. David did her doctoral research with Karl Pearson in the 1930s, and then spent most of her professional career at University College London, the University of California, Riverside, and the University of California, Berkeley.

If the history of data display is holding some perhaps unexpected appeal for you, you might also be interested in "William Playfair: Inventor of the Bar Graph, Line Graph, and Pie Chart" (August 9, 2017).

Some Economics of World War II: The Air-Sea Super Battlefield

World War II ended 75 years ago in 1945. Stephen Broadberry and Mark Harrison have edited an ebook that offers an overview of some economic research on this topic: The Economics of the Second World War: Seventy-Five Years On (May 2020, CEPR Press, free registration required). The ebook has short readable chapters that link to the underlying specialized research. I was struck by a comment from their introduction:

Mobilisation for the Second World War was more extensive than for the First. The First World War was fought on land in Europe and the Near East and at sea in the Atlantic, while the Second was expanded to Asia and the Pacific, and to the air. While the major economies mobilised 30-60% of their national incomes for the First World War, the Second demanded 50-70%. Both wars reached the limit of what was sustainable for a modern economy at the time. The human losses were also greater: more than 50 million in the Second World War compared with 20 million or more in the First … 

I'll append a full table of contents for the book below. Here, I'll focus on two chapters that particularly caught my eye, both of which treat the production of specialized equipment as central to the outcome of World War II.

Phillips Payson O'Brien contributes an essay on "How the War Was Won." He suggests that histories of World War II have tended to focus on specific battles, like Stalingrad. Instead, he argues, the more fundamental war was fought on what he calls the Air-Sea Super Battlefield: "Looking at the war this way allows us to reframe our understanding of what a battle was in the Second World War. Instead of battles being fixed on well-known pieces of earth, air-sea weaponry was constantly in action in battlefields thousands of miles long and many miles in depth – what should be called the Air-Sea Super Battlefield. Victory in this super-battlefield led to victory in the war."

More specifically, the Air-Sea Super Battlefield was not just a matter of battles in the air or at sea. Before that, it was a contest over the ability to put such resources into combat in the first place. O'Brien writes:

If we reframe the discussion of the war to look not only at what equipment was made but also at how it was destroyed, it emerges that the war was decided far from the land battlefield (O’Brien 2015). The most striking sign of this is how little war production went to the land war and how much went to the combined air-sea war. This was the case for all the powers except the USSR. … 

Instead of waiting to destroy Axis equipment on the traditional battlefield, Allied air-sea weaponry destroyed it en masse before it could ever be used in action, determining the result of every ‘battle’ long before it was fought. This destruction of equipment is best understood in three phases. First, there is pre-production destruction, which prevented weapons from being built. This was done most efficiently to both Germany and Japan by depriving them of the ability to move raw materials. By 1942, both Germany and Japan had assembled large, resource-rich empires that had the ability to significantly increase weapons output. … By the second half of 1944, attacks on the movement of goods throughout the Japanese and German economies meant that the amount of war equipment each could build was far below potential (Mierzejewski 2007: 106-113).

The second phase is direct production destruction – destroying the facilities to make weapons in Germany and Japan. This was the great hope of inter-war airpower enthusiasts for the precise targeting of individual munitions factories (Meilinger 1997: 1-114). During the war, there was an expectation that attacking specific industries such as German ball-bearing production would cripple weapons output. The truth was that these attacks were not as effective as hoped for, as strategic bombing was not accurate enough to completely wipe out facilities (until 1944). That being said, the losses from bombing were greater than those arising in land battles. The surprise is that land battles destroyed little equipment. German armour losses during the Battle of Kursk amounted to approximately 0.2% of annual output (and moreover was made up of mostly obsolete equipment) (O’Brien 2015: 310-311).

Finally, there were deployment losses. Getting weapons from the factory to the front was no easy feat. It normally required movement over hundreds or thousands of miles using shipping or rail lines that were vulnerable to attack. Aircraft had to be flown, often by inexperienced pilots, over the open ocean in or through difficult weather conditions. By 1943, as Anglo-American aircraft deployment losses decreased, Axis losses skyrocketed. This was because of the stresses placed on their systems by Allied air-sea power. German and Japanese pilot training was cut back as both ran out of fuel; hastily constructed new factories were producing more aircraft with undiscovered flaws; maintenance facilities at the front were poorly supplied. This meant that the Axis were losing as many aircraft deploying to the front as in direct combat. At times, Japan’s losses outside combat were up to twice those lost fighting (O’Brien 2015: 405-7).

Overall, by 1944 the Axis could deploy only a small fraction of their potential military capacity into combat – it was being destroyed in a multi-layered campaign long before it could be used against their enemies. This was the true battlefield of the Second World War, a massive air-sea super battlefield that stretched for thousands of miles not only of traditional front but of depth and height.

This emphasis on military equipment leads naturally to the US economy and its role as a supplier not just of soldiers, but also of manufactured production. Price Fishback focuses on that story in "The Second World War in America: Spending, deficits, multipliers, and sacrifices." He writes:

The US war economy was a quasi-command economy in which the government forced 10% of the workforce to join the military at compensation levels well below normal wages. The military had the first claim on all resources, as over 36% of estimated GDP was devoted to the production of war goods that would be destroyed, left behind, or mothballed. Production halted on automobiles, civilian housing, and most consumer durables. The military also had first claim on the materials for clothing, food, and other factors. This led to rationing of meat, gasoline, fuel oil, kerosene, nylon, silk, shoes, sugar, coffee, processed foods, cheese, and milk. …

The war-time production that made the US economy ‘the arsenal of democracy’ was a tremendous accomplishment. In a very short time span, the US economy produced 17 million rifles and pistols, over 80,000 tanks, 41 billion rounds of ammunition, 4 million artillery shells, 75,000 vessels, nearly 300,000 planes, and many more items and services for the war. … In the last year of the war, 18% of the combined civilian and military labour force were in the military and another 22% were producing munitions.

Although output of the US economy rose dramatically during the World War II years, the increase went entirely to the war effort, so consumers as a group–and of course with some exceptions–were actually worse off.

Consumption per capita measured with official prices shows no change in private consumption between 1941 and 1944 but the estimate does not account for the declines in quality of goods, the extra costs of obtaining rationed goods, and the complete absence of other goods. Once the consumption figures are adjusted to develop better estimates of the true prices, the amount consumed per person was lower throughout the war than it was in 1940 when the economy was still climbing out of the Great Depression.

Despite the sacrifices, many remember the war as prosperous relative to the Depression because everybody had a job and developed a sense of shared sacrifice to defeat the Axis. Some individuals did fare better. Blacks migrated north and west to better jobs. Industrial demand for women’s services rose during the war; despite a post-war fall, it remained higher than in 1941 (Shatnawi and Fishback 2018). With little to buy, people accumulated wealth through savings or bought existing housing, which fuelled the post-war boom delayed by the war.

Fishback also offers some discussion of the controversy over the extent to which government spending was "crowding out" the rest of the economy. Apparently there was an argument back in the mid-1940s that wartime levels of taxation and spending were needed to keep the US economy going after the war; the warning was that a reduction in US government spending would lead to a return to the Depression years. One can rephrase this argument as a belief that government spending had not been crowding out the private sector. When government wartime spending fell sharply, there were some difficulties with the transition back to civilian production, and there was an eight-month postwar recession in 1945. But the US economy did not return to the Depression, which suggests that wartime spending had indeed been "crowding out" private consumption.

____________________________

Table of Contents:

"Introduction," by Stephen Broadberry and Mark Harrison

Part I: Preparations for war

1 "Roots of war: Hitler's rise to power," by Hans-Joachim Voth
2 "The German economy from peace to war: The Blitzkrieg economy revisited," by Richard Overy
3 "The Soviet economy and war preparations," by Mark Harrison
4 "Lessons learned? British economic management and performance during the World Wars," by Stephen Broadberry

Part II: Conduct of the war

5 "How the war was won," by Phillips Payson O'Brien
6 "Never alone, and always strong: the British war economy in 1940 and after," by David Edgerton
7 "The Second World War in America: Spending, deficits, multipliers, and sacrifices," by Price Fishback
8 "Economic warfare: Insights from Mançur Olson," by Mark Harrison
9 "Supplier networks as a key to wartime production in Japan," by Tetsuji Okazaki
10 "Exploitation and destruction in Nazi-occupied Europe," by Hein Klemann
11 "The economics of neutrality in the Second World War," by Eric Golson
12 "Economists at war," by Alan Bollard

Part III: Consequences of the war

13 "The famines of the Second World War," by Cormac Ó Gráda
14 "Inequality: Total war as a great leveller," by Walter Scheidel
15 "Recovery and reconstruction in Europe after the war," by Tamás Vonyó
16 "How the war shaped political and social trust in the long run," by Pauline Grosjean