The Spanish Flu of 1918-20: Health and Macroeconomic Effects

A century ago, the world went through the "Spanish flu," an epidemic that arrived in three waves from 1918 to 1920. Robert J. Barro, Jose F. Ursua, and Joanna Weng discuss "The coronavirus and the Great Influenza epidemic: Lessons from the 'Spanish Flu'" (AEI Economics Working Paper 2020-02, March 2020). They write (footnotes omitted):

A reasonable upper bound for the coronavirus’s mortality and economic effects can be derived from the world’s experience with the Great Influenza Epidemic (popularly and unfairly known as the Spanish Flu), which began and peaked in 1918 and persisted through 1920. Our estimate, based on data discussed later on flu-related death rates for 43 individual countries, is that this epidemic killed around 39 million people worldwide, corresponding to 2.0 percent of the world’s population at the time. These numbers likely represent the highest worldwide mortality from a “natural disaster” in modern times, though the impact of the plague during the black death in the 14th century was much greater as a share of the population. 

The Great Influenza Epidemic arose in three main waves, the first in spring 1918, the second and most deadly from September 1918 to January 1919, and the third from February 1919 through the remainder of the year (with a fourth wave applying in some countries in 1920). This airborne infection was based on the Influenza A virus subtype H1N1. The coincidence of the two initial waves with the final year of World War I (1918) encouraged the spread of the infection, due to crowding of troops in transport, including large-scale movements across countries. An unusual feature was the high mortality among young adults without existing medical conditions. This pattern implies greater economic effects than for a disease with comparable mortality that applied mostly to the old and very young. 

The epidemic killed a number of famous people, including the sociologist Max Weber, the artist Gustav Klimt, the child saints Francisco and Jacinta Marto, and Frederick Trump, the grandfather of the current U.S. President. Many more famous people were survivors, including Mahatma Gandhi, Friedrich Hayek, General Pershing, Walt Disney, Mary Pickford, and the leaders of France and the United Kingdom at the end of World War I, Georges Clemenceau and David Lloyd George. The disease severely impacted U.S. President Woodrow Wilson, whose impairment likely had a major negative effect on the negotiations of the Versailles Treaty in 1919. Thus, if the harsh terms imposed on Germany by this treaty led eventually to World War II, then the Great Influenza Epidemic may have indirectly caused World War II. …

Applying the flu death rates from the Great Influenza Epidemic to current population levels (about 7.5 billion worldwide in 2020) generates staggering mortality numbers. A death rate of 2.0 percent corresponds in 2020 to 150 million worldwide deaths. The number of deaths in the United States would be 6.5 million at the global death rate of 2.0 percent and 1.7 million at the U.S. death rate of 0.5 percent. However, these numbers likely represent the worst-case scenario today, particularly because public-health care and screening/quarantine procedures are more advanced than they were in 1918-1920.
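The arithmetic behind those projections is straightforward. The sketch below uses the death rates and world population cited in the passage; the 2020 US population of roughly 325 million is my own assumption, chosen to match the authors' reported totals.

```python
# Death rates and world population are taken from the passage above;
# the US population figure of 325 million is my own rough assumption.
world_pop = 7.5e9       # world population, 2020
us_pop = 325e6          # assumed US population, 2020

global_rate = 0.020     # 1918-20 flu death rate, share of world population
us_rate = 0.005         # 1918-20 flu death rate in the US

print(f"World deaths at 2.0%: {world_pop * global_rate:,.0f}")  # 150,000,000
print(f"US deaths at 2.0%:    {us_pop * global_rate:,.0f}")     # 6,500,000
print(f"US deaths at 0.5%:    {us_pop * us_rate:,.0f}")         # 1,625,000
```

The last figure rounds to the 1.7 million reported in the paper.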

The main purpose of the Barro, Ursua, and Weng paper is to estimate the macroeconomic effects of the Great Influenza Epidemic. It should be noted that this is a working paper, subject to later revision. The broad approach is to look at how the intensity of flu outbreaks varied across countries and over time, and to draw inferences about changes in economic output accordingly. (For example, not all countries that had flu outbreaks were much involved in World War I.) But as the authors are quick to point out, the quality of country-level data for this period often isn't great. There is also the task of separating the effects of the flu from the after-effects of World War I, including postwar depression and, in some countries, inflations on their way to becoming hyperinflations. Those with a taste for working through econometrics will certainly find other questions to raise.

Nonetheless, as an early take on events, the results caught my eye. By their calculations, the Great Influenza Epidemic was the fourth-worst global economic event since 1870, lagging behind only World War I, World War II, and the Great Depression. (Of course, Americans tend to think of World War II as a time of rising economic output, but that was not the experience of the war across Europe and Asia.) Barro, Ursua, and Weng conclude this way:

The implications of our findings from the Great Influenza Epidemic for the ongoing coronavirus epidemic are unsettling. As noted before, the flu death rate of 2.0 percent out of the total population in 1918-1920 translates into 150 million deaths worldwide when applied to the world’s population of around 7.5 billion in 2020. Further, this death rate corresponds in our regression analysis to declines in the typical country by 6 percent for GDP and 8 percent for consumption. These economic declines are comparable to those last seen during the global Great Recession of 2008-2009. The results also suggest substantial short-term declines in real returns on stocks and short-term government bills. Thus, the possibility exists not only for unprecedented numbers of deaths but also for major global economic dislocation. Although these outcomes for the coronavirus are only possibilities, corresponding to plausible worst-case scenarios, the large potential losses in lives and economic activity justify substantial outlays to attempt to limit the damage. However, extreme mitigation efforts—such as widespread cancellations of travel, meetings, and major events—will themselves contribute to the depressed economic activity.

A number of points here are worth reflection. For example, this kind of pandemic can echo over several years; it is not just a three-week or three-month event. The effects of a major outbreak reverberate over time, both through effects on the lives of prominent people and through health effects that may not manifest until decades later in life. At least so far, the mortality rate of the coronavirus seems higher among the elderly and those with preexisting conditions, which is one of the main reasons why this historical parallel is imperfect. Indeed, there is some argument–for example, expressed here by Nicholas Christakis–that COVID-19 may be closer to the 1957-58 outbreak of "Asian flu" than to the 1918-1920 experience.

An overview from the CDC on the 1918 pandemic is available here.

Given the kerfuffle over whether to refer to viral outbreaks or diseases by geographic names, I did find myself smiling at the comment from Barro, Ursua, and Weng about what I long ago learned to call the "Spanish flu" of 1918-20:

Spain was not special in terms of the severity or date of onset of the disease but, because of its neutral status in World War I, did have a freer press than most other countries. The greater attention in news reports likely explains why the flu was called “Spanish.” In terms of mortality rates and total persons killed, it would be more appropriate to label the epidemic as the Indian Flu, although the highest mortality rate out of the total population, above 20 percent, may have been in Western Samoa. There is controversy about the origin point of the epidemic, with candidates including France, Kansas, and China.

US and International Stock Market Premiums in the Long Run

Stock markets can be volatile and risky in the short run and even the medium run. But as a long-run average, stock markets (in the US, at least) have provided rewarding returns. Elroy Dimson, Paul Marsh, and Mike Staunton provide some useful background in the Credit Suisse Global Investment Returns Yearbook 2020. The "Summary Report" is freely available online.

Here are a couple of figures showing nominal and real rates of return on stocks, bonds, and Treasury bills for the US market from 1900-2019. (Thanks to the authors for permission to reproduce the figures shown here.) In real terms, US stock market returns have risen at a 6.5% annual rate over this time, compared to 2.0% annually for bonds and 0.8% annually for bills. This gap is the "equity premium," the name given to the pattern that if you invest in stocks for the long term–and thus ride through the hills and valleys–your patience will be rewarded after a decade or two.
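To see what those annual rates mean over a long horizon, it helps to compound them over the 120 years from 1900 to 2019. This is a back-of-the-envelope sketch using only the real rates cited above, not the yearbook's own cumulative figures:

```python
# Compound $1 at the real annual returns cited in the text over 120 years.
def cumulative(rate, years=120):
    """Real value of $1 compounded at `rate` for `years` years."""
    return (1 + rate) ** years

for name, rate in [("stocks", 0.065), ("bonds", 0.020), ("bills", 0.008)]:
    print(f"$1 in {name} grows to about ${cumulative(rate):,.0f} in real terms")
```

At those rates, $1 in stocks compounds to roughly $1,900 in real terms, versus about $11 in bonds and under $3 in bills: the equity premium, compounded over a century, is an enormous difference.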

This general pattern of higher stock market returns holds across many countries, but it's stronger in the US than in most others. Here's a figure showing the return on equities from 1900 to 2019 for a range of countries. Sweden's (fairly small) stock market is at the top, with the US just a bit behind. Some of the stock markets that existed in 1900, like those in China and Russia, were wiped out altogether. (Russia represented about 6% of global stock market capitalization in 1900.)

The high returns for the US stock market mean that, over time, US stock market capitalization has become a substantially larger share of the global total. For example, the US stock market was 15% of total global stock market capitalization in 1899 (top pie graph), but 54.5% in 2019 (bottom pie graph).

This pattern raises some obvious questions. What specifically is it about US stock markets that has enabled them to grow so robustly over the long term? In particular, are the key factors more closely related to events and patterns in the US economy as a whole? To characteristics of US corporations? Or to something in the US institutional/financial/legal framework which gives US shareholders a well-founded belief that their stock ownership gives them an actual claim on corporate profits that will be respected in the future? Naturally, these questions also raise concerns about whether US stock market investors with long-term horizons–like pension accounts, insurance companies, and retirement accounts held by individuals–will be able to count on higher stock market returns in the future, too.

The report by Dimson, Marsh, and Staunton has a lot of other material of interest as well: how factor investing has worked over time; the "golden age" of US bond markets from 1982-2014; environmental, social, and governance investing; and more.

Federal Debt in a Time of Pandemic

In the midst of 1,001 ways that the government could increase spending or reduce taxes to blunt the immediate economic effect of the coronavirus pandemic, spare a moment for the existing level and trend of federal debt. The Congressional Budget Office has issued "Federal Debt: A Primer" (March 2020).

Here\’s the ratio of federal debt held by the public to GDP, going back to 1790. The \”held by the public\” means that when one part of the federal government loans money to another part of the federal government–like when the Social Security Trust Fund invests in US Treasury debt–this doesn\’t require the federal government to borrow from outside the government, and thus isn\’t counted in the total.

The historical spikes in the debt/GDP ratio are named easily enough. You can point out the rising levels of debt for the Revolutionary War, the Civil War, World War I, the Great Depression, World War II, the tax cuts and defense build-up of the 1980s, and the Great Recession. The current debt/GDP ratio is the second-highest in US history, and trending toward the highest.

Much of the report focuses on the specific financial instruments that the Treasury uses to borrow: short-term Treasury "bills" that are repaid in periods from one month up to one year; Treasury "notes" that are repaid over periods from two to 10 years; long-term Treasury "bonds" repaid over 30 years; Treasury Inflation-Protected Securities (TIPS), where the principal value of the borrowing is adjusted for inflation twice a year; and Floating Rate Notes (FRNs), where the interest rate adjusts up and down based on the interest rate for 13-week Treasury bills. The report notes:

Since the late 1990s, Treasury notes typically have accounted for more than half of all outstanding marketable securities, peaking at 67 percent in 2013. Treasury bills made up between 20 percent and 30 percent of marketable debt until 2010, when the Treasury began to issue fewer short-term instruments. Those securities declined to just 11 percent of marketable debt in 2015 before rising back to 15 percent in 2019. By the end of 2019, bonds accounted for 14 percent of the Treasury’s outstanding marketable debt, in line with their typical share since the end of the 1990s. TIPS were first issued in 1997 and—after an initial growth phase through 2004—have represented between 7 percent and 10 percent of outstanding marketable debt since then. By the end of 2019, the share of debt taken up by FRNs, which were introduced in 2014, was just 3 percent. …

Offerings that best meet investors’ needs typically will lower the Treasury’s overall cost of borrowing. Short-term instruments generally have lower interest costs, but they expose the government to the risk of paying higher interest rates when it refinances the issues. Conversely, long-term securities typically involve higher rates but provide more certainty about the future costs of interest payments because they require less frequent financing.

Putting all of these together, the average time to maturity for federal borrowing hasn't changed much in the last two decades: as the figure shows, it got a little shorter during the Great Recession, but now it's back to roughly the same level as in 2001.

How much of the Treasury borrowing has come from domestic sources and how much from foreign sources? About half comes from abroad, much of that from China and Japan.

One interesting point I had not considered before is how higher student debt has played a role in raising federal debt. The CBO explains:

In 2011, the federal student loan program stopped providing loan guarantees to banks and instead began lending to borrowers directly, with the result that the magnitude of federal holdings of financial assets began to increase markedly. In total, at the end of 2019, the government’s financial assets—loans as well as cash—had an estimated value of nearly $1.8 trillion. Subtracting that amount from the $16.8 trillion in debt held by the public leaves about $15.0 trillion in debt held by the public net of financial assets. Debt held by the public at the end of 2019 was equal to about 79 percent of gross domestic product; debt net of financial assets was about 71 percent of GDP.

The CBO report is focused on laying out trends and patterns, not on ringing the gong about possible dangers. However, there's a brief discussion of risks and effects at the end of the report.

1) "If federal debt as a percentage of GDP continues to rise at the pace of CBO’s current-law projections, the economy would be affected in two significant ways: Growth in the nation’s debt would dampen economic output over time, and higher interest costs would increase payments to foreign debt holders and thus reduce the income of U.S. households by rising amounts."

2) "The increases in debt that CBO projects would also pose significant risks to the fiscal and economic outlook, although those risks are not currently apparent in financial markets. … High and rising federal debt increases the likelihood of a fiscal crisis because it erodes investors’ confidence in the government’s fiscal position and could result in a sharp reduction in their valuation of Treasury securities, which would drive up interest rates on federal debt because investors would demand higher yields to purchase Treasury securities. However, the debt-to-GDP ratio has no identifiable tipping point because the risk of a crisis is influenced by other factors, including the long-term budget outlook, near-term borrowing needs, and the health of the economy. Moreover, because the United States currently benefits from the dollar’s position as the world’s reserve currency and because the federal government borrows in dollars, a financial crisis—similar to those that befell Argentina, Greece, or Ireland—is less likely in the United States. Although no one can predict whether or when a fiscal crisis might occur or how it would unfold, the risk is almost certainly increased by high and rising federal debt."

3) "Not all effects of the projected path of debt are negative, however. In addition to allowing policymakers to maintain current-law spending and revenue policies, that path would cause underlying interest rates to be higher than they otherwise would be, giving the Federal Reserve more flexibility in implementing monetary policy."

4) At the current moment, I'd emphasize one other reason mentioned briefly: "In addition, high debt might cause policymakers to feel constrained from implementing deficit-financed fiscal policy to respond to unforeseen events …" The enormous and fundamentally healthy US economy can take on more debt in response to the coronavirus. But it's just a fact that if you have already loaded up on borrowing, and your future tax and spending plans already have you locked into a pattern of additional borrowing, then your flexibility is lower. The idea of taking steps to hold down the rise in federal borrowing is never going to be popular. It feels as if it's all discipline and no benefit–right up to when a situation arises in which you would like to be able to borrow with confidence, in an unconstrained way, to meet the challenge of a pandemic.

Some Ruminations from Mars on Organizations and Informal Jobs

In high-income developed economies like the United States, we tend to take for granted that a large share of economic activity is organized within companies, and that these companies will hire workers and then organize and direct their efforts. However, in lower-income countries the notion of employment through firms cannot be taken for granted–as shown by the lack of "formal" jobs in those economies.

For an extremely high-level view of the importance of business organizations–indeed, a view from Mars–consider a passage from Herbert A. Simon (Nobel '78) in an article he wrote for the Journal of Economic Perspectives ("Organizations and Markets," Spring 1991, pp. 25-44). Simon wrote:

A large part of the behavior of the system now takes place inside the skins of firms, and does not consist just of market exchanges. Counted by the head, most of the actors in a modern economy are employees, who either do not spend their days in trading, or if they do (for example, if they are salesmen or purchasing agents) are assumed to trade as agents of the firm rather than in their own interest, which might be quite different. …

A mythical visitor from Mars, not having been apprised of the centrality of markets and contracts, might find the new institutional economics rather astonishing. Suppose that it (the visitor—I'll avoid the question of its sex) approaches the Earth from space, equipped with a telescope that reveals social structures. The firms reveal themselves, say, as solid green areas with faint interior contours marking out divisions and departments. Market transactions show as red lines connecting firms, forming a network in the spaces between them. Within firms (and perhaps even between them) the approaching visitor also sees pale blue lines, the lines of authority connecting bosses with various levels of workers. As our visitor looked more carefully at the scene beneath, it might see one of the green masses divide, as a firm divested itself of one of its divisions. Or it might see one green object gobble up another. At this distance, the departing golden parachutes would probably not be visible.

No matter whether our visitor approached the United States or the Soviet Union, urban China or the European Community, the greater part of the space below it would be within the green areas, for almost all of the inhabitants would be employees, hence inside the firm boundaries. Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of "large green areas interconnected by red lines." It would not likely speak of "a network of red lines connecting green spots."

Of course, if the vehicle hovered over central Africa, or the more rural portions of China or India, the green areas would be much smaller, and there would be large spaces inhabited by the little black dots we know as families and villages. But the red lines would be fainter and sparser in this case, too, because the black dots would be close to self-sufficiency, and only partially immersed in markets. But let us, for the present, restrict our attention to the landscape of the developed economies.

When our visitor came to know that the green masses were organizations and the red lines connecting them were market transactions, it might be surprised to hear the structure called a market economy. "Wouldn't 'organizational economy' be the more appropriate term?" it might ask. The choice of name may matter a great deal. The name can affect the order in which we describe its institutions, and the order of description can affect the theory.

In low-income countries, as Simon pointed out, the "green areas" of business organizations are far less prominent. As a result, jobs are more likely to be "informal," in the sense that people work on their own or as part of their own family, without regular wages. Here are some columns from a table in the World Employment and Social Outlook: Trends 2020, published by the International Labour Organization in January 2020. As the report points out, around the world about 60% of workers have informal jobs; in low-income countries, it's more like 90% (first column). Conversely, it's not a coincidence that in low-income countries less than 20% of workers have jobs with wages and salaries (second column).

The ILO discusses these patterns in a section on "Paid work and the problem of decent work." Indeed, a main distinction between the poor and the middle class in lower-income countries is that the middle class are far more likely to have a formal job that pays a regular wage or salary.

When thinking about Simon's "green areas," there's no need to go to the rhetorical extreme of Nicholas Murray Butler, president of Columbia University, who said in a 1911 speech: "The limited liability corporation is the greatest single discovery of modern times." But it's worth pointing out that a lot of public commentary is more than a little schizophrenic about the social value of the "green areas." There is often strong criticism that the green areas follow their own logic and goals, which in a number of contexts (pollution is a leading example; direct assistance to the poor is another) may not line up with broader social goals.

Amidst the criticism, one can lose track of the reality that the green areas are the way that advanced economies around the world provide desired and secure jobs for their populations. The green areas are also a major social mechanism for organizing production and pursuing innovation. Indeed, many of the concerns about workers in the "gig economy" in high-income countries are about how workers may suffer if they don't have a well-defined and ongoing connection to a green area.

In lower-income countries, one of the major social challenges is how to generate decent jobs for growing populations (for discussion, see here, here, and here). For these countries, providing the legal, social, and financial conditions where Simon's green areas can develop and sustain themselves–thus providing a foundation for secure jobs and future growth–is an important policy goal. Indeed, it seems to me that in many settings the arguments that favor free markets or express concern about them are not actually about "markets" per se: instead, they are about decisions made inside the green areas and how to define the rules and responsibilities that should govern those decisions.

Sick Pay Benefits: A Labor Market and Public Health Issue

Providing sick pay to workers is often discussed in terms of fairness or of social insurance against the risk of declining income, but it also has an important public health dimension. Employers might prefer that sick workers remain at home, rather than passing their illness on to the rest of the workforce. But workers without sick pay won't get paid if they don't show up.

Most high-income countries have government-required provision of sick pay. Researchers at the World Policy Analysis Center at UCLA compiled data in a 2018 report, "Paid Leave for Personal Illness: A Detailed Look at Approaches Across OECD Countries." They write that of 34 OECD countries, only the US and Korea do not guarantee paid leave for personal illness. The details of implementation vary across countries, of course. For example, two of these countries make employers solely responsible for paying for sick leave, nine make government solely responsible, and 21 have a mixture of the two. In the mixed systems, a common pattern is that employers pay for the first few weeks of sick leave, and then government takes over up to some limit like three or six months. Another common pattern in these countries is that sick leave pays 80% of regular pay.

In the US, sick leave is much less likely in lower-paying jobs. Here's a figure showing the pattern, from the Kaiser Family Foundation:


Efforts to establish a federal sick pay rule in the US have gone nowhere. However, starting with San Francisco in 2007, a number of state and local governments have passed such rules in the last few years. Twelve states now have such laws, as do a couple of dozen cities, including New York City, Chicago, Philadelphia, Washington DC, Seattle, and Portland. The typical pattern for these laws is that employees earn one hour of paid sick leave for every 30 to 40 hours worked. The idea behind this design is that a worker can't take a job and then immediately take paid sick leave, but a worker will accumulate roughly a day of paid sick leave for every six to eight weeks worked. For a detailed and updated "Interactive Overview of Paid Sick Time Laws in the United States," see the A Better Balance website.
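The accrual arithmetic behind that rule of thumb is simple. This sketch just works out how many weeks of full-time work it takes to earn one eight-hour sick day under the one-hour-per-30-to-40-hours formulas described above:

```python
# How long does a full-time worker (40 hours/week) take to accrue
# one 8-hour paid sick day under a one-hour-per-N-hours-worked rule?
HOURS_PER_WEEK = 40
SICK_DAY_HOURS = 8

for hours_per_credit in (30, 40):
    hours_needed = SICK_DAY_HOURS * hours_per_credit  # hours worked per sick day
    weeks = hours_needed / HOURS_PER_WEEK
    print(f"1 hour per {hours_per_credit} hours worked: "
          f"one sick day every {weeks:.0f} weeks")
```

So the most generous accrual rate yields a sick day about every six weeks of full-time work, and the least generous about every eight.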

What are the effects of these laws? Stefan Pichler and Nicolas R. Ziebarth have written "Labor Market Effects of U.S. Sick Pay Mandates," forthcoming in the Spring 2020 issue of the Journal of Human Resources (55:2, pp. 611-659). They look at employment and wage data from 2001 to 2016 for nine cities and four states that enacted sick pay rules. They create what is called a "synthetic control group": a set of cities and states that historically followed the same patterns of employment and wages, but did not adopt a sick pay rule. Then they can see whether adopting sick pay caused a change in employment and wages relative to this control group. They find no evidence of such a change.

In a follow-up study, Johanna Catherine Maclean, Stefan Pichler, and Nicolas R. Ziebarth have published a working paper, "Mandated Sick Pay: Coverage, Utilization, and Welfare Effects" (March 2020, NBER Working Paper #26832, not freely available, though readers may have access through their institutions). This study focuses on state-level sick-pay mandates, with data from 2009-2017. Because states adopted sick pay mandates at different times during this period, one can compare these states with others to see how patterns of sick leave coverage change when mandates are adopted. They find:

Within the first two years following mandate adoption, the probability that an employee has access to paid sick leave increases by 18 percentage points from a base coverage rate of 66%. The increase in coverage persists for at least four years without rising further. Over all post-mandate periods covered by this paper, we find a 13 percentage point higher coverage rate attributable to state mandates. As a result of the increased access to paid sick leave, employees take more sick days …  newly covered employees take two additional sick days per year. Employer sick leave costs also increase, but effect sizes are modest. On average, the increase amounts to 2.7 cents per hour worked … Further, we find little evidence that sick pay mandates crowd-out non-mandated benefits such as paid vacation or holidays. Likewise, we find no evidence that employers curtail the provision of group policies such as health, dental, or disability insurance.

These studies are focused on labor market issues and do not take public health effects into account. However, in a different working paper, "Positive Health Externalities of Mandating Paid Sick Leave" (February 2020), Stefan Pichler, Katherine Wen, and Nicolas R. Ziebarth look at state-level data and find that in the first year after a state enacts sick pay, rates of doctor-certified influenza-like illness fall by about 11%.

This offered broad confirmation of results from an earlier study by Stefan Pichler and Nicolas R. Ziebarth, "The pros and cons of sick pay schemes: Testing for contagious presenteeism and noncontagious absenteeism behavior" (Journal of Public Economics, December 2017, pp. 14-33). Looking at Google Flu data, they found that when U.S. employees gain access to paid sick leave, the general flu rate in the population decreases significantly, which suggests the possibility of less transmission of flu at work.

They also look at sick pay outcomes in Germany, a country with generous sick pay provisions. When German legislative changes allowed some flexibility to reduce sick pay from 100% of previous salary to 80%, the result was a large drop in more nebulous sickness claims like "back pain" but little drop in sickness claims related to infectious illnesses. This pattern suggests a plausible tradeoff: very generous sick pay can lead workers to take time off for reasons not related to public health, but as sick pay becomes less generous, it will also lead to "contagious presenteeism," where contagious workers become more likely to show up at the job.

(In passing, I was also struck by this historical comment about German sick pay in the Pichler and Ziebarth 2017 paper: "Historically, paid sick leave was actually one of the first social insurance pillars worldwide; this policy was included in the first federal health insurance legislation. Under Otto von Bismarck, the Sickness Insurance Law of 1883 introduced social health insurance in Germany, which included 13 weeks of paid sick leave along with coverage for medical bills. The costs associated with paid sick leave initially made up more than half of all program costs, given the limited availability of (expensive) medical treatments in the nineteenth century …")

Some US companies are now discovering that sick pay may matter to their business: for example, "Amazon announces up to 2 weeks of paid sick leave for all workers and a 'relief fund' for delivery drivers amid coronavirus outbreak." But the kinds of sick pay laws that have been gradually spreading through certain states and cities are partial and incomplete. The novel coronavirus outbreak suggests that a national sick pay policy–probably with employers responsible for the first weeks and government serving as a backup after that–is an issue with broad public health consequences, not just an argument over whether government should require companies to provide certain benefits. Sitting here in March 2020, it would have been nice to have a national sick-pay policy in place a few years ago, as a way of reducing the spread of the coronavirus and cushioning the loss of income for those who become sick. But it's not too early to start prepping for the next pandemic.

Some Coronavirus Economics

Back in the mid-1980s, when I worked for a few years at the San Jose Mercury News as an editorial writer, my boss would sometimes remind us (channeling Murray Kempton): "An editorial writer is someone who comes down from the hills after the battle is over and shoots the wounded." Similarly, authors of books about important events have the luxury of time and distance before they commit themselves to print. But Richard Baldwin and Beatrice Weder di Mauro, much to their credit, decided to step into the arena of arguments about an appropriate response to the novel coronavirus while the disputes are ongoing by editing an e-book: Economics in the Time of COVID-19 (March 2020, free with registration from VoxEU.com). The very readable book was literally produced over a long weekend: it includes an "Introduction" and 14 short essays, many of them summarizing and drawing on longer work. Here, I'll draw on some comments from the book as well as my own thoughts.

1) The hard question is how bad the novel coronavirus will get, and the short answer is that nobody really knows. 
It is already clear that COVID-19 is worse than the SARS outbreak in 2002-3. Worldwide, SARS ended up causing slightly more than 8,000 total cases and slightly fewer than 800 deaths. The Johns Hopkins School of Medicine maintains a continually updated page on confirmed cases of coronavirus around the world, as well as deaths and recoveries. As I write, it already shows more than 120,000 cases and more than 4,000 deaths.

For some context, the Centers for Disease Control estimates each year the cases and deaths from flu in the US. In the last decade or so, 2011-12 was a low mark for flu-related deaths, with "only" 12,000. Conversely, 2014-15 and 2017-18 were especially bad flu seasons in the US, with 51,000 and 61,000 deaths respectively. The 2009 swine flu (H1N1) ended up causing between 151,700 and 575,400 deaths worldwide (according to Centers for Disease Control estimates).
Predicting the path of an epidemic is difficult. Baldwin and Weder di Mauro offer a useful diagram, showing that in the early stages, a straight-line prediction will dramatically understate the harms, while in the middle stages, a straight-line prediction will dramatically overstate the harms. They offer a comment from Michael Leavitt, a former head of the US Department of Health and Human Services: “Everything we do before a pandemic will seem alarmist. Everything we do after will seem inadequate.” The challenge is to predict the length and peak of the curve – which depends not only on the epidemiology of the disease but also on what public health steps are taken.
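The understate/overstate pattern behind that diagram can be sketched with a toy logistic epidemic curve. This is only an illustration of the general point, not a calibrated model: the cap of 100,000 cases, the growth rate, and the midpoint are all arbitrary assumptions.

```python
import math

def logistic_cases(t, K=100_000, r=0.25, t_mid=40):
    """Cumulative cases on a toy logistic curve: ceiling K, growth rate r, inflection at t_mid."""
    return K / (1 + math.exp(-r * (t - t_mid)))

def linear_forecast(t_now, horizon):
    """Straight-line prediction: extrapolate the most recent one-day change forward."""
    slope = logistic_cases(t_now) - logistic_cases(t_now - 1)
    return logistic_cases(t_now) + slope * horizon

# Early stage (day 20): the straight line falls far short of the true curve 20 days on.
print(linear_forecast(20, 20) < logistic_cases(40))  # True: understates during rapid growth
# Middle stage (day 40, the inflection point): the straight line overshoots.
print(linear_forecast(40, 20) > logistic_cases(60))  # True: overstates as the curve flattens
```

The same daily slope that looks alarmingly steep at the inflection point looked reassuringly flat three weeks earlier, which is why both over- and under-reaction are easy mistakes.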
In addition, there is no guarantee that the coronavirus will ever disappear. As Baldwin and Weder di Mauro note: "[T]he virus might become endemic – that is to say, a disease that reappears periodically – in which case COVID-19 could become one of humanity’s constant companions, like the seasonal flu and common cold."

2) What are some common estimates of potential economic losses from the coronavirus? In their chapter, Laurence Boone, David Haugh, Nigel Pain and Veronique Salins of the OECD estimate a base scenario and a downside scenario.

In a first best-case scenario, the epidemic stays contained mostly in China with limited clusters elsewhere. … In this best-case scenario, overall, the level of world GDP is reduced by up to 0.75% at the peak of the shock, with the full year impact on global GDP growth in 2020 being around half a percentage point. Most of this decline stems from the effects of the initial reduction in demand in China. Global trade is significantly affected, declining by 1.4% in the first half of 2020 and by 0.9% in the year as a whole. The impact on the rest of the world depends on the strength of cross-border linkages with China. …

In the downside scenario, the outbreak of the virus in China is assumed to spread much more intensively than at present through the wider Asia-Pacific region and the major advanced economies in the northern hemisphere in 2020. … Together, the countries affected in this scenario represent over 70% of global GDP … Overall, the level of world GDP is reduced by up to 1.75% (relative to baseline) at the peak of the shock in the latter half of 2020, with the full year impact on global GDP growth in 2020 being close to 1.5%.

Warwick McKibbin and Roshen Fernando simulate seven economic scenarios–three where the disease stays mainly in China, three where a pandemic spreads worldwide, and one in which a mild pandemic recurs each year into the future. For a sense of the range, their low pandemic scenario (S04) estimated 15 million deaths globally, with 236,000 in the US. Their most aggressive pandemic scenario (S06) is based on 68 million deaths worldwide, more than 1 million of them in the US. In this scenario, US GDP falls 8.4 percent in 2020, and the world economy falls by a similar amount.  To get a sense of what this scenario means, it is roughly equivalent to half the world\’s population being infected by the coronavirus, with a mortality rate of 2% for those infected.
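The back-of-the-envelope equivalence in that last sentence is easy to verify. The world population figure of roughly 7.6 billion for 2020 is my assumption; the 68 million deaths comes from the scenario described above.

```python
world_population = 7.6e9   # approximate 2020 world population (assumption)
share_infected = 0.5       # half the world's population infected
mortality_rate = 0.02      # 2% mortality rate among those infected

implied_deaths = world_population * share_infected * mortality_rate
print(f"{implied_deaths / 1e6:.0f} million")  # 76 million, roughly the scenario's 68 million
```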

3) How will the coronavirus affect the world trading system? Weder di Mauro writes:

Supply chain disruptions may also turn out to be larger and more extended than is currently evident. Maersk, one of the world’s largest shipping companies, has had to cancel dozens of container ships and estimates that Chinese factories have been operating at 50-60% of capacity. Shipping goods to Europe from Asia via sea takes about five weeks, so at the moment goods are still arriving from pre-virus times. The International Chamber of Shipping estimates that the virus is costing the industry $350m a week in lost revenues. More than 350 000 containers have been removed and there have been 49% fewer sailings by container ships from China between mid-January and mid-February. … China has become a major source of demand in the world economy and many core European industries are highly dependent on the Chinese market. Sales in China account for up to 40% of the German car industry’s revenues, for example, and they have collapsed over the last weeks.

Richard Baldwin and Eiichi Tomiura write:

There is a danger of permanent damage to the trade system driven by policy and firms’ reactions. The combination of the US’ ongoing trade war against all of its trading partners (but especially China) and the supply-chain disruptions that are likely to be caused by COVID-19 could lead to a push to repatriate supply chains. Since supply chains were internationalised to improve productivity, their undoing would do the opposite. We think this would be a misreading of the lessons. Exclusively depending on suppliers from any one nation does not reduce risk – it increases it. … We should not misinterpret the pandemic as a justification for anti-globalism. Redundant dual sourcing from multiple countries alleviates the problem of excess dependence on China, though with additional costs. Japanese multinationals have already begun diversifying the destinations of foreign direct investment away from China in recent years, not foreseeing COVID-19 but prompted by Chinese wage hikes. We hope more intensive use of ICT enables firms to more effectively coordinate global sourcing.

4) Perhaps there will be a separation of global trade, which isn't likely to transmit pandemics, from the free movement of people, which is more likely to do so. Joachim Voth raises this question clearly:

Fortunately, many – but not all – of the benefits of globalisation can be achieved without enormous health risks. The free exchange of goods and capital does not have to be restricted; only very few diseases are transmitted by contaminated goods. The free movement of people itself also contributes to the advantages of globalisation, but it is far less important for production. It is not obvious that running the risk of coronavirus outbreaks every few years – or worse – is a price worth paying for multiple annual vacation trips to Paris and Bangkok, say. Severe restrictions may well be desirable and justifiable, bringing to an end a half-century of ever-increasing individual mobility. In addition, specific restrictions could be brought in. For countries where, for example, wild animals are regularly sold and eaten (such as China, until recently), the certification for travel could be withheld without restrictions; anyone who comes or returns from there must undergo a medical examination and possibly spend a few weeks in quarantine. This would not only build a virtual plague wall against the next major outbreak, it would also put pressure on health authorities around the world to restrict dangerous practices that allow pathogens to jump from one species to the next. Even if airlines, hoteliers and tour operators would suffer from such rules in the short term and would complain, the lesson from Wuhan should be that we need a broad discussion within and outside of academia about how much mobility is actually desirable.

Voth also reminds us of some grim historical episodes:

The ship, Grand Saint Antoine, had already come to the attention of the port authority of Livorno. A cargo ship from Lebanon loaded with expensive textiles, it reached the port of Marseille in 1720. The Health Commission had its doubts – the plague was widespread in the eastern Mediterranean. Like all ships from affected regions, the Grand Saint Antoine was placed in quarantine. Normally, the crew and the property would have had to stay on board for 40 days to rule out the possibility of an infectious disease. But a textile fair near Marseille, where the importing merchants hoped for rich business, would soon begin. Under pressure from the rich traders, the health agency changed its mind. The ship could be unloaded, the crew went to town. 

After only a few days it was clear that changing the initial decision had been a mistake. The ship had carried the plague. Now the disease spread like a forest fire in the dry bush. The city authorities in Marseille could not cope with the number of deaths, with corpses piling up in the streets. … At the behest of the French king and the pope, a plague wall (Mur de Peste) was built in Provence. Tourists can still see parts of it today. The wall was over two meters high and the watchtowers were manned by soldiers. Those who wanted to climb over it were prevented from doing so by force. Although some individuals managed to escape, the last major outbreak of black death in Europe was largely confined to Marseille. While probably 100,000 people – about a third of the population – died in Marseille, the rest of Europe was spared the repeated catastrophe of 1350 when millions of people lost their lives. 

5) Should the economic policies in response to the coronavirus be general or targeted? 

By general policies, I mean cuts in interest rates by central banks, or plans for government to send out checks to everyone (or, in a US context, to cut Social Security payroll tax rates). By targeted policies, I mean economic policies where the government focuses on specific issues like sick pay for workers not covered by employers, medical bills, support for small and medium firms with cash-flow problems, making sure banks have funds to lend and are not pushing firms into bankruptcy right now, and support for specific hard-hit industries like airlines and tourism.

John Cochrane put it this way:

We need a detailed pandemic response financial plan, sort of like an earthquake, flood, fire, or hurricane plan that (I hope!) local governments and FEMA routinely make and practice. Is there any such thing? Not that I know of, but I would be interested to hear from knowledgeable people if I am simply ignorant of the plan and it’s really sitting there under “Break glass in emergency” down in a basement of the Treasury or Fed. Without a pre-plan, can our political system successfully make this one up on the fly, as they made up the bank bailouts of 2008?

Then we have to figure out how to prevent the atrocious moral hazard that such interventions produce. Pandemics are going to be a regular thing. Ex-post bailout reduces further the incentive for ex-ante precautionary saving. Too good a fire department, and people store gasoline in the basement.
This starts down the same bailout and regulate road that suffocates our debt-based banking system. I welcome better ideas.

6) Will manufacturing or services be hit harder? 

Richard Baldwin and Eiichi Tomiura emphasize the problem for manufacturing:

An important point is that manufacturing is special. Manufactured goods are – on the whole – ‘postpone-able’ purchases. As we saw in the Great Trade Collapse of 2009, the wait-and-see demand shock impacts durable goods more than non-durable goods. In short, the manufacturing sector is likely to get a triple hit.

  1. Direct supply disruptions hindering production since the disease is focused on the world’s manufacturing heartland (East Asia), and spreading fast in the other industrial giants – the US and Germany.
  2. Supply-chain contagion will amplify the direct supply shocks as manufacturing sectors in less-affected nations find it harder and/or more expensive to acquire the necessary imported industrial inputs from the hard-hit nations, and subsequently from each other.
  3. Demand disruptions due to (1) macroeconomic drops in aggregate demand, i.e. recessions, and (2) precautionary or wait-and-see purchase delays by consumers, and investment delays by firms.

However, Catherine Mann points out that while manufacturing may be hit harder in the short term, it is also more likely to recoup its losses:

Manufacturing will show a ‘V’ or ‘U’ shape. Manufacturing spillovers from factory closures loom large in the near term, but production will rebound to restock inventories once quarantines end and factories reopen. However, the duration of closures, as well as spillovers through supply chains and through virus cases and closures worldwide, will generate a set of Vs that should take on a U-shape in the global data. Importantly, the loss to global growth momentum will drag on both in individual country data and global rebound economic data, particularly trade and industrial production. Services, on the other hand, will experience an ‘L’ shape. The shock to tourism, transportation services, and domestic activities generally will not be recovered, and the projected slowing of global growth will further weigh on the L-shape evolution of demand for these non-storable tradeable services. Domestic services also will bear the brunt of the outbreak, depending in part on the responses of authorities, business, and consumers.

American Mobility: From "Westward Ho" to "Home Attachment"

The number of Americans who move in a given year has been declining. Here's an illustrative figure from the Economic Report of the President (February 2020, White House Council of Economic Advisers):

The reasons for this decline, and what (if anything) should be done about it, have been murky. Kyle Mangum offers an historical interpretation of the change in "No More Californias: As American mobility declines, some wonder if we've lost our pioneer spirit. A closer look at the data suggests that the situation is less dire—and more complicated—than it at first appears" (Economic Insights, Federal Reserve Bank of Philadelphia, Winter 2020, pp. 8-13). He describes the potential economic problems resulting from lower mobility in this way:

Economists widely view labor mobility as the principal mechanism by which regions adjust to local economic shocks. If local industries fall on hard times, workers can leave; in places where labor demand is high, new residents flow in. The decline has therefore generated concern that the economy is less adaptable to local shocks, ultimately resulting in labor misallocation, unrealized output, and lower productivity.

Some of the seemingly plausible explanations for the decline don't hold up under closer examination. For example, one might hypothesize that the decline in mobility arises because older people are less likely to move long distances and the US population is aging. But as Mangum points out (footnotes omitted):

Researchers have shown that typical aging differences are not quantitatively big enough to generate the observed national decline. Perhaps more importantly, the decline is present within age groups, so that young people today, for instance, are also moving less than their parents did at the same age. Moreover, aging has occurred at similar rates across cities, so there is no scope for aging to explain the spatial differences in the decline.

Instead, Mangum offers an interpretation of declining geographic mobility in the last few decades based in long-term US history. Here's a map from the US Census Bureau showing the movement of the location of the average center of US population from 1790-2010:

Mean Center of Population for the United States: 1790 to 2010

Mangum offers a table showing how much the center of population moved during each decade.

The basic patterns here will make sense to those with even a basic familiarity with US history. The US has had a general movement of population to the west and south. For an illustration of the process, the US Census Bureau has a map which shows the movement of the American "frontier" from 1790 to 1890. In 1890, the Census Bureau famously announced what has been known as the "closing of the western frontier": "In 1890, the Superintendent of the Census described the western part of the country as having so many pockets of settled area that a frontier line could no longer be said to exist."

The movement to the south and west then mostly lags for several decades in the early 20th century, including the periods of World War I, the Great Depression, and World War II. But after World War II there is a renewed population shift to the south and west from the 1950s through the 1980s–with a marked slowdown in the movement of the geographic center of the US population since then.

Mangum suggests that the motivations to move have diminished in modern America.

[T]here is reason to expect that massive population changes across regions—of the degree seen from colonization to westward expansion—will no longer be business as usual. The major differences in regional habitability have diminished. Transportation has crisscrossed the continent, water delivery-and-control infrastructure has been put in place, and air conditioning is ubiquitous. Technologies today focus on speed and efficiency within cities, not on developing new cities. And in the digital age, new technologies are less spatial. Population growth today is more balanced across locations compared to the skewness of the early and middle 20th century. … And this population growth is occurring more within regions than across regions. To the extent that imbalances exist, growing places are established cities rising in the urban hierarchy, leaving the rest of their home region behind and largely drawing people from within their region. …

So perhaps the U.S. is finally in a “long-run spatial equilibrium,” as some have suggested. The term suggests that households' incentives to relocate have diminished, either because places are more similar than they used to be, or structural changes in the economy have caused real estate and labor prices to rationalize spatial differences, so that, in either case, relative population adjustments across space are no longer necessary.

Mangum also refers to the extent of "home attachment," in which "[p]eople living near their birthplace show a strong proclivity to remain in their location compared with people born out of state. A transplanted population, by contrast, is more transient and more subject to various idiosyncratic changes in circumstance. For example, if someone moved to a new place for a job, and the job dissolves for whatever reason, they are likely to move away. Someone with strong local ties whose job dissolves is more inclined to search locally."
Thus, when Americans were first moving in substantial numbers to the south and west, the recent arrivals were still somewhat transient. If for some reason their first move didn't work out, they would move again. But over time, more and more people identify as being from places in the south and west, and with this added "home attachment" become less likely to move again.
The historical shift that Mangum describes seems plausible to me (although I'd be interested in seeing a parameterized model showing that a connection from greater home attachment to less moving can explain the overall observed patterns). But, as Mangum points out, there are two ways to interpret the fact that many of those who live in struggling labor markets in higher-unemployment cities have been unlikely to move. One interpretation is that they have strong "home attachment." The alternative interpretation is that moving from slower-growth to higher-growth urban areas has become more difficult and risky than it used to be, in substantial part because housing costs have become so high in high-growth urban areas. A chicken-and-egg problem emerges for someone thinking about such a move: they can't afford the high housing costs in the new city unless they already have a job lined up, and they can't line up a job unless they first move to the new city. There are a variety of other possible barriers to moving as well, including rules for occupational licensing that differ across states, a lack of investment in the public transit systems that are more heavily used by those with lower incomes, and so on.

In this interpretation, some of the decline in US geographic mobility may be that we have "no more Californias." But part of the mobility decline may also be that state and local policies that affect housing, jobs, and transportation are discouraging a number of potentially willing movers.
Here are some previous posts and writings on the decline of US geographic mobility.

Women in Economics: The Early-Stage Problem

The share of women has risen substantially in many academic areas, but less so in economics. Shelly Lundberg has edited a VoxEU.org e-book on Women in Economics (March 2020, free registration required). It includes an introduction and 18 short and readable essays, many of which summarize and refer to research presented in more detail elsewhere. Thus, it's a good way to get up to speed on thinking in this area. Here, I'll point to what seems to me an underemphasized topic in this research, which comes out of what is sometimes called a "pipeline" approach.

The basic idea here is that there is a pipeline to becoming a tenured professor of economics. It commonly starts with taking economics in college, doing an undergraduate major in economics, entering an economics PhD program, completing a PhD, getting a job as an assistant professor, and then being promoted to full professor. One can look at the share of women at each stage of the process and get a sense of where the representation of women is falling behind.

Here\’s a figure from \”Women in Economics: Stalled Progress,\” by Shelly Lundberg and Jenna Stearns. The line that is highest in the top right corner shows the share of women among senior economics majors: it\’s been in the range of 30-35% for the last 20 years.

The next two lines show the share of women among first-year PhD students in economics and among new PhDs. There is some "leakage" in the pipeline here, in the sense that the share of women in PhD programs in economics is lower than the share who are senior undergraduate majors in economics. Back in the 1990s, the share of women starting an economics PhD was higher than the share completing one, but that gap went away in the early 2000s.

The share of women among assistant professors in economics, shown by the blue line, roughly equalled the share of women in economics PhD programs around 2009, but since then has dropped off. The share of women among assistant professors of economics used to be much higher than the share who became associate professors of economics, but that gap has closed in the last 5-10 years. The share of women who are full professors of economics has been rising, although it lags behind the rise in associate professors.

Much of this book focuses on analysis and proposals for addressing the later steps in the career pipeline to becoming an economics professor. Here are a few examples, as Lundberg summarizes them in her overview essay:

  • \”Erin Hengel was the first to point out that economics research papers written by women appear to be held to higher standards in the publishing process than papers written by men. As in several other professions (medicine, real estate, law), there appears to be a quality/quantity tradeoff, with female economists producing less output of higher quality than equivalent men. In her chapter, Hengel summarises the results of her study, showing that female-authored  papers at some elite journals are subjected to extended review times, and result in published papers with abstracts that are significantly more readable, according to standard measures.\”
  • \”The chapter by Lorenzo Ductor, Sanjeev Goyal, and Anja Prummer reports the findings of their study of gender differences in the collaborative networks of economists. Using the EconLit database, they undertake a detailed analysis of co-authoring patterns, and find that women work with a smaller network of distinct co-authors than men and tend to collaborate repeatedly with the same co-authors and their co-authors’ collaborators, constructing a tighter network. Since larger networks are associated with higher levels of research output, these patterns may disadvantage women.\”
  • \”Laura Hospido and Carlos Sanz use data from three large general-interest academic conferences to test for gender gaps in the evaluation of submissions. After controlling for a rich set of controls for author and paper quality, including author characteristics, field, paper cites at submission, eventual publication of the submitted paper, and referee fixed effects, they find that all-female-authored papers are about 7% less likely to be accepted than all-male-authored papers.\” 
  • \”The chapter by Donna Ginther, Janet Currie, Francine Blau, and Rachel Croson reports on a follow-up assessment of CSWEP’s flagship intensive mentoring programme, CeMENT. Causal estimates of the impact of such programmes are rare, but this evaluation, based on participants and those who were randomised out of the over-subscribed programme in 2004-2014, provides an unusual opportunity to gauge their potential effectiveness on short- and long-term outcomes. The estimates show that access to CeMENT increased the probability to having a tenure stream job by 14.5% and increased the probability of having tenure in a top-50 ranked institution by 9.0 percentage points. Most of the impact on tenure can be attributed to significant increases in pre-tenure publications in top-five and other highly regarded journals, but the effect on tenure is marginally significant even after controlling for these factors, suggesting that mentoring may provide professional advantages to women beyond easily-observable productivity metrics.\”

There are also some findings that may run against common intuitions. For example, a common proposal is to make allowances for having children, so that the economists who are parents of children get an extra year or two to publish papers before a decision is made on tenure. However, the evidence on these policies is that they help male economists and disadvantage women. Lundberg explains:

\”In American universities, the fixed-length tenure clock period has been a notable hurdle for assistant professor parents trying to build a tenurable research record, despite a temporary decline in productivity after childbirth. Some universities have introduced policies that stop the tenure clock for mothers, but more have adopted gender-neutral policies that give all new parents an extra year before their tenure decision. Heather Antecol, Kelly Bedard, and Jenna Stearns examine the impact of the rollout of these policies at top-50 US economics departments between 1980 and 2005, and find that men are 17 percentage points more likely to get tenure at their first job after this policy is adopted, while women are 19 percentage points less likely to do so. Men who gain more pre-tenure time as a result of this policy are more likely to publish an additional article in a top journal but women, who appear to bear more of the costs of a new child, do not.\” 

However, the alert reader will notice that the studies I have mentioned all focus on what happens after people have already become professors – not earlier in the pipeline. In addition, the share of women among senior economics majors and among economics PhD students has been pretty much flat for the last decade or so. If there isn't a major adjustment at the front end of the pipeline, the share of women who become full professors of economics will necessarily be limited.

The evidence on how to boost the number of women who become undergraduate economics majors is thin, at least so far. As Lundberg writes: 

Tatyana Avilova and Claudia Goldin tell the story of the Undergraduate Women in Economics (UWE) Challenge, which they have led since 2015. Economics departments were recruited into the programme, randomised into treatment and control groups, and treatment institutions received funding and guidance to initiate interventions to increase the number of female economics majors. These interventions fell into one or more of three groups: (1) providing students with better information about what economists do, (2) providing mentoring and role models and creating networks among students, and (3) improving the content and relevance of introductory economics courses.

Final data on results from UWE projects are not yet available, but one successful intervention that was administered as a field experiment is described in the next chapter by Catherine Porter and Danila Serra. Lack of female role models is often noted as a barrier to women choosing economics as an undergraduate major. The authors implemented a relatively inexpensive intervention in randomly-chosen Principles of Economics classes. Successful and ‘charismatic’ alumnae of the programme were chosen with the help of current undergraduates to visit classes briefly and discuss their educational experiences and career paths. The results were dramatic: role model visits increased the probability that treated female students would major in economics by eight percentage points (from a base of 9%) with no impact on male majors. This provides strong evidence of the salience of role models for women’s choice of major. 

In addition, many students arrive at college already having a sense of which majors interest them and which they want to avoid. When I looked at who takes the AP microeconomics and macroeconomics exams a couple of years ago, I found that the number of males who get a 4 or 5 on these exams was much higher than the number of females, which probably leads more men to think about taking economics courses in college. Here's a comment from Kasey Buckles in an article in the Winter 2019 issue of the Journal of Economic Perspectives:

However, a serious discussion of strategies for closing the gender gap in economics must also include a look at the pipeline’s source—the K-12 level. Large gender gaps in college major intentions among incoming students suggest that many women are being discouraged from studying economics before they ever enter a Principles classroom (Goldin 2015). Avilova and Goldin (2018) offer an explanation: “Students often think that economics is only for those who want to work in the financial and corporate sectors and do not realize that economics is also for those with intellectual, policy and career interests in a wide range of fields” (p. 1). If women are less interested in finance and business (putting aside how those preferences are formed), then we could be losing many potential economists right out of the gate as a result of this misperception. … [I]t is unlikely that economists will make substantial and lasting progress toward gender balance if we ignore the K-12 experience. More innovation and research is needed on this front …

My own nagging sense, based on a stream of anecdotes and personal experiences rather than hard evidence, is that the fundamental presentation of what the subject of economics is all about is, on average, less attractive to young women than to young men. I also think the way in which economics is first presented often doesn't give much sense of the breadth and richness of the topics that economists actually study.

It seems reasonable to say that this book uses the "Symposium on Women in Economics" in the Winter 2019 issue of the Journal of Economic Perspectives as a launching pad. The three papers in that symposium, all of which are also represented in this book, include:

Some other writing potentially of interest on this subject includes:

Time to End Single-House Zoning?

Here\’s an interesting metaphor for economists to consider from Michael Manville, Paavo Monkkonen & Michael Lens in their essay, \”It’s Time to End Single-Family Zoning\”:

Suppose that, for your wellbeing, you need regular access to only a small amount of expensive medicine. One day you go to the pharmacy and learn the government has implemented a new rationing system strictly limiting the number of sales that can occur in small doses. Because many people, like you, only need small doses, the new rule results in few small doses being available. Plenty of medicine is available—you can see it over the counter—but the pharmacist can only sell it in large quantities. So you are stuck. If you want your medicine, you must buy more than you need, at a price higher than you can afford. This new rationing system is also strictly enforced. Not only must you buy in large quantities, but you cannot divide up your ration afterward and sell your extra doses to others who might need and value them.

Most people, we suspect, would consider such a rationing system unjust and inefficient. It would force a large number of people to spend and consume more than they otherwise would, subsidize the smaller number of people who want and can afford large doses, and keep some people from getting medicine at all. Fortunately, the United States does not allocate medicine in this bizarre manner. But it does ration urban land this way.

The authors offer this metaphor as part of their argument for the abolition of single-house zoning in cities. The most recent issue of the Journal of the American Planning Association offers two viewpoint articles advocating the abolition of single-house zoning, along with seven short commentaries, and then two rejoinders from the authors of the viewpoint articles.

It\’s useful to be clear on just what this proposal entails. It is not a call for the abolition of zoning, or for the abolition of any and all rules concerning what can be built on a certain residential lot. There could still be rules regarding issues like height or setbacks from property lines. If people want to live in a detached single house, they would be free to continue doing so. However, if your neighbor wants to turn their existing house (for the sake of argument, say without changing the physical envelope of the house) into townhouses or a a duplex or triplex, they would be free to do so.

In his essay, Jake Wegman makes the point this way: "Does a zoning category or other type of regulation prohibit everything but a single-family detached house on a large lot? If so, it should be contested. My argument is that there is no defensible rationale grounded in health, safety, or public welfare for effectively mandating a 3,000-ft2 house with one unit while prohibiting three 1,000-ft2 units within the same building envelope. … Regardless of the specifics, single-family zoning should be replaced with regulations that allow some form of low-rise, middle-density housing—or 'Missing Middle'—to be built as of right."

As Manville, Monkkonen and Lens point out, single-house or R1 zoning started as a replacement for explicitly racial zoning (citations and footnotes omitted): 

R1 arose, at least in part, from invidious motives. It was built on arguments about the sort of people who don’t live in detached single-family homes and the harms that would arise if they mixed, socially or as fellow taxpayers, with those who do. R1 first proliferated after the Supreme Court struck down racial zoning in 1917’s Buchanan v. Warley decision. Buchanan made single-family mandates appealing because they maintained racial segregation without racial language. Forcing consumers to buy land in bulk made it harder for lower income people, and therefore most non-White people, to enter affluent places. R1 let prices discriminate when laws could not.

Contemporary observers denounced this regime of backdoor segregation, but in 1926 the Supreme Court upheld it. In Village of Euclid v. Ambler Realty Co. (1926), the court tacitly excused R1’s implicit racism by validating its explicit classism. Cities could prohibit apartments, the court said, because apartments were nuisances: “mere parasites” on the value and character of single-family homes. In Euclid’s wake, R1 became a quiet weapon of the White and wealthy in their campaign to live amid but not among the non-White and poor.

Today’s planners cannot be blamed for R1’s origins; however, the past throws a long shadow over the system they now administer. R1 delivers large and undeniable benefits to some people who own property. In places where housing demand is high, R1 inflates home values and protects the physical character of neighborhoods. But its social costs exceed these private benefits. Higher property values for owners mean higher rents for tenants. Because homeowners as a group are richer and Whiter than renters, policies that increase housing prices redistribute resources upward, increasing homeowner wealth, reducing renter real incomes, and exacerbating racial wealth gaps.

These viewpoints are careful to note that they do not expect the removal of single-family zoning to solve all problems of high housing prices, social inequality, long commutes, traffic congestion, high energy use, and so on. Indeed, they accept and expect that in many neighborhoods, the abolition of single-house zoning might not make much difference at all. They are just arguing that perceived benefits of single-family zoning–which are often phrased in terms of the traits of those who for various reasons need to be excluded from neighborhoods–do not justify the costs.

Manville, Monkkonen and Lens offer some thoughts on the potential importance of altering location outcomes (again, citations and footnotes omitted):

Where people live directly affects their exposure to pollution and violence, the quality of schools their children can attend, and the jobs they can reach. Residential location is thus strongly correlated with many life outcomes, from earnings to educational attainment to mental and physical health. Location, moreover, has not just large but multigenerational returns, yielding better outcomes for people who move in and their children as well.

Because opportunity is unevenly distributed both between and within metropolitan areas, and because moving people to opportunities is generally easier than moving opportunities to people, letting more people live in the most prosperous and amenity-rich neighborhoods of our urban areas would dramatically increase wellbeing. Many people, however, are effectively barred from these cities and neighborhoods because access to them is sold primarily in large, expensive, and inefficient chunks—through R1. Lower and middle-income families would benefit immensely from a small foothold in prosperous neighborhoods—perhaps a modest apartment or duplex—but R1’s prevalence means few such small footholds are available. The result is scarce housing in desirable places.

The city of Minneapolis, near where I live and work, will be a laboratory for these kinds of changes.  As Paul Mogush and Heather Worthington note in one of the comments:

In Minneapolis, allowing at least three residential units on each parcel throughout the city is part of a larger package of housing and land use policy changes intended to increase housing supply, choice, and affordability. These strategies include inclusionary zoning, increased investment in affordable housing, and tenant protections. The city’s new comprehensive plan also moves to allow multistory, multifamily development by right on all frequent bus routes and around light rail transit stations and makes it possible to build “missing middle” housing types in neighborhood interiors close to downtown that had previously been downzoned. The intent is to increase predictability in the marketplace. Planners cannot effectively address the challenges of racial inequities, housing affordability, and climate change by fighting a battle for every new apartment building.

Here\’s are links to the full set of  two viewpoint articles, seven commentaries, and two rejoinders:

Viewpoints

Commentaries

Rejoinders

Thoughts on Sumptuary Laws: Adam Smith to Plastic Bags

"Sumptuary laws" typically refers to laws in the Middle Ages that were passed in part to limit conspicuous consumption, and in part to enforce lines of social distinction, so that, say, only the nobility could wear certain fabrics or colors. Melissa Snell offers a quick overview of "Medieval Sumptuary Laws" (ThoughtCo.com, March 29, 2019).

The topic was on my mind because of a recent essay by John Tierney in City Journal, who argues that current bans against plastic bags or plastic straws represent a modern version of the sumptuary laws (Winter 2020, "The Perverse Panic over Plastic: The campaign against disposable bags and other products is harming the planet and the public"). Tierney writes:

Today’s plastic bans represent a revival of sumptuary laws (from sumptus, Latin for “expense”), which fell out of favor during the Enlightenment after a long and inglorious history dating to ancient Greece, Rome, and China. These restrictions on what people could buy, sell, use, and wear proliferated around the world, particularly after international commerce increased in the late Middle Ages.

Worried by the flood of new consumer goods and by the rising affluence of merchants and artisans, rulers across Europe enacted thousands of sumptuary laws from the thirteenth to the eighteenth centuries. These included exquisitely detailed rules governing dresses, breeches, hose, shoes, jewelry, purses, bags, walking sticks, household furnishings, food, and much more—sometimes covering the whole population, often specific social classes. Gold buttons were verboten in Scotland, and silk was forbidden in Portuguese curtains and tablecloths. In Padua, no man could wear velvet hose, and no one but a cavalier could adorn his horse with pearls. It was illegal at dinner parties in Milan to serve more than two meat courses or offer any kind of sweet confection. No Englishwoman under the rank of countess could wear satin striped with silver or gold, and a German burgher’s wife could wear only one golden ring (and then only if it didn’t have a precious stone).

Religious authorities considered these laws essential to curb “the sin of luxury and of excessive pleasure,” in the words of Fray Hernando de Talavera, the personal confessor to Spain’s Queen Isabella. “Now there is hardly even a poor farmer or craftsman who does not dress in fine wool and even silk,” he wrote, echoing the common complaint that imported luxuries were upsetting the social order and causing everyone to spend beyond their means. In justifying her sumptuary edicts, England’s Queen Elizabeth I lamented that the consumption of imported goods had led to “the impoverishing of the Realme, by dayly bringing into the same of superfluitie of forreine and unnecessarie commodities.”

But like the Americans who go on using plastic bags, the queen’s subjects refused to give up their “unnecessarie commodities.” The sumptuary laws failed to make much impact in England or anywhere else, despite the rulers’ best efforts. Their agents prowled the streets and inspected homes, confiscating taboo luxuries and punishing violators—usually with fines, sometimes with floggings or imprisonment. But the conspicuous consumption continued. If silk was banned, people would find another expensive fabric to flaunt. Rulers had to keep amending their edicts, but they remained one step behind, and often the laws were flouted so widely that the authorities gave up efforts to enforce them.

For historians, the great puzzle of sumptuary laws is why rulers went on issuing them for so many centuries despite their ineffectiveness. … The laws didn’t curb the public’s sinful appetite for luxury or contribute to national prosperity, but they comforted the social elite, protected special interests, enriched the coffers of church and state, and generally expanded the prestige and power of the ruling class. For nobles whose wealth was eclipsed by nouveau-riche merchants, the laws reinforced their social status. The restrictions on imported luxuries shielded local industries from competition. The fines collected for violations provided revenue for the government, which could be shared with religious leaders who supported the laws. Even when a law wasn’t widely enforced, it could be used selectively to punish a political enemy or a commoner who got too uppity.

The laws persisted until the waning of royal sovereignty and church authority, starting in the eighteenth century. As intellectuals promoted new rights for commoners and extolled the economic benefits of free trade, sumptuary laws came to be seen as an embarrassing anachronism. Yet the urge to rule inferiors never goes away.

Those interested in the question of whether bans on plastic bags and plastic straws make economic and environmental sense can start with Tierney's article. A couple of my own posts on plastics include:

Here, I want to focus on a different topic: Does it make sense to think of bans on plastic bags in the same category as rules related to wearing gold buttons or silk, or serving two meat courses? As for all topics, a first place to turn is Adam Smith, who mentions sumptuary laws several times in the Wealth of Nations. (As usual, I quote here from the online version freely available at the Library of Economics and Liberty website.)

One mention of sumptuary laws comes up during Smith's discussion "Of the Accumulation of Capital, or of Productive and Unproductive Labour" in Book II, Chapter III. Smith argues tartly that productivity has been rising in England, mostly because of the frugality and efforts of individuals, not kings and ministers. Thus, Smith writes (boldface added):

The annual produce of its land and labour is, undoubtedly, much greater at present than it was either at the Restoration or at the Revolution. The capital, therefore, annually employed in cultivating this land, and in maintaining this labour, must likewise be much greater. In the midst of all the exactions of government, this capital has been silently and gradually accumulated by the private frugality and good conduct of individuals, by their universal, continual, and uninterrupted effort to better their own condition. It is this effort, protected by law and allowed by liberty to exert itself in the manner that is most advantageous, which has maintained the progress of England towards opulence and improvement in almost all former times, and which, it is to be hoped, will do so in all future times. England, however, as it has never been blessed with a very parsimonious government, so parsimony has at no time been the characteristical virtue of its inhabitants. It is the highest impertinence and presumption, therefore, in kings and ministers, to pretend to watch over the œconomy of private people, and to restrain their expence, either by sumptuary laws, or by prohibiting the importation of foreign luxuries. They are themselves always, and without any exception, the greatest spendthrifts in the society. Let them look well after their own expence, and they may safely trust private people with theirs. If their own extravagance does not ruin the state, that of their subjects never will.

The arguments over plastic bags do not seem to fit well into this discussion of extravagance and opulence. However, the topic of sumptuary laws also comes up in Smith's discussion of "taxes on consumable commodities" in Book V, Chapter II. Smith argues that commodities can be classified into "necessities" and "luxuries," and further argues that taxes on necessities will lead to a rise in the wages of labor–in modern terms, we would say that taxes on necessities are passed on to employers. However, Smith argues that taxes on luxuries like tobacco and alcohol can be viewed as sumptuary laws that may have a beneficial effect on the poor. Smith writes:

It is otherwise with taxes upon what I call luxuries, even upon those of the poor. The rise in the price of the taxed commodities will not necessarily occasion any rise in the wages of labour. A tax upon tobacco, for example, though a luxury of the poor as well as of the rich, will not raise wages. …  The different taxes which in Great Britain have in the course of the present century been imposed upon spirituous liquors are not supposed to have had any effect upon the wages of labour. …

The high price of such commodities does not necessarily diminish the ability of the inferior ranks of people to bring up families. Upon the sober and industrious poor, taxes upon such commodities act as sumptuary laws, and dispose them either to moderate, or to refrain altogether from the use of superfluities which they can no longer easily afford. Their ability to bring up families, in consequence of this forced frugality, instead of being diminished, is frequently, perhaps, increased by the tax. It is the sober and industrious poor who generally bring up the most numerous families, and who principally supply the demand for useful labour. 

This meaning of sumptuary laws is a little closer to the plastic bag application. In modern language, overuse of tobacco and alcohol has negative social consequences, and thus discouraging their use is appropriate. The argument made for bans on plastic bags is that they have negative social consequences, too. But ultimately, it's hard for me to view rules about plastic bags and straws (whether one favors or opposes such proposals) as attempts to control what Smith would call the "luxuries" of the poor, or as an attempt to reinforce class distinctions.

My own sense is that environmental concerns about plastic in the environment have a sound basis, but that plastic grocery bags and plastic straws are a visible but insignificant part of that overall problem. Moreover, it has become apparent that a substantial share of efforts that claimed to recycle plastics in the past actually involved exporting them to China and other Asian destinations; in retrospect, putting that waste plastic into landfills might well have been a preferable environmental choice.