Sick Pay Benefits: A Labor Market and Public Health Issue

Providing sick pay to workers is often discussed in terms of fairness or social insurance against the risk of declining income, but it also has an important public health dimension. Employers might prefer that sick workers remain at home, rather than passing on their illness to the rest of the workforce. But workers who do not have sick pay won’t get paid if they don’t show up.

Most high-income countries in the world have government-required provision of sick pay. Researchers at the World Policy Analysis Center at UCLA compiled data in a 2018 report, “Paid Leave for Personal Illness: A Detailed Look at Approaches Across OECD Countries.” They write that of 34 OECD countries, only the US and Korea do not have a guarantee of paid leave for personal illness. The details of implementation vary across countries, of course. For example, two of these countries make employers solely responsible for paying for sick leave, nine countries make government solely responsible, and 21 have a mixture of the two. In the mixed systems, a common pattern is that employers pay for the first few weeks of sick leave, and then government takes over up to some limit like three or six months. Another common pattern in these countries is that sick leave pays 80% of regular pay.

In the US, sick leave is much less likely in lower-paying jobs. Here’s a figure showing the pattern from the Kaiser Family Foundation:


Efforts to enact a federal sick pay rule in the US have gone nowhere. However, starting with San Francisco in 2007, a number of state and local governments have passed such rules in the last few years. Twelve states now have such laws, as do a couple dozen cities, including New York City, Chicago, Philadelphia, Washington DC, Seattle, and Portland. The typical pattern for these laws is that all employees earn one hour of paid sick leave for every 30 to 40 hours worked. Of course, the idea behind this design is that a worker can’t take a job and then immediately take paid sick leave, but a worker will accumulate roughly a day of paid sick leave for every eight weeks worked. For a detailed and updated “Interactive Overview of Paid Sick Time Laws in the United States,” see the A Better Balance website.
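
To make the accrual arithmetic concrete, here is a minimal sketch in Python, assuming a 40-hour work week and an 8-hour sick day (both my assumptions, since the details vary across these laws):

```python
# Hypothetical illustration of the accrual rule described above:
# workers earn 1 hour of paid sick leave per 30-40 hours worked.
def weeks_to_accrue_one_day(hours_per_sick_hour, hours_per_week=40, hours_per_day=8):
    """Weeks of full-time work needed to accrue one 8-hour sick day."""
    hours_needed = hours_per_sick_hour * hours_per_day
    return hours_needed / hours_per_week

for rate in (30, 40):
    print(f"1 sick hour per {rate} hours worked: "
          f"{weeks_to_accrue_one_day(rate):.0f} weeks per sick day")
# 1 sick hour per 30 hours worked: 6 weeks per sick day
# 1 sick hour per 40 hours worked: 8 weeks per sick day
```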

What are the effects of these laws? Stefan Pichler and Nicolas R. Ziebarth have written “Labor Market Effects of U.S. Sick Pay Mandates,” forthcoming in the Spring 2020 issue of the Journal of Human Resources (55:2, pp. 611–659). They look at employment and wage data from 2001 to 2016 for nine cities and four states that enacted sick pay rules. They create what is called a “synthetic control group”: a set of cities and states that historically followed the same patterns of employment and wages, but did not adopt a sick pay rule. Then they can see whether adopting sick pay causes a change in employment and wages compared to this control group. They find no evidence of such a change.
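
The synthetic control idea can be sketched in a few lines: choose nonnegative weights, summing to one, on untreated “donor” units so that the weighted average matches the treated unit’s pre-mandate path, then read the post-mandate gap as the estimated effect. Here is a toy version with simulated data; it is only a sketch of the method, not the authors’ actual implementation:

```python
# Toy synthetic control: fit weights on donor states to match the treated
# state's pre-mandate employment path, then compare post-mandate outcomes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T_pre, T_post, n_donors = 10, 6, 5
donors = rng.normal(100, 2, (T_pre + T_post, n_donors)).cumsum(axis=0) / 10 + 100
treated = donors[:, :3].mean(axis=1) + rng.normal(0, 0.1, T_pre + T_post)

def pre_period_gap(w):
    """Squared distance between treated and synthetic paths before the mandate."""
    return np.sum((treated[:T_pre] - donors[:T_pre] @ w) ** 2)

res = minimize(pre_period_gap, np.full(n_donors, 1 / n_donors),
               bounds=[(0, 1)] * n_donors,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
synthetic = donors @ res.x
post_gap = treated[T_pre:] - synthetic[T_pre:]  # estimated effect of the mandate
print("post-mandate treated-minus-synthetic gap:", post_gap.round(2))
```

With simulated data like this, where no treatment effect was built in, the post-mandate gap hovers near zero, which is the kind of null result the authors report for actual employment and wages.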

In a follow-up study, Johanna Catherine Maclean, Stefan Pichler, and Nicolas R. Ziebarth have published a working paper, “Mandated Sick Pay: Coverage, Utilization, and Welfare Effects” (March 2020, NBER Working Paper #26832, not freely available, but readers may have access through their institutions). This study focuses on state-level sick-pay mandates, with data from 2009-2017. During this period, states adopted sick pay mandates at different times. Thus, for these states and in comparison with other states, one can look at how patterns of sick leave coverage change when sick-pay mandates are adopted. They find:

Within the first two years following mandate adoption, the probability that an employee has access to paid sick leave increases by 18 percentage points from a base coverage rate of 66%. The increase in coverage persists for at least four years without rising further. Over all post-mandate periods covered by this paper, we find a 13 percentage point higher coverage rate attributable to state mandates. As a result of the increased access to paid sick leave, employees take more sick days …  newly covered employees take two additional sick days per year. Employer sick leave costs also increase, but effect sizes are modest. On average, the increase amounts to 2.7 cents per hour worked … Further, we find little evidence that sick pay mandates crowd-out non-mandated benefits such as paid vacation or holidays. Likewise, we find no evidence that employers curtail the provision of group policies such as health, dental, or disability insurance.
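
One simple way to see the logic behind numbers like these is a difference-in-differences contrast between adopting and never-adopting states. A minimal sketch, with made-up coverage rates chosen only to echo the magnitudes quoted above (the paper itself uses far richer survey microdata):

```python
# Stylized difference-in-differences for a sick-pay mandate rollout,
# collapsed to one adopting group and one never-adopting group.
adopter_pre, adopter_post = 0.66, 0.84   # assumed coverage rates, adopting states
control_pre, control_post = 0.66, 0.71   # assumed rates, never-adopting states

change_adopters = adopter_post - adopter_pre      # 0.18: raw change in adopters
change_controls = control_post - control_pre      # 0.05: background trend
did_estimate = change_adopters - change_controls  # 0.13: mandate-attributable
print(f"difference-in-differences estimate: {did_estimate:.2f} (13 percentage points)")
```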

These studies are focused on labor market issues, and do not take public health effects into account. However, in a different working paper, “Positive Health Externalities of Mandating Paid Sick Leave” (February 2020), Stefan Pichler, Katherine Wen, and Nicolas R. Ziebarth look at state-level data and find that in the first year after a state enacts sick pay, rates of doctor-certified influenza-like illness fall by about 11%.

This offered broad confirmation of results from an earlier study by Stefan Pichler and Nicolas R. Ziebarth, “The pros and cons of sick pay schemes: Testing for contagious presenteeism and noncontagious absenteeism behavior” (Journal of Public Economics, December 2017, pp. 14-33). Looking at Google Flu data, they found that when U.S. employees gain access to paid sick leave, the general flu rate in the population decreases significantly, which suggests the possibility of less transmission of flu at work.

They also look at sick pay outcomes in Germany, a country with generous sick pay provisions. When German legislative changes allowed some flexibility to reduce sick pay from 100% of previous salary to 80%, the result was a large drop in more nebulous sickness claims like “back pain” but little drop in sickness claims related to infectious illnesses. This pattern suggests a plausible tradeoff: very generous sick pay can lead to workers taking time off for reasons not related to public health, but as sick pay becomes less generous, it will also lead to “contagious presenteeism,” in which contagious workers become more likely to show up at the job.

(In passing, I was also struck by this historical comment about German sick pay in the Pichler and Ziebarth 2017 paper: “Historically, paid sick leave was actually one of the first social insurance pillars worldwide; this policy was included in the first federal health insurance legislation. Under Otto von Bismarck, the Sickness Insurance Law of 1883 introduced social health insurance in Germany, which included 13 weeks of paid sick leave along with coverage for medical bills. The costs associated with paid sick leave initially made up more than half of all program costs, given the limited availability of (expensive) medical treatments in the nineteenth century …”)

Some US companies are now discovering that sick pay may matter to their business: for example, “Amazon announces up to 2 weeks of paid sick leave for all workers and a ‘relief fund’ for delivery drivers amid coronavirus outbreak.” But the kinds of sick pay laws that have been gradually spreading through certain states and cities are partial and incomplete. The novel coronavirus outbreak suggests that a national sick pay policy–probably with employers responsible for the first weeks and then government serving as a back-up–is an issue with broad public health consequences, not just an argument over whether government should require companies to provide certain benefits. Sitting here in March 2020, it would have been nice to have a national sick-pay policy in place a few years ago, as a way of reducing the spread of coronavirus and cushioning the loss of income for those who become sick. But it’s not too early to start prepping for the next pandemic.

Some Coronavirus Economics

Back in the mid-1980s, when I worked for a few years at the San Jose Mercury News as an editorial writer, my boss would sometimes remind us (channeling Murray Kempton): “An editorial writer is someone who comes down from the hills after the battle is over and shoots the wounded.” Similarly, authors of books about important events have the luxury of time and distance before they commit themselves to print. But Richard Baldwin and Beatrice Weder di Mauro, much to their credit, decided to step into the arena of arguments about an appropriate response to the novel coronavirus while the disputes are ongoing by editing an e-book: Economics in the Time of COVID-19 (March 2020, free with registration from VoxEU.com). The very readable book was literally produced over a long weekend: it includes an “Introduction” and 14 short essays, many of them summarizing and drawing on longer work. Here, I’ll draw on some comments from the book as well as my own thoughts.

1) The hard question is how bad the novel coronavirus will get, and the short answer is that nobody really knows. It is already clear that COVID-19 is worse than the SARS outbreak of 2002-3, which worldwide ended up totaling slightly more than 8,000 cases and slightly fewer than 800 deaths. The Johns Hopkins School of Medicine maintains a continually updated page on confirmed cases of coronavirus around the world, as well as deaths and recoveries. As I write, it already shows more than 120,000 cases and more than 4,000 deaths.

For some context, the Centers for Disease Control estimates the cases and deaths from flu in the US each year. In the last decade or so, 2011-12 was a low mark for flu-related deaths, with “only” 12,000. Conversely, 2014-15 and 2017-18 were especially bad flu seasons in the US, with 51,000 and 61,000 deaths respectively. The 2009 swine flu (H1N1) pandemic ended up causing between 151,700 and 575,400 deaths worldwide (according to Centers for Disease Control estimates), most of them in Africa and Southeast Asia.
Predicting the path of an epidemic is difficult. Baldwin and Weder di Mauro offer a useful diagram, showing that in the early stages, a straight-line prediction will dramatically understate the harms, while in the middle stages, a straight-line prediction will dramatically overstate the harms. They offer a comment from Michael Leavitt, a former head of the US Department of Health and Human Services: “Everything we do before a pandemic will seem alarmist. Everything we do after will seem inadequate.” The challenge is to predict the length and peak of the curve, which depends not only on the epidemiology of the disease but also on what public health steps are taken.
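
The point about straight-line predictions can be illustrated with a logistic epidemic curve, which grows roughly exponentially at first and then flattens. A toy example (all numbers invented):

```python
# Naive linear extrapolation against a logistic epidemic curve: it
# understates the early explosion and overstates growth near the peak.
import numpy as np

K, r, t0 = 100_000, 0.5, 20               # assumed final size, growth rate, midpoint
t = np.arange(0, 41)
cases = K / (1 + np.exp(-r * (t - t0)))   # logistic cumulative-case curve

for day in (5, 20, 30):
    slope = cases[day] - cases[day - 1]        # today's straight-line trend
    linear_forecast = cases[day] + 5 * slope   # naive 5-day-ahead forecast
    print(f"day {day:2d}: actual day+5 = {cases[day + 5]:9.0f}, "
          f"linear forecast = {linear_forecast:9.0f}")
```

Early on (day 5), the linear forecast falls far short of the actual curve; at the inflection point (day 20), it badly overshoots.
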
In addition, there is no guarantee that the coronavirus will ever disappear. As Baldwin and Weder di Mauro note: “[T]he virus might become endemic – that is to say, a disease that reappears periodically – in which case COVID-19 could become one of humanity’s constant companions, like the seasonal flu and common cold.”

2) What are some common estimates of potential economic losses from the coronavirus? In their chapter, Laurence Boone, David Haugh, Nigel Pain, and Veronique Salins of the OECD estimate a best-case scenario and a downside scenario.

In a first best-case scenario, the epidemic stays contained mostly in China with limited clusters elsewhere. … In this best-case scenario, overall, the level of world GDP is reduced by up to 0.75% at the peak of the shock, with the full year impact on global GDP growth in 2020 being around half a percentage point. Most of this decline stems from the effects of the initial reduction in demand in China. Global trade is significantly affected, declining by 1.4% in the first half of 2020 and by 0.9% in the year as a whole. The impact on the rest of the world depends on the strength of cross-border linkages with China. …

In the downside scenario, the outbreak of the virus in China is assumed to spread much more intensively than at present through the wider Asia-Pacific region and the major advanced economies in the northern hemisphere in 2020. … Together, the countries affected in this scenario represent over 70% of global GDP … Overall, the level of world GDP is reduced by up to 1.75% (relative to baseline) at the peak of the shock in the latter half of 2020, with the full year impact on global GDP growth in 2020 being close to 1.5%.

Warwick McKibbin and Roshen Fernando simulate seven economic scenarios–three where the disease stays mainly in China, three where a pandemic spreads worldwide, and one in which a mild pandemic recurs each year into the future. For a sense of the range, their low pandemic scenario (S04) estimated 15 million deaths globally, with 236,000 in the US. Their most severe pandemic scenario (S06) is based on 68 million deaths worldwide, more than 1 million of them in the US. In this scenario, US GDP falls 8.4 percent in 2020, and the world economy falls by a similar amount. To get a sense of what this scenario means, it is roughly equivalent to half the world’s population being infected by the coronavirus, with a mortality rate of 2% for those infected.
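
The back-of-the-envelope arithmetic behind that last sentence (using a world population of roughly 7.7 billion, my assumption):

```python
# Implied infections in the severe S06 scenario described above.
world_population = 7.7e9
deaths = 68e6              # scenario deaths worldwide
mortality_rate = 0.02      # assumed share of infected people who die

implied_infections = deaths / mortality_rate
print(f"implied infections: {implied_infections:.1e}")   # 3.4e9
print(f"share of world population: {implied_infections / world_population:.0%}")  # ~44%, roughly half
```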

3) How will the coronavirus affect the world trading system? Weder di Mauro writes:

Supply chain disruptions may also turn out to be larger and more extended than is currently evident. Maersk, one of the world’s largest shipping companies, has had to cancel dozens of container ships and estimates that Chinese factories have been operating at 50-60% of capacity. Shipping goods to Europe from Asia via sea takes about five weeks, so at the moment goods are still arriving from pre-virus times. The International Chamber of Shipping estimates that the virus is costing the industry $350m a week in lost revenues. More than 350,000 containers have been removed and there have been 49% fewer sailings by container ships from China between mid-January and mid-February. … China has become a major source of demand in the world economy and many core European industries are highly dependent on the Chinese market. Sales in China account for up to 40% of the German car industry’s revenues, for example, and they have collapsed over the last weeks.

Richard Baldwin and Eiichi Tomiura write:

There is a danger of permanent damage to the trade system driven by policy and firms’ reactions. The combination of the US’ ongoing trade war against all of its trading partners (but especially China) and the supply-chain disruptions that are likely to be caused by COVID-19 could lead to a push to repatriate supply chains. Since the supply chains were internationalised to improve productivity, their undoing would do the opposite. We think this would be a misthinking of the lessons. Exclusively depending on suppliers from any one nation does not reduce risk – it increases it. … We should not misinterpret the pandemic as a justification for anti-globalism. Redundant dual sourcing from multiple countries alleviates the problem of excess dependence on China, though with additional costs. Japanese multinationals have already begun diversifying the destinations of foreign direct investment away from China in recent years, not foreseeing COVID-19 but prompted by Chinese wage hikes. We hope more intensive use of ICT enables firms to more effectively coordinate global sourcing.

4) Perhaps there will be a separation between global trade in goods, which isn’t likely to transmit pandemics, and the free movement of people, which is more likely to do so. Joachim Voth raises this question clearly:

Fortunately, many – but not all – of the benefits of globalisation can be achieved without enormous health risks. The free exchange of goods and capital does not have to be restricted; only very few diseases are transmitted by contaminated goods. The free movement of people itself also contributes to the advantages of globalisation, but it is far less important for production. It is not obvious that running the risk of coronavirus outbreaks every few years – or worse – is a price worth paying for multiple annual vacation trips to Paris and Bangkok, say. Severe restrictions may well be desirable and justifiable, bringing to an end a half-century of ever-increasing individual mobility. In addition, specific restrictions could be brought in. For countries where, for example, wild animals are regularly sold and eaten (such as China, until recently), the certification for travel could be withheld without restrictions; anyone who comes or returns from there must undergo a medical examination and possibly spend a few weeks in quarantine. This would not only build a virtual plague wall against the next major outbreak, it would also put pressure on health authorities around the world to restrict dangerous practices that allow pathogens to jump from one species to the next. Even if airlines, hoteliers and tour operators would suffer from such rules in the short term and would complain, the lesson from Wuhan should be that we need a broad discussion within and outside of academia about how much mobility is actually desirable.

Voth also reminds us of some grim historical episodes:

The ship, Grand Saint Antoine, had already come to the attention of the port authority of Livorno. A cargo ship from Lebanon loaded with expensive textiles, it reached the port of Marseille in 1720. The Health Commission had its doubts – the plague was widespread in the eastern Mediterranean. Like all ships from affected regions, the Grand Saint Antoine was placed in quarantine. Normally, the crew and the property would have had to stay on board for 40 days to rule out the possibility of an infectious disease. But a textile fair near Marseille, where the importing merchants hoped for rich business, would soon begin. Under pressure from the rich traders, the health agency changed its mind. The ship could be unloaded, the crew went to town. 

After only a few days it was clear that changing the initial decision had been a mistake. The ship had carried the plague. Now the disease spread like a forest fire in the dry bush. The city authorities in Marseille could not cope with the number of deaths, with corpses piling up in the streets. … At the behest of the French king and the pope, a plague wall (Mur de Peste) was built in Provence. Tourists can still see parts of it today. The wall was over two meters high and the watchtowers were manned by soldiers. Those who wanted to climb over it were prevented from doing so by force. Although some individuals managed to escape, the last major outbreak of black death in Europe was largely confined to Marseille. While probably 100,000 people – about a third of the population – died in Marseille, the rest of Europe was spared the repeated catastrophe of 1350 when millions of people lost their lives. 

5) Should the economic policies in response to the coronavirus be general or targeted? 

By general policies, I mean things like cuts in interest rates by central banks, or plans for government to send out checks to everyone (or, in a US context, to cut Social Security payroll tax rates). By targeted policies, I mean economic policies where the government focuses on specific issues: sick pay for workers not covered by employers, medical bills, support for small and medium firms with cash-flow problems, making sure banks have funds to lend and are not pushing firms into bankruptcy right now, and support for specific hard-hit industries like airlines and tourism.

John Cochrane put it this way:

We need a detailed pandemic response financial plan, sort of like an earthquake, flood, fire, or hurricane plan that (I hope!) local governments and FEMA routinely make and practice. Is there any such thing? Not that I know of, but I would be interested to hear from knowledgeable people if I am simply ignorant of the plan and it’s really sitting there under “Break glass in emergency” down in a basement of the Treasury or Fed. Without a pre-plan, can our political system successfully make this one up on the fly, as they made up the bank bailouts of 2008?

Then we have to figure out how to prevent the atrocious moral hazard that such interventions produce. Pandemics are going to be a regular thing. Ex-post bailout reduces further the incentive for ex-ante precautionary saving. Too good a fire department, and people store gasoline in the basement.

This starts down the same bailout and regulate road that suffocates our debt-based banking system. I welcome better ideas.

6) Will manufacturing or services be hit harder? 

Richard Baldwin and Eiichi Tomiura emphasize the problem for manufacturing:

An important point is that manufacturing is special. Manufactured goods are – on the whole – ‘postpone-able’ purchases. As we saw in the Great Trade Collapse of 2009, the wait-and-see demand shock impacts durable goods more than non-durable goods. In short, the manufacturing sector is likely to get a triple hit.

  1. Direct supply disruptions hindering production since the disease is focused on the world’s manufacturing heartland (East Asia), and spreading fast in the other industrial giants – the US and Germany.
  2. Supply-chain contagion will amplify the direct supply shocks as manufacturing sectors in less-affected nations find it harder and/or more expensive to acquire the necessary imported industrial inputs from the hard-hit nations, and subsequently from each other.
  3. Demand disruptions due to (1) macroeconomic drops in aggregate demand, i.e. recessions, and (2) precautionary or wait-and-see purchase delays by consumers, and investment delays by firms.

However, Catherine Mann points out that while manufacturing may be hit harder in the short term, it is also more likely to recoup its losses:

Manufacturing will show a ‘V’ or ‘U’ shape. Manufacturing spillovers from factory closures loom large in the near term, but production will rebound to restock inventories once quarantines end and factories reopen. However, the duration of closures, as well as spillovers through supply chains and through virus cases and closures worldwide, will generate a set of Vs that should take on a U-shape in the global data. Importantly, the loss to global growth momentum will drag on both in individual country data and global rebound economic data, particularly trade and industrial production. Services, on the other hand, will experience an ‘L’ shape. The shock to tourism, transportation services, and domestic activities generally will not be recovered, and the projected slowing of global growth will further weigh on the L-shape evolution of demand for these non-storable tradeable services. Domestic services also will bear the brunt of the outbreak, depending in part on the responses of authorities, business, and consumers.

American Mobility: From "Westward Ho" to "Home Attachment"

The number of Americans who move in a given year has been declining. Here’s an illustrative figure from the Economic Report of the President (February 2020, White House Council of Economic Advisers):

The reasons for this decline, and what (if anything) should be done about it, have been murky. Kyle Mangum offers an historical interpretation of the change in “No More Californias: As American mobility declines, some wonder if we’ve lost our pioneer spirit. A closer look at the data suggests that the situation is less dire—and more complicated—than it at first appears” (Economic Insights, Federal Reserve Bank of Philadelphia, Winter 2020, pp. 8-13). He describes the potential economic problems resulting from lower mobility in this way:

Economists widely view labor mobility as the principal mechanism by which regions adjust to local economic shocks. If local industries fall on hard times, workers can leave; in places where labor demand is high, new residents flow in. The decline has therefore generated concern that the economy is less adaptable to local shocks, ultimately resulting in labor misallocation, unrealized output, and lower productivity.

Some of the seemingly plausible explanations for the decline don’t hold up under closer examination. For example, one might hypothesize that mobility is declining because older people are less likely to move long distances and the US population is aging. But as Mangum points out (footnotes omitted):

Researchers have shown that typical aging differences are not quantitatively big enough to generate the observed national decline. Perhaps more importantly, the decline is present within age groups, so that young people today, for instance, are also moving less than their parents did at the same age. Moreover, aging has occurred at similar rates across cities, so there is no scope for aging to explain the spatial differences in the decline.

Instead, Mangum offers an interpretation of declining geographic mobility in the last few decades based in long-term US history. Here’s a map from the US Census Bureau showing the movement of the location of the average center of US population from 1790 to 2010:
Mean Center of Population for the United States: 1790 to 2010
Mangum offers a table showing how much the center of population moved during each decade.

The basic patterns here will make sense to those with even a basic familiarity with US history. The US has seen a general movement of population to the west and south. For an illustration of the process, the US Census Bureau has a map showing the movement of the American “frontier” from 1790 to 1890. In 1890, the Census Bureau famously announced what has become known as the “closing of the western frontier”: “In 1890, the Superintendent of the Census described the western part of the country as having so many pockets of settled area that a frontier line could no longer be said to exist.”

The movement to the south and west then mostly lags for several decades in the early 20th century, including the periods of World War I, the Great Depression, and World War II. But after World War II there is a renewed population shift to the south and west from the 1950s through the 1980s–with a marked slowdown in the movement of the geographic center of the US population since then.

Mangum suggests that the motivations to move have diminished in modern America.

[T]here is reason to expect that massive population changes across regions—of the degree seen from colonization to westward expansion—will no longer be business as usual. The major differences in regional habitability have diminished. Transportation has crisscrossed the continent, water delivery-and-control infrastructure has been put in place, and air conditioning is ubiquitous. Technologies today focus on speed and efficiency within cities, not on developing new cities. And in the digital age, new technologies are less spatial. Population growth today is more balanced across locations compared to the skewness of the early and middle 20th century. … And this population growth is occurring more within regions than across regions. To the extent that imbalances exist, growing places are established cities rising in the urban hierarchy, leaving the rest of their home region behind and largely drawing people from within their region. …

So perhaps the U.S. is finally in a “long-run spatial equilibrium,” as some have suggested. The term suggests that households’ incentives to relocate have diminished, either because places are more similar than they used to be, or structural changes in the economy have caused real estate and labor prices to rationalize spatial differences, so that, in either case, relative population adjustments across space are no longer necessary.

Mangum also refers to the extent of “home attachment,” in which “[p]eople living near their birthplace show a strong proclivity to remain in their location compared with people born out of state. A transplanted population, by contrast, is more transient and more subject to various idiosyncratic changes in circumstance. For example, if someone moved to a new place for a job, and the job dissolves for whatever reason, they are likely to move away. Someone with strong local ties whose job dissolves is more inclined to search locally.”

Thus, when Americans were first moving in substantial numbers to the south and west, the recent arrivals were still somewhat transient. If for some reason their first move didn’t work out, they would move again. But over time, more and more people identify as being from places in the south and west, and with this added “home attachment” become less likely to move again.

The historical shift that Mangum describes seems plausible to me (although I’d be interested in seeing a parameterized model showing that a connection from greater home attachment to less moving can explain the overall observed patterns). But, as Mangum points out, there are two ways to interpret the fact that many of those who live in struggling, higher-unemployment labor markets have been unlikely to move. One interpretation is that they have strong “home attachment.” The alternative interpretation is that moving from slower-growth to higher-growth urban areas has become more difficult and risky than it used to be, in substantial part because housing costs have become so high in high-growth urban areas. A chicken-and-egg problem emerges for someone thinking about such a move: they can’t afford the high housing costs in the new city unless they already have a job lined up, and they can’t line up a job unless they first move to the new city. There are a variety of other possible barriers to moving as well, including rules for occupational licensing that differ across states, a lack of investment in the public transit systems that are more heavily used by those with lower incomes, and so on.

In this interpretation, some of the decline in US geographic mobility may be that we have “no more Californias.” But part of the mobility decline may also be that state and local policies affecting housing, jobs, and transportation are discouraging a number of potentially willing movers.
Here are some previous posts and writings on the decline of US geographic mobility.

Women in Economics: The Early-Stage Problem

The share of women has risen substantially in many academic areas, but less so in economics. Shelly Lundberg has edited a VoxEU.org e-book on Women in Economics (March 2020, free registration required). It includes an introduction and 18 short and readable essays, many of which summarize and refer to research presented in more detail elsewhere. Thus, it’s a good way to get up to speed on thinking in this area. Here, I’ll point to what seems to me an underemphasized topic in this research, which comes out of what is sometimes called a “pipeline” approach.

The basic idea here is that there is a pipeline to becoming a tenured professor of economics. It commonly starts with taking economics in college, doing an undergraduate major in economics, entering an economics PhD program, completing a PhD, getting a job as an assistant professor, and then being promoted to full professor. One can look at the share of women at each stage of the process and get a sense of where the representation of women is falling behind.

Here’s a figure from “Women in Economics: Stalled Progress,” by Shelly Lundberg and Jenna Stearns. The line that is highest in the top right corner shows the share of women among senior economics majors: it’s been in the range of 30-35% for the last 20 years.

The next two lines show the share of women among first-year PhD students in economics and among new PhDs. There is some “leakage” in the pipeline here, in the sense that the share of women in PhD programs in economics is lower than the share who are senior undergraduate majors in economics. Back in the 1990s, the share of women starting an economics PhD was higher than the share completing one, but that gap went away in the early 2000s.

The share of women among assistant professors in economics, shown by the blue line, roughly equalled the share of women in economics PhD programs around 2009, but since then has dropped off. The share of women among assistant professors of economics used to be much higher than the share who became associate professors of economics, but that gap has closed in the last 5-10 years. The share of women who are full professors of economics has been rising, although it lags behind the rise in associate professors.

Much of this book focuses on analysis and proposals for addressing the later steps in the career pipeline to becoming an economics professor. Here are a few examples, as Lundberg summarizes them in her overview essay:

  • “Erin Hengel was the first to point out that economics research papers written by women appear to be held to higher standards in the publishing process than papers written by men. As in several other professions (medicine, real estate, law), there appears to be a quality/quantity tradeoff, with female economists producing less output of higher quality than equivalent men. In her chapter, Hengel summarises the results of her study, showing that female-authored papers at some elite journals are subjected to extended review times, and result in published papers with abstracts that are significantly more readable, according to standard measures.”
  • “The chapter by Lorenzo Ductor, Sanjeev Goyal, and Anja Prummer reports the findings of their study of gender differences in the collaborative networks of economists. Using the EconLit database, they undertake a detailed analysis of co-authoring patterns, and find that women work with a smaller network of distinct co-authors than men and tend to collaborate repeatedly with the same co-authors and their co-authors’ collaborators, constructing a tighter network. Since larger networks are associated with higher levels of research output, these patterns may disadvantage women.”
  • “Laura Hospido and Carlos Sanz use data from three large general-interest academic conferences to test for gender gaps in the evaluation of submissions. After controlling for a rich set of controls for author and paper quality, including author characteristics, field, paper cites at submission, eventual publication of the submitted paper, and referee fixed effects, they find that all-female-authored papers are about 7% less likely to be accepted than all-male-authored papers.”
  • “The chapter by Donna Ginther, Janet Currie, Francine Blau, and Rachel Croson reports on a follow-up assessment of CSWEP’s flagship intensive mentoring programme, CeMENT. Causal estimates of the impact of such programmes are rare, but this evaluation, based on participants and those who were randomised out of the over-subscribed programme in 2004-2014, provides an unusual opportunity to gauge their potential effectiveness on short- and long-term outcomes. The estimates show that access to CeMENT increased the probability of having a tenure stream job by 14.5% and increased the probability of having tenure in a top-50 ranked institution by 9.0 percentage points. Most of the impact on tenure can be attributed to significant increases in pre-tenure publications in top-five and other highly regarded journals, but the effect on tenure is marginally significant even after controlling for these factors, suggesting that mentoring may provide professional advantages to women beyond easily-observable productivity metrics.”

There are also some findings that may run against common intuitions. For example, a common proposal is to make allowances for having children, so that economists who are parents get an extra year or two to publish papers before a decision is made on tenure. However, the evidence on these policies is that they help male economists and disadvantage women. Lundberg explains:

“In American universities, the fixed-length tenure clock period has been a notable hurdle for assistant professor parents trying to build a tenurable research record, despite a temporary decline in productivity after childbirth. Some universities have introduced policies that stop the tenure clock for mothers, but more have adopted gender-neutral policies that give all new parents an extra year before their tenure decision. Heather Antecol, Kelly Bedard, and Jenna Stearns examine the impact of the rollout of these policies at top-50 US economics departments between 1980 and 2005, and find that men are 17 percentage points more likely to get tenure at their first job after this policy is adopted, while women are 19 percentage points less likely to do so. Men who gain more pre-tenure time as a result of this policy are more likely to publish an additional article in a top journal but women, who appear to bear more of the costs of a new child, do not.”

However, the alert reader will notice that the studies I have mentioned all focus on what happens after people have already become professors–not earlier in the pipeline. In addition, the share of women among senior economics majors and among economics PhD students has been pretty much flat for the last decade or so. If there isn’t a major adjustment at the front end of the pipeline, the share of women who become full professors of economics will necessarily be limited.

The evidence on how to boost the number of women who become undergraduate economics majors is thin, at least so far. As Lundberg writes: 

Tatyana Avilova and Claudia Goldin tell the story of the Undergraduate Women in Economics (UWE) Challenge, which they have led since 2015. Economics departments were recruited into the programme, randomised into treatment and control groups, and treatment institutions received funding and guidance to initiate interventions to increase the number of female economics majors. These interventions fell into one or more of three groups: (1) providing students with better information about what economists do, (2) providing mentoring and role models and creating networks among students, and (3) improving the content and relevance of introductory economics courses.

Final data on results from UWE projects are not yet available, but one successful intervention that was administered as a field experiment is described in the next chapter by Catherine Porter and Danila Serra. Lack of female role models is often noted as a barrier to women choosing economics as an undergraduate major. The authors implemented a relatively inexpensive intervention in randomly-chosen Principles of Economics classes. Successful and ‘charismatic’ alumnae of the programme were chosen with the help of current undergraduates to visit classes briefly and discuss their educational experiences and career paths. The results were dramatic: role model visits increased the probability that treated female students would major in economics by eight percentage points (from a base of 9%) with no impact on male majors. This provides strong evidence of the salience of role models for women’s choice of major. 
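
To put that effect size in perspective, here is the arithmetic from the quoted passage restated:

```python
# The Porter-Serra result: an 8-percentage-point increase from a 9% base.
base_rate = 0.09    # share of female students majoring in economics, control group
effect_pp = 0.08    # estimated effect of the role-model visits

treated_rate = base_rate + effect_pp
print(f"treated share majoring in economics: {treated_rate:.0%}")          # 17%
print(f"relative increase over the base:     {effect_pp / base_rate:.0%}") # ~89%
```

In relative terms, the intervention nearly doubles the share of women choosing the major.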

In addition, many students arrive at college already having a sense of some majors where they are potentially interested and some majors they want to avoid. When I looked a couple of years ago at who takes AP microeconomics and macroeconomics exams, I found that the number of males who get a 4 or 5 on these exams was much higher than the number of females, which probably leads more men to think about taking economics courses in college. Here’s a comment from Kasey Buckles in an article in the Winter 2019 issue of the Journal of Economic Perspectives:

However, a serious discussion of strategies for closing the gender gap in economics must also include a look at the pipeline’s source—the K-12 level. Large gender gaps in college major intentions among incoming students suggest that many women are being discouraged from studying economics before they ever enter a Principles classroom (Goldin 2015). Avilova and Goldin (2018) offer an explanation: “Students often think that economics is only for those who want to work in the financial and corporate sectors and do not realize that economics is also for those with intellectual, policy and career interests in a wide range of fields” (p. 1). If women are less interested in finance and business (putting aside how those preferences are formed), then we could be losing many potential economists right out of the gate as a result of this misperception. … [I]t is unlikely that economists will make substantial and lasting progress toward gender balance if we ignore the K-12 experience. More innovation and research is needed on this front …

My own nagging sense, based on a stream of anecdotes and personal experiences rather than hard evidence, is that the fundamental presentation of what the subject of economics is all about is, on average, less attractive to young women than to young men. I also think the way in which economics is first presented often doesn’t give much sense of the breadth and richness of the topics that economists actually study.

It seems reasonable to say that this book uses the “Symposium on Women in Economics” in the Winter 2019 issue of the Journal of Economic Perspectives as a launching pad. The three papers in that symposium, all of which are also represented in this book, include:

Some other writing potentially of interest on this subject includes:

Time to End Single-House Zoning?

Here’s an interesting metaphor for economists to consider from Michael Manville, Paavo Monkkonen & Michael Lens in their essay, “It’s Time to End Single-Family Zoning”:

Suppose that, for your wellbeing, you need regular access to only a small amount of expensive medicine. One day you go to the pharmacy and learn the government has implemented a new rationing system strictly limiting the number of sales that can occur in small doses. Because many people, like you, only need small doses, the new rule results in few small doses being available. Plenty of medicine is available—you can see it over the counter—but the pharmacist can only sell it in large quantities. So you are stuck. If you want your medicine, you must buy more than you need, at a price higher than you can afford. This new rationing system is also strictly enforced. Not only must you buy in large quantities, but you cannot divide up your ration afterward and sell your extra doses to others who might need and value them.

Most people, we suspect, would consider such a rationing system unjust and inefficient. It would force a large number of people to spend and consume more than they otherwise would, subsidize the smaller number of people who want and can afford large doses, and keep some people from getting medicine at all. Fortunately, the United States does not allocate medicine in this bizarre manner. But it does ration urban land this way.

The authors offer this metaphor as part of their argument for the abolition of single-house zoning in cities. The most recent issue of the Journal of the American Planning Association offers two viewpoint articles advocating the abolition of single-house zoning, along with seven short commentaries, and then two rejoinders from the authors of the viewpoint articles.

It’s useful to be clear on just what this proposal entails. It is not a call for the abolition of zoning, or for the abolition of any and all rules concerning what can be built on a given residential lot. There could still be rules regarding issues like height or setbacks from property lines. If people want to live in a detached single house, they would be free to continue doing so. However, if your neighbor wants to turn their existing house (for the sake of argument, say without changing the physical envelope of the house) into townhouses or a duplex or triplex, they would be free to do so.

In his essay, Jake Wegmann makes the point this way: “Does a zoning category or other type of regulation prohibit everything but a single-family detached house on a large lot? If so, it should be contested. My argument is that there is no defensible rationale grounded in health, safety, or public welfare for effectively mandating a 3,000-ft2 house with one unit while prohibiting three 1,000-ft2 units within the same building envelope. … Regardless of the specifics, single-family zoning should be replaced with regulations that allow some form of low-rise, middle-density housing—or ‘Missing Middle’—to be built as of right.”

As Manville, Monkkonen and Lens point out, single-house or R1 zoning started as a replacement for explicitly racial zoning (citations and footnotes omitted): 

R1 arose, at least in part, from invidious motives. It was built on arguments about the sort of people who don’t live in detached single-family homes and the harms that would arise if they mixed, socially or as fellow taxpayers, with those who do. R1 first proliferated after the Supreme Court struck down racial zoning in 1917’s Buchanan v. Warley decision. Buchanan made single-family mandates appealing because they maintained racial segregation without racial language. Forcing consumers to buy land in bulk made it harder for lower income people, and therefore most non-White people, to enter affluent places. R1 let prices discriminate when laws could not.

Contemporary observers denounced this regime of backdoor segregation, but in 1926 the Supreme Court upheld it. In Village of Euclid v. Ambler Realty Co. (1926), the court tacitly excused R1’s implicit racism by validating its explicit classism. Cities could prohibit apartments, the court said, because apartments were nuisances: “mere parasites” on the value and character of single-family homes. In Euclid’s wake, R1 became a quiet weapon of the White and wealthy in their campaign to live amid but not among the non-White and poor.

Today’s planners cannot be blamed for R1’s origins; however, the past throws a long shadow over the system they now administer. R1 delivers large and undeniable benefits to some people who own property. In places where housing demand is high, R1 inflates home values and protects the physical character of neighborhoods. But its social costs exceed these private benefits. Higher property values for owners mean higher rents for tenants. Because homeowners as a group are richer and Whiter than renters, policies that increase housing prices redistribute resources upward, increasing homeowner wealth, reducing renter real incomes, and exacerbating racial wealth gaps.

These viewpoints are careful to note that they do not expect the removal of single-family zoning to solve all problems of high housing prices, social inequality, long commutes, traffic congestion, high energy use, and so on. Indeed, they accept and expect that in many neighborhoods, the abolition of single-house zoning might not make much difference at all. They are just arguing that perceived benefits of single-family zoning–which are often phrased in terms of the traits of those who for various reasons need to be excluded from neighborhoods–do not justify the costs.

Manville, Monkkonen and Lens offer some thoughts on the potential importance of altering location outcomes (again, citations and footnotes omitted):

Where people live directly affects their exposure to pollution and violence, the quality of schools their children can attend, and the jobs they can reach. Residential location is thus strongly correlated with many life outcomes, from earnings to educational attainment to mental and physical health. Location, moreover, has not just large but multigenerational returns, yielding better outcomes for people who move in and their children as well.

Because opportunity is unevenly distributed both between and within metropolitan areas, and because moving people to opportunities is generally easier than moving opportunities to people, letting more people live in the most prosperous and amenity-rich neighborhoods of our urban areas would dramatically increase wellbeing. Many people, however, are effectively barred from these cities and neighborhoods because access to them is sold primarily in large, expensive, and inefficient chunks—through R1. Lower and middle-income families would benefit immensely from a small foothold in prosperous neighborhoods—perhaps a modest apartment or duplex—but R1’s prevalence means few such small footholds are available. The result is scarce housing in desirable places.

The city of Minneapolis, near where I live and work, will be a laboratory for these kinds of changes. As Paul Mogush and Heather Worthington note in one of the comments:

In Minneapolis, allowing at least three residential units on each parcel throughout the city is part of a larger package of housing and land use policy changes intended to increase housing supply, choice, and affordability. These strategies include inclusionary zoning, increased investment in affordable housing, and tenant protections. The city’s new comprehensive plan also moves to allow multistory, multifamily development by right on all frequent bus routes and around light rail transit stations and makes it possible to build “missing middle” housing types in neighborhood interiors close to downtown that had previously been downzoned. The intent is to increase predictability in the marketplace. Planners cannot effectively address the challenges of racial inequities, housing affordability, and climate change by fighting a battle for every new apartment building.

Here are links to the full set of two viewpoint articles, seven commentaries, and two rejoinders:

Viewpoints

Commentaries

Rejoinders

Thoughts on Sumptuary Laws: Adam Smith to Plastic Bags

“Sumptuary laws” typically refers to laws in the Middle Ages that were passed in part to limit conspicuous consumption, and in part to enforce lines of social distinction, so that, say, only the nobility could wear certain fabrics or colors. Melissa Snell offers a quick overview in “Medieval Sumptuary Laws” (ThoughtCo.com, March 29, 2019).

The topic was on my mind because of a recent essay in City Journal by John Tierney, who argues that current bans on plastic bags or plastic straws represent a modern version of the sumptuary laws (Winter 2020, “The Perverse Panic over Plastic: The campaign against disposable bags and other products is harming the planet and the public”). Tierney writes:

Today’s plastic bans represent a revival of sumptuary laws (from sumptus, Latin for “expense”), which fell out of favor during the Enlightenment after a long and inglorious history dating to ancient Greece, Rome, and China. These restrictions on what people could buy, sell, use, and wear proliferated around the world, particularly after international commerce increased in the late Middle Ages.

Worried by the flood of new consumer goods and by the rising affluence of merchants and artisans, rulers across Europe enacted thousands of sumptuary laws from the thirteenth to the eighteenth centuries. These included exquisitely detailed rules governing dresses, breeches, hose, shoes, jewelry, purses, bags, walking sticks, household furnishings, food, and much more—sometimes covering the whole population, often specific social classes. Gold buttons were verboten in Scotland, and silk was forbidden in Portuguese curtains and tablecloths. In Padua, no man could wear velvet hose, and no one but a cavalier could adorn his horse with pearls. It was illegal at dinner parties in Milan to serve more than two meat courses or offer any kind of sweet confection. No Englishwoman under the rank of countess could wear satin striped with silver or gold, and a German burgher’s wife could wear only one golden ring (and then only if it didn’t have a precious stone).

Religious authorities considered these laws essential to curb “the sin of luxury and of excessive pleasure,” in the words of Fray Hernando de Talavera, the personal confessor to Spain’s Queen Isabella. “Now there is hardly even a poor farmer or craftsman who does not dress in fine wool and even silk,” he wrote, echoing the common complaint that imported luxuries were upsetting the social order and causing everyone to spend beyond their means. In justifying her sumptuary edicts, England’s Queen Elizabeth I lamented that the consumption of imported goods had led to “the impoverishing of the Realme, by dayly bringing into the same of superfluitie of forreine and unnecessarie commodities.”

But like the Americans who go on using plastic bags, the queen’s subjects refused to give up their “unnecessarie commodities.” The sumptuary laws failed to make much impact in England or anywhere else, despite the rulers’ best efforts. Their agents prowled the streets and inspected homes, confiscating taboo luxuries and punishing violators—usually with fines, sometimes with floggings or imprisonment. But the conspicuous consumption continued. If silk was banned, people would find another expensive fabric to flaunt. Rulers had to keep amending their edicts, but they remained one step behind, and often the laws were flouted so widely that the authorities gave up efforts to enforce them.

For historians, the great puzzle of sumptuary laws is why rulers went on issuing them for so many centuries despite their ineffectiveness. … The laws didn’t curb the public’s sinful appetite for luxury or contribute to national prosperity, but they comforted the social elite, protected special interests, enriched the coffers of church and state, and generally expanded the prestige and power of the ruling class. For nobles whose wealth was eclipsed by nouveau-riche merchants, the laws reinforced their social status. The restrictions on imported luxuries shielded local industries from competition. The fines collected for violations provided revenue for the government, which could be shared with religious leaders who supported the laws. Even when a law wasn’t widely enforced, it could be used selectively to punish a political enemy or a commoner who got too uppity.

The laws persisted until the waning of royal sovereignty and church authority, starting in the eighteenth century. As intellectuals promoted new rights for commoners and extolled the economic benefits of free trade, sumptuary laws came to be seen as an embarrassing anachronism. Yet the urge to rule inferiors never goes away.

Those interested in the question of whether bans on plastic bags and plastic straws make economic and environmental sense can start with Tierney’s article. A couple of my own posts on plastics include:

Here, I want to focus on a different topic: Does it make sense to think of bans on plastic bags in the same category as rules related to wearing gold buttons or silk, or serving two meat courses? As with all topics, a first place to turn is Adam Smith, who mentions sumptuary laws several times in the Wealth of Nations. (As usual, I quote here from the online version freely available at the Library of Economics and Liberty website.)

One mention of sumptuary laws comes up during Smith’s discussion “Of the Accumulation of Capital, or of Productive and Unproductive Labour” in Book II, Chapter III. Smith argues tartly that productivity has been rising in England mostly because of the frugality and efforts of individuals, not kings and ministers. Thus, Smith writes (boldface added):

The annual produce of its land and labour is, undoubtedly, much greater at present than it was either at the Restoration or at the Revolution. The capital, therefore, annually employed in cultivating this land, and in maintaining this labour, must likewise be much greater. In the midst of all the exactions of government, this capital has been silently and gradually accumulated by the private frugality and good conduct of individuals, by their universal, continual, and uninterrupted effort to better their own condition. It is this effort, protected by law and allowed by liberty to exert itself in the manner that is most advantageous, which has maintained the progress of England towards opulence and improvement in almost all former times, and which, it is to be hoped, will do so in all future times. England, however, as it has never been blessed with a very parsimonious government, so parsimony has at no time been the characteristical virtue of its inhabitants. It is the highest impertinence and presumption, therefore, in kings and ministers, to pretend to watch over the œconomy of private people, and to restrain their expence, either by sumptuary laws, or by prohibiting the importation of foreign luxuries. They are themselves always, and without any exception, the greatest spendthrifts in the society. Let them look well after their own expence, and they may safely trust private people with theirs. If their own extravagance does not ruin the state, that of their subjects never will.

The arguments over plastic bags do not seem to fit well into this discussion of extravagance and opulence. However, the topic of sumptuary laws also comes up in Smith’s discussion of “taxes on consumable commodities” in Book V, Ch. II. Smith argues that commodities can be classified into “necessities” and “luxuries,” and further argues that taxes on necessities will lead to a rise in the wages of labor–in modern terms, we would say that taxes on necessities are passed on to employers. However, Smith argues that taxes on luxuries like tobacco and alcohol can be viewed as sumptuary laws that may have a beneficial effect on the poor. Smith writes:

It is otherwise with taxes upon what I call luxuries, even upon those of the poor. The rise in the price of the taxed commodities will not necessarily occasion any rise in the wages of labour. A tax upon tobacco, for example, though a luxury of the poor as well as of the rich, will not raise wages. …  The different taxes which in Great Britain have in the course of the present century been imposed upon spirituous liquors are not supposed to have had any effect upon the wages of labour. …

The high price of such commodities does not necessarily diminish the ability of the inferior ranks of people to bring up families. Upon the sober and industrious poor, taxes upon such commodities act as sumptuary laws, and dispose them either to moderate, or to refrain altogether from the use of superfluities which they can no longer easily afford. Their ability to bring up families, in consequence of this forced frugality, instead of being diminished, is frequently, perhaps, increased by the tax. It is the sober and industrious poor who generally bring up the most numerous families, and who principally supply the demand for useful labour. 

This meaning of sumptuary laws is a little closer to the plastic bag application. In modern language, overuse of tobacco and alcohol has negative social consequences, and thus discouraging their use is appropriate. The argument made for bans on plastic bags is that they have negative social consequences, too. But ultimately, it's hard for me to view rules about plastic bags and straws (whether one favors or opposes such proposals) as attempts to control what Smith would call the "luxuries" of the poor, or as an attempt to reinforce class distinctions.

My own sense is that environmental concerns about plastic in the environment have a sound basis, but that plastic grocery bags and plastic straws are a visible but insignificant part of that overall problem. Moreover, it has become apparent that a substantial share of efforts that claimed to recycle plastics in the past actually involved exporting them to China and other Asian destinations; in retrospect, putting that waste plastic into landfills might well have been a preferable environmental choice. 

Unemployment: Transition Patterns In and Out

The monthly US unemployment rate has been 4.0% or lower since March 2018. The idea that the US could sustain a level of unemployment this low was unexpected by mainstream economic forecasters.

For example, back in December 2012, when the unemployment rate was 7.9%, the Federal Reserve announced: "In particular, the Committee decided to keep the target range for the federal funds rate at 0 to 1/4 percent and currently anticipates that this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6-1/2 percent …" The implication was that when the unemployment rate fell to 6.5%, it would be near its lowest level and about time for the Fed to think about raising interest rates. But the unemployment rate fell below 6.5% by April 2014, and the Fed wasn't ready to start raising rates. It wasn't until December 2015, when the unemployment rate had fallen to 5.0%, that the Fed started nudging interest rates upward.

Again, the implication in late 2015 and into 2016 was that when the unemployment rate hit 5%, it had fallen about as far as it was going to go. Here are a couple of illustrative figures from the 2020 Economic Report of the President, produced by the White House Council of Economic Advisers. The first figure shows predictions from late 2016, from the Fed and the Congressional Budget Office, that the decline in the unemployment rate was going to level out. But the unemployment rate just kept falling.

This figure shows the Fed's prediction in late 2016 for the rise in the total number of jobs in the US economy. The prediction was that the rise in jobs would level out, but the total number of jobs just kept rising.

Of course, Trumpophiles will credit this change to Trump administration policies, while the Trumpophobic will emphasize that the momentum toward lower unemployment rates and more jobs seems to have continued more-or-less uninterrupted from the pre-Trump years. Here, I want to sidestep the question of credit, and focus more tightly on what is actually going on behind these numbers.

An unemployed person can leave unemployment in two ways: either by getting a job, or by leaving the labor force so that they are no longer looking for a job. Similarly, a person can enter unemployment in two ways: either by losing a job and looking for a new one, or by re-entering the labor force and starting to look for a job but not finding one right away. So, is the unemployment rate low because fewer people are transitioning in, or because more are transitioning out, or some of both? Marianna Kudlyak and Mitchell G. Ochse look at how the patterns of these transitions have evolved over time in "Why Is Unemployment Currently So Low?" (Economic Letter, Federal Reserve Bank of San Francisco, March 2, 2020).
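
To see the mechanics, a simple two-state flow model is useful: if some fraction s of the employed becomes unemployed each month and some fraction f of the unemployed finds a job, then in a steady state the flows balance and the unemployment rate settles at s/(s+f). Here is a minimal sketch in Python; the transition rates are hypothetical round numbers for illustration, not the values in Kudlyak and Ochse, and the model ignores flows in and out of the labor force:

    def steady_state_unemployment(s, f):
        # In a steady state, flows into unemployment, s*(1-u), equal
        # flows out, f*u, which solves to u = s / (s + f).
        return s / (s + f)

    # Hypothetical monthly rates: 2% of the employed become unemployed,
    # 40% of the unemployed find jobs.
    print(steady_state_unemployment(0.02, 0.40))   # about 0.048, or 4.8%

    # Cutting the inflow rate alone lowers the steady state.
    print(steady_state_unemployment(0.012, 0.40))  # about 0.029, or 2.9%

The point of the sketch is that a lower unemployment rate does not require faster exits from unemployment; a smaller inflow rate is enough. That turns out to match what the data show.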

Here\’s one of their figures showing transitions out of unemployment (using averages over three-month periods). By historical standards, these transitions from unemployment to jobs, or unemployment to out of the labor force, do not look very different. Indeed, the rate at which the unemployed are shifting to employment is lower now than it was before the Great Depression or back in the 1990s. In other words, the very low unemployment rate doesn\’t seem to be occurring because the unemployed are leaving unemployment at higher rates than in the past. 
Transitions out of unemployment
This figure shows transitions into unemployment. The red line shows that transitions from out of the labor force into unemployment were high right after the Great Recession, but are now at low levels. The blue line shows, similarly, that transitions from employment to unemployment were high during the Great Recession, but are now at low levels.

Transitions into unemployment

What are the underlying reasons why transitions into unemployment may be lower now than in the past? Kudlyak and Ochse don't analyze this issue explicitly, but they highlight two possibilities: an aging workforce and better job matches. The intuition is that an aging workforce (along with a lower rate of start-up firms) means that people are more likely to remain in a current job, and to move only if they have a new job already lined up. The argument for better job matches is that employers have developed better tools for evaluating whether employees are a good fit–in terms of skills, personality, cognitive ability, and more–and so those who get hired into a job will on average stay longer. In other words, when the Fed and the CBO were incorrect in their forecasts of where the unemployment rate was headed, these were the factors they had not taken into account.

Winning the "War on Poverty"–And Now What?

President Lyndon Johnson declared "war on poverty" during his State of the Union address on January 8, 1964. But the official poverty rate has remained disturbingly high. Here are figures showing the total number of people below the US poverty line and the poverty rate from the US Census Bureau:

Notice that the number of Americans below the poverty line falls sharply in the 1960s. But since about 1970, the number of people below the poverty line has generally trended upward with the growth of population, and the poverty rate hasn't moved much.

It\’s mildly amusing to watch responses to these patterns from across the political spectrum. For example, liberals often are quick to embrace the basic interpretation of these figures–that poverty remains an enormous and unsolved social problem–but don\’t much like the implication that US programs to reduce poverty have been pretty ineffective over the decades. On the other side, conservatives are quicker to embrace the implication that US anti-poverty programs haven\’t worked well, but are less comfortable with accepting the idea that poverty remains as big a social problem as it was 50 years ago.

Of course, one can also argue that the official measure of poverty is misleading for various well-known reasons. For example, the official poverty measure is based on before-tax income received, and thus does not take into account how the poor might benefit from non-cash programs like Medicaid or food stamps, or from benefits delivered through the tax code, like reduced taxes or refundable tax credits. Another issue is that the official poverty lines (which vary according to the number of people in a household) are increased each year according to the level of inflation as measured by the Consumer Price Index–but is that the most appropriate adjustment? A more subtle issue is whether poverty should be measured across a "family," which refers to people who are related by marriage or parent-child status, or whether it should be measured across a "household," which includes all of those living together at a certain address.

Richard V. Burkhauser, Kevin Corinth, James Elwell, and Jeff Larrimore recalculate what the official poverty rates would look like with an alternative set of adjustments for these factors in "Evaluating the Success of President Johnson's War on Poverty: Revisiting the Historical Record Using a Full-Income Poverty Measure" (December 2019, IZA DP No. 12855). Here's how the patterns change:

The top line is the official poverty line, with an adjustment for what is called the "equivalence scale," which refers to how much the poverty line should be adjusted based on the number of people in a household. The authors write: "For our equivalence scale, we adjust poverty thresholds based on the square root of the number of people in the household. For example, the poverty threshold for a 4-person household is twice that for a 1-person household." As they point out, this equivalence scale is fairly standard for a lot of work on poverty lines across different countries. Using a single equivalence scale doesn't make a big difference, but it holds this factor constant for the calculations that follow.
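
The square-root scale is easy to state as a formula: the threshold for an n-person household is the 1-person threshold multiplied by the square root of n. A quick sketch in Python, where the base threshold is a hypothetical round number rather than the official figure:

    import math

    def poverty_threshold(base_one_person, household_size):
        # Square-root equivalence scale: threshold(n) = threshold(1) * sqrt(n).
        return base_one_person * math.sqrt(household_size)

    BASE = 13_000  # hypothetical 1-person threshold, for illustration only
    for n in (1, 2, 3, 4):
        print(n, round(poverty_threshold(BASE, n)))

Since the square root of 4 is 2, the 4-person threshold comes out to exactly twice the 1-person threshold, matching the authors' example.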

Using the household as the unit of measure, rather than the family, takes into account how families live together and share resources, and thus reduces the poverty rate as shown. Looking at after-tax income, not before-tax income, reduces the poverty rate further, as shown by the orange line. Accounting for non-cash, non-health-insurance benefits like food stamps reduces the poverty rate further, as shown by the green line. Adding health insurance benefits like Medicaid, together with all the other changes, reduces the poverty rate further still.
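
To make the logic of these successive adjustments concrete, here is a stylized Python example for a single hypothetical household; every dollar figure is made up for illustration, and the actual paper imputes values (especially for health insurance) far more carefully:

    threshold = 18_000                      # hypothetical poverty threshold

    pretax_cash    = 14_000                 # the official measure stops here
    after_tax      = pretax_cash + 1_500    # refundable credits (e.g., EITC) exceed taxes owed
    with_noncash   = after_tax + 2_400      # non-cash benefits such as food stamps
    with_insurance = with_noncash + 5_000   # imputed value of Medicaid coverage

    for label, resources in [
        ("pre-tax cash income", pretax_cash),
        ("after taxes and credits", after_tax),
        ("plus non-cash benefits", with_noncash),
        ("plus health insurance", with_insurance),
    ]:
        print(f"{label:26s} {resources:>7,}  counted as poor: {resources < threshold}")

Each step broadens the resources being compared with the threshold, so a household counted as poor under the official measure can cross out of measured poverty as the definition expands, which is how the successive lines in the figure shift down.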

An additional step is to think about the appropriate measure of inflation to use over time. The official poverty rate uses the Consumer Price Index. But one interesting fact about the CPI is that after it is calculated, it is never revised–not even when the US Bureau of Labor Statistics later changes the technical formulas used to calculate the CPI. Because various contracts and laws depend on the CPI, this lack of revision makes some sense. But if you want to know how the poverty rate should have been adjusted over time, it makes sense to use the most current methods for calculating inflation. One prominent measure of buying power that is revised over time is the Personal Consumption Expenditures price index. There is also a Consumer Price Index "Research Series" that uses modern methods to calculate what the CPI would have been in the past. The authors write:

Compared to 1963 thresholds, in 2017 the CPI-U used by the Official Poverty Measure generates a threshold that is 8.0 times as high in nominal dollars to hold the real value of the thresholds constant. To the degree that this is an overstatement of inflation, it will effectively raise the real level of these poverty thresholds and exaggerate the share of people in poverty in 2017 relative to 1963. In contrast, all of the other measures of inflation shown result in smaller changes in nominal thresholds. In particular, the PCE—which we use for the Full-income Poverty Measure—generates nominal thresholds in 2017 that are 22 percent below the thresholds using the Official Poverty Measure’s CPI-U, whereas using the Meyer-Sullivan adjusted CPI-U-RS would generate thresholds that are 46 percent below that using the CPI-U.
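
To make that arithmetic concrete: if the CPI-U multiplies a 1963 threshold by 8.0 in nominal terms, then thresholds "22 percent below" and "46 percent below" correspond to multipliers of roughly 8.0 × 0.78 ≈ 6.2 and 8.0 × 0.54 ≈ 4.3. A short Python sketch, using a hypothetical 1963 threshold rather than the official one:

    BASE_1963 = 1_600        # hypothetical 1963 nominal threshold, for illustration
    CPI_U_MULTIPLIER = 8.0   # the factor quoted in the passage above

    for label, discount in [
        ("CPI-U (official)", 0.00),
        ("PCE", 0.22),
        ("Meyer-Sullivan adjusted CPI-U-RS", 0.46),
    ]:
        nominal_2017 = BASE_1963 * CPI_U_MULTIPLIER * (1 - discount)
        print(f"{label:33s} 2017 threshold: ${nominal_2017:,.0f}")

A lower 2017 threshold means fewer people are counted as poor, which is why the choice of inflation measure matters so much for the long-run poverty trend.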

Adding this different measure of inflation, the preferred "full-income poverty rate" for these authors looks like this:

Clearly, one\’s beliefs about whether the \”War on Poverty\” was a success depends the extent to which these kinds of changes seem reasonable to you.  It\’s also worth remembering that attempts to measure the number of people in poverty using consumption data, rather than income data, also show a dramatic fall in the poverty rate.

Of course, while it might seem that evidence suggesting the US poverty level is actually far below the official rate is good news (to the extent that it is true), nothing is simple in a politically polarized world. Conservatives would have to accept that a number of government programs have had a dramatic effect in successfully reducing poverty rates. Liberals would have to accept that poverty is now a much smaller problem than it was several decades ago.

A deeper problem here is that "poverty" is a judgment that always applies in the context of a specific place and time. US poverty isn't the same as what it means to be in poverty in Brazil or Nigeria or India. US poverty in 2020 isn't the same as what it meant to be in "poverty" in the US in 1964 or in 1920 or in 1820. Burkhauser, Corinth, Elwell, and Larrimore suggest this conclusion (citations and footnotes omitted):

President Johnson’s War on Poverty—based on economic standards when he declared that war—is largely over and a success. … Nonetheless, societal views on poverty evolve over time. In 1971, Robert Lampman observed that “by present-day American standards most of the several billion people who have ever lived, and most of the three billion people alive today, were or are poor”. He also suggested that the goal of eliminating poverty based on these initial standards “should be achieved before 1980, at which time the next generation will have to set new economic and societal goals, perhaps including a new distributional goal for themselves”… Nevertheless, the dramatic reduction in poverty by 2017 based on President Johnson’s standards suggests that policymakers might consider setting new poverty thresholds that reflect modern-day expectations for what it means to be impoverished.

For a sense of what poverty meant in practical terms in the late 1950s and early 1960s–in terms of malnutrition, poor health, and lack of education–a readable starting point is Michael Harrington's 1962 book The Other America: Poverty in the United States. My own sense is that what one might call "consumption poverty" has dramatically decreased in the last 50-60 years, both because of government programs like Medicaid and food stamps and also because of how technologies from the microwave to the smartphone have become widespread even among those with lower incomes. However, what one might call "opportunity poverty"–affecting those who reach their late teens and early 20s with considerably less preparation to participate in what 21st-century America has to offer–remains a severe and ongoing issue.

Warren Buffett: Ruminations on Board Independence

Each year, the legendary investor Warren Buffett writes a letter to Berkshire Hathaway shareholders, which offers both a detailed overview of company performance and a sprinkling of thoughts about investments and business. In the most recent letter, about company performance in 2019, which came out a week ago, Buffett offers some thoughts about independence among members of a corporate board of directors:

Over the years, board “independence” has become a new area of emphasis. One key point relating to this topic, though, is almost invariably overlooked: Director compensation has now soared to a level that inevitably makes pay a subconscious factor affecting the behavior of many non-wealthy members. Think, for a moment, of the director earning $250,000-300,000 for board meetings consuming a pleasant couple of days six or so times a year. Frequently, the possession of one such directorship bestows on its holder three to four times the annual median income of U.S. households. (I missed much of this gravy train: As a director of Portland Gas Light in the early 1960s, I received $100 annually for my service. To earn this princely sum, I commuted to Maine four times a year.) 

And job security now? It’s fabulous. Board members may get politely ignored, but they seldom get fired. Instead, generous age limits – usually 70 or higher – act as the standard method for the genteel ejection of directors. 

Is it any wonder that a non-wealthy director (“NWD”) now hopes – or even yearns – to be asked to join a second board, thereby vaulting into the $500,000-600,000 class? To achieve this goal, the NWD will need help. The CEO of a company searching for board members will almost certainly check with the NWD’s current CEO as to whether NWD is a “good” director. “Good,” of course, is a code word. If the NWD has seriously challenged his/her present CEO’s compensation or acquisition dreams, his or her candidacy will silently die. When seeking directors, CEOs don’t look for pit bulls. It’s the cocker spaniel that gets taken home. 

Despite the illogic of it all, the director for whom fees are important – indeed, craved – is almost universally classified as “independent” while many directors possessing fortunes very substantially linked to the welfare of the corporation are deemed lacking in independence. Not long ago, I looked at the proxy material of a large American company and found that eight directors had never purchased a share of the company’s stock using their own money. (They, of course, had received grants of stock as a supplement to their generous cash compensation.) This particular company had long been a laggard, but the directors were doing wonderfully. 

Paid-with-my-own-money ownership, of course, does not create wisdom or ensure business smarts. Nevertheless, I feel better when directors of our portfolio companies have had the experience of purchasing shares with their savings, rather than simply having been the recipients of grants.

************

Here, a pause is due: I’d like you to know that almost all of the directors I have met over the years have been decent, likable and intelligent. They dressed well, made good neighbors and were fine citizens. I’ve enjoyed their company. Among the group are some men and women that I would not have met except for our mutual board service and who have become close friends. 

Nevertheless, many of these good souls are people whom I would never have chosen to handle money or business matters. It simply was not their game. 

They, in turn, would never have asked me for help in removing a tooth, decorating their home or improving their golf swing. Moreover, if I were ever scheduled to appear on Dancing With the Stars, I would immediately seek refuge in the Witness Protection Program. We are all duds at one thing or another. For most of us, the list is long. The important point to recognize is that if you are Bobby Fischer, you must play only chess for money.