Policing: What’s the Function?

Economists tend to think of public policy in terms of inputs and outputs: a policy is undertaken and there is a response. One can then evaluate how well the policy worked in terms of costs and benefits. When it comes to police work, the usual assumption has been that the function of police is to reduce crime, and that a higher quantity of police will tend to lead to lower levels of crime.

This approach isn’t exactly wrong, but it is incomplete. If the social goal is to have lower levels of crime, there may be multiple ways of accomplishing that goal besides hiring additional police officers. Policing is more than just the number of officers; it also has to do with how the policing is carried out: for example, are police often on foot or perhaps on bicycles, being seen in the community, or are they arriving in cars with sirens blaring only after a crime has occurred? How do police manage their interactions with community members, including the continual need for a back-and-forth transition between being protectors and aggressors?

In the most recent issue of the Journal of Economic Perspectives, two papers address these issues of thinking through the functions of policing–and the input and output functions that researchers should use when studying policing. Emily Owens and Bocar Ba write “The Economics of Policing and Public Safety” (Fall 2021, 35:4, pp 3-28). Monica Bell contributes “Next-Generation Policing Research: Three Propositions” (pp. 29-48). I won’t try to summarize the articles here, but instead will pass along some thoughts worth considering.

(Full disclosure: I’ve been the Managing Editor of JEP since the first issue in 1987, so I am perhaps predisposed to find its articles to be of interest. These articles, and indeed the entire corpus of JEP from the most recent issue back to the first issue 35 years ago, are freely available online compliments of the publisher, the American Economic Association.)

There is considerable evidence, as Owens and Ba point out, that additional police do reduce the crime rate. The challenge to this research, as in so many cases, is to find a plausible way of separating cause and effect: for example, if one observes that cities with high crime rates had more police while cities with lower crime rates had fewer police, there would appear to be a positive correlation between police and crime. But of course, correlation isn’t causation, and responding to higher crime rates with more police doesn’t mean that the police caused the crime. The empirical challenge has been to find situations in which the number of police increased or decreased for reasons that had nothing to do with crime, and then to track what happened. One classic study, for example, notes that politicians often boost the number of police before elections for political reasons–whether or not crime has actually been increasing–and that these increases in police do typically reduce crime. There is also considerable evidence that alternative strategies of policing, like the policies enacted in New York City in the early 1990s, helped to reduce the crime rate.

The benefits of lower crime rates can be divided into two categories: one is the direct benefit to people who would otherwise have been victimized; the other is that those who are not victims of crime can worry less and take fewer costly actions to protect themselves. On a per person basis, being a crime victim is much worse than just worrying about it. But there are many, many more nonvictims of crime than victims, and when you add up all the benefits, the total benefits of crime reduction are probably much higher for the nonvictims of crime than for the would-have-been victims.
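
Here is a minimal numerical sketch of that aggregation logic. The per-person dollar values and population counts are purely illustrative assumptions, not figures from Owens and Ba:

```python
# Illustrative only: hypothetical per-person benefits and population counts,
# not estimates from Owens and Ba.
avoided_victimizations = 1_000       # people who would otherwise have been victims
benefit_per_avoided_victim = 20_000  # dollars of harm avoided per would-have-been victim

nonvictims = 500_000                 # residents who were never going to be victimized
benefit_per_nonvictim = 100          # dollars: less worry, fewer locks, alarms, avoided trips

victim_benefits = avoided_victimizations * benefit_per_avoided_victim   # $20 million
nonvictim_benefits = nonvictims * benefit_per_nonvictim                 # $50 million

print(f"Benefits to would-have-been victims: ${victim_benefits:,}")
print(f"Benefits to nonvictims:              ${nonvictim_benefits:,}")
# Even though each avoided victimization is worth 200 times more per person,
# the much larger number of nonvictims makes their total benefit bigger.
```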

Being the subject of police engagement is costly, and Owens and Ba report that about 24% of civilians have some engagement with police in a given year. These costs are not evenly distributed across society. Indeed, as Owens and Ba point out, those who fear crime and support more aggressive policing will also sometimes be those who do not bear the costs of more aggressive policing. Owens and Ba write:

External pressure on police departments tends to encourage crime reduction, with less attention to legitimacy costs. Even if a department was able to identify socially optimal police policies that carefully consider the distribution of direct and indirect benefits, it is not obvious those policies would be implemented by individual police officers. Current tools available to police managers to encourage individual officers to behave in the interest of the department are, from a personnel perspective, limited, and have been only rarely shown to alter police behavior in the field. Instead, many compensation and oversight strategies tend to encourage individual officers to make arrests and to emphasize officer’s personal safety. While these goals are both important and laudable, they are different from engaging with the public in a socially optimal way.

Owens and Ba take a granular look at the incentives of the various actors in the police structure. For example, politicians have an incentive to appeal to the median voter and to raise revenue. Thus, their policies toward policing will either voice the views of those who fear crime or of those who feel the costs of aggressive policing, depending on who they see as the median voter in the community. Politicians will also tend to favor actions by the police that lead to more revenue. These incentives may not coincide with encouragement of socially optimal policing behavior.

At a day-to-day level, police officers have an incentive to get home safely, clear offenses, and avoid complaints. Police departments can try to affect their behavior through the ways that officers are hired, trained, and monitored. But police unions typically enforce a salary structure where raises are based on experience and promotions–and there is little or no room for financial incentives aimed at other aspects of police behavior. In addition, police unions care about protecting their members from any and all accusations. Many police departments have community outreach events, but the community members who participate in these events are not a randomly chosen cross-section. Some police departments do not reflect the ethnic mix of their community.

Putting all this together, Owens and Ba write:

While police departments and officers are tasked with solving a complicated social welfare problem, the structure of institutional incentives is relatively simple. The dominant incentives faced by police departments are to develop policies which provide indirect benefits—to make civilians feel safe and to see the police “doing something” about crime. As long as only a small fraction of the population is directly affected by criminal victimization and only a small fraction of the population bears the cost of achieving the direct and indirect benefits of crime reduction, we would expect that rational, vote-maximizing politicians might not object to policies that either under-provided or over-provided police engagement in specific areas of concentrated disadvantage. …

On top of truly optimal social policies being difficult to identify, individual officers within a department have substantial discretion in how they engage with the public. Standard strategies that organizations use to provide incentives for workers are limited by structured wage and promotion mechanisms, high monitoring costs, and limits on the ability to sanction employees. There is currently little evidence base with which one might identify screening, training, or monitoring strategies that support a department of officers who are able to make welfare enhancing decisions about civilian engagement.

Monica Bell raises many similar issues from a slightly different perspective. She stresses the importance of spelling out the costs that arise from more aggressive police interactions: for example, there are studies finding that aggressive policing in certain areas is associated with lower educational performance for teenagers, with worse stress-related health outcomes ranging from high blood pressure to anxiety and trauma, and even with increased residential segregation and lower voting rates. In short, the costs of aggressive policing are not to be brushed aside as a minor inconvenience.

Bell points out that alternative methods of reducing crime are often under-researched. Here’s one example:

But there are a number of other community-based programs or alternatives to traditional policing that remain largely unstudied, even though some of them are becoming models for other jurisdictions across the nation. For example, CAHOOTS (Crisis Assistance Helping Out on the Street) started in Eugene, Oregon, to send two-person clinical response teams to aid people in mental health crisis, without relying on armed police officers. Although the program has existed for more than three decades, in summer 2020 it gained national attention and became the model for numerous pilot programs—in San Francisco, Denver, Rochester, Toronto, and more. Eugene’s CAHOOTS program is funded and overseen by the police department, but some other emerging programs are funded and managed separately from police. Despite its long duration … CAHOOTS has never been rigorously evaluated.

Another largely unevaluated program seeks to develop quantitative metrics, based on local community involvement, for steps that might reduce crime. Examples might include improved outdoor lighting in certain areas, or an exercise program, or a youth jobs program.

My point here is not to be starry-eyed about possible alternatives to police, but instead to take seriously the idea that if more police are the only item in your policy toolkit for reducing crime, you aren’t serious enough about actually reducing crime–or about building the kinds of communities where young men in particular are less prone to crime. I would also say that if police and appropriate policing strategies aren’t a substantial part of your overall policy toolkit for addressing crime, you also aren’t serious enough about reducing crime.

China’s Growing Role in International Financial Flows

Most of the International Debt Statistics 2022 report just published by the World Bank consists of region- and country-level tables about types of financial inflows and outflows, with a focus on low- and middle-income countries. But the “Overview” essay at the start, which lays out some of the overall patterns, has a section called “China: The Largest Borrower and Lender among Low- and Middle-Income Countries.” The discussion emphasizes China’s decision to become central to the international finances of low- and middle-income countries.

There are several kinds of international financial flows: debt, which includes corporate and government debt; foreign direct investment, which is a purchase of an ownership stake of at least 10% in a company; and portfolio investment, which is a purchase of equity that involves a smaller stake. The report notes that in 2020, net inflows of international capital to China rose, while net inflows to the rest of low- and middle-income countries dropped:

China accounted for over half of the combined aggregate net inflows to low- and middle-income countries in 2020. Aggregate financial flows to China rose 33 percent in 2020 to $466 billion, equivalent to 51 percent of aggregate financial flows to all low- and middle-income countries. Inflows to China were driven by a 62 percent increase in net debt inflows to $233 billion, from $144 billion in 2019, and a 12 percent rise in net equity inflows also to $233 billion. … This was in sharp contrast to aggregate net financial flows to other low- and middle-income countries, which fell 26 percent in 2020 to $443 billion, from $602 billion in 2019. The decline was due to a 21 percent contraction in debt inflows and a steeper 31 percent fall in net equity inflows. Within net equity flows, FDI fell 23 percent and portfolio equity flows were negative with an outflow of $24 billion compared to a small, $3 billion inflow in 2019.

Part of the reason for the rise in debt inflows to China is a set of policy decisions to make the China Interbank Bond Market (CIBM) more accessible to international investors. As the report notes: “The CIBM, China’s domestic bond market to which non-residents have access, has an estimated market value of $12 trillion at end-2020, making it the world’s second-largest bond market after the United States, or third if the Eurozone markets are counted together as one.” Here are some of the steps taken by China to encourage foreign bond investors:

Since 2016, Chinese authorities have implemented various programs and measures to improve non-resident access to the CIBM. The introduction of the CIBM Direct Access program removed investment quotas or repatriation restrictions for foreign institutional investors. The Bond Connect program, launched in July 2017, gave investors the option of registering and settling trades onshore, easing investor concerns over repatriation and capital account risk since the assets are held and settled offshore. In 2018, repatriation and holding period restrictions were removed. In June 2020, quota restrictions were abolished, and repatriation of fund procedures simplified, while in November 2020, in response to investor feedback on ease of access, the application process was streamlined. Inclusion of renminbi (RMB) bonds in the Bloomberg Barclays Global Aggregate Index and China-A shares in FTSE Russell emerging market index and automated links between the Shanghai, London, and Hong Kong SAR, China markets also encourage and facilitate foreign investors’ appetite for access to RMB bonds.

The report breaks down the low- and middle-income countries into three categories: China; the other top-10 low- and middle-income countries with the largest stocks of international debt (other than China, this would be Argentina, Brazil, India, Indonesia, Mexico, the Russian Federation, South Africa, Thailand, and Turkey); and other low- and middle-income countries. Here’s a figure and a description from the report:

Aside from China, the downturn in aggregate financial flows to the largest borrowers was much sharper than to other low- and middle-income countries. Aggregate financial flows to low- and middle-income countries’ nine largest borrowers, excluding China (defined on the basis of end-2020 external debt stock), fell to $151 billion in 2020, close to half the comparable figure for 2019. This reflected a near total collapse in net debt inflows, which plummeted from $110 billion in 2019 to $4 billion in 2020 and a 33 percent contraction in net equity inflows. In contrast, other low- and middle-income countries recorded a 7 percent rise in aggregate financial inflows in 2020 to $292 billion underpinned by a 36 percent increase in net debt inflows to $198 billion. However, net equity inflows fell 26 percent, in parallel with those of the nine major borrowers. As a result, net debt inflows accounted for 68 percent of aggregate financial flows to these countries in 2020 as compared to 54 percent in 2019.

China has become a very prominent lender to other low- and middle-income countries: indeed, some of China’s borrowing is then used to support its overseas lending. Here’s a figure showing the rise in total lending from China to other low- and middle-income countries.

Low- and middle-income countries’ combined debt to China was $170 billion at end-2020, more than three times the comparable level in 2011. To put this figure in context, low- and middle-income countries’ combined obligations to the International Bank for Reconstruction and Development were $204 billion at end-2020 and to the International Development Association $177 billion. [These two organizations are arms of the World Bank.] Most of the debt owed to China relates to large infrastructure projects and operations in the extractive industries. Countries in Sub-Saharan Africa, led by Angola, have seen one of the sharpest rises in Chinese debt although the pace of accumulation has slowed since 2018. The region accounted for 45 percent of end-2020 obligations to China. In South Asia, debt to China has risen, from $4.7 billion in 2011 to $36.3 billion in 2020, and China is now the largest bilateral creditor to the Maldives, Pakistan, and Sri Lanka.

In a big-picture sense, what’s interesting here is that China’s expanded involvement in international financial flows also implies added interdependencies. When you rely on inflows of debt, you become interdependent with lenders. When you become a big lender, you are interdependent with borrowers. There is often a quick assumption that in these lender-borrower relationships, the lender has greater power to exert influence, but it seems to me that the situation is often more complex. For example, I don’t doubt that China seeks to expand its influence by lending to many other low- and middle-income countries, but as the US and many other high-income countries discovered decades ago, other countries can become surly, uncooperative, and downright unfriendly as a result of debts that they owe. In addition, if China wants a continued inflow of international capital, then its government will need to abide by and even expand the kinds of market-opening reforms that have encouraged that inflow.

Lessons from a Half-Century of US Industrial Policies

The fundamental idea behind government industrial policy is that the forces of a market economy are not moving, or not moving quickly enough, in a desirable direction. Thus, a shove from the government in the form of subsidies, tax breaks, import limitations, research and development support, or other steps to support a particular industry might be needed. There is continual political pressure for such steps, and both the Trump and the Biden administrations have pushed for industrial policy in non-identical but often overlapping ways. Set aside for now the philosophical disputes over whether industrial policy might work, or whether such efforts will be corrupted by politics. Also set aside arguments over whether industrial policy in the broad-based form of supporting infrastructure for transportation and communications can help growth. Instead, focus on the question of what we have actually learned from policies aimed at supporting specific industries as enacted in the US during the last 50 years.

In that spirit, Gary Clyde Hufbauer and Euijin Jung discuss “Scoring 50 Years of US Industrial Policy, 1970–2020” (Peterson Institute for International Economics, November 2021). They write:

Over the past half-century, the goals of industrial policy (as we define the term) have varied. In some episodes, the goal was to assist declining industries (e.g., steel, textiles and apparel), both to save jobs and to rescue firms. In others, the goal was to offset externalities (e.g., solar and wind energy), in particular to reduce carbon emissions. And in still other episodes, the goal was to promote US leadership in emerging technologies (e.g., semiconductors, communications). The Trump administration responded to the COVID-19 national emergency with Operation Warp Speed, designed to accelerate the discovery and dissemination of effective vaccines. Looking forward, the Biden administration and Congress are seeking to ensure that the United States stays ahead of China in frontier technologies such as artificial intelligence, cyberspace, and electric vehicles.

As Hufbauer and Jung write: “This study examined 18 US industrial policy episodes implemented between 1970 and 2020, dividing them into three broad categories: cases where trade measures blocked the US market or opened foreign markets, cases where federal or state subsidies were targeted to specific firms, and cases where public and private R&D was funded to advance technology.” For each policy, they focus on three practical questions:

  1. Did the industry become competitive in international (or in some cases national) markets?
  2. Were jobs saved in the industry at a reasonable cost to taxpayers or purchasers? Their measure of “reasonable cost” is that the costs did not exceed the wages paid to the workers whose jobs were saved (a rough numerical version of this test is sketched just after this list).
  3. Was the technological frontier advanced through government assistance?
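
As a rough illustration of the second criterion, here is a minimal sketch of the cost-per-job-saved test. The total cost and jobs-saved figures are assumptions chosen to be in the ballpark of the steel numbers reported below, not numbers taken from the study:

```python
# Hypothetical inputs (roughly matching the steel case discussed below):
annual_cost_to_users = 900_000_000   # dollars per year borne by taxpayers or purchasers (assumed)
jobs_saved = 1_000                   # jobs preserved by the policy (assumed)
average_annual_wage = 59_000         # dollars per year

cost_per_job_saved = annual_cost_to_users / jobs_saved
ratio_to_wage = cost_per_job_saved / average_annual_wage

print(f"Cost per job saved: ${cost_per_job_saved:,.0f}")
print(f"That is {ratio_to_wage:.1f} times the average annual wage")
print("Passes the 'reasonable cost' test?", cost_per_job_saved <= average_annual_wage)
```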

For each of the policies, and each of these measures, they provide a grade of A, B, C, or D. As they write: “Numerical equivalents of these grades, used for the summaries in chapter 5, are A+ = 4.5; A = 4; B = 3; C = 2; D = 1.”

To give a flavor of their discussion, here is their summary of the longer discussion on US industrial policy aimed at the steel industry.

• Has the US steel industry become internationally competitive? Failed outcome = D
The industry clings to protection in 2020 as strongly as it did in 1970. During the Trump administration (2017–21), national security tariffs of 25 percent were added to the existing armory of barriers (initially across the board, later subject to exceptions). Exports of steel, concentrated in high-value products, amounted to 8 percent of domestic production in 2019, about the same percentage recorded in 1970.
• Did protection save jobs at a reasonable cost? Failed outcome = D
Per job saved in the steel industry, the cost to steel users was near the top of US protection episodes. Trump’s national security tariffs cost steel users about $900,000 annually per steel industry job saved, a figure many times the industry’s average annual wage of $59,000 in 2019.
• Did protection advance the technological frontier of steel production? Failed outcome = D
The biggest technological advance in the past half-century—but not as a consequence of protection—was the rise of minimills, which make an ever larger range of steel items from scrap. The shift to minimill production continues to grow owing to the demand for high-quality steel. The leading minimill producer, Nucor, advocated free trade until former CEO Ken Iverson died in 2002. The industry has made other innovations, but none as spectacular.

The authors write: [W]e offer four policy observations, distilled from this study of 18 industrial policy cases.

• Industrial policy can save or create jobs, but often at high cost. A major political selling point for industrial policy is to save or create jobs in a specific industry or location. In about half our sample, this was achieved at a taxpayer or consumer cost below the prevailing wage. But our calculations do not reflect the probable loss of downstream jobs, and in many instances the cost per job-year vastly exceeds the prevailing wage. Moreover, jobs created in one state often come at the expense of comparable jobs that might have been created in another state … At the national level, far better policies are available for creating jobs—for example, training programs and earned income tax credits.

• Import protection seldom pays off. A big exception: when actual or threatened barriers prompt a world-class firm to open in the United States. Toyota was an example in the 1980s; Taiwan Semiconductor Manufacturing Corporation (TSMC) is an example today. But in most cases, import protection does not create a competitive US industry and it imposes extreme costs on household and business users per job-year saved. Trade policy concentrated on opening markets abroad is a better bet.

• Designating a single firm to advance technology yields inconsistent results. Our study did not find single-firm triumphs that compare with the Manhattan Project. Perhaps they exist, but when government confines its support to a single firm to advance frontier technology, it forecloses alternative solutions that might be advocated by different scientists or business leaders. … The highly successful model of Operation Warp Speed vividly demonstrated that competition is an American strength.

• R&D industrial policy has the best track record by far. Among the 18 cases, DARPA has the outstanding record. The DARPA model entails broad guidance to science and engineering experts who, without political interference, award grants to promising but high-risk R&D. When public R&D strikes gold, private firms step in to commercialize the findings. This model finds favor both with the Biden administration and the Congress. In fact, the Biden administration has proposed other projects—ARPA-Health and ARPA-Climate—fashioned after DARPA. Renewable energy R&D, Florida’s biotech region, and the North Carolina Research Triangle all attest to this industrial policy approach.

Some Economics of Place Effects

Studying how much you are affected by living in a certain place isn’t easy. After all, people are not first observed, then randomly distributed across places, then observed again. In more concrete terms, those who end up living in a place with high-priced real estate and those who end up living in a place with low-priced real estate may well differ in a number of ways in their backgrounds, education levels, and job histories, and it would be peculiar to say that the places in which people ended up living “caused” these differences. Instead, it often seems more plausible that the place where people end up living is caused by earlier mechanisms of social and economic sorting.

Nonetheless, here are two examples of how knowing something about place effects seems important. Imagine that there is a program to provide support for people from low-income neighborhoods to move to higher-income neighborhoods. Will the shift in the place they are living affect their job prospects or the outcomes for their children? Or imagine that a retiree moves from one state to another. Will the shift in place affect their health or the health care they receive? A couple of papers in the Fall 2021 issue of the Journal of Economic Perspectives tackle such questions of place effects head-on.

(Full disclosure: I’ve been Managing Editor of the JEP since the first issue in 1987, and so may be predisposed to believe that the articles are of interest. Fuller disclosure: The JEP and all its articles back to the first issue have been freely available online for a decade now, courtesy of the American Economic Association, so there is no financial benefit for me or anyone else from recommending the articles.)

It turns out that there are certain situations where one can look at those who changed places, compare them with a “control group” that did not change places, and draw some plausible conclusions. Eric Chyn and Lawrence F. Katz provide an overview and contextual interpretation of this research in “Neighborhoods Matter: Assessing the Evidence for Place Effects” (Journal of Economic Perspectives, 35:4, pp. 197-222). Here’s how they describe perhaps the most prominent study in this area:

Beginning in 1994, the Moving to Opportunity housing mobility demonstration randomized access to housing vouchers and assistance in moving to less-distressed communities to about 4,600 families living in public housing projects located in deeply impoverished neighborhoods in five cities: Baltimore, Boston, Chicago, Los Angeles, and New York. The program randomized families into three groups: 1) a low-poverty voucher group (also called the “experimental group”) that was offered housing-mobility counseling and restricted housing vouchers that could only be used to move to low-poverty areas (Census tracts with 1990 poverty rates below 10 percent); 2) a traditional voucher group that was offered regular Section 8 housing vouchers that had no additional locational constraints (also called the Section 8 group); and 3) a control group that received no assistance through the program.

As researchers began to think seriously about place effects, they sought out other situations in which people ended up moving out of low-income areas. For example, studies looked at situations where people had to move because public housing was demolished, or where Hurricane Katrina destroyed existing housing, in comparison to outcomes for a similar group that was not forced to move. There are also studies where one group of low-income households is offered information and counseling to help them match up with rental options in areas with higher average incomes, while a control group is not offered such assistance and is thus much less likely to move to those other areas. Notice that all of these approaches, in different ways, have an element of randomness that allows the researcher to make a plausible estimate of the causal effects of living in a different place.

The overall finding from this line of research is that when lower-income households move to a higher-income neighborhood, it has substantial effects for younger children who grow up in the new neighborhood, smaller or even nil effects for those who are older teenagers at the time of relocation, and not much effect on the job or income outcomes for the adults in those households. Interestingly, the positive effects for younger children don’t seem to be reflected primarily in school test scores, but instead show up as gains in noncognitive skills as reflected in measures like numbers of school absences or suspensions, and chances of repeating a grade. The authors write: “Studies of the Moving to Opportunity demonstration and Chicago public housing demolitions found no evidence that relocating to less distressed areas had impacts on the economic outcomes of adults, but both settings revealed large long-run gains for younger children …”

In the same issue, Tatyana Deryugina and David Molitor discuss “The Causal Effects of Place on Health and Longevity” (Journal of Economic Perspectives, 35: 4, pp. 47-70). They point out that regional differences in health across places are actually fairly similar in size in the European Union and the United States. They write:

Three main results emerge from comparing the regional variation in life expectancy in the United States and Europe. First, average life expectancy is 2.8 years higher in Europe than in the United States. Second, the overall variation in life expectancy, as captured by the standard deviation or interdecile range of the life expectancy distribution, is similar in both contexts. Third, most of the regional variation in life expectancy in Europe is explained by country of residence, whereas in the United States, most of the variation is within-state.
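
To make the “explained by country of residence” language concrete, here is a minimal sketch of the between/within decomposition that lies behind that kind of statement. The regional life-expectancy numbers are made up for illustration, not actual US or EU data:

```python
import pandas as pd

# Made-up regional life expectancies, grouped by country (or state).
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "life_expectancy": [81.0, 80.5, 81.5, 78.0, 78.5, 77.5, 83.0, 82.5, 83.5],
})

overall_mean = df["life_expectancy"].mean()
total_var = ((df["life_expectancy"] - overall_mean) ** 2).mean()

group_means = df.groupby("group")["life_expectancy"].transform("mean")
between_var = ((group_means - overall_mean) ** 2).mean()          # variation across group averages
within_var = ((df["life_expectancy"] - group_means) ** 2).mean()  # variation inside each group

# between_var + within_var equals total_var (up to floating-point error).
print(f"Share of variation explained by group: {between_var / total_var:.0%}")
print(f"Share of variation within groups:      {within_var / total_var:.0%}")
```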

A primary issue is the problem of figuring out whether it’s the place that matters, or whether it’s the prior sorting of people by income and occupation–which is often correlated with place. For a given place, health could also be affected by issues like local public health policies, or peer effects (like whether there is a culture of substance abuse or outdoor exercise), or by local environmental contamination. In addition, these factors often overlap: for example, a location with lower incomes may also receive less health care, be more affected by environmental pollution, and have peer effects that do not reinforce good health. It’s not obvious that “place” is always the most useful way to think about these kinds of factors, rather than focusing on the underlying determinants. The authors mention a classic example from the research in this area:

As a vivid example of geographic differences in mortality across the United States, Fuchs (1974) compared mortality rates in Nevada and Utah, which are neighboring states with similar climates and, at the time, similar income levels and physicians per capita. Fuchs noted that, nonetheless, adult mortality rates were substantially higher in Nevada than in Utah, which he attributed to Nevada’s high rates of cigarette and alcohol consumption as well as “marital and geographical instability.” Even today, the average person born in Utah has a life expectancy 1.9 years higher than the average person born in Nevada.

But broadly speaking, the approach is to look at movers between areas, and then to think carefully about what one can learn from such comparisons. For example, one source of data is the elderly, who are covered by Medicare insurance and who move between states. Patterns of medical practice vary across the US, so you can observe Medicare patients with similar earlier patterns of health care use who move to a place where Medicare recipients on average get more care, or where they get less care, and compare them with those who don’t move at all. One can also look at doctors who move between areas and, looking at doctors who had similar patterns of Medicare charges before their move, see whether their pattern of Medicare charges tends to stay the same when they go to a different place. As the authors write:

Other indirect evidence that local conditions matter for health comes from papers that use movers to study how local conditions affect health care provision and other non-health outcomes that could ultimately affect health. For example, Song et al. (2010) show that when Medicare recipients move between regions, rates of medical diagnoses change. Finkelstein, Gentzkow, and Williams (2016) study Medicare recipients who move between areas and show that place of residence affects movers’ medical spending. Molitor (2018) looks at cardiologists who move and finds that, on average, their own practice patterns change by 60–80 percent of the difference in local norms between their new and original practice regions.

There are lots of complexities when looking at place effects on health. So far, the research based on movers from one place to another (that is, not on correlations looking at health effects in different places) suggests that place effects on health are real, but it does not yet give strong answers as to why. The authors write:

The observed geographic dispersion in life expectancy and evidence from movers between areas strongly suggest that where one lives matters for when one dies. Determining whether place health effects are large or trivially small, however, has not been accomplished until very recently. New evidence comparing movers to other movers to estimate place health effects make it reasonable to conclude that, at least for some groups, place of residence has a sizable effect on health. However, more research is needed to build on these findings and, in particular, to understand the effect of place at younger ages on long-term longevity. Although there are many plausible mechanisms through which these place effects may materialize, the question of what it is exactly that causes some places to be better for health than others has so far not been answered directly by any existing study.

Forks: A Story of Technological Diffusion

The Thanksgiving holiday is a time when many of us make good use of our forks. Thus, it seems like an appropriate time to pass along this short note, “Introduction of Forks,” from the March 1849 issue of Stryker’s Quarterly Register and Magazine (pp. 204-5).

As any well authenticated account of the invention or introduction of any of our present customs, or modes of living, cannot but be both instructive and amusing, we insert the following account of the first introduction of the table fork into England, as related by Thomas Corgate in his book of travels through a part of Europe, A.D. 1608.

“Here I will mention what might have been spoken of before in discourse of the first Italian towne. I observed a custom in all those Italian cities and towns through which I passed, that is not used in any other country that I saw in my travels, neither doe I thinke that any other nation of Christendom doth use it, but only Italy. The Italian, and also most strangers that are commorant in Italy, doe alwaies at their meales use a little forke when they cut their meate. For while with their knife, which they hold in one hand, they cut the meate out of the dish, they fasten the fork, which they hold in the other hand, upon the same dish: so that whatsever he be that, setting in companie with any others at meale, should unadvisedly touch the dish of meate with his fingers, from which all the table doe cut, he will give occasion of offence to the companie, as having transgressed the laws of good manners, insomuch that for his error he shall be at least brow-beaten if not reprehended in wordes. This forme of feeding I understand is generally used in all places of Italy; their forkes being for the most part made of yron or steele and some of silver, but these are only used by gentlemen. The reason of this their curiosity is, because the Italian cannot by any meanes indure to have his dish touched with fingers, seeing all men’s fingers are not alike cleane. Hereupon I myself thought good to imitate the Italian fashion by this forked manner of cutting meate, not only while I was in Italy, but also in Germany, and oftentimes in England, since I came home, being once quipped for that frequent using of my forke by a certaine learned gentleman, a familiar friend of mine, one Mr. Lawrence Whitaker, who in his merry humour doubted not to call me at table furcifer, only for using a forke at feeding, and for no other cause.”

Of course, the invention and use of the fork goes back much earlier. Chad Ward provides a useful overview in “Origins of the Common Fork” (Leite’s Culinaria, May 6, 2009). For example:

Forks were in use in ancient Egypt, as well as Greece and Rome. However, they weren’t used for eating, but were, rather, lengthy cooking tools used for carving or lifting meats from a cauldron or the fire. Most diners ate with their fingers and a knife, which they brought with them to the table. Forks for dining only started to appear in the noble courts of the Middle East and the Byzantine Empire in about the 7th century and became common among wealthy families of the regions by the 10th century. Elsewhere, including Europe, where the favored implements were the knife and the hand, the fork was conspicuously absent.

Imagine the astonishment then when in 1004 Maria Argyropoulina, Greek niece of Byzantine Emperor Basil II, showed up in Venice for her marriage to Giovanni, son of the Pietro Orseolo II, the Doge of Venice, with a case of golden forks—and then proceeded to use them at the wedding feast. They weren’t exactly a hit. She was roundly condemned by the local clergy for her decadence, with one going so far as to say, “God in his wisdom has provided man with natural forks—his fingers. Therefore it is an insult to him to substitute artificial metal forks for them when eating.”

When Argyropoulina died of the plague two years later, Saint Peter Damian, with ill-concealed satisfaction, suggested that it was God’s punishment for her lavish ways. “Nor did she deign to touch her food with her fingers, but would command her eunuchs to cut it up into small pieces, which she would impale on a certain golden instrument with two prongs and thus carry to her mouth. . . . this woman’s vanity was hateful to Almighty God; and so, unmistakably, did He take his revenge. For He raised over her the sword of His divine justice, so that her whole body did putrefy and all her limbs began to wither.”

Doomed by God for using a fork. Life was harsh in the 11th century.

However, forks did gradually gain a foothold in Italy, and then France, and then crossed the English Channel with Thomas Corgate, as noted above. The early United States had few forks, so that most people ate with the combined efforts of a knife and a spoon, but even among the Americans, forks had apparently become standard practice by about 1850.

An Economist Chews over Thanksgiving

 As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there’s anything wrong with that. [This is an updated, amended, elongated, and cobbled-together version of a post that was first published on Thanksgiving Day 2011.]

The last time the U.S. Department of Agriculture did a detailed “Overview of the U.S. Turkey Industry” appears to be back in 2007, although an update was published in April 2014. Some themes about the turkey market waddle out from those reports on both the demand and supply sides.

On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1970s up to about 1990, but then declined somewhat, and appears to have made a modest recovery in the last few years. The figure below was taken from the Eatturkey.com website run by the National Turkey Federation a couple of years ago.

Turkey companies are what economists call “vertically integrated,” which means that they either carry out all the steps of production directly, or control these steps with contractual agreements. Over time, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent.

The 2014 USDA report points out that the capacity of eggs per hatchery has continued to rise (again, references to charts omitted):

For several decades, the number of turkey hatcheries has declined steadily. During the last six years, however, this decrease began to slow down. As of 2013, there are 54 turkey hatcheries in the United States, down from 58 in 2008, but up from the historical low of 49 reached in 2012. The total capacity of these facilities remained steady during this period at approximately 39.4 million eggs. The average capacity per hatchery reached a record high in 2012. During 2013, average capacity per hatchery was 730 thousand (data records are available from 1965 to present).

U.S. agriculture is full of examples of remarkable increases in yields over periods of a few decades, but such examples always drop my jaw. I tend to think of a “turkey” as a product that doesn’t have a lot of opportunity for technological development, but clearly I’m wrong. Here’s a graph showing the rise in size of turkeys over time from the 2007 report.

The production of turkey is not a very concentrated industry, with three relatively large producers (Butterball, Jennie-O, and Cargill Turkey & Cooked Meats) and then more than a dozen mid-sized producers. Given this reasonably competitive environment, it’s interesting to note that the price markups for turkey–that is, the margin between the wholesale and the retail price–have in the past tended to decline around Thanksgiving, which obviously helps to keep the price lower for consumers. However, this pattern may be weakening over time, as margins have been higher in the last couple of Thanksgivings. Kim Ha of the US Department of Agriculture spells this out in the “Livestock, Dairy, and Poultry Outlook” report of November 2018. The vertical lines in the figure show Thanksgiving. She writes: “In the past, Thanksgiving holiday season retail turkey prices were commonly near annual low points, while wholesale prices rose. … The data indicate that the past Thanksgiving season relationship between retail and wholesale turkey prices may be lessening.”

For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant “basket” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical US household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner rose 6% from 2020 to 2021. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The lower line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been an OK measure of the overall inflation rate.
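
For readers who want the mechanics, here is a minimal sketch of how a nominal price is adjusted for overall inflation. The dinner prices and index values are made-up round numbers, not the actual Farm Bureau or Bureau of Labor Statistics figures:

```python
# Illustrative data: nominal dinner cost rising about 6%, overall prices rising a bit more.
nominal_price = {2020: 50.00, 2021: 53.00}   # dollars, made up
price_index = {2020: 100.0, 2021: 106.5}     # overall price level, made up

base_year = 2020
for year in nominal_price:
    # Express each year's dinner price in base-year dollars.
    real_price = nominal_price[year] * price_index[base_year] / price_index[year]
    print(f"{year}: nominal ${nominal_price[year]:.2f}, "
          f"real (in {base_year} dollars) ${real_price:.2f}")
# If the real series is roughly flat, the dinner's price has been rising at about
# the same rate as overall inflation.
```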

Thanksgiving is a distinctively American holiday, and it’s my favorite. Good food, good company, no presents–and all these good topics for conversation. What’s not to like?

Holding Down Costs of Megaprojects: The Madrid Subway Example

Imagine that at some point in your life, for your sins, you are part of a panel to evaluate different methods of construction for a megaproject–say, an extension of a light-rail public transit system. Let’s also say that your goals are that the project should get done reasonably soon and at reasonable cost. What are some flashing red lights that suggest the plans may be going off-kilter?

Bent Flyvbjerg tackles this question in his essay, “Make Megaprojects More Modular,” written for the November-December 2021 issue of the Harvard Business Review. The key lessons are suggested by the subheading: “Repeatable design and quick iterations can reduce costs and risks and get to revenues faster.” If someone starts talking about calling in famous architects for new designs, or famous artists for unique decorations, be wary. If they start talking about custom-designed components, be even more wary. Sure, be a little creative if you can. But design the project to happen in reasonably sized bites, each one briskly implemented, using readily available off-the-shelf components and technology. Flyvbjerg gives several examples of success stories. One is from the guy responsible for expanding the subway system in Madrid.

Manuel Melis Maynar understands the importance of scalability. An experienced civil engineer and the president of Madrid Metro, he was responsible for one of the largest and fastest subway expansions in history. Subway construction is generally seen as custom and slow by nature. It can easily take 10 years from the decision to invest in a new line until trains start running, as was the case with Copenhagen’s recent City Circle Line. And that’s if you don’t encounter problems, in which case you’re looking at 15 to 20 years, as happened with London’s Victoria line. Melis figured there had to be a better way, and he found it.

Begun in 1995, the Madrid subway extension was completed in two stages of just four years each (1995 to 1999: 56 kilometers of rail, 37 stations; 1999 to 2003: 75 kilometers, 39 stations), thanks to Melis’s radical approach to tunneling and station building. In project management terms, it offers a stark contrast to the experience of the Eurotunnel, which has cost its investors dearly. Melis’s success was the result of applying three basic rules to the design and management of the project.

No monuments. Melis decided that no signature architecture would be used in the stations, although such embellishment is common, sometimes with each station built as a separate monument. (Think Stockholm, Moscow, Naples, and London’s Jubilee line.) Signature architecture is notorious for delays and cost overruns, Melis knew, so why invite trouble? His stations would each follow the same modular design and use proven cut-and-cover construction methods, allowing replication and learning from station to station as the metro expanded.

No new technology. The project would eschew new construction techniques, designs, and train cars. Again, this mindset goes against the grain of most subway planners, who often pride themselves on delivering the latest in signaling systems, driverless trains, and so on. Melis was keenly aware that new product development is one of the riskiest things any organization can take on, including his own. He wanted none of it. He cared only for what worked and could be done fast, cheaply, safely, and at a high level of quality. He took existing, tried-and-tested products and processes and combined them in new ways. Does that sound familiar? It should. It’s the way Apple innovates, with huge success.

Speed. Melis understood that time is like a window. The bigger it is, the more bad stuff can fly through it, including unpredictable catastrophic events, or so-called black swans. … Traditionally, cities building a metro would bring in one or two tunnel-boring machines to do the job. Melis instead calculated the optimal length of tunnel that one boring machine and team could deliver—typically three to six kilometers in 200 to 400 days—divided the total length of tunnel he needed by that amount, and then hired the number of machines and teams required to meet the schedule. At times, he used up to six machines at once, completely unheard of when he first did it. His module unit was the optimal length of tunnel for one machine, and like the station modules, the tunnel modules were replicated over and over, facilitating positive learning. As an unforeseen benefit, the tunnel-boring teams began to compete with one another, accelerating the pace further. They’d meet in Madrid’s tapas bars at night and compare notes on daily progress, making sure their team was ahead, transferring learning in the process. And by having many machines and teams operating at the same time, Melis could also systematically study which performed best and hire them the next time around. More positive learning. A feedback system was set up to avoid time-consuming disputes with community groups, and Melis persuaded them to accept tunneling 24/7, instead of the usual daytime and weekday working hours, by asking openly if they preferred a three-year or an eight-year tunnel-construction period.
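
Here is a back-of-the-envelope version of Melis’s machine-count calculation. The tunnel length and per-machine pace are illustrative assumptions loosely scaled from the figures quoted above, not numbers from the project records:

```python
import math

total_tunnel_km = 56      # assumed tunnel length to deliver in one stage
km_per_machine = 10.0     # assumed deliverable per machine/team over the stage
                          # (the essay cites 3-6 km per 200-400 days)

machines_needed = math.ceil(total_tunnel_km / km_per_machine)
print(f"Tunnel-boring machines and teams to hire: {machines_needed}")
```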

The worry, of course, is that rules like this will make the megaproject boring and perhaps low-quality. But quality problems are often what arise when using new designs, custom parts, and new technologies–especially when it comes time to repair and replace parts of the system. Also, is the purpose of the megaproject to do a job for society–in this case to get people from A to B–or is it to give politicians a pretty place for a press conference? Flyvbjerg writes:

But go to Madrid and you will find large, functional, airy stations and trains—nothing like the dark, cramped catacombs of London and New York. Melis’s metro is a workhorse, with no fancy technology to disrupt operations. It transports millions of passengers, day in and day out, year after year, exactly as it is supposed to do. Melis achieved this at half the cost and twice the speed of industry averages—something most thought impossible.

Infrastructure Projects: History of Understated Costs and Overstated Benefits

It sometimes seems as if every big infrastructure project underestimates its costs and overpromises its benefits. But is that a few bad apples that get a lot of publicity, or is it the real overall pattern? Bent Flyvbjerg and Dirk W. Bester put together some evidence in “The Cost-Benefit Fallacy: Why Cost-Benefit Analysis Is Broken and How to Fix It” (Journal of Benefit-Cost Analysis, published online October 11, 2021).

They collect “a sample of 2062 public investment projects with data on cost and benefit overrun. The sample includes eight investment types: Bridges, buildings, bus rapid transit (BRT), dams, power plants, rail, roads, and tunnels. Geographically, the sample incorporates investments in 104 countries on six continents, covering both developed and developing nations, with the majority of data from the United States and Europe. Historically, the data cover almost a century, from 1927 to 2013.” Not all of these studies have data on both expected and realized benefits and costs. But here’s a table summarizing the results: On average, there are cost overruns in every category, and overstated benefits in every category. In more detailed results, they show that this pattern hasn’t evolved much over time.

Of course, the average doesn’t apply to every project. Indeed, sometimes there are cost overruns but then even bigger benefits than expected. But the average pattern is disheartening. Indeed, “[c]onsidering cost and benefit overrun together, we see that the detected biases work in such a manner that cost overruns are not compensated by benefit overruns, but quite the opposite, on average. We also see that investment types with large average cost overruns tend to have large average benefit shortfalls.”

The essential problem here, Flyvbjerg and Bester argue, is that those who do these estimates of benefits and costs are overly optimistic.

The root cause of cost overrun, according to behavioral science, is the well-documented fact that planners and managers keep underestimating the importance of schedules, geology, market prices, scope changes, and complexity in investment after investment. From the point of view of behavioral science, the mechanisms of scope changes, complex interfaces, archeology, geology, bad weather, business cycles, and so forth are not unknown to public investment planners, just as it is not unknown to planners that such mechanisms may be mitigated. However, planners often underestimate these mechanisms and overestimate the effectiveness of mitigation measures, due to well-known behavioral phenomena like overconfidence bias, the planning fallacy, and strategic misrepresentation.

Thus, the question is how to get those estimating the benefits and costs of mega-projects to be more realistic. The authors offer some suggestions.

“Reference class forecasting” is the idea of basing your estimates on the actual costs and benefits of similar projects in other places, representing a range of better and worse outcomes. Another idea is to give the benefit-cost forecasters some “skin in the game”: “Lawmakers and policymakers should develop institutional setups that reward forecasters who get their estimates right and punish those who do not.” This can be done in friendly ways, with bonuses, or it can be done in punitive ways, with lawsuits and even criminal punishments when things go badly wrong. There can be a rule in advance that independent audits will be carried out during and after the project–perhaps even by several different auditors. Finally, the decisions about whether to proceed shouldn’t just involve technocrats and spreadsheets, but also need public involvement. For example, if the public is going to demand processes that slow down or complicate a project, that needs to be taken into account at the start–even if those demands may seem irrelevant or counterproductive to the forecasters.
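
As a concrete illustration of reference class forecasting, here is a minimal sketch under assumed data. The past overrun ratios and the base estimate are invented for the example, not drawn from Flyvbjerg and Bester’s sample:

```python
import numpy as np

# Ratios of actual cost to estimated cost for comparable past projects (made up).
past_overruns = np.array([0.95, 1.05, 1.10, 1.20, 1.25, 1.40, 1.60, 1.90, 2.30])

base_estimate = 500_000_000   # current project's bottom-up cost estimate, dollars (assumed)

# Pick an acceptable risk of still running over budget, say 20 percent,
# and use the corresponding percentile of past overruns as the uplift.
uplift = np.percentile(past_overruns, 80)
budget = base_estimate * uplift

print(f"80th-percentile uplift from the reference class: {uplift:.2f}x")
print(f"Risk-adjusted budget: ${budget:,.0f}")
```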

The authors note that these kinds of changes are being used in various places and by various governments around the world. If there are plans for a megaproject where you live, you might want to think about whether these processes might lead to more accurate benefit-cost estimates, too.

Sorting Men and Women by College Major and Occupation

Men and women tend to sort into different college majors. Even given the same college major, they tend to sort into different jobs. Carolyn M. Sloane, Erik G. Hurst, and Dan A. Black explore these patterns, and some implications for wage differences between men and women, in “College Majors, Occupations, and the Gender Wage Gap” (Journal of Economic Perspectives, Fall 2021, 35:4, pp. 223-48).

(Full disclosure: I’ve been the Managing Editor of JEP since the first issue in 1987, so I am perhaps predisposed to think the articles are of wider interest. Fuller disclosure: All JEP articles back to the first issue have been freely available online for a decade now, courtesy of the American Economic Association, so neither I nor anyone else gets any direct financial benefit if you choose to check out the journal.)

For example, here are some broad groupings of majors that tend to be male- or female-dominated. The left-hand panel shows majors where the female/male share of majors started at less than one. Biological sciences has risen above one and is now majority female, but the other majors have changed much less. The right-hand panel shows broad categories of majors where the female/male share started above one–in some cases, several multiples above one. None of these areas has switched to majority male: in one area, psychology, the female dominance has become much more pronounced.

Sloane, Hurst, and Black aren’t trying to explain why these patterns arose or why they persist (although that’s an obvious topic for speculation!). Instead, they are interested in documenting the extent of this difference and its persistence over time, and in pointing out that the male-dominated majors on average have higher wages. Their data break down the broad major categories into 134 detailed majors: for example, the category of “Engineering” contains 17 different majors. They write:

We find that women are systematically sorted into majors with lower potential wages relative to men. For example, Aerospace Engineering, one of the highest potential wage majors, is 88 percent male, while Early Childhood Education, one of the lowest potential wage majors, is 97 percent female. We also find that such patterns are long-standing and have been slow to converge. Overall, college-educated women born in the 1950s matriculated with majors that had potential wages 12 percent lower than men from their cohort. That gap fell to about 9 percent for the 1990 birth cohort. Even after some convergence in major sorting between men and women during the last 40 years, the youngest birth cohorts of women are still sorted into majors with lower potential wages than their male peers. Intriguingly, much of the convergence in major sorting between men and women occurred between the 1950 and 1975 birth cohorts, with a modest divergence for recent cohorts.

The authors use an interesting method of comparing wages across majors. For every major, they look at the median wages paid to a middle-aged, US-born, white male in that category. Thus, they are not trying to measure gaps between female and male wages, or the extent of discrimination. Instead, they are noting that wages are lower in female-dominated majors even if one just compares white men of the same age with different majors.
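To make the method concrete, here is a minimal sketch, assuming a hypothetical micro dataset with wage, age, sex, race, place-of-birth, and major fields. The column names and the 40-54 age window are my own illustrative choices, not the authors’ specification.

```python
# A minimal sketch of the "potential wage" idea: for each major, take the
# median wage of middle-aged, US-born white men with that major.
# The toy data below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "major": ["Engineering", "Engineering", "Education", "Education"],
    "wage":  [95000, 105000, 48000, 52000],
    "age":   [45, 50, 47, 44],
    "sex":   ["male"] * 4,
    "race":  ["white"] * 4,
    "born_in_us": [True] * 4,
})

# Restrict to the benchmark group, then take the median wage by major.
benchmark = df[(df.sex == "male") & (df.race == "white")
               & df.born_in_us & df.age.between(40, 54)]
potential_wage = benchmark.groupby("major")["wage"].median()
print(potential_wage)
```

Because the benchmark group is the same for every major, differences in these “potential wages” reflect the majors themselves rather than who happens to choose them.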

They then take the idea of sorting one step further. Men and women who have the same major tend to sort into different occupations. Here’s a table illustrating this pattern. The top category shows that for education majors, 68% of women end up as teachers, compared with 50% of men. But among education majors, 18% of men end up in executive/manager jobs, compared with 9% of women. Similarly, the next panel shows that among nursing/pharmacy majors, women are more likely to end up as nurses, while men are more likely to end up in executive/manager roles.

The authors have data on 251 distinct occupations. They find that when women and men have the same majors, women have historically sorted into lower-paid occupations (again, just noting that this happens, while not investigating the question of how or why). For the effect of this occupational sorting for men and women with the same college major, they write:

[H]ow has occupational sorting conditional on major evolved across generations of US college graduates? We find that while women are sorted into occupations with lower potential wages conditional on major, this gap is closing somewhat over time. For the 1950 birth cohort, for example, women on average sorted to occupations with 11 percent lower potential earnings relative to otherwise similar men with the same majors. This gap narrowed to about 9 percent for the 1990 birth cohort. Almost all of the convergence occurred within highest potential earning majors. For example, women from the 1950 cohort who majored in Engineering—a high potential earning major—sorted into occupations with potential wages that were 14 percent lower than men from the same cohort who also majored in Engineering. For the 1990 birth cohort, however, women who majored in Engineering ended up working in occupations with roughly the same potential wages as their male peers.

Of course, these patterns of sorting by college major and occupation are also taking place against a backdrop of other changes: a rising share of women graduating from college, expansion of the US health care sector, falling birth rates, and so on. But the sorting that happens early in life, into a college major and then an occupation, has a lasting effect on later wages. The authors find that accounting for sorting by college major, and by occupation given the same major, can explain about 60% of the wage gap between men and women college graduates.
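One common way to put a number like “60 percent explained” on sorting of this kind (offered here as a hedged sketch, not the authors’ actual procedure) is to compare the raw gender gap in log wages with the gap that remains after controlling for major-by-occupation cells. The simulated data below are purely illustrative.

```python
# A hedged sketch: how much of the raw gender gap in log wages is
# "explained" by sorting into major-by-occupation cells? Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
female = rng.integers(0, 2, size=n)
# Simulated sorting: women are more likely to choose the lower-paid major.
major = np.where(rng.random(n) < 0.3 + 0.4 * female, "education", "engineering")
occupation = np.where(rng.random(n) < 0.5, "teacher", "manager")
log_wage = (10.0 + 0.3 * (major == "engineering")
            - 0.05 * female + rng.normal(0, 0.3, n))

df = pd.DataFrame({"log_wage": log_wage, "female": female,
                   "cell": pd.Series(major) + "_" + pd.Series(occupation)})

raw_gap = smf.ols("log_wage ~ female", data=df).fit().params["female"]
adj_gap = smf.ols("log_wage ~ female + C(cell)", data=df).fit().params["female"]
print(f"share of gap explained by sorting: {1 - adj_gap / raw_gap:.2f}")
```

In this toy setup, most of the raw gap comes from where men and women end up rather than from pay differences within the same major-occupation cell, which is the flavor of the decomposition the authors report.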

Some Eviction Economics

Part of the Coronavirus Aid, Relief, and Economic Security Act (the CARES Act), signed into law by President Trump on March 27, 2020, was a national moratorium on evictions. However, the moratorium was scheduled to end on July 24, 2020–although it effectively required an additional 30 days beyond that date before landlords could file notices to vacate. Congress did not vote to extend the moratorium, but the Centers for Disease Control and Prevention then announced a national eviction moratorium to start on September 4, 2020. The US Supreme Court held in August 2021 that the CDC lacked the power to make this policy decision without a law passed by Congress and signed by the president. Of course, the Supreme Court decision was not about whether the eviction moratoriums were good policy or had beneficial effects. Here, I set aside the legal questions and focus on what we know about the outcomes.

It’s worth saying at the start that data on rental evictions is not nationally centralized, and is not up-to-the-minute. Every study has its own sample. However, certain patterns do seem to emerge across studies. Jasmine Rangel, Jacob Haas, Emily Lemmerman, Joe Fish, and Peter Hepburn at The Eviction Lab at Princeton University provide evidence on overall eviction patterns in “Preliminary Analysis: 11 months of the CDC Moratorium” (August 21, 2021). Their project collects data from 31 cities and six full states, representing about one-fourth of all the renters in the country. Here’s their estimate, based on the sites they track, of how the total number of evictions would have evolved starting in January 2020, compared with what actually happened. Evictions fall by about half starting in March 2020, and the gap between expected and actual evictions continues to expand after the CDC moratorium takes effect in September 2020.

This drop in evictions has considerable local variation, in part because some states and cities enacted new eviction restrictions of their own. For example, here’s a figure showing the pattern across cities.

The researchers at the Eviction Lab also use data on the sites they track, together with historical data from the rest of the country, to make some overall estimates. They write: “In total, we estimate that federal, state, and local policies helped to prevent at least 2.45 million eviction filings since the start of the pandemic (March 15, 2020).” However, as far as I can tell, this estimate assumes that without the moratorium, evictions would have remained at pre-pandemic levels even after the pandemic started, which isn’t obvious. One can imagine that factors other than the moratorium made a difference, too.
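To make the nature of that assumption concrete, here is a back-of-the-envelope sketch with entirely made-up numbers: “filings prevented” is just the gap between a flat pre-pandemic baseline and observed filings, so the total rises or falls with whatever baseline one assumes.

```python
# Back-of-the-envelope logic behind a "filings prevented" estimate.
# All numbers are hypothetical; the key assumption is that, absent the
# moratorium, filings would have continued at the pre-pandemic baseline.
baseline_weekly_filings = 17_000
observed_weekly_filings = [8_000, 7_500, 6_000, 9_000]  # hypothetical weeks

prevented = sum(baseline_weekly_filings - w for w in observed_weekly_filings)
print(prevented)  # moves one-for-one with any change in the assumed baseline
```

If the pandemic itself would have depressed filings even without a moratorium, a flat baseline overstates the moratorium’s effect.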

What were some of the benefits of the eviction moratorium? The US Department of Housing and Urban Development publishes a magazine called Evidence Matters; the Summer 2021 issue is devoted to several articles on the theme of “Evictions.” The articles are heavily footnoted with references to published studies. The first article is titled “Affordable Housing, Eviction, and Health.”

The HUD article points out that the most recent national evidence, from 2016, suggests that about 8 of every 100 renters got an eviction notice that year. It’s not clear how the number of official eviction notices translates into actual evictions: evidence from some cities suggests that informal evictions are more common than formal ones; evidence from other cities suggests that only about half of eviction notices lead to an actual eviction. Both of these confounding factors can be true, of course. The article notes (footnotes omitted):

Nonpayment of rent is the primary reason for eviction, which itself can arise from various causes, including rising rents combined with stagnant income growth and persistent poverty, job or income loss, or a sudden economic shock such as a health emergency or a car breakdown. Other reasons include lease violations, which can be technical in nature; property damage; and disruptions, such as police calls. Landlords, for their own reasons, may force tenants to move, either informally or through a legal “no-fault” eviction. Renters often are evicted over relatively small amounts of money — in many cases, less than a full month’s rent. … These studies built on the findings of local investigations. The Milwaukee Area Renters Study found higher rates of eviction for African-American, Latinx, and lower-income renters and renters with children. Neighborhood crime and eviction rates, the number of children in a household, and “network disadvantage” — defined by Desmond and Gershenson as “the proportion of one’s strong ties to people who are unemployed, addicted to drugs, in abusive relationships, or who have experienced major, poverty-inducing events (e.g., incarceration, teenage pregnancy) to increase his or her propensity for eviction” — are factors associated with an increased likelihood of eviction.

During the pandemic, some evidence suggests that the eviction moratorium, by keeping households in place, helped to limit the spread of COVID.

Eviction is a particular threat to health during a pandemic because, as Benfer explains, “we know that eviction results in doubling up, in couch surfing, in residing in overcrowded environments, in being forced to use public facilities, and, at the same time, not being able to comply with pandemic mitigation strategies like wearing a mask, cleaning your PPE [personal protective equipment], social distancing, and sheltering in place.” Epidemiological modeling under counterfactual scenarios comparing results with a strict moratorium against results without a moratorium suggests that evictions increase COVID-19 infection rates significantly. …

By studying COVID-19 incidence and mortality in 43 states and the District of Columbia with varying expiration dates for their eviction moratoria, Leifheit et al. found that “COVID-19 incidence was significantly increased in states that lifted their moratoriums starting 10 weeks after lifting, with 1.6 times the incidence…[and] 16 or more weeks after lifting their moratoriums, states had, on average, 2.1 times higher incidence and 5.4 times higher mortality.” The researchers conclude that, nationally, expiring eviction moratoria are associated with a total of 433,700 excess COVID-19 cases and 10,700 excess deaths. Another study estimates that, had eviction moratoria been implemented nationwide from March 2020 through November 2020, COVID-19 infection rates would have been reduced by 14.2 percent and COVID-19 deaths would have been reduced by 40.7 percent.

A certain amount of the literature on the eviction moratorium focuses so intensely on the outcomes for renters that it barely mentions landlords, although when it comes to understanding rental markets, this is like discussing the yin without the yang. But there are some exceptions. Elijah de la Campa, Vincent J. Reina, and Christopher Herbert published “How Are Landlords Faring During the COVID-19 Pandemic? Evidence from a National Cross-Site Survey” (Joint Center for Housing Studies of Harvard University, August 2021), based on a national survey of landlords carried out from February to May 2021. The research group at JP Morgan Chase recently published “How did landlords fare during COVID?” (October 2021), which is based on Chase customers who are small business owners and who have indicated that they own a residential property that is rented out, or customers who have a mortgage from Chase on a multifamily or investment property. Both sources of data have their limitations, as noted above. But some overall patterns do emerge.

The studies both find that many landlords experienced real disruptions of income during the pandemic. The Harvard study found: “The share of landlords collecting 90 percent or more of yearly rent fell 30 percent from 2019 to 2020. … Ten percent of all landlords collected less than half of their yearly rent in 2020, with smaller landlords (1-5 units) most likely to have tenants deeply behind on rental payments.” The JP Morgan Chase study finds that “in the spring of 2020 … rental revenue for the median landlord was down about 20 percent relative to 2019.”

On the other side, for many landlords the eviction moratorium wasn’t as bad as it might have been. Many renters were receiving substantial payments from the government, and some of those flowed through to landlords. Some landlords who had mortgages on their rental properties were able to take advantage of policies to push back those payments for a time. Both the Harvard and the JP Morgan Chase studies find that many landlords also reduced or postponed their spending on maintenance. Others put their rental properties up for sale. The Harvard survey found: “The share of landlords deferring maintenance and listing their properties for sale also increased in 2020 (5 to 31 percent and 3 to 13 percent, respectively)…”

The federal government did allocate funds to help renters, and thus landlords, but that particular program never really got off the ground. The JPMorgan study puts it this way (again, footnotes omitted):

Between the Emergency Rental Assistance Program and the American Rescue Plan Act, $46.5 billion of rental assistance has been made available by the federal government for states and localities to distribute. As of the end of September [2021], less than a quarter of the funds have been distributed. The distribution of these funds has been hampered by onerous paperwork requirements for both tenants and landlords to prove that tenants meet strict requirements to qualify for assistance, including matching information from the renter and the landlord. Among the many challenges, many of the most vulnerable tenants are not part of the formal rental market (e.g., subletting, renting illegal units, striking informal agreements, etc.) and are not able to provide the leases or other paperwork that is required of them to receive aid. Government officials have altered the rules of the program over time (e.g., allowing for self-attestation of need, providing advances while paperwork is processed, increasing flexibility for what the funds can be used for, etc.) to accelerate the process for getting assistance to needy families when it became clear that paperwork had become too much of a bottleneck. Such flexibility will be key to helping landlords as the pandemic drags on and keeping tenants in their homes as the expiration of various eviction moratoriums rapidly approaches. This need is especially acute for smaller landlords as they are more likely to supply affordable rental housing and rent shortfalls during the pandemic has caused more of them to sell their rental properties. Helping these landlords helps to preserve our supply of affordable housing.

Renters tend to have lower incomes than landlords, and renters suffered more in financial terms during the pandemic than landlords did. But given that US rental housing markets are fundamentally based on private-sector rentals, the ability and willingness of landlords to provide affordable rental housing in the future is important.

The issue of what rules should govern rental evictions existed before the pandemic and will be a perennial topic moving ahead as well. Whatever the case for a national moratorium as a short-term or even a medium-term step, it surely can’t be a permanent policy. The HUD publication goes into these issues and discusses various local programs, although I don’t have a strong sense of which localities have rules that work better or worse. The HUD publication also offers some interesting evidence that evictions tend to be highly concentrated in certain areas, and even among certain landlords in those areas. As the report notes: “Among the implications of these findings is that interventions targeted at the neighborhoods, buildings, and landlords responsible for significant numbers of evictions can have a profound impact.”