Cousins and Kin Networks in China’s Family Structure

Certain aspects of China’s evolving population structure have been widely discussed. China’s fertility rates plummeted in the 1970s and have remained low since then, especially in the aftermath of the government’s “one-child policy.” There has been a saying in China about “six adults, one child,” illustrated by the idea of a child playing in a park with two parents and four grandparents watching–the result of two generations of one-child policies. As a result, China’s population structure has been shifting toward a greater share of elderly and a lesser share of children. In between, China’s own national statistics bureau has reported that the working-age population between ages 15-64 peaked in 2014 and has been declining since then. Earlier this year, China’s government acknowledged that deaths exceeded births in 2022, and thus that China’s total population had declined.

Nicholas Eberstadt and Ashton Verdery dig into how these changes affect Chinese family and kinship networks in “China’s Revolution in Family Structure: A Huge Demographic Blind Spot with Surprises Ahead” (American Enterprise Institute, February 2023).

It’s a story of how demographic changes can echo for decades. In the 1950s and 1960s, the fertility rate in China was 4-5 children per woman. To put it another way, children born during this time had on average 3-4 siblings. If one person with 3-4 siblings marries another person with 3-4 siblings, and that couple has a child, then the child will have 6-8 aunts and uncles, and a substantial number of cousins as well. And this quick sketch doesn’t count the great-aunts and great-uncles from the generation of the grandparents–along with their descendants.

Eberstadt and Verdery emphasize that even as China’s fertility rate dropped dramatically in the 1970s and 1980s, it remained true that the parents of the children being born at that time were often from larger extended families. Thus it was possible for a few decades, from the 1970s through the 1990s, for a single child to have a substantial number of aunts, uncles, and cousins. But after multiple generations of low fertility, the average number of aunts, uncles, and cousins will diminish substantially.

China, like most other countries, does not keep official data on numbers of aunts, uncles, and cousins. Thus, Eberstadt and Verdery are forced to estimate these numbers in a way that is consistent with the information that is available about fertility, family size, mortality rates at different ages, and life expectancy. Here are some of their findings:

As recently as 1950, we estimate, only about 7 percent of Chinese men and women in their 50s had any living parents; the corresponding figure today (i.e., 2020) would exceed 60 percent (Table A1). Conversely, a decidedly larger fraction of Chinese men and women in their 70s and 80s have two or more live children today than around the time of the 1949 “liberation” (Table A2); they are also more likely to have a living spouse nowadays than in that much less developed era (Table A6).

By our estimates, men and women in their 40s are three times more likely to have two or more living siblings today than they were in 1950 (Table A3). In 1950, by our reckoning, only one in four Chinese in their 30s had 10 or more living cousins; today that share would be an amazing 90 percent or higher. And practically none of today’s 30-somethings lack cousins altogether, as opposed to about 5 percent of their counterparts in 1950 (Table A4).

Despite 35 years of coerced anti-natalism through Beijing’s notorious One-Child Policy (1980–2015), today’s teens in China are more likely to have 10 or more living cousins and vastly more likely to have 10 or more living uncles and aunts than their predecessors in 1950 were (Table A4). In fact, 10 or more cousins and 10 or more uncles and aunts looks to be the most common family type for teens in contemporary China (Table A5).

Here are their estimates of the number of cousins for people in different age groups. For the 0-9 age bracket at any given time, you can see that the number of cousins rises sharply in the early 20th century, with declining infant mortality and longer life expectancies, but then tops out and starts declining after fertility rates start falling in the 1970s. For those in the 30-39 age bracket at any given time, this pattern essentially happens 30 years later–although additional rises in life expectancy make the peak a little higher. For those in the 60-69 age bracket at any point in time, the rise and fall is again roughly 30 years later.

The exact numbers here come from a population modeling exercise, and thus have a margin of error around them. But they are based on the known estimates for population, fertility, and infant mortality, and so should capture any big-picture swings. Here are some main themes:

1) China’s extended networks of blood relatives may have been larger around the turn of the 21st century than at any previous time. In the words of Eberstadt and Verdery:

Our estimates obviously cannot speak to the (possibly changing) quality of familial relations in China, but in terms of sheer quantity, it seems safe to say that Chinese networks of blood kin have never been nearly as thick as they were at the start of the 21st century. … This proliferation of living relatives is a confluence of three driving forces: (1) the tremendous general improvement in life expectancy starting seven decades ago, (2) the generationally delayed impact of steep fertility declines on counts of extended kin, and (3) the as-yet-modest inroads of the “flight from marriage” (to borrow a phrase from Gavin W. Jones) witnessed already in the rest of East Asia, as well as affluent Europe and North America.

2) Extended family networks are a form of social capital: for example, during China’s period of dramatic economic growth, with all of its reallocations across industrial sectors and from rural to urban areas, deep kinship networks have been a way of spreading information about opportunities. Eberstadt and Verdery:

China has evidently enjoyed a massive “demographic deepening” of the extended family over the past generation or so—a deepening with many likely benefits, not least an augmentation of social capital. …

Social capital begets economic capital. China’s “kin explosion,” for instance, may have had highly propitious implications for guanxi, the quintessentially Chinese kin-based networks of personal connections that have always been integral to getting business done in China. With a sudden new wealth of close and especially extended relatives on whom potentially to rely, the outlook in China for both entrepreneurship and informal safety nets may have brightened considerably—and rapidly—in post-Mao China, as numbers of working-age siblings, cousins, uncles, aunts, and other relatives surged for ordinary people. It is intriguing that China’s breathtaking economic boom should have taken place at the very time that the country’s extended family networks were becoming so much more robust. No one has yet examined the role of changing family structure in China’s dazzling developmental advance over the past four decades. But there is good reason to suspect that family dynamics are integral to the propitious properties of the much-discussed “demographic dividend” in China and other Asian venues. If so, the family would be a crucial though unappreciated element in contemporary China’s socioeconomic rise, and this unobserved variable may require some rethinking of China’s modern development, both its recent past and its presumed future.

3) Although China has been living through what might be called a “golden era of the extended family,” this pattern of deep kin networks is going to end in the next few decades. Eberstadt and Verdery write:

The number of living cousins for Chinese under age 30 is about to crash. Just three decades from now, young Chinese on average will have only a fifth as many cousins as young Chinese have today (Figure 4). By 2050, according to our estimates, almost no young Chinese will live in families with large numbers of cousins (Table A4). Between now and 2050, the fraction of Chinese 20-somethings with 10 or more living cousins is set to plummet from three in four to just one in six. …

By then [2050], a small but growing share of China’s children and adolescents will have no brothers, sisters, cousins, uncles, or aunts. Still more sibling-less young people will have just one or two such kin. Thus, a significant minority of this coming generation in China—many tens of millions of persons—will be traversing life from school through work and on into retirement with little or no firsthand experience of the Chinese extended family, the tradition hitherto inseparable from China’s very culture and civilization and experienced most acutely in fullest measure in the decades just now completed.

These kinds of deep demographic changes are likely to have momentous effects. For example, the Chinese government has been able to rely on extended family networks to a substantial extent to look after children, the elderly, or the disabled. It has been able to rely on family networks to spread information about available jobs, useful skills, and possible destinations. People’s family networks have in some cases been leveraged into political networks and even into mechanisms for social control. But Eberstadt and Verdery’s discussion suggests that China’s extended family networks are about to diminish sharply, and the functions they have served will be diminished as well.

Comparing EU-to-US Output Per Hour

The per capita GDP for the combined 27 countries of the European Union (EU-27) is about 72% of the US level. On the other hand, the average worker in EU countries puts in far fewer hours on the job than do American workers. For example, OECD data says that the average US worker put in 1,811 total hours in 2022, while, due to a combination of more holidays and more part-time workers, the average for a French worker was 1,511 hours and for a German worker just 1,341 hours. To put it another way, the average French worker works 7 1/2 fewer 40-hour weeks than an average American worker–almost two full months less. The average German worker works 11 3/4 fewer 40-hour weeks in a year than the average American worker–almost three full months less.
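
For readers who want to check the arithmetic, here is a minimal sketch using the OECD figures just cited; treating a 40-hour week as the yardstick is simply a convenience.

```python
# Convert the OECD annual-hours figures cited above into 40-hour-week equivalents.
hours_2022 = {"United States": 1811, "France": 1511, "Germany": 1341}

us_hours = hours_2022["United States"]
for country in ("France", "Germany"):
    gap_hours = us_hours - hours_2022[country]  # annual hours gap relative to the US
    gap_weeks = gap_hours / 40                  # expressed as 40-hour work weeks
    print(f"{country}: {gap_hours} fewer hours per year, or {gap_weeks:.2f} fewer 40-hour weeks")
```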

Thus, an obvious question is whether the per capita GDP for EU countries lags behind because EU workers are less productive per hour worked, or because they just work so many fewer hours. Zsolt Darvas offers an overview in “The European Union’s remarkable growth performance relative to the United States” (Bruegel blog, October 26, 2023).

Before doing the comparisons on a per-hour basis, we need to clear up a different issue: comparing the US and the EU economies requires converting between US dollars and the euro, plus a few other remaining European currencies. Exchange rates can move around a lot in a few years, but it would be peculiar to claim that, say, because the US dollar appreciated by one-third compared to the euro, the US economy was also now one-third bigger compared to the EU economy. As Darvas puts it:

In 2000, €1 was worth $0.92. By 2008, the euro’s exchange rate strengthened considerably, and €1 was worth $1.47. The EU’s GDP is mostly generated in euros, and thus it was worth many more dollars in 2008 than in 2000 because of the currency appreciation. But this was just a temporary rise in the value of the euro and not a reflection of skyrocketing economic growth in the EU. After 2008, the opposite happened. By 2022, €1 was worth $1.05, so compared to 2008, the euro’s significant depreciation relative to the dollar reduced the dollar value of EU GDP.

To avoid the complications of fluctuating exchange rates, Darvas instead uses “purchasing power parity” exchange rates, which are calculated by the International Comparison Program at the World Bank to measure the actual buying power of a currency in terms of purchased goods and services. For our purposes, the key point is that the PPP exchange rate doesn’t bounce around like the market exchange rate; it is similar to comparing the US and EU economies as if the market exchange rate had stayed at its 2000 and 2022 levels, without the big bounce in the middle.
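
A toy example may make the point concrete. In the sketch below, EU output in euros is held fixed and converted to dollars at the market exchange rates quoted above versus a constant PPP rate; the €10 trillion output level and the $1.10-per-euro PPP rate are purely illustrative assumptions, not actual data.

```python
# Illustration: the same euro-denominated output converted at market vs. PPP exchange rates.
eu_output_eur = 10.0  # trillions of euros, held fixed (illustrative assumption)

market_rates = {2000: 0.92, 2008: 1.47, 2022: 1.05}  # dollars per euro, from the Darvas quote
ppp_rate = 1.10                                      # dollars per euro, assumed constant

for year, rate in market_rates.items():
    print(f"{year}: ${eu_output_eur * rate:.1f} trillion at the market rate "
          f"vs. ${eu_output_eur * ppp_rate:.1f} trillion at PPP")

# The market-rate series rises by about 60 percent between 2000 and 2008 and then falls back,
# even though the underlying output never changed; the PPP-based series stays flat.
```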

So, using the PPP exchange rate, here’s per capita GDP over time, with the US level expressed as 100. On the left-hand panel, the red line shows China’s rise from 2% of US per capita GDP in 1980 to about 28% at present. The EU-27 line rises from 67% of the US level in 1995 to 72% at present. The breakdown on the right-hand side shows that this increase is mostly due to the countries of the “east EU” region, which reflects catch-up growth in Bulgaria, Czechia, Croatia, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia.

Now let’s do the comparison not in terms of GDP per person, but instead in terms of GDP per hour worked, and also GDP per worker. What’s interesting here is that for the EU as a whole, GDP per hour worked has risen from about 72% of US levels back in the early 2000s to about 82% of US levels (blue dashed line). For Germany, with its very low level of average hours worked, GDP per hour worked was roughly equal to the US level back in the mid-1990s, then dropped off, and has now caught up again.

To put it another way, for the EU as a whole, the lower per capita GDP–28% below the US level–is roughly two-thirds due to the fact that GDP per hour worked is below US levels, and one-third due to fewer hours worked. But for Germany (and for some other western and northern EU economies), the lower per capita GDP compared to the US level is entirely due to fewer hours worked.
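
The split comes from a simple accounting identity: GDP per person equals GDP per hour worked multiplied by hours worked per person. Here is a minimal sketch of the decomposition using the rounded ratios above; with these particular inputs the split comes out closer to 60/40 than exactly two-thirds/one-third, which is within the range of rounding and data vintage.

```python
import math

# Identity: GDP per person = (GDP per hour worked) x (hours worked per person).
# Ratios relative to the US (= 1.0), using the rounded figures in the text.
per_capita_ratio = 0.72  # EU GDP per person relative to the US
per_hour_ratio = 0.82    # EU GDP per hour worked relative to the US
hours_ratio = per_capita_ratio / per_hour_ratio  # implied hours worked per person

# Log decomposition of the per capita gap into its two components.
total_gap = math.log(per_capita_ratio)
share_from_productivity = math.log(per_hour_ratio) / total_gap
share_from_hours = math.log(hours_ratio) / total_gap

print(f"Implied EU hours worked per person (US = 1.0): {hours_ratio:.2f}")
print(f"Share of the gap from lower output per hour: {share_from_productivity:.0%}")
print(f"Share of the gap from fewer hours worked:    {share_from_hours:.0%}")
```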

Of course, comparisons like these are grist for conversation. These kinds of comparisons do suggest that it is possible to combine fewer annual hours worked with rising output per hour. For my American readers: would you personally be willing to take an annual income cut of 10% in exchange for an extra month of vacation each year? Does your answer change if everyone else also takes an income cut of 10% in exchange for an additional month of vacation? If a group of major American employers offered that combination of similar hourly pay but organized the firm around the expectation of substantially fewer hours worked, would those companies attract at least a sampling of top talent? What if the employers offered this option only to employees who had worked the longer hours and remained with the firm for, say, five or ten years?

The Limited Effects of Taxes on Sugar-Sweetened Beverages

Here’s the case for imposing a tax on sugar-sweetened beverages: 1) Obesity is a major public health problem, through its effects on diabetes, cardiovascular diseases, asthma, certain cancers, and mental health; 2) Consumption of sugar-sweetened beverages is an outsized contributor to obesity; 3) Taxing sugar-sweetened beverages will raise the cost that consumers pay, and thus diminish their consumption. This logic is sufficiently powerful that taxes on sugary drinks have been imposed, sometimes locally and sometimes at the national level, in 50 countries. The average American (average!) consumes about 200 calories per day in the form of sugar-sweetened beverages, among the highest of any country in the world.

So how is it going? Kristin Kiesel, Hairu Lang, and Richard J. Sexton discuss the evidence in “A New Wave of Sugar-Sweetened Beverage Taxes: Are They Meeting Policy Goals and Can We Do Better?” (Annual Review of Resource Economics, 2023, pp. 407-432). Here are a few of their findings:

1) The effect of taxes on sugar-sweetened beverages (SSBs) on calories consumed is often pretty small. They write:

Eating two extra fries, chips, gummy bears or a single teaspoon of ice cream on a given day cancels out the calorie effect of reduced purchases of SSBs due to taxes measured by Dickson et al. (2021) for the United Kingdom. SSBs have been identified as a major contributor to obesity by many, including the World Bank (2020), but it is unhealthy diets overall, a lack of exercise, a variety of environmental factors, and genetics that determine gaining and retaining excess weight (NICHD 2021).

2) Details of the tax matter considerably. For example, a tax imposed at the city level means that sellers in the city will be aware that, when selling sugar-sweetened beverages, they are competing against untaxed sellers of such beverages outside the city limits. Sellers facing only a local tax are therefore less likely to pass a large share of it on to consumers, so national taxes will tend to have larger effects than local ones. Some taxes exclude “fruit drinks,” even though they may have added sugar, while others tax diet soda. Consumption of some types of sugar-sweetened beverages, like sodas, seems more responsive to price increases, while consumption of others, like energy drinks, is less responsive.

3) If drinking sugared beverages is in part a self-control problem, there are lots of alternative sources of calories that can readily replace sugary drinks, from candy bars to fast food.

4) Unsurprisingly, the revenues from taxes on sugar-sweetened beverages are not especially large compared with other tax sources. The authors write:

The estimate that $133.9 million in tax revenue is collected annually across the seven US cities with local SSB taxes amounts to about $33 per capita within the taxing jurisdictions (Krieger et al. 2021b). A tax implemented nationally that generated similar per capita revenue would amount to 0.32% of the US total tax revenue. Thus, revenues generated from current SSB taxes are rather trivial as a share of revenues, and beneficial purposes to which these funds are devoted could be supported from a modest redirection of funds from more broad-based taxes.

5) The taxes on sugar-sweetened beverages are probably regressive: that is, they cost a greater share of income for the poor than the rich. Indeed, such taxes tend to be less favored by the poor than the rich.

It is well documented that it is easier to be in favor of policy measures that mainly affect others (e.g., Diepeveen et al. 2013) and that at-risk groups whose behaviors are targeted by SSB taxes remain strongly opposed to them (Hagmann et al. 2018). Lang (2022) showed that tax pass-through for local SSB taxes was higher and demand was more inelastic in low-income and more racially diverse neighborhoods than in wealthier and predominantly white neighborhoods. These outcomes exacerbate the disproportionate burden on low-income consumers of raising revenue via SSB taxes. Even when modeled to be socially optimal under consideration of heterogeneous and time-inconsistent preferences (e.g., Allcott et al. 2019a,Dubois et al. 2020), SSB taxes remain mildly regressive at best.

This study isn’t the final word. As the authors are careful to point out, some studies in this literature suggest more optimism about carefully designed taxes on sugar-sweetened beverages as a policy tool. Those interested in more positive estimates might begin with Hunt Allcott, Benjamin B. Lockwood, and Dmitry Taubinsky, “Should We Tax Sugar-Sweetened Beverages? An Overview of Theory and Evidence,” in the Summer 2019 issue of the Journal of Economic Perspectives, or with the 2020 World Bank study, “Taxes on Sugar-Sweetened Beverages: International Evidence and Experiences.”

But I suspect that even those who are more optimistic about the virtues of taxes on sugar-sweetened beverages would agree that they are best viewed as part of a broader effort to reduce obesity, not as a substitute for a broader effort. Kiesel, Lang, and Sexton conclude in this way:

Indeed, it will take the knowledge and expertise of public health officials and scholars, economists, psychologists, and those most affected by health inequities and SSB taxes to carefully design multifaceted policies that alter our food environments and nudge both producers and consumers toward improved behavioral responses, health outcomes, and greater social welfare. Carefully designed taxes on added sugars and unhealthy foods implemented countrywide could be part of combined policies aimed at reducing the obesity epidemic and related health harms. Given their regressivity, limited impact on consumption of SSBs, and failure to incentivize product reformulations, we find little basis to support further implementation of local SSB taxes.


Value of a Statistical Life (VSL): Time for an Update?

Imagine a government spending program or a health or safety program, where one of the benefits is that over the population as a whole, it reduces some risk that people face and thus saves lives. Is the spending or regulation a good idea? Answering this question requires putting a monetary value on the benefits of the lives that are expected to be saved. For some years now, the Environmental Protection Agency has used a value of $7.4 million per life saved, measured in 2006 dollars, and then updated over time based on inflation and rising incomes since 2006.
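
To see how such an update works mechanically, here is a stylized sketch. It is not the EPA’s exact procedure, and every input other than the $7.4 million base value is an assumption chosen purely for illustration.

```python
# Stylized update of a base-year VSL for inflation and real income growth.
vsl_2006 = 7.4e6          # EPA base value in 2006 dollars (from the text)
price_level_ratio = 1.45  # assumed cumulative price inflation since 2006 (illustrative)
real_income_ratio = 1.25  # assumed growth in real income per person since 2006 (illustrative)
income_elasticity = 0.4   # assumed elasticity of the VSL with respect to income (illustrative)

vsl_current = vsl_2006 * price_level_ratio * real_income_ratio ** income_elasticity
print(f"Updated VSL: ${vsl_current / 1e6:.1f} million")
```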

Among non-economists, there is often a (perhaps natural) inclination to be repelled by the idea of putting a monetary value on lives saved. But as economists point out, both governments and people make decisions all the time that involve tradeoffs between costs in money or time and risks of death. Governments make decisions about what safety standards to create for cars and consumer products, or for workplace safety. The level at which speed limits are set has implications for costs in terms of time on the road and for the risk of lives lost. Some people take jobs with a higher mortality risk but also higher pay, or buy houses near known pollution sites at a lower price, or spend extra for cars with additional safety equipment.

Government regulators have to estimate the “Value of a Statistical Life,” or VSL. The word “statistical” is meant to convey that when the government spends or regulates in an effort to save lives, it typically does not know whose life will be saved: for example, it does not know which traffic accidents will turn out not to be fatal because seat belt use is required. Indeed, the “Value of a Saved Life” would be more accurately described as the “Value of Reducing the Risk of a Lost Life.” The number turns out to be central in carrying out benefit-cost analyses: as an example, one study of the benefits and costs of the 1990 Clean Air Act Amendments found that 80% of the benefits came from the monetary value placed on lower mortality risks leading to lives saved.

But when you look at how the value of a statistical life is estimated by the Environmental Protection Agency, you find that it relies on a group of 22 studies done between 1974 and 1991. Maureen Cropper, Emily Joiner, and Alan Krupnick argue the case for an update, based on research in the three decades since then, in “Revisiting the Environmental Protection Agency’s Value of Statistical Life” (Resources for the Future, Working Paper 23-30, July 2023). For a short, readable overview of the paper and the underlying issues, Joiner offers “Rethinking the Value of a Statistical Life” (RFF blog post, September 21, 2023). A quick overview of this research provides a sense of what kinds of studies are done and what types of new evidence have emerged.

But before describing the research, it’s perhaps useful to note that Cropper, Joiner, and Krupnick don’t try to provide a bottom line: that is, they don’t try to estimate whether the value of a statistical life would be higher or lower if the last three decades of research are included. This is annoying, but probably also canny. After all, you should be able to agree or disagree with the idea that it would be useful to update the VSL estimate without knowing in advance what the answer will be.

For example, of the 22 pre-1991 studies, 17 are “hedonic wage studies.” Basically, these studies compare the wages of workers in different jobs. They then include a bunch of other variables: age, education, marital status, race, years in the labor force, industry, occupation, and the like. They also include a variable for risk of death on the job. Thus, these studies seek to answer the question: “After adjusting for other variables that seem likely to affect wages, do those in riskier jobs get paid more than similar workers in less-risky jobs?”
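
The arithmetic for turning such a regression into a VSL is straightforward: if workers accept, say, an extra $700 per year in exchange for an extra 1-in-10,000 annual risk of a fatal injury, the implied VSL is $700 × 10,000 = $7 million. The sketch below runs that kind of regression on made-up data, so every number in it is illustrative rather than drawn from any of the 22 studies.

```python
# A stripped-down hedonic wage regression on simulated data. The coefficient on job
# fatality risk is the object of interest: the wage premium per unit of risk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
education = rng.normal(13, 2, n)   # years of schooling
experience = rng.normal(15, 8, n)  # years in the labor force
risk = rng.uniform(0, 5, n)        # annual fatal injuries per 10,000 workers

# Simulated annual wages: by construction, each 1-in-10,000 unit of risk carries a $700 premium.
wage = 20_000 + 2_500 * education + 300 * experience + 700 * risk + rng.normal(0, 5_000, n)

X = sm.add_constant(np.column_stack([education, experience, risk]))
result = sm.OLS(wage, X).fit()

risk_premium = result.params[3]      # estimated wage premium per 1-in-10,000 annual risk
implied_vsl = risk_premium * 10_000  # scale up to a whole statistical life
print(f"Estimated risk premium: ${risk_premium:,.0f}; implied VSL: ${implied_vsl / 1e6:.1f} million")
```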

There are a variety of detailed questions one might ask about these studies. They don’t all adjust for the same variables: for example, many of the earlier studies don’t adjust for industry or occupation. Some studies also include measures of health risks other than mortality; others do not. In addition, people are not randomly assigned to jobs with different risk levels, and those who choose such jobs are not likely to be a representative sample. As a result, when you adjust for industry and occupation, you are in some ways adjusting away some of the choices made about risk.

A broader issue is that more than 90% of job-related fatalities occur for men, with a disproportionate share happening in a narrow range of jobs. Thus, it would be nice to base the value of a statistical life on other tradeoffs that people make about costs and risks.

For example, car-buyers often face a choice between paying more for a safer car, or not. The additional amount that people are willing to pay for safety can be estimated, with some effort, from this data. As one example, airbags were phased in for US cars in 1996 and 1997. Thus, one can study how much extra people were willing to pay for a car with airbags vs a very similar car from 1994 or 1995 without airbags. One can also look at different speed limits imposed on highways across the United States, and measure the implied tradeoff in terms of hours saved by travelling faster vs. greater risk of lives lost. In another example, studies have looked at people’s willingness to buy houses closer to pollution or to a Superfund site as a measure of willingness to trade off higher risk for a lower price.
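
The underlying calculation in these revealed-preference settings is the same as in the wage studies: willingness to pay divided by the risk reduction purchased. The numbers in the sketch below are invented for illustration, not taken from the airbag literature.

```python
# Hypothetical numbers only: the extra amount paid for a safety feature divided by the
# reduction in annual fatality risk it buys gives the implied value of a statistical life.
extra_price = 400            # assumed premium paid for the safety feature, in dollars
risk_reduction = 1 / 20_000  # assumed reduction in annual fatality risk

implied_vsl = extra_price / risk_reduction
print(f"Implied VSL: ${implied_vsl / 1e6:.0f} million")  # -> $8 million
```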

I’m not claiming these kinds of studies are easy to do in a persuasive way! But none of these kinds of studies about how people value risk in different settings are included in the 22 pre-1991 studies that are the basis for estimating the value of a statistical life.

The other five of the original 22 pre-1991 studies are “stated preference” studies, which are basically elaborate surveys in which researchers ask a series of questions to get people to place a value on different risks. For example, one can survey people on how much they would be willing to pay to see their risk of death from specific causes like heart disease, stroke, cancer, or car accidents reduced by a certain amount by the time they reach age 70. The responses of different people at different ages can then be woven together to build a measure of willingness to pay for risk reduction, and of how it evolves over a lifetime.

As you might imagine, there is considerable controversy over the narrow question of how to construct these surveys, and also over the broader question of how much reliance to place on the choices people actually make in job, car and house purchases, and the like, vs. what people say in surveys when they don’t actually have money on the line. Here, I’ll just say that the methods for doing this kind of work have evolved since 1991, but none of the later studies are included in the current estimates of the value of a statistical life.

For me, one of the biggest benefits of the value of a statistical life concept is that it allows comparisons across different regulations and spending programs. For example, say that one regulation reduces risk at a cost of $100,000 per life saved, while another regulation reduces risk at a cost of $100 million per life saved. With this kind of information, there is a strong case for strengthening and enforcing the first rule, and perhaps just dropping the second rule. With such comparisons (which are not uncommon in the underlying literature), it becomes possible to save more lives at lower overall cost.
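
A quick worked example, using the hypothetical cost figures above, shows why the comparison matters: with a fixed budget, shifting spending toward the cheaper form of risk reduction saves far more lives.

```python
# Hypothetical comparison of two risk-reducing rules, using the per-life costs named above.
budget = 100_000_000  # a fixed, hypothetical budget of $100 million
cost_per_life_saved = {"Rule A": 100_000, "Rule B": 100_000_000}

for rule, cost in cost_per_life_saved.items():
    lives_saved = budget / cost
    print(f"{rule}: ${cost:,} per life saved -> about {lives_saved:,.0f} lives saved with the budget")

# Rule A saves about 1,000 lives with the $100 million; Rule B saves about 1.
```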

For those who would like to know more about how the idea of a value of a statistical life originated in the military planning doctrine of the 1950s and the work of Thomas Schelling, as well as some additional intuition about how such calculations are done, useful starting points are “Value of a Statistical Life: Where Does It Come From?” (March 27, 2020) and “The Origins of the Value of Statistical Life Concept” (November 14, 2014).

Construction Jobs: A (Partial) Substitute for Manufacturing Jobs?

For many manufactured goods, the output can be produced in many different places–including other countries–and then shipped to where it is needed. Moreover, manufacturing jobs in industries around the world are being challenged by the rise of automation: in particular, high-tech industries like, say, semiconductor manufacturing, are extremely capital-intensive, with relatively few jobs.

In comparison, for construction, the output typically needs to be produced in the particular place where it is used. You can’t outsource production of a highway or an apartment building. Sure, construction involves heavy equipment, but it also involves a substantial number of jobs. Consider the trends in manufacturing and construction jobs over time. In the last four decades, in particular, the trend in manufacturing jobs is mostly down and the trend in construction jobs is mostly up.

There is a degree of substitutability between certain manufacturing and construction jobs. As an example, look at the patterns in the early 2000s, when there was both a boom in construction jobs and a drop in manufacturing jobs. For more on this period, see the discussion by Kerwin Kofi Charles, Erik Hurst, and Matthew J. Notowidigdo in the Spring 2016 issue of the Journal of Economic Perspectives (where I work as Managing Editor), “The Masking of the Decline in Manufacturing Employment by the Housing Bubble.”

As I’ve written before, it seems important to me that the US economy have a strong and healthy manufacturing sector, in part because innovative technologies are often linked to activities that are being carried out nearby. But when it comes to jobs for medium-skilled workers, manufacturing jobs are unlikely to come back in force. Could jobs in a substantially expanded construction industry fill some of this gap? Consider some examples.

Housing prices are high in many urban areas, and an increase in supply would help to keep those prices from continually rising further. Also, many downtown areas are experiencing a substantial drop in the need for office space, as work-from-home patterns (at least a few days each week) take root. The transition of existing office buildings to alternative uses, whether it’s housing, retail, restaurants, entertainment, or something else, is an enormous construction task.

Construction is involved in the ongoing energy agenda, both in building solar and wind generation and, perhaps even more, in building the transmission lines to take the electricity to where it is needed. I’ve heard the amount of additional transmission capacity needed analogized to the effort it took to build the interstate highway system. Fires in California, Hawaii, and elsewhere suggest the need for construction to make power lines safer. At a bigger level, it seems important to think seriously about what is needed to “harden” the electrical grid against the risk of electromagnetic pulses, whether generated naturally by solar flares or by weapons. I’d also add that America’s pipelines for carrying oil and natural gas are a far safer method of transport than using trucks and trains to carry these materials, and they could use updating as well.

As use of internet capacity continues to grow rapidly, construction of data centers and the wired and wireless communication lines that serve them continues to grow as well.

Many areas are living with infrastructure that was built decades ago: old city water systems, drainage and runoff tunnels, old dams, old sewage treatment plants, and old public lighting. There are old bridges and roads, old seaports and airports. In a number of agricultural areas, construction projects to filter runoff from fields are needed to protect streams, lakes, and aquifers. Many K-12 schools could use a serious upgrade to their physical plant, along with some community colleges and public universities.

Readers can probably add to this list. I’m not suggesting that this list requires yet another large federal spending program. In many cases, the spending should be directed by private firms and state or local governments. Also, I recognize the problems with prioritizing and managing large construction projects. But when I look out across the US economy, I don’t see an enormous shortage of manufactured goods. I do see a need for a sustained push in construction projects that, if we were ever to reform our political process of permits and reviews and get started on these projects in earnest, would still take several decades to complete. And as an engine for growth in medium-skill US-based jobs, construction seems far more promising to me than manufacturing.

Globalizing to Protect National Security, the Poor, and the Environment

The winds of public opinion and politics have been blowing toward de-globalization for a few years now, not just under the Biden and Trump administrations in the US, but in countries around the world. The complaints about globalization come in (at least) three categories: it’s bad for national security by strengthening geopolitical rivals and creating risky and overstretched supply chains; it’s bad for workers and the poor; and it’s bad for the environment. The World Trade Organization takes on all three arguments in its World Trade Report 2023: Reglobalization for a Secure, Inclusive, and Sustainable Future.

The concerns that trade might reduce national security seem to be on the rise. The WTO report offers a chart of the number of quantitative trade restrictions around the world that refer to Article XXI, the security exception to the GATT’s trade rules. The number has tripled in the last decade.

Concerns about trade and national security come in several parts. One is concern over the reliability of supply chains; that is, trade makes the US economy dependent on long supply chains that can be disrupted. But the obvious lesson here would seem to be to have a well-diversified set of supply chains from many sources, not to have fewer global suppliers.

More broadly, it’s useful to remember that a fundamental purpose of the world trading system has been the idea that nations which trade are less likely to be nations which fight: to put it another way, when two nations that have been trading fight, they face both the costs of war and the costs of disrupted trade. The WTO report notes:

Security and geopolitical concerns have also always been an important aspect of the multilateral trading system. The founding of the WTO’s predecessor, the General Agreement on Tariffs and Trade (GATT), was in part a response to the disastrous effects of two world wars and the first era of deglobalization in which bloc-based trade had started to dominate multilateral cooperation. As one pillar of the international system established in the aftermath of the Second World War, the GATT’s aim was to promote cooperation and address the underlying causes of the war in combination with the United Nations, the World Bank and the International Monetary Fund (IMF) …

I certainly make no claim that these global institutions are perfect, or anywhere near it. But it’s unwise to assume that the alternatives would be more peaceful or secure. Is a Chinese economy that depends on a substantial export sector more or less likely to invade Taiwan, knowing the economic pushback that would result?

The concerns that trade contributes to poverty, inequality, or economic disruption look different depending on your viewpoint. If you are living in a less developed country, the ability to export goods and services to the high-income countries of the world, and to benefit from technology transfer from those countries, is a core component of your hopes to reduce poverty and raise the standard of living (which includes, remember, education, health, and a cleaner environment) over time. I am not aware of any countries that have reached a high standard of living while separated from international trade. From a global perspective, it seems clear that trade has been a force for reducing poverty, not deepening it.

From the perspective of any individual country, like the United States, the effects of trade are a mixture of overall gains, but at the expense of targeted disruption. But any economy is being continually disrupted by a number of factors: new technologies for consumer goods, new technologies for production of goods, shifts in consumer demand, better-managed and worse-managed companies, and international trade. These disruptions are part of the price paid for a rising standard of living over time: growth doesn’t happen, nor do people improve their education, health care, and the environment, without disruptive change. The US economy, with its giant internal market, is actually not overly exposed to international trade: exports and imports are each about 12-14% of US GDP in a given year. For the US economy, trade is one disruptor among many, and almost certainly not the biggest one. Also, if global trade were notably reduced, that would be disruptive as well, for industries from farming to pharmaceuticals to machinery, and many more. Many other high-income countries are more susceptible to being disrupted by trade than the United States (that is, imports and exports are a higher share of their GDP), but they have more policies in place to reduce disruption and reduce inequality.

The concern that trade contributes to environmental decline includes only half the story: on one side, yes, trade is part of production, and increases in the scale of production will tend to increase pollution. But the other side is that trade also creates efficiency gains (after all, that is how trade provides mutual benefits) and shifts the mix of products (for example, potentially toward trade in technologies and practices that can reduce pollution). The WTO writes:

When measuring the impact of trade on the environment, it is important not only to account for the amounts of pollution associated with trade, but also to consider a situation without international trade. In such a hypothetical case, domestic production would have to rise to meet consumer demands while maintaining the same standards of living. Consequently, the reduced pollution from less trade would be partly offset by increased pollution from domestic production. Moreover, without trade, economies lacking certain resources or production capacity would not be able to consume many products, while some producing economies would not be able to expand investments due to the limited scale of their domestic market. Some studies suggest that international trade increases carbon dioxide (CO2) emissions by 5 per cent, compared with a scenario without trade. Moreover, the benefits of international trade exceed its environmental costs from CO2 emissions by two orders of magnitude (Shapiro, 2016). Similar findings have been observed for sulphur dioxide (SO2) emissions, where trade contributes to a 3-10 per cent increase in emissions compared to a scenario without trade (Grether, Mathys and de Melo, 2009).

In addition, if you believe that international agreements will need to play a useful role in addressing global environmental issues, then it’s important to remember that trade agreements can play a role in facilitating environmental agreements. Here’s the WTO:

Greater international cooperation is key if trade is to play an even more important role in environmental sustainability. The benefits of re-globalization include creating a more integrated global environmental governance system. Importantly, when combined with appropriate environmental policies, trade can significantly advance the green transition by unlocking green comparative advantage. This would enhance the ability of developing economies to tap into new trading opportunities arising from the green transition.

A reasonable reaction to the World Trade Organization discussion, it seems to me, is to say something like: “It’s complicated. The appropriate goal should be to favor the kinds of trade that improve national security, reduce poverty and inequality, and improve the environment, and to discourage trade that does the opposite.” Fair enough! I strongly suspect that if the report persuades you that trade can potentially play a meaningful positive role on these three issues, rather than an inevitably negative one, the WTO would view it as a success.

Negative Reflections on the Positive Case for the New Proposed Merger Guidelines

The government antitrust enforcers have two institutional homes: the Antitrust Division of the US Department of Justice and the Federal Trade Commission. The Biden administration team in both places has shown a strong desire to overturn antitrust law as it has evolved over the last 50 years or so, and to return to an earlier tradition. One manifestation of this change is a proposed revision of what are called the Merger Guidelines. The guidelines started back in 1968, and have been updated a number of times since then, including in 1982, 1984, 1992, 1997, 2010, and 2020.

I’ve commented on this blog a few times about the Biden antitrust team, the new merger guidelines, and the historical changes in merger law over time. For those who want to dig deeper, some of my more recent posts include:

The ProMarket center at the University of Chicago has been collecting readable and reasonably short essays on the proposed new Merger Guidelines from a dozen experts in the field, with a mixture of pro, con, and in-between reactions. If you want to spend some time getting an overview of the arguments on all sides, the collection is a good place to start. (I lean to the “con” side.)

Here, I won’t try to review essays by a dozen people, some of them multi-part. Instead, I’ll focus on an essay by an advocate of the proposed new guidelines, Zephyr Teachout, whose essay is titled “The Proposed Merger Guidelines Represent a Reassertion of Law over Ideology” (August 16, 2023). As the title suggests, the case for the merger guidelines is that the “ideology” of economics has become too central to the administration of antitrust law, and that we need to return to the “law.” Here’s a representative quotation from the Teachout essay.

First, these Guidelines represent a return to the text of the Clayton Act. Section 7 of the Clayton Act prohibits mergers and acquisitions where “in any line of commerce or in any activity affecting commerce in any section of the country, the effect of such acquisition may be substantially to lessen competition, or to tend to create monopoly.” As the new Guidelines point out, the language is explicitly prophylactic, and preventative. Section 7 is a malum prohibitum instead of a malum in se law. The Clayton Act instructs the Agencies to stop that which might substantially lessen competition, not merely that which definitively will substantially lessen competition (let alone that which definitely will increase prices). …

Unlike the 1982 Guidelines, which subordinated the Clayton Act’s goals and case law to the ideology of efficiency and consumer welfare, the draft Guidelines are tethered to case law. … [T]he Guidelines give permission to use common sense and not require every bit of logic in a merger decision be derived from high-priced economic experts. For instance, serial acquisitions can be a threat to competition, and mergers between competitors can be a threat to competition. Again, these common sense assertions should not need expression, but the legal part of the antitrust legal community has become so intimidated by the control economists have exercised over the last 40 years—an intimidation that started with Baxter’s aggressive use of economic experts to supersede lawyers’ decisions in the 1980s. Therefore, the permission to be lawyers, use logic, and treat antitrust law like other laws, where the project is primarily legal, is important. The Guidelines reject the extra-legal policy of giving legal decisions to another field of experts. 

I like the bluntness of Teachout’s discussion. What she means by “ideology” are basic ideas of economics like lower prices, consumer welfare, and efficiency. She is explicitly arguing that antitrust enforcement agencies should not need to concern themselves with whether a merger will lead to higher or lower prices, or improvements in consumer welfare, or improved efficiency. Instead, antitrust enforcers should be able to block any merger or acquisition that tends to limit “competition,” whether the effects happen in the present or are only a future risk, and without consideration of whether the merger might benefit consumers or intensify competition, simply because courts should hold such actions to be presumptively bad.

Instead of looking at arguments of “high-priced economic experts,” Teachout argues, such decisions should be governed by the “common sense assertions [that] should not need expression” of lawyers. If faced with a choice between high-priced economic experts and the common-sense assertions of lawyers, I would beg Divine Providence on my knees for a third choice. But at least the economic experts, on both sides, offer reasons for their conclusions. In a legal context, calling something a common-sense assertion is equivalent to saying, “I don’t want or need to explain my reasons.”

It is not clear what a court is supposed to do when the common-sense assertions of the lawyers for the antitrust enforcers who want to block a merger conflict with the common-sense assertions of the lawyers for the companies that want to carry out a merger. Once you have ruled out economic arguments about the likelihood that consumers will benefit from a merger, or not, it’s not clear what’s left.

There is an unintentionally hilarious moment later in Teachout’s essay, when she argues: “The draft Guidelines indicate the Agencies will use lower market concentration thresholds to trigger review. This change is long overdue, based on the evidence; see the great work by economist John Kwoka.” I happen to be an admirer of Kwoka’s work. But apparently Teachout is perfectly willing to rely on economic evidence about what concentration thresholds are appropriate, as long as it fits her personal common sense, while ignoring any high-priced economic experts with a different view.

It’s perhaps useful to note here that the Merger Guidelines do not make the law. Merger law is determined by how courts interpret statutes and earlier cases. The virtue of earlier Merger Guidelines was that they tried to offer a fair-minded overview of the existing law. Indeed, in most modern antitrust cases, both sides could start by agreeing that the Merger Guidelines were a fair statement of the existing law–and then start arguing. The proposed new version of these guidelines is instead an argumentative case for what the writers would prefer the law to be: the proposed guidelines will be challenged in court, and if the courts stick to existing precedents, the new guidelines will lose. Thus, Teachout and other supporters argue that the proposed new Merger Guidelines are the true law, from before it was corrupted.

There’s a certain amount of myth-making going on in this case for the new Merger Guidelines. As Teachout tells the story, everything was basically fine in antitrust law until the Reagan-era antitrust authorities followed a limited Chicago-school “ideology” and started emphasizing lower prices, consumer welfare, and efficiency. Thus, the proposed new guidelines quote various antitrust cases and claim to be the true inheritors of the law. But as Teachout surely knows, this intellectual history is oversimplified to the point of caricature.

Concerns about how best to interpret antitrust law, and how to combine legal and economic insights, go back for decades–and have not been particularly partisan. For example, there was an American Bar Association report on these issues back in 1956. Legal critics of the incoherencies of antitrust law as it existed back in the post-World War II decades include academics from Harvard, Yale, Columbia, Michigan, and many other places, as well as the University of Chicago. The most well-known legal treatise on antitrust, continuing through multiple editions up to the present day, started with Philip Areeda of Harvard, who was later joined as a co-author by Herbert Hovenkamp of the University of Pennsylvania. Moreover, in the last half-century, plenty of Democratic-identified judges, lawyers, and economists have been overall just fine with the emerging and evolving synthesis of law and economics–even though they have disagreements about specific cases.

The notion that a few rebel economists and a Reagan-era antitrust administrator hijacked the mainstream consensus–and everyone else has just gone along with that hijacking for 40 years–is incorrect. Indeed, the new proposed Merger Guidelines do cite lots of cases, but they are old cases that have not reflected actual case law for decades now.

There is also an implicit assertion that merger law as it was enforced back in the 1950s and 1960s was especially tough, and since then has become too lenient. This claim would have struck writers of the 1950s and 1960s as ridiculous. As I wrote in an earlier post:

If one looks back to the Fortune 500 list of largest companies in, say, 1960, you find the US auto industry dominated by General Motors (#1 overall on the list), Ford (#3) and Chrysler (#9). The US steel industry is dominated by US Steel (#5) and Bethlehem Steel (#13). The US oil industry was dominated by Exxon (#2), Mobil (#6), Gulf Oil (#7), and Texaco (#8). Government-regulated AT&T (#11) provided nationwide monopoly phone service. General Electric (#4) dominated in a swath of industries including appliances, engines, and turbines, while DuPont (#12) dominated in chemicals. Such examples could easily be multiplied, as some social critics pointed out. As one prominent example, John Kenneth Galbraith published a best-seller called The New Industrial State in 1967, which basically argued that the United States was no longer a free market economy, but instead had become dominated by large corporations who used advertising to determine consumer demand.

The idea that antitrust law was aggressively going after big companies in the 1950s and 1960s doesn’t match the facts. Indeed, many of the complaints about antitrust decisions at the time arose when the antitrust regulators went after mergers of small local grocery chains or small companies that made shoes. The legal reasoning was precisely what Teachout advocates above: maybe these mergers themselves wouldn’t lead to less competition, but additional future mergers might do so. If two companies in the 1960s wanted to merge because they thought it would bring greater choices and lower prices for consumers, the courts would typically rule that this outcome was bad for “competition”–because if some firms offered consumers lower prices and more choices, other firms would find it harder to compete. Conversely, in the mid-1980s, under the new and supposedly more lenient antitrust rules, the national phone monopoly of AT&T was broken up.

There are plenty of antitrust issues of concern in the modern economy. As I have pointed out in earlier posts, many of the current issues in antitrust are about digital companies: Amazon, Google, Facebook, Netflix, Apple, and others. Other topics are about large retailers like WalMart, Target, and Costco. Still other topics are about mergers in local areas: for example, if a small metro area has only two hospitals, and they propose a merger, how will that affect both prices to consumers and wages for health care workers in that area? A prominent case a few years ago found a group of Silicon Valley companies guilty of anti-competitive behavior for agreeing not to poach each other’s workers. Another set of topics involves how to make sure that when drug patents expire, generic drugs have a fair opportunity to compete. Another topic is about tech companies that pile up a “thicket” of patents, with new patents continually replacing those that expire, as a way of holding off new competitors. Other topics involve expanding the number of competitors that consumers can choose among for home internet service or a smartphone.

Advocates of the proposed new guidelines like to phrase their position as being for increased merger enforcement, and thus to imply that opposing the proposed guidelines is just being against greater enforcement. But as other comments in the ProMarket symposium illustrate, many (economist) authors view themselves as very much in favor of tougher antitrust enforcement in many of the situations I just described. They just think that the legal aspects of antitrust decisions should be interpreted through a lens of economic reasoning, not the “common sense assertions” of lawyers.

The Case for Housing First

The homeless population can be loosely divided into three groups: the transient homeless who use a shelter once; the episodic homeless who return to the shelter repeatedly, but for brief periods; and the chronic homeless, who rely on homeless shelters for long periods. The chronic homeless are also much more likely to have issues with substance abuse, disabilities, and health issues.

If one looks at all the people who are homeless during a year, the chronic homeless are a fairly small share–maybe 10% or so, depending on the details of how the group is defined. But this group also takes up half or more of all the homeless shelter days. When not at homeless shelters, or outside on the street, they may instead end up in hospitals or in some cases in jails. The chronic homeless may be the most visible, and most troubling, part of the homeless population.

There are two broad models for how to address the chronic homeless, which go under the headings of “treatment first” and “housing first.” Joseph R. Downes makes the case for the second in “Housing First: A Review of the Evidence” (Evidence Matters: US Department of Housing and Urban Development, Spring/Summer 2023, pp. 11-19).

As Downes describes it, these two paradigms both emerged in the 1990s. With treatment first, the process is a “staircase” model: as the person shows a commitment to sobriety and treatment, they can move from emergency to temporary and perhaps to permanent housing. With housing first, an early program required only that participants pay 30% of income for housing (which in practice often meant 30% of the cash benefits they were receiving from Supplemental Security Income) and that they meet with a staffer twice a month. The George W. Bush administration endorsed a housing first approach, and it has guided federal homelessness programs since then.

My working assumption is that readers of this blog may have strong visceral or philosophical reactions to treatment first and housing first. But in addition, readers would like to know about the studies of what actually works. The gold standard for methodology in this area is the “randomized controlled trial,” in which people are randomly assigned to either a treatment first or a housing first approach. Downes writes:

To assess the effectiveness of Housing First and the role of consumer choice, a randomized controlled trial (RCT) was performed on the Pathways to Housing program in 2004. Participants were assigned randomly to either a Housing First experimental group or a local Continuum of Care control group to receive treatment as usual (TAU). Eligibility for this study reflected key characteristics of the chronically homeless population: participants must have spent half of the previous month living on the street or in public places, exhibited a history of homelessness over the previous 6 months, and been diagnosed with an Axis I mental health disorder. The results indicate that Housing First participants experienced significantly faster decreases in homeless status and increases in stably housed status than the TAU group did, with no significant differences in either drug or alcohol use. Overall, the Housing First experimental group demonstrated a housing retention rate of approximately 80 percent, roughly 50 percentage points above that of TAU, which, the authors noted, “presents a profound challenge to clinical assumptions held by many Continuum of Care supportive housing providers who regard the chronically homeless as ‘not housing ready.’”

Four major RCTs have been performed to compare the effectiveness of Housing First programs with treatment first programs. Three of these RCTs were conducted in the United States, and the other was conducted in Canada. In a review of these RCTs, Tsai notes that two RCTs conclusively found that Housing First led to quicker exits from homelessness and greater housing stability than did TAU. In the Canadian trial, an RCT in five of Canada’s largest cities known as At Home/Chez Soi, analysis revealed that, in findings similar to those of the American RCTs, “Housing First participants spent 73% of their time in stable housing compared with 32% of those who received treatment as usual.” Baxter et al. also performed a systematic literature review and metanalysis of these four RCTs, finding that Housing First resulted in significant improvements in housing stability. This study also found that no clear differences existed between Housing First and TAU for mental health, quality of life, and substance use outcomes …

In short, the findings seem to be that using permanent housing as a carrot to encourage the chronic homeless to go through treatment doesn’t work well. Too often, the result is neither effective treatment nor permanent housing. The housing first approach at least does better on providing housing, although by itself it doesn’t seem to improve the underlying issues that drive the problems of the chronic homeless.

However, the housing first approach may offer some additional benefits, although the evidence on these themes is not always consistent across studies. First, one of the randomized studies found:

[P]articipants in Housing First reported a significant reduction in costly emergency room visits and hospitalizations compared with TAU — 24 percent and 29 percent, respectively. Based on these findings, Basu et al. evaluated the relative costs of Housing First versus treatment first programs, assessing differences in hospital days, emergency room visits, outpatient visits, days in residential substance abuse programs, nursing home stays, legal services (including days in incarceration), days in shelter housing, and case management between the two programmatic models. Basu et al. found that participants in Housing First programs had decreased costs because they spent fewer days in hospitals, emergency rooms, residential substance abuse programs, nursing homes, and prisons or jail. On the other hand, Housing First participants incurred higher costs from higher outpatient visits per year and a greater number of days in stable housing than TAU participants. Ultimately, a comprehensive cost analysis from this RCT found that Housing First saved $6,307 annually per homeless adult with a chronic medical condition, with the highest cost savings occurring for chronically homeless individuals, at $9,809 per year.

Other randomized studies do not back up these cost savings, which often means that something in the details of how the programs are run, or how the costs are measured, doesn’t match up across the studies.

The other gain from housing first involves family dynamics, like issues of spousal abuse and child welfare. Downes writes:

Recently, a team from Michigan State University, with support from the Washington State Coalition Against Domestic Violence, the Office of the Assistant Secretary for Planning and Evaluation in HHS, and the Gates Foundation completed a study to assess the effects of Housing First programmatic assistance on domestic violence survivors experiencing homelessness. For this program, adherence to the Domestic Violence Housing First (DVHF) model included mobile, housing-focused advocacy; flexible financial assistance for housing and other needs; and community engagement. The study found that adherence to this survivor-centered, low-barrier service model yielded a statistically significant difference between DVHF recipients and those receiving TAU, with DVHF recipients experiencing improved outcomes in the categories of housing instability, physical abuse, emotional abuse, stalking, economic abuse, use of the children as an abuse tactic, depression, anxiety, posttraumatic stress disorder, and children’s prosocial behaviors.

I wouldn’t want to downplay the practical and logistical difficulties of providing housing to the chronic homeless, and then working on their other life issues afterward. But in a situation of imperfect alternatives, the housing first approach seems the better option.

Evidence on Declining Intergenerational Mobility in the United States

If the US economy had considerable intergenerational mobility–that is, if the children growing up in lower-income households had a reasonably good chance of ending up as adults in higher-income households, and conversely the children growing up in higher-income households had a reasonably good chance of ending up as adults in lower-income households–then I would be less concerned about the extent of income and wealth inequality. Brian Stuart offers a readable overview of the current evidence in “Inequality Research Review: Intergenerational Economic Mobility” (Economic Insights: Federal Reserve Bank of Philadelphia, Third Quarter 2023, pp. 2-7).

Here are a couple of figures to summarize the main takeaways. The first figure shows the share of children who, at age 30, earn more than their parents did at age 30, adjusted for inflation. For children born in the 1940s and ’50s, the share was 80% or higher. For children born in the 1960s and 1970s, it was about 60%. For children born in 1984 (who would have been 30 years old in 2014), the share is about 50%.

A related but different measure of intergenerational mobility looks at how ranking of parents’ income is correlated with the ranking of their children, when those children are grown to adulthood. As Stuart writes:

Specifically, there is considerable upward mobility for children born to parents with lower incomes. For example, children born to the poorest parents—in the 1st percentile of the income distribution—rise on average to the 31st percentile. There is also considerable downward mobility for children born to parents with higher incomes. Children born to the richest parents—in the 100th percentile—on average fall to the 73rd percentile. When averaging over all parents and children in the data, each 1 percentile increase in parents’ income rank is associated with a 0.37 percentile increase in children’s income rank. This relationship lies between the benchmarks of perfect mobility—where a child’s income rank would be unrelated to their parents’ income rank—and no mobility—where a child’s income rank would equal their parents’ rank.
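To make these two measures concrete, here is a minimal sketch of how they are typically computed once parent and child incomes have been linked. The data below are simulated and purely hypothetical, not the actual linked Census/tax records used in this research, and the code is illustrative rather than any researcher’s actual specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical linked data: each row pairs a parent's income at roughly age 30
# with the child's income at age 30, both in inflation-adjusted dollars.
rng = np.random.default_rng(0)
n = 10_000
parent_income = rng.lognormal(mean=10.5, sigma=0.7, size=n)
child_income = parent_income ** 0.4 * rng.lognormal(mean=6.3, sigma=0.6, size=n)
df = pd.DataFrame({"parent_income": parent_income, "child_income": child_income})

# Absolute mobility: the share of children earning more than their parents did
# at the same age, in real terms (the first figure described above).
absolute_mobility = (df["child_income"] > df["parent_income"]).mean()

# Relative (rank-rank) mobility: convert both generations' incomes to percentile
# ranks and regress the child's rank on the parent's rank.
df["parent_rank"] = df["parent_income"].rank(pct=True) * 100
df["child_rank"] = df["child_income"].rank(pct=True) * 100
rank_rank = smf.ols("child_rank ~ parent_rank", data=df).fit()

print(f"Share of children out-earning parents: {absolute_mobility:.2f}")
print(f"Rank-rank slope: {rank_rank.params['parent_rank']:.2f}")

In this framework, a rank-rank slope of 0 would be perfect mobility and a slope of 1 would be no mobility at all; the 0.37 reported above sits a bit more than a third of the way from the first benchmark toward the second.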

The breakthrough in the last 8-10 years in this area of research is that it became possible to take Census data and link it to data from federal income tax returns over a sustained period of time. The data is “de-identified,” so that it’s impossible to track specific individuals; one can only look at patterns. However, when filling out tax returns, you list your children and their Social Security numbers. Thus, it’s possible to look at patterns of adult earnings, and then later at earnings for those children. It’s also possible to look at the neighborhoods where children grew up, at what happens to families who move to higher-income or lower-income neighborhoods, and all sorts of interesting stuff. For an overview of this line of research in earlier posts, see “Intergenerational Mobility and Neighborhood Effects” (March 8, 2021) and “Black-White Income and Wealth Gaps” (July 2, 2018).

I do not have a magic ethics ball to tell me if this is “enough” intergenerational mobility or not. After all, any distribution of income will always have, by definition, half of the population below the median income and half above. Not everyone can be above-average. Humans and the world being what they are, it doesn’t feel reasonable to expect that children will be unaffected by the household where they grow up.

On the other side, a society where most people are out-earning their parents, and thus feel an expanding sense of possibility, will have a different feeling than a society where half the people are not out-earning their parents. Perhaps the issue is not mobility from bottom-to-top, or top-to-bottom, but the share of people who feel that they have a “middle-class” level of income. A few years back, the OECD did a report on the middle class, arguing in part that being middle class means feeling that you can afford certain middle-class goods, and in particular, a middle-class level of education, health care, and housing. Thus, trying to assure that children from lower-income families have a reasonable shot at the education and health they need to succeed is one useful goal, but the definition of “success” may depend on policies that make education, health care, and housing feel available and affordable to those with middle-class incomes.

For some earlier articles about research on intergenerational mobility from the Journal of Economic Perspectives (where I work as Managing Editor), see the article by Miles Corak in the Summer 2013 issue:  “Income Inequality, Equality of Opportunity, and Intergenerational Mobility.” Also, from the Summer 2002 issue, see:

Claudia Goldin: A Nobel for the Study of Women’s Labor Market Outcomes

Claudia Goldin has been awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2023 “for having advanced our understanding of women’s labour market outcomes.”

Goldin does economic history, but as a different economic historian once said to me: “History starts yesterday.” In a similar spirit, Goldin’s work (with a primarily US focus) ranges across centuries and decades, but also focuses directly on 21st century patterns. As usual, the Nobel committee has published two descriptions of Goldin’s work, which they designate as the “Popular Science Background” (7 pages long) and the “Scientific Background” (38 pages long). With those overviews easily available, I’ll just touch on three themes here.

First, real economic historians like Goldin do serious archival work, often uncovering previously unstudied data or looking at common data in a different way. For one of many examples, Goldin noticed that in the data on “occupations,” married women often had “wife” listed as their occupation. In the past, this designation had been taken to mean that women weren’t working as part of the paid labor force. As the “Scientific Background” report notes:

One of her main contributions concerns the under-counting and omission of female workers, especially in the period before 1940. During this time, it was, for example, common to simply list “wife” in the census when referring to married women. In a modern context, this would imply nonparticipation in the labor market. Yet many married women who were recorded as wives actually engaged in what we would now consider labor market activity. The wife of a small farmer or farm laborer almost certainly worked together with her husband on the farm. The wives of boardinghouse keepers, and probably many other small business owners, worked in their husband’s business. Combining data from income reports, time budget surveys and census data, Goldin showed that adjusting for the undercounting of women in the agricultural sector raises the female labor force participation rate by almost 7 percentage points in 1890. Adjusting for undercounting in other sectors (mainly boardinghouse keepers and manufacturing workers) adds an additional 3 percentage points. Most of the adjustments apply to white married women: the corrected female participation rate of that group is five times the official census statistic (12.5% versus 2.5%). The labor force participation rate for married women in 1890, therefore, is similar to that observed in 1940. … Goldin (1990) provided evidence that the undercounting problem is likely greater further back in time, since the occupations of women were not collected in most pre-1890 censuses.

Digging into the alternative US data sources going back a couple of centuries offers a number of insights, but one big takeaway is that the share of women producing output for sale in the market has had a U-shape over US economic history: that is, very high levels back around 1800, when many women worked in agriculture; a declining share with the shift from agriculture to industry in the 19th century; and then a rising share with the transition to service industries starting about 1920.

Second, a substantial advantage of a historical approach is that it allows the researcher to seek to understand big institutional or technological changes. In a substantial share of modern economic research, the focus is on a particular theoretical model or a particular set of data. In contrast, work by economic historians is often focused on a bigger question, using theory and data as needed to illuminate it. Here are a few examples:

One reason why women were not represented in the paid labor force in the early decades of the 20th century was the growing adoption of “marriage bars,” laws and rules which blocked women from staying in many jobs after they married. In other words, the role of women in the labor force was not just a matter of social convention or choices made within families, but was enforced by law. The Nobel committee again:

in the 19th century … women almost uniformly left the labor force upon marriage (and were married for the vast majority of their adult lives). The social stigma and norms driving this exit were formalized into explicit regulations in the late 19th and early 20th centuries. So-called marriage bars, which explicitly prohibited the hiring or employment of married women, were introduced. Goldin (1988, 1990) documented two kinds of marriage bars. “Hire bars” banned hiring married women but permitted firms to retain women who got married while already employed. “Retain bars” were more restrictive and required the firing of women upon marriage. The use of the marriage bars peaked after the Great Depression and were particularly common for positions as teachers and clerical workers. In 1942, 87% of school districts had hire bars and 70% had retain bars. Marriage bars were also more prevalent in large firms. A 1930s survey of firms found that 35-40% of women worked in firms that would not hire a married woman.

For an example of a major technological change, Goldin (along with Larry Katz) considered how the contraceptive pill affected labor market outcomes for women. The Nobel committee wrote:

In the US, the first oral contraceptive was approved in 1960 and made available to married women. But until the end of the 1960s, access was limited for young unmarried women. Single women below the (state-specific) age of majority needed parental consent to access the pill. In the early 1970s, many states reduced the age of majority from 21 to 18 and passed laws increasing access to family planning and contraception without parental consent. Thus, there is state-by-time variation in access to oral contraceptives for young single women. Importantly, the changes in the age of majority were not actually driven by family planning concerns but rather by a desire to reduce the age of conscription for the Vietnam War.

Using the variations in when laws were passed across states, along with other evidence, “they found breaks in the time series of premarital sex behavior, age of marriage, and career investment, which occur for women born in the early 1950s (i.e., the first cohorts of unmarried women to have access to the pill). … [F]or instance, a surge in investment in professional programs started in the early 1970s when these women made their college education choices.”
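The research design behind these findings turns on the fact that different states lowered the age of majority and expanded access at different times. Here is a stylized sketch of how that kind of state-by-cohort variation is commonly exploited with a two-way fixed-effects regression; the data, variable names, and specification are hypothetical illustrations, not Goldin and Katz’s actual analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical micro data: one row per woman, with her state, birth cohort, an
# indicator for whether the pill was legally accessible to unmarried 18-year-olds
# in her state and cohort, and an outcome such as enrollment in a professional
# degree program.
rng = np.random.default_rng(1)
n = 5_000
state = rng.integers(0, 20, size=n)              # 20 hypothetical states
birth_cohort = rng.integers(1945, 1956, size=n)  # birth cohorts 1945-1955
law_change = 1948 + state % 5                    # hypothetical state-specific timing
pill_access = (birth_cohort >= law_change).astype(int)
professional_program = 0.10 + 0.05 * pill_access + rng.normal(0, 0.2, size=n)
df = pd.DataFrame({
    "state": state,
    "birth_cohort": birth_cohort,
    "pill_access": pill_access,
    "professional_program": professional_program,
})

# State and cohort fixed effects absorb level differences across states and across
# birth years, so the coefficient on pill_access is identified from the differing
# timing of the (hypothetical) law changes across states.
model = smf.ols(
    "professional_program ~ pill_access + C(state) + C(birth_cohort)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(model.params["pill_access"])

The appeal of this sort of design, as the quotation notes, is that the timing of the age-of-majority changes was driven by the Vietnam-era conscription debate rather than by family planning concerns, which makes it more plausible that the variation is unrelated to women’s own career plans.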

An ongoing theme in Goldin’s work is how changes for broad groups, like “women,” often seem to happen for a cohort (that is, a group born at about the same time) where shifts in conditions combine to change expectations and also behavior.

Finally, I’ll point to one of Goldin’s contributions to the study of women and the labor market in the 21st century. An overall pay gap between men and women remains. There’s solid research that much of the remaining pay gap is a “parental” gap, reflecting the fact that women end up doing more childcare than men. As a result, women find themselves less likely to have an uninterrupted career path, and more likely to end up in jobs that offer more work/life balance.

Goldin pushed this line of thought further. She found that “the majority of the current earnings gap [between men and women] comes from earnings differences within rather than between occupations.” In particular, the pattern in a number of occupations is that those who are most highly paid work long hours: that is, in many occupations the pay per hour is higher for someone working a 50-60 hour week than for someone working a 35-40 hour week or a 20-25 hour week.

As the Nobel committee writes: “[W]omen receive a wage penalty for demanding a job flexible enough to be the on-call parent. Men, on the other hand, receive a premium for being flexible enough to be the on-call employee, i.e., constantly available to meet the needs of an employer and/or client. In jobs where such “face time” is valued, one employee cannot easily substitute for another and part-time work is hard to implement. Nonlinearities in wages emerge as a result: workers willing to work many hours are rewarded with a higher wage.”
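To see what such a nonlinearity means arithmetically, here is a tiny, purely hypothetical sketch; the functional form and numbers are invented for illustration and are not drawn from Goldin’s work.

# Stylized convex earnings schedule: annual pay rises more than proportionally
# with weekly hours, so the implied hourly wage is higher for long-hours workers.
def annual_earnings(weekly_hours, base=1_000, elasticity=1.4):
    return base * weekly_hours ** elasticity

for hours in (25, 40, 55):
    pay = annual_earnings(hours)
    hourly = pay / (hours * 50)  # assuming 50 working weeks per year
    print(f"{hours} hours/week: annual pay {pay:,.0f}, implied hourly rate {hourly:,.1f}")

Under this made-up schedule, the 55-hour worker earns roughly $99 per hour while the 25-hour worker earns roughly $72 per hour: exactly the kind of wage structure in which the “on-call employee” is rewarded and the “on-call parent” is penalized.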

Thus, certain occupations, like pharmacy, offer a high level of workplace flexibility, and the wage gap per hour between men and women in those jobs is relatively low. An ongoing subject for research is whether it is possible to have greater flexibility on hours in other occupations, perhaps by pushing back on expectations that the fast-track and highly paid workers must also be those who work the longest hours. But it may be that greater workplace flexibility holds one of the secrets to further reductions in the male/female wage gap.

For a few earlier blog posts on Goldin’s work, see: