Construction Jobs: A (Partial) Substitute for Manufacturing Jobs?

For many manufactured goods, the output can be produced in many different places–including other countries–and then shipped to where it is needed. Moreover, manufacturing jobs around the world are being challenged by the rise of automation: in particular, high-tech industries like semiconductor manufacturing are extremely capital-intensive, with relatively few jobs.

In comparison, for construction jobs, the output typically needs to be produced in the particular place where it is used. You can’t outsource production of a highway or an apartment building. Sure, construction involves heavy equipment, but it also involves a substantial number of jobs. Consider the trends in manufacturing and construction jobs over time. In the last four decades, in particular, the trend in manufacturing jobs is mostly down and the trend in construction jobs is mostly up.

There is a degree of substitutability between certain manufacturing and construction jobs. As an example, look at the patterns in the early 2000s, when there was both a boom in construction jobs and a drop in manufacturing jobs. For more on this period, see the article by Kerwin Kofi Charles, Erik Hurst, and Matthew J. Notowidigdo in the Spring 2016 issue of the Journal of Economic Perspectives (where I work as Managing Editor), “The Masking of the Decline in Manufacturing Employment by the Housing Bubble.”

As I’ve written before, it seems important to me that the US economy have a strong and healthy manufacturing sector, in part because innovative technologies are often linked to activities that are being carried out nearby. But when it comes to jobs for medium-skilled workers, manufacturing jobs are unlikely to come back in force. Could jobs in a substantially expanded construction industry fill some of this gap? Consider some examples.

Housing prices are high in many urban areas, and an increase in supply would help to keep those prices from continually rising further. Also, many downtown areas are experiencing a substantial drop in the need for office space, as work-from-home patterns (at least a few days each week) take root. The transition of existing office buildings to alternative uses, whether it’s housing, retail, restaurants, entertainment, or something else, is an enormous construction task.

Construction is involved in the ongoing energy agenda, both in building solar and wind generation, but perhaps even more in building the transmission lines to take the electricity to where it is needed. I’ve heard the amount of additional transmission capacity needed compared to the effort it took to build the interstate highway system. Fires in California, Hawaii, and elsewhere suggest the need for construction to make power lines safer. At a bigger level, it seems important to think seriously about what is needed to “harden” the electrical grid against the risk of electromagnetic pulses, whether generated naturally by solar flares or by weapons. I’d also add that America’s pipelines for carrying oil and natural gas are a far safer method of transport than using trucks and trains to carry these materials, and they could use updating as well.

As use of internet capacity continues to grow rapidly, so does construction of data centers and of the wired and wireless communication lines that serve them.

Many areas are living with infrastructure that was built decades ago: old city water systems, drainage and runoff tunnels, old dams, old sewage treatment plants, and old public lighting. There are old bridges and roads, old seaports and airports. In a number of agricultural areas, construction projects to filter runoff from fields are needed to protect streams, lakes, and aquifers. Many K-12 schools could use a serious upgrade to their physical plant, along with some community colleges and public universities, as well.

Readers can probably add to this list. I’m not suggesting that this list requires yet another large federal spending program. In many cases, the spending should be directed by private firms and state or local governments. Also, I recognize the problems with prioritizing and managing large construction projects. But when I look out across the US economy, I don’t see an enormous shortage of manufactured goods. I do see a need for a sustained push in construction projects that, if we were ever to reform our political process of permits and reviews and get started on these projects in earnest, would still take several decades to complete. And as an engine for growth in medium-skill US-based jobs, construction seems far more promising to me than manufacturing.

Globalizing to Protect National Security, the Poor, and the Environment

The winds of public opinion and politics have been blowing toward de-globalization for a few years now, not just under the Biden and Trump administrations in the US, but in countries around the world. The complaints about globalization come in (at least) three categories: it’s bad for national security by strengthening geopolitical rivals and creating risky and overstretched supply chains; it’s bad for workers and the poor; and it’s bad for the environment. The World Trade Organization takes on all three arguments in its World Trade Report 2023: Reglobalization for a Secure, Inclusive, and Sustainable Future.

Concerns that trade might undermine national security seem to be on the rise. The WTO report offers a chart of the number of quantitative trade restrictions around the world that refer to Article XXI, the “Security Concerns” exception to freer trade. The number has tripled in the last decade.

Concerns about trade and national security come in several parts. One is concern over the reliability of supply chains; that is, trade makes the US economy dependent on long supply chains that can be disrupted. But the obvious lesson here would seem to be to have a well-diversified set of supply chains from many sources, not to have fewer global suppliers.

More broadly, it’s useful to remember that a fundamental purpose of the world trading system has been the idea that nations which trade are less likely to be nations which fight: to put it another way, when two nations that have been trading fight, they face both the costs of war and the costs of disrupted trade. The WTO report notes:

Security and geopolitical concerns have also always been an important aspect of the multilateral trading system. The founding of the WTO’s predecessor, the General Agreement on Tariffs and Trade (GATT), was in part a response to the disastrous effects of two world wars and the first era of deglobalization in which bloc-based trade had started to dominate multilateral cooperation. As one pillar of the international system established in the aftermath of the Second World War, the GATT’s aim was to promote cooperation and address the underlying causes of the war in combination with the United Nations, the World Bank and the International Monetary Fund (IMF) …

I certainly make no claim that these global institutions are perfect, or anywhere near it. But it’s unwise to assume that the alternatives would be more peaceful or secure. Is a Chinese economy that depends on a substantial export sector more or less likely to invade Taiwan, knowing the economic pushback that would result?

The concerns that trade contributes to poverty, inequality, or economic disruption look different depending on your viewpoint. If you are living in a less developed country, the ability to export goods and services to the high-income countries of the world, and to benefit from technology transfer from those countries, is a core component of your hopes to reduce poverty and raise the standard of living (which includes, remember, education, health, and a cleaner environment) over time. I am not aware of any countries that have reached a high standard of living while separated from international trade. From a global perspective, it seems clear that trade has been a force for reducing poverty.

From the perspective of any individual country, like the United States, the effects of trade are a mixture of overall gains, but at the expense of targeted disruption. But any economy is being continually disrupted by a number of factors: new technologies for consumer goods, new technologies for production of goods, shifts in consumer demand, better-managed and worse-managed companies, and international trade. These disruptions are part of the price paid for a rising standard of living over time: growth doesn’t happen, nor do people improve their education, health care, and the environment, without disruptive change. The US economy, with its giant internal market, is actually not overly exposed to international trade: exports and imports are each about 12-14% of US GDP in a given year. For the US economy, trade is one disruptor among many, and almost certainly not the biggest one. Also, if global trade were notably reduced, that would be disruptive as well, for industries from farming to pharmaceuticals to machinery, and many more. Many other high-income countries are more susceptible to being disrupted by trade than the United States (that is, imports and exports are a higher share of their GDP), but they have more policies in place to reduce disruption and reduce inequality.

The concerns that trade contributes to environmental decline include only half the story: on one side, yes, trade is part of production, and increases in the scale of production will tend to increase pollution. But the other side is that trade also creates efficiency gains (after all, that is how trade provides mutual benefits), and shifts the mix of products (for example, potentially toward trade in technologies and practices that can reduce pollution). The WTO writes:

When measuring the impact of trade on the environment, it is important not only to account for the amounts of pollution associated with trade, but also to consider a situation without international trade. In such a hypothetical case, domestic production would have to rise to meet consumer demands while maintaining the same standards of living. Consequently, the reduced pollution from less trade would be partly offset by increased pollution from domestic production. Moreover, without trade, economies lacking certain resources or production capacity would not be able to consume many products, while some producing economies would not be able to expand investments due to the limited scale of their domestic market. Some studies suggest that international trade increases carbon dioxide (CO2) emissions by 5 per cent, compared with a scenario without trade. Moreover, the benefits of international trade exceed its environmental costs from CO2 emissions by two orders of magnitude (Shapiro, 2016). Similar findings have been observed for sulphur dioxide (SO2) emissions, where trade contributes to a 3-10 per cent increase in emissions compared to a scenario without trade (Grether, Mathys and de Melo, 2009).

In addition, if you believe that international agreements will need to play a useful role in addressing global environmental issues, then it’s important to remember that trade agreements can play a role in facilitating environmental agreements. Here’s the WTO:

Greater international cooperation is key if trade is to play an even more important role in environmental sustainability. The benefits of re-globalization include creating a more integrated global environmental governance system. Importantly, when combined with appropriate environmental policies, trade can significantly advance the green transition by unlocking green comparative advantage. This would enhance the ability of developing economies to tap into new trading opportunities arising from the green transition.

A reasonable reaction to the World Trade Organization discussion, it seems to me, is to say something like: “It’s complicated. The appropriate goal should be to favor the kinds of trade that improve national security, reduce poverty/inequality, and improve the environment, and to discourage trade that does the opposite.” Fair enough! I strongly suspect that if you believe trade can potentially play a meaningful positive role in these three issues, and not an inevitably negative one, the WTO would view its report as a success.

Negative Reflections on the Positive Case for the New Proposed Merger Guidelines

The government antitrust enforcers have two institutional homes: the Antitrust Division of the US Department of Justice and the Federal Trade Commission. The Biden administration team in both places has shown a strong desire to overturn antitrust law as it has evolved over the last 50 years or so, and to return to the earlier tradition. One manifestation of this change is a proposed revision of what are called the Merger Guidelines. The guidelines started back in 1968, and have been updated a number of times since then, including in 1982, 1984, 1992, 1997, 2010, and 2020.

I’ve commented on this blog a few times about the Biden antitrust team, the new merger guidelines, and the historical changes in merger law over time. For those who want to dig deeper, some of my more recent posts include:

The ProMarket center at the University of Chicago has been collecting readable and reasonably short essays on the proposed new Merger Guidelines from a dozen experts in the field, with a mixture of pro, con, and in-between reactions. If you want to spend some time getting an overview of the arguments on all sides, the collection is a good place to start. (I lean to the “con” side.)

Here, I won’t try to review essays by a dozen people, some of them multi-part. Instead, I’ll focus on an essay by an advocate of the proposed new guidelines, Zephyr Teachout, whose essay is titled “The Proposed Merger Guidelines Represent a Reassertion of Law over Ideology” (August 16, 2023). As the title suggests, the case for the merger guidelines is that the “ideology” of economics has become too central to the administration of antitrust law, and that we need to return to the “law.” Here’s a representative quotation from the Teachout essay.

First, these Guidelines represent a return to the text of the Clayton Act. Section 7 of the Clayton Act prohibits mergers and acquisitions where “in any line of commerce or in any activity affecting commerce in any section of the country, the effect of such acquisition may be substantially to lessen competition, or to tend to create monopoly.” As the new Guidelines point out, the language is explicitly prophylactic, and preventative. Section 7 is a malum prohibitum instead of a malum in se law. The Clayton Act instructs the Agencies to stop that which might substantially lessen competition, not merely that which definitively will substantially lessen competition (let alone that which definitely will increase prices). …

Unlike the 1982 Guidelines, which subordinated the Clayton Act’s goals and case law to the ideology of efficiency and consumer welfare, the draft Guidelines are tethered to case law. … [T]he Guidelines give permission to use common sense and not require every bit of logic in a merger decision be derived from high-priced economic experts. For instance, serial acquisitions can be a threat to competition, and mergers between competitors can be a threat to competition. Again, these common sense assertions should not need expression, but the legal part of the antitrust legal community has become so intimidated by the control economists have exercised over the last 40 years—an intimidation that started with Baxter’s aggressive use of economic experts to supersede lawyers’ decisions in the 1980s. Therefore, the permission to be lawyers, use logic, and treat antitrust law like other laws, where the project is primarily legal, is important. The Guidelines reject the extra-legal policy of giving legal decisions to another field of experts. 

I like the bluntness of Teachout’s discussion. What she means by “ideology” are basic ideas of economics like lower prices, consumer welfare, and efficiency. She is explicitly arguing that antitrust enforcement agencies should not need to concern themselves with whether a merger will lead to higher or lower prices, or improvements in consumer welfare, or improved efficiency. Instead, antitrust should be able to block any mergers or acquisitions that tend to limit “competition,” whether the effects happen in the present or pose only a future risk, without consideration of whether a merger might benefit consumers or intensify competition, simply because courts should hold such actions to be presumptively bad.

Instead of looking at arguments of “high-priced economic experts,” Teachout argues, such decisions should be governed by the “common sense assertions [that] should not need expression” of lawyers. If faced with a choice between high-priced economic experts and common sense assertions of lawyers, I would beg Divine Providence on my knees for a third choice. But at least the economic experts, on both sides, offer reasons for their conclusions. In a legal context, saying it’s just common sense assertions is equivalent to saying “I don’t want or need to explain my reasons.”

It is not clear what a court is supposed to do when the common-sense assertions of the lawyers for the antitrust enforcers who want to block a merger conflict with the common-sense assertions of the lawyers for the companies that want to carry out a merger. Once you have ruled out economic arguments about the likelihood that consumers will benefit from a merger, or not, it’s not clear what’s left.

There is an unintentionally hilarious moment later in Teachout’s essay, when she argues: “The draft Guidelines indicate the Agencies will use lower market concentration thresholds to trigger review. This change is long overdue, based on the evidence; see the great work by economist John Kwoka.” I happen to be an admirer of Kwoka’s work. But apparently Teachout is perfectly willing to use economic interpretations of what concentration thresholds are appropriate, as long as it fits her personal common sense, but we should ignore any high-priced economic experts with a different view.

It’s perhaps useful to note here that the Merger Guidelines do not make the law. Merger law is determined by how courts interpret statutes and earlier cases. The virtue of earlier Merger Guidelines was that they tried to offer a fair-minded overview of the existing law. Indeed, in most modern antitrust cases, both sides could start by agreeing that the Merger Guidelines were a fair statement of the existing law–and then start arguing. The proposed new version of these guidelines is instead an argumentative case for what the writer would prefer the law to be: the proposed guidelines would be challenged in court, and if the courts stick to existing precedents, the new guidelines would lose. Thus, Teachout and other supporters argue that the proposed new Merger Guidelines are the true law, from before it was corrupted.

There’s a certain amount of myth-making going on in this case for new Merger Guidelines. As Teachout tells the story, everything was basically fine in antitrust law until the Reagan-era antitrust authorities followed a limited Chicago-school “ideology” and started emphasizing whether a merger leads to lower prices, consumer welfare, efficiency, and other ideologies. Thus, the proposed new guidelines quote various antitrust cases and claim to be the true inheritors of the law. But as Teachout surely knows, this intellectual history is oversimplified to the point of caricature.

Concerns about how best to interpret antitrust law, and how to combine legal and economic insights, go back for decades–and have not been particularly partisan. For example, there was an American Bar Association report on these issues back in 1956. Legal critics of the incoherencies of antitrust law as it existed back in the post-World War II decades include academics from Harvard, Yale, Columbia, Michigan, and many other places as well as the University of Chicago. The most well-known legal treatise on antitrust, continuing through multiple editions up to the present day, started with Philip Areeda of Harvard, who was later joined as a co-author by Herbert Hovenkamp of the University of Pennsylvania. Moreover, in the last half-century, plenty of Democratic-identified judges, lawyers, and economists have been overall just fine with the emerging and evolving synthesis of law and economics–even though they have disagreements about specific cases.

The notion that a few rebel economists and a Reagan-era antitrust administrator hijacked the mainstream consensus–and everyone else has just gone along with that hijacking for 40 years–is incorrect. Indeed, the new proposed Merger Guidelines do cite lots of cases, but they are old cases that have not reflected actual case law for decades now.

There is also an implicit assertion that merger law as it was enforced back in the 1950s and 1960s was especially tough, and since then has become too lenient. This claim would have struck writers of the 1950s and 1960s as ridiculous. As I wrote in an earlier post:

If one looks back to the Fortune 500 list of largest companies in, say, 1960, you find the US auto industry dominated by General Motors (#1 overall on the list), Ford (#3) and Chrysler (#9). The US steel industry is dominated by US Steel (#5) and Bethlehem Steel (#13). The US oil industry was dominated by Exxon (#2), Mobil (#6), Gulf Oil (#7), and Texaco (#8). Government-regulated AT&T (#11) provided nationwide monopoly phone service. General Electric (#4) dominated in a swath of industries including appliances, engines, and turbines, while DuPont (#12) dominated in chemicals. Such examples could easily be multiplied, as some social critics pointed out. As one prominent example, John Kenneth Galbraith published a best-seller called The New Industrial State in 1967, which basically argued that the United States was no longer a free market economy, but instead had become dominated by large corporations who used advertising to determine consumer demand.

The idea that antitrust law was aggressively going after big companies in the 1950s and 1960s doesn’t match the facts. Indeed, many of the complaints about antitrust decisions at the time arose when the antitrust regulators went after mergers of small local grocery chains or small companies that made shoes. The legal reasoning was precisely what Teachout advocates above: maybe these mergers themselves wouldn’t lead to less competition, but additional future mergers might do so. If two companies in the 1960s wanted to merge because they thought it would bring greater choices and lower prices for consumers, the courts would typically rule that this outcome was bad for “competition”–because if some firms offered consumers lower prices and more choices, other firms would find it harder to compete. Conversely, in the mid-1980s, under the new and supposedly more lenient antitrust rules, the national phone monopoly of AT&T was broken up.

There are plenty of antitrust issues of concern in the modern economy. As I have pointed out in earlier posts, many of the current issues in antitrust are about digital companies: Amazon, Google, Facebook, Netflix, Apple, and others. Other topics are about large retailers like WalMart, Target, and Costco. Still other topics are about mergers in local areas: for example, if a small metro area has only two hospitals, and they propose a merger, how will that affect both prices to consumers and wages for health care workers in that area? A prominent case a few years ago found a group of Silicon Valley companies guilty of anti-competitive behavior for agreeing not to poach each other’s workers. Another set of topics involves how to make sure that when drug patents expire, generic drugs have a fair opportunity to compete. Another topic is about tech companies that pile up a “thicket” of patents, with new patents continually replacing those that expire, as a way of holding off new competitors. Other topics involve improving the number of competitors that consumers face when choosing home internet service, or a smartphone.

Advocates of the proposed new guidelines like to phrase their position as being for increased merger enforcement, and thus to imply that opposing the proposed guidelines is just being against greater enforcement. But as other comments in the ProMarket symposium illustrate, many (economist) authors view themselves as very much in favor of tougher antitrust enforcement in many of the situations I just described. They just think that the legal aspects of antitrust decisions should be interpreted through a lens of economic reasoning, not the “common sense assertions” of lawyers.

The Case for Housing First

The homeless population can be loosely divided into three groups: the transient homeless who use a shelter once; the episodic homeless who return to the shelter repeatedly, but for brief periods; and the chronic homeless, who rely on homeless shelters for long periods. The chronic homeless are also much more likely to have issues with substance abuse, disabilities, and health issues.

If one looks at all the people who are homeless during a year, the chronic homeless are a fairly small share–maybe 10% or so, depending on the details of how the group is defined. But this group also takes up half or more of all the homeless shelter days. When not at homeless shelters, or outside on the street, they may instead end up in hospitals or in some cases in jails. The chronic homeless may be the most visible, and most troubling, part of the homeless population.

There are two broad models for how to address the chronic homeless, which go under the headings of “treatment first” and “housing first.” Joseph R. Downes makes the case for the second in “Housing First: A Review of the Evidence” (Evidence Matters: US Department of Housing and Urban Development, Spring/Summer 2023, pp. 11-19).

As Downes described it, these two paradigms both emerged in the 1990s. With treatment first, the process is a “staircase” model where as the person shows a commitment to sobriety and treatment, they can move from emergency to temporary and perhaps to permanent housing. With housing first, an early program required only that participants pay 30% of income for housing (which in practice often meant 30% of the cash benefits they were receiving from Supplemental Security Income) and that they meet with a staffer twice a month. The George W. Bush administration endorsed a housing first approach, and it has guided federal homelessness programs since then.

My working assumption is that readers of this blog may have strong visceral or philosophical reactions to treatment first and housing first. But in addition, readers would like to know about the studies of what actually works. The gold standard for methodology in this area is the “randomized control trial,” in which people are randomly assigned to either a treatment first or a housing first approach. Downes writes:

To assess the effectiveness of Housing First and the role of consumer choice, a randomized controlled trial (RCT) was performed on the Pathways to Housing program in 2004. Participants were assigned randomly to either a Housing First experimental group or a local Continuum of Care control group to receive treatment as usual (TAU). Eligibility for this study reflected key characteristics of the chronically homeless population: participants must have spent half of the previous month living on the street or in public places, exhibited a history of homelessness over the previous 6 months, and been diagnosed with an Axis I mental health disorder. The results indicate that Housing First participants experienced significantly faster decreases in homeless status and increases in stably housed status than the TAU group did, with no significant differences in either drug or alcohol use. Overall, the Housing First experimental group demonstrated a housing retention rate of approximately 80 percent, roughly 50 percentage points above that of TAU, which, the authors noted, “presents a profound challenge to clinical assumptions held by many Continuum of Care supportive housing providers who regard the chronically homeless as ‘not housing ready.’”

Four major RCTs have been performed to compare the effectiveness of Housing First programs with treatment first programs. Three of these RCTs were conducted in the United States, and the other was conducted in Canada. In a review of these RCTs, Tsai notes that two RCTs conclusively found that Housing First led to quicker exits from homelessness and greater housing stability than did TAU. In the Canadian trial, an RCT in five of Canada’s largest cities known as At Home/Chez Soi, analysis revealed that, in findings similar to those of the American RCTs, “Housing First participants spent 73% of their time in stable housing compared with 32% of those who received treatment as usual.” Baxter et al. also performed a systematic literature review and metanalysis of these four RCTs, finding that Housing First resulted in significant improvements in housing stability. This study also found that no clear differences existed between Housing First and TAU for mental health, quality of life, and substance use outcomes …

In short, the findings seem to be that using permanent housing as a carrot to encourage the chronic homeless to go through treatment doesn’t work well. The result is too often that neither effective treatment nor permanent housing results. The housing first approach at least does better on providing housing, although by itself it doesn’t seem to improve the underlying issues that drive the problems of the chronic homeless, either.

However, the housing first approach may offer some additional benefits, although the evidence on these themes is not always consistent across studies. First, one of the randomized studies found:

[P]articipants in Housing First reported a significant reduction in costly emergency room visits and hospitalizations compared with TAU — 24 percent and 29 percent, respectively. Based on these findings, Basu et al. evaluated the relative costs of Housing First versus treatment first programs, assessing differences in hospital days, emergency room visits, outpatient visits, days in residential substance abuse programs, nursing home stays, legal services (including days in incarceration), days in shelter housing, and case management between the two programmatic models. Basu et al. found that participants in Housing First programs had decreased costs because they spent fewer days in hospitals, emergency rooms, residential substance abuse programs, nursing homes, and prisons or jail. On the other hand, Housing First participants incurred higher costs from higher outpatient visits per year and a greater number of days in stable housing than TAU participants. Ultimately, a comprehensive cost analysis from this RCT found that Housing First saved $6,307 annually per homeless adult with a chronic medical condition, with the highest cost savings occurring for chronically homeless individuals, at $9,809 per year.

Other randomized studies do not back up these cost savings, which often means that something is going on in the details of how the programs are run or how the costs are being measured that doesn’t match up across the studies.

The other gain from housing first involves family dynamics, like issues of spousal abuse and child welfare. Downes writes:

Recently, a team from Michigan State University, with support from the Washington State Coalition Against Domestic Violence, the Office of the Assistant Secretary for Planning and Evaluation in HHS, and the Gates Foundation completed a study to assess the effects of Housing First programmatic assistance on domestic violence survivors experiencing homelessness. For this program, adherence to the Domestic Violence Housing First (DVHF) model included mobile, housing-focused advocacy; flexible financial assistance for housing and other needs; and community engagement. The study found that adherence to this survivor-centered, low-barrier service model yielded a statistically significant difference between DVHF recipients and those receiving TAU, with DVHF recipients experiencing improved outcomes in the categories of housing instability, physical abuse, emotional abuse, stalking, economic abuse, use of the children as an abuse tactic, depression, anxiety, posttraumatic stress disorder, and children’s prosocial behaviors.

I wouldn’t want to downplay the practical and logistical difficulties of providing housing to the chronically homeless, and then working on their other life issues afterward. But in a situation of imperfect alternatives, the Housing First approach seems the better option.

Evidence on Declining Intergenerational Mobility in the United States

If the US economy had considerable intergenerational mobility–that is, if the children growing up in lower-income households had a reasonably good chance of ending up as adults in higher-income households, and conversely the children growing up in higher-income households had a reasonably good chance of ending up as adults in lower-income households–then I would be less concerned about the extent of income and wealth inequality. Brian Stuart offers a readable overview of the current evidence in “Inequality Research Review: Intergenerational Economic Mobility” (Economic Insights: Federal Reserve Bank of Philadelphia, Third Quarter 2023, pp. 2-7).

Here are a couple of figures to summarize the main takeaways. The first figure shows the share of children at age 30 who earn more than their parents did at age 30, adjusted for inflation. For children born in the 1940s and ’50s, the share was 80% and higher. For children born in the 1960s and 1970s, it was about 60%. For children born in 1984 (who would have been 30 years old in 2014), the share is about 50%.

A related but different measure of intergenerational mobility looks at how the ranking of parents’ income is correlated with the ranking of their children’s income, once those children are grown to adulthood. As Stuart writes:

Specifically, there is considerable upward mobility for children born to parents with lower incomes. For example, children born to the poorest parents—in the 1st percentile of the income distribution—rise on average to the 31st percentile. There is also considerable downward mobility for children born to parents with higher incomes. Children born to the richest parents—in the 100th percentile—on average fall to the 73rd percentile. When averaging over all parents and children in the data, each 1 percentile increase in parents’ income rank is associated with a 0.37 percentile increase in children’s income rank. This relationship lies between the benchmarks of perfect mobility—where a child’s income rank would be unrelated to their parents’ income rank—and no mobility—where a child’s income rank would equal their parents’ rank.
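The rank-rank relationship Stuart describes can be sketched as a simple linear model. This is purely illustrative, not the study’s actual estimation: the intercept below is a hypothetical value chosen to match the quoted figure that children of 1st-percentile parents reach, on average, the 31st percentile.

```python
# Illustrative sketch (hypothetical calibration, not the study's data):
# the linear rank-rank model, where each 1-percentile increase in parents'
# income rank is associated with a 0.37-percentile increase in the child's
# expected rank.

def expected_child_rank(parent_rank: float, slope: float = 0.37,
                        intercept: float = 31.0 - 0.37) -> float:
    """Linear rank-rank model. The intercept is chosen so that parents at
    the 1st percentile map to children at the 31st, as quoted above. The
    value it implies for 100th-percentile parents (~68) is close to, but
    not exactly, the quoted 73rd percentile, since the true relationship
    is not perfectly linear."""
    return intercept + slope * parent_rank

# Benchmarks: slope 0 would be perfect mobility (child rank unrelated to
# parent rank); slope 1 would be no mobility (child rank equals parent rank).
print(expected_child_rank(1))    # ~31
print(expected_child_rank(100))  # ~68
```

Under this model, a slope of 0.37 means roughly a third of a family’s rank advantage (or disadvantage) persists, on average, into the next generation.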

The breakthrough in the last 8-10 years in this area of research is that it became possible to take Census data and link it to data from federal income tax returns over a sustained period of time. The data is “de-identified,” meaning that it’s impossible to track specific individuals; it is only possible to look at patterns. However, when filling out tax returns, you list your children and their Social Security numbers. Thus, it’s possible to look at patterns of adult earnings, and then later at earnings for those children. It’s also possible to look at the neighborhoods where children grew up, what happens to families who move to higher-income or lower-income neighborhoods, and all sorts of interesting stuff. For an overview of this line of research in earlier posts, see “Intergenerational Mobility and Neighborhood Effects” (March 8, 2021) and “Black-White Income and Wealth Gaps” (July 2, 2018).

I do not have a magic ethics ball to tell me if this is “enough” intergenerational mobility or not. After all, any distribution of income will always have, by definition, half of the population below the median income and half above. Not everyone can be above average. Humans and the world being what they are, it doesn’t feel reasonable to expect that children will be unaffected by the household where they grow up.

On the other side, a society where most people are out-earning their parents, and thus feel an expanding sense of possibility, will have a different feeling than a society where half the people are not out-earning their parents. Perhaps the issue is not mobility from bottom-to-top, or top-to-bottom, but the share of people who feel that they have a “middle-class” level of income. A few years back, the OECD did a report on the middle class, arguing in part that being middle class means feeling that you can afford certain middle-class goods, and in particular a middle-class level of education, health care, and housing. Thus, trying to assure that children from lower-income families have a reasonable shot at the education and health they need to succeed is one useful goal, but the definition of “success” may depend on policies that make education, health care, and housing feel available and affordable to those with middle-class incomes.

For some earlier articles about research on intergenerational mobility from the Journal of Economic Perspectives (where I work as Managing Editor), see the article by Miles Corak in the Summer 2013 issue:  “Income Inequality, Equality of Opportunity, and Intergenerational Mobility.” Also, from the Summer 2002 issue, see:

Claudia Goldin: A Nobel for the Study of Women’s Labor Market Outcomes

Claudia Goldin has been awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2023 “for having advanced our understanding of women’s labour market outcomes.”

Goldin does economic history, but as a different economic historian once said to me: “History starts yesterday.” In a similar spirit, Goldin’s work (with a primarily US focus) ranges across centuries and decades, but also focuses directly on 21st century patterns. As usual, the Nobel committee has published two descriptions of Goldin’s work, which they designate as the “Popular Science Background” (7 pages long) and the “Scientific Background” (38 pages long). With those overviews easily available, I’ll just touch on three themes here.

First, real economic historians like Goldin do serious archival work, often uncovering previously unstudied data or looking at common data in a different way. For one of many examples, Goldin noticed that in the data on “occupations,” married women often had “wife” listed as their occupation. In the past, this designation had been taken to mean that women weren’t working as part of the paid labor force. As the “Scientific Background” report notes:

One of her main contributions concerns the under-counting and omission of female workers, especially in the period before 1940. During this time, it was, for example, common to simply list “wife” in the census when referring to married women. In a modern context, this would imply nonparticipation in the labor market. Yet many married women who were recorded as wives actually engaged in what we would now consider labor market activity. The wife of a small farmer or farm laborer almost certainly worked together with her husband on the farm. The wives of boardinghouse keepers, and probably many other small business owners, worked in their husband’s business. Combining data from income reports, time budget surveys and census data, Goldin showed that adjusting for the undercounting of women in the agricultural sector raises the female labor force participation rate by almost 7 percentage points in 1890. Adjusting for undercounting in other sectors (mainly boardinghouse keepers and manufacturing workers) adds an additional 3 percentage points. Most of the adjustments apply to white married women: the corrected female participation rate of that group is five times the official census statistic (12.5% versus 2.5%). The labor force participation rate for married women in 1890, therefore, is similar to that observed in 1940. … Goldin (1990) provided evidence that the undercounting problem is likely greater further back in time, since the occupations of women were not collected in most pre-1890 censuses.

Digging into the alternative US data sources going back a couple of centuries offers a number of insights, but one big takeaway is that the share of women producing output for sale in the market had a U-shape over US economic history: that is, very high levels back around 1800, when many women worked in agriculture; a declining share with the shift from agriculture to industry in the 19th century; and then a rising share with the transition to service industries starting around 1920.

Second, a substantial advantage of a historical approach is that it allows seeking to understand big institutional or technological changes. In a substantial share of modern economic research, the focus is on a particular theoretical model or a particular set of data. In contrast, work by economic historians is often focused on a bigger question, using theory and data as needed to illuminate it. Here are a few examples:

For example, one reason why women were not represented in the paid labor force in the early decades of the 20th century was the growing adoption of “marriage bars,” laws and rules which blocked women from staying in many jobs after they married. In other words, the role of women in the labor force was not just a matter of social convention or choices made within families, but was enforced by law. The Nobel committee again:

in the 19th century … women almost uniformly left the labor force upon marriage (and were married for the vast majority of their adult lives). The social stigma and norms driving this exit were formalized into explicit regulations in the late 19th and early 20th centuries. So-called marriage bars, which explicitly prohibited the hiring or employment of married women, were introduced. Goldin (1988, 1990) documented two kinds of marriage bars. “Hire bars” banned hiring married women but permitted firms to retain women who got married while already employed. “Retain bars” were more restrictive and required the firing of women upon marriage. The use of the marriage bars peaked after the Great Depression and were particularly common for positions as teachers and clerical workers. In 1942, 87% of school districts had hire bars and 70% had retain bars. Marriage bars were also more prevalent in large firms. A 1930s survey of firms found that 35-40% of women worked in firms that would not hire a married woman.

For an example of a major technological change, Goldin (along with Larry Katz) considered how the contraceptive pill affected labor market outcomes for women. The Nobel committee wrote:

In the US, the first oral contraceptive was approved in 1960 and made available to married women. But until the end of the 1960s, access was limited for young unmarried women. Single women below the (state-specific) age of majority needed parental consent to access the pill. In the early 1970s, many states reduced the age of majority from 21 to 18 and passed laws increasing access to family planning and contraception without parental consent. Thus, there is state-by-time variation in access to oral contraceptives for young single women. Importantly, the changes in the age of majority were not actually driven by family planning concerns but rather by a desire to reduce the age of conscription for the Vietnam War.

Using the variations in when laws were passed across states, along with other evidence, “they found breaks in the time series of premarital sex behavior, age of marriage, and career investment, which occur for women born in the early 1950s (i.e., the first cohorts of unmarried women to have access to the pill). … [F]or instance, a surge in investment in professional programs started in the early 1970s when these women made their college education choices.”

An ongoing theme in Goldin’s work is how changes for broad groups, like “women,” often seem to happen for a cohort (that is, a group born at about the same time) where shifts in conditions combine to change expectations and also behavior.

Finally, I’ll point to one of Goldin’s contributions concerning women and the labor market in the 21st century. An overall pay gap between men and women remains. There’s solid research that much of the remaining pay gap is a “parental” gap, reflecting the fact that women end up doing more childcare than men. As a result, women find themselves less likely to have an uninterrupted career path, and more likely to end up in jobs that offer more work/life balance.

Goldin pushed this line of thought further. She found that “the majority of the current earnings gap [between men and women] comes from earnings differences within rather than between occupations.” In particular, the pattern in a number of occupations is that those who are most highly paid work long hours: that is, in many occupations the pay per hour is higher for someone working a 50-60 hour week than for someone working a 35-40 hour week or a 20-25 hour week.

As the Nobel committee writes: “[W]omen receive a wage penalty for demanding a job flexible enough to be the on-call parent. Men, on the other hand, receive a premium for being flexible enough to be the on-call employee, i.e., constantly available to meet the needs of an employer and/or client. In jobs where such “face time” is valued, one employee cannot easily substitute for another and part-time work is hard to implement. Nonlinearities in wages emerge as a result: workers willing to work many hours are rewarded with a higher wage.”
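The nonlinearity the committee describes can be made concrete with a small sketch. The numbers here are hypothetical, chosen only to illustrate the mechanism: if total earnings rise more than proportionally with hours, then pay per hour is higher for long-hours workers, which penalizes whoever takes the flexible, on-call-parent schedule.

```python
# Illustrative sketch (hypothetical numbers, not from Goldin's work):
# a convex earnings schedule, earnings = k * hours^gamma with gamma > 1,
# under which pay per hour rises with hours worked.

def annual_earnings(weekly_hours: float, k: float = 100.0,
                    gamma: float = 1.4) -> float:
    """Convex earnings schedule: gamma > 1 means long-hours workers earn
    a premium per hour; gamma = 1 would be a linear schedule with equal
    pay per hour at every workload."""
    return k * weekly_hours ** gamma

for h in (25, 40, 55):
    hourly = annual_earnings(h) / h
    print(f"{h} hrs/week -> {hourly:.0f} per hour (relative units)")
```

Under a schedule like this, two workers splitting one long-hours job earn less combined than one worker doing it alone, which is why jobs where employees can substitute for one another (as in pharmacy, discussed below) tend to have flatter, more linear pay and smaller gaps.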

Thus, certain occupations, like pharmacists, have a high level of workplace flexibility, and the wage gap per hour between men and women is relatively low. An ongoing subject for research is whether it is possible to have greater flexibility on hours, perhaps by pushing back on expectations that the fast-track and highly paid workers must also be those who work the longest hours. But it may be that greater workplace flexibility holds one of the secrets to further reductions in the male/female wage gap.

For a few earlier blog posts on Goldin’s work, see:

Reminder of the Ongoing Opioid Epidemic

The Centers for Disease Control reports that 645,000 Americans died from opioid overdoses from 1999 to 2021. As the CDC points out, the problem has hit in three waves: a rise in prescription opioid overdose deaths starting in the late 1990s; a rise in heroin overdose deaths starting in 2010; and what appears to be an ongoing sharp rise in overdoses from synthetic opioids like fentanyl starting around 2013.

For an overview of what we know about causes and economic costs, Wenli Li, Raluca Roman, and Nonna Sorokina provide a discussion in “The Economic Impact of the Opioid Epidemic” (Economic Insights: Federal Reserve Bank of Philadelphia, Third Quarter 2023, pp. 8-14).

Of course, overdose deaths are only one part of the costs of increased opioid use: others include illness, lost worker productivity, higher health care costs, and a greater likelihood of credit defaults. What caused the increase? As the authors point out, there doesn’t seem to be a surge in demand-side factors, so most explanations have focused on changes in supply:

Researchers have concluded that changes in demand-side factors alone—including physical pain, depression, despair, and social isolation—explain only a small fraction of the increase in opioid use and deaths. Moreover, there doesn’t appear to be a substantial link between local economic downturns and rising working-age mortality from drug overdoses, opioids or otherwise. Instead, researchers have identified supply-side factors as the primary explanation for the recent opioid epidemic.

(I should note that one of the citations here is to an article published in the Fall 2021 Journal of Economic Perspectives, where I work as Managing Editor: David Cutler and Edward Glaeser, “When Innovation Goes Wrong: Technological Regress and the Opioid Epidemic.”)

A lot of the policy response to the opioid epidemic has focused on limiting overuse of prescription opioids. Nothing wrong with that as a policy goal. But as the authors point out, making prescription opioids harder to get will tend to encourage people to use illegal opioids instead–which are the source of the bulk of the overdose deaths. Reducing the spread of fentanyl is a very difficult policy problem: it’s relatively easy to make in a highly concentrated form. Thus, it can be shipped easily and cheaply–even using regular mail or package delivery. The general idea is that it will be diluted extensively before sale, but quality standards in illegal drug markets being what they are, some users are likely to end up with unexpectedly heavy doses. Indeed, some users may not even know that what they are using has had fentanyl added.

Studies of state and local laws about opioid use don’t offer much guidance. As the authors note:

Federal, state, and local policymakers have introduced many opioid-related laws and regulations to combat the opioid epidemic. In this article, we focus on state and local laws, as do most previous studies. Broadly speaking, we can divide these regulations into two groups: those that aim to restrict opioid supply and those that aim to restrict opioid demand. However, none of these laws have been very successful at curbing opioid use and abuse.

It is truly remarkable to me how little attention the ongoing opioid crisis seems to get. Around 2015, annual deaths from opioid overdoses exceeded deaths from motor vehicle accidents–and they have kept rising. Maybe COVID made it impossible for us to consider two public health emergencies at the same time? Maybe when the villain of the piece wasn’t Purdue Pharma and its introduction of OxyContin, but instead became illegal opioids from noncorporate sources, the story no longer attracted the same audience? Anyway, for some of my previous efforts to at least keep the subject in public view, see:

Why Sweden Isn’t an Example of Socialism

When I meet Americans who self-identify as “socialists,” it is quite uncommon for them to advocate the abolition of private property and the “collective or governmental ownership and administration of the means of production and distribution of goods”–which is the dictionary definition of socialism. Instead most of the American “socialists” I meet favor a more expansive set of government benefits, including national health insurance, government-provided day care, more generous unemployment insurance, and the like. They favor what they perceive to be policies that are common across the European Union, especially the Scandinavian countries of northern Europe, and perhaps especially Sweden.

But Sweden, like many countries that had a geographically nearby seat to watch the activities of the Union of Soviet Socialist Republics, does not view itself as “socialist.” Johan Norberg tells the story in an extended essay “The Mirage of Swedish Socialism: The Economic History of a Welfare State” (Fraser Institute 2023). Norberg describes Sweden’s patterns over the long-run like this:

Sweden has a tradition of sticking to the path it has chosen and ignoring problems until they become too big to deny and everybody changes their minds at the same time. Then Swedes move fast in the opposite direction. Far from following the famed “middle way,” Sweden has often been a country of extremes. It liberalized the economy more than other countries did in the mid-1800s, socialized more than others in the mid-1900s, and then reversed course and liberalized again faster than others in the late 20th century.

As Norberg tells the history, what most Americans think of as Sweden’s “socialism” is a set of policies Sweden enacted in the 1970s, and then rethought and revised extensively in the 1990s.

I will skip over Norberg’s discussion of the pro-democracy, pro-market reforms in Sweden that happened between 1840 and 1870, and their evolution in the decades that followed, and jump to the state of Sweden’s economy in 1950. From being a poor country in 1870–with GDP per capita about 40% of Great Britain’s–Sweden’s market-based economic development had been a great success. Norberg writes (citations, footnotes, and references to figures omitted):

In 1950, … Sweden had achieved the fourth highest per capita GDP in the world, just behind the United Kingdom to that point. Sweden was by then a success story, the envy of the world. Between 1870 and 1950, life expectancy had increased from 45 to 71 years. Child mortality declined from 22.1 to 2.7 percent. Maternal mortality declined by over 90 percent ….

In 1950, Sweden was the third freest economy in the developed western world, after the United States and Switzerland, according to attempts to extend the Economic Freedom index retrospectively … Public spending as a share of GDP … was below 20 percent, well below countries like Britain, France, and West Germany. Taxes as a share of GDP were slightly lower than in the United States, and the highest marginal tax rate was 20 percentage points lower than in that country. In other words, Sweden was one of the richest, healthiest, and most successful societies the world had ever seen—and that was before it was a generous welfare state and had started experimenting with socialist ideas.

In the 1970s, Sweden decided that it was time for a major shift to larger government, higher taxes, and bigger social benefits. While the government did not literally take over companies, it imposed extensive price controls and took control of labor rules that had previously been negotiated between unions and firms. Norberg writes:

In the 1960s Sweden was on top of the world. The country had globally admired companies, an educated work force, and an open and competitive economy that delivered high growth, decent profits, and higher wages. … and West Germany. The conclusion many drew was that now the economy could afford a very big government indeed. The time for patience was over. … In just 20 years, public spending more than doubled, from 25.4 to 58.5 percent [of GDP] between 1965 and 1985. This came primarily from a rapid expansion of social services like health care, elderly care, and child care, and transfers like pensions and housing allowances. The marginal tax rate for blue collar workers increased from less than 40 percent in 1960 to more than 60 percent in 1980, and for white collar workers to above 70 percent. The payroll tax rose from 12.5 percent in 1970 to 36.7 percent in 1979. Capital gains were taxed as income, at progressive rates. In a series of steps, the corporation tax increased to almost 60 percent in the 1980s, even though it also offered generous deductions.


At the same time, the government raised the costs of doing business with a whole battery of regulations aimed at solving every conceivable problem and inequity. In 1970, Sweden introduced an opaque system of price controls, which forced businesses to negotiate price changes with business groups and government authorities. When Sweden devalued its currency it often implemented temporary bans on all price increases. Now it also gave up on the traditional Sweden model where labour market affairs were left to negotiations between business organizations and trade unions. Starting in 1974, the government regulated labour protection substantially, defining lawful reasons for termination and requiring that workplaces needing to fire staff for redundancy do so according to seniority (“last in, first out”).

Thus, Sweden appeared to outsiders in the 1970s and into the 1980s to be an example of a high-income country that had made a transition from a small-government capitalist orientation to a large-government welfare state with a socialist orientation. There was talk of Sweden as the “middle way” between capitalist United States and communist USSR.

But when you set aside the happy talk and looked at economic statistics, Sweden’s experiment was leaking at the seams almost immediately. The combination of high labor costs and inflexible regulatory control basically took down Sweden’s steel, shipbuilding, textile, and mining industries by the late 1970s. Investment sank, productivity gains dropped. Norberg: “Fewer companies were created in Sweden and the ones already in existence did not expand. In fact, by 1990, the Swedish economy had not created a single net job in the private sector since 1950, even though the population had increased by one and a half million people.” Sweden became more equal by subtraction: “Many of the country’s most important companies, entrepreneurs, and individualists left the country, primarily because taxation was suffocating and often made it impossible to pass family companies on to the next generation.”

Attitudes toward the generosity of Sweden’s welfare state also shifted. An earlier generation had welcomed the greater security, but also had strong feelings about only using the safety net when needed. A newer generation had less guilt about exploiting the system.

In the early 1980s, 82 percent of Swedes said it was never justifiable to claim government benefits to which you are not entitled. Thirty years later just 55 percent agreed with that statement. After generous sick leave benefits were implemented, Swedes who were objectively healthier than any other population on the planet were suddenly “off sick” from work more than almost any other population. As early as 1978, one of the founding fathers of Sweden’s welfare state, the economist Gunnar Myrdal, complained that the traditionally honest Swedes were obsessed with escaping tax, and were turning into “a population of cheats” …

By the early 1990s, Sweden’s economy was in a crisis: think three years of severe negative growth, high inflation, and nominal interest rates that at one point hit 500 percent. Unemployment rose to over 10%, where it remained for years. Government budget deficits rose to more than 10% of GDP.

It was time for another major set of reforms, and the prominent Swedish economist Assar Lindbeck was chosen to head a commission that would set an agenda. (Lindbeck wrote about the experience in “The Swedish Experiment” in a 1997 issue of the Journal of Economic Literature.) Sweden did not return to its pre-1970 model, but the changes were substantial. Here’s a partial list from Norberg:

[D]uring the next few years, Sweden cut public spending substantially, moving both expenditures and revenue closer to the OECD average. The country also reduced the benefit levels in its social security systems. Nineteen state-owned companies were privatized and public investment funds that had interfered with the investment decisions of private businesses were abolished. Private and commercial radio and television stations were permitted for the first time. Railways, buses, and domestic aviation were deregulated. The telecom and energy sectors were opened up to competition. Private employment agencies were permitted, and unemployment benefits reduced. The last vestiges of the price control system were abolished, with the infamous exception of rent control, which has continued to make it very difficult to get a rental apartment in growing cities like Stockholm. The central bank was given an explicit inflation target of 2 percent annually.

In 1992, Sweden initiated an ambitious opening up of public services when it created a national school voucher system, which gave families the freedom to choose independent schools for their children’s education. Private alternatives in government-subsidized childcare, elderly care, and health care started to proliferate. … In 1994, parliament decided to introduce a new pension system, which replaced defined benefits with defined contributions and included a “break” that automatically reduces payments in bad times. It also included individual accounts, which can be invested according to personal preference.

In this most recent shift, Sweden remained a country with a welfare state that is large by US standards, although not especially large by western European standards. However, this welfare state operates alongside an economy that is quite deregulated and open to international trade–by common measures, more so than the US economy. As part of this change, Sweden gave up the idea that it could pay for its welfare state by sticking corporations and the rich with the tax bill. Instead, the middle classes pay the bulk of taxes (for example, through a value-added tax), but also receive the bulk of benefits. Norberg writes:

In the 1990s, Sweden also gave up the pipe dream of making the wealthy pay for it all. Swedes learned that you could either have a big government or make the rich pay for it all, but you couldn’t have both. High earners and successful businesses are too few and too important for the country’s economy to deter or chase away with high taxes. Scaring off high earners and successful businesses had not just hurt innovation and risk-taking, it had also threatened the long-term financial basis for the welfare state. Now Sweden relies more on consumption taxes and flat payroll and local income taxes than it did before the reforms, which means that most citizens pay for most public services out of their own pockets and that the country is once again a more attractive place to do business … The overall effect is that Sweden’s tax system is now one of the least progressive in the OECD …

Norberg offers considerably more detail on Sweden’s evolution over time, but I hope my encapsulated description here makes the main point. There’s a lot to reflect on and to admire in how Sweden’s system manages tradeoffs between social equity and economic efficiency. But it’s not socialism: indeed, it explicitly focuses on supporting the market-based energies of capitalism as a method of funding the welfare state. For a sense of Sweden’s attitude toward “socialism,” Norberg starts his discussion by quoting an exchange with Göran Persson, who was Sweden’s Prime Minister from 1996-2006, and a member of the Social Democrat party:

“What do you think of socialism?”
“I’m a Social Democrat.”
“Not a socialist?”
“No, if you call yourself a socialist, they confuse you with a lot of crazies.”

The Upsurge of New US Firms: The Detailed Story

About a month ago, I wrote a post, “Is the US Economy Seeing an Upsurge of New Firms?”, based on some patterns I had noticed in the data. I didn’t realize that I was about to be big-footed–that is, some real experts on this subject were just about to weigh in. At the Fall 2023 Brookings Papers on Economic Activity conference, held yesterday and today, Ryan Decker and John Haltiwanger presented “Surging Business Formation in the Pandemic: Causes and Consequences?” A draft of the paper, along with drafts, slides, and full video for the conference as a whole, is available at the conference website.

Decker and Haltiwanger confirm the basic facts of my earlier post (I write thankfully). But they also put the facts in a broader perspective, by emphasizing that formation of new businesses was one of the ways in which the US economy pivoted during the dislocations of the pandemic recession, as well as in response to certain opportunities opened up by the recession like more widespread acceptance of remote work.

Here are the basic patterns under discussion. When starting a new firm, it’s common to apply to the Internal Revenue Service for an Employer Identification Number, which is needed if the plan is (someday) to hire employees. Such applications spiked after the start of the pandemic. Moreover, government statisticians also try to keep count of which applications seem likely to turn into firms that employ people–not just a business for one person. Here are the patterns:

The obvious question is then whether these applications for employer ID numbers are followed by the actual opening of new firms. The available quarterly data here is about “establishments,” which refers to a business operating at a particular location. One firm can have multiple establishments. This data is collected through the Quarterly Census of Employment and Wages, and it is based on the fact that employers need to pay into state unemployment insurance funds–and thus need to report where they are operating. Here’s the data on the formation of new establishments for several categories spelled out in more detail in the paper.

The authors summarize their overall findings this way (again, much more detail in the paper itself):

This set of facts lends itself to a compelling narrative of pandemic business and labor market dynamics. The pandemic sparked rapid, dramatic changes to the composition of consumer demand and to preferences for work and lifestyle, and these patterns have continued to evolve through mid-2023. From the standpoint of potential entrepreneurs, these dramatic changes presented opportunities—both to meet newly formed consumer and business needs and to change the career trajectories of the entrepreneurs themselves. Entrepreneurs made plans and applied to start businesses both early on and through mid-2023; some of these plans have resulted in new firms and establishments that hired workers in large numbers. Entrepreneurial opportunities and the demand for employees at these new firms appear to have played an important role in the “Great Resignation,” as some quitting workers likely flowed toward new businesses (as either entrepreneurs or new hires). Taken together, these patterns imply significant economic restructuring across industry, geography, and the firm size and age distribution. The extent to which these changes will be long lasting has yet to
be seen. …

The rise in applications and employer entry is highly concentrated in a few industries that are conducive to pandemic patterns of work and life (such as online retail and other high-tech industries), consistent with the changing sectoral structure of the economy. We also observe substantial spatial variation in the surge in applications and business entry, consistent with geographic restructuring. The surge in applications and business entry is especially notable in the South, with states such as Georgia standing out. Within large cities we observe a “donut effect” with applications surging more in the suburbs of metropolitan areas than in central business districts. …

We find a tight spatial correlation—at the state and county level—between surging business applications and quits (or excess separations, a close proxy for quits), with a much weaker correlation between applications and layoffs (or job destruction, a close proxy for layoffs). Among other possible explanations,
these results are consistent with workers quitting their jobs to start or join new businesses—and somewhat less consistent with job loss being a key driver of business formation. …

We also document a pandemic pause—and modest reversal—of the longer-run shift in activity toward large, mature businesses. The share of activity accounted for by young and small firms has risen; young and small firms exhibit a higher pace of dynamism than large and mature firms, so one might anticipate an ongoing increase in the pace of dynamism. In other words, we find early hints of a revival of business dynamism; but in many respects it is too early to ascertain whether a durable reversal of pre-pandemic trends is occurring.

I sometimes say that the pandemic recession had the effect of dramatically accelerating some changes that were already underway, but at a slower pace: telemedicine became common for a time; online education boomed for a time; online purchases and home delivery continued to grow; and work-from-home increased as well. The greater flexibility to start one’s own business may turn out to be another such shift.

Some Economics of Pharmacy Benefit Managers

It is a common belief, applicable across many different kinds of markets, that if you could just “cut out the middleman” and “pass the savings along to consumers,” everyone would be better off. Pharmacy benefit managers are quintessential middlemen. Thus, when it comes to addressing high drug prices, going after them has considerable political appeal. Matthew Fiedler, Loren Adler, and Richard G. Frank offer some useful background in “A Brief Look at Key Debates About Pharmacy Benefit Managers” (Brookings Institution, September 7, 2023).

What do pharmacy benefit managers (PBMs) do? The authors put it this way:

A PBM is an entity that administers a health insurance plan’s prescription drug benefit. One core function of a PBM is to negotiate drug prices with manufacturers; when negotiating prices, a PBM generally offers a drug a place on the plan’s “formulary” (which specifies which drugs the plan covers and on what terms and, thus, determines how much enrollees use the drug) in exchange for a price below the
manufacturer’s “list price.” PBMs also negotiate with pharmacies, generally offering the pharmacy a place in the plan’s network (which increases how many of the plan’s enrollees use the pharmacy) in exchange for accepting specified prices to dispense drugs. And PBMs perform administrative functions, notably processing pharmacy claims. Most of these functions (e.g., setting coverage terms, negotiating prices, establishing networks, and processing claims) parallel functions that insurers perform for non-drug benefits, but PBMs’ specialized knowledge of drug markets may allow them to perform them more effectively.

Who are the main PBM companies? Here’s a list. Note that about 75-80% of all claims are handled by three companies.

The list probably oversimplifies the role of PBMs. As the Brookings authors note, the PBMs are typically integrated with major health insurance companies. Here’s a chart showing major health insurance companies across the top, and who they use as PBMs for various purposes below.

What are the specific complaints against PBMs? Given that the big three control a huge share of the market, there is concern that the lack of competition might give them power to raise prices. Some concerns are more detailed. “The prices that PBMs charge payers for pharmacy claims often differ from the prices that PBMs pay the pharmacies that fulfill those claims,” so there is a “spread” that might be reduced. Sometimes, the rules that require patients to share costs for a certain drug are calculated in a way that is based on the original price of the drug, not on the discounted price negotiated by the PBM. When there is a manufacturer rebate for use of a certain drug, this money can sometimes flow to the PBM, rather than directly to patients. Overall, the notion is to find a way to squeeze these middlemen, and get lower drug prices for consumers as a result.
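To make the “spread” complaint concrete, here is a minimal hypothetical sketch (the dollar figures are illustrative assumptions of mine, not numbers from the Brookings report): the PBM bills the health plan one price for a claim, reimburses the pharmacy a lower one, and keeps the difference.

```python
# Hypothetical "spread pricing" illustration. The PBM charges the payer
# one price for a pharmacy claim and reimburses the dispensing pharmacy
# a lower price, retaining the spread. Numbers are made up for illustration.

payer_charge = 25.00      # what the PBM bills the health plan for the claim
pharmacy_reimb = 21.50    # what the PBM pays the pharmacy that fills it

spread = payer_charge - pharmacy_reimb
print(f"PBM spread on this claim: ${spread:.2f}")
```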

In their essay, it seems to me that Fiedler, Adler, and Frank are managing expectations about how much this approach is likely to achieve. For example, they point out: “Pre-tax operating margins for the three largest PBMs averaged a bit more than 4% of their revenues in 2022. Since PBMs’ revenues encompass both the
administrative fees charged to PBMs and payers’ net payments for claims, this implies that even completely eliminating PBMs’ margins would only modestly reduce payers’ drug-related costs.”
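The arithmetic behind that point is worth spelling out. In the sketch below, the revenue figure is a hypothetical placeholder of mine; only the roughly 4% margin comes from the quoted passage. Because PBM revenues include payers’ net claim payments, eliminating the margin entirely caps the savings at about 4% of payers’ drug-related costs.

```python
# Back-of-the-envelope version of the Brookings point: PBM pre-tax operating
# margins averaged a bit over 4% of revenues in 2022, and those revenues
# include payers' net payments for claims. So even driving margins to zero
# would save payers at most roughly 4% of their drug-related costs.

pbm_revenue = 100_000_000_000   # hypothetical: $100 billion of PBM revenue
operating_margin = 0.04         # ~4% pre-tax operating margin (2022 average)

max_savings = pbm_revenue * operating_margin
savings_share = max_savings / pbm_revenue
print(f"Maximum possible savings: ${max_savings/1e9:.0f}B, "
      f"or {savings_share:.0%} of drug-related costs")
```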

The potential cost savings from some of the steps being discussed are also uncertain. For example, if you favor more competition among a larger number of PBMs, it’s important to recognize that a bigger PBM has more negotiating leverage: if you are in favor of the US government using its considerable financial clout to negotiate lower drug prices for Medicare and Medicaid patients, then you already understand the principle that a small-scale PBM will have less leverage when negotiating with pharmaceutical producers and with pharmacies than a larger one. It also turns out that PBMs on average retain only about 9% of rebates from manufacturers, so requiring that share to fall to zero will not have much effect on final prices.
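The rebate point also lends itself to simple arithmetic. In the sketch below, the gross spending and rebate share are hypothetical assumptions of mine; only the roughly 9% retention figure comes from the text. Under those assumptions, forcing full pass-through of rebates lowers payers’ net costs by only a few percent.

```python
# Hypothetical rebate pass-through arithmetic. PBMs retain about 9% of
# manufacturer rebates on average; if rebates are some fraction of gross
# (list-price) spending, passing 100% of rebates through to payers
# lowers net costs only modestly. Gross spend and rebate rate are made up.

gross_spend = 1000.0     # hypothetical gross drug spending, in dollars
rebate_rate = 0.30       # hypothetical: rebates equal 30% of gross spending
pbm_retention = 0.09     # PBMs retain ~9% of rebates on average

rebates = gross_spend * rebate_rate
retained = rebates * pbm_retention
net_cost_now = gross_spend - (rebates - retained)
net_cost_full_passthrough = gross_spend - rebates

print(f"Net cost with 9% retention:  ${net_cost_now:.2f}")
print(f"Net cost with full pass-through: ${net_cost_full_passthrough:.2f}")
print(f"Savings from full pass-through: {retained/net_cost_now:.1%}")
```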

As in most economic discussions about the role of middlemen, it’s important to remember that they (usually) don’t just sit around with their hands out, collecting money. Some entity needs to negotiate on behalf of health insurance companies with drug manufacturers and pharmacies. Some entity needs to process insurance claims for drug prices. I do not mean to defend the relatively high drug prices paid by American consumers compared to international markets, nor to defend the costs and requirements for developing new drugs, nor to defend some of the mechanisms used by drug companies to keep prices high. But while it might be possible to squeeze some money out of PBMs for slightly lower drug prices, and it’s certainly possible to mess up PBMs in a way that leads to higher drug prices, it doesn’t seem plausible that reform of PBMs is going to be a powerful lever for reducing drug prices.