US State-Level Abortion Regulations: Causes and Effects

Regulations about abortion are often wildly controversial. But what effects do they actually have? Caitlin Myers addresses these issues in “From Roe to Dobbs: 50 Years of Cause and Effect of US State Abortion Regulations” (Annual Review of Public Health 2025, pp. 433-446).

As a starting point, consider the years before and after the 1973 US Supreme Court decision in Roe v. Wade that struck down existing abortion restrictions across the country. The left-hand panel shows the states that had repealed their bans on abortion before Roe in purple, those that had relaxed but not eliminated their bans before Roe in pink, and those in which abortion was legalized by Roe in gray. In the purple states that had already repealed their bans on abortion, the number of abortions had risen in the years before Roe, but had then started declining–and the decline continued after Roe. Part of the reason for the decline in the early-legalization states is that, after Roe, women no longer had to travel from other states where abortion was illegal. In the other groups of states, the number of abortions rose.

As Myers argues, the effect on abortion levels in states that repealed their abortion bans before 1973 was very large–probably larger than the increase in abortion following the Roe decision. She writes:

Of the three broad policy changes liberalizing abortion access—early reforms, early repeal, and repeal with Roe—it is early repeal that results in the greatest effects on national abortion and birth rates. As Joyce et al. (51) conclude following a detailed analysis of the effects of distance to early repeal states, “The story that emerges from these data is that…Roe v. Wade was arguably less important for unintended childbearing than was access to services in California, the District of Columbia and especially New York in the years before Roe” (pp. 813–14) because so many people were able to travel to these early repeal states even if their state of residence had not yet legalized abortion.

States then tested the limits of what the Supreme Court would allow with a variety of restrictions: mandatory waiting periods before an abortion, mandatory counseling before an abortion, different types of content that might be involved in that counseling, parental permission for teenagers and/or spousal permission for wives, whether Medicaid funding could be used to pay for abortions, whether abortions needed to be performed in or near hospitals, which doctors were allowed to perform abortions, and others. This array of rules–as they were proposed, passed or failed in legislatures, and were upheld or not by courts–provides a rich set of contexts for researchers.

Here’s one example. In North Carolina in the 1980s and into the 1990s, there was a state fund to pay for abortions for low-income women: in this way, the state did not draw on federal Medicaid funds to pay for abortions. But the state fund sometimes ran out of money. Myers writes: “Cook et al. (25) exploit a natural experiment that took place within North Carolina between 1980 and 1994 when the state abortion fund ran out of money on five different occasions. Comparing changes in outcomes among women seeking abortions and eligible for funding, the authors conclude that when funding is unavailable, about one-third of pregnancies that would have been terminated are instead carried to term …”

This kind of study is referred to as a “natural experiment”–that is, there was no plan for the North Carolina fund to run out of money. It seems unlikely that sexual activity in North Carolina was being adjusted according to the state of the fund. Instead, some North Carolina women seeking abortions found that funding was available, and others didn’t, and this had an effect on their decisions.

Myers goes into detail in considering the array of natural experiments that have been analyzed. For example, when a state altered its abortion laws, women who lived relatively close to that state were also affected, because it was relatively easy for them to travel there, while women living farther away were less affected, because their costs of traveling to that state were higher. Those interested in, for example, the application of difference-in-differences statistical methods may want to check out the paper.
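
For readers unfamiliar with the method, the difference-in-differences idea is simple enough to sketch in a few lines. The numbers below are purely illustrative–invented for this sketch, not drawn from any study Myers cites.

```python
# Minimal difference-in-differences sketch with made-up numbers:
# compare an outcome (say, a birth rate) in a "treated" state that changed
# its abortion policy against an untouched "control" state, before and after.

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """DiD estimate: the treated group's change minus the control group's change."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical birth rates per 1,000 women (illustrative only):
effect = diff_in_diff(treated_pre=62.0, treated_post=65.5,
                      control_pre=61.0, control_post=62.0)
print(effect)  # 2.5
```

The control state's change (here, 1.0) stands in for what would have happened anyway; subtracting it from the treated state's change (3.5) isolates the estimated policy effect (2.5).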

Here, I’ll mention some of the bottom lines of this survey of the evidence (citations omitted here, but appear in the article itself): When and where abortion is more restricted, birth rates are higher. Higher birth rates, especially for women at younger ages, are associated with lower levels of educational achievement, and thus with lasting effects on employment outcomes. These effects are typically larger for black women than for white women.

What about the period since the 2022 US Supreme Court decision in Dobbs v. Jackson, which struck down Roe v. Wade and thus gave states much wider latitude in setting abortion laws? Of course, the evidence on this point is still evolving, and the setting for abortion is now rather different than it was before 1973. Myers notes:

  • “Abortion prior to 12 weeks’ gestation remains legal in 34 states (65) and many states have bolstered their protections (22), providing many more destinations than existed in 1971, when abortion was legal in only 6 jurisdictions.”
  • “The delivery of abortion services has also evolved, with a major shift occurring in 2000 when the US Food and Drug Administration (FDA) approved the drug mifepristone for the termination of pregnancies. The proportion of medication abortions grew rapidly, from 6% of all abortions in 2001 to 39% in 2017.”
  • “[I]n December 2021 the FDA lifted the restriction permanently (55), allowing health care providers to dispense abortion medications directly to patients via mail without requiring the patient to receive in-person consultation or tests (85). This expanded abortion access in the 32 states that did not restrict telehealth abortion (5), likely fueling the rise in medication abortions to 63% of all abortions by 2023 … By the end of 2023, telehealth accounted for nearly 1 in 5 abortions in the United States (83), and national abortions had actually risen relative to pre-Dobbs levels …”
  • “Yet not everyone seeking an abortion can find a way to drive hundreds of miles to reach facilities in nonban states or will find telehealth medication abortion an acceptable option. Near-total abortion bans enforced in the first 6 months after Dobbs are estimated to have increased births in ban states by an average of 2.3% relative to if no ban had been enforced (26). The estimated effects of bans on fertility are greater in states where distances are greatest, reaching 4.4% in Mississippi and 5.0% in Texas …”

In addition, teenage birth rates have fallen dramatically over the last three decades for an array of reasons not directly related to availability of abortion: less sexual activity, greater use of contraception, and more broadly, a larger share of young women viewing their early adulthood as a time for education and job experience, with later ages for marriage and childbearing.

Update on the Military Base Realignment and Closure Process

One of the strongest examples of how a commission report can overcome problems with a legislative process involves the Base Realignment and Closure Process that started in 1988, and has now gone through five rounds. The challenge was that the number of active duty US military personnel had risen to about 3.5 million during the Vietnam War in the late 1960s, but then had fallen to about 2 million by the later part of the 1980s. It was obvious that the number of military bases should also decline, but the US Congress had a very hard time doing it. Any Congressperson who had a military base in or near their district or state would not vote for cutting their own base; moreover, they would not vote for cutting bases in other places–out of fear that the base in their own district might be the next to go. Literally no military bases were closed between 1977 and 1987.

The idea behind the Base Realignment and Closure Process was to have an outside commission draw up a list of bases to be closed and a timeline for closing them. Congress could then vote for or against the proposed list as a whole–but Congress committed in advance not to amend the list. Chandler S. Reilly and Christopher J. Coyne bring us up to date in “The Political Economy of Military Base Redevelopment” (Eastern Economic Journal, 2025, 51: 7–26). As they write: “Since the initial round in 1988, four subsequent BRAC rounds were completed in 1991, 1993, 1995, and 2005, resulting in the closure of over a hundred major military bases, with the property transferred to local communities for redevelopment.”

But while the process has facilitated base closures, Reilly and Coyne point out that the visible hand of political clout has continued to play a role in the redevelopment process. They write:

In most cases, base property is not simply auctioned off with the rights to that property transferred to private parties. Instead, a political process governs base redevelopment from start to finish. Some forms of property transfer, such as Economic Development Conveyances (EDC) and Public Benefit Transfers (PBT), follow predetermined redevelopment paths with the intention of stimulating economic development or benefiting certain segments of the community. These rules incentivize redevelopment along predetermined lines which results in property being zoned for specific uses over many years. … [B]ase repurposing hinges on a political process where interest groups compete not by offering the highest market bid for reuse rights, but rather through waste-generating rent-seeking activities.

These zoning decisions may represent some mixture of special-interest lobbying, political clout, and attempts to pass costs to others. For example, the California State University Monterey Bay (CSUMB) was established after the closure of Fort Ord on that site. There had not been any previous plan for the establishment of a Cal State campus in that area; earlier reports noted that existing Cal State campuses had plenty of space for projected enrollments. But if the state was getting the land and buildings for “free” (ignoring opportunity cost of alternative uses, of course) and the federal government was chipping in with some spending to build out the rest of the campus, it seemed like a good idea.

Political processes can even lead to gridlock in redevelopment. The interests of those living right beside the former base may not be aligned with the interests of those at the regional or state level (for example, should the base become a nature preserve, a shopping mall, a mixed-use development, or an industrial park?). It can take years, or in some cases decades, for these interjurisdictional issues to be resolved. The authors write: “As of 2017, there remained over 70 thousand acres of base land that had yet to be disposed, representing around 19 percent of the total acreage of closed bases over the five rounds …”

The authors suggest the merits of auctioning the land from base closures, which has the advantage of straightforwardness–although one expects that politicians would still push to play a heavy role. There is probably an intermediate approach in which politicians work with a master planner who would designate some of the land for parks or transportation or other uses, and then auction off the rest within the established framework. But when “free” land and buildings seem to be available, it’s not easy for local politicians to take a step back.

An obvious question is whether the commission approach might be used to resolve other logjammed issues. Back in the early 1980s, for example, when the Social Security system was verging on insolvency, a National Commission on Social Security Reform headed by Alan Greenspan proposed a set of changes that became the basis for a 1983 law which made the system solvent up until the early 2030s. Maybe it’s time for another such commission? Or imagine if Congress designated a target for spending cuts and another target for tax increases, and then set up two committees to propose how to meet those targets. In this case, the arrangement might be that Congress could vote for or against, but any amendment suggesting more spending or lower taxes in one area would need to include a revenue-neutral spending cut or tax increase in another area, to remain within the overall targets. Yes, it would be nice if Congress could debate and vote to address these kinds of issues like adults. But perhaps other arrangements are needed.

The Import-So-That-They-Can-Export Firms

Much of the discussion about trade and imports is based on discussions of products and sectors of the economy. But among the researchers who study international trade, a major shift has been a focus on the relatively small number of firms that are directly involved in international trade. It turns out that many of these firms are both major importers and major exporters: indeed, they import intermediate goods as part of a global supply chain, adding economic value in the US economy while planning to export a finished (or more-finished) product. When you think about what US firms that are involved in international trade actually do, the arguments over tariffs take on a different flavor.

Pol Antràs provides a nice overview of this research in his FBBVA Lecture 2024: “The Uncharted Waters of International Trade,” delivered at the annual meetings of the European Economic Association, and now published in the Journal of the European Economic Association (February 2025, pp. 1-51). Researchers in international trade will be especially interested in the “uncharted waters” for future theoretical and empirical research that Antràs describes. Here, I’ll focus on looking back at the “charted waters” of key facts discovered by research in the previous decade or two.

(For an article from a few years back as this line of research got underway, I can recommend Andrew B. Bernard, J. Bradford Jensen, Stephen J. Redding, and Peter K. Schott, “Firms in International Trade,” from the Summer 2007 issue of the Journal of Economic Perspectives, where I labor as Managing Editor.)

Here’s Antràs with some facts about how only a small share of US firms is involved in exporting.

First, … in the real world, only a small proportion of firms engage in exporting, with most exporting firms targeting just a few markets. … [O]nly 35% of all manufacturing firms in the United States exported in 2007. Furthermore, this is not driven by universal exporting in some sectors and zero exporting in import-competing sectors: The share of firms that export is highest among firms in “Computer and Electronic Products,” reaching 75% export participation, but this share is positive and significantly lower than 50% in most sectors.

Second, the distribution of exporters is highly skewed. Despite accounting for only 0.03% of all US manufacturing firms … the top 1% of exporters accounted for a staggering 80.9% of US manufacturing exports. The top 2%–5% and top 5%–10% accounted for an additional 12.1% and 3.3%, respectively, leaving the contribution of the bottom 90% at a mere 3.7% of total US exports. This phenomenon is not special to the United States. The top 1% of exporters accounted for 77% of exports in Hungary, 68% of exports in France, 59% of exports in Germany, 53% of exports in Norway, 51% of exports in China, 48% of exports in Belgium, 47% of exports in Denmark, 42% of exports in the United Kingdom, and 32% of exports in Italy (Mayer and Ottaviano 2008; Manova and Zhang 2012; Ciliberto and Jäkel 2021). Why are exporters often in the minority, even in an economy’s most competitive sectors, and why are aggregate exports so concentrated among a small number of firms?

The third stylized fact unveiled by empirical work in the late 1990s is that … exporters appear to be systematically different from non-exporters: they are larger, more productive, and operate at higher physical capital and skill intensities. … [T]hese differences are very large. US exporters are on average 1.11 log points (or 203%) larger in terms of employment than non-exporters in the same sector, and even controlling for the number of employees, exporters feature substantially higher sales, labor productivity, total factor productivity (TFP), wages, capital intensity, and skill intensity.
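
The conversion from log points to a percentage in that quote is easy to verify: a gap of b log points corresponds to exp(b) − 1 in proportional terms.

```python
import math

# Convert a log-point gap to a percent gap: exp(b) - 1.
log_points = 1.11  # exporter vs. non-exporter employment gap quoted above
percent_larger = (math.exp(log_points) - 1) * 100
print(round(percent_larger))  # 203
```

So a 1.11 log-point gap means exporters are roughly triple the size of non-exporters in the same sector, not 111% larger.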

A similar pattern arises for imports: that is, a relatively small share of firms accounts for a very large share of imports, and most of this trade involves inputs to finished goods, not the finished goods themselves.

Perhaps most notably, the vast majority of world trade is not in finished products: It has been estimated that trade in intermediate inputs accounts for as much as two-thirds of world trade (Johnson and Noguera 2012). This implies that global firms not only export but also import. … More specifically, importers in the United States are in the minority, the distribution of US imports is as skewed as that for exports, importers are larger, more productive, and more capital and skill intensive than non-importers … Antràs, Fort, and Tintelnot (2017) further document that US importers are not only larger than non-importers, but that their relative size advantage is also increasing in the number of countries from which they source.

Indeed, in many cases imports and exports happen within a single firm: that is, the firm owns overseas suppliers and imports from them, and it owns overseas distributors and exports to them: “Using newly merged data on US firms’ exports and imports, and their global production locations in 2007, Antràs et al. (2024) estimate that around 80% of US exports and imports are accounted for by US firms that manufacture goods both in the US as well as in foreign countries.”

The current high-drama agenda of threatening tariffs, then backing away, then negotiating, then threatening again, all makes for lively headlines and talk shows. Yes, after a transition period of years, and likely a decade or more, some of these firms that import-to-export could re-invent their production processes with much more reliance on domestic supply chains. But remember, these import-to-export firms evolved in this way because it was more cost-effective for them to do so–that is, there were gains from trade. These firms buy inputs in global markets either because the products aren’t available in US markets, or are available only at a substantially higher price; similarly, they export because global markets have the necessary demand to absorb the quantities that they produce.

These large US firms that import-to-export, often within the structure of the firm itself, are often among the crown jewels of the US economy. Remember, they are well above average in “sales, labor productivity, total factor productivity (TFP), wages, capital intensity, and skill intensity.” For these kinds of firms, which represent the lion’s share of US trade, the issue with tariffs isn’t about whether a family will be able to afford toys or T-shirts for their children. If these firms end up over time facing both substantially higher tariffs on their imports of inputs for production and retaliatory tariffs on their exports, that policy will cut the heart out of their business model.

Prepping for the Next Pandemic

If you are like me, you spend a certain amount of time trying not to remember the pandemic experience. But the COVID-19 pandemic did cause more than one million American deaths. In a world of sane and sensible prioritizing and policy-making, spending some time and effort focused on how to reduce the risks and costs of a future pandemic seems potentially productive. Alex Tabarrok discusses a few pragmatic possibilities in “Pandemic preparation without romance: insights from public choice” (Public Choice, published online April 16, 2025).

One metaphor for America’s level of unpreparedness for the COVID-19 pandemic is warehouses of rotting N95 masks. Tabarrok notes:

[C]onsider that The Strategic National Stockpile (SNS) of personal protective equipment (PPE) was severely inadequate to meet the demands of the COVID-19 pandemic. At the start of the pandemic, the stockpile had only about 35 million N95 masks on hand, far short of the estimated 3.5 billion that would have been needed to adequately protect healthcare workers and first responders. Moreover, much of the stockpile was rotting as the N95 masks were more than 10 years old by the time of the pandemic.


I should emphasize that Tabarrok is not claiming that his recommended policies, even taken all together, can eliminate the costs of future pandemics. But if we could reduce the cost by, say, just 10%, the US savings alone would have been more than 100,000 lives and more than $1 trillion in lost economic output. Here are four of his options:
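
The 10% arithmetic is worth making explicit. It uses the one-million-death figure from above, and the $1 trillion savings figure implies a total US output loss on the order of $10 trillion (an assumed round number here, consistent with common estimates):

```python
# Back-of-envelope check on the 10% savings claim.
us_deaths = 1_000_000     # lower bound on US COVID-19 deaths, from the text
us_output_loss = 10e12    # assumed total lost output, in dollars
reduction = 0.10          # a 10% reduction in pandemic costs

print(int(us_deaths * reduction))            # 100000 lives saved
print(us_output_loss * reduction / 1e12)     # 1.0 trillion dollars saved
```

Since the one-million figure is itself a lower bound, the "more than 100,000 lives" phrasing follows directly.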

#1: Testing for disease at sewage treatment plants

People infected with SARS-CoV-2 shed genetic material from the virus in their feces (Bivins et al. 2020). Wastewater surveillance can detect the presence, concentration and growth of this genetic material before people present clinically. Thus, wastewater surveillance gives public health officials an early warning which can be used to allocate scarce resources and to implement control measures. More generally, wastewater surveillance can detect a host of viral and bacterial pathogens including influenza viruses, poliovirus, norovirus, hepatitis A and E viruses and bacteria such as Escherichia coli and Salmonella. Wastewater surveillance to monitor antibacterial resistance may be of special importance (Philo et al. 2023; Singer et al. 2023). As with surveillance for SARS-CoV-2, wastewater surveillance more generally can be used to predict disease outbreaks more quickly, track the spread and virulence of pathogens and novel variants of concern, and inform and provide feedback to public health decisions (Wu et al. 2020).

#2: Build a vaccine library, by doing the research in advance on vaccines for viruses most likely to cause an outbreak

In 2016, the WHO identified 11 viruses with the greatest potential to cause severe outbreaks. Gouglas et al. (2018) estimated that developing at least one vaccine candidate for each of these viruses up to phase 2a would cost approximately $2.8 to $3.7 billion in total (see also Krammer 2020). Bringing a vaccine candidate up to phase 2a means designing the vaccine and evaluating it for safety and essentially “proof of concept” in small trials. Prior to a significant outbreak, it would not be possible to run phase 3 efficacy trials. It should be clear that these costs are small, almost trivial, relative to the expected gains. It’s notable that SARS-CoV-1 was on the WHO’s list. The knowledge gained from studying SARS-CoV-1 helped to speed a vaccine for SARS-COV-II but had SARS-COV-I vaccines been developed to Phase 2a prior to the COVID pandemic, for example, we could have likely knocked months off the development process for SARS-COV-II, saving perhaps millions of lives and trillions of dollars worldwide.

#3: When a virus hits, test the vaccines with “human challenge trials”

COVID vaccines were tested through traditional randomized controlled trials (RCTs) in the field. In an RCT, participants are randomly assigned to either a vaccinated (treatment) group or an unvaccinated (control) group, and both groups resume their normal activities until enough participants contract COVID to establish a statistically significant difference in infection rates. A major drawback of RCTs in a pandemic is the unpredictability of reaching the infection threshold required for statistical significance. If infection rates are low or participants take steps to avoid exposure, trials can be prolonged, delaying vaccine rollout. While increasing the trial size can reduce these delays, it also increases the cost and complexity of the trials.

In contrast, in a human challenge trial (HCT), participants are randomly split into two groups and all of them are deliberately exposed to the virus, accelerating the timeline for obtaining results. Since participants are deliberately exposed the number of participants in a human challenge trial can be much smaller than in an RCT, perhaps on the order of 50–100. Most importantly, where an RCT might take years to produce results, a HCT can have results in a matter of months or weeks (Eyal and Lipsitch 2021; Nguyen et al. 2021). For a variety of reasons, HCT are not necessarily full substitutes for RCTs, but they are surely complements and should be used in emergencies.

#4: A Pandemic Trust Fund

As another example, some $60 billion was spent on special programs to pay furloughed pilots, flight attendants, and other airline staff as travel demand plummeted. Why? One factor was that the airlines were already well organized and politically active. The airlines, for example, spent over one hundred million dollars on lobbying in the year before the pandemic (Evers-Hillstrom 2020). During the pandemic, the airlines were also joined in their lobbying efforts by the airline unions making for a politically powerful team on both sides of the aisle. The lines of power were also well defined. The airlines knew, for example, which members of Congress sat on the requisite committees and what they needed.

In contrast, OWS [Operation Warp Speed, the program for developing vaccines] was a new program with few concentrated interest groups and no previous lobbying efforts. Although some of the vaccine manufacturers understood lobbying, there was no locus of support in Congress because committee responsibilities for a program like OWS had not been established. OWS was run primarily out of the executive and the DOD [Department of Defense]. The program was also controversial from the beginning and any lobbying at the time from the vaccine manufacturers would have been highly scrutinized.

The lesson from political economy is that we do not want emergency funds to be drawn, or to be perceived to be drawn, from other programs. Pre-approved legal authority to spend is necessary to quickly address a low-probability, high-cost emergency. One way to do this would be to establish a Pandemic Trust Fund (PTF) nominally composed of say $250 billion in US government bonds. The PTF would be something of an accounting fiction, similar to the Social Security Trust Fund, but accounting fictions can have real effects. … By clearly denoting pandemic spending rights, a pandemic trust fund would avoid budget battles in the event of a pandemic. At $250 billion and 3% interest, a PTF could also generate annual revenues of $7.5 billion for ongoing pandemic spending. Some of this spending would be wasted but sausages and legislation both require pork as an input.
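
The trust-fund revenue in that passage is simple interest arithmetic, using the $250 billion and 3% figures from the quote:

```python
# Annual revenue from the proposed Pandemic Trust Fund: principal times rate.
principal = 250e9   # $250 billion in US government bonds
rate = 0.03         # 3% interest
annual_revenue = principal * rate
print(annual_revenue / 1e9)  # 7.5 (billion dollars per year)
```

The point of the accounting structure is not the revenue itself but the pre-committed legal authority to spend it.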

In the context of total US federal spending, none of these steps are especially costly, but having them in place could make a real difference. As Tabarrok points out, there were plenty of well-publicized warnings in the decade or two before the pandemic, including high-profile stories in outlets like TIME and CNN, a Bill Gates TED talk seen by millions, and even the 2011 movie Contagion. But when the crunch came, America was not well-prepared. There will be a next time.

How a Small Share of Firms Drive Economic Growth

My guess is that everyone would be happier if economic growth was evenly distributed, so that everyone’s income rose in lockstep. Instead, growth is a disruptive process, with some firms and sectors rising while others decline. As a wise economist once put it, the process of growth could in theory be like “yeast,” with everything expanding at once, or like “mushrooms,” with spurts of growth in certain areas. But most of the time, it’s mushrooms.

A team from the McKinsey Global Institute writes about the mushrooms in “The power of one: How standout firms grow national productivity” (May 6, 2025). The thesis, as stated in the subtitle: “National productivity growth is a matter of few firms taking bold strategic action rather than millions of firms raising efficiency.” For the relatively short time frame they analyze in this study, from 2011 to 2019, this seems likely to be true.

The authors have a dataset of 8,300 firms across the US, UK, and German economies, all with at least 50 employees and many with more than 500 employees, focused on four sectors: retail, automotive and aerospace, travel and logistics, and computers and electronics. They refer to this limited group of companies in each country as a “lab economy.” They define a “Standout” firm as a company whose productivity growth, by itself, adds at least 0.01% to the productivity growth of the entire set of companies in that country’s lab economy. Conversely, they define a “Straggler” firm as a company that, by itself, subtracts at least 0.01% of productivity growth from the lab economy. Of course, most firms are between these extremes.
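
To make the definition concrete, here is a minimal sketch of the Standout/Straggler classification. The firm names and contribution numbers are invented for illustration; they are not from the McKinsey data.

```python
# Classify firms by their contribution to lab-economy productivity growth,
# measured in percentage points (hypothetical inputs, not McKinsey data).

THRESHOLD = 0.01  # percentage points, per the report's definition

def classify(contribution_pp):
    """Label a firm by its percentage-point contribution to aggregate growth."""
    if contribution_pp >= THRESHOLD:
        return "Standout"
    if contribution_pp <= -THRESHOLD:
        return "Straggler"
    return "in between"

firms = {"A": 0.045, "B": 0.002, "C": -0.013}  # made-up contributions
labels = {name: classify(c) for name, c in firms.items()}
print(labels)  # {'A': 'Standout', 'B': 'in between', 'C': 'Straggler'}
```

The striking empirical claim is how few firms clear that 0.01-point bar, and how much of total growth they account for.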

Three conclusions from the report seem worth emphasizing, in part as explanations for why the US economy has been outperforming the UK and German economies.

First, a relatively small number of Standouts and Stragglers can drive the overall productivity growth patterns of an economy. The report notes: “Fewer than 100 firms in our sample of 8,300—a group that we have dubbed Standouts—accounted for about two-thirds of the positive productivity gains in each of the three country samples we analyzed. … To give a sense of how important a single firm can be, just another dozen or so of the largest Standouts could have doubled productivity growth in their entire country. … In the United States, for instance, 44 Standouts—5 percent of sample firms, accounting for 23 percent of employment share—generated 78 percent of positive productivity growth. … US Standouts included household names like Apple, Amazon, The Home Depot, and United Airlines.”

Second, the US has a higher proportion of Standouts relative to Stragglers, compared to the UK and Germany: “US productivity growth from 2011 to 2019 was faster than that of the other countries in our sample at 2.1 percent, compared with 0.2 percent in Germany and close to zero in the United Kingdom. … The US sample had three times more Standouts than Stragglers, while the German and UK samples had almost even numbers.”

Third, US Standouts are more likely to grow and expand, while US Stragglers are more likely to contract, compared with the UK and Germany: “Firms in the US sample had more reallocation of employees from less productive to more productive firms. Leaders grew faster, and underperforming firms more swiftly restructured or exited. In the United States, Standouts include scalers (firms far above average sector productivity that contribute by gaining employees) and restructurers (firms with below-average sector productivity that contribute by losing employees). In Germany and the United Kingdom, this was not the case. Rather, these countries preserved underperforming firms as Stragglers. Frontier firms scaling and gaining share added 0.6 percentage point to productivity growth in the United States, and unproductive firms exiting contributed an additional 0.5 percentage point. Overall, dynamic reallocation, including reallocation across subsector boundaries, added 0.9 of 2.1 percentage points—slightly less than half—to productivity growth in the US sample. In contrast, the contribution of reallocation was negligible in Germany and the United Kingdom. This may be explained by the fact that the United States has highly dynamic factor markets, allowing for quick entry and exit as well as fast scale-up and restructuring.”

I’ll add that over longer time periods, the “standout” firms will change, and gradual gains by all of the intermediate firms will loom larger. As the report notes, “The millions of MSMEs [micro, small, and medium sized enterprises] outside our sample collectively contributed up to 30 percent of productivity growth in the four sectors in the national statistics. Indeed, a handful of them may emerge as the Standouts of tomorrow.”

Perhaps the bigger lesson is that all nations claim that they want dynamic standout “superstar” firms (for previous discussions of the role of such firms, see here and here). But then, when those dynamic firms start expanding, they create economic disruption and start driving other competitors out of business. At that point, political pressure will arise to rein them in. But sustained economic growth, at least in the short- and medium-run, is typically mushrooms, not yeast.

What’s a New Drug Worth?

In a juxtaposition of events that redefines the meaning of “coincidence,” President Trump announced a new policy for prescription drug pricing this morning, and the Spring 2025 issue of the Journal of Economic Perspectives, released three days ago on Friday morning, begins with a four-paper symposium on drug pricing. (Full disclosure: I work as Managing Editor of the JEP, so this coincidence was perhaps more apparent to me than to others.) The four JEP papers are:

“Economic Markets and Pharmaceutical Innovation,” by Craig Garthwaite
“Patents, Innovation, and Competition in Pharmaceuticals: The Hatch-Waxman Act after 40 Years,” by C. Scott Hemphill and Bhaven N. Sampat
“Lessons for the United States from Pharmaceutical Regulation Abroad,” by Margaret K. Kyle
“The Economics of Generic Drug Shortages: The Limits of Competition,” by Rena M. Conti and Marta E. Wosińska

Trump’s proposal starts from the well-known fact that US consumers pay higher prices for brand-name prescription drugs than buyers in other countries. His executive order (yet to be tested in court) would require that US consumers pay prices for drugs no higher than those charged in other countries. The JEP paper by Margaret Kyle provides useful context.

Kyle points out that Trump’s proposal fits under the category of “external reference pricing,” which is to say that US drug prices for brand-name drugs would be set based on prices in other countries. Of course, if this were to happen, the players in the market would adjust: for example, drug companies would probably seek to charge more for brand-name drugs in other countries. Trump’s executive order does not differentiate between brand-name and generic drugs, but the logic of the order suggests the possibility of higher US prices for generic drugs.

Kyle points out that many European countries already have a version of “external reference pricing”–in which prices for a drug in one European country are not supposed to be higher than in neighboring countries. Strategic maneuvering results. Kyle writes:

A less optimistic assessment of external reference pricing considers the European experience. As noted above, external reference pricing like this would induce a number of strategic responses from other stakeholders. These include delayed launch and/or supply limitations to lower-price markets, as well as efforts to make products less comparable across countries (Kyle 2007, 2011; Maini and Pammolli 2023). … Some European countries also use hidden rebates. For example, the use of France as a reference by other countries ultimately led to agreements between manufacturers and the government to establish a public price as well as secret rebates paid by manufacturers back to the government (Kanavos et al. 2017). This allows the official price (that which is referenced by other countries) to be higher, like the list price in the United States, than what is in fact paid. These nonpublic prices have prompted calls for greater price transparency, but the effects of increased transparency here are ambiguous. When (true) prices are secret, a manufacturer can more easily lower its price in a country, because it sees no negative consequences from having that secret price referenced by other countries. In concentrated markets, transparent prices could also facilitate collusion by manufacturers. However, nonpublic prices make economic assessments much more challenging. The evidence suggests that US adoption of reimportation or external reference pricing would have only modest effects on US drug prices (but would probably reduce access or price transparency in other countries).

But there are two elephants in the room along with this discussion. One is that the higher prices for brand-name drugs paid by Americans also fund the research and development costs of pharmaceutical companies. The Trump administration is seeking to cut government support for R&D in other ways, like reducing grants given through the National Science Foundation. If we are threatening to cut off the sources of funding for pharmaceutical R&D, it raises a fundamental question: What’s a new drug worth, anyway?

The fundamental tradeoff in US pharma markets is that drug companies do research, get patents, and then charge a lot for brand-name drugs. But after the patents expire, the drugs become available in generic versions, where US consumers actually pay less than those in other countries. Hemphill and Sampat point out in their JEP article how this tradeoff was formalized into law 40 years ago with the Hatch-Waxman Act. As Conti and Wosińska point out in their JEP article: “In 2023, 92 percent of US drug prescriptions were filled as generics, representing less than 13 percent of overall invoice spending on drugs …”

Of course, the primary benefit of new drugs is better health. In JEP, Garthwaite sketches some past and future benefits of new drugs:

Pharmaceutical innovations are responsible for 35 percent of the remarkable decline in cardiovascular mortality from 1990 to 2015 (Buxbaum et al. 2020). Previously deadly conditions such as HIV/AIDS have been transformed into manageable chronic maladies and others such as hepatitis-C have been cured. Gene therapies are becoming more commonplace as treatments for a wide range of rare and deadly genetic conditions. Advancements in immuno-oncology are providing meaningful advances across a variety of cancers as the body’s natural systems are used to combat cancer. Most recently, the first truly effective treatments for obesity in the form of GLP-1 agonists have emerged with corresponding improvements across a host of cardiometabolic outcomes such as heart disease, diabetes, and chronic kidney disease.

However, the benefits of successful pharma R&D go beyond immediate health benefits for the ill. Garthwaite writes:

[M]edical technologies transform the medical risk individuals face (that is, becoming afflicted with a condition for which there is no treatment) into a financial risk (that is, finding a way to finance the purchase of medical innovations if they get sick (Lakdawalla, Malani, and Reif 2017). All risk-averse consumers should value this reduction in health variance. Indeed, the insurance value of the new innovation can even exceed the value of health insurance in the first place, especially for disease areas where the existing treatment armamentarium is quite poor and the physical effects of the condition are quite severe. This could explain why many treatments for rare diseases so often exceed several thresholds based solely on clinical value. Another gain from new drugs is that scientific progress is often iterative, building on the knowledge and insights from previous advances. Thus, an optimal level of innovation will only be achieved to the extent the eventual value created for society by the next generation of innovations is in some way accounted for in revenues for the manufacturers making incremental progress. … Consider how medical innovations can change available treatment options for individuals who are not yet afflicted, but could become sick in the future.

To put it more bluntly, none of us knows what health conditions we or our loved ones may face in the future. Successful new drugs reduce this risk of what might happen. Paying a lot for a new drug when you need it is no fun, but not having the drug available at all is probably worse.

The other elephant in the room is the long-term health of the pharmaceutical industry. The Trump administration has put a high priority on supporting US producers in many industries. Well, US firms account for 40-50% of global pharmaceutical sales, according to industry sources. There are about 350,000 US jobs in “Pharmaceutical and Medicine Manufacturing.” The success of US firms is driven by spending 20% or more of their revenue on research and development in most years. In short, policies that dramatically reduce R&D spending by pharma companies will kneecap their ability to stay ahead as leading exporters in global markets, and pose a threat to several hundred thousand US jobs.

The papers in the JEP symposium discuss a variety of potentially useful mechanisms for negotiating lower drug prices for US consumers that do not threaten to cut off the future pipeline of new drugs.

But clearly, President Trump prefers what might be called a bumper-car approach to issues: that is, ram full-speed into a problem with a half-baked proposal, then spin the wheel back and forth while backing rapidly away, then ram full speed into the same problem again, and so on. Whatever the merits or demerits of this approach as a negotiating strategy, R&D projects are long-run investments that pay off only over extended periods of time. Playing bumper-car games means that industry will focus on projects with a more immediate payoff, while reducing or postponing projects that would only have longer-run payoffs. But it will be very hard to identify those groups of future patients who suffer because future breakthroughs in new drug therapies are delayed, or don’t happen at all.

Spring 2025 Journal of Economic Perspectives Freely Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided back in 2011–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Spring 2025 issue, which in the Taylor household is known as issue #152. Below that are abstracts and direct links for all of the papers. I plan to blog more specifically about some of the papers in the next few weeks as well.

________

Symposium: Drug Pricing and Regulation

“Economic Markets and Pharmaceutical Innovation,” by Craig Garthwaite

     Pharmaceutical innovations reach the market after a long and risky process that requires large, fixed, and sunk investments. Governments provide incentives for firms to make these investments through various forms of intellectual property protection that attempt to provide a return on capital for investors. As a result, pharmaceutical innovation results from an explicit intersection of public policy and private market incentives. Developing optimal policy therefore requires understanding market features such as how innovation is financed, how firms commercialize pharmaceutical products, the influence of insurance coverage on consumption and spending, and how competition emerges after intellectual property protection ends.

“Patents, Innovation, and Competition in Pharmaceuticals: The Hatch-Waxman Act after 40 Years,” by C. Scott Hemphill and Bhaven N. Sampat

     A central policy issue in pharmaceuticals is how to balance the dynamic benefits of new drugs against the static benefits of low prices for existing drugs. In the United States, that balance is set by the Hatch-Waxman Act. We review the Act’s origins and key features, then present evidence on its effects on competition and innovation. On the competition side, we show how the Act creates incentives for brands to accumulate patents and generics to challenge them, with the result being a rough stalemate. We also discuss strategies deployed by brands to delay generic entry. On the innovation side, we show that the Act’s patent extension provisions—which aim to allow branded firms to make up for time lost during clinical trials and regulatory review—are incomplete, resulting in potential distortions. The net result is a convoluted and expensive approach to balancing innovation and competition.

“Lessons for the United States from Pharmaceutical Regulation Abroad,” by Margaret K. Kyle

     Pharmaceutical markets are characterized by barriers to entry and information problems. Many countries intervene in the pricing and reimbursement of drugs to a greater extent than the US government to date. Continued pressure from politicians and recent legislation are likely to change the market for pharmaceuticals in the United States. This article discusses the approaches adopted in other developed countries and the implications of their use in the United States, which due to its size, has far greater influence over the rate and direction of innovation. Alternative policy choices and the challenges of their implementation are also reviewed.

“The Economics of Generic Drug Shortages: The Limits of Competition,” by Rena M. Conti and Marta E. Wosińska

     We examine the economics of the US generic prescription drug market, which comprises the majority of medicines sold. The market is celebrated for its benefits in the form of high quality and low prices for consumers but is also increasingly challenged by shortages that may disrupt patient care. Shortages in the generic drug market present an economic puzzle—in the face of a shortage, prices should rise, encouraging entry, yet we observe shortages increasing in number and persistence. Moreover, if shortages cause patient harm, why don’t markets pay a premium for a reliable supply chain? We argue that the puzzle can be explained by the inability of generic drug prices to adjust easily due to regulatory and contracting frictions, and the coexisting presence of asymmetric information and agency problems in the US market. We conclude with a discussion of policy interventions aimed at addressing these challenges to ensure resilient US generic drug supply.

Symposium: Income Inequality

“Measuring Income and Income Inequality,” by Conor Clarke and Wojciech Kopczuk

     Income inequality is important, but attempts to measure it arrive at strikingly different conclusions. Why? We use recent disputes over measuring United States income inequality to return to first principles about both the income concept and inequality measurement. We emphasize two broad points. First, no measure of the income distribution is truly comprehensive, or could attempt to be comprehensive without making controversial choices. We document the practical and conceptual problems that the standard ideal—comprehensive Haig-Simons income—raises. Second, much of the controversy in this area turns on the many tradeoffs between starting with individual tax data versus more expansive income concepts. Individual tax data reflect only a shrinking subset of a more comprehensive income concept–but they are individual data. More expansive alternatives, on the other hand, are harder to allocate to individuals. We document some of the most important and contestable assumptions that such an allocation requires.

“Macro Perspectives on Income Inequality,” by Matthieu Gomez

     Inequality has become a defining challenge for modern economies and a central focus of economic research over the past two decades. I begin by revisiting the foundations of income measurement, showing that standard definitions—taxable income, factor income, and Haig-Simons income—suffer from important conceptual limitations. I contrast these income measures with the ideal notion of income from a welfare perspective—Hicksian income—which captures an individual’s ability to consume or save for future consumption. I then examine the drivers of rising top income inequality, with particular attention to the surge in entrepreneurial incomes. I highlight three key forces behind this phenomenon: higher returns on capital (technological factors), lower external financing costs (financial factors), and a lighter tax burden on business owners (fiscal factors).

“Public Finance Implications of Economic Inequality,” by Alan J. Auerbach

     This paper considers questions about the implications of rising inequality for the theory and practice of public finance. It begins by addressing fundamental reasons why the distribution of income or wealth on an annual basis before taxes and transfers offers insufficient information: (1) it does not tell us what resources are actually available to households for consumption; and (2) in providing a snapshot of the resources available to individuals of different ages at a given moment in time, without controlling for life-cycle related differences or income dynamics, it can provide a misleading estimate of the underlying degree of inequality. The paper then considers the implications of high and perhaps rising economic inequality for the design of government policy: top marginal tax rates, phase-outs of government policies for those with higher incomes, the political economy of inequality, and other subjects.

Symposium: Bond Markets

“A Hitchhiker’s Guide to Federal Reserve Participation in Fixed Income Markets,” by Nina Boyarchenko and Or Shachar

     We review US dealer-intermediated fixed income markets, including Treasuries, agency mortgage-backed securities, corporate bonds, and municipal bonds. Through the lenses of primary dealers’ positions, we show these markets’ evolution over the past decade and the effects of recent episodes of abrupt deterioration in market functioning. We then overview how the Federal Reserve interacts with fixed income markets for the purposes of monetary policy implementation and liquidity interventions. We conclude by discussing the shifting composition of investors in US fixed income markets, and what consequences such changes in the investor base may have for monetary policy transmissions.

“How US Treasuries Can Remain the World’s Safe Haven,” by Darrell Duffie

     Weaknesses in the design of the market for US Treasuries have reduced the effectiveness of the world’s favored safe-haven asset. Since the Global Financial Crisis, the market’s intermediation capacity is far more constrained by the balance sheets of dealer banks, which handle virtually all investor trades. Since 2007, the total size of primary dealer balance sheets per dollar of Treasuries outstanding has shrunk by a factor of four. This trend continues because of large US fiscal deficits and post-GFC regulatory capital constraints, which are necessary for financial stability but limit the provision of liquidity under stress. For US Treasuries to remain a powerful safe haven, the intermediation capacity of the market will need to be expanded and further supported by official-sector backstops.

“US Corporate Bond Markets: Bigger and (Maybe) Better?” by Maureen O’Hara and Xing (Alex) Zhou

       The US corporate bond market has expanded significantly, fueled by electronic trading, institutional innovation, and growing retail participation via mutual and exchange-traded funds. These developments have improved efficiency by reducing costs and enhancing transparency, yet they have also introduced new vulnerabilities. The market’s shift from relationship-based to transaction-based trading has weakened its ability to absorb stress, especially during periods of widespread selling. We examine the structural changes that have reduced dealer intermediation, the limited liquidity benefits of electronic platforms, and the destabilizing role of fund flows. The COVID-19 crisis exposed these weaknesses, prompting the Federal Reserve to act as a “market maker of last resort.” We argue that while the market is “better” in many ways, enhancing resilience through transparency and long-term investor participation is essential for future stability.

“Why Is the Fragmented Municipal Bond Market So Costly to Investors and Issuers?” by John M. Griffin, Nicholas Hirschey, and Samuel Kruger

       The municipal bond market plays a crucial role in providing capital to US municipalities and functions through a network of underwriters, municipal advisors, credit rating agencies, insurers, individual and institutional investors, and multiple regulators. Many of these market participants have significant asymmetric information and conflicting incentive structures, which can sometimes lead to disparate and seemingly inefficient outcomes. Puzzles documented in the academic literature include high underwriting costs, conflicting roles by municipal advisors, extreme and widely varying trade markups, investment holdings that are often not tax-efficient, inconsistent implied marginal tax rates, a heavy reliance on credit ratings, little benefit but widespread use of insurance, delayed use of call provisions, and inconsistent treatment of accounting information. We review issues in the municipal bond market and propose implementable suggestions that would hopefully allow for a more competitive and low-cost market for both taxpayers and investors.

Features

“Retrospectives: Yair Mundlak and the Fixed Effects Estimator,” by Marc F. Bellemare and Daniel L. Millimet

       We discuss Yair Mundlak’s (1927–2015) contribution to econometrics through the lens of the fixed effects estimator. We set the stage by discussing Mundlak’s life and his seminal 1961 article in the Journal of Farm Economics, showing how it was looking at the right application—the study of agricultural productivity, which had hitherto been thought to be marred by the presence of management bias—that led Mundlak to use the fixed effects estimator. After discussing Mundlak’s contribution, we briefly discuss the historical economic and statistical contexts in which he made that contribution. We then highlight the dialogue that took place between the proponents of fixed versus random effects and discuss how Mundlak settled the debate in his 1978 Econometrica article. We conclude by discussing how, between fixed and random effects, the fixed effects estimator won the day, becoming the de facto estimator of choice among applied economists because of the Credibility Revolution, culminating in the popularity nowadays of difference-in-differences designs and of two-way fixed effects estimators.

“Recommendations for Further Reading,” by Timothy Taylor

How Can You Tell if Health Insurance Helps Health?

It may seem obvious that health insurance helps health, but very few cause-and-effect conclusions are obvious to economists. For example, suppose that we just compared the health of everyone who has health insurance and everyone who doesn’t. It would be unsurprising to find that those with health insurance are healthier, but the two groups will also differ in many other ways. For example, given that many Americans get health insurance through their employer, the chances are good that those with health insurance are more likely to be employed and on average to have higher incomes. How can we disentangle the effect of health insurance from other possible confounding factors?

Or imagine that you compared the health of people before-and-after they had health insurance. This approach has some promise, but again, if getting health insurance is also connected to getting a job with benefits, a higher income, and perhaps a more settled life in other ways, then the task of separating out the effect of health insurance from other confounding factors remains.

Or one can imagine a social experiment in which a large group is randomly divided, with part of the group receiving health insurance and part not. Then you could track the two randomly selected groups over time, and see what happens. This is essentially the approach used to test the safety and efficacy of new drugs, for example. Thus, social scientists are on the lookout for situations where this kind of random selection into health insurance happened, but perhaps by accident rather than policy.
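The logic of such a randomized design can be sketched with a few lines of simulated data. Every number below (group size, outcome risks) is an illustrative assumption, not a figure from any actual study:

```python
import random

random.seed(0)

# Simulate a population in which insurance has a true protective effect.
# Risk levels are purely illustrative assumptions.
population = 10_000
baseline_risk = 0.10   # assumed risk of a bad health outcome, uninsured
insured_risk = 0.08    # assumed risk with insurance

# A lottery randomly splits the group into treatment and control,
# so the two groups are comparable in every other respect.
treatment = [random.random() < insured_risk for _ in range(population // 2)]
control = [random.random() < baseline_risk for _ in range(population // 2)]

# With random assignment, the simple difference in outcome rates is an
# unbiased estimate of the causal effect of insurance.
effect = sum(control) / len(control) - sum(treatment) / len(treatment)
print(f"Estimated risk reduction from insurance: {effect:.3f}")
```

With random assignment, the estimate recovers (up to sampling noise) the 2-percentage-point gap built into the simulation; the confounding that plagues simple insured-versus-uninsured comparisons never arises.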

In their essay, “The Impact of Health Insurance on Mortality,” Helen Levy and Thomas C. Buchmueller focus on some of these situations in which access to health insurance was determined in a way with a high degree of randomness (Annual Review of Public Health, April 2025).

One of the most clear-cut examples happened in Oregon in 2008. The state wanted to expand eligibility for Medicaid, but didn’t have the money to expand it for everyone. The result, as the authors describe it, was “the 2008 Oregon Health Insurance Experiment, which studied ∼75,000 low-income adults under age 65, 40% of whom were selected by lottery to be eligible for Medicaid (the treatment group) with the remaining 60% serving as a control group.” Thus, some randomly received health insurance, and some did not.

Another truly randomized study looked at an “IRS initiative that sent letters in early 2017 with information about HealthCare.gov to a randomly selected sample of 3.9 million households that had been subject to the ACA [Affordable Care Act] individual mandate penalty for failing to have coverage in the previous year. The study finds that the letters led to a small but significant increase in coverage.” In this case, some randomly received a letter that increased the share of that group with health insurance, while others did not.

Yet another approach looked at those admitted to California hospitals who were either just under age 65, and thus not eligible for Medicare, or just over age 65, and thus covered by Medicare. The idea here is that the just-unders and just-overs should be highly comparable groups: after all, the only way they differ was in being born a few months apart. In this “discontinuity” approach (here, the discontinuity is at age 65), whether a patient has health insurance is as good as randomly assigned between the two groups.
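A minimal sketch of the discontinuity comparison, using a handful of hypothetical admission records (none of these numbers come from the actual California study):

```python
# Regression-discontinuity sketch: compare outcomes just below and just
# above an eligibility cutoff (age 65 for Medicare). All records below
# are hypothetical, purely for illustration.

CUTOFF = 65

# (age, bad_outcome) pairs for hypothetical admissions near the cutoff.
admissions = [
    (63.8, 1), (64.2, 1), (64.5, 0), (64.9, 1),   # just under: no Medicare
    (65.1, 0), (65.4, 0), (65.7, 1), (66.1, 0),   # just over: Medicare
]

def outcome_rate(records, lo, hi):
    """Share of bad outcomes among admissions with lo <= age < hi."""
    window = [y for age, y in records if lo <= age < hi]
    return sum(window) / len(window)

# Patients born a few months apart should be comparable in every respect
# except insurance status, so the gap at the cutoff is attributed to coverage.
under = outcome_rate(admissions, CUTOFF - 1.5, CUTOFF)
over = outcome_rate(admissions, CUTOFF, CUTOFF + 1.5)
print(f"Just-under rate: {under:.2f}, just-over rate: {over:.2f}")
```

In practice, researchers use far larger samples and narrower windows around the cutoff, but the comparison is the same: outcome rates on either side of age 65.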

Other examples involve Medicaid coverage. Medicaid is a joint federal-state program, so changes were often introduced in a staggered way, over time, across states. This was true back in the 1960s, when Medicaid was first enacted, and it was also true in the 2010s, when states were allowed to expand Medicaid coverage but, over several years, only some did so. A researcher can look at this data and see whether, when a group of people becomes eligible for Medicaid, the pattern of their health outcomes shifts from previous patterns–and from the patterns of health outcomes for groups that did not become eligible at that time. Here, the random ingredient is the staggered timing with which health insurance was introduced.
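The staggered-timing comparison is, in essence, a difference-in-differences calculation. A minimal sketch with hypothetical outcome rates:

```python
# Difference-in-differences sketch for a staggered Medicaid expansion.
# The outcome rates below are hypothetical, purely for illustration.

# Mortality-like outcome rate (per 1,000) before/after an expansion date,
# for a state that expanded and a state that did not.
expansion_state = {"before": 8.0, "after": 7.0}
control_state = {"before": 8.2, "after": 7.9}

# Each state's own change over time...
change_expansion = expansion_state["after"] - expansion_state["before"]
change_control = control_state["after"] - control_state["before"]

# ...and the difference between those changes nets out trends shared by
# both states, leaving the effect attributed to the expansion itself.
did_estimate = change_expansion - change_control
print(f"Difference-in-differences estimate: {did_estimate:+.1f} per 1,000")
```

Both states improved over time, but the expansion state improved by 0.7 per 1,000 more; that gap, not the raw before/after change, is the estimated effect of coverage.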

My theme here is that there are plausible ways for researchers to study a cause-and-effect relationship between health insurance and health. Of course, not all of these studies cover the same age groups, or find the same outcomes. But my guess is that a number of readers care less about the way the studies are done, and more about how the authors of this review would summarize the overall results. Here, I quote from the abstract of their paper:

A 2008 review in the Annual Review of Public Health considered the question of whether health insurance improves health. The answer was a cautious yes because few studies provided convincing causal evidence. We revisit this question by focusing on a single outcome: mortality. Because of multiple high-quality studies published since 2008, which exploit new sources of quasi-experimental variation as well as new empirical approaches to evaluating older data, our answer is more definitive. Studies using different data sources and research designs provide credible evidence that health insurance coverage reduces mortality. The effects, which tend to be strongest for adults in middle age or older and for children, are generally evident shortly after coverage gains and grow over time. The evidence now unequivocally supports the conclusion that health insurance improves health.

Why Is the US Economy Surging Ahead of the UK?

The US economy has emerged from the pandemic growing at a faster pace than the UK and other high-income countries. Simon Pittaway tackles the question of why in “Yanked away: Accounting for the post-pandemic productivity divergence between Britain and America” (Resolution Foundation, April 2025).

The average standard of living in any economy, over time, will be determined by the productivity of workers in that economy. This figure calculates productivity as GDP/worker, adjusted so that productivity in all the G7 countries just before the pandemic was equal to 100. (The G7 countries are the US, UK, France, Germany, Italy, Japan, and Canada.) You can see the red US line pulling ahead of the rest. The official British data is the dashed line, but Pittaway argues that the official data is too optimistic: actual labor productivity in the UK is lower than it was in 2019.

When you trace the productivity patterns deeper into the data, what do you find? For the UK, Pittaway points to several industries where the decline in productivity since 2019 has been especially high.

For example, it appears that the UK health care sector is experiencing an outright decline in productivity. In the UK oil and natural gas sector, employment is up slightly, although production of oil is down by two-fifths and production of natural gas is down by three-fifths. There seems to have been a decline in British productivity in wholesale and retail trade, as well–that is, output in the industry is down much more than employment. Here, I want to focus on a few bigger-picture issues.

One is the level of investment. Pittaway writes:

The investment gap between Britain and America has widened in recent years. Investment by British businesses hit a brick wall around the time of the Brexit referendum. As a result, growth in Britain’s capital stock has slowed by two-thirds, from 2.8 per cent in 2016 to 0.9 per cent in 2023. Notably, this slowdown has been particularly stark in the service sectors where the US has significantly outperformed the UK. In real terms, American businesses in those sectors invested 24 per cent more in 2023 than in 2016, while their British counterparts invested only 7 per cent more.

Back when Brexit was happening, I wrote that, as an American, I understand the urge to break trade ties and declare independence. But whatever the merits of Brexit as a cry for self-determination and national autonomy, it wasn’t good for investment incentives. The current US push to fracture trade ties with the rest of the world, especially as it is happening in unclear and ever-evolving ways, won’t be good for US investment incentives, either.

A second big difference worth noting is the US productivity growth advantage in technology-using jobs. US firms are investing more in technology, in particular. As a result, productivity growth in service-related industries has been higher in the US economy. Pittaway writes:

Professional services emerge as a particularly important source of productivity growth in the US. In part, this reflects the rapid growth of America’s large, high profile tech companies, who mostly operate in the information and communications sector. But productivity growth in professional services sectors that use rather than produce tech has been more consequential. Between 2019 and 2023, professional, scientific and technical services accounted for one-sixth (17 per cent) of the post-pandemic gap in productivity growth between the US and the UK – twice as much as the tech (ICT) sector (8 per cent). The additional tailwind from faster productivity growth in less glamorous service sectors – like administrative and support services, wholesale and retail, and hospitality – shouldn’t be overlooked. For example, different rates of productivity growth in the wholesale and retail sector account for almost as much of the US-UK aggregate productivity growth gap (0.51 percentage points) as information and communications.

A third difference is that energy costs are much lower in the United States than in the UK, or in other countries across Europe. The left-hand panel compares the price of natural gas; the right-hand panel compares the price of electricity.

Finally, the US economy seems to have emerged from the pandemic with a rise in dynamism: more new companies being started, more economic shifts toward areas of greater economic opportunity. Here’s a figure illustrating one aspect of that pattern. As you can see, company births and deaths in the US spiked during the pandemic, but company births have remained high since then. There’s no such movement in the UK data.

During the pandemic, many European countries focused on preserving the connection between workers and their jobs, while the US focused more on income protection for workers, but without linking that aid to remaining with their previous employer. One consequence of those different policy choices is that the US economy has been more fluid in adjusting since the pandemic.

Korea’s Low Fertility Rate

Fertility rates are falling around the world, but the Republic of Korea is an extreme outlier, with a fertility rate of 0.72 in 2024. The International Monetary Fund, in its generally quite positive assessment of Korea’s economic situation, thought that Korea’s low fertility justified adding an “Annex” to its most recent report on Korea’s economy: “Addressing Korea’s Declining Labor Force” (IMF Country Report No. 25/41, Republic of Korea, 2024 Article IV Consultation, February 5, 2025).

This figure puts Korea’s fertility rate in perspective. You can see that the US fertility rate is a little above the average for the OECD countries (mostly the high-income countries of the world). Countries with low fertility rates include Spain, Italy, and Japan. But even among the nations with low fertility rates, Korea is a clear outlier.

As the report notes: “According to the UN’s population projection, Korea’s population is projected to decline by 17 million (equivalent to 33 percent of its current population) by 2070. The working-age population, which peaked in 2019, is projected to decline to 36.3 million (70.2 percent of total population) in 2024; 34 million (66.4 percent) in 2030; and 16 million (45.8 percent) in 2070. This decline is putting considerable strain on labor supply and hence potential growth of the Korean economy.”

Let me just emphasize that opening line again: with a fertility rate of 0.72, Korea’s population will fall by one-third in less than a half-century. Setting aside extreme conditions of war, disease, famine, and oppression, I do not know of any country that has gone through such an experience.
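To see why a fertility rate of 0.72 implies shrinkage this dramatic, here is a back-of-the-envelope sketch (my own illustration, not a calculation from the UN or the IMF report): in a low-mortality country, each generation is replaced at roughly the ratio of the fertility rate to the replacement level of about 2.05 births per woman. The US fertility rate used for comparison is approximate.

```python
# Stylized generational arithmetic (my own back-of-the-envelope
# illustration, not the UN or IMF projection): each generation is
# roughly TFR / 2.05 the size of the one before it, where ~2.05 is
# replacement-level fertility in a low-mortality country.

def generation_ratio(tfr, replacement=2.05):
    """Approximate size of the next generation relative to the current one."""
    return tfr / replacement

korea = generation_ratio(0.72)  # each Korean generation ~35% the size of its predecessor
us = generation_ratio(1.62)     # approximate recent US fertility rate, for comparison

print(f"Korea: next generation ~{korea:.0%} the size of the current one")
print(f"US:    next generation ~{us:.0%} the size of the current one")
```

Actual population projections layer in mortality, immigration, and the age structure of the existing population, which is why the one-third decline plays out over decades rather than a single generation; but the generational ratio is what drives the long-run trend.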

To see this another way, the top figure shows Korea’s population by age group in 2023. As you can see, the young age groups are quite small compared to the middle age groups. The bottom figure then projects these trends out to 2070. At that point, the middle age groups are small compared to the elderly. Also, if you look closely, you will see that the horizontal axis in the 2070 figure is scaled differently from the 2023 figure, so the decline in the size of the bars is even larger than it might at first appear.

Discussions of fertility can have a high emotional charge, because they sometimes can sound as if the policymaker (or the innocent writer) is telling people–and women in particular–how many children they “should” have. It’s a legitimate concern. But choices about children are heavily affected by other factors: the cost of housing and schools, flexibility of workplace arrangements, availability of childcare, the structure of the labor force, and more. In Korea, these other factors tend to lean against having children.

Consider some of these factors:

Housing costs. The IMF notes: “As of 2024Q1, it is estimated that median income families spend about 63 percent of household income for loan repayment of a median-priced home. The ratio is notably higher in the Seoul Metropolitan Area (151 percent), where the best jobs and education institutions are concentrated, and for larger living spaces needed to raise a child (153 percent for a property bigger than 135 square meters).” Payments of this size pretty much define “unaffordable.” (For those who don’t read metric, 135 square meters is about 1,450 square feet, roughly the size of a spacious three-bedroom apartment or condo.)
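For readers curious how a repayment-to-income ratio like the IMF’s 63 percent is constructed, here is a sketch of the underlying arithmetic. The home price, household income, interest rate, and loan term below are illustrative placeholders of my own, not the IMF’s actual data; only the formula, the standard amortizing-loan payment, is generic.

```python
# Sketch of the repayment-to-income calculation behind figures like the
# IMF's 63 percent. Price, income, rate, and term are hypothetical
# stand-ins, not the IMF's underlying numbers.

def monthly_payment(principal, annual_rate, years):
    """Standard amortizing-loan payment: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

price = 500_000_000   # won: hypothetical median home price
income = 55_000_000   # won: hypothetical median annual household income

annual_repayment = monthly_payment(price, 0.045, 30) * 12
ratio = annual_repayment / income
print(f"Repayment-to-income ratio: {ratio:.0%}")
```

With numbers in this range the ratio lands above half of household income; the IMF’s Seoul figure of 151 percent implies a median home price far higher relative to income than the placeholder above.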

The centrality of private tutoring for children. Mothers in Korea are often expected to oversee a regime of private tutoring, which is seen as necessary to gain entry to prestigious universities. The IMF: “Korea’s high private tutoring participation rates largely reflect fierce competition to enter prestigious universities. … A significant portion of parent’s income is thus spent on private tutoring. In 2023, 78.5 percent of Korean primary and secondary school students took private tutoring (Ministry of Education, 2024). Monthly average expenditure for private tutoring per student relative to household disposable income has increased sharply since 2015, reaching … roughly 10 percent of average household disposable income in 2023. Empirical analysis suggests that prevalence of private tutoring is negatively associated with country-level total fertility rate …”

A dual-structure workforce. Korea’s labor market has what is called a “dual structure”: one set of jobs is highly paid, highly demanding, seniority-based, and often quite secure, while the remaining jobs are less well-paid, with limited promotion prospects and little security. Thus, a mother in Korea will have a very hard time remaining on the highly paid track; and in a dual-structure economy, once you are off that track, it is very difficult to re-enter it. Here’s a figure showing flexibility of working arrangements. The US ranks near the top; Korea is near the bottom.

This figure illustrates Korea’s dual labor market by showing that temporary employment and self-employment in Korea are especially high in comparison to other countries.

Although the IMF report doesn’t mention this point, it also seems relevant to me that the tradition in Korea has been for a married couple to move in with the husband’s family, which has often meant that the wife ends up doing household tasks with her mother-in-law. This pattern has become less common over time, but the possibility of such a living arrangement seems likely to discourage marriage and child-bearing for at least some women.

The IMF report goes into detail about how various policy steps could offset Korea’s low fertility rates, at least to some extent. I should also add that the dangers of extrapolation apply here with some force: if Korea’s population and workforce decline with the speed of these predictions, then in the next few decades housing should become substantially more affordable, admissions at major Korean universities will be less selective, firms will be under pressure to offer more flexible workforce arrangements, and so on. As the US experienced after World War II, baby booms are possible, too. Decisions about how many children you “want to have” are not made in a vacuum.