Some Economics of Africa’s Struggle

Back in 2000, the World Bank published a report with the provocative title, “Can Africa Claim the 21st Century?” The tone of the report was carefully hedged optimism. For example, it said:

The question of whether Sub-Saharan Africa (Africa) can claim the 21st century is complex and provocative. This report does not pretend to address all the issues facing Africa or to offer definitive solutions to all the challenges in the region’s future. Our central message is: Yes, Africa can claim the new century. But this is a qualified yes, conditional on Africa’s ability—aided by its development partners—to overcome the development traps that kept it confined to a vicious cycle of underdevelopment, conflict, and untold human suffering for most of the 20th century

So how is overcoming the development traps coming along? A quarter-century later, the World Bank has published a follow-up report, 21st-Century Africa: Governance and Growth, a collection of eight chapters on different aspects of development, edited by Chorching Goh. From the “Main Message” section at the start of the report, here’s some of the flavor. On one side, substantial and undeniable progress has been made.

Over the past 25 years, Africa has achieved notable progress … Mortality rates have fallen, with life expectancy rising from 50 years in 1998 to 61 years in 2022. School attendance has improved, with primary school enrollment increasing from 80 percent in 1999 to 99 percent in 2022 and secondary school enrollment increasing from 26 percent to 45 percent over the same period. The early 2000s saw strong economic growth fueled by high commodity prices. China emerged as a trade and investment partner, and the continent experienced a massive inflow of foreign capital from 17.6 percent of gross domestic product (GDP) in 1998 to 38.1 percent in 2018. Consequently, African countries have shown significant growth performances: from 2000 to 2019, 7 of the world’s 10 fastest-growing economies were in Africa. Aid dependence has declined, tax revenues have increased, and the median poverty rate fell by about 10 percentage points to about 43 percent.

On the other side, as the report notes, “Africa remains the world’s biggest development challenge.” Here are some bullet points:

  • Persistent poverty. By 2030, 90 percent of the world’s extremely poor population will live in Africa.
  • Economic stagnation. Sub-Saharan Africa’s share of the global economy remains at 2 percent, with minimal change in the region’s merchandise exports.
  • Investment levels. Private investment remains low, with the informal economy accounting for 59 percent of total nonagricultural employment.
  • Limited growth. The reliance on smallholder agriculture limits economic growth due to low investment and productivity.
  • Electricity access. Only 51 percent of the African population has access to electricity, compared to the global average of 91 percent. …
  • Political upheaval. Violent conflicts increased eightfold between 2000 and 2023 throughout the continent, leading to increases in conflict-related deaths and the number of internally displaced people.
  • Governance challenges. The issues of corruption, political instability, and a lack of trust in government and institutions persist.

The report also notes up front that “Africa’s income level per capita would be 40% higher if it had grown at the global average since 1990,” “Nearly 83% of Africa’s employment is informal,” and “86% of 10 year-olds in Africa can’t read and understand a simple paragraph.”

The volume includes chapters on all of these topics and more. My own sense is that if the authors of the 2000 volume could have looked forward to the situation in 2025, they would be more disappointed than pleased.

Here, I’ll add a few words on Chapter 3 of the volume, “Productivity,” by Cesar Calderon and Ayan Qu. Per capita GDP can serve as a rough measure of standard of living, as well as a rough measure of productivity. By that measure, the countries of Africa are struggling relative to the rest of the world. The subregion of West Africa had a little spurt from about 2000 to 2015, but has now given back those modest gains.

To get a sense of why this pattern is so disappointing, remember that lower-income countries have some potential for “catch-up growth”: they can draw on technologies developed elsewhere and sell into the markets of higher-income countries. A low-income country should have numerous opportunities. When starting from a low base, achieving a higher rate of growth is somewhat easier. But the countries of Africa are instead experiencing only fall-further-behind growth. Here’s a figure comparing Africa, South Asia, and the East Asia/Pacific region to the US economy in labor productivity. Two of the regions have been catching up to the US, at least somewhat, over the last two decades; Africa is below its relative level of the late 1970s.
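
To make the arithmetic of catch-up concrete, here is a minimal sketch with invented numbers (the starting ratio and growth rates are assumptions for illustration, not figures from the report): a country starting at 5 percent of frontier income and growing 3 points faster converges noticeably within a generation, while one growing slightly slower than the frontier falls further behind.

```python
# Hypothetical illustration of catch-up vs. fall-further-behind growth.
# All numbers are invented for exposition, not drawn from the World Bank report.

def relative_income(start_ratio: float, own_growth: float,
                    frontier_growth: float, years: int) -> float:
    """Income relative to the frontier economy after `years`,
    starting at `start_ratio` of the frontier's per capita GDP."""
    return start_ratio * ((1 + own_growth) / (1 + frontier_growth)) ** years

# A low-income country at 5% of US per capita GDP, growing 5%/yr vs. 2%/yr:
catching_up = relative_income(0.05, 0.05, 0.02, 25)
# The same country growing only 1%/yr falls further behind:
falling_back = relative_income(0.05, 0.01, 0.02, 25)

print(f"after 25 years, catch-up case:    {catching_up:.3f} of frontier")
print(f"after 25 years, fall-behind case: {falling_back:.3f} of frontier")
```

The same compounding logic explains why a quarter-century of sub-par growth leaves a region further behind in relative terms even when its absolute income has risen.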

As Calderon and Qu dig into the underlying data, they point out that the share of productivity differences explained by investment in physical capital is relatively small, and the share explained by human capital differences is only a little larger. The biggest factor is “total factor productivity,” in the lingo of economists, which is how efficiently the economy translates these inputs into outputs. Thus, while investing more in human capital and infrastructure can pay off for these economies, the big challenge is to accomplish dramatic structural changes.
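
For readers who want the mechanics, growth accounting backs out total factor productivity as a residual. Here is a minimal sketch, assuming a conventional Cobb-Douglas production function with a capital share of one-third; the figures are invented and are not from the chapter.

```python
# Growth-accounting sketch: back out total factor productivity (TFP) as the
# residual after accounting for physical and human capital per worker.
# Assumes y = A * k^alpha * h^(1-alpha), with a conventional capital share
# alpha = 1/3. All numbers below are invented for illustration.

ALPHA = 1.0 / 3.0

def tfp(output_per_worker: float, capital_per_worker: float,
        human_capital: float) -> float:
    """TFP residual A from y = A * k^alpha * h^(1-alpha)."""
    return output_per_worker / (capital_per_worker ** ALPHA *
                                human_capital ** (1 - ALPHA))

# Two hypothetical economies with identical inputs per worker but very
# different output per worker -- the entire gap shows up as TFP:
a_high = tfp(output_per_worker=100.0, capital_per_worker=300.0, human_capital=3.0)
a_low = tfp(output_per_worker=25.0, capital_per_worker=300.0, human_capital=3.0)
print(f"TFP ratio (high/low): {a_high / a_low:.1f}")
```

The point of the exercise: when measured inputs are similar, output gaps must be attributed to the residual, which is why TFP dominates the accounting.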

These economies need to move away from a focus on smallholder agriculture. The three channels to productivity discussed by Calderon and Qu are: 1) within-firm productivity growth, in which a well-managed company improves its worker skills, technology adoption, and innovation; 2) between-firm productivity growth, in which the high-productivity firms make up a bigger share of the economy than low-productivity firms, so the growth of the successes outweighs the failures; and 3) net entry of productive firms, in which more productive firms are more likely to enter and less-productive firms are more likely to exit. At the end of the day, economies only raise productivity when these dynamics are operating, and for roughly a jillion reasons (see the World Bank volume for details), these dynamics have not been operating especially well across sub-Saharan Africa as a whole.
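
One common way economists formalize the between-firm channel is an Olley-Pakes-style decomposition: aggregate productivity equals the unweighted average of firm productivity plus the covariance between firm size and productivity. A minimal sketch with invented firm data (this is an illustration of the general technique, not a calculation from the chapter):

```python
# Olley-Pakes-style decomposition of aggregate productivity:
#   share-weighted aggregate = unweighted firm average
#                            + covariance of (employment share, productivity).
# A positive covariance term means high-productivity firms hold bigger
# shares of activity -- the "between-firm" channel. Firm data are invented.

def op_decompose(shares, prods):
    n = len(prods)
    p_bar = sum(prods) / n          # unweighted average productivity
    s_bar = 1.0 / n                 # shares sum to 1, so mean share is 1/n
    cov = sum((s - s_bar) * (p - p_bar) for s, p in zip(shares, prods))
    aggregate = sum(s * p for s, p in zip(shares, prods))
    return aggregate, p_bar, cov

# Three hypothetical firms; the most productive firm is also the largest:
shares = [0.5, 0.3, 0.2]
prods = [10.0, 6.0, 4.0]
agg, within, between = op_decompose(shares, prods)
print(f"aggregate = {agg:.2f} = average {within:.2f} + covariance {between:.2f}")
```

When reallocation works well, the covariance term is large and positive; in economies where it is near zero or negative, market shares are not flowing toward the productive firms.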

Bernanke on Federal Reserve Communication

Thirty years ago and more, before 1994, the Federal Reserve did not make any announcement at all when it altered monetary policy. Instead, market-watchers had to detect changes in interest rates as they occurred. Now, the Fed announces a target range for its policy interest rate (the “federal funds rate”) and holds a press conference to explain its choice. The Fed also releases a Summary of Economic Projections, which reports individual projections of key economic variables from the 19 participants in meetings of the Federal Open Market Committee. The Fed publishes minutes of FOMC meetings with a three-week lag. The seven members of the Fed’s Board of Governors in Washington, DC, as well as the presidents of the 12 regional Federal Reserve banks, often comment on the reasoning behind the Fed’s policy choices in Congressional testimony and speeches as well.

Should the Fed be taking additional steps to explain its choices more fully? Ben Bernanke (Nobel ’22) offers some ideas in “Improving Fed Communications: A Proposal,” presented at the Fed’s Second Thomas Laubach Research Conference (May 15-16, 2025; full text, audio, and video for the six research papers and other presentations are available at the website).

For a sense of what is at stake, Bernanke notes the positive aspect of communication when he writes: “Effective communication—about what the Fed sees in the economy and how it plans to respond—helps households and businesses better understand the economic outlook, clarifies and explains the Fed’s policy strategy, and builds trust and democratic accountability.”

Lest this comment sound like boilerplate, it is perhaps useful to note that, as I see it, the opposite of this statement is also true. That is, one could restate Bernanke’s comment: “Ineffective communication—about what the Fed sees in the economy and how it plans to respond—makes it harder for households and businesses to understand the economic outlook, muddles perception of the Fed’s policy strategy, and diminishes trust and democratic accountability.”

Bernanke proposes one main change to Fed communications practices. As he points out, it’s common for other central banks around the world to present an actual economic forecast, with the underlying assumptions and calculations spelled out. A quarterly forecast could also include discussion of the range of uncertainty, and alternative scenarios that might emerge. Bernanke writes:

The centerpiece … would be forecasts of key economic and policy variables at varying horizons, drawn from a comprehensive macroeconomic forecast led and “owned” by the Board staff (possibly with some input and commentary from policymakers …). Because the underlying forecast would be internally consistent and based on explicit economic assumptions, it would provide greater insight than the projections of individual FOMC participants into the factors affecting the outlook for the economy and policy. Critically, a fully articulated baseline forecast would also facilitate the public discussion of economic scenarios that differ from that baseline. Besides highlighting the inherent uncertainty of economic forecasts, the publication of selected alternative scenarios and their implications could facilitate a subtle but important shift in the Fed’s communications strategy. Specifically, it would allow the FOMC to provide policy guidance that is more explicitly contingent on how the economy evolves, underscoring for the public that the future path of policy is not unconditional (“on a preset course”) but depends sensitively on economic developments and risk management considerations.

For a sense of how this might work in practice, Bernanke refers back to the public discussion in 2021 of whether the surge of inflation was transitory.

To illustrate the use of alternative scenarios in communication, suppose—with a large dose of hindsight—that in mid-2021 the Fed had not, figuratively speaking, put all its chips on its central forecast that inflation would prove “transitory” but instead had said the equivalent of: “For the following reasons we think that the most likely scenario is that the increases in inflation will be transitory. However, should inflation prove to be higher and more persistent, perhaps for these reasons, our response would be to do this [where “this” could be a projected path for rates and the balance sheet, perhaps described only qualitatively]. Similarly, if inflation sinks lower than in the modal forecast, we expect to do that.” Even if lacking in quantitative details, a more explicitly conditional approach would have better conveyed to the public the intrinsic uncertainty of the outlook, and discussion of the reaction function would have provided the public some advance notice about how the Committee would likely respond in less probable but still plausible scenarios.

On one side, it’s hard to quarrel with the idea that the Fed should seek to spell out its thinking more fully. Other central banks around the world do so. Open and honest communication is a beautiful thing, and Bernanke’s proposal seems sensible to me.

On the other side, how easy will it be for the Fed to acknowledge when it is wrong, or to explain that projections of a certain scenario turned out differently in the real world? The Fed is probably strongest when it is perceived to be sticking to the pursuit of its goals of stable prices and maximum employment. If and when the Fed starts producing economic forecasts and scenarios, it will be even more open to the accusations, and the reality, of political lobbying and motivations. Even setting politics aside, the instinct to defend past predictions, or to make a range of predictions vague enough that they become unfalsifiable, is real, too.

Will the Courts Save Trump from His Tariffs?

The US Court of International Trade has acted to block pretty much all of President Trump’s tariffs. I guess the first question is “what the heck is the US Court of International Trade?” The story seems to be that back in 1890, Congress created a “Board of General Appraisers, a quasi-judicial administrative unit within the Treasury Department. The nine general appraisers reviewed decisions by United States Customs officials …” In 1926, Congress replaced the Board of General Appraisers with the US Customs Court. The status of this court evolved over time, and in 1980 it became the US Court of International Trade, a “national court established under Article III of the Constitution”–the part of the Constitution that establishes the federal judicial branch.

I’ve written before that a legal challenge to the Trump tariffs seemed inevitable. The key issue is that Article I of the US Constitution–the part which lays out the structure and powers of the legislative branch–states in Section 8: “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States …” Over the years, Congress has written a number of exceptions into the law. For example, the International Emergency Economic Powers Act of 1977 (IEEPA) lets the President address “unusual and extraordinary” peacetime threats. Thus, when Iran took US hostages in 1979, President Carter could immediately respond with trade sanctions.

The legal question is whether President Trump has the authority to invoke the “emergency” provisions and rewrite all tariffs for countries and goods all around the globe in whatever way he wishes. The law firm Reed Smith has been producing a “tariff tracker” that shows the results. The US Court of International Trade held that Trump has stretched the “emergency” provision considerably too far, that US importing firms are adversely affected, and that previous laws do not mean that Congress has given away all of its Constitutional power in this area to the President. (For those who keep score in this way, the Court decision was a 3-0 vote, and the three judges were appointed by Trump, Obama, and Reagan.)

The Trump administration justifications for its tariffs are full of goofy statements. For example, President Trump argues that the tariffs will all be paid by foreign companies, with no effect on US consumers and firms. Seems unlikely, but say that it’s true. In that case, foreign exporters to the US would have lower profits, but would be exporting the same quantity of goods at the same price to US markets. The idea that tariffs won’t affect the quantities or prices of what foreign exporters sell in the US market is inconsistent with the idea that the tariffs will give breathing space to US producers.

Or Secretary of Commerce Howard Lutnick explained in an interview a few weeks ago what kind of manufacturing jobs were going to return from China to the United States. He said, “The army of millions and millions of human beings screwing in little screws to make iPhones — that kind of thing is going to come to America …” I suppose Lutnick deserves some credit for expressing a concrete idea, but my guess is that most supporters of Trump’s tariffs do not have in mind a US economy based on an “army of millions and millions of human beings screwing in little screws to make iPhones …”

The trade economist Richard Baldwin has just published an e-book called The Great Trade Hack: How Trump’s trade war fails and global trade moves on. He rehearses the arguments over tariffs at some length: how they will not reduce the trade deficit, or revive US manufacturing, or help the middle class. Baldwin writes fluently, and this book is for a generalist readership. Here, I want to touch on one of Baldwin’s themes that I haven’t discussed recently. He writes:

Tariffs persist precisely because they fail economically, yet succeed politically. They provide symbolic relief, project toughness, and shift blame onto external actors without confronting difficult domestic policy challenges like higher taxes or expanded social programmes. …

Tariffs don’t coordinate investment across firms and sectors. They don’t train workers. They don’t bridge skill gaps or modernise vocational education. They don’t fund infrastructure, improve logistics, or support research and development. They don’t unlock capital, align upstream and downstream firms, or connect regions to supply chains. In short, tariffs can defend an industrial base, but they cannot create one.

Reindustrialisation requires more than tweaking relative prices. It needs a strategy. A real one. With planning, sequencing, and sustained commitment. It needs a trained workforce, one that matches the needs of 21st-century manufacturing. And to get those workers, federal and local governments must partner with industry. Firms can’t do it alone. No company will invest heavily in training workers if they’re unsure those workers will stay once their skills are upgraded. That’s why, in most countries, governments step in – funding training with tax dollars to solve the coordination problem. It’s a public good with private benefits, and it only works when governments and employers pull in the same direction.

It also needs reliable infrastructure, stable regulation, and targeted investment incentives. It needs the trust of industrialists – not just that the cost of imported goods will be higher this year, but that America will be a profitable place to make things for decades to come. And this is critical: building a modern manufacturing operation is a long-term proposition. From planning to permitting, from equipment procurement to workforce training, the timeline is measured in years, not months. For investors to commit, they need confidence that support policies – tariffs, subsidies, tax credits, training programmes – will remain in place long enough to generate a return. If the policy environment is unpredictable or politicised, those factories won’t get built.

That’s the real shortcoming of Trump’s pray-and-spray, tariff-first and tariffs-only approach to reshoring manufacturing. There’s no plan to use the breathing room tariffs might create. Without that plan, the most likely outcomes from the 2 April tariffs are higher prices, reduced manufacturing, riled allies, and retaliation against exports from industries where America is competitive today.

We are seeing with President Trump one of the dangers of electing someone with a business background to government: the lessons of business and government overlap in some areas, but are not the same. Baldwin writes:

Trump’s real estate experience also taught him one simple rule: the seller is ripping off the buyer. From that premise, it’s just a logic hop-skip-and-jump to the idea – which the President is firmly convinced of and which shapes his attitude towards trade – that a bilateral trade deficit is theft. … This notion is completely false – as anyone versed in mainstream, positive-sum business practices would attest. Nevertheless, it is a cornerstone of Trump’s belief system.

I’m confident that the US Court of International Trade decision will be appealed to the US Supreme Court, but I suspect that President Trump might benefit politically if the courts take his tariff plans off the table. Trump blocked by the courts is a powerful political force. On the other side, if Trump is forced to face the actual effects of his tariffs, my expectation is that as the gains from international trade are diminished, he won’t come out looking so good.

Permitting Reform: The Supreme Court Weighs In

The National Environmental Policy Act, commonly known as NEPA, requires that large projects obtain federal environmental permits if they cross state borders or federal property (including not just parks, but also interstate highways). Many states and localities have permitting processes as well. If you believe that the US needs to have a wave of building–perhaps to produce green energy and the associated electricity transmission lines, or perhaps for additional housing development, or perhaps to expand mass transit in cities, or perhaps to build the data centers needed to run the new AI tools, or perhaps to build the factories for the US jobs of the future–then you should be concerned that the lawsuits from small and unrepresentative groups enabled by NEPA are a cause of serious delay.

I’ve written about permitting reform before. For example, Zachary Liscow wrote in the Winter 2025 issue of the Journal of Economic Perspectives on “Getting Infrastructure Built: The Law and Economics of Permitting.” Broadly speaking, his notion is to find ways to get broad public input earlier in the permitting process; if such input is collected and taken into account, then courts would be quite hesitant to let a lawsuit from a small special-interest group block a project. Or for wincing and giggles, consider the figure accompanying this post on “What Permits are Needed for New Electricity Transmission Lines?”

Now the US Supreme Court has weighed in, in the case of Seven County Infrastructure Coalition, et al., vs. Eagle County, Colorado. The decision was released earlier today. Here’s the fact setting as described by the court:

Under federal law, new railroad construction and operation must first be approved by the U. S. Surface Transportation Board. 49 U. S. C. §10901. In 2020, the Seven County Infrastructure Coalition applied to the Board for approval of an 88-mile railroad line connecting Utah’s oil-rich Uinta Basin to the national freight rail network, facilitating the transportation of crude oil to refineries along the Gulf Coast. As part of its project review, the Board prepared an environmental impact statement (EIS) that addressed significant environmental effects of the project and identified feasible alternatives that could mitigate those effects, as required by the National Environmental Policy Act (NEPA). The Board issued a draft EIS and invited public comment. After holding six public meetings and collecting more than 1,900 comments, the Board prepared a 3,600-page EIS that analyzed numerous impacts of the railway’s construction and operation. Relevant here, the EIS noted, but did not fully analyze, the potential environmental effects of increased upstream oil drilling in the Uinta Basin and increased downstream refining of crude oil. The Board subsequently approved the railroad line, concluding that the project’s transportation and economic benefits outweighed its environmental impacts. Petitions challenging the Board’s action were filed in the D. C. Circuit by a Colorado county and several environmental organizations. The D. C. Circuit found “numerous NEPA violations arising from the EIS.” 82 F. 4th 1152, 1196. Specifically, the D. C. Circuit held that the Board impermissibly limited its analysis of the environmental effects from upstream oil drilling and downstream oil refining projects, concluding that those effects were reasonably foreseeable impacts that the EIS should have analyzed more extensively.

You can see the issue here. The Environmental Impact Statement focused on the construction and operation of the 88 miles of railroad track. However, it did not address “upstream” and “downstream” issues, like the costs and benefits of increased oil drilling in Utah’s Uinta Basin, or the effects of additional oil at Gulf refineries, or perhaps even the basic question of whether US oil production should rise or fall.

The Court’s decision was 8-0 (Justice Gorsuch did not take part). The main opinion says:

Some courts have strayed and not applied NEPA with the level of deference demanded by the statutory text and this Court’s cases. Those decisions have instead engaged in overly intrusive (and unpredictable) review in NEPA cases. Those rulings have slowed down or blocked many projects and, in turn, caused litigation-averse agencies to take ever more time and to prepare ever longer EISs for future projects.

The upshot: NEPA has transformed from a modest procedural requirement into a blunt and haphazard tool employed by project opponents (who may not always be entirely motivated by concern for the environment) to try to stop or at least slow down new infrastructure and construction projects. Some project opponents have invoked NEPA and sought to enlist the courts in blocking or delaying even those projects that otherwise comply with all relevant substantive environmental laws. Indeed, certain project opponents have relied on NEPA to fight even clean-energy projects—from wind farms to hydroelectric dams, from solar farms to geothermal wells. See, e.g., Brief for Chamber of Commerce of the United States of America, et al. as Amici Curiae 19–20.

All of that has led to more agency analysis of separate projects, more consideration of attenuated effects, more exploration of alternatives to proposed agency action, more speculation and consultation and estimation and litigation. Delay upon delay, so much so that the process sometimes seems to “borde[r] on the Kafkaesque.” Vermont Yankee, 435 U. S., at 557. Fewer projects make it to the finish line. Indeed, fewer projects make it to the starting line. Those that survive often end up costing much more than is anticipated or necessary, both for the agency preparing the EIS and for the builder of the project. And that in turn means fewer and more expensive railroads, airports, wind turbines, transmission lines, dams, housing developments, highways, bridges, subways, stadiums, arenas, data centers, and the like. And that also means fewer jobs, as new projects become difficult to finance and build in a timely fashion. A 1970 legislative acorn has grown over the years into a judicial oak that has hindered infrastructure development “under the guise” of just a little more process.

The United States is a famously litigious society, and there will always be small interest groups that want to sue–not because they want the project to be done better, but because they don’t want the project at all. Having the Supreme Court alter the interpretation of the law in this way may be an imperfect way to proceed, but one way or another, some pushback on the current permitting process was in the wind.

US State-Level Abortion Regulations: Causes and Effects

Regulations about abortion are often wildly controversial. But what effects do they actually have? Caitlin Myers addresses these issues in “From Roe to Dobbs: 50 Years of Cause and Effect of US State Abortion Regulations” (Annual Review of Public Health 2025, pp. 433-446).

As a starting point, consider the years before and after the 1973 US Supreme Court decision in Roe v. Wade, which struck down existing abortion restrictions across the country. The left-hand panel shows in purple the states that had repealed their bans on abortion before Roe, in pink those that had relaxed but not eliminated their bans before Roe, and in gray those in which abortion was legalized by Roe. In the purple states that had already repealed their bans on abortion, the number of abortions had risen in the years before Roe, but had then started declining–and the decline continued after the Roe decision. Part of the reason for the decline in the early-legalization states is that, after Roe, women no longer had to travel from other states where abortion was illegal. In the other groups of states, the number of abortions rose.

As Myers argues, the effect on abortion levels of states repealing their abortion bans before 1973 was very large–probably larger than the increase in abortions following the Roe decision. She writes:

Of the three broad policy changes liberalizing abortion access—early reforms, early repeal, and repeal with Roe—it is early repeal that results in the greatest effects on national abortion and birth rates. As Joyce et al. (51) conclude following a detailed analysis of the effects of distance to early repeal states, “The story that emerges from these data is that…Roe v. Wade was arguably less important for unintended childbearing than was access to services in California, the District of Columbia and especially New York in the years before Roe” (pp. 813–14) because so many people were able to travel to these early repeal states even if their state of residence had not yet legalized abortion.

States then tested the limits of what the Supreme Court would allow with a variety of restrictions: mandatory waiting periods before an abortion, mandatory counseling before an abortion, different types of content that might be involved in that counseling, parental permission for teenagers and/or spousal permission for wives, whether Medicaid funding could be used to pay for abortions, whether abortions needed to be performed in or near hospitals, which doctors were allowed to perform abortions, and others. This array of rules–as they were proposed, passed or failed in legislatures, and were upheld or not by courts–provides a rich set of contexts for researchers.

Here’s one example. In North Carolina in the 1980s and into the 1990s, there was a state fund to pay for abortions for low-income women: in this way, the state did not draw on federal Medicaid funds to pay for abortions. But the state fund sometimes ran out of money. Myers writes: “Cook et al. (25) exploit a natural experiment that took place within North Carolina between 1980 and 1994 when the state abortion fund ran out of money on five different occasions. Comparing changes in outcomes among women seeking abortions and eligible for funding, the authors conclude that when funding is unavailable, about one-third of pregnancies that would have been terminated are instead carried to term …”

This kind of study is referred to as a “natural experiment”–that is, there was no plan for the North Carolina fund to run out of money. It seems unlikely that sexual activity in North Carolina was being adjusted according to the state of the fund. Instead, some North Carolina women seeking abortions found that funding was available, and others didn’t, and this had an effect on their decisions.

Myers goes into detail in considering the array of natural experiments that have been analyzed. For example, when a state altered its abortion laws, women who lived relatively close to that state were also affected, because it was relatively easy for them to travel there, while women living farther away were less affected, because their costs of travel were higher. Those interested in, say, the application of difference-in-differences statistical methods may want to check out the paper.
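
The difference-in-differences logic behind many of the studies Myers surveys can be sketched in a few lines: compare the change in an outcome in states that restricted abortion to the change in states that did not over the same period. The birth rates below are invented for illustration, not drawn from any of the cited studies.

```python
# Minimal difference-in-differences sketch. The "treated" states adopt a
# restriction between the two periods; the "control" states do not. The DiD
# estimate nets out any common trend shared by both groups.
# All numbers are invented for illustration.

def did_estimate(treat_before: float, treat_after: float,
                 ctrl_before: float, ctrl_after: float) -> float:
    """DiD = (change in treated group) - (change in control group)."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical birth rates per 1,000 women: treated states rise by 2 while
# control states drift down by 1, so the estimated effect is +3.
effect = did_estimate(treat_before=60.0, treat_after=62.0,
                      ctrl_before=60.0, ctrl_after=59.0)
print(f"estimated effect of the restriction: {effect:+.1f} births per 1,000")
```

The key identifying assumption, worth keeping in mind when reading this literature, is that absent the restriction the treated states would have followed the same trend as the controls.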

Here, I’ll mention some of the bottom lines of this survey of the evidence (citations omitted here, but they appear in the article itself): When and where abortion is more restricted, birth rates are higher. Higher birth rates, especially for women at younger ages, are associated with lower levels of educational achievement, and thus with lasting effects on employment outcomes. These effects are typically larger for Black women than for white women.

What about the period since the 2022 US Supreme Court decision in Dobbs v. Jackson, which struck down Roe v. Wade and thus gave states much wider latitude in setting abortion laws? Of course, the evidence on this point is still evolving, and the setting for abortion is now rather different than it was before 1973. Myers notes:

  • “Abortion prior to 12 weeks’ gestation remains legal in 34 states (65) and many states have bolstered their protections (22), providing many more destinations than existed in 1971, when abortion was legal in only 6 jurisdictions.”
  • “The delivery of abortion services has also evolved, with a major shift occurring in 2000 when the US Food and Drug Administration (FDA) approved the drug mifepristone for the termination of pregnancies. The proportion of medication abortions grew rapidly, from 6% of all abortions in 2001 to 39% in 2017.”
  • “[I]n December 2021 the FDA lifted the restriction permanently (55), allowing health care providers to dispense abortion medications directly to patients via mail without requiring the patient to receive in-person consultation or tests (85). This expanded abortion access in the 32 states that did not restrict telehealth abortion (5), likely fueling the rise in medication abortions to 63% of all abortions by 2023 … By the end of 2023, telehealth accounted for nearly 1 in 5 abortions in the United States (83), and national abortions had actually risen relative to pre-Dobbs levels …”
  • “Yet not everyone seeking an abortion can find a way to drive hundreds of miles to reach facilities in nonban states or will find telehealth medication abortion an acceptable option. Near-total abortion bans enforced in the first 6 months after Dobbs are estimated to have increased births in ban states by an average of 2.3% relative to if no ban had been enforced (26). The estimated effects of bans on fertility are greater in states where distances are greatest, reaching 4.4% in Mississippi and 5.0% in Texas …”

In addition, teenage birth rates have fallen dramatically over the last three decades for an array of reasons not directly related to availability of abortion: less sexual activity, greater use of contraception, and more broadly, a larger share of young women viewing their early adulthood as a time for education and job experience, with later ages for marriage and childbearing.

Update on the Military Base Realignment and Closure Process

One of the strongest examples of how a commission report can overcome problems with a legislative process involves the Base Realignment and Closure (BRAC) Process that started in 1988, and has now gone through five rounds. The challenge was that the number of active-duty US military personnel had risen to about 3.5 million during the Vietnam War in the late 1960s, but then had fallen to about 2 million by the later part of the 1980s. It was obvious that the number of military bases should also decline, but the US Congress had a very hard time doing it. Any member of Congress who had a military base in or near their district or state would not vote for cutting their own base; moreover, they would not vote for cutting bases in other places–out of fear that the base in their own district might be the next to go. Literally no military bases were closed between 1977 and 1987.

The idea behind the Base Realignment and Closure Process was to have an outside commission draw up a list of bases to be closed and a timeline for closing them. Congress could then vote for or against the proposed list as a whole–but Congress committed in advance not to amend the list. Chandler S. Reilly and Christopher J. Coyne bring us up to date in “The Political Economy of Military Base Redevelopment” (Eastern Economic Journal, 2025, 51: 7–26). As they write: “Since the initial round in 1988, four subsequent BRAC rounds were completed in 1991, 1993, 1995, and 2005, resulting in the closure of over a hundred major military bases, with the property transferred to local communities for redevelopment.”

But while the process has facilitated base closures, Reilly and Coyne point out that the visible hand of political clout has continued to play a role in the redevelopment process. They write:

In most cases, base property is not simply auctioned off with the rights to that property transferred to private parties. Instead, a political process governs base redevelopment from start to finish. Some forms of property transfer, such as Economic Development Conveyances (EDC) and Public Benefit Transfers (PBT), follow predetermined redevelopment paths with the intention of stimulating economic development or benefiting certain segments of the community. These rules incentivize redevelopment along predetermined lines which results in property being zoned for specific uses over many years. … [B]ase repurposing hinges on a political process where interest groups compete not by offering the highest market bid for reuse rights, but rather through waste-generating rent-seeking activities.

These zoning decisions may represent some mixture of special-interest lobbying, political clout, and attempts to pass costs to others. For example, the California State University Monterey Bay (CSUMB) was established after the closure of Fort Ord on that site. There had not been any previous plan for the establishment of a Cal State campus in that area; earlier reports noted that existing Cal State campuses had plenty of space for projected enrollments. But if the state was getting the land and buildings for “free” (ignoring opportunity cost of alternative uses, of course) and the federal government was chipping in with some spending to build out the rest of the campus, it seemed like a good idea.

Political processes can even lead to gridlock in redevelopment. The interests of those living right beside the former base may not be aligned with the interests of those at the regional or state level (for example, should the base become a nature preserve, a shopping mall, a mixed-use development, or an industrial park?). It can take years, or in some cases decades, for these interjurisdictional issues to be resolved. The authors write: “As of 2017, there remained over 70 thousand acres of base land that had yet to be disposed, representing around 19 percent of the total acreage of closed bases over the five rounds …”

The authors suggest the merits of auctioning off the land from base closures, which has the advantage of straightforwardness–although one expects that politicians would still push to play a heavy role. There is probably an intermediate approach in which politicians work with a master planner who would designate some of the land for parks or transportation or other uses, and then auction off the rest within that established framework. But when “free” land and buildings seem to be available, it’s not easy for local politicians to take a step back.

An obvious question is whether the commission approach might be used to resolve other logjammed issues. Back in the early 1980s, for example, when the Social Security system was verging on insolvency, a National Commission on Social Security Reform headed by Alan Greenspan proposed a set of changes that became the basis for a 1983 law which made the system solvent up until the early 2030s. Maybe it’s time for another such commission? Or imagine if Congress designated a target for spending cuts and another target for tax increases, and then set up two committees to propose how to meet that target. In this case, the arrangement might be that Congress could vote for or against, but any amendment suggesting more spending or lower taxes in one area would need to include a revenue-neutral spending cut or tax increase in another area, to remain within the overall targets. Yes, it would be nice if Congress could debate and vote to address these kinds of issues like adults. But perhaps other arrangements are needed.

The Import-So-That-They-Can-Export Firms

Much of the discussion about trade and imports is based on products and sectors of the economy. But among researchers who study international trade, a major shift has been a focus on the relatively small number of firms that are directly involved in international trade. It turns out that many of these firms are both major importers and major exporters: indeed, as part of a global supply chain, they import intermediate goods in order to add economic value in the US economy while planning to export a finished (or more-finished) product. When you think about what US firms that are involved in international trade actually do, the arguments over tariffs take on a different flavor.

Pol Antràs provides a nice overview of this research in his FBBVA Lecture 2024: “The Uncharted Waters of International Trade,” delivered at the annual meetings of the European Economic Association, and now published in the Journal of the European Economic Association (February 2025, pp. 1–51). Researchers in international trade will be especially interested in the “uncharted waters” for future theoretical and empirical research that Antràs describes. Here, I’ll focus on looking back at the “charted waters” of key facts discovered by research in the previous decade or two.

(For an article from a few years back as this line of research got underway, I can recommend Andrew B. Bernard, J. Bradford Jensen, Stephen J. Redding, and Peter K. Schott, “Firms in International Trade,” from the Summer 2007 issue of the Journal of Economic Perspectives, where I labor as Managing Editor.)

Here’s Antràs with some facts about how only a small share of US firms is involved in exporting.

First, … in the real world, only a small proportion of firms engage in exporting, with most exporting firms targeting just a few markets. … [O]nly 35% of all manufacturing firms in the United States exported in 2007. Furthermore, this is not driven by universal exporting in some sectors and zero exporting in import-competing sectors: The share of firms that export is highest among firms in “Computer and Electronic Products,” reaching 75% export participation, but this share is positive and significantly lower than 50% in most sectors.

Second, the distribution of exporters is highly skewed. Despite accounting for only 0.03% of all US manufacturing firms … the top 1% of exporters accounted for a staggering 80.9% of US manufacturing exports. The top 2%–5% and top 5%–10% accounted for an additional 12.1% and 3.3%, respectively, leaving the contribution of the bottom 90% at a mere 3.7% of total US exports. This phenomenon is not special to the United States. The top 1% of exporters accounted for 77% of exports in Hungary, 68% of exports in France, 59% of exports in Germany, 53% of exports in Norway, 51% of exports in China, 48% of exports in Belgium, 47% of exports in Denmark, 42% of exports in the United Kingdom, and 32% of exports in Italy (Mayer and Ottaviano 2008; Manova and Zhang 2012; Ciliberto and Jäkel 2021). Why are exporters often in the minority, even in an economy’s most competitive sectors, and why are aggregate exports so concentrated among a small number of firms?

The third stylized fact unveiled by empirical work in the late 1990s is that … exporters appear to be systematically different from non-exporters: they are larger, more productive, and operate at higher physical capital and skill intensities. … [T]hese differences are very large. US exporters are on average 1.11 log points (or 203%) larger in terms of employment than non-exporters in the same sector, and even controlling for the number of employees, exporters feature substantially higher sales, labor productivity, total factor productivity (TFP), wages, capital intensity, and skill intensity.
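The conversion from log points to percentages in that passage is easy to verify: a gap of x log points corresponds to a percentage gap of e^x − 1.

```python
import math

# A difference of x log points corresponds to (e^x - 1) in percentage terms.
log_point_gap = 1.11  # exporters vs. non-exporters, employment

percent_larger = (math.exp(log_point_gap) - 1) * 100
print(f"{log_point_gap} log points is about {percent_larger:.0f}% larger")  # ~203%
```

This is why log points and percentages diverge sharply for large gaps: for small x, e^x − 1 ≈ x, but at 1.11 log points the percentage figure is nearly double the naive 111%.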

A similar pattern arises for imports: that is, a relatively small share of firms accounts for a very large share of imports, and most of this trade involves inputs to finished goods, not the finished goods themselves.

Perhaps most notably, the vast majority of world trade is not in finished products: It has been estimated that trade in intermediate inputs accounts for as much as two-thirds of world trade (Johnson and Noguera 2012). This implies that global firms not only export but also import. … More specifically, importers in the United States are in the minority, the distribution of US imports is as skewed as that for exports, importers are larger, more productive, and more capital and skill intensive than non-importers … Antràs, Fort, and Tintelnot (2017) further document that US importers are not only larger than non-importers, but that their relative size advantage is also increasing in the number of countries from which they source.

Indeed, in many cases imports and exports happen within a single firm: that is, the firm owns overseas suppliers and imports from them, and it owns overseas distributors and exports to them: “Using a newly merged data on US firms’ exports and imports, and their global production locations in 2007, Antràs et al. (2024) estimate that around 80% of US exports and imports are accounted for by US firms that manufacture goods both in the US as well as in foreign countries.”

The current high-drama agenda of threatening tariffs, then backing away, then negotiating, then threatening again, all makes for lively headlines and talk shows. Yes, after a transition period of at least several years and likely a decade or more, some of these firms that import-to-export could re-invent their production processes with much more reliance on domestic supply chains. But remember, these import-to-export firms evolved in this way because it was more cost-effective for them to do so–that is, there were gains from trade. These firms buy inputs in global markets either because the products aren’t available in US markets, or are available only at a substantially higher price; similarly, they export because global markets have the necessary demand to absorb the quantities that they produce.

These large US firms that import-to-export, often within the structure of the firm itself, are often among the crown jewels of the US economy. Remember, they are well above average in “sales, labor productivity, total factor productivity (TFP), wages, capital intensity, and skill intensity.” For these kinds of firms, which represent the lion’s share of US trade, the issue with tariffs isn’t about whether a family will be able to afford toys or T-shirts for their children. If these firms end up over time facing both substantially higher tariffs on their imports of inputs for production and retaliatory tariffs on their exports, that policy will cut the heart out of their business model.

Prepping for the Next Pandemic

If you are like me, you spend a certain amount of time trying not to remember the pandemic experience. But the COVID-19 pandemic did cause more than one million American deaths. In a world of sane and sensible prioritizing and policy-making, spending some time and effort focused on how to reduce the risks and costs of a future pandemic seems potentially productive. Alex Tabarrok discusses a few pragmatic possibilities in “Pandemic preparation without romance: insights from public choice” (Public Choice, published online April 16, 2025).

One metaphor for America’s level of unpreparedness for the COVID-19 pandemic is warehouses of rotting N95 masks. Tabarrok notes:

[C]onsider that The Strategic National Stockpile (SNS) of personal protective equipment (PPE) was severely inadequate to meet the demands of the COVID-19 pandemic. At the start of the pandemic, the stockpile had only about 35 million N95 masks on hand, far short of the estimated 3.5 billion that would have been needed to adequately protect healthcare workers and first responders. Moreover, much of the stockpile was rotting as the N95 masks were more than 10 years old by the time of the pandemic.

I should emphasize that Tabarrok is not claiming that his recommended policies, even taken all together, can eliminate the costs of future pandemics. But if we could reduce those costs by, say, just 10%, the US savings alone would have been more than 100,000 lives and more than $1 trillion in lost economic output. Here are four of his options:
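The back-of-the-envelope arithmetic behind that 10% claim is worth making explicit. A sketch, assuming US totals of roughly one million deaths (as stated earlier in this section) and on the order of $10 trillion in lost economic output (an illustrative round number consistent with the text’s figures, not an official estimate):

```python
# Back-of-the-envelope arithmetic for the "reduce pandemic costs by 10%" claim.
# Assumed totals (illustrative, not official estimates):
us_deaths = 1_000_000        # US COVID-19 deaths, roughly, per the text
us_output_loss = 10e12       # lost US economic output in dollars, assumed

reduction = 0.10             # a 10% reduction in pandemic costs

lives_saved = us_deaths * reduction
dollars_saved = us_output_loss * reduction

print(f"Lives saved: {lives_saved:,.0f}")
print(f"Output preserved: ${dollars_saved / 1e12:.1f} trillion")
```

Even under these rough assumptions, the payoff from modest preparedness spending dwarfs the costs of the measures Tabarrok proposes.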

#1: Testing for disease at sewage treatment plants

People infected with SARS-CoV-2 shed genetic material from the virus in their feces (Bivins et al. 2020). Wastewater surveillance can detect the presence, concentration and growth of this genetic material before people present clinically. Thus, wastewater surveillance gives public health officials an early warning which can be used to allocate scarce resources and to implement control measures. More generally, wastewater surveillance can detect a host of viral and bacterial pathogens including influenza viruses, poliovirus, norovirus, hepatitis A and E viruses and bacteria such as Escherichia coli and Salmonella. Wastewater surveillance to monitor antibacterial resistance may be of special importance (Philo et al. 2023; Singer et al. 2023). As with surveillance for SARS-CoV-2, wastewater surveillance more generally can be used to predict disease outbreaks more quickly, track the spread and virulence of pathogens and novel variants of concern, and inform and provide feedback to public health decisions (Wu et al. 2020).

#2: Build a vaccine library, by doing the research in advance on vaccines for viruses most likely to cause an outbreak

In 2016, the WHO identified 11 viruses with the greatest potential to cause severe outbreaks. Gouglas et al. (2018) estimated that developing at least one vaccine candidate for each of these viruses up to phase 2a would cost approximately $2.8 to $3.7 billion in total (see also Krammer 2020). Bringing a vaccine candidate up to phase 2a means designing the vaccine and evaluating it for safety and essentially “proof of concept” in small trials. Prior to a significant outbreak, it would not be possible to run phase 3 efficacy trials. It should be clear that these costs are small, almost trivial, relative to the expected gains. It’s notable that SARS-CoV-1 was on the WHO’s list. The knowledge gained from studying SARS-CoV-1 helped to speed a vaccine for SARS-COV-II but had SARS-COV-I vaccines been developed to Phase 2a prior to the COVID pandemic, for example, we could have likely knocked months off the development process for SARS-COV-II, saving perhaps millions of lives and trillions of dollars worldwide.

#3: When a virus hits, test the vaccines with “human challenge trials”

COVID vaccines were tested through traditional randomized controlled trials (RCTs) in the field. In an RCT, participants are randomly assigned to either a vaccinated (treatment) group or an unvaccinated (control) group, and both groups resume their normal activities until enough participants contract COVID to establish a statistically significant difference in infection rates. A major drawback of RCTs in a pandemic is the unpredictability of reaching the infection threshold required for statistical significance. If infection rates are low or participants take steps to avoid exposure, trials can be prolonged, delaying vaccine rollout. While increasing the trial size can reduce these delays, it also increases the cost and complexity of the trials.

In contrast, in a human challenge trial (HCT), participants are randomly split into two groups and all of them are deliberately exposed to the virus, accelerating the timeline for obtaining results. Since participants are deliberately exposed the number of participants in a human challenge trial can be much smaller than in an RCT, perhaps on the order of 50–100. Most importantly, where an RCT might take years to produce results, a HCT can have results in a matter of months or weeks (Eyal and Lipsitch 2021; Nguyen et al. 2021). For a variety of reasons, HCT are not necessarily full substitutes for RCTs, but they are surely complements and should be used in emergencies.
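The sample-size gap between field RCTs and challenge trials comes down to attack rates. A rough two-proportion power calculation (normal approximation; the attack rates and efficacy below are hypothetical, chosen only to illustrate the orders of magnitude) sketches the difference:

```python
import math

def n_per_arm(p_control, p_treated, z_alpha=1.96, z_beta=0.84):
    """Approximate participants per arm to detect a difference in infection
    rates between two groups (normal approximation; z values correspond to
    two-sided alpha = 0.05 and power = 0.80)."""
    var = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p_control - p_treated) ** 2)

# Field RCT: low background attack rate over the trial window,
# hypothetical 60% vaccine efficacy (0.01 -> 0.004 infection probability).
rct = n_per_arm(p_control=0.01, p_treated=0.004)

# Challenge trial: nearly everyone in the control arm is infected,
# same hypothetical 60% efficacy (0.90 -> 0.36 infection probability).
hct = n_per_arm(p_control=0.90, p_treated=0.36)

print(f"RCT: roughly {rct:,} participants per arm")
print(f"HCT: roughly {hct} participants per arm")
```

The calculation yields thousands of participants per arm for the field RCT versus a handful for the challenge trial; in practice challenge trials enroll more than this bare statistical minimum, for safety monitoring and secondary endpoints, which is consistent with the 50–100 figure quoted above.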

#4: A Pandemic Trust Fund

As another example, some $60 billion was spent on special programs to pay furloughed pilots, flight attendants, and other airline staff as travel demand plummeted. Why? One factor was that the airlines were already well organized and politically active. The airlines, for example, spent over one hundred million dollars on lobbying in the year before the pandemic (Evers-Hillstrom 2020). During the pandemic, the airlines were also joined in their lobbying efforts by the airline unions making for a politically powerful team on both sides of the aisle. The lines of power were also well defined. The airlines knew, for example, which members of Congress sat on the requisite committees and what they needed.

In contrast, OWS [Operation Warp Speed, the program for developing vaccines] was a new program with few concentrated interest groups and no previous lobbying efforts. Although some of the vaccine manufacturers understood lobbying, there was no locus of support in Congress because committee responsibilities for a program like OWS had not been established. OWS was run primarily out of the executive and the DOD [Department of Defense]. The program was also controversial from the beginning and any lobbying at the time from the vaccine manufacturers would have been highly scrutinized.

The lesson from political economy is that we do not want emergency funds to be drawn, or to be perceived to be drawn, from other programs. Pre-approved legal authority to spend is necessary to quickly address a low-probability, high-cost emergency. One way to do this would be to establish a Pandemic Trust Fund (PTF) nominally composed of say $250 billion in US government bonds. The PTF would be something of an accounting fiction, similar to the Social Security Trust Fund, but accounting fictions can have real effects. … By clearly denoting pandemic spending rights, a pandemic trust fund would avoid budget battles in the event of a pandemic. At $250 billion and 3% interest, a PTF could also generate annual revenues of $7.5 billion for ongoing pandemic spending. Some of this spending would be wasted but sausages and legislation both require pork as an input.
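The trust-fund arithmetic in that passage is simple enough to check directly:

```python
# Pandemic Trust Fund arithmetic from the passage: a notional fund of
# $250 billion in US government bonds, paying 3% interest annually.
principal = 250e9        # dollars
interest_rate = 0.03

annual_revenue = principal * interest_rate
print(f"Annual revenue for pandemic preparedness: ${annual_revenue / 1e9:.1f} billion")
```

The point of the design is that the $7.5 billion flow is pre-committed, so emergency spending does not have to be perceived as raiding other programs.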

In the context of total US federal spending, none of these steps is especially costly, but having them in place could make a real difference. As Tabarrok points out, there were plenty of well-publicized warnings in the decade or two before COVID-19 about the risks of pandemics, including high-profile stories in outlets like TIME and CNN, a Bill Gates TED talk seen by millions, and even the 2011 movie Contagion. But when the crunch came, America was not well-prepared. There will be a next time.

How a Small Share of Firms Drive Economic Growth

My guess is that everyone would be happier if economic growth were evenly distributed, so that everyone’s income rose in lockstep. Instead, growth is a disruptive process, with some firms and sectors rising while others decline. As a wise economist once put it, the process of growth could in theory be like “yeast,” with everything expanding at once, or like “mushrooms,” with spurts of growth in certain areas. But most of the time, it’s mushrooms.

A team from the McKinsey Global Institute writes about the mushrooms in “The power of one: How standout firms grow national productivity” (May 6, 2025). The thesis, as stated in the subtitle: “National productivity growth is a matter of few firms taking bold strategic action rather than millions of firms raising efficiency.” For the relatively short time frame they analyze in this study, from 2011 to 2019, this seems likely to be true.

The authors have a dataset of 8,300 firms across the US, UK, and German economies, all with at least 50 employees and many with more than 500 employees, focused on four sectors: retail, automotive and aerospace, travel and logistics, and computers and electronics. They refer to this limited group of companies in each country as a “lab economy.” They define a “Standout” firm as a company whose productivity growth, by itself, adds at least 0.01% to the productivity growth of the entire set of companies in that country’s lab economy. Conversely, they define a “Straggler” firm as a single company that, by itself, subtracts at least 0.01% of productivity growth from the entire economy. Of course, most firms are between these extremes.
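To make the 0.01% threshold concrete, one common approximation treats a firm’s contribution to aggregate productivity growth as its employment share times its own productivity growth. A toy classifier under that assumption (the thresholds come from the report’s definition; the example firm’s numbers are hypothetical):

```python
# Stylized illustration of the "Standout" / "Straggler" thresholds.
# Approximation: a firm's contribution to aggregate productivity growth
# is its employment share times its own productivity growth rate.

STANDOUT_THRESHOLD = 0.0001    # adds at least +0.01% to aggregate growth
STRAGGLER_THRESHOLD = -0.0001  # subtracts at least 0.01%

def contribution(employment_share, firm_productivity_growth):
    return employment_share * firm_productivity_growth

def classify(c):
    if c >= STANDOUT_THRESHOLD:
        return "Standout"
    if c <= STRAGGLER_THRESHOLD:
        return "Straggler"
    return "in between"

# A hypothetical firm with 0.5% of employment and 5% annual productivity
# growth contributes 0.025% to the aggregate, well past the threshold.
c = contribution(0.005, 0.05)
print(classify(c))  # Standout
```

The striking implication of the report is how low the bar is: even a modest-sized firm with strong productivity growth clears the threshold, yet fewer than 100 firms out of 8,300 did.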

Three conclusions from the report seem worth emphasizing, in part as explanations for why the US economy has been outperforming the UK and German economies.

First, a relatively small number of Standouts and Stragglers can drive the overall productivity growth patterns of an economy. The report notes: “Fewer than 100 firms in our sample of 8,300—a group that we have dubbed Standouts—accounted for about two-thirds of the positive productivity gains in each of the three country samples we analyzed. … To give a sense of how important a single firm can be, just another dozen or so of the largest Standouts could have doubled productivity growth in their entire country. … In the United States, for instance, 44 Standouts—5 percent of sample firms, accounting for 23 percent of employment share—generated 78 percent of positive productivity growth. … US Standouts included household names like Apple, Amazon, The Home Depot, and United Airlines.”

Second, the US has a higher proportion of Standouts relative to Stragglers, compared to the UK and Germany: “US productivity growth from 2011 to 2019 was faster than that of the other countries in our sample at 2.1 percent, compared with 0.2 percent in Germany and close to zero in the United Kingdom. … The US sample had three times more Standouts than Stragglers, while the German and UK samples had almost even numbers.”

Third, US Standouts are more likely to grow and expand, while US Stragglers are more likely to contract, compared with the UK and Germany: “Firms in the US sample had more reallocation of employees from less productive to more productive firms. Leaders grew faster, and underperforming firms more swiftly restructured or exited. In the United States, Standouts include scalers (firms far above average sector productivity that contribute by gaining employees) and restructurers (firms with below-average sector productivity that contribute by losing employees). In Germany and the United Kingdom, this was not the case. Rather, these countries preserved underperforming firms as Stragglers. Frontier firms scaling and gaining share added 0.6 percentage point to productivity growth in the United States, and unproductive firms exiting contributed an additional 0.5 percentage point. Overall, dynamic reallocation, including reallocation across subsector boundaries, added 0.9 of 2.1 percentage points—slightly less than half—to productivity growth in the US sample. In contrast, the contribution of reallocation was negligible in Germany and the United Kingdom. This may be explained by the fact that the United States has highly dynamic factor markets, allowing for quick entry and exit as well as fast scale-up and restructuring.”

I’ll add that over longer time periods, the “standout” firms will change, and gradual gains by all of the intermediate firms will loom larger. As the report notes, “The millions of MSMEs [micro, small, and medium sized enterprises] outside our sample collectively contributed up to 30 percent of productivity growth in the four sectors in the national statistics. Indeed, a handful of them may emerge as the Standouts of tomorrow.”

Perhaps the bigger lesson is that all nations claim that they want dynamic standout “superstar” firms (for previous discussions of the role of such firms, see here and here). But then, when those dynamic firms start expanding, they create economic disruption and start driving other competitors out of business. At that point, political pressure will arise to rein them in. But sustained economic growth, at least in the short- and medium-run, is typically mushrooms, not yeast.

What’s a New Drug Worth?

In a juxtaposition of events that redefines the meaning of “coincidence,” President Trump announced a new policy for prescription drug pricing this morning, and the Spring 2025 issue of the Journal of Economic Perspectives, released three days ago on Friday morning, begins with a four-paper symposium on drug pricing. (Full disclosure: I work as Managing Editor of the JEP, so this coincidence was perhaps more apparent to me than to others.) The four JEP papers, by Kyle, Hemphill and Sampat, Conti and Wosińska, and Garthwaite, are discussed below.

Trump’s proposal starts from the well-known fact that US consumers pay higher prices for brand-name prescription drugs than buyers in other countries. His executive order (yet to be tested in court) would require that US consumers pay prices for drugs no higher than those charged in other countries. The JEP paper by Margaret Kyle puts this proposal in context.

Kyle points out that Trump’s proposal fits under the category of “external reference pricing,” which is to say that US drug prices for brand-name drugs would be set based on prices in other countries. Of course, if this were to happen, the players in the market would adjust: for example, drug companies would probably seek to charge more for brand-name drugs in other countries. Trump’s executive order does not differentiate between brand-name and generic drugs, but the logic of the order suggests the possibility of higher US prices for generic drugs.

Kyle points out that many European countries already have a version of “external reference pricing”–in which prices for a drug in one European country are not supposed to be more than in neighboring countries. Strategic maneuvering results. Kyle writes:

A less optimistic assessment of external reference pricing considers the European experience. As noted above, external reference pricing like this would induce a number of strategic responses from other stakeholders. These include delayed launch and/or supply limitations to lower-price markets, as well as efforts to make products less comparable across countries (Kyle 2007, 2011; Maini and Pammolli 2023). … Some European countries also use hidden rebates. For example, the use of France as a reference by other countries ultimately led to agreements between manufacturers and the government to establish a public price as well as secret rebates paid by manufacturers back to the government (Kanavos et al. 2017). This allows the official price (that which is referenced by other countries) to be higher, like the list price in the United States, than what is in fact paid. These nonpublic prices have prompted calls for greater price transparency, but the effects of increased transparency here are ambiguous. When (true) prices are secret, a manufacturer can more easily lower its price in a country, because it sees no negative consequences from having that secret price referenced by other countries. In concentrated markets, transparent prices could also facilitate collusion by manufacturers. However, nonpublic prices make economic assessments much more challenging. The evidence suggests that US adoption of reimportation or external reference pricing would have only modest effects on US drug prices (but would probably reduce access or price transparency in other countries).

But there are two elephants in the room along with this discussion. One is that the higher prices for brand-name drugs paid by Americans also fund the research and development costs of pharmaceutical companies. The Trump administration is seeking to cut government support for R&D in other ways, like reducing grants given through the National Science Foundation. If we are threatening to cut off the sources of funding for pharmaceutical R&D, it raises a fundamental question: What’s a new drug worth, anyway?

The fundamental tradeoff in US pharma markets is that drug companies do research, get patents, and then charge a lot for brand-name drugs. But after the patents expire, the drugs become available in generic versions, where US consumers actually pay less than those in other countries. Hemphill and Sampat point out in their JEP article how this tradeoff was formalized into law 40 years ago with the Hatch-Waxman Act. As Conti and Wosińska point out in their JEP article: “In 2023, 92 percent of US drug prescriptions were filled as generics, representing less than 13 percent of overall invoice spending on drugs …”

Of course, a primary benefit of new drugs is their health benefits. In JEP, Garthwaite sketches some past and future benefits of new drugs:

Pharmaceutical innovations are responsible for 35 percent of the remarkable decline in cardiovascular mortality from 1990 to 2015 (Buxbaum et al. 2020). Previously deadly conditions such as HIV/AIDS have been transformed into manageable chronic maladies and others such as hepatitis-C have been cured. Gene therapies are becoming more commonplace as treatments for a wide range of rare and deadly genetic conditions. Advancements in immuno-oncology are providing meaningful advances across a variety of cancers as the body’s natural systems are used to combat cancer. Most recently, the first truly effective treatments for obesity in the form of GLP-1 agonists have emerged with corresponding improvements across a host of cardiometabolic outcomes such as heart disease, diabetes, and chronic kidney disease.

However, the benefits of successful pharma R&D go beyond immediate health benefits for the ill. Garthwaite writes:

[M]edical technologies transform the medical risk individuals face (that is, becoming afflicted with a condition for which there is no treatment) into a financial risk (that is, finding a way to finance the purchase of medical innovations if they get sick (Lakdawalla, Malani, and Reif 2017). All risk-averse consumers should value this reduction in health variance. Indeed, the insurance value of the new innovation can even exceed the value of health insurance in the first place, especially for disease areas where the existing treatment armamentarium is quite poor and the physical effects of the condition are quite severe. This could explain why many treatments for rare diseases so often exceed several thresholds based solely on clinical value. Another gain from new drugs is that scientific progress is often iterative, building on the knowledge and insights from previous advances. Thus, an optimal level of innovation will only be achieved to the extent the eventual value created for society by the next generation of innovations is in some way accounted for in revenues for the manufacturers making incremental progress. … Consider how medical innovations can change available treatment options for individuals who are not yet afflicted, but could become sick in the future.

To put it more bluntly, none of us knows what health conditions we or our loved ones may face in the future. Successful new drugs reduce this risk of what might happen. Paying a lot for a new drug when you need it is no fun, but not having the drug available at all is probably worse.

The other elephant in the room is the long-term health of the pharmaceutical industry. The Trump administration has put a high priority on supporting US producers in many industries. Well, US firms account for 40-50% of global pharmaceutical sales, according to industry sources. There are about 350,000 US jobs in “Pharmaceutical and Medicine Manufacturing.” The success of the US firms is driven by their spending 20% or more of revenue on research and development, most years. In short, policies that dramatically reduce R&D spending by pharma companies will kneecap their ability to stay ahead as leading exporters in global markets, and pose a threat to several hundred thousand US jobs.

The papers in the JEP symposium discuss a variety of potentially useful mechanisms for negotiating lower drug prices for US consumers that do not threaten to cut off the future pipeline of new drugs.

But clearly, President Trump prefers what might be called a bumper-car approach to issues: that is, ram full-speed into a problem with a half-baked proposal, then spin the wheel back and forth while backing rapidly away, then ram full-speed into the same problem again, and so on. Whatever the merits or demerits of this approach as a negotiating strategy, R&D projects are long-run investments that pay off only over extended periods of time. Playing bumper-car games means that industry will focus on projects with a more immediate payoff, while reducing or postponing projects that would only have longer-run payoffs. But it will be very hard to identify those groups of future patients who suffer because future breakthroughs in new drug therapies are delayed, or don’t happen at all.