Will the Courts Save Trump from His Tariffs?

The US Court of International Trade has acted to block pretty much all of President Trump’s tariffs. I guess the first question is “what the heck is the US Court of International Trade?” The story seems to be that back in 1890, Congress created a “Board of General Appraisers, a quasi-judicial administrative unit within the Treasury Department. The nine general appraisers reviewed decisions by United States Customs officials …” In 1926, Congress replaced the Board of General Appraisers with the US Customs Court. The status of this court evolved over time, and in 1980 it became the US Court of International Trade, a “national court established under Article III of the Constitution”–the part of the Constitution that establishes the federal judicial branch.

I’ve written before that a legal challenge to the Trump tariffs seemed inevitable. The key issue is that Article I of the US Constitution–the part which lays out the structure and powers of the legislative branch–states in Section 8: “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States …” Over the years, Congress has written a number of exceptions into the law. For example, the International Emergency Economic Powers Act of 1977 (IEEPA) lets the President address “unusual and extraordinary” peacetime threats. Thus, when Iran took US hostages in 1979, President Carter could immediately respond with trade sanctions.

The legal question is whether President Trump has the authority to invoke the “emergency” provisions and rewrite all tariffs for countries and goods all around the globe in whatever way he wishes. The law firm Reed Smith has been producing a “tariff tracker” that shows the results. The US Court of International Trade held that Trump has stretched the “emergency” provision considerably too far, that US importing firms are adversely affected, and that previous laws do not mean that Congress has given away all of its Constitutional power in this area to the President. (For those who keep score in this way, the Court decision was a 3-0 vote, and the three judges were appointed by Trump, Obama, and Reagan.)

The Trump administration justifications for its tariffs are full of goofy statements. For example, President Trump argues that the tariffs will all be paid by foreign companies, with no effect on US consumers and firms. Seems unlikely, but say that it’s true. In that case, foreign exporters to the US would have lower profits, but would be exporting the same quantity of goods at the same price to US markets. The idea that tariffs won’t affect the quantities or prices of what foreign exporters sell in the US market is inconsistent with the idea that the tariffs will give breathing space to US producers.
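To make the logic of that paragraph concrete, here is a minimal sketch in Python using the standard partial-equilibrium tariff pass-through approximation. This framing is my addition, not something from the post, and the elasticity and tariff numbers below are invented for illustration: the share of a tariff that shows up in US prices is roughly the foreign export-supply elasticity divided by the sum of the supply and import-demand elasticities. The administration’s “foreigners pay it all” claim corresponds to a supply elasticity of zero–and in that case quantities don’t change either, so there is no “breathing space” for US producers.

```python
# Sketch of tariff incidence under linear/constant-elasticity approximations.
# All parameter values are hypothetical illustrations, not estimates.

def pass_through(eps_supply: float, eps_demand: float) -> float:
    """Fraction of a tariff that shows up in the US price of imports."""
    return eps_supply / (eps_supply + eps_demand)

def quantity_change(tariff_rate: float, eps_supply: float, eps_demand: float) -> float:
    """Approximate proportional change in import quantity from a tariff.

    US buyers see prices rise by pass_through * tariff_rate and cut
    purchases according to the (absolute) demand elasticity.
    """
    return -eps_demand * pass_through(eps_supply, eps_demand) * tariff_rate

# Case 1: foreign supply perfectly inelastic, so exporters absorb the
# whole tariff (the administration's claim). US prices don't rise --
# and therefore import quantities don't change either.
print(pass_through(0.0, 2.0))  # 0.0: none of the tariff reaches US prices

# Case 2: moderately elastic foreign supply. Now US prices rise and
# imports fall -- but then US consumers and firms are paying the tariff.
print(round(pass_through(4.0, 2.0), 2))           # 0.67 of the tariff hits US prices
print(round(quantity_change(0.25, 4.0, 2.0), 3))  # -0.333: imports fall about a third
```

Either way, the two halves of the administration’s story cannot both hold: zero pass-through means zero protection, and positive protection means positive pass-through.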

Or Secretary of Commerce Howard Lutnick explained in an interview a few weeks ago what kind of manufacturing jobs were going to return from China to the United States. He said, “The army of millions and millions of human beings screwing in little screws to make iPhones — that kind of thing is going to come to America …” I suppose Lutnick deserves some credit for expressing a concrete idea, but my guess is that most supporters of Trump’s tariffs do not have in mind a US economy based on an “army of millions and millions of human beings screwing in little screws to make iPhones …”

The trade economist Richard Baldwin has just published an e-book called The Great Trade Hack: How Trump’s trade war fails and global trade moves on. He rehearses the arguments over tariffs at some length: how they will not reduce the trade deficit, or revive US manufacturing, or help the middle class. Baldwin writes fluently, and this book is for a generalist readership. Here, I want to touch on one of Baldwin’s themes that I haven’t discussed recently. He writes:

Tariffs persist precisely because they fail economically, yet succeed politically. They provide symbolic relief, project toughness, and shift blame onto external actors without confronting difficult domestic policy challenges like higher taxes or expanded social programmes. …

Tariffs don’t coordinate investment across firms and sectors. They don’t train workers. They don’t bridge skill gaps or modernise vocational education. They don’t fund infrastructure, improve logistics, or support research and development. They don’t unlock capital, align upstream and downstream firms, or connect regions to supply chains. In short, tariffs can defend an industrial base, but they cannot create one.

Reindustrialisation requires more than tweaking relative prices. It needs a strategy. A real one. With planning, sequencing, and sustained commitment. It needs a trained workforce, one that matches the needs of 21st-century manufacturing. And to get those workers, federal and local governments must partner with industry. Firms can’t do it alone. No company will invest heavily in training workers if they’re unsure those workers will stay once their skills are upgraded. That’s why, in most countries, governments step in – funding training with tax dollars to solve the coordination problem. It’s a public good with private benefits, and it only works when governments and employers pull in the same direction.

It also needs reliable infrastructure, stable regulation, and targeted investment incentives. It needs the trust of industrialists – not just that the cost of imported goods will be higher this year, but that America will be a profitable place to make things for decades to come. And this is critical: building a modern manufacturing operation is a long-term proposition. From planning to permitting, from equipment procurement to workforce training, the timeline is measured in years, not months. For investors to commit, they need confidence that support policies – tariffs, subsidies, tax credits, training programmes – will remain in place long enough to generate a return. If the policy environment is unpredictable or politicised, those factories won’t get built.

That’s the real shortcoming of Trump’s pray-and-spray, tariff-first and tariffs-only approach to reshoring manufacturing. There’s no plan to use the breathing room tariffs might create. Without that plan, the most likely outcomes from the 2 April tariffs are higher prices, reduced manufacturing, riled allies, and retaliation against exports from industries where America is competitive today.

We are seeing with President Trump one of the dangers of electing someone with a business background to government: the lessons of business and government overlap in some areas, but are not the same. Baldwin writes:

Trump’s real estate experience also taught him one simple rule: the seller is ripping off the buyer. From that premise, it’s just a logic hop-skip-and-jump to the idea – which the President is firmly convinced of and which shapes his attitude towards trade – that a bilateral trade deficit is theft. … This notion is completely false – as anyone versed in mainstream, positive-sum business practices would attest. Nevertheless, it is a cornerstone of Trump’s belief system.

I’m confident that the US Court of International Trade decision will be appealed to the US Supreme Court, but I suspect that President Trump might benefit politically if the courts take his tariff plans off the table. Trump blocked by the courts is a powerful political force. On the other side, if Trump is forced to face the actual effects of his tariffs, my expectation is that as the gains from international trade are diminished, he won’t come out looking so good.

Permitting Reform: The Supreme Court Weighs In

The National Environmental Policy Act, commonly known as NEPA, requires that large projects obtain federal environmental permits if they cross state borders or federal property (including not just parks, but also interstate highways). Many states and localities have permitting processes as well. If you believe that the US needs a wave of building–perhaps to produce green energy and the associated electricity transmission lines, or perhaps for additional housing development, or perhaps to expand mass transit in cities, or perhaps to build the data centers needed to run the new AI tools, or perhaps to build the factories for the US jobs of the future–then you should be concerned that the lawsuits from small and unrepresentative groups enabled by NEPA are a cause of serious delay.

I’ve written about permitting reform before. For example, Zachary Liscow wrote in the Winter 2025 issue of the Journal of Economic Perspectives on “Getting Infrastructure Built: The Law and Economics of Permitting.” Broadly speaking, his notion is to find ways to get broad public input earlier in the permitting process; if such input is collected and taken into account, then courts would be quite hesitant to let a lawsuit from a small special-interest group block a project. Or for wincing and giggles, consider the figure accompanying this post on “What Permits are Needed for New Electricity Transmission Lines?”

Now the US Supreme Court has weighed in, in the case of Seven County Infrastructure Coalition, et al., vs. Eagle County, Colorado. The decision was released earlier today. Here’s the fact setting as described by the court:

Under federal law, new railroad construction and operation must first be approved by the U. S. Surface Transportation Board. 49 U. S. C. §10901. In 2020, the Seven County Infrastructure Coalition applied to the Board for approval of an 88-mile railroad line connecting Utah’s oil-rich Uinta Basin to the national freight rail network, facilitating the transportation of crude oil to refineries along the Gulf Coast. As part of its project review, the Board prepared an environmental impact statement (EIS) that addressed significant environmental effects of the project and identified feasible alternatives that could mitigate those effects, as required by the National Environmental Policy Act (NEPA). The Board issued a draft EIS and invited public comment. After holding six public meetings and collecting more than 1,900 comments, the Board prepared a 3,600-page EIS that analyzed numerous impacts of the railway’s construction and operation. Relevant here, the EIS noted, but did not fully analyze, the potential environmental effects of increased upstream oil drilling in the Uinta Basin and increased downstream refining of crude oil. The Board subsequently approved the railroad line, concluding that the project’s transportation and economic benefits outweighed its environmental impacts. Petitions challenging the Board’s action were filed in the D. C. Circuit by a Colorado county and several environmental organizations. The D. C. Circuit found “numerous NEPA violations arising from the EIS.” 82 F. 4th 1152, 1196. Specifically, the D. C. Circuit held that the Board impermissibly limited its analysis of the environmental effects from upstream oil drilling and downstream oil refining projects, concluding that those effects were reasonably foreseeable impacts that the EIS should have analyzed more extensively.

You can see the issue here. The Environmental Impact Statement focused on the construction and operation of the 88 miles of railroad track. However, it did not address “upstream” and “downstream” issues, like the costs and benefits of increased oil drilling in Utah’s Uinta Basin, or the effects of additional oil at Gulf refineries, or perhaps even the basic question of whether US oil production should rise or fall.

The Court’s decision was 8-0 (Justice Gorsuch did not take part). The main opinion says:

Some courts have strayed and not applied NEPA with the level of deference demanded by the statutory text and this Court’s cases. Those decisions have instead engaged in overly intrusive (and unpredictable) review in NEPA cases. Those rulings have slowed down or blocked many projects and, in turn, caused litigation-averse agencies to take ever more time and to prepare ever longer EISs for future projects.

The upshot: NEPA has transformed from a modest procedural requirement into a blunt and haphazard tool employed by project opponents (who may not always be entirely motivated by concern for the environment) to try to stop or at least slow down new infrastructure and construction projects. Some project opponents have invoked NEPA and sought to enlist the courts in blocking or delaying even those projects that otherwise comply with all relevant substantive environmental laws. Indeed, certain project opponents have relied on NEPA to fight even clean-energy projects—from wind farms to hydroelectric dams, from solar farms to geothermal wells. See, e.g., Brief for Chamber of Commerce of the United States of America, et al. as Amici Curiae 19–20.

All of that has led to more agency analysis of separate projects, more consideration of attenuated effects, more exploration of alternatives to proposed agency action, more speculation and consultation and estimation and litigation. Delay upon delay, so much so that the process sometimes seems to “borde[r] on the Kafkaesque.” Vermont Yankee, 435 U. S., at 557. Fewer projects make it to the finish line. Indeed, fewer projects make it to the starting line. Those that survive often end up costing much more than is anticipated or necessary, both for the agency preparing the EIS and for the builder of the project. And that in turn means fewer and more expensive railroads, airports, wind turbines, transmission lines, dams, housing developments, highways, bridges, subways, stadiums, arenas, data centers, and the like. And that also means fewer jobs, as new projects become difficult to finance and build in a timely fashion. A 1970 legislative acorn has grown over the years into a judicial oak that has hindered infrastructure development “under the guise” of just a little more process.

The United States is a famously litigious society, and there will always be a small interest group that wants to sue–not because it wants the project to be done better, but because it doesn’t want the project at all. Having the Supreme Court alter the interpretation of the law in this way may be an imperfect way to proceed, but one way or another, some pushback on the current permitting process was in the wind.

US State-Level Abortion Regulations: Causes and Effects

Regulations about abortion are often wildly controversial. But what effects do they actually have? Caitlin Myers addresses these issues in “From Roe to Dobbs: 50 Years of Cause and Effect of US State Abortion Regulations” (Annual Review of Public Health 2025, pp. 433-446).

As a starting point, consider the years before and after the 1973 US Supreme Court decision in Roe v. Wade that struck down existing abortion restrictions across the country. The left-hand panel shows the states that had repealed their bans on abortion before Roe in purple, those that had relaxed but not eliminated their bans before Roe in pink, and those in which abortion was legalized by Roe in gray. In the purple states that had already repealed their bans on abortion, the number of abortions had risen in the years before Roe, but had then started declining–and the decline continued after Roe was decided. Part of the reason for the decline in the early-legalization states is that, after Roe, women no longer had to travel from other states where abortion was illegal. In the other groups of states, the number of abortions rose.

As Myers argues, the effect on abortion levels of the states that repealed their abortion bans before 1973 was very large–probably larger than the increase in abortions following the Roe decision. She writes:

Of the three broad policy changes liberalizing abortion access—early reforms, early repeal, and repeal with Roe—it is early repeal that results in the greatest effects on national abortion and birth rates. As Joyce et al. (51) conclude following a detailed analysis of the effects of distance to early repeal states, “The story that emerges from these data is that…Roe v. Wade was arguably less important for unintended childbearing than was access to services in California, the District of Columbia and especially New York in the years before Roe” (pp. 813–14) because so many people were able to travel to these early repeal states even if their state of residence had not yet legalized abortion.

States then tested the limits of what the Supreme Court would allow with a variety of restrictions: mandatory waiting periods before an abortion, mandatory counseling before an abortion, different types of content that might be involved in that counseling, parental permission for teenagers and/or spousal permission for wives, whether Medicaid funding could be used to pay for abortions, whether abortions needed to be performed in or near hospitals, which doctors were allowed to perform abortions, and others. This array of rules–as they were proposed, passed or failed in legislatures, and were upheld or not by courts–provides a rich set of contexts for researchers.

Here’s one example. In North Carolina in the 1980s and into the 1990s, there was a state fund to pay for abortions for low-income women: in this way, the state did not draw on federal Medicaid funds to pay for abortions. But the state fund sometimes ran out of money. Myers writes: “Cook et al. (25) exploit a natural experiment that took place within North Carolina between 1980 and 1994 when the state abortion fund ran out of money on five different occasions. Comparing changes in outcomes among women seeking abortions and eligible for funding, the authors conclude that when funding is unavailable, about one-third of pregnancies that would have been terminated are instead carried to term …”

This kind of study is referred to as a “natural experiment”–that is, there was no plan for the North Carolina fund to run out of money. It seems unlikely that sexual activity in North Carolina was being adjusted according to the state of the fund. Instead, some North Carolina women seeking abortions found that funding was available, and others didn’t, and this had an effect on their decisions.

Myers goes into detail in considering the array of natural experiments that have been analyzed. For example, when a state altered its abortion laws, women who lived relatively close to that state were also affected, because it was relatively easy for them to travel to that state, while women living farther away were less affected, because their costs of travelling to that state were higher. Those interested in the application of difference-in-differences statistical methods may want to check out the paper.
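For readers who haven’t seen it, the basic difference-in-differences logic behind a natural experiment like the North Carolina funding gaps can be sketched in a few lines of Python. The function and every number below are hypothetical illustrations of the method, not figures from Myers’s survey or from the underlying studies:

```python
# Minimal 2x2 difference-in-differences sketch. All numbers are invented
# purely to illustrate the method; they are not estimates from any study.

def did_estimate(treated_before: float, treated_after: float,
                 control_before: float, control_after: float) -> float:
    """Change in the treated group minus change in the control group."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical average birth rates (births per 1,000 women):
# "treated" = women eligible for the fund, comparing periods when the
# fund was available vs. when it had run dry;
# "control" = otherwise-similar women unaffected by the fund's balance.
effect = did_estimate(
    treated_before=60.0, treated_after=66.0,
    control_before=58.0, control_after=59.0,
)
print(effect)  # 5.0: the extra change among eligible women, beyond the
               # background trend picked up by the control group
```

The subtraction of the control group’s change is what lets the researcher attribute the remaining difference to the funding gap, under the usual “parallel trends” assumption that both groups would otherwise have moved alike.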

Here, I’ll mention some of the bottom lines of this survey of the evidence (citations omitted here, but they appear in the article itself): When and where abortion is more restricted, birth rates are higher. Higher birth rates, especially for women at younger ages, are associated with lower levels of educational achievement, and thus with lasting effects on employment outcomes. These effects are typically larger for black women than for white women.

What about the period since the 2022 US Supreme Court decision in Dobbs v. Jackson, which struck down Roe v. Wade and thus gave states much wider latitude in setting abortion laws? Of course, the evidence on this point is still evolving, and the setting for abortion is now rather different than it was before 1973. Myers notes:

  • “Abortion prior to 12 weeks’ gestation remains legal in 34 states (65) and many states have bolstered their protections (22), providing many more destinations than existed in 1971, when abortion was legal in only 6 jurisdictions.”
  • “The delivery of abortion services has also evolved, with a major shift occurring in 2000 when the US Food and Drug Administration (FDA) approved the drug mifepristone for the termination of pregnancies. The proportion of medication abortions grew rapidly, from 6% of all abortions in 2001 to 39% in 2017.”
  • “[I]n December 2021 the FDA lifted the restriction permanently (55), allowing health care providers to dispense abortion medications directly to patients via mail without requiring the patient to receive in-person consultation or tests (85). This expanded abortion access in the 32 states that did not restrict telehealth abortion (5), likely fueling the rise in medication abortions to 63% of all abortions by 2023 … By the end of 2023, telehealth accounted for nearly 1 in 5 abortions in the United States (83), and national abortions had actually risen relative to pre-Dobbs levels …”
  • “Yet not everyone seeking an abortion can find a way to drive hundreds of miles to reach facilities in nonban states or will find telehealth medication abortion an acceptable option. Near-total abortion bans enforced in the first 6 months after Dobbs are estimated to have increased births in ban states by an average of 2.3% relative to if no ban had been enforced (26). The estimated effects of bans on fertility are greater in states where distances are greatest, reaching 4.4% in Mississippi and 5.0% in Texas …”

In addition, teenage birth rates have fallen dramatically over the last three decades for an array of reasons not directly related to availability of abortion: less sexual activity, greater use of contraception, and more broadly, a larger share of young women viewing their early adulthood as a time for education and job experience, with later ages for marriage and childbearing.

Update on the Military Base Realignment and Closure Process

One of the strongest examples of how a commission report can overcome problems with a legislative process involves the Base Realignment and Closure Process that started in 1988, and has now gone through five rounds. The challenge was that the number of active duty US military personnel had risen to about 3.5 million during the Vietnam War in the late 1960s, but then had fallen to about 2 million by the later part of the 1980s. It was obvious that the number of military bases should also decline, but the US Congress had a very hard time doing it. Any Congressperson who had a military base in or near their district or state would not vote for cutting their own base; moreover, they would not vote for cutting bases in other places–out of fear that the base in their own district might be the next to go. Literally no military bases were closed between 1977 and 1987.

The idea behind the Base Realignment and Closure Process was to have an outside commission draw up a list of bases to be closed and a timeline for closing them. Congress could then vote for or against the proposed list as a whole–but Congress committed in advance not to amend the list. Chandler S. Reilly and Christopher J. Coyne bring us up to date in “The Political Economy of Military Base Redevelopment” (Eastern Economic Journal, 2025, 51: 7–26). As they write: “Since the initial round in 1988, four subsequent BRAC rounds were completed in 1991, 1993, 1995, and 2005, resulting in the closure of over a hundred major military bases, with the property transferred to local communities for redevelopment.”

But while the process has facilitated base closures, Reilly and Coyne point out that the visible hand of political clout has continued to play a role in the redevelopment process. They write:

In most cases, base property is not simply auctioned off with the rights to that property transferred to private parties. Instead, a political process governs base redevelopment from start to finish. Some forms of property transfer, such as Economic Development Conveyances (EDC) and Public Benefit Transfers (PBT), follow predetermined redevelopment paths with the intention of stimulating economic development or benefiting certain segments of the community. These rules incentivize redevelopment along predetermined lines which results in property being zoned for specific uses over many years. … [B]ase repurposing hinges on a political process where interest groups compete not by offering the highest market bid for reuse rights, but rather through waste-generating rent-seeking activities.

These zoning decisions may represent some mixture of special-interest lobbying, political clout, and attempts to pass costs to others. For example, California State University, Monterey Bay (CSUMB) was established after the closure of Fort Ord on that site. There had not been any previous plan for the establishment of a Cal State campus in that area; earlier reports noted that existing Cal State campuses had plenty of space for projected enrollments. But if the state was getting the land and buildings for “free” (ignoring the opportunity cost of alternative uses, of course) and the federal government was chipping in with some spending to build out the rest of the campus, it seemed like a good idea.

Political processes can even lead to gridlock in redevelopment. The interests of those living right beside the former base may not be aligned with the interests of those at the regional or state level (for example, should the base become a nature preserve, a shopping mall, a mixed-use development, or an industrial park?). It can take years, or in some cases decades, for these interjurisdictional issues to be resolved. The authors write: “As of 2017, there remained over 70 thousand acres of base land that had yet to be disposed, representing around 19 percent of the total acreage of closed bases over the five rounds …”

The authors suggest the merits of auctioning off the land from base closures, which has the advantage of straightforwardness–although one expects that politicians would still push to play a heavy role. There is probably an intermediate approach in which politicians work with a master planner who would designate some of the land for parks or transportation or other uses, and then auction off the rest within the framework established. But when “free” land and buildings seem to be available, it’s not easy for local politicians to take a step back.

An obvious question is whether the commission approach might be used to resolve other logjammed issues. Back in the early 1980s, for example, when the Social Security system was verging on insolvency, a National Commission on Social Security Reform headed by Alan Greenspan proposed a set of changes that became the basis for a 1983 law which made the system solvent up until the early 2030s. Maybe it’s time for another such commission? Or imagine if Congress designated a target for spending cuts and another target for tax increases, and then set up two committees to propose how to meet those targets. In this case, the arrangement might be that Congress could vote for or against, but any amendment suggesting more spending or lower taxes in one area would need to include a revenue-neutral spending cut or tax increase in another area, to remain within the overall targets. Yes, it would be nice if Congress could debate and vote to address these kinds of issues like adults. But perhaps other arrangements are needed.

The Import-So-That-They-Can-Export Firms

Much of the discussion about trade and imports is framed in terms of products and sectors of the economy. But among researchers who study international trade, a major shift has been a focus on the relatively small number of firms that are directly involved in international trade. It turns out that many of these firms are both major importers and major exporters: indeed, as part of a global supply chain, they import intermediate goods in order to add economic value in the US economy while planning to export a finished (or more-finished) product. When you think about what US firms that are involved in international trade actually do, the arguments over tariffs take on a different flavor.

Pol Antràs provides a nice overview of this research in his FBBVA Lecture 2024: “The Uncharted Waters of International Trade,” delivered at the annual meetings of the European Economic Association, and now published in the Journal of the European Economic Association (February 2025, pp. 1-51). Researchers in international trade will be especially interested in the “uncharted waters” for future theoretical and empirical research that Antràs describes. Here, I’ll focus on looking back at the “charted waters” of key facts discovered by research in the previous decade or two.

(For an article from a few years back as this line of research got underway, I can recommend Andrew B. Bernard, J. Bradford Jensen, Stephen J. Redding, and Peter K. Schott, “Firms in International Trade,” from the Summer 2007 issue of the Journal of Economic Perspectives, where I labor as Managing Editor.)

Here’s Antràs with some facts about how only a small share of US firms is involved in exporting.

First, … in the real world, only a small proportion of firms engage in exporting, with most exporting firms targeting just a few markets. … [O]nly 35% of all manufacturing firms in the United States exported in 2007. Furthermore, this is not driven by universal exporting in some sectors and zero exporting in import-competing sectors: The share of firms that export is highest among firms in “Computer and Electronic Products,” reaching 75% export participation, but this share is positive and significantly lower than 50% in most sectors.

Second, the distribution of exporters is highly skewed. Despite accounting for only 0.03% of all US manufacturing firms … the top 1% of exporters accounted for a staggering 80.9% of US manufacturing exports. The top 2%–5% and top 5%–10% accounted for an additional 12.1% and 3.3%, respectively, leaving the contribution of the bottom 90% at a mere 3.7% of total US exports. This phenomenon is not special to the United States. The top 1% of exporters accounted for 77% of exports in Hungary, 68% of exports in France, 59% of exports in Germany, 53% of exports in Norway, 51% of exports in China, 48% of exports in Belgium, 47% of exports in Denmark, 42% of exports in the United Kingdom, and 32% of exports in Italy (Mayer and Ottaviano 2008; Manova and Zhang 2012; Ciliberto and Jäkel 2021). Why are exporters often in the minority, even in an economy’s most competitive sectors, and why are aggregate exports so concentrated among a small number of firms?

The third stylized fact unveiled by empirical work in the late 1990s is that … exporters appear to be systematically different from non-exporters: they are larger, more productive, and operate at higher physical capital and skill intensities. … [T]hese differences are very large. US exporters are on average 1.11 log points (or 203%) larger in terms of employment than non-exporters in the same sector, and even controlling for the number of employees, exporters feature substantially higher sales, labor productivity, total factor productivity (TFP), wages, capital intensity, and skill intensity.

A similar pattern arises for imports: that is, a relatively small share of firms accounts for a very large share of imports, and most of this trade involves inputs to finished goods, not the finished goods themselves.

Perhaps most notably, the vast majority of world trade is not in finished products: It has been estimated that trade in intermediate inputs accounts for as much as two-thirds of world trade (Johnson and Noguera 2012). This implies that global firms not only export but also import. … More specifically, importers in the United States are in the minority, the distribution of US imports is as skewed as that for exports, importers are larger, more productive, and more capital and skill intensive than non-importers … Antràs, Fort, and Tintelnot (2017) further document that US importers are not only larger than non-importers, but that their relative size advantage is also increasing in the number of countries from which they source.

Indeed, in many cases imports and exports happen within a single firm: that is, the firm owns overseas suppliers and imports from them, and it owns overseas distributors and exports to them: “Using newly merged data on US firms’ exports and imports, and their global production locations in 2007, Antràs et al. (2024) estimate that around 80% of US exports and imports are accounted for by US firms that manufacture goods both in the US as well as in foreign countries.”

The current high-drama agenda of threatening tariffs, then backing away, then negotiating, then threatening again, all makes for lively headlines and talk shows. Yes, after a transition period of at least several years and likely a decade or more, some of these firms that import-to-export could re-invent their production processes with much more reliance on domestic supply chains. But remember, these import-to-export firms evolved in this way because it was more cost-effective for them to do so–that is, there were gains from trade. These firms buy inputs in global markets either because the products aren’t available in US markets, or are available only at a substantially higher price; similarly, they export because global markets have the necessary demand to absorb the quantities that they produce.

These large US firms that import-to-export, often within the structure of the firm itself, are among the crown jewels of the US economy. Remember, they are well above average in “sales, labor productivity, total factor productivity (TFP), wages, capital intensity, and skill intensity.” For these kinds of firms, which represent the lion’s share of US trade, the issue with tariffs isn’t about whether a family will be able to afford toys or T-shirts for their children. If these firms end up over time facing both substantially higher tariffs on their imports of inputs for production and retaliatory tariffs on their exports, that policy will cut the heart out of their business model.

Prepping for the Next Pandemic

If you are like me, you spend a certain amount of time trying not to remember the pandemic experience. But the COVID-19 pandemic did cause more than one million American deaths. In a world of sane and sensible prioritizing and policy-making, spending some time and effort focused on how to reduce the risks and costs of a future pandemic seems potentially productive. Alex Tabarrok discusses a few pragmatic possibilities in “Pandemic preparation without romance: insights from public choice” (Public Choice, published online April 16, 2025).

One metaphor for America’s level of unpreparedness for the COVID-19 pandemic is warehouses of rotting N95 masks. Tabarrok notes:

[C]onsider that The Strategic National Stockpile (SNS) of personal protective equipment (PPE) was severely inadequate to meet the demands of the COVID-19 pandemic. At the start of the pandemic, the stockpile had only about 35 million N95 masks on hand, far short of the estimated 3.5 billion that would have been needed to adequately protect healthcare workers and first responders. Moreover, much of the stockpile was rotting as the N95 masks were more than 10 years old by the time of the pandemic.


I should emphasize that Tabarrok is not claiming that his recommended policies, even taken all together, can eliminate the costs of future pandemics. But if we could reduce the cost by, say, just 10%, the US savings alone would have been more than 100,000 lives and more than $1 trillion in lost economic output. Here are four of his options:

#1: Testing for disease at sewage treatment plants

People infected with SARS-CoV-2 shed genetic material from the virus in their feces (Bivins et al. 2020). Wastewater surveillance can detect the presence, concentration and growth of this genetic material before people present clinically. Thus, wastewater surveillance gives public health officials an early warning which can be used to allocate scarce resources and to implement control measures. More generally, wastewater surveillance can detect a host of viral and bacterial pathogens including influenza viruses, poliovirus, norovirus, hepatitis A and E viruses and bacteria such as Escherichia coli and Salmonella. Wastewater surveillance to monitor antibacterial resistance may be of special importance (Philo et al. 2023; Singer et al. 2023). As with surveillance for SARS-CoV-2, wastewater surveillance more generally can be used to predict disease outbreaks more quickly, track the spread and virulence of pathogens and novel variants of concern, and inform and provide feedback to public health decisions (Wu et al. 2020).

#2: Build a vaccine library, by doing the research in advance on vaccines for viruses most likely to cause an outbreak

In 2016, the WHO identified 11 viruses with the greatest potential to cause severe outbreaks. Gouglas et al. (2018) estimated that developing at least one vaccine candidate for each of these viruses up to phase 2a would cost approximately $2.8 to $3.7 billion in total (see also Krammer 2020). Bringing a vaccine candidate up to phase 2a means designing the vaccine and evaluating it for safety and essentially “proof of concept” in small trials. Prior to a significant outbreak, it would not be possible to run phase 3 efficacy trials. It should be clear that these costs are small, almost trivial, relative to the expected gains. It’s notable that SARS-CoV-1 was on the WHO’s list. The knowledge gained from studying SARS-CoV-1 helped to speed a vaccine for SARS-CoV-2, but had SARS-CoV-1 vaccines been developed to phase 2a prior to the COVID pandemic, for example, we could have likely knocked months off the development process for SARS-CoV-2, saving perhaps millions of lives and trillions of dollars worldwide.

#3: When a virus hits, test the vaccines with “human challenge trials”

COVID vaccines were tested through traditional randomized controlled trials (RCTs) in the field. In an RCT, participants are randomly assigned to either a vaccinated (treatment) group or an unvaccinated (control) group, and both groups resume their normal activities until enough participants contract COVID to establish a statistically significant difference in infection rates. A major drawback of RCTs in a pandemic is the unpredictability of reaching the infection threshold required for statistical significance. If infection rates are low or participants take steps to avoid exposure, trials can be prolonged, delaying vaccine rollout. While increasing the trial size can reduce these delays, it also increases the cost and complexity of the trials.

In contrast, in a human challenge trial (HCT), participants are randomly split into two groups and all of them are deliberately exposed to the virus, accelerating the timeline for obtaining results. Since participants are deliberately exposed, the number of participants in a human challenge trial can be much smaller than in an RCT, perhaps on the order of 50–100. Most importantly, where an RCT might take years to produce results, an HCT can have results in a matter of months or weeks (Eyal and Lipsitch 2021; Nguyen et al. 2021). For a variety of reasons, HCTs are not necessarily full substitutes for RCTs, but they are surely complements and should be used in emergencies.
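The sample-size gap between the two designs can be illustrated with a standard two-proportion power calculation. This is a minimal sketch: the attack rates, vaccine efficacy, and significance/power targets below are my illustrative assumptions, not numbers from Tabarrok’s paper.

```python
# Sketch: why a field RCT needs thousands of participants while a human
# challenge trial (HCT) can get by with roughly 100. All numbers below are
# illustrative assumptions, not data from the article.
from math import ceil, sqrt

def n_per_arm(p_control, p_treated, z_alpha=1.96, z_power=0.84):
    """Normal-approximation sample size per arm for comparing two infection
    proportions (5% two-sided significance, 80% power)."""
    p_bar = (p_control + p_treated) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p_control * (1 - p_control)
                                  + p_treated * (1 - p_treated))) ** 2
    return ceil(numerator / (p_control - p_treated) ** 2)

efficacy = 0.7  # assumed vaccine efficacy
# Field RCT: only a small fraction of participants ever encounter the virus.
rct = n_per_arm(0.01, 0.01 * (1 - efficacy))
# HCT: nearly every participant is deliberately exposed.
hct = n_per_arm(0.90, 0.90 * (1 - efficacy))
print(f"RCT: ~{rct} per arm; HCT: ~{hct} per arm")
```

Because nearly every HCT participant encounters the virus, the detectable difference in infection rates is large and the required sample collapses by roughly two orders of magnitude, which is the point of the passage above.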

#4: A Pandemic Trust Fund

As another example, some $60 billion was spent on special programs to pay furloughed pilots, flight attendants, and other airline staff as travel demand plummeted. Why? One factor was that the airlines were already well organized and politically active. The airlines, for example, spent over one hundred million dollars on lobbying in the year before the pandemic (Evers-Hillstrom 2020). During the pandemic, the airlines were also joined in their lobbying efforts by the airline unions making for a politically powerful team on both sides of the aisle. The lines of power were also well defined. The airlines knew, for example, which members of Congress sat on the requisite committees and what they needed.

In contrast, OWS [Operation Warp Speed, the program for developing vaccines] was a new program with few concentrated interest groups and no previous lobbying efforts. Although some of the vaccine manufacturers understood lobbying, there was no locus of support in Congress because committee responsibilities for a program like OWS had not been established. OWS was run primarily out of the executive and the DOD [Department of Defense]. The program was also controversial from the beginning and any lobbying at the time from the vaccine manufacturers would have been highly scrutinized.

The lesson from political economy is that we do not want emergency funds to be drawn, or to be perceived to be drawn, from other programs. Pre-approved legal authority to spend is necessary to quickly address a low-probability, high-cost emergency. One way to do this would be to establish a Pandemic Trust Fund (PTF) nominally composed of say $250 billion in US government bonds. The PTF would be something of an accounting fiction, similar to the Social Security Trust Fund, but accounting fictions can have real effects. … By clearly denoting pandemic spending rights, a pandemic trust fund would avoid budget battles in the event of a pandemic. At $250 billion and 3% interest, a PTF could also generate annual revenues of $7.5 billion for ongoing pandemic spending. Some of this spending would be wasted but sausages and legislation both require pork as an input.
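The trust fund arithmetic in the quoted passage is straightforward to verify:

```python
# Quick check of the Pandemic Trust Fund figures quoted above:
# $250 billion in US government bonds earning 3% interest.
principal = 250e9
interest_rate = 0.03
annual_revenue = principal * interest_rate
print(f"${annual_revenue / 1e9:.1f} billion per year")  # → $7.5 billion per year
```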

In the context of total US federal spending, none of these steps are especially costly, but having them in place could make a real difference. As Tabarrok points out, there were plenty of well-publicized warnings in the decade or two before COVID-19 about the risks of pandemics, including high-profile stories in outlets like TIME and CNN, a Bill Gates TED talk seen by millions, and even the 2011 movie Contagion. But when the crunch came, America was not well-prepared. There will be a next time.

How a Small Share of Firms Drive Economic Growth

My guess is that everyone would be happier if economic growth was evenly distributed, so that everyone’s income rose in lockstep. Instead, growth is a disruptive process, with some firms and sectors rising while others decline. As a wise economist once put it, the process of growth could in theory be like “yeast,” with everything expanding at once, or like “mushrooms,” with spurts of growth in certain areas. But most of the time, it’s mushrooms.

A team from the McKinsey Global Institute writes about the mushrooms in “The power of one: How standout firms grow national productivity” (May 6, 2025). The thesis, as stated in the subtitle: “National productivity growth is a matter of few firms taking bold strategic action rather than millions of firms raising efficiency.” For the relatively short time frame they analyze in this study, from 2011 to 2019, this seems likely to be true.

The authors have a dataset of 8,300 firms across the US, UK, and German economies, all with at least 50 employees and many with more than 500 employees, focused in four sectors: retail, automotive and aerospace, travel and logistics, and computers and electronics. They refer to this limited group of companies in each country as a “lab economy.” They define a “Standout” firm as a company whose productivity growth, by itself, adds at least 0.01% to the productivity growth of the entire set of companies in the lab economy for one country. Conversely, they define a “Straggler” firm as a company that, by itself, subtracts at least 0.01% of productivity growth from the entire lab economy. Of course, most firms fall between these extremes.
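The classification rule can be stated compactly. A minimal sketch: the 0.01 percentage-point threshold comes from the report’s definition, but the firm names and contribution numbers below are hypothetical.

```python
# Classify firms by their contribution to lab-economy productivity growth,
# measured in percentage points. The 0.01 threshold follows the report's
# definition; the example firms and their numbers are invented.
THRESHOLD = 0.01  # percentage points of lab-economy productivity growth

def classify(contribution_pp):
    """Return the report's label for a firm's productivity contribution."""
    if contribution_pp >= THRESHOLD:
        return "Standout"
    if contribution_pp <= -THRESHOLD:
        return "Straggler"
    return "in between"

sample = {"FirmA": 0.04, "FirmB": -0.02, "FirmC": 0.003}
print({name: classify(c) for name, c in sample.items()})
# → {'FirmA': 'Standout', 'FirmB': 'Straggler', 'FirmC': 'in between'}
```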

Three conclusions from the report seem worth emphasizing, in part as explanations for why the US economy has been outperforming the UK and German economies.

First, a relatively small number of Standouts and Stragglers can drive the overall productivity growth patterns of an economy. The report notes: “Fewer than 100 firms in our sample of 8,300—a group that we have dubbed Standouts—accounted for about two-thirds of the positive productivity gains in each of the three country samples we analyzed. … To give a sense of how important a single firm can be, just another dozen or so of the largest Standouts could have doubled productivity growth in their entire country. … In the United States, for instance, 44 Standouts—5 percent of sample firms, accounting for 23 percent of employment share—generated 78 percent of positive productivity growth. … US Standouts included household names like Apple, Amazon, The Home Depot, and United Airlines.”

Second, the US has a higher proportion of Standouts relative to Stragglers, compared to the UK and Germany: “US productivity growth from 2011 to 2019 was faster than that of the other countries in our sample at 2.1 percent, compared with 0.2 percent in Germany and close to zero in the United Kingdom. … The US sample had three times more Standouts than Stragglers, while the German and UK samples had almost even numbers.”

Third, US Standouts are more likely to grow and expand, while US Stragglers are more likely to contract, compared with the UK and Germany: “Firms in the US sample had more reallocation of employees from less productive to more productive firms. Leaders grew faster, and underperforming firms more swiftly restructured or exited. In the United States, Standouts include scalers (firms far above average sector productivity that contribute by gaining employees) and restructurers (firms with below-average sector productivity that contribute by losing employees). In Germany and the United Kingdom, this was not the case. Rather, these countries preserved underperforming firms as Stragglers. Frontier firms scaling and gaining share added 0.6 percentage point to productivity growth in the United States, and unproductive firms exiting contributed an additional 0.5 percentage point. Overall, dynamic reallocation, including reallocation across subsector boundaries, added 0.9 of 2.1 percentage points—slightly less than half—to productivity growth in the US sample. In contrast, the contribution of reallocation was negligible in Germany and the United Kingdom. This may be explained by the fact that the United States has highly dynamic factor markets, allowing for quick entry and exit as well as fast scale-up and restructuring.”

I’ll add that over longer time periods, the “standout” firms will change, and gradual gains by all of the intermediate firms will loom larger. As the report notes, “The millions of MSMEs [micro, small, and medium sized enterprises] outside our sample collectively contributed up to 30 percent of productivity growth in the four sectors in the national statistics. Indeed, a handful of them may emerge as the Standouts of tomorrow.”

Perhaps the bigger lesson is that all nations claim that they want dynamic standout “superstar” firms (for previous discussions of the role of such firms, see here and here). But then, when those dynamic firms start expanding, they create economic disruption and start driving other competitors out of business. At that point, political pressure will arise to rein them in. But sustained economic growth, at least in the short- and medium-run, is typically mushrooms, not yeast.

What’s a New Drug Worth?

In a juxtaposition of events that redefines the meaning of “coincidence,” President Trump announced a new policy for prescription drug pricing this morning, and the Spring 2025 issue of the Journal of Economic Perspectives, released three days ago on Friday morning, begins with a four-paper symposium on drug pricing. (Full disclosure: I work as Managing Editor of the JEP, so this coincidence was perhaps more apparent to me than to others.) The four JEP papers are “Economic Markets and Pharmaceutical Innovation,” by Craig Garthwaite; “Patents, Innovation, and Competition in Pharmaceuticals: The Hatch-Waxman Act after 40 Years,” by C. Scott Hemphill and Bhaven N. Sampat; “Lessons for the United States from Pharmaceutical Regulation Abroad,” by Margaret K. Kyle; and “The Economics of Generic Drug Shortages: The Limits of Competition,” by Rena M. Conti and Marta E. Wosińska.

Trump’s proposal starts from the well-known fact that US consumers pay higher prices for brand-name prescription drugs than buyers in other countries. His executive order (yet to be tested in court) would require that US consumers pay prices for drugs no higher than those charged in other countries. The JEP paper by Margaret Kyle puts this proposal in context.

Kyle points out that Trump’s proposal fits under the category of “external reference pricing,” which is to say that US drug prices for brand-name drugs would be set based on prices in other countries. Of course, if this were to happen, the players in the market would adjust: for example, drug companies would probably seek to charge more for brand-name drugs in other countries. Trump’s executive order does not differentiate between brand-name and generic drugs, but the logic of the order suggests the possibility of higher US prices for generic drugs.

Kyle points out that many European countries already have a version of “external reference pricing”–in which prices for a drug in one European country are not supposed to be more than in neighboring countries. Strategic maneuvering results. Kyle writes:

A less optimistic assessment of external reference pricing considers the European experience. As noted above, external reference pricing like this would induce a number of strategic responses from other stakeholders. These include delayed launch and/or supply limitations to lower-price markets, as well as efforts to make products less comparable across countries (Kyle 2007, 2011; Maini and Pammolli 2023). … Some European countries also use hidden rebates. For example, the use of France as a reference by other countries ultimately led to agreements between manufacturers and the government to establish a public price as well as secret rebates paid by manufacturers back to the government (Kanavos et al. 2017). This allows the official price (that which is referenced by other countries) to be higher, like the list price in the United States, than what is in fact paid. These nonpublic prices have prompted calls for greater price transparency, but the effects of increased transparency here are ambiguous. When (true) prices are secret, a manufacturer can more easily lower its price in a country, because it sees no negative consequences from having that secret price referenced by other countries. In concentrated markets, transparent prices could also facilitate collusion by manufacturers. However, nonpublic prices make economic assessments much more challenging. The evidence suggests that US adoption of reimportation or external reference pricing would have only modest effects on US drug prices (but would probably reduce access or price transparency in other countries).
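Mechanically, external reference pricing caps a country’s price at some statistic of list prices in a reference basket. A minimal sketch, in which the basket countries, prices, and the lowest-price rule are all illustrative assumptions:

```python
# External reference pricing, sketched: cap the home-country price at the
# minimum list price across a reference basket of countries. The countries
# and prices below are hypothetical.
def reference_price(basket, rule=min):
    """basket: mapping of country -> list price for the same drug."""
    return rule(basket.values())

basket = {"Germany": 90.0, "France": 80.0, "Canada": 70.0}
print(reference_price(basket))  # → 70.0 under a lowest-price rule
```

The strategic responses Kyle describes follow directly from this mechanism: a manufacturer can raise the cap by delaying launch in low-price countries, or by keeping the referenced list price high while granting secret rebates, so the true transaction price never enters the basket.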

But there are two elephants in the room along with this discussion. One is that the higher prices for brand-name drugs paid by Americans also fund the research and development costs of pharmaceutical companies. The Trump administration is seeking to cut government support for R&D in other ways, like reducing grants given through the National Science Foundation. If we are threatening to cut off the sources of funding for pharmaceutical R&D, it raises a fundamental question: What’s a new drug worth, anyway?

The fundamental tradeoff in US pharma markets is that drug companies do research, get patents, and then charge a lot for brand-name drugs. But after the patents expire, the drugs become available in generic versions, where US consumers actually pay less than those in other countries. Hemphill and Sampat point out in their JEP article how this tradeoff was formalized into law 40 years ago with the Hatch-Waxman Act. As Conti and Wosińska point out in their JEP article: “In 2023, 92 percent of US drug prescriptions were filled as generics, representing less than 13 percent of overall invoice spending on drugs …”

Of course, a primary benefit of new drugs is their health benefits. In JEP, Garthwaite sketches some past and future benefits of new drugs:

Pharmaceutical innovations are responsible for 35 percent of the remarkable decline in cardiovascular mortality from 1990 to 2015 (Buxbaum et al. 2020). Previously deadly conditions such as HIV/AIDS have been transformed into manageable chronic maladies and others such as hepatitis-C have been cured. Gene therapies are becoming more commonplace as treatments for a wide range of rare and deadly genetic conditions. Advancements in immuno-oncology are providing meaningful advances across a variety of cancers as the body’s natural systems are used to combat cancer. Most recently, the first truly effective treatments for obesity in the form of GLP-1 agonists have emerged with corresponding improvements across a host of cardiometabolic outcomes such as heart disease, diabetes, and chronic kidney disease.

However, the benefits of successful pharma R&D go beyond immediate health benefits for the ill. Garthwaite writes:

[M]edical technologies transform the medical risk individuals face (that is, becoming afflicted with a condition for which there is no treatment) into a financial risk (that is, finding a way to finance the purchase of medical innovations if they get sick) (Lakdawalla, Malani, and Reif 2017). All risk-averse consumers should value this reduction in health variance. Indeed, the insurance value of the new innovation can even exceed the value of health insurance in the first place, especially for disease areas where the existing treatment armamentarium is quite poor and the physical effects of the condition are quite severe. This could explain why many treatments for rare diseases so often exceed several thresholds based solely on clinical value. Another gain from new drugs is that scientific progress is often iterative, building on the knowledge and insights from previous advances. Thus, an optimal level of innovation will only be achieved to the extent the eventual value created for society by the next generation of innovations is in some way accounted for in revenues for the manufacturers making incremental progress. … Consider how medical innovations can change available treatment options for individuals who are not yet afflicted, but could become sick in the future.

To put it more bluntly, none of us knows what health conditions we or our loved ones may face in the future. Successful new drugs reduce this risk of what might happen. Paying a lot for a new drug when you need it is no fun, but not having the drug available at all is probably worse.

The other elephant in the room is about the long-term health of the pharmaceutical industry. The Trump administration has put a high priority on supporting US producers in many industries. Well, US firms account for 40-50% of global pharmaceutical sales, according to industry sources. There are about 350,000 US jobs in “Pharmaceutical and Medicine Manufacturing.” The success of the US firms is driven by spending 20% or more of their revenue on research and development, most years. In short, policies that dramatically reduce R&D spending by pharma companies will kneecap their ability to stay ahead as leading exporters in global markets, and pose a threat to several hundred thousand US jobs.

The papers in the JEP symposium discuss a variety of potentially useful mechanisms for negotiating lower drug prices for US consumers that do not threaten to cut off the future pipeline of new drugs.

But clearly, President Trump prefers what might be called a bumper-car approach to issues: that is, ram full-speed into a problem with a half-baked proposal, then spin the wheel back and forth while backing rapidly away, then ram full speed into the same problem again, and so on. Whatever the merits or demerits of this approach as a negotiating strategy, R&D projects are long-run investments that pay off only over extended periods of time. Playing bumper-car games means that industry will focus on projects with a more immediate payoff, while reducing or postponing projects that would only have longer-run payoffs. But it will be very hard to identify those groups of future patients who suffer because future breakthroughs in new drug therapies are delayed, or don’t happen at all.

Spring 2025 Journal of Economic Perspectives Freely Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided back in 2011–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Spring 2025 issue, which in the Taylor household is known as issue #152. Below that are abstracts and direct links for all of the papers. I plan to blog more specifically about some of the papers in the next few weeks, as well.

________

Symposium: Drug Pricing and Regulation

“Economic Markets and Pharmaceutical Innovation,” by Craig Garthwaite

     Pharmaceutical innovations reach the market after a long and risky process that requires large, fixed, and sunk investments. Governments provide incentives for firms to make these investments through various forms of intellectual property protection that attempt to provide a return on capital for investors. As a result, pharmaceutical innovation results from an explicit intersection of public policy and private market incentives. Developing optimal policy therefore requires understanding market features such as how innovation is financed, how firms commercialize pharmaceutical products, the influence of insurance coverage on consumption and spending, and how competition emerges after intellectual property protection ends.

“Patents, Innovation, and Competition in Pharmaceuticals: The Hatch-Waxman Act after 40 Years,” by C. Scott Hemphill and Bhaven N. Sampat

     A central policy issue in pharmaceuticals is how to balance the dynamic benefits of new drugs against the static benefits of low prices for existing drugs. In the United States, that balance is set by the Hatch-Waxman Act. We review the Act’s origins and key features, then present evidence on its effects on competition and innovation. On the competition side, we show how the Act creates incentives for brands to accumulate patents and generics to challenge them, with the result being a rough stalemate. We also discuss strategies deployed by brands to delay generic entry. On the innovation side, we show that the Act’s patent extension provisions—which aim to allow branded firms to make up for time lost during clinical trials and regulatory review—are incomplete, resulting in potential distortions. The net result is a convoluted and expensive approach to balancing innovation and competition.

“Lessons for the United States from Pharmaceutical Regulation Abroad,” by Margaret K. Kyle

     Pharmaceutical markets are characterized by barriers to entry and information problems. Many countries intervene in the pricing and reimbursement of drugs to a greater extent than the US government to date. Continued pressure from politicians and recent legislation are likely to change the market for pharmaceuticals in the United States. This article discusses the approaches adopted in other developed countries and the implications of their use in the United States, which due to its size, has far greater influence over the rate and direction of innovation. Alternative policy choices and the challenges of their implementation are also reviewed.

“The Economics of Generic Drug Shortages: The Limits of Competition,” by Rena M. Conti and Marta E. Wosińska

     We examine the economics of the US generic prescription drug market, which comprises the majority of medicines sold. The market is celebrated for its benefits in the form of high quality and low prices for consumers but is also increasingly challenged by shortages that may disrupt patient care. Shortages in the generic drug market present an economic puzzle—in the face of a shortage, prices should rise, encouraging entry, yet we observe shortages increasing in number and persistence. Moreover, if shortages cause patient harm, why don’t markets pay a premium for a reliable supply chain? We argue that the puzzle can be explained by the inability of generic drug prices to adjust easily due to regulatory and contracting frictions, and the coexisting presence of asymmetric information and agency problems in the US market. We conclude with a discussion of policy interventions aimed at addressing these challenges to ensure resilient US generic drug supply.

Symposium: Income Inequality

“Measuring Income and Income Inequality,” by Conor Clarke and Wojciech Kopczuk

     Income inequality is important, but attempts to measure it arrive at strikingly different conclusions. Why? We use recent disputes over measuring United States income inequality to return to first principles about both the income concept and inequality measurement. We emphasize two broad points. First, no measure of the income distribution is truly comprehensive, or could attempt to be comprehensive without making controversial choices. We document the practical and conceptual problems that the standard ideal—comprehensive Haig-Simons income—raises. Second, much of the controversy in this area turns on the many tradeoffs between starting with individual tax data versus more expansive income concepts. Individual tax data reflect only a shrinking subset of a more comprehensive income concept–but they are individual data. More expansive alternatives, on the other hand, are harder to allocate to individuals. We document some of the most important and contestable assumptions that such an allocation requires.

“Macro Perspectives on Income Inequality,” by Matthieu Gomez

     Inequality has become a defining challenge for modern economies and a central focus of economic research over the past two decades. I begin by revisiting the foundations of income measurement, showing that standard definitions—taxable income, factor income, and Haig-Simons income—suffer from important conceptual limitations. I contrast these income measures with the ideal notion of income from a welfare perspective—Hicksian income—which captures an individual’s ability to consume or save for future consumption. I then examine the drivers of rising top income inequality, with particular attention to the surge in entrepreneurial incomes. I highlight three key forces behind this phenomenon: higher returns on capital (technological factors), lower external financing costs (financial factors), and a lighter tax burden on business owners (fiscal factors).

“Public Finance Implications of Economic Inequality,” by Alan J. Auerbach

     This paper considers questions about the implications of rising inequality for the theory and practice of public finance. It begins by addressing fundamental reasons why the distribution of income or wealth on an annual basis before taxes and transfers offers insufficient information: (1) it does not tell us what resources are actually available to households for consumption; and (2) in providing a snapshot of the resources available to individuals of different ages at a given moment in time, without controlling for life-cycle related differences or income dynamics, it can provide a misleading estimate of the underlying degree of inequality. The paper then considers the implications of high and perhaps rising economic inequality for the design of government policy: top marginal tax rates, phase-outs of government policies for those with higher incomes, the political economy of inequality, and other subjects.

Symposium: Bond Markets

“A Hitchhiker’s Guide to Federal Reserve Participation in Fixed Income Markets,” by Nina Boyarchenko and Or Shachar

     We review US dealer-intermediated fixed income markets, including Treasuries, agency mortgage-backed securities, corporate bonds, and municipal bonds. Through the lenses of primary dealers’ positions, we show these markets’ evolution over the past decade and the effects of recent episodes of abrupt deterioration in market functioning. We then overview how the Federal Reserve interacts with fixed income markets for the purposes of monetary policy implementation and liquidity interventions. We conclude by discussing the shifting composition of investors in US fixed income markets, and what consequences such changes in the investor base may have for monetary policy transmissions.

“How US Treasuries Can Remain the World’s Safe Haven,” by Darrell Duffie

     Weaknesses in the design of the market for US Treasuries have reduced the effectiveness of the world’s favored safe-haven asset. Since the Global Financial Crisis, the market’s intermediation capacity is far more constrained by the balance sheets of dealer banks, which handle virtually all investor trades. Since 2007, the total size of primary dealer balance sheets per dollar of Treasuries outstanding has shrunk by a factor of four. This trend continues because of large US fiscal deficits and post-GFC regulatory capital constraints, which are necessary for financial stability but limit the provision of liquidity under stress. For US Treasuries to remain a powerful safe haven, the intermediation capacity of the market will need to be expanded and further supported by official-sector backstops.

“US Corporate Bond Markets: Bigger and (Maybe) Better?” by Maureen O’Hara and Xing (Alex) Zhou

       The US corporate bond market has expanded significantly, fueled by electronic trading, institutional innovation, and growing retail participation via mutual and exchange-traded funds. These developments have improved efficiency by reducing costs and enhancing transparency, yet they have also introduced new vulnerabilities. The market’s shift from relationship-based to transaction-based trading has weakened its ability to absorb stress, especially during periods of widespread selling. We examine the structural changes that have reduced dealer intermediation, the limited liquidity benefits of electronic platforms, and the destabilizing role of fund flows. The COVID-19 crisis exposed these weaknesses, prompting the Federal Reserve to act as a “market maker of last resort.” We argue that while the market is “better” in many ways, enhancing resilience through transparency and long-term investor participation is essential for future stability.

“Why Is the Fragmented Municipal Bond Market So Costly to Investors and Issuers?” by John M. Griffin, Nicholas Hirschey, and Samuel Kruger

       The municipal bond market plays a crucial role in providing capital to US municipalities and functions through a network of underwriters, municipal advisors, credit rating agencies, insurers, individual and institutional investors, and multiple regulators. Many of these market participants have significant asymmetric information and conflicting incentive structures, which can sometimes lead to disparate and seemingly inefficient outcomes. Puzzles documented in the academic literature include high underwriting costs, conflicting roles by municipal advisors, extreme and widely varying trade markups, investment holdings that are often not tax-efficient, inconsistent implied marginal tax rates, a heavy reliance on credit ratings, little benefit but widespread use of insurance, delayed use of call provisions, and inconsistent treatment of accounting information. We review issues in the municipal bond market and propose implementable suggestions that would hopefully allow for a more competitive and low-cost market for both taxpayers and investors.

Features

“Retrospectives: Yair Mundlak and the Fixed Effects Estimator,” by Marc F. Bellemare and Daniel L. Millimet

       We discuss Yair Mundlak’s (1927–2015) contribution to econometrics through the lens of the fixed effects estimator. We set the stage by discussing Mundlak’s life and his seminal 1961 article in the Journal of Farm Economics, showing how it was looking at the right application—the study of agricultural productivity, which had hitherto been thought to be marred by the presence of management bias—that led Mundlak to use the fixed effects estimator. After discussing Mundlak’s contribution, we briefly discuss the historical economic and statistical contexts in which he made that contribution. We then highlight the dialogue that took place between the proponents of fixed versus random effects and discuss how Mundlak settled the debate in his 1978 Econometrica article. We conclude by discussing how, between fixed and random effects, the fixed effects estimator won the day, becoming the de facto estimator of choice among applied economists because of the Credibility Revolution, culminating in the popularity nowadays of difference-in-differences designs and of two-way fixed effects estimators.

“Recommendations for Further Reading,” by Timothy Taylor

How Can You Tell if Health Insurance Helps Health?

It may seem obvious that health insurance helps health, but very few cause-and-effect conclusions are obvious to economists. For example, suppose that we just compared the health of everyone who has health insurance and everyone who doesn’t. It would be unsurprising to find that those with health insurance are healthier, but the two groups will also differ in many other ways. For example, given that many Americans get health insurance through their employer, the chances are good that those with health insurance are more likely to be employed and, on average, to have higher incomes. How can we disentangle the effect of health insurance from other possible confounding factors?

Or imagine that you compared the health of people before and after they had health insurance. This approach has some promise, but again, if getting health insurance is also connected to getting a job with benefits, a higher income, and perhaps a more settled life in other ways, then the task of separating out the effect of health insurance from other confounding factors remains.

Or one can imagine a social experiment in which a large group is randomly divided, with part of the group receiving health insurance and part not. Then you could track the two randomly selected groups over time, and see what happens. This is essentially the approach used to test the safety and efficacy of new drugs, for example. Thus, social scientists are on the lookout for situations where this kind of random selection into health insurance happened, but perhaps by accident rather than policy.
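The logic of such an experiment can be sketched in a few lines of code. This is a toy simulation with invented numbers, not data from any actual study: it randomly assigns insurance by lottery and then compares mortality rates across the two groups, which is an unbiased estimate of the causal effect precisely because assignment was random.

```python
import random

random.seed(0)

# Hypothetical lottery: 10,000 people, half randomly assigned insurance.
# The baseline mortality risk (5.0%) and the assumed insurance effect
# (-0.8 percentage points) are illustrative numbers only.
n = 10_000
treated_deaths = control_deaths = 0
n_treated = n_control = 0

for _ in range(n):
    insured = random.random() < 0.5                    # lottery assignment
    risk = 0.050 - (0.008 if insured else 0.0)         # assumed effect
    died = random.random() < risk
    if insured:
        n_treated += 1
        treated_deaths += died
    else:
        n_control += 1
        control_deaths += died

# With random assignment, the simple difference in mortality rates
# estimates the causal effect of coverage.
effect = treated_deaths / n_treated - control_deaths / n_control
print(f"estimated effect on mortality: {effect:.4f}")
```

With confounding, the same difference-in-means would mix the insurance effect with everything else that differs between insured and uninsured people; randomization is what makes the simple comparison meaningful.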

In their essay, “The Impact of Health Insurance on Mortality,” Helen Levy and Thomas C. Buchmueller focus on some of these situations in which access to health insurance was determined in a way with a high degree of randomness (Annual Review of Public Health, April 2025).

One of the most clear-cut examples happened in Oregon in 2008. The state wanted to expand eligibility for Medicaid, but didn’t have the money to expand it for everyone. The result, as the authors describe it, was “the 2008 Oregon Health Insurance Experiment, which studied ∼75,000 low-income adults under age 65, 40% of whom were selected by lottery to be eligible for Medicaid (the treatment group) with the remaining 60% serving as a control group.” Thus, some randomly received health insurance, and some did not.

Another truly randomized study looked at an “IRS initiative that sent letters in early 2017 with information about HealthCare.gov to a randomly selected sample of 3.9 million households that had been subject to the ACA [Affordable Care Act] individual mandate penalty for failing to have coverage in the previous year. The study finds that the letters led to a small but significant increase in coverage.” In this case, some randomly received a letter that increased the share of that group with health insurance, while others did not.

Yet another approach looked at those admitted to California hospitals who were either just under age 65, and thus not eligible for Medicare, or just over age 65, and thus covered by Medicare. The idea here is that the just-unders and just-overs should be highly comparable groups: after all, the only way they differ is in being born a few months apart. In this “discontinuity” approach (in this example, the discontinuity is at age 65), which side of the age cutoff a patient falls on is quite similar to random.
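The discontinuity idea can also be sketched as a toy simulation, again with invented numbers rather than anything from the California study: among patients in a narrow window around age 65, Medicare coverage switches on exactly at the cutoff, so comparing outcomes just above and just below approximates a randomized split.

```python
import random

random.seed(1)

# Hypothetical regression-discontinuity sketch. Medicare coverage begins
# exactly at age 65; patients a few months on either side of the cutoff
# are otherwise comparable. The 10% baseline risk and the assumed
# -2 percentage point coverage effect are illustrative only.
under, over = [], []
for _ in range(20_000):
    age = random.uniform(63.0, 67.0)
    covered = age >= 65.0
    risk = 0.10 - (0.02 if covered else 0.0)   # assumed coverage effect
    died = random.random() < risk
    # Keep only a narrow six-month band on each side of the cutoff.
    if 64.5 <= age < 65.0:
        under.append(died)
    elif 65.0 <= age < 65.5:
        over.append(died)

# The jump in mortality at the cutoff estimates the effect of coverage.
rd_estimate = sum(over) / len(over) - sum(under) / len(under)
print(f"discontinuity in mortality at age 65: {rd_estimate:.4f}")
```

The design choice here is the bandwidth: a narrower window makes the two groups more comparable but leaves fewer observations, which is the basic trade-off in any discontinuity study.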

Other examples involve Medicaid coverage. Medicaid is a joint federal-state program, so the program was often introduced in a staggered way, over time, across states. This was true back in the 1960s, when Medicaid was first enacted, and it was also true in the 2010s, when states were allowed to expand Medicaid coverage but, over several years, only some did so. A researcher can look at this data and see if, when a group of people becomes eligible for Medicaid, the pattern of their health outcomes then shifts from previous patterns–and from the patterns of health outcomes for groups that did not become eligible at that time. Here, the random ingredient is the staggered time periods in which health insurance was introduced.
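The arithmetic behind this “compare the shifts” logic is the difference-in-differences estimator, and it fits in a few lines. All of the mortality rates below are invented for illustration: one hypothetical state expands Medicaid between two periods, another does not, and subtracting the non-expander’s change nets out both the fixed gap between states and any common trend over time.

```python
# Hypothetical difference-in-differences sketch for a staggered Medicaid
# expansion. State A expands coverage between period 0 and period 1;
# state B does not. All mortality rates (per 1,000) are invented.
mortality = {
    # (state, period): mortality rate per 1,000
    ("A", 0): 8.0, ("A", 1): 7.0,   # expanding state
    ("B", 0): 9.0, ("B", 1): 8.6,   # non-expanding state
}

change_A = mortality[("A", 1)] - mortality[("A", 0)]   # -1.0
change_B = mortality[("B", 1)] - mortality[("B", 0)]   # about -0.4
did = change_A - change_B                              # about -0.6
print(f"difference-in-differences estimate: {did:.1f} per 1,000")
```

With many states expanding in different years, researchers generalize this two-by-two comparison into the two-way fixed effects and staggered difference-in-differences designs mentioned in the Mundlak retrospective above; the key assumption in either case is that the expanding and non-expanding states would have followed parallel trends absent the expansion.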

My theme here is that there are plausible ways for researchers to study a cause-and-effect relationship between health insurance and health. Of course, not all of these studies cover the same age groups, or find the same outcomes. But my guess is that a number of readers care less about the way the studies are done, and more about how the authors of this review would summarize the overall results. Here, I quote from the abstract of their paper:

A 2008 review in the Annual Review of Public Health considered the question of whether health insurance improves health. The answer was a cautious yes because few studies provided convincing causal evidence. We revisit this question by focusing on a single outcome: mortality. Because of multiple high-quality studies published since 2008, which exploit new sources of quasi-experimental variation as well as new empirical approaches to evaluating older data, our answer is more definitive. Studies using different data sources and research designs provide credible evidence that health insurance coverage reduces mortality. The effects, which tend to be strongest for adults in middle age or older and for children, are generally evident shortly after coverage gains and grow over time. The evidence now unequivocally supports the conclusion that health insurance improves health.