Save the Whales by Pricing the Ship Noise Externality

Ships play a central role in the global trade of goods, but they also make noise–which might affect undersea creatures like whales that depend on sound for communication and navigation. M. Scott Taylor investigates this issue in “Saving Killer Whales Without Sinking Trade: A market solution to noise pollution” (Property and Environment Research Center (PERC), September 25, 2025). Taylor (no relation) begins:

Maritime shipping is key to global trade, and international trade is key to prosperity worldwide. Today, approximately 80 percent of world trade by volume and over 60 percent by value is transported by ships. This trade brings new goods, technologies, and ideas from around the world and is critical to maintaining our standard of living and growing it into the future. Despite these benefits, shipping—like all economic activity—has environmental impacts.

Maritime shipping is responsible for perhaps 3 percent of global carbon emissions and a significant share of the world’s particulate and sulfur dioxide emissions. International organizations like the International Maritime Organization and regional governments like the E.U. have set new regulations and plans to reduce these impacts over the coming decades. There is, however, an impact of shipping on the environment that researchers have only recently recognized—underwater noise pollution—that may have deleterious effects on marine mammals. …

There is now widespread recognition that over the post-World War II period, maritime shipping has raised ambient noise levels in the world’s oceans. While estimates of the long-term change in ambient noise are somewhat speculative, one commonly cited source suggests ambient ocean noise has risen by three to four decibels (dB) per decade since the 1950s. This increase, tied to a simultaneous increase in maritime shipping, raised the ambient noise level in the low-frequency zone most relevant to marine mammals from approximately 52 dB in 1950 to more than 90 dB by 2007 …

Not surprisingly, marine biologists and ecologists have started to study the impact of rising ambient noise levels on marine mammals. The reason is simple: Sound to marine mammals is much like sight is to humans—it is their primary sense for moving through the world around them. Low-frequency sounds (less than several hundred Hz) may be interfering with whale communication and social calls, whereas higher-frequency sounds (greater than several hundred Hz) potentially interfere with the echolocation employed to track prey. Therefore, the sounds emitted by maritime shipping may affect almost all aspects of whale life, making it more difficult to communicate, socialize, and hunt.
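As an aside, decibels are a logarithmic scale, so the rise quoted above is even more dramatic than it sounds. Here’s a quick sketch of the arithmetic in Python (the formula is just the standard decibel definition; the 52 dB and 90 dB figures are the ones quoted above):

```python
# Decibels are logarithmic: every 10 dB of increase is a tenfold increase in
# sound intensity, so a 38 dB rise is an enormous change in intensity.

def intensity_ratio(db_change: float) -> float:
    """Sound intensity ratio implied by a change measured in decibels."""
    return 10 ** (db_change / 10)

print(intensity_ratio(90 - 52))  # 52 dB (1950) to 90 dB (2007): ~6,300x the intensity
print(intensity_ratio(3.5))      # 3-4 dB per decade: roughly a doubling each decade
```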

Thus, two questions arise. First, is there evidence that underwater sound negatively affects whale populations (and what would that evidence look like)? Second, if so, what might be done about it?

For evidence, Taylor offers an analysis of the “Southern Resident killer whales,” who spend most of their year living in what is called the Salish Sea, off the coast of Seattle in Washington state and Vancouver in British Columbia. This is “perhaps the most studied whale population in the world, with detailed health and genealogical information on births and deaths accurately measured since the late 1970s.” Taylor points out that as shipping into this area has increased, the whales living closer to the main ports have seen a decrease in population, while those living further north and away from the ports have seen an increase–suggestive, but one can imagine a variety of explanations.

Thus, Taylor zooms in on the variations in shipping across years, often caused by factors like the 2001 dot-com recession or the 2008-09 Great Recession. As it turns out: “For example, at age 40, a Southern Resident killer whale is over 30 percent more likely to die in a noisy year.” Also, “[i]n years of peak fertility, noisy years lower the probability of a subsequent successful birth by over 25 percent.” The broader question of how shipping noise affects whale populations around the world is clearly worthy of additional study, but for present purposes, Taylor advances to the second question of what might be done.

Of course, what economists call a “command-and-control” approach might just require every ship to install equipment to reduce noise. But as economists also like to point out, there might be an array of ways to reduce noise: along with quieter engines, perhaps a quieter propeller (or even just polishing the existing propellers), redesigning ducts that direct water to the propeller, perhaps traveling at different speeds (recognizing that a faster ship for less time might be preferable to a slower ship for a longer time), perhaps ships of different hull designs or different sizes. Indeed, the best answer for minimizing the noise involved in shipping a certain amount of cargo may not be knowable in advance, because it would require research and experimentation over time.

Taylor proposes a way of measuring the underwater noise from ships, and suggests that marketable permits could be used. The idea of such permits is that a shipping company must have a permit for the underwater noise it emits. The original distribution of permits can be done by handing them out to existing shipowners, or by requiring that existing shipowners buy them (perhaps through an auction). Shipping companies that find ways to move cargo more quietly will have extra permits, which they can sell to other firms. At a minimum, such permits could prevent the amount of underwater noise from rising; in addition, the amount of noise allowed by a given permit can be preset to diminish over time.

The basic idea here is that, in this case, public policy should focus not on dictating a command-and-control solution, but instead on setting a goal for how much underwater noise from shipping should be reduced, and then giving shipowning firms a financial incentive to innovate and experiment to meet and exceed that goal.
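As a toy illustration of why the market approach can beat a uniform mandate, here’s a minimal sketch in Python. The ships, abatement costs, and noise-reduction target are all invented numbers, not anything from Taylor’s paper; the point is only that trading steers the noise reduction toward whoever can do it most cheaply:

```python
# Toy cap-and-trade arithmetic with invented numbers: three ships, each able to
# cut up to 15 "units" of noise, with very different costs of doing so.

abatement_cost = {"ship A": 10, "ship B": 40, "ship C": 90}  # $ per unit of noise cut
max_cut_per_ship = 15     # assumed engineering limit on how much one ship can cut
required_total_cut = 30   # regulator's fleet-wide noise-reduction goal

# Command-and-control: every ship must cut the same 10 units itself.
uniform_cost = sum(10 * cost for cost in abatement_cost.values())

# Tradable permits: ships with cheap fixes over-comply and sell spare permits,
# so in effect the cheapest abatement happens first.
remaining, trading_cost = required_total_cut, 0
for ship, cost in sorted(abatement_cost.items(), key=lambda kv: kv[1]):
    cut = min(remaining, max_cut_per_ship)
    trading_cost += cut * cost
    remaining -= cut

print(f"uniform mandate: ${uniform_cost:,}")   # $1,400 for 30 units of noise cut
print(f"permit trading:  ${trading_cost:,}")   # $750 for the same 30 units
```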

Mortgage Lock-In

The problem of “mortgage lock-in” arises when a homeowner has a mortgage that was obtained a few years back at a lower interest rate. If that homeowner wants to move to another place (often by using the equity in their current home for the down payment), they would need to obtain a new mortgage at a substantially higher interest rate. In this way, a move could lead to a substantial rise in monthly mortgage payments. Of course, when potential sellers are discouraged from selling, the supply of homes available in the market drops–and even the potential buyers who are willing to take out a mortgage at the higher interest rate will have to search harder or wait longer to find the home they desire.

Kyle Mangum of the Philadelphia Federal Reserve provides some background in an overview article, “How Mortgage Lock-In Affects the Price of Housing,” which is subtitled, “There has never been such a huge gap between the rate homeowners pay and the rate for new mortgages” (Economic Insights, 2025: Q3).

This figure shows interest rates for a 30-year fixed rate mortgage going back to the year 2000, along with the policy interest rate controlled by the Federal Reserve–the “federal funds rate.” As you can see, 30-year mortgage rates were above 8% back in 2000, and then gradually dropped through the pandemic in 2020-21.

At least to me, it’s interesting to note that during the first two times the Fed raised interest rates after 2000, the 30-year fixed mortgage rate went up only a little. But the third and most recent round of Fed rate increases has been accompanied by a substantial rise in interest rates for a 30-year fixed rate mortgage. One obvious possible explanation for this greater movement is that when the Fed raised interest rates earlier, inflation had barely budged–so lenders were willing to lend at, say, 4% interest with an expectation that inflation would be close to 2%. But after the burst of inflation in 2022, lenders looking at a 30-year time horizon are not willing to assume that inflation will remain at around 2%, and the interest rate has risen accordingly.

The result is that lots of people who took out or refinanced a mortgage from, say, 2010 to 2021 are paying an interest rate well below the going rate for a new mortgage. Mangum offers this interesting figure. From about 2008 up through 2022, relatively few of those who had a mortgage had an interest rate that was more than 1 percentage point below the rate for a new mortgage. They were not “locked in.” But at present, something like 90% of those with mortgages are paying an interest rate that is at least 1 percentage point below the market rate for a new mortgage, so if they sold their current house and took out a new mortgage as part of buying a different house, their monthly payments could be substantially higher. As Mangum points out, the constricted housing supply from mortgage-holders who feel locked in helps to explain why US housing markets have in recent years experienced both low sales volumes and high price growth.
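To make “substantially higher” concrete, here is a back-of-the-envelope calculation using the standard fixed-rate amortization formula. The $300,000 loan and the 3 percent versus 7 percent rates are my own illustrative assumptions, not figures from Mangum’s article:

```python
# Monthly payment on a fixed-rate mortgage: P * r * (1+r)^n / ((1+r)^n - 1),
# where r is the monthly rate and n the number of monthly payments.

def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

locked = monthly_payment(300_000, 0.03)  # a rate locked in around 2020-21 (assumed)
market = monthly_payment(300_000, 0.07)  # a rate on a new mortgage today (assumed)
print(f"at 3%: ${locked:,.0f}/month")    # ~$1,265
print(f"at 7%: ${market:,.0f}/month")    # ~$1,996 -- about $730 more, every month
```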

To put this another way, mortgage-holders have in recent decades been able to take advantage of lower interest rates to refinance their fixed-rate mortgages. But with the recent rise in interest rates, most mortgage-holders are now paying an interest rate below the market interest rate for a new mortgage, and so refinancing has dried up. Here’s a figure from Freddie Mac, showing the extent of mortgage refinancing since 1980. In particular, the lower mortgage interest rates in the early 2000s led to a surge of refinancing. But over time, mortgages were being taken out at lower interest rates, and although refinancing tended to surge when mortgage interest rates fell, the total amounts dropped lower than in the early 2000s. Since 2022, when interest rates went up, refinancing of mortgages has fallen to extremely low levels.

Is there an answer to the lock-in problem? As Mangum points out, the options within the current framework of US mortgages are limited. If the Federal Reserve reduced its federal funds target rate substantially, it would bring down mortgage interest rates as well, but a return to the days of a federal funds interest rate at nearly zero percent seems unlikely. Also, my guess is that the fear of future inflation will tend to keep 30-year fixed mortgage rates high. If Americans liked adjustable-rate mortgages, then lock-in (and refinancing) would not be such big issues–but Americans as a group have shown a preference for fixed mortgage payments.

What if US mortgage institutions could change? For example, what if mortgages could be “portable,” so that you keep your earlier mortgage when you move to another home? However, a “portable” mortgage would have a different interest rate than a “nonportable” mortgage–because it might end up applying to different houses and on average might be held for different periods of time–and so trying to just pass a law to make all mortgages portable may not be practical.

Another approach comes from Denmark. The idea is that some of the investors who in the past purchased housing-backed securities carrying low mortgage rates would like to ditch those investments when interest rates go up. In Denmark, it’s possible for a mortgage-holder in this situation to buy back their own mortgage at a significant discount–thus letting the mortgage-holder benefit when interest rates rise. If someone who took out a mortgage at a low interest rate could pay off the mortgage at a discount, they might become more willing to sell–and in this way make the housing market more flexible.
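Here’s a minimal sketch of the present-value logic behind that discount, with invented numbers (Denmark’s actual mechanism works through traded mortgage bonds, so this is only the underlying arithmetic). When market rates rise above a loan’s rate, the market value of its remaining payments falls below the outstanding balance, and that gap is the discount:

```python
# A 3% mortgage with 25 years left, valued at today's assumed 7% market rate.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    r = annual_rate / 12
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

def present_value(payment: float, annual_rate: float, months: int) -> float:
    """Value today of a stream of equal monthly payments, discounted monthly."""
    r = annual_rate / 12
    return payment * (1 - (1 + r) ** -months) / r

balance, months_left = 300_000, 25 * 12            # assumed remaining balance and term
pmt = monthly_payment(balance, 0.03, months_left)  # payment set by the old 3% rate
market_value = present_value(pmt, 0.07, months_left)

print(f"outstanding balance: ${balance:,.0f}")
print(f"value at 7% rates:   ${market_value:,.0f}")           # ~$201,000
print(f"potential discount:  ${balance - market_value:,.0f}")  # ~$99,000
```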

But setting aside these types of more fundamental reforms, the lock-in problem is more likely just to resolve slowly over time. As Mangum writes:

[T]he unwinding of lock-in will likely come about through normal housing market turnover—that is, through changes in family status, jobs, health, and so on. Thanks to this turnover, most new and existing mortgages will eventually converge to the market rate. But this unwinding will take time to run its course—and extra time because the rate at which people move is dampened by lock-in …

Do Rules to Limit High Government Debt Work?

Every government faces a temptation to make two popular choices at the same time: hold taxes lower and raise spending higher. But of course, the result is higher debt. Thus, a number of countries have been attempting to set up rules that would prevent governments from giving in to temptation. One immediate challenge for such rules is that they need to contain some flexibility: after all, tying the hands of government during a pandemic or a deep recession doesn’t seem sensible. A second challenge is that the government of today would prefer not to follow the rules passed by the government of yesterday.

A group of IMF economists describes these issues after studying a database of fiscal rules in more than 120 countries in “Fiscal Guardrails against High Debt and Looming Spending Pressures” (IMF Staff Discussion Note SDN/2025/004, September 2025, by Julien Acalin, Virginia Alonso-Albarran, Clara Arroyo, Waikei Raphael Lam, Leonardo Martinez, Anh D. M. Nguyen, Francisco Roch, Galen Sher, and Alexandra Solovyeva).

Here’s the rise in countries with a fiscal rule of some kind since 1990. The upward trend started in advanced economies, but has since been spreading.

The IMF authors describe how the fiscal rules are working along these lines:

Although earlier fiscal rules were often too rigid, efforts to introduce greater flexibility have not translated into stronger compliance. … [F]ewer than two-thirds of countries adhere to their deficit rules on average, with lower share for emerging market and developing countries and debt rules. … Fiscal deficits four years after the pandemic continue to exceed fiscal rule limits by a median of 2.0–2.5 percentage points of GDP for about 40 percent of advanced economies and 60 percent of EMDEs (Alonso and others, 2025b). In most countries, public debt has surpassed the ceilings in the debt rule by an average of 25 percentage points of GDP. Such large deviations from fiscal rule limits in many countries are driven by both severe shocks and limitations in the design of fiscal rules (Davoodi and others 2022a). During the severe shocks, the magnitudes and the share of countries that deviate from fiscal rule limits increased as expenditures or deficits tend to rise. But even in normal times, some countries have deficits and debt persistently exceeding their fiscal rule limits, partly because of multiple exclusions from the rules, limited fiscal oversight, or lack of fiscal adjustments to reduce debt and deficits. In recent years, fiscal adjustments have been limited, complicating the return to fiscal rule limits (Caselli and others 2022).

Choosing a fiscal rule is easy enough: for example, a government can specify that it will balance its budget annually, or that total government debt will not surpass a certain debt/GDP ratio, along with other approaches. The rule can also offer some flexibility for pandemics and recessions. The hard part is what to do when the government is either thinking about blowing right through the rule, or has already done so. To address the harder part, the IMF team describes two useful elements of a fiscal rule.

First, any agreement on a fiscal rule should also be an agreement on what corrective action will be taken if the rule is bent or broken. For example:

Ecuador and Spain mandate corrective actions when fiscal outcomes are close to the fiscal rule limits. Some countries implement progressive triggers with corresponding tighter measures. For example, Czech Republic sets thresholds on the debt-to-GDP ratios, each involving larger fiscal adjustments if triggered. … In the event of deviations, many fiscal rules require corrective actions to be implemented within one and a half or two years (Finland, Spain) after the breach, and sometimes within three years (Grenada). More stringent correction mechanisms may require remedial action to be included in the next budget. … Some fiscal rules … call for fully unwinding past cumulative deviations in the corrective mechanism. For example, Switzerland’s mechanism accumulates any deviation from the budgeted expenditures in a notional account, requiring the government to take sufficient measures to bring the expenditures within the limit in next three annual budgets if the negative balance in the account exceeds 6 percent of expenditure. Mechanisms in Germany, Grenada, and Jamaica require corrective actions for cumulative deviations. … Some countries specify particular measures; for instance, the fiscal rule in Slovak Republic mandates a freeze on public sector wages if debt exceeds 53 percent of GDP, with further spending cuts if debt surpasses 55 percent of GDP.
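To make the Swiss mechanism concrete, here is a stylized sketch in code. The budget numbers are invented; the notional account, the 6 percent trigger, and the three-budget correction window are the features described in the passage above:

```python
# A Switzerland-style "notional account": overspending relative to the budget
# accumulates as a negative balance; once the shortfall exceeds 6% of budgeted
# expenditure, the next three budgets must unwind it. Numbers are hypothetical.

TRIGGER_SHARE = 0.06
CORRECTION_BUDGETS = 3

account = 0.0
budgets = [  # (budgeted, actual) expenditure; third budget under-spends to unwind
    (100.0, 103.0),
    (102.0, 106.0),
    (104.0, 101.7),
]

for budgeted, actual in budgets:
    account -= actual - budgeted   # overspending pushes the account negative
    print(f"budgeted {budgeted:.0f}, actual {actual:.0f}, account {account:+.1f}")
    if -account > TRIGGER_SHARE * budgeted:
        per_budget = -account / CORRECTION_BUDGETS
        print(f"  trigger: unwind {-account:.1f} over the next "
              f"{CORRECTION_BUDGETS} budgets ({per_budget:.1f} each)")
```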

In short, if your proposed fiscal rule doesn’t specify what consequences will result from breaking it, and the timing for those consequences, it isn’t much of a rule. The other main element of a successful fiscal rule is that, given that the current government is either breaking the rule or on the verge of doing so, it’s important to have some institutions that can create pressure from outside the government.

For example, many countries publish a “medium-term fiscal framework,” or MTFF, which seeks to reach broad agreement on budgetary decisions before getting down into the details. The IMF economists write:

The MTFF sets top-down limits on government expenditure and fiscal balance, guiding the annual budget process. The MTFF report should include the fiscal strategy, medium-term macro fiscal projections, measures for achieving fiscal targets, and fiscal risks assessment (Curristine and others 2024). The MTFF should be prepared and published before the budget, incorporating multiyear ceilings for fiscal aggregates, which can also be disaggregated into sector-specific or programmatic frameworks (for example, France, Rwanda, South Africa, Sweden) to facilitate the translation of targets into annual budgets and spending priorities.

I confess that in the context of the US federal budget, which seems to be run by a combination of continuing resolutions punctuated by an occasional omnibus bill that loads everything together, the idea of agreement on an MTFF seems very hard. But a number of countries do manage it. Another form of outside pressure is to have a “fiscal council” with a degree of operational independence, which has the power to point out and publicize whether the fiscal rules are under threat.

Fiscal oversight can take different institutional forms, ranging from parliamentary budget committees and auditor offices to independent fiscal councils. Fiscal councils can provide technical assessments of compliance with fiscal rules and can alert in-year deviations. Their expertise is critical for evaluating risks to public finances and the realism of macro-fiscal forecasts in the budget and MTFF. Fiscal councils should have direct communication with the media. To secure their operational independence, they should have a well-defined mandate aligned with their resources, budget safeguards, and timely access to information …

In a US context, the idea of a fiscal council also doesn’t seem very realistic. The US government decided back in the 1930s that the Federal Reserve would have its goals set by laws duly passed by Congress and signed by the President, but would then be operationally independent from politics. But that arrangement is now under political challenge, and an independent fiscal council responsible for fiscal strategy and targets seems even less politically plausible.

Again, fiscal rules are easy. The hard part is specifying what corrective actions will happen if the rules aren’t followed, and what credible institutions will advocate for the rules and the corrective actions when needed.

Snapshots of the Global Robot Population

The International Federation of Robotics is a nonprofit but industry-based trade group. Each year the IFR issues a World Robotics Report, which costs way too much for me to get a copy. However, the report is accompanied by a useful press release with slides showing big-picture trends in the spread of robots around the world. Here are a few points that jumped out at me.

Here’s a figure showing the tripling in the total stock of industrial robots in the last decade, now reaching 4.6 million:

About half of those industrial robots are in China, with Japan and Korea also in the top five. The number of manufacturing jobs in China has been declining for several years now.

What are some of the shifting patterns in global robotics?

1) For a number of years now, the use of industrial robots has been primarily in two industries: electronics and cars. While those are still the two biggest users of industrial robots, they now account for less than half of the market and the “other industries” category is on the rise.

2) The IFR divides robots into two categories: the industrial robots just mentioned, but also a rising category of “service, mobile, and medical” robots. This includes, for example, robots that can autonomously drive around in warehouses and even pick items off shelves, robots for professional cleaning, search and rescue robots, and robots that can conduct laboratory tests or even assist with surgery.

3) Humanoid robots are not really a thing yet. Such robots are still in the R&D and prototype stage. As far as I can tell, the underlying issue is that robots are usually designed to carry out particular tasks, and when you do that, the best design for a specific task is usually not shaped like the human body.

Measuring Benefits of High-Skilled Immigration

How can economists measure the benefits of high-skilled immigration? The challenge is to use real-world data to separate this immigration from other factors, recognizing that anecdotes about particular high-skill immigrants don’t offer real evidence, and that correlation is not causation. Economists often tackle questions like this by looking for a “natural experiment”–that is, some kind of event or policy that created a shock of more (or less) high-skill immigration. Michael A. Clemens describes some of this evidence in his useful short essay, “New US curb on high-skill immigrant workers ignores evidence of its likely harms” (Peterson Institute for International Economics, September 22, 2025).

For example, consider the H-1B visa, which allows a US employer to hire a foreign professional–defined as someone who has at least a bachelor’s degree in a “specialty occupation” that typically involves advanced technology. The visa is typically for three years, extendable to six years. In 1998, Congress tripled the number of these H-1B visas. Then in 2004, Congress cut the number by more than half. Set aside for the moment the issue of whether these policy choices made sense, and just look at it as a research opportunity.

When Congress tripled and then halved the number of H-1B visas, the effects were not evenly distributed across US cities. Some cities saw much bigger increases and declines in H-1B visa-holders than others. Thus, one can compare urban areas that were similar in these technology industries before 1998, and then see what happened when some of these cities received an influx of talent while others did not.

In addition, more companies would like to hire through the H-1B visa program than there are visas available, so the visas are allocated across firms by lottery. Again, think of this as a research opportunity. A researcher can compare the companies that by random chance won the lottery and were allowed to hire additional skilled labor to the companies that were not.
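Here’s a stylized sketch of that research design in Python, using simulated data (the firm outputs, the win rate, and the built-in 27 percent effect are assumptions for illustration, not the actual study’s data). Because lottery wins are random, a simple comparison of means recovers the causal effect:

```python
# Simulated visa lottery: winners and losers are identical on average before
# the draw, so any gap in average outcomes afterward is the causal effect.
import random

random.seed(0)
firms = []
for _ in range(50_000):
    baseline = random.gauss(100, 15)            # firm output absent the visas (invented)
    won = random.random() < 0.3                 # lottery win, unrelated to baseline
    output = baseline * (1.27 if won else 1.0)  # built-in +27% effect (assumed)
    firms.append((won, output))

winners = [out for won, out in firms if won]
losers = [out for won, out in firms if not won]
estimate = (sum(winners) / len(winners)) / (sum(losers) / len(losers)) - 1
print(f"estimated effect of winning: {estimate:+.1%}")  # close to the built-in +27%
```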

In short, the results of such studies are not theoretical claims, but instead are real-world results based on fairly recent US experience. Clemens describes what the studies show:

That’s how we know that workers on H-1B visas cause dynamism and opportunity for natives. They cause more patenting of new inventions, ideas that create new products and even new industries. They cause entrepreneurs to found more (and more successful) high-growth startup firms. The resulting productivity growth causes more higher-paying jobs for native workers, both with and without a college education, across all sectors. American firms able to hire more H-1B workers grow more, generating far more jobs inside and outside the firm than the foreign workers take.

An important, rigorous new study found the firms that win a government lottery allowing them to hire H-1B workers produce 27 percent more than otherwise-identical firms that don’t win, employing more immigrants but no fewer US natives—thus expanding the economy outside their own walls. So, when an influx of H-1B workers raised a US city’s share of foreign tech workers by 1 percentage point during 1990–2010, that caused 7 percent to 8 percent higher wages for college-educated workers and 3 percent to 4 percent higher wages for workers without any college education.

The key point is that in high-tech growth industries, the number and size of firms and the number of jobs is not static. An increase in the number of high-skilled immigrant workers raises the number of jobs and wages for native-born workers across a range of skill levels. Openness to innovators and innovation is a key driver for a rising US standard of living.

I’ll just add that the H-1B visa program is undoubtedly imperfect, like most real-world policies. The recipient of the visa is effectively tied to the employer for a period of time, which creates a potential for abuse. There are sure to be some native-born high-skill workers who look at the influx of immigrant high-skilled workers and worry that it will negatively affect their job prospects or wages. Economic growth is disruptive. Economic stagnation will often appear less disruptive–until people all over the economy recognize that in a zero-growth or low-growth economy, the only way to get ahead is for someone else to have less. As Paul Romer has said: “Everyone wants progress. Nobody wants change.”

Hat tip: I was directed to the Clemens article by Tyler Cowen in a post at the ever-useful “Marginal Revolution” website.

50 Years Ago: When the US Encouraged Coal Use

Coal is the dirtiest of the fossil fuels, both for its contribution to standard pollutants like particulates and sulfur, and because it emits more carbon per unit of energy produced than natural gas or petroleum. Thus, it’s good environmental news that, in the last couple of decades, US coal has declined to just 9% of total US primary energy consumption. The US Energy Information Administration reports: “In terms of coal’s total primary energy content, annual U.S. coal consumption peaked in 2005 at about 22.80 quads and production peaked in 1998 at about 24.05 quads.”

(For the curious, “primary” energy consumption refers to the original source of the energy. “Electricity” is not included, because electricity needs to be generated from something else like a natural gas power plant or a solar panel–electricity is not a primary source of energy by itself.)

(For those still more curious, NGPL refers to “Natural Gas Plant Liquids,” which are hydrocarbons like propane, which are separated from natural gas at processing plants.)

(For the additionally curious, the “renewable” energy category here includes hydropower, wind, solar, and biofuels like ethanol and wood. Of the 9% of total US energy consumption that traces to renewable energy in 2023, about three-fifths is biomass, like ethanol and wood. Not quite one-third of the 9% of US energy consumption from renewable energy in 2023 traces to wind and solar.)

But there was a time, a half-century ago, when promoting coal use was a primary energy policy for the US government. Karen Clay, Akshaya Jha, Joshua Lewis, and Edson Severnini provide the background as part of their overall history in “Carbon Rollercoaster: A Historical Analysis of Decarbonization in the United States,” in the Summer 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

If you flash back to a half-century ago, you may know that in 1973, the members of OPEC, the Organization of the Petroleum Exporting Countries, embargoed oil exports to the United States and any other countries that had supported Israel during the Yom Kippur War.  As Clay, Jha, Lewis and Severnini write: “The real price of imported oil rose dramatically, from $10.67 per barrel in 1972 (in 2007 US dollars) to $36.05 in 1974 (Seiferlein 2007, p. 171). Turbulence in the Middle East kept prices high. Unrest in Iran and the Iran-Iraq War caused further disruption, driving oil prices to $62.71 per barrel in 1980.”

In response, one policy goal of the time was to shift US energy use away from oil. The authors report:

Various regulations passed during and after the crisis reinforced the continued use of coal in electricity and other sectors. The first major piece of legislation was the Energy Supply and Environmental Coordination Act of 1974, which required that, if feasible, electric power plants burning oil and natural gas would have to convert to coal (Meltz 1975). This law was then largely superseded by the Fuel Use Act of 1978. Edward Lublin, Acting Deputy Assistant General Counsel for Coal Regulations in the Department of Energy, wrote: “The Fuel Use Act prohibits new facilities and allows DOE to prohibit existing facilities, from using petroleum or natural gas as a primary energy source unless DOE determines to grant to such facility an exemption from the Fuel Use Act’s prohibitions (Lublin 1981, p. 355).” This pro-coal legislation was often justified in terms of energy independence, given the abundant US reserves of coal. The legislation covered both electric utilities and major industrial fuel-burning installations …

The Three Mile Island nuclear power plant meltdown happened in March 1979. Thus, an additional policy goal at this time was to shift away from nuclear. The authors write:

After Three Mile Island, no new nuclear power plant construction was authorized until 2012. Because nuclear plants displaced coal-fired electricity generation—one gigawatt-hour of nuclear generation resulted in a roughly 0.8 gigawatt-hour decrease in coal-fired generation historically (Adler, Jha, and Severnini 2020)—the nuclear upheaval kept coal consumption higher than it would otherwise have been.

One additional step: the anti-pollution efforts of the original Clean Air Act had the useful effect of reducing “conventional” pollutants like ozone, particulate matter, and carbon monoxide. However, reducing carbon emissions was not yet on the policy agenda, and reducing these other pollutants came with a tradeoff: coal was burned with lower efficiency, which meant that more carbon was emitted.

[E]fforts to cut local air pollution often increased carbon emissions. The 1970 Clean Air Act and subsequent amendments in 1977 coincided with less efficiency in converting coal to electricity sold and higher carbon emissions … The aggregate implications of this shift from 1970 to 1990 are meaningful: annual total carbon emissions in 1990 from coal-fired generation was 1,607 million tons, but would have been 1,415 million tons if the same amount of coal-fired electricity had been generated at 1970 levels of carbon emissions per gigawatt-hour. Similarly, the aggregate kilowatt hours of electricity sold per ton of coal burned decreased from 2,529 in 1970 to 2,065 in 1990. Thus, regulation increased coal consumption and carbon emissions.
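It’s worth pausing over what those quoted magnitudes imply. Here is a quick check, using only the four figures in the passage above:

```python
# All four inputs come from the quoted passage.
emissions_actual_1990 = 1607   # million tons of carbon, 1990 coal generation
emissions_at_1970_rate = 1415  # counterfactual, at 1970 carbon per GWh
kwh_per_ton_1970 = 2529        # electricity sold per ton of coal burned, 1970
kwh_per_ton_1990 = 2065        # electricity sold per ton of coal burned, 1990

carbon_rate_rise = emissions_actual_1990 / emissions_at_1970_rate - 1
efficiency_drop = 1 - kwh_per_ton_1990 / kwh_per_ton_1970

print(f"carbon per GWh rose about {carbon_rate_rise:.0%}")              # ~14%
print(f"electricity per ton of coal fell about {efficiency_drop:.0%}")  # ~18%
```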

Putting all of these together, “By 2005, coal consumption was five times what it had been in 1960.”

One of my complaints about the world, which I’m confident will never really be addressed, is that those who advocated for policies that turned out to have undesirable tradeoffs pretty much never acknowledge that reality. The US economy didn’t really start getting off coal until the fracking revolution greatly expanded the supply of natural gas (as shown by the light blue area in the figure above). But what if it had been possible to move to natural gas sooner? France, by contrast, reacted to the OPEC oil embargo of 1973 by building nuclear power plants, which is why France’s carbon emissions have been quite low since then. Perhaps the worst thing about the US stepping away from nuclear is that several decades went by without intensive research on how to make the technology safer. What if solar and wind technology could have been accelerated as well? The carbon from the additional coal that was burned from, say, 1975 to 2005 is still in the atmosphere now, and will remain there for a very long time.

Recent Trends in US Antitrust Enforcement

The Biden administration appointed people to key antitrust positions in the Federal Trade Commission and the Antitrust Division at the US Department of Justice who, in general, promised to make antitrust regulation tougher. I’ve written here before about questions of doctrine: that is, how should antitrust cases be evaluated? But there’s also a more basic question: what changes in mergers and enforcement have actually happened?

Under the Hart-Scott-Rodino Act of 1976, all proposed mergers and acquisitions above a minimum threshold size must be reported to the US government, which gives the antitrust authorities a chance to look them over in advance. In 2024, the minimum threshold size over which a transaction needed to be reported in advance was $119.5 million. Each year, the Federal Trade Commission and the Antitrust Division of the US Department of Justice report on the transactions from the previous year, as well as what enforcement actions were taken. The most recent is the 47th Hart-Scott-Rodino Annual Report (FY 2024). Thus, the report is a chance to see what the newly aggressive antitrust administrators of the Biden administration did through 2024.

As a starting point, here is the number of proposed mergers reported under the Hart-Scott-Rodino rules. Obviously, there’s a fair amount of year-to-year variation: for example, the low level in 2020 is probably attributable in part to the disruptions of the pandemic. The higher levels in 2021 and 2022 were partly a bounceback from the pandemic year, but there was also some talk in the financial press that firms were trying to complete transactions before the new Biden antitrust warriors wrote a set of new merger guidelines. The drop in the last two years is essentially back to pre-pandemic levels. Part of the issue here is that at least some merger transactions are financed by debt, which is less enticing when interest rates go up. Overall, it seems fair to say that the number of proposed mergers in 2024 was near-average for the previous decade.

Was there an effect on the average size of mergers? Here, the pattern is more clear: The share of proposed mergers with a value of more than $1 billion has risen substantially in the last few years.

When the FTC and the DoJ are notified of a potential merger, they can either allow it to proceed without challenge or they can put in a “second request” that expresses concern and asks for more information. This “second request” percentage is always pretty low. After all, antitrust authorities are not second-guessing whether a proposed deal will be a money-maker, but only asking whether it poses a risk of reduced competition. Also, the antitrust authorities have limited budgetary resources and need to pick and choose. That said, the share of transactions getting a “second request” had been relatively low under the Biden antitrust team, although in 2024 it rose back to a level that had been common during the first Trump administration.

Ultimately, after these second requests, “The [Federal Trade] Commission took enforcement action against 18 transactions: 12 that the parties abandoned or restructured as a result of antitrust concerns raised during the investigation; and six that resulted in the Commission initiating administrative or federal court litigation. The [Antitrust] Division took enforcement action against 14 transactions: 12 that the parties abandoned in the face of questions from the Division; and two that were restructured after the Division raised concerns about the threat they posed to competition.”

Of course, it’s always hard to draw linkages from enforcement efforts to outcomes, in antitrust as in other areas. The tough talk from the Biden antitrust enforcers, their doctrinal arguments over what antitrust enforcement should be, and the specific cases where they brought enforcement actions surely shaped the types of mergers that firms were willing to propose. But with such effects duly noted, it’s hard to look at the raw number of proposed mergers, proposed large mergers, and enforcement efforts and interpret it as a sharp break from past antitrust practice.

For those who want more on antitrust doctrine, I’ve commented on this blog from time to time about the Biden antitrust team, the new merger guidelines, some current antitrust cases, and the historical changes in merger law over time. Some of these posts include:

Also, the Winter 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor) has a three-paper symposium on “The 2023 Merger Guidelines and Beyond.”

Some Trends in Global Debt from the IMF

The IMF has updated its Global Debt Database, and Vitor Gaspar, Carlos Eduardo Goncalves, and Marcos Poplawski-Ribeiro point out a few of the big-picture changes in a short article, “Global Debt Remains Above 235% of World GDP” (IMF Blog, September 17, 2025).

Here’s an overall view of global debt since 1950, measured as a share of global GDP:

The big-picture patterns here are intriguing. From 1950 up to about 1980, global debt remains at roughly 100% of global GDP. However, during this time the share of public debt (yellow bars) is falling, while the share of private debt (blue bars) is rising. In 1950, public debt is substantially larger than private debt; by 1980, private debt is substantially larger than public debt–and has remained larger ever since.

But in the 1980s, global debt as a share of GDP starts rising. Comparing the early 1990s to the present, corporate debt as a share of global GDP hasn’t risen much, household debt has risen moderately, and government debt has risen by a lot–although it has dropped a bit in the last few years as pandemic-related spending has diminished.

In one way, rising debt is not a surprise. Countries at a low level of economic development often have little debt, because their banking and financial sectors are also underdeveloped. Such economies lack a well-developed channel through which savings by households and firms can become loanable funds for others in the economy.

But at some point, for any organization or household, rising levels of debt become a worry. It’s thought-provoking to me that the corporate sector, where outside investors in corporate stocks and bonds are monitoring company financial records, hasn’t seen much of a rise in debt. Instead, the rising debt levels are traceable to households and government.

Here’s another figure from the IMF authors, focused on changes in 2024 for the US, China, and advanced and other economies around the world. In the US, public debt rose in 2024, but private debt dropped–in part because many US corporations have high profits and thus can reduce their borrowing, and perhaps also in part because rising public debt is leading to higher interest rates in a way that leads to “crowding out” of private borrowers.

But when it comes to higher debt levels in 2024, the obvious “winner” is China, with dramatic rises in both public and private debt. Indeed, given that the banking and financial system in China is heavily controlled and backstopped by its government, even the private debt listed here is in some sense “public.” Many of the causes behind China’s economic growth involve real changes, like a better-skilled workforce, improved infrastructure, capital investment, and better technology. But at least one of the causes has also involved turning the debt spigots wide open, especially through local government lending to companies. Debt can be a facilitator of growth, but excessive debt can also cripple growth. China seemed to be attempting to address its pre-existing debt problem with additional debt–a policy approach that rarely ends well.

Economics of Trade Sanctions

The exercise of US foreign policy (along with the European Union and the United Nations) has been increasingly characterized by the use (or threat) of trade sanctions. What do we know about how such sanctions work? Gabriel Felbermayr, T. Clifton Morgan, Constantinos Syropoulos, and Yoto V. Yotov review the evidence in “Economic Sanctions: Stylized Facts and Quantitative Evidence” (Annual Review of Economics, 2025, 17: 175-195). They write:

According to the newest version of the Global Sanctions Data Base (GSDB; Yalcin et al. 2024), the number of sanction programs in place globally has shot up from about 200 10 years ago to about 600 in 2023. What is more, about 12% of all existing country pairs and 27% of world trade are currently affected by some type of sanction. In their various forms, sanctions are the leading geoeconomic tool aiming to coerce foreign governments into actions that they would not undertake otherwise. …

[S]anction processes are much better understood today than they were 30 years ago. The political science community has come to accept what economists already knew (i.e., that sanctions bring substantial economic effects), and economists have come to accept what political scientists have long understood (i.e., that substantial economic costs do not always bring changes in policy). Over that same period of time, the use of sanctions has dramatically increased, and they have come to affect many more bilateral economic relationships. This is a puzzling phenomenon: If sanctions are costly and frequently fail to deliver the desired policy objectives, why have they become so endemic?

Their discussion of this question offers various hypotheses about bargaining and negotiating. But I found one of the insights especially persuasive: “[T]he primary effects on [sanction] targets are negative, large, often long-lasting, and very heterogeneous; in contrast, the corresponding effects on senders tend to be small and short-lived.” In other words, sanctions have at least a chance of producing substantial pain, at little cost to those doing the sanctioning.

For more details, a useful starting point is a two-paper symposium on “Trade Sanctions and International Relations” in the Winter 2023 issue of the Journal of Economic Perspectives (where I work as Managing Editor):

Better Permitting and More Building: Possible?

It seems natural enough, at least based on US experience, to believe that building and permitting are in a natural opposition: that is, stronger permitting means less building. Zachary Liscow has been looking for a way out of this opposition. He spells out some of his thoughts in “Reforming Permitting to Build Infrastructure” (Hutchins Center on Fiscal & Monetary Policy at Brookings, September 2025).

I confess that I was drawn to the paper, in part, by footnote 1 after the first sentence. (Have I ever written a sentence more geeky than that? Probably.) It reads: “This report builds on Zachary Liscow, ‘Getting Infrastructure Built: The Law and Economics of Permitting,’ Journal of Economic Perspectives 39, no. 1 (2025): 151–80.” As the Managing Editor of JEP, I recommend the earlier paper as well.

In particular, Liscow is concerned that the US needs more infrastructure, particularly for energy and transportation, and that the existing system of permitting has evolved in such a way that it can allow a well-funded and/or noisy minority to have effective power to slow or block the needed infrastructure. Liscow’s central idea is to make permitting work better, in particular by thinking of ways that the process might allow open consideration and evaluation of environmental and other issues–while also allowing for adjustments to the original plans. However, once this improved permitting process has occurred, the follow-up part of Liscow’s proposal is that judges would be considerably more hesitant to intervene in the decisions of the permitting process, whether that intervention would allow or block the proposed construction.

In this paper, Liscow offers a four-part plan to reform the permitting requirements created by the National Environmental Policy Act (NEPA). I’ll quote here from the summary of the paper at the Brookings website:

  1. Shifting legal power: Judicial oversight should be curtailed to reduce excessive litigation. This includes reforming the “hard look” standard courts use to assess agency actions, limiting the range of alternatives agencies must consider, shortening statutes of limitations for lawsuits, restricting standing to sue, and limiting the scope of judicial injunctions.
  2. Facilitating popular decision-making and negotiated agreements: Since the permitting process often pits government agencies against fragmented community opposition, we need new tools to foster popular decision-making and negotiated agreements that would bindingly preclude litigation. These could include mechanisms like local legislative approval, compensation through community benefit agreements, or more experimental models in which the government would designate a representative set of interest groups to negotiate on the public’s behalf.
  3. Strengthening state capacity: A well-functioning permitting regime requires well-resourced institutions. Agencies should be able to expand staffing, collect better data, coordinate their efforts, and make greater use of categorical exclusions and expedited reviews—especially for critical clean energy and transit projects.
  4. Improving public participation: A more democratic and equitable permitting process requires early and broad-based outreach. Such outreach should include not only the most vocal opponents but also previously marginalized groups and those who would stand to benefit from planned development. Experimentation with new models to ensure diverse stakeholder involvement should be encouraged.

The summary continues: “Taken together, these reforms comprise a ‘green bargain,’ speeding construction and lowering costs, allowing the construction of the infrastructure needed for the green transition, and empowering the broader public—especially lower-income communities most hurt from failing infrastructure—over narrow interests.”

It’s of course uncertain whether Liscow’s proposals will work. Would the increases in public/popular input and decision-making end up creating an even bigger obstruction to building infrastructure? Would judges actually back off, if the earlier process had taken place? What are the chances of a substantial increase in state and federal capability to oversee these kinds of changes? But it also seems clear to me that the current permitting system isn’t working well. Liscow’s proposed course of action seems better than at least one alternative, which would involve a dramatic reduction or outright scrapping of permit requirements.