Do Rules to Limit High Government Debt Work?

Every government faces a temptation to make two popular choices at the same time: hold taxes lower and raise spending higher. But of course, the result is higher debt. Thus, a number of countries have been attempting to set up rules that would prevent governments from giving in to temptation. One immediate challenge for such rules is that they need to contain some flexibility: after all, tying the hands of government during a pandemic or a deep recession doesn’t seem sensible. A second challenge is that governments today would prefer not to follow the rules passed by governments yesterday.

A group of IMF economists describe these issues after studying a database of fiscal rules in more than 120 countries in “Fiscal Guardrails against High Debt and Looming Spending Pressures” (IMF Staff Discussion Note SDN/2025/004, September 2025, by Julien Acalin, Virginia Alonso-Albarran, Clara Arroyo, Waikei Raphael Lam, Leonardo Martinez, Anh D. M. Nguyen, Francisco Roch, Galen Sher, and Alexandra Solovyeva).

Here’s the rise in countries with a fiscal rule of some kind since 1990. The upward trend started in advanced economies, but has since been spreading.

The IMF authors describe how the fiscal rules are working along these lines:

Although earlier fiscal rules were often too rigid, efforts to introduce greater flexibility have not translated into stronger compliance. … [F]ewer than two-thirds of countries adhere to their deficit rules on average, with lower share for emerging market and developing countries and debt rules. … Fiscal deficits four years after the pandemic continue to exceed fiscal rule limits by a median of 2.0–2.5 percentage points of GDP for about 40 percent of advanced economies and 60 percent of EMDEs (Alonso and others, 2025b). In most countries, public debt has surpassed the ceilings in the debt rule by an average of 25 percentage points of GDP. Such large deviations from fiscal rule limits in many countries are driven by both severe shocks and limitations in the design of fiscal rules (Davoodi and others 2022a). During the severe shocks, the magnitudes and the share of countries that deviate from fiscal rule limits increased as expenditures or deficits tend to rise. But even in normal times, some countries have deficits and debt persistently exceeding their fiscal rule limits, partly because of multiple exclusions from the rules, limited fiscal oversight, or lack of fiscal adjustments to reduce debt and deficits. In recent years, fiscal adjustments have been limited, complicating the return to fiscal rule limits (Caselli and others 2022).

Choosing a fiscal rule is easy enough: for example, a government can specify that it will balance its budget annually, or that total government debt will not surpass a certain debt/GDP ratio, among other approaches. The rule can also offer some flexibility for pandemics and recessions. The hard part is what to do when the government is either thinking about blowing right through the rule, or has already done so. To address the harder part, the IMF team describes two useful elements of a fiscal rule.

First, any agreement on a fiscal rule should also be an agreement on what corrective action will be taken if the rule is bent or broken. For example:

Ecuador and Spain mandate corrective actions when fiscal outcomes are close to the fiscal rule limits. Some countries implement progressive triggers with corresponding tighter measures. For example, Czech Republic sets thresholds on the debt-to-GDP ratios, each involving larger fiscal adjustments if triggered. … In the event of deviations, many fiscal rules require corrective actions to be implemented within one and a half or two years (Finland, Spain) after the breach, and sometimes within three years (Grenada). More stringent correction mechanisms may require remedial action to be included in the next budget. … Some fiscal rules … call for fully unwinding past cumulative deviations in the corrective mechanism. For example, Switzerland’s mechanism accumulates any deviation from the budgeted expenditures in a notional account, requiring the government to take sufficient measures to bring the expenditures within the limit in next three annual budgets if the negative balance in the account exceeds 6 percent of expenditure. Mechanisms in Germany, Grenada, and Jamaica require corrective actions for cumulative deviations. … Some countries specify particular measures; for instance, the fiscal rule in Slovak Republic mandates a freeze on public sector wages if debt exceeds 53 percent of GDP, with further spending cuts if debt surpasses 55 percent of GDP.
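The Swiss mechanism described in the quotation is, at bottom, a simple accounting algorithm. Here is a minimal Python sketch of how such a notional “compensation account” might work; the function names and the sample spending figures are hypothetical, and only the 6 percent trigger comes from the passage above.

```python
# Illustrative sketch (not the actual Swiss statute): deviations of actual
# spending from budgeted spending accumulate in a notional account; once the
# negative balance exceeds 6 percent of annual expenditure, corrective
# measures are required.

def update_account(balance, budgeted, actual):
    """Add this year's deviation (budgeted minus actual) to the account."""
    return balance + (budgeted - actual)

def correction_required(balance, expenditure, threshold=0.06):
    """True if the negative balance exceeds the threshold share of expenditure."""
    return -balance > threshold * expenditure

# Three hypothetical years of overspending relative to budget:
balance = 0.0
for budgeted, actual in [(100.0, 103.0), (102.0, 104.0), (104.0, 106.0)]:
    balance = update_account(balance, budgeted, actual)

print(balance)                                          # -7.0
print(correction_required(balance, expenditure=106.0))  # True (7.0 > 6.36)
```

The design point is that the account has memory: a government cannot escape a past breach simply by hitting this year’s target, because cumulative deviations must be unwound.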

In short, if your proposed fiscal rule doesn’t specify what consequences will result from breaking it, and the timing for those consequences, it isn’t much of a rule. The other main element of a successful fiscal rule is that, given that the current government is either breaking the rule or on the verge of doing so, it’s important to have some institutions that can create pressure from outside the government.

For example, many countries publish a “medium-term fiscal framework,” or MTFF, which seeks to reach broad agreement on budgetary decisions before getting down into the details. The IMF economists write:

The MTFF sets top-down limits on government expenditure and fiscal balance, guiding the annual budget process. The MTFF report should include the fiscal strategy, medium-term macro fiscal projections, measures for achieving fiscal targets, and fiscal risks assessment (Curristine and others 2024). The MTFF should be prepared and published before the budget, incorporating multiyear ceilings for fiscal aggregates, which can also be disaggregated into sector-specific or programmatic frameworks (for example, France, Rwanda, South Africa, Sweden) to facilitate the translation of targets into annual budgets and spending priorities.

I confess that in the context of the US federal budget, which seems to be run by a combination of continuing resolutions punctuated by an occasional omnibus bill that loads everything together, the idea of agreement on an MTFF seems very hard. But a number of countries do manage it. Another form of outside pressure is to have a “fiscal council” with a degree of operational independence, which has the power to point out and publicize whether the fiscal rules are under threat.

Fiscal oversight can take different institutional forms, ranging from parliamentary budget committees and auditor offices to independent fiscal councils. Fiscal councils can provide technical assessments of compliance with fiscal rules and can alert in year deviations. Their expertise is critical for evaluating risks to public finances and the realism of macro-fiscal forecasts in the budget and MTFF. Fiscal councils should have direct communication with the media. To secure their operational independence, they should have a well-defined mandate aligned with their resources, budget safeguards, and timely access to information …

In a US context, the idea of a fiscal council also doesn’t seem very realistic. The US government decided back in the 1930s that the Federal Reserve would have its goals set by laws duly passed by Congress and signed by the President, but would then be operationally independent from politics. But that arrangement is now under political challenge, and an independent fiscal council responsible for fiscal strategy and targets seems even less politically plausible.

Again, fiscal rules are easy. The hard part is specifying what corrective actions will happen if the rules aren’t followed, and what credible institutions will advocate for the rules and the corrective actions when needed.

Snapshots of the Global Robot Population

The International Federation of Robotics is a nonprofit but industry-based trade group. Each year the IFR issues a World Robotics Report, which costs way too much for me to get a copy. However, the report is accompanied by a useful press release with slides showing big-picture trends in the spread of robots around the world. Here are a few points that jumped out at me.

Here’s a figure showing the tripling in the total stock of industrial robots in the last decade, now reaching 4.6 million:

About half of those industrial robots are in China, with Japan and Korea also in the top five. The number of manufacturing jobs in China has been declining for several years now.

What are some of the shifting patterns in global robotics?

1) For a number of years now, the use of industrial robots has been primarily in two industries: electronics and cars. While those are still the two biggest users of industrial robots, they now account for less than half of the market and the “other industries” category is on the rise.

2) The IFR divides robots into two categories: the industrial robots just mentioned, but also a rising category of “service, mobile, and medical” robots. This includes, for example, robots that can autonomously drive around in warehouses and even pick items off shelves, robots for professional cleaning, search and rescue robots, and robots that can conduct laboratory tests or even assist with surgery.

3) Humanoid robots are not really a thing yet. Such robots are still in the R&D and prototype stage. As far as I can tell, the underlying issue is that robots are usually designed to carry out particular tasks, and when you do that, the best design for a specific task is usually not shaped like the human body.

Measuring Benefits of High-Skilled Immigration

How can economists measure the benefits of high-skilled immigration? The challenge is to use real-world data to separate this immigration from other factors, recognizing that anecdotes about particular high-skill immigrants don’t offer real evidence, and that correlation is not causation. Economists often tackle questions like this by looking for a “natural experiment”–that is, some kind of event or policy that created a shock of more (or less) high-skill immigration. Michael A. Clemens describes some of this evidence in his useful short essay, “New US curb on high-skill immigrant workers ignores evidence of its likely harms” (Peterson Institute for International Economics, September 22, 2025).

For example, consider the H-1B visa, which allows a US employer to hire a foreign professional–defined as someone who has at least a bachelor’s degree in a “specialty occupation” that typically involves advanced technology. The visa is typically for three years, extendable to six years. In 1998, Congress tripled the number of these H-1B visas. Then in 2004, Congress cut the number by more than half. Set aside for the moment the issue of whether these policy choices made sense, and just look at them as a research opportunity.

When Congress tripled and then halved the number of H-1B visas, the effects were not evenly distributed across US cities. Some cities saw much bigger increases and declines in H-1B visa-holders than others. Thus, one can compare urban areas that were similar in these technology industries before 1998, and then see what happened when some of these cities received an influx of talent while others did not.

In addition, more companies would like to hire through the H-1B visa program than the number of actual visas available, so the visas are actually allocated across firms by lottery. Again, think of this as a research opportunity. A researcher can compare those companies that by random chance won the lottery and were allowed to hire additional skilled labor to those companies that were not.
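The logic of the lottery comparison can be illustrated with simulated data. The sketch below is purely hypothetical (the firm outputs, the effect size, and the 50 percent win rate are all invented for illustration); it shows why randomization lets a simple difference in means stand in for a causal effect.

```python
# Illustrative sketch of a lottery-based comparison on synthetic data (not
# the actual study): because lottery wins are random, winning firms and
# losing firms are alike on average, so any systematic outcome gap reflects
# the effect of being allowed to hire.

import random
import statistics

random.seed(0)

TRUE_EFFECT = 5.0  # hypothetical output boost from hiring a skilled worker

firms = []
for _ in range(10_000):
    won = random.random() < 0.5  # lottery outcome, random by design
    output = random.gauss(100.0, 10.0) + (TRUE_EFFECT if won else 0.0)
    firms.append((won, output))

winners = [y for won, y in firms if won]
losers = [y for won, y in firms if not won]

# A simple difference in means recovers TRUE_EFFECT up to sampling noise,
# precisely because winning is unrelated to firm characteristics.
estimate = statistics.mean(winners) - statistics.mean(losers)
print(round(estimate, 1))  # close to 5.0
```

Without randomization, firms that hire more might simply be better-run firms, and the comparison would confound the two; the lottery removes that confound by construction.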

In short, the results of such studies are not theoretical claims, but instead are real-world results based on fairly recent US experience. Clemens describes what the studies show:

That’s how we know that workers on H-1B visas cause dynamism and opportunity for natives. They cause more patenting of new inventions, ideas that create new products and even new industries. They cause entrepreneurs to found more (and more successful) high-growth startup firms. The resulting productivity growth causes more higher-paying jobs for native workers, both with and without a college education, across all sectors. American firms able to hire more H-1B workers grow more, generating far more jobs inside and outside the firm than the foreign workers take.

An important, rigorous new study found the firms that win a government lottery allowing them to hire H-1B workers produce 27 percent more than otherwise-identical firms that don’t win, employing more immigrants but no fewer US natives—thus expanding the economy outside their own walls. So, when an influx of H-1B workers raised a US city’s share of foreign tech workers by 1 percentage point during 1990–2010, that caused 7 percent to 8 percent higher wages for college-educated workers and 3 percent to 4 percent higher wages for workers without any college education.

The key point is that in high-tech growth industries, the number and size of firms and the number of jobs is not static. An increase in the number of high-skilled immigrant workers raises the number of jobs and wages for native-born workers across a range of skill levels. Openness to innovators and innovation is a key driver for a rising US standard of living.

I’ll just add that the H-1B visa program is undoubtedly imperfect, like most real-world policies. The receiver of the visa is effectively tied to the employer for a period of time, which creates a potential for abuse. There are sure to be some native-born high-skill workers who look at the influx of immigrant high-skilled workers and worry that it will negatively affect their job prospects or wages. Economic growth is disruptive. Economic stagnation will often appear less disruptive–until people all over the economy recognize that in a zero-growth or low-growth economy, the only way to get ahead is for someone else to have less. As Paul Romer has said: “Everyone wants progress. Nobody wants change.”

Hat tip: I was directed to the Clemens article by Tyler Cowen in a post at the ever-useful “Marginal Revolution” website.

50 Years Ago: When the US Encouraged Coal Use

Coal is the dirtiest of the fossil fuels, both for its contribution to standard pollutants like particulates and sulfur, and because it emits more carbon per unit of energy produced than natural gas or petroleum. Thus, it’s good environmental news that, in the last couple of decades, US coal has declined to just 9% of total US primary energy consumption. The US Energy Information Administration reports: “In terms of coal’s total primary energy content, annual U.S. coal consumption peaked in 2005 at about 22.80 quads and production peaked in 1998 at about 24.05 quads.”

(For the curious, “primary” energy consumption refers to the original source of the energy. “Electricity” is not included, because electricity needs to be generated from something else like a natural gas power plant or a solar panel–electricity is not a primary source of energy by itself.)

(For those still more curious, NGPL refers to “Natural Gas Plant Liquids,” which are hydrocarbons like propane, which are separated from natural gas at processing plants.)

(For the additionally curious, the “renewable” energy category here includes hydropower, wind, solar, and biofuels like ethanol and wood. Of the 9% of total US energy consumption that traces to renewable energy in 2023, about three-fifths is biomass, like ethanol and wood. Not quite one-third of the 9% of US energy consumption from renewable energy in 2023 traces to wind and solar.)

But there was a time, a half-century ago, when promoting coal use was a primary energy policy for the US government. Karen Clay, Akshaya Jha, Joshua Lewis, and Edson Severnini provide the background as part of their overall history in “Carbon Rollercoaster: A Historical Analysis of Decarbonization in the United States,” in the Summer 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

If you flash back to a half-century ago, you may know that in 1973, the members of OPEC, the Organization of the Petroleum Exporting Countries, embargoed oil exports to the United States and any other countries that had supported Israel during the Yom Kippur War.  As Clay, Jha, Lewis and Severnini write: “The real price of imported oil rose dramatically, from $10.67 per barrel in 1972 (in 2007 US dollars) to $36.05 in 1974 (Seiferlein 2007, p. 171). Turbulence in the Middle East kept prices high. Unrest in Iran and the Iran-Iraq War caused further disruption, driving oil prices to $62.71 per barrel in 1980.”

In response, one policy goal of the time was to shift US energy use away from oil. The authors report:

Various regulations passed during and after the crisis reinforced the continued use of coal in electricity and other sectors. The first major piece of legislation was the Energy Supply and Environmental Coordination Act of 1974, which required that, if feasible, electric power plants burning oil and natural gas would have to convert to coal (Meltz 1975). This law was then largely superseded by the Fuel Use Act of 1978. Edward Lublin, Acting Deputy Assistant General Counsel for Coal Regulations in the Department of Energy, wrote: “The Fuel Use Act prohibits new facilities and allows DOE to prohibit existing facilities, from using petroleum or natural gas as a primary energy source unless DOE determines to grant to such facility an exemption from the Fuel Use Act’s prohibitions (Lublin 1981, p. 355).” This pro-coal legislation was often justified in terms of energy independence, given the abundant US reserves of coal. The legislation covered both electric utilities and major industrial fuel-burning installations …

The Three Mile Island nuclear power plant meltdown happened in March 1979. Thus, an additional policy goal at this time was to shift away from nuclear power. The authors write:

After Three Mile Island, no new nuclear power plant construction was authorized until 2012. Because nuclear plants displaced coal-fired electricity generation—one gigawatt-hour of nuclear generation resulted in a roughly 0.8 gigawatt-hour decrease in coal-fired generation historically (Adler, Jha, and Severnini 2020)—the nuclear upheaval kept coal consumption higher than it would otherwise have been.

One additional step was that the anti-pollution efforts of the original Clean Air Act had the useful effect of reducing “conventional” pollutants like ozone, particulate matter, carbon monoxide, and others. However, reducing carbon emissions was not yet on the policy agenda. Reducing these other pollutants involved a tradeoff: coal was burned with lower efficiency, which meant that more carbon was emitted.

[E]fforts to cut local air pollution often increased carbon emissions. The 1970 Clean Air Act and subsequent amendments in 1977 coincided with less efficiency in converting coal to electricity sold and higher carbon emissions … The aggregate implications of this shift from 1970 to 1990 are meaningful: annual total carbon emissions in 1990 from coal-fired generation was 1,607 million tons, but would have been 1,415 million tons if the same amount of coal-fired electricity had been generated at 1970 levels of carbon emissions per gigawatt-hour. Similarly, the aggregate kilowatt hours of electricity sold per ton of coal burned decreased from 2,529 in 1970 to 2,065 in 1990. Thus, regulation increased coal consumption and carbon emissions.
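For readers who want to check the arithmetic, the figures quoted above imply the following back-of-the-envelope calculations (all inputs are taken directly from the passage; the percentage framing is mine):

```python
# Back-of-the-envelope check of the figures quoted above.

actual_1990 = 1607.0          # million tons of carbon, coal generation, 1990
counterfactual_1990 = 1415.0  # same electricity at 1970 emission rates

extra_emissions = actual_1990 - counterfactual_1990
print(extra_emissions)                                        # 192.0
print(round(extra_emissions / counterfactual_1990 * 100, 1))  # 13.6 percent higher

# Efficiency decline: kWh of electricity sold per ton of coal burned
kwh_per_ton_1970 = 2529.0
kwh_per_ton_1990 = 2065.0
print(round((1 - kwh_per_ton_1990 / kwh_per_ton_1970) * 100, 1))  # 18.3 percent less
```

In other words, by 1990 each ton of coal was yielding roughly 18 percent less electricity than in 1970, and annual carbon emissions from coal-fired generation were about 14 percent higher than they would have been at 1970 emission rates.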

Putting all of these together, “By 2005, coal consumption was five times what it had been in 1960.”

One of my complaints about the world, which I’m confident will never really be addressed, is that those who advocated for policies that turned out to have undesirable tradeoffs pretty much never acknowledge that reality. The US economy didn’t really start getting off coal until the fracking revolution greatly expanded the supply of natural gas (as shown by the light blue area in the figure above). But what if it had been possible to move to natural gas sooner? Or consider that France reacted to the OPEC oil embargo of 1973 by building nuclear power plants, which means that France’s carbon emissions have been quite low ever since. Perhaps the worst thing about the US stepping away from nuclear is that several decades went by without intensive research on how to make the technology safer. What if solar and wind technology could have been accelerated as well? The carbon from the additional coal that was burned from, say, 1975 to 2005 is still in the atmosphere now, and will remain there for a very long time.

Recent Trends in US Antitrust Enforcement

The Biden administration appointed people to key antitrust positions in the Federal Trade Commission and the Antitrust Division at the US Department of Justice who, in general, promised to make antitrust regulation tougher. I’ve written here before about questions of doctrine: that is, how should antitrust cases be evaluated? But there’s also a more basic question: what changes in mergers and enforcement have actually happened?

Under the Hart-Scott-Rodino Act of 1976, all proposed mergers and acquisitions above a minimum threshold size must be reported to the US government, which gives the antitrust authorities a chance to look them over in advance. In 2024, the minimum threshold size over which a transaction needed to be reported in advance was $119.5 million. Each year, the Federal Trade Commission and the Antitrust Division of the US Department of Justice report on the transactions from the previous year, as well as what enforcement actions were taken. The most recent is the 47th Hart-Scott-Rodino Annual Report (FY 2024). Thus, the report is a chance to see what the newly aggressive antitrust administrators of the Biden administration did through 2024.

As a starting point, here are the number of proposed mergers reported under the Hart-Scott-Rodino rules. Obviously, there’s a fair amount of year-to-year variation: for example, the low level in 2020 is probably attributable in part to the disruptions of the pandemic. The higher levels in 2021 and 2022 were partly a bounceback from the pandemic year, but there was also some talk in the financial press that firms were trying to complete transactions before the new Biden antitrust warriors wrote a new set of merger guidelines. The drop in the last two years is essentially back to pre-pandemic levels. Part of the issue here is that at least some merger transactions are financed by debt, which is less enticing when interest rates go up. Overall, it seems fair to say that the number of proposed mergers in 2024 was near-average for the previous decade.

Was there an effect on the average size of mergers? Here, the pattern is more clear: The share of proposed mergers with a value of more than $1 billion has risen substantially in the last few years.

When the FTC and the DoJ are notified of a potential merger, they can either allow it to proceed without challenge or they can put in a “second request” that expresses concern and asks for more information. This “second request” percentage is always pretty low. After all, the presumption of antitrust authorities is that they are not second-guessing whether a proposed deal will be a money-maker, but only whether it poses a risk of reduced competition. Also, the antitrust authorities have limited budgetary resources and need to pick and choose. That said, the share of transactions getting a “second request” had been relatively low under the Biden antitrust team, although in 2024 it rose back to a level that had been common during the first Trump administration.

Ultimately, after these second requests, “The [Federal Trade] Commission took enforcement action against 18 transactions: 12 that the parties abandoned or restructured as a result of antitrust concerns raised during the investigation; and six that resulted in the Commission initiating administrative or federal court litigation. The [Antitrust] Division took enforcement action against 14 transactions: 12 that the parties abandoned in the face of questions from the Division; and two that were restructured after the Division raised concerns about the threat they posed to competition.”

Of course, it’s always hard to draw linkages from enforcement efforts to outcomes, in antitrust as in other areas. The tough talk from the Biden antitrust enforcers, their doctrinal arguments over what antitrust enforcement should be, and the specific cases where they brought enforcement actions surely shaped the types of mergers that firms were willing to propose. But with such effects duly noted, it’s hard to look at the raw number of proposed mergers, proposed large mergers, and enforcement efforts and interpret it as a sharp break from past antitrust practice.

For those who want more on antitrust doctrine, I’ve commented on this blog from time to time about the Biden antitrust team, the new merger guidelines, some current antitrust cases, and the historical changes in merger law over time. Some of these posts include:

Also, the Winter 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor) has a three-paper symposium on “The 2023 Merger Guidelines and Beyond.”

Some Trends in Global Debt from the IMF

The IMF has updated its Global Debt Database, and Vitor Gaspar, Carlos Eduardo Goncalves, and Marcos Poplawski-Ribeiro point out a few big-picture changes in a short article, “Global Debt Remains Above 235% of World GDP” (IMF Blog, September 17, 2025).

Here’s an overall view of global debt since 1950, measured as a share of global GDP:

The big-picture patterns here are intriguing. From 1950 up to about 1980, global debt remains at roughly 100% of global GDP. However, during this time the share of public debt (yellow bars) is falling, while the share of private debt (blue bars) is rising. In 1950, public debt is substantially larger than private debt; by 1980, private debt is substantially larger than public debt–and it has remained larger ever since.

But in the 1980s, global debt as a share of GDP starts rising. Comparing the early 1990s to the present, corporate debt as a share of global GDP hasn’t risen much, household debt has risen moderately, and government debt has risen by a lot–although it has dropped a bit in the last few years as pandemic-related spending has diminished.

In one way, rising debt is not a surprise. Countries at a low level of economic development often have little debt, because their banking and financial sectors are also underdeveloped. Such economies lack a well-developed channel through which savings by households and firms can become loanable funds for others in the economy.

But at some point, for any organization or household, rising levels of debt become a worry. It’s thought-provoking to me that the corporate sector, where outside investors in corporate stocks and bonds are monitoring company financial records, hasn’t seen much of a rise in debt. Instead, the rising debt levels are traceable to households and government.

Here’s another figure from the IMF authors, focused on changes in 2024 for the US, China, and for advanced and other economies around the world. In the US, public debt rose in 2024, but private debt dropped–in part because many US corporations have high profits and thus can reduce their borrowing, and perhaps also in part because rising public debt is leading to higher interest rates in a way that leads to “crowding out” of private borrowers.

But when it comes to higher debt levels in 2024, the obvious “winner” is China, with dramatic rises in both public and private debt. Indeed, given that the banking and financial system in China is heavily controlled and backstopped by its government, even the private debt listed here is in some sense “public.” Many of the causes behind China’s economic growth involve real changes, like a better-skilled workforce, improved infrastructure, capital investment, and better technology. But at least one of the causes has also involved turning the debt spigots wide open, especially through local government lending to companies. Debt can be a facilitator of growth, but excessive debt can also cripple growth. China seemed to be attempting to address its pre-existing debt problem with additional debt–a policy approach that rarely ends well.

Economics of Trade Sanctions

The exercise of US foreign policy (along with the European Union and the United Nations) has been increasingly characterized by the use (or threat) of trade sanctions. What do we know about how such sanctions work? Gabriel Felbermayr, T. Clifton Morgan, Constantinos Syropoulos, and Yoto V. Yotov review the evidence in “Economic Sanctions: Stylized Facts and Quantitative Evidence” (Annual Review of Economics, 2025, 17: 175-195). They write:

According to the newest version of the Global Sanctions Data Base (GSDB; Yalcin et al. 2024), the number of sanction programs in place globally has shot up from about 200 10 years ago to about 600 in 2023. What is more, about 12% of all existing country pairs and 27% of world trade are currently affected by some type of sanction. In their various forms, sanctions are the leading geoeconomic tool aiming to coerce foreign governments into actions that they would not undertake otherwise. …

[S]anction processes are much better understood today than they were 30 years ago. The political science community has come to accept what economists already knew (i.e., that sanctions bring substantial economic effects), and economists have come to accept what political scientists have long understood (i.e., that substantial economic costs do not always bring changes in policy). Over that same period of time, the use of sanctions has dramatically increased, and they have come to affect many more bilateral economic relationships. This is a puzzling phenomenon: If sanctions are costly and frequently fail to deliver the desired policy objectives, why have they become so endemic?

Their discussion of this question offers various hypotheses about bargaining and negotiating. But I found one of the insights especially persuasive: “[T]he primary effects on [sanction] targets are negative, large, often long-lasting, and very heterogeneous; in contrast, the corresponding effects on senders tend to be small and short-lived.” In other words, sanctions have at least a chance of producing substantial pain, at little cost to those doing the sanctioning.

For more details, a useful starting point is a two-paper symposium on “Trade Sanctions and International Relations” in the Winter 2023 issue of the Journal of Economic Perspectives (where I work as Managing Editor):

Better Permitting and More Building: Possible?

It seems natural enough, at least based on US experience, to believe that building and permitting are in a natural opposition: that is, stronger permitting means less building. Zachary Liscow has been looking for a way out of this opposition. He spells out some of his thoughts in “Reforming Permitting to Build Infrastructure” (Hutchins Center on Fiscal & Monetary Policy at Brookings, September 2025).

I confess that I was drawn to the paper, in part, by footnote 1 after the first sentence. (Have I ever written a sentence more geeky than that? Probably.) It reads: “This report builds on Zachary Liscow, ‘Getting Infrastructure Built: The Law and Economics of Permitting,’ Journal of Economic Perspectives 39, no. 1 (2025): 151–80.” As the Managing Editor of JEP, I recommend the earlier paper as well.

In particular, Liscow is concerned that the US needs more infrastructure, especially for energy and transportation, and that the existing system of permitting has evolved in such a way that it can allow a well-funded and/or noisy minority to slow or block the needed infrastructure. Liscow’s central idea is to make permitting work better, in particular by allowing open consideration and evaluation of environmental and other issues–and also allowing for adjustments to the original plans. However, once this improved permitting process has run its course, the follow-up part of Liscow’s proposal is that judges would be considerably more hesitant to intervene in the decision of the permitting process, whether that intervention would allow or block the proposed construction.

In this paper, Liscow offers a four-part plan to reform the permitting requirements created by the National Environmental Policy Act (NEPA). I’ll quote here from the summary of the paper at the Brookings website:

  1. Shifting legal power: Judicial oversight should be curtailed to reduce excessive litigation. This includes reforming the “hard look” standard courts use to assess agency actions, limiting the range of alternatives agencies must consider, shortening statutes of limitations for lawsuits, restricting standing to sue, and limiting the scope of judicial injunctions.
  2. Facilitating popular decision-making and negotiated agreements: Since the permitting process often pits government agencies against fragmented community opposition, we need new tools to foster popular decision-making and negotiated agreements that would bindingly preclude litigation. These could include mechanisms like local legislative approval, compensation through community benefit agreements, or more experimental models in which the government would designate a representative set of interest groups to negotiate on the public’s behalf.
  3. Strengthening state capacity: A well-functioning permitting regime requires well-resourced institutions. Agencies should be able to expand staffing, collect better data, coordinate their efforts, and make greater use of categorical exclusions and expedited reviews—especially for critical clean energy and transit projects.
  4. Improving public participation: A more democratic and equitable permitting process requires early and broad-based outreach. Such outreach should include not only the most vocal opponents but also previously marginalized groups and those who would stand to benefit from planned development. Experimentation with new models to ensure diverse stakeholder involvement should be encouraged.

The summary continues: “Taken together, these reforms comprise a ‘green bargain,’ speeding construction and lowering costs, allowing the construction of the infrastructure needed for the green transition, and empowering the broader public—especially lower-income communities most hurt from failing infrastructure—over narrow interests.”

It’s of course uncertain whether Liscow’s proposals will work. Would the increases in public/popular input and decision-making end up creating an even bigger obstruction to building infrastructure? Would judges actually back off, if the earlier process had taken place? What are the chances of a substantial increase in state and federal capability to oversee these kinds of changes? But it also seems clear to me that the current permitting system isn’t working well. Liscow’s proposed course of action seems better than at least one alternative, which would involve a dramatic reduction or outright scrapping of permit requirements.

Predistribution, Not Redistribution, in the Nordic Countries

Maybe it’s just because I live in Minnesota, a state where the differences between immigrants from Sweden, Norway, and Finland are still apparent in the names of towns and the surnames of people. But when I run into people who would prefer that the US distribution of income be more equal, they often point to the economies of northern Europe as a real-world example of what they have in mind.

How do these countries do it? Magne Mogstad, Kjell G. Salvanes, and Gaute Torsvik explore the evidence in “Income Inequality in the Nordic Countries: Myths, Facts, and Lessons” (Journal of Economic Literature 2025, 63:3, 791–839).

In thinking about why greater equality of income prevails in the Nordic countries, it’s useful to divide possible reasons into redistribution and predistribution. An example of redistribution, after income is received, would be public policy decisions like higher marginal tax rates on the well-off, or greater support for those less well-off, or some combination of the two. In contrast, predistribution involves affecting what income is received in the first place, before taxes and transfer payments. Examples might include minimum wage laws, greater worker representation (through unions or other mechanisms), or rules that affect the ability of top executives to be paid in the form of bonuses and stock options. Thus, Mogstad, Salvanes, and Torsvik write:

We argue that the contemporary Nordic model is built on four principal pillars: (i) significant public investment in family policies, education, and health services; (ii) coordinated wage setting within and across industries; (iii) substantial expenditure on social insurance to safeguard against income losses due to unemployment, disability, and illness; and (iv) high and progressive taxation of labor income, complemented by subsidies for services that support employment. …

A key finding is that a more equal predistribution of earnings, rather than income redistribution, is the main reason for the lower income inequality in the Nordic countries compared to the United States and the United Kingdom. While the direct effects of taxes and transfers contribute to the relatively low income inequality in the Nordic countries, the key factor is that the distribution of pretax market income, particularly labor earnings, is much more equal in the Nordics than in the United States and the United Kingdom. Another key finding is that equality in hourly pay, not work hours, is the primary explanation for why the Nordic countries have much lower inequality in labor earnings than the United States and the United Kingdom. … Quantitatively, the compression of hourly wages matters the most, explaining a large majority of the difference in earnings inequality between the Nordic countries and the United States and the United Kingdom.

The authors go through possible alternative reasons for why the four elements of the Nordic model might lead to greater equality. For example, “Nordic governments spend heavily on children and families through heavily subsidized day care, education, and health programs. Although these programs are typically universal, they could help equalize the distribution of skills and human capital if the take-up or the positive effects of the program are concentrated among children from poor or disadvantaged families. We argue that most of the available evidence suggests that this is not the key explanation for income equality in the Nordics. A substantial body of research evaluating the causal effects of day care, education, and health policies in the Nordics suggests that these policies have a relatively modest impact on inequality in skills, educational attainment, and labor market outcomes.”

It’s important here to be clear on how the minds of economists operate. The authors are not arguing in an overall sense either for or against these universal social programs. They are only making the very specific argument that the evidence about the effects of these programs does not support the claim that they are a primary cause of the greater income equality that exists in the Nordic countries. For the authors, the key difference is that those with higher education and skills are paid a substantially higher premium in the US and UK economies than in the Nordic economies, compared with those who have lower levels of education and skills.

US Income Inequality Before Taxes and (Many) Transfers: Census Data

Each year, the US Census Bureau publishes three overview reports to update the annual data on income, poverty rates, and health insurance. Here, I focus on some figures from Income in the United States: 2024, by Melissa Kollar and Zach Scherer (September 2025, P60-286). Here’s a figure showing several measures of pre-tax income inequality.

It’s perhaps useful for most readers to start at the bottom. US households as a group are divided into five parts, or quintiles. The share of income going to the top quintile has been rising for decades, with an especially sharp jump in the 1990s. The middle panel offers several ratios comparing income at the 90th, 50th, and 10th percentiles of the income distribution. The 90th percentile is rising substantially compared to the 10th percentile, and rising, but less so, compared to the 50th percentile. The ratio of the 50th to the 10th percentile hasn’t moved much. Both of these figures suggest a rise in pre-tax income concentrated at the top of the income distribution.
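To make the percentile ratios concrete, here is a small Python sketch using made-up household incomes (illustrative numbers only; the actual Census figures differ):

```python
import math

# Hypothetical household incomes in thousands of dollars (illustrative only).
incomes = sorted([12, 18, 25, 33, 41, 52, 64, 80, 105, 160])

def percentile(sorted_vals, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n)."""
    rank = math.ceil(p / 100 * len(sorted_vals))
    return sorted_vals[rank - 1]

p10, p50, p90 = (percentile(incomes, p) for p in (10, 50, 90))
print(f"90/10 ratio: {p90 / p10:.2f}")  # top of distribution vs. bottom
print(f"90/50 ratio: {p90 / p50:.2f}")  # top vs. middle
print(f"50/10 ratio: {p50 / p10:.2f}")  # middle vs. bottom
```

In the Census data, a rising 90/10 ratio alongside a flat 50/10 ratio is what points to the stretch happening at the top of the distribution.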

The top panel shows the “Gini coefficient,” which will be less intuitively clear to many readers. I offered my own explanation of it here. But basically, it is a way of measuring the extent to which an income distribution departs from perfect equality of incomes. On this scale, perfect equality of incomes has a Gini coefficient of zero, while perfect inequality of incomes–that is, all income going to one person–has a Gini coefficient of 1. Again, this measure shows a steady rise over time.
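For readers who like to see the arithmetic, here is a short Python sketch of the Gini coefficient (my own illustration, not drawn from the Census report), using the standard mean-absolute-difference formula:

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = (sum over all pairs of |x_i - x_j|) / (2 * n^2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

# Perfect equality: everyone has the same income, so G = 0.
print(gini([50, 50, 50, 50]))   # 0.0

# All income to one person: G = (n-1)/n, approaching 1 as n grows.
print(gini([0, 0, 0, 200]))     # 0.75 for a group of four
```

The two endpoint cases show why the measure runs from 0 (perfect equality) toward 1 (all income to one person); real-world distributions fall in between.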

A few thoughts here:

1) This is a measure of inequality in money income. It includes wages and salaries, rent payments, interest and dividends, as well as government cash payments like Social Security, and cash forms of public assistance. It does not include capital gains. It does not include after-tax effects, including both taxes paid and assistance to those with low incomes that happens through tax credits like the Earned Income Tax Credit. It also does not include the value of non-cash assistance programs like food stamps or Medicaid. But it seems safe to say that the rising inequality at the top of the distribution is being primarily driven by wage and salary payments at the top of the income distribution–which is a pattern of interest all by itself.

2) The steady rise in income inequality over time suggests that the driving factor is not something short-term, like the policies of a given president. I won’t try to run through a list of possible candidates for causal factors here. But it does seem worth noting some data on education and income also just released by the Census Bureau. This figure only goes back two decades. As the Census Bureau notes:

Overall, the income gap between householders with a bachelor’s degree or higher and those with a high school degree but no college widened during the 20-year period. In 2004, households headed by those with at least a bachelor’s degree had about twice as much income as those headed by someone with a high school degree but no college. By 2024, householders with a bachelor’s degree or higher had median household income 2.3 times higher than those with a high school degree. The makeup of educational attainment groups also changed over time. In recent decades, growth in the population with a bachelor’s degree or higher has been concentrated among racial and ethnic groups with historically low attainment. This growth has also disproportionately come from increasing educational attainment among women. 

One of the driving forces behind rising inequality over time lies in the race between demand for skilled labor and supply of skilled labor. If the supply isn’t keeping up with demand, then the wage gap between the two groups will tend to rise. This is surely not the entire story of rising pre-tax, pre-transfer US income inequality, but it’s a part.