Economic Uncertainty in the US Economy

It’s intuitively obvious that “uncertainty” matters in economic decision-making. If the risks of making a choice–starting a company, making an investment, buying a house–look especially big in the present, then there is reason to postpone that decision. As a result, higher uncertainty can lead to a drop in economic activity. Thus, it’s a concern that, by some measures, economic uncertainty is on the rise.

For example, here’s the Economic Policy Uncertainty Index for the United States, as reported by the FRED website run by the St. Louis Fed. You can see the recent spike on the far right.

Or here’s the Global Economic Policy Uncertainty Index:

These graphs are surely a reason for concern. Whatever the merits of a “move fast and break things” approach in certain contexts, it obviously will increase uncertainty. But how does one measure uncertainty? What is being measured here?

The US uncertainty index is not official government data. It is based on a method developed by three economists, Scott R. Baker, Nick Bloom, and Steven J. Davis. I mentioned their approach here when it was first being developed back in 2012. They combine three sources of data: “the frequency of newspaper articles that reference economic uncertainty and the role of policy; the number of federal tax code provisions that are set to expire in coming years; and the extent of disagreement among economic forecasters about future inflation and future government spending on goods and services.” The average value from 1985-2010 is arbitrarily set at 100. Thus, you can see spikes during the Great Recession, the pandemic, and now early in 2025.
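For readers curious about the mechanics, the base-period normalization is straightforward. Here is a minimal sketch in Python, using invented numbers rather than the actual index data: the raw series is rescaled so that its 1985-2010 average equals exactly 100.

```python
# Illustrative sketch of the base-period normalization used by index
# builders like Baker, Bloom, and Davis: scale a raw series so that
# its average over a chosen base period equals 100.
# The raw values below are invented for illustration only.

raw = {
    1985: 42.0, 1990: 55.0, 1995: 48.0, 2000: 60.0,
    2005: 50.0, 2010: 95.0, 2020: 160.0, 2025: 180.0,
}

# Average of the raw series over the 1985-2010 base period.
base_years = [y for y in raw if 1985 <= y <= 2010]
base_mean = sum(raw[y] for y in base_years) / len(base_years)

# Rescale so the base-period average is exactly 100.
index = {y: 100.0 * v / base_mean for y, v in raw.items()}

# By construction, the base-period values of the index average to 100.
check = sum(index[y] for y in base_years) / len(base_years)
print(round(check, 6))  # 100.0
```

The choice of base period is arbitrary, which is why only movements in the index (spikes and declines), not its level, carry meaning.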

For a discussion from Nicholas Bloom about this and other ways of measuring uncertainty, and how they relate to actual economic outcomes, a useful starting point is his article, “Fluctuations in Uncertainty,” in the Spring 2014 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

An obvious concern about measures of uncertainty based (at least partly) on news reports is that there may be a divergence between how the media is covering the news of the day and what actual investors and business people are doing and saying. At least at the moment, there appears to be such a divergence.

For example, a standard measure of uncertainty in financial markets is the CBOE Volatility Index from the Chicago Board Options Exchange, commonly called the VIX. The basic idea is to look at expectations of volatility of the stock market, by looking at the options that investors are buying on future values of the S&P stock market index. For example, if more investors are buying options to protect themselves against especially large falls in stock prices, then volatility would be up. But the VIX isn’t showing a rise in uncertainty just now.
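The VIX itself is derived from S&P 500 option prices, a calculation too involved to reproduce here. But as a rough intuition for what a volatility measure captures, here is a sketch of its simpler cousin: annualized realized volatility of daily returns, computed from an invented price path rather than real data.

```python
import math

# The VIX is an *implied* volatility, backed out of option prices.
# This sketch instead computes annualized *realized* volatility from a
# short series of daily index levels, as intuition for what a
# volatility measure captures. The price path is invented.
prices = [5000, 5010, 4980, 5025, 4990, 5040, 5015, 5060]

# Daily log returns.
returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Sample standard deviation of daily returns.
mean = sum(returns) / len(returns)
var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)

# Annualize using roughly 252 trading days; quote in percent, as the
# VIX is quoted.
annualized_vol_pct = math.sqrt(var * 252) * 100
print(f"{annualized_vol_pct:.1f}%")
```

The key difference is that realized volatility looks backward at what prices did, while the VIX looks forward at what option buyers are willing to pay to insure against.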

Another way to measure uncertainty is to ask businesses about their sales and employment forecasts 12 months in the future, and how much uncertainty they feel about those forecasts. The Federal Reserve Bank of Atlanta carries out a Survey of Business Uncertainty with this approach, and it does not show a prominent recent uptick in business uncertainty.

I don’t quite know what to make of these various measures. The Baker-Bloom-Davis measure of uncertainty has been tested and used in research, and it cannot be casually dismissed. However, other measures of uncertainty are not spiking in the same way. At the moment, it seems fair to say that there’s uncertainty about uncertainty, which isn’t the same thing as greater uncertainty, but perhaps headed in that direction.

High and Low Stress in the Workplace: A World War II Example

Will workers in an inherently high-stress environment perform better if their bosses seek to defuse that stress? Or if their bosses play up and emphasize the stress? The answer probably depends on specific contexts of the workforce and the boss. But for some evidence and a speculative answer in one context, Oded Stark offers “Stress in the air: A conjecture” (Economics and Human Biology, December 2024). From the abstract:

The 1949 study The American Soldier: Combat and Its Aftermath, Volume II, by Stouffer et al. presents detailed accounts of the attitudes of American fighter pilots toward the stress experienced by them and of the policies and practices of the American Air Force command in addressing this stress during WWII. The 2022 study “Killer incentives” by Ager et al. documents an aspect and a repercussion of the stress of German fighter pilots and can be used to identify the response to that stress by the German Air Force command during WWII. Drawing on these two studies, in this paper I construct fighter pilot stress profiles in the two air forces. The picture that emerges is that there is a stark difference between the approaches of the two commands. This diversity leads me to conjecture that the American Air Force command explicitly sought to forestall and curtail fighter pilots’ stress, whereas the German Air Force command implicitly cultivated and engineered fighter pilots’ stress.

Stark points out that the American Air Force command was very aware of the stress experienced by fighter pilots, and of how performance tended to diminish with additional missions. It tried to address the stress in various ways. One approach was to set a limit: “The limit to the tour of duty of fighter pilots was 300 hours of combat flying, which was typically achieved in six or seven months of active combat duty.” This limit was pre-announced and socially sanctioned. In addition, Stark quotes Stouffer about ongoing evaluation of fighter pilots: “All fighter pilots were systematically examined throughout the entire period that they were on operational duty; as soon as any … anxiety reaction to combat flying was detected, the man was immediately removed from combat duty as a fighter pilot”–although those removed from combat duty could be reassigned to less risky flights. Finally, American flight crews were rewarded in terms of the total missions they completed, and medals were typically awarded after the tour of duty was complete, not on the basis of whether a particular mission had been especially risky.

The German Air Command during World War II took a different approach. It encouraged rivalry between fighter pilots, and gave decorations and promotions based in part on whether a mission was especially risky. Stark adds: “As noted many times in the Stouffer et al. study, the American army was well aware of and sympathetic to the problem of psychiatric combat breakdown (by 1943 providing treatment for psychiatric casualties, either at forward stations near the front or in dedicated hospitals closer to the rear), whereas the German army was generally hostile to the idea of psychiatric breakdown and those who were considered guilty of malingering or cowardice were not treated well.”

One can easily hypothesize reasons why organizations might take different approaches to stress management. In certain contexts (financial markets?), some kinds of risk-taking might be especially remunerative. In wartime, perhaps an aggressor has reason to encourage risk-taking, while the party fighting back (and expecting a surge of wartime production to arrive) will want to deal with stress differently. Even before the war started, the culture that generated US fighter pilots in World War II might have been quite different from the culture that generated German fighter pilots.

Thus, Stark’s point is not that there is one best approach that organizations should follow for motivating workers in stressful environments. But it can be useful to think explicitly about whether a given work environment seeks to be stress-reducing or stress-increasing–and what tradeoffs can arise.

Debt Risks Rising for Low- and Middle-income Countries

One of the puzzles of international macroeconomics is that if capital has diminishing returns, then it seems plausible that capital should flow from capital-rich high-income countries to capital-poor low-income countries. After all, the potential returns to capital investment should presumably be high in a capital-poor environment.

However, for some decades now, net capital inflows have been coming into the US economy. Moreover, as the World Bank International Debt Report 2024 points out, since 2022, the interest payments that low- and middle-income countries are making on their external (that is, outside their own country) debts are greater than the amount of new debt capital flowing in. Indermit Gill describes it this way in the “Foreword” of the report:

Since 2022, foreign private creditors have extracted nearly US$141 billion more in debt service payments from public sector borrowers in developing economies than they disbursed in new financing. As this report documents, that withdrawal has upended the financing landscape for development. For two years in a row now, the external creditors of developing economies have been pulling out more than they have been putting in—with one striking exception. The World Bank and other multilateral institutions pumped in nearly US$85 billion more in 2022 and 2023 than they collected in debt service payments.

That has thrust some multilateral institutions into a role they were never designed to play—as lenders of last resort, deploying scarce long-term development finance to compensate for the exit of other creditors. Last year, multilateral institutions accounted for about 20 percent of the long-term external debt stock of developing economies, five points higher than in 2019. … In 2023, the World Bank accounted for fully a third of the overall net debt inflows going into IDA-eligible countries—US$16.7 billion, more than three times the volume a decade ago.

That reflects a broken financing system. Capital—both public and private—is essential for development. Long-term progress will depend to an important degree on restarting the capital flows that most developing countries enjoyed in the first decade of this century. But the risk-reward balance cannot be allowed to remain as lopsided as it is today, with multilateral institutions and government creditors bearing nearly all the risk and private creditors reaping nearly all the rewards.

The report provides a wealth of underlying detail, but here’s one point that stuck with me. If one looks at the ratio of external debt to gross national income, it’s not rising a lot for these low- and middle-income countries in the last couple of years. (IDA refers to International Development Association: it’s the part of the World Bank focused on lending to the lowest-income countries.)

Instead, there was a gradual but substantial run-up of debt for these countries over the last decade or so. A key fact here is that low-income countries typically cannot borrow in their own currency: instead, they commonly borrow in US dollars or sometimes in euros. In addition, they often borrow at adjustable interest rates, while a large developed economy like the United States can typically borrow at fixed nominal interest rates. Thus, when it comes to repaying external debt, low-income countries are vulnerable to higher interest rates and to accompanying shifts in exchange rates. The report notes:

Total debt servicing costs (principal plus interest payments) of LMICs [low-and middle-income countries] reached an all-time high of US$1.4 trillion in 2023. For LMICs excluding China, debt servicing costs climbed to a record of US$971.1 billion in 2023, an increase of 19.7 percent over the previous year and almost double the amounts seen a decade ago. In 2023, LMICs faced historically challenging debt service burdens due to high debt levels, interest rates that hit a two-decade high, and depreciation of local currencies against a strong US dollar. The tightening of monetary policy in the United States in 2022 affected exchange rate movements and drove an increase in the value of the US dollar relative to other currencies, which persisted in 2023 and made repayment of non–local currency debt more costly for LMICs as their local currencies depreciated.

In short, interest payments for the US government are rising as new borrowing at higher interest rates plays a greater role compared to earlier debt accumulated at lower interest rates. But low- and middle-income countries face a double-whammy of their currencies being less valuable on exchange rate markets and also a share of their debts adjusting automatically to higher global interest rates. When those factors hit on top of the greater debt accumulated in the last decade, difficulties can arise. Some countries have already managed to reschedule their debts, but I suspect those are the easy cases–and the hard cases will be coming in the next few years.
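To see the double-whammy in stylized numbers (my own invented illustration, not figures from the report): suppose a country owes US$100 million externally at an adjustable rate, global interest rates double, and its currency depreciates against the dollar.

```python
# A stylized "double whammy" for a country that borrowed US$100 million
# externally at an adjustable rate. All numbers are invented for
# illustration, not drawn from the World Bank report.

debt_usd = 100_000_000

# Year 1: 3% adjustable rate, exchange rate of 10 local units per dollar.
service_y1_usd = debt_usd * 0.03
service_y1_local = service_y1_usd * 10   # 30 million local units

# Year 2: global rates push the adjustable rate to 6%, and the local
# currency depreciates to 13 per dollar.
service_y2_usd = debt_usd * 0.06
service_y2_local = service_y2_usd * 13   # 78 million local units

# The local-currency cost of servicing the same debt rises 160%.
increase = service_y2_local / service_y1_local - 1
print(f"{increase:.0%}")  # 160%
```

Neither shock alone is catastrophic; it is the multiplication of the two, on top of a decade of accumulated debt, that creates the squeeze.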

The 2023 Merger Guidelines Will Remain: What Does That Mean?

Under current law, any US companies considering a merger or acquisition that is above $125 million in size must first report it to the government. The most recent data for 2023 says that 1,805 such transactions were reported in 2023, which was a relatively low number for recent years. In 2021 and 2022, for example, more than 3,000 proposed transactions were reported.

Government antitrust enforcers at the Federal Trade Commission and the Antitrust Division of the US Department of Justice don’t have the time or people to scrutinize all of these proposed transactions deeply. In 2023, for example, the FTC challenged 16 deals and the DoJ challenged another 12; in total, that’s less than 2% of the proposed transactions. It’s appropriate that this number be relatively low: after all, the role of the government authorities isn’t to decide whether the transaction is likely to be profitable (we can assume the firms involved have analyzed that question), nor whether it is “socially desirable” in some broad and nebulous sense (it’s a fairly free-market economy, after all), but only whether the transaction is likely to reduce competition in the economy.

Going back to 1968, the antitrust authorities at the FTC and the DoJ have published a set of Merger Guidelines to describe which transactions are likely to get closest scrutiny, and then updated these guidelines once or twice a decade. It’s important to recognize that these Guidelines are not legally binding; actual antitrust law is the legislation passed by Congress and the case law established by court precedents. Thus, the evolving Merger Guidelines have always been a combination of summarizing what the laws and the courts actually say, and what the enforcement agencies think the courts should say.

When President Biden took office in 2021, he appointed people to the key antitrust policy-making positions at FTC and DoJ who were strongly on the record that the Merger Guidelines of 1968 were largely correct, but that the updates starting in 1982 were on the wrong track, and continued on the wrong track with the updates of 1984, 1992, 1997, 2010, and 2020. As you might expect, those who had been involved in the updates of Merger Guidelines through the Reagan, Bush, Clinton, Obama, and Trump presidencies were mostly unimpressed.

A new set of Merger Guidelines was published in December 2023. I won’t try to summarize all the pros and cons here. But as a rough summary, the prevailing bipartisan perspective had been that the purpose of competition is to make sure that consumers benefit. From this view, the goal of antitrust authorities was to evaluate if a given merger would benefit consumers through lower prices and/or quality improvements. The critics of this position in the Biden administration argued that the law required preservation of “competition” itself–which could involve blocking mergers and acquisitions just because they increased the size of existing firms. One justification for this view is that it takes a longer view of competition: if smaller firms are not absorbed into larger ones, some of them may later develop into full-fledged competitors. Another justification is that large firms (and their rich owners) should be viewed as politically dangerous because of their power to influence government.

Here, I’ll just add three thoughts on the situation.

First, one possibility was that the 2023 Merger Guidelines would turn out to be extremely short-lived, because they would be overturned by the incoming Trump administration. However, Andrew N. Ferguson, the new head of the Federal Trade Commission, has written a “Memorandum” saying that the 2023 Merger Guidelines will remain in place. Ferguson notes:

Insofar as there is any ambiguity, let me be clear: the FTC’s and DOJ’s joint 2023 Merger Guidelines are in effect and are the framework for this agency’s merger review analysis. … Stability across administrations of both parties has thus been the name of the game. President Clinton retained the 1992 Guidelines promulgated by the George H.W. Bush Administration until 1997. President George W. Bush retained the 1997 Guidelines unchanged. And President Trump retained unchanged the 2010 Guidelines issued by the Obama Administration. … I think the clear lesson of history is that we should prize stability and disfavor wholesale rescission. … A recriminatory cycle of partisan rescissions will not help the economy. If merger guidelines change with every new administration, they will become largely worthless to businesses and the courts. No business can plan for the future on the basis of guidelines they know are one election away from rescission, and no court will rely on guidance that is so obviously partisan.

Stability is also good for the enforcement agencies. The wholesale rescission and reworking of guidelines is time consuming and expensive. We should undertake this process sparingly. We have limited resources to patrol the beat and constant turnover undermines agency credibility. By and large, the 2023 Merger Guidelines are a restatement of prior iterations of the guidelines, and a reflection of what can be found in case law. That is good reason to retain them. That is not to say that the 2023 Merger Guidelines are perfect. No guidelines are perfect. If experience teaches that revisions are appropriate, then the agencies can consider revisions as they have done in the past. This iterative and transparent revision process promotes the stability that the guidelines need to succeed. For the foreseeable future, and until any such revisions are adopted, the FTC will use the 2023 Merger Guidelines as the framework to do our important merger-enforcement work.

The second question is whether or to what extent the Biden antitrust authorities have “won” in overturning the earlier doctrines and transforming merger guidelines back to their version of the good old days. The answer here is unclear. The Winter 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor) has a three-paper symposium on “The 2023 Merger Guidelines and Beyond.”

I’ll focus here on the essay by Francis. He sketches the evolution of the merger guidelines over time. He points out that the Biden antitrust appointees argued vehemently that they intended to overturn the existing guidelines, with comments like “the era of lax enforcement is over, and the new era of vigorous and effective antitrust law enforcement has begun” and promised to reverse “decades of lax . . . enforcement” and “broad government inaction.” They strongly criticized the idea that antitrust decisions should focus on consumer welfare, saying that while consumer harm “might matter in some contexts,” “to say it always matters, and is indeed the lodestone of the law, is . . . unsupportable and can . . . border on the ridiculous.”

Francis argues that the “draft” version of the 2023 Merger Guidelines released in June 2023 was fairly radical. It eliminated language from earlier guidelines about whether firms had “market power” to raise wages and the importance of consumer welfare. Instead, it focused on a set of 13 rules phrased in terms of “Mergers Should Not…” For example, as Francis notes, mergers should not “significantly increase concentration in highly concentrated markets,” “eliminate a potential entrant in a concentrated market,” or “entrench or extend a dominant position.” None of these 13 rules in the draft version mentioned whether the actions by a firm might benefit consumers or not.

The draft Merger Guidelines received 3,000 public comments. Many of them focused on the issue of whether antitrust should focus on consumer welfare: some in favor, some against. Francis writes:

Above all, the post-draft debate centered on the draft’s relationship with welfarism. The draft had sketched a merger policy that, to a significant extent, would diverge from modern welfarist antitrust. Some commenters demanded that the agencies go further and entirely reject the welfarist paradigm; others demanded a clear recommitment to it. The agencies were at a crossroads. … On my reading, the final document effectively declines to make the choice between welfarism and non-welfarism that commenters had demanded. Instead, it opts for ambiguity: it can plausibly bear both a welfarist and a non-welfarist reading. Like a Rorschach test, the meaning of the 2023 Merger Guidelines depends upon what one expects, hopes, or fears to find there. Above all, it invites, but does not answer, a basic question: Is a tendency to harm consumers (or other trading partners) necessary for condemnation under the 2023 guidelines?

Thus, Francis argues that the meaning of the new guidelines will only become apparent as they are asserted by government antitrust authorities and as those arguments are accepted or rejected by courts.

Francis makes the interesting point that in the 2023 Merger Guidelines, and indeed going back more than a century in antitrust law, “competition” has served as a term of compromise, because it allows alternative interpretations. Most Americans favor “competition” between firms as a general concept, rather than monopoly. But imagine that a town starts with a bunch of locally-owned restaurants, grocery stores, and drug stores. Then a bunch of national chains and superstores move in. A number of local customers turn to the new options, where prices are often lower, and some of the locally-owned stores go out of business. Is this a reduction in competition, because competitors have been driven out of business? Or an increase in competition that benefits the consumers who choose the new options? What if online firms with home delivery drive local merchants and malls into financial distress? What if online streaming of movies and shows drives movie theaters into financial distress? Many people are in favor of competition when it offers them new options, but if and when some of the competitors lose out, they become dubious.

I would add that Americans have a similar ambivalence about large firms. On one hand, many people look back with some fondness on the days when giant American firms making cars or steel or chemicals had a dominant role in the US and global economy. The US political system becomes seriously concerned if US producers in a certain industry don’t seem dominant enough, and there is often political support for putting tariffs on foreign producers and subsidizing domestic firms (as in semiconductors) to close the gap. But in the areas where the US does in fact have the dominant firms, like many of its big tech companies, along with firms like Walmart, ExxonMobil, and CVS Health, we then worry about their size and argue that antitrust authorities should take a close look. In short, the lack of large and dominant US firms in certain industries is a public concern needing a policy response, and the presence of large and dominant US firms in certain industries is also a public concern needing a policy response.

Finally, for those who would like a range of expert opinions about the 2023 Merger Guidelines, a useful starting point is the 11-paper symposium organized by the Review of Industrial Organization and published in its August 2024 issue.

I’ve commented on this blog from time to time about the Biden antitrust team, the new merger guidelines, some current antitrust cases, and the historical changes in merger law over time.

Interview with Alan Auerbach: Federal Debt and Social Security

David A. Price of the Federal Reserve Bank of Richmond interviews Alan Auerbach “On the federal debt, the Social Security trust fund, and how Uncle Sam discourages seniors from working” (Econ Focus, First/Second Quarter 2025). Here are a few of the points that caught my eye.

The US Has Lost its Debt-Restraint Religion

It’s the case, as I’ve said in recent years, that the U.S. doesn’t pay any attention to the national debt. That was not true if you go back, say, 20, 25 years or more. If you look, for example, during the Reagan administration as well as the first Bush and Clinton administrations, it was the case that when debt or projected deficits went up, government undertook actions to reduce them, either by increasing taxes or by cutting spending.

That ended sometime in the early 2000s. In the last 20 years or so, it’s just not there. If we went back to the way we were behaving then, the kinds of shocks that are going to keep hitting the budget, either because of interest rates or pandemics or financial crises or other things, could be dealt with by those kinds of government reactions.

So it’s both good news and bad news. It’s good news in the sense that we’ve been there before. It’s not as though we have to undertake an approach that’s never been contemplated or practiced. But on the other hand, we lost religion sometime in the last 20 to 25 years. And it’s not exactly clear how we’re going to get that back because we lost it in a bipartisan way. There used to be bigger constituencies in Congress and in the White House for dealing with national debt, at least when problems became more apparent.

How Inflation, and Limited Inflation Adjustments, Reduce US Deficits

[T]here are different ways in which inflation interacts with the fiscal system to affect the taxes that people pay and the benefits that they receive. It could help them or hurt them; it mostly hurts them.

Some things are not indexed for inflation at all. … [T]he threshold over which you’re taxed on your Social Security benefits … has been fixed in nominal terms since it was implemented. That means that the more inflation we have, the more people are going to be subject to tax on some or all of their Social Security benefits.

Where we do have indexing for a lot of elements of the tax system and benefits, there are delays before the system catches up. For example, once you’re receiving Social Security, your benefits go up every year because of inflation. On the tax side, the federal tax brackets are indexed for inflation so that if your income goes up by 10 percent because inflation is 10 percent, it’s not going to change your bracket because the bracket’s indexed for inflation. However, there’s a delay in the indexing. What that means is that if there’s a sudden surge in inflation, the first year or so is going to happen before the brackets and the benefits start reacting to it. For example, if we went from an inflation rate of zero to an inflation rate of 10 percent on a permanent basis, that would cause a 10 percent decline in people’s Social Security benefits because it would happen once and then we’d be forever one year behind.

The final thing is that capital income — interest, capital gains, things like that — are mismeasured because of inflation. For example, if I buy an asset for $100 and the price level doubles over the period that I hold it, and I sell the asset for $200, my real gain is zero. But I’d be taxable on a gain of $100, because we don’t index capital gains for inflation. We don’t index interest income. If the inflation rate is 4 percent and I’m getting 4 percent nominal interest, my real interest is zero, yet I’m still taxable on the 4 percent.

So through lack of indexing, delayed indexing, mismeasurement of capital income, as well as similar effects on the benefits side in terms of delayed indexing, people in general — not every person — have a reduction in resources as a result of inflation. In one sense, that makes inflation a more effective tool for dealing with the deficit. … I don’t think it’s a particularly attractive way to do it because it’s quite arbitrary. If you look at the distribution of effects, it varies a lot across households depending on the type of income they have. We wouldn’t say it’s very well designed.
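Auerbach’s two examples of mismeasured capital income can be worked out in a few lines. The dollar figures follow his hypotheticals; the 25 percent tax rate is my own added assumption for illustration.

```python
# Auerbach's two capital-income examples, worked out. The asset prices
# and rates match the hypotheticals in the interview; the 25% tax rate
# is an added assumption for illustration.

TAX_RATE = 0.25

# Capital gains: buy at $100, the price level doubles, sell at $200.
buy, sell, price_level_ratio = 100, 200, 2.0
nominal_gain = sell - buy                       # $100 nominal gain
real_gain = sell / price_level_ratio - buy      # $0 in real terms
tax_on_nominal_gain = TAX_RATE * nominal_gain   # $25 tax on a zero real gain

# Interest: 4% nominal interest when inflation is also 4%.
principal, nominal_rate, inflation = 1000, 0.04, 0.04
nominal_interest = principal * nominal_rate     # $40 nominal interest
real_interest = principal * ((1 + nominal_rate) / (1 + inflation) - 1)  # ~$0
tax_on_interest = TAX_RATE * nominal_interest   # $10 tax, again on ~zero real income

print(tax_on_nominal_gain, round(real_gain, 2))  # 25.0 0.0
```

In both cases the real return is zero, yet a positive tax is owed, which is precisely the sense in which inflation quietly raises effective tax rates on capital income.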

Might Social Security Switch to Relying on General Tax Revenues?

It’s true that in 1983, which was the last time the Social Security trust fund was nearing exhaustion, we had the Greenspan Commission that recommended changes in Social Security, which were then adopted, which raised the retirement age very slowly and increased payroll taxes. That put the Social Security system on a better financial footing for many decades.

That could happen again. But it could also be the case that Congress and the government don’t have the appetite for providing this kind of bad news to people in the Social Security system. They could just say, well, we’ll use general revenue funding to cover the shortfalls of Social Security. We already do that for Medicare Part B, the health insurance, and Medicare Part D, the drug benefit. They are not self-sustaining; we have premiums paying for a small part of the benefits and the rest comes from general revenues.

Some of the traditional supporters of Social Security say it’s good to have it be a self-financing system because it makes people feel that they have a stake in it when they’re paying their payroll taxes and so forth. But if the choice of the government is to cut benefits, raise payroll taxes, or use general revenue funding, given their behavior in recent years, I’m fearful that they’ll choose general revenue funding and just kick the can down the road.

Re-Interpreting US Law for Environmental Permitting

The National Environmental Policy Act, commonly known as NEPA, became law in 1970. The law itself remains intact, but the way in which the law is implemented is undergoing major changes.

The back-story goes like this. In 1977, President Jimmy Carter signed an Executive Order that gave the Council on Environmental Quality (CEQ)–which is an office in the White House administrative structure–the authority to interpret the provisions of the law. Thus, while the law requires environmental review and permitting only for any “major federal action” with a “significant effect,” under the CEQ interpretations this has turned into requiring environmental review for pretty much any action with even an arguable environmental effect. Over the decades, courts have often treated the CEQ interpretations as functionally the same as the law itself.

This arrangement began to come apart last November, when the US Court of Appeals held in Marin Audubon Society v. Federal Aviation Administration that the CEQ lacked authority to interpret the provisions of NEPA. The admittedly thin legal line here seems to be that CEQ could issue “guidelines,” which did not have the force of law, but did not have the power to issue “regulations,” which would have the force of law. I’ll add that this distinction is not unusual: for economists, a better-known example might be the Merger Guidelines issued by the Federal Trade Commission and the Antitrust Division of the US Department of Justice. Courts are welcome to read the Guidelines for input, but courts are not bound by how the Guidelines have chosen to interpret the law.

A presidential Executive Order is not a law, and can be overridden by any future president. Thus, emboldened by the Court of Appeals decision, President Trump signed his own Executive Order revoking Carter’s 1977 order. The order essentially sets aside all previous CEQ rulemaking with regard to NEPA, and thus also calls into question all court decisions since 1977 that were based on the CEQ rules. Instead, in the next 30 days, all federal agencies with plans affected by NEPA rules are required to “develop and begin implementing action plans to suspend, revise, or rescind all agency actions identified as unduly burdensome.” The CEQ has now published an Interim Final Rule to implement Trump’s Executive Order, which is open for public comment.

Setting aside the legalities, what policy choices are on the table here? It seems clear that the Trump administration would like to roll back the reach of NEPA, so that it is focused on a smaller group of major actions, with less need for agencies to submit (and to defend in court) Environmental Impact Statements. For those with environmentalist leanings, this general stance may seem obviously regressive. But in a country where green energy projects and infrastructure projects can be held up for years at a time by permitting requirements, the issues are not clear-cut. Zachary Liscow lays it out in “Getting Infrastructure Built: The Law and Economics of Permitting” (Journal of Economic Perspectives, Winter 2025, pp. 151-180).

(Full disclosure: I’m Managing Editor of the JEP, and thus predisposed to find the articles of particular interest.)

As Liscow points out, NEPA and other rules requiring environmental permitting emerged in the 1960s, in response to examples where government had approved and facilitated large infrastructure and energy projects with little or no public input, which often involved imposing costs on those with little political power. However, under the environmental permitting regime as it has evolved, the blocking power of even small groups has been magnified. As Liscow writes: “In the 1960s, the United States did big things with little public consultation. Now, even smaller things can be held up by small opposition groups.”

If you support, say, substantial building of low-carbon energy projects along with the transmission lines to get that energy to market, or substantial building of mass transit projects, then it needs to be a concern that anyone who can pay to hire lawyers can drive up the costs of such projects, delay them, and even block them entirely. By Liscow’s calculation, the average environmental impact statement in 2022 took 4.2 years to prepare.

Is there a way to strike a more functional balance between concern over environmental protection and the importance of public feedback, on the one hand, and allowing projects that have large positive expected value to proceed somewhat expeditiously, on the other? As Liscow points out, the US has been deciding these questions through a process of “adversarial legalism,” in which opposing parties slug it out in the courts. Some obvious difficulties with this approach are that it can involve severe delays, and that those with the most lawyers may have an outsized chance of winning.

Liscow describes how other countries take different approaches. For example, many countries have a mechanism for producing a long-term plan for infrastructure, and once the plan has been debated and approved, it becomes much harder for anyone to file a lawsuit to block it. One study compared these plans across countries–except that the United States was not included in the study, because it doesn’t have a long-term infrastructure plan. Energy companies in Europe are required to cooperate with national infrastructure plans. In Canada, province-level governments regulate energy companies inside the province, but the federal government controls decisions about energy infrastructure between provinces.

The US system spends much, much more on lawyers to draw up rules and to argue them in court than it spends on planners who would actually get down into the details, obtain public feedback, and think about how projects might be adjusted to keep their benefits but minimize their costs. Liscow’s article is full of nuggets like this:

In Italy, a country with low transit construction costs, Milan’s transit agency has built up so much planning and design capacity that it consults not only on other Italian projects, but also projects abroad (Goldwyn et al. 2023). In contrast, when Boston began building its Green Line mass transit extension, its transit agency had only four to six full-time employees “managing the largest capital project in the agency’s history” (Goldwyn et al. 2023, p. 24), leading to poor design choices, reliance on consultants, and high costs. Similarly, “[i]n New York, where consultants largely designed and managed construction for Phase 1 of the Second Avenue Subway, the project management and design contracts were 21 percent of construction costs,” whereas in countries with more in-house capacity for planning, like France, “the typical range is 5–10%, with 7–8% most common, and in Italy and Istanbul, it is typically 10%” (Goldwyn et al. 2023, p. 25).

Liscow raises the idea of a “green bargain.” The notion is that the US would seek a dramatic increase in resources for planning and public participation in large infrastructure projects. The counterbalance would be that when a project was completed, courts would then be generally disposed to accept the outcome and to let the project proceed without further investigation. As Liscow writes:

After all, broadening public participation and moving it upstream could well be a better way of generating outcomes that reflect public preferences than a series of not-in-my-backyard lawsuits brought by small special-interest groups. There is little “democratic” about a small handful of people using the courts to hold up projects that have been thoroughly evaluated, with issues widely aired.

In the present setting, the steps being taken by the Trump administration in response to NEPA and CEQ seem to be all about loosening constraints, and not about improved planning and public participation. But it’s important to remember that the problem to which the Trump administration is responding–the way in which the US method of environmental permitting delays and drives up costs of transportation and energy infrastructure–is a real problem. Liscow offers a vision for how to address the problem, with a number of detailed policy suggestions, while balancing all the values at stake.

Europe’s Internal Trade Barriers: A Long Way From a Single Market

One remarkable advantage for the US economy is the large size of its internal market. US firms can make investments in new goods and services knowing that they can potentially sell, with only a few limitations rooted in state laws, to a large number of customers across a broad area.

Indeed, the openness of the US internal market is rooted in the US Constitution. Article 1, Section 8, lists the powers of Congress, and the third clause gives Congress the power to “regulate Commerce … among the several States.” By giving that power to Congress and the federal government, the Constitution blocked states from setting up barriers to trade with each other–for example, although US states can pass laws that may create indirect costs for companies buying and selling across state lines, they can’t impose tariffs or quotas on goods and services imported from other US states.

A primary goal in creating the European Union was to replicate this “single market,” and thus to give European firms the incentives for innovation, investment, and expansion that result from wide-open access to a large internal market. But according to the IMF Regional Economic Outlook report on “Europe: A Recovery Short of Europe’s Full Potential,” the EU “single market” project has a long way to go (October 2024). Here’s a sample (references to text “boxes” have been cut):

Europe’s productivity gap with the global frontier can be traced back to a more limited market size, capital market constraints, skilled labor shortages, and stalled structural reforms. Firm-data analysis shows that Europe’s segmented goods and services markets are keeping businesses from becoming larger, spending more on R&D, and exploiting economies of scale. Moreover, fragmented capital markets mean that firms do not draw enough on equity financing. As a result, business dynamics are dampened especially in the services sector where start-ups tend to operate with large intangible capital. …

There is widespread agreement on the sources of Europe’s growth weakness. Recently released expert studies (Letta 2024; Draghi 2024) come to a similar conclusion that Europe’s low productivity is related to lack of market depth and scale. Both reports link Europe’s lack of competitiveness to Europe’s incomplete single market in the trade of goods, services, and factors of production (capital, labor). Remaining barriers are considered to be still substantial and have resulted in less investment and innovation than necessary to accelerate growth and productivity to levels seen in other advanced regions.

A deeper and larger single market offers the potential for a resurgence in productivity growth. European integration delivered tangible growth benefits in the past and could do so again. Following the two EU enlargement waves in 1995 and 2004, EU member countries began trading more with each other (Figure 15, panel 1). As a consequence, in the decade following accession, regions in new member states saw on average GDP per capita rise by more than 30 percent relative to comparable non-accession regions and existing member states gained too.

It is important to note that regions within Europe that were better integrated through value chains and transport networks registered higher gains. However, value chain integration has stalled since the last decade … and substantial barriers to goods and trade flows remain … New IMF analysis finds that in 2020 trade costs within Europe were equivalent to a sizable ad-valorem tariff of 44 percent for the average manufacturing sector compared to 15 percent between US states, and as high as 110 percent in the case of services sectors …

Here is a figure to which the IMF is referring. The darker blue line shows intra-EU trade in goods; the lighter blue line shows intra-EU trade in services. As you can see, intra-EU trade in goods had risen substantially up to about 2008, but has only crept a little further since. Intra-EU trade in services remains less than 10% of their value, 30 years after the birth of the “single market” initiative back in 1993.

Again, the IMF estimates that remaining barriers to trade within the countries of the EU are equivalent to a 44% tariff on trade in goods, and a 110% tariff on trade in services. These high tariffs are bad for economic growth in Europe, just as similar state-level tariffs would be bad for US growth. Of course, the underlying economic reasoning also explains why a global outbreak of tariffs would be disadvantageous for both US and global growth.

Costs of Pennies and Nickels

The US Mint, in its 2024 Annual Report, includes a table showing the cost of producing pennies and nickels. As the bottom row shows, the total cost of making a penny is now 3.69 cents; the total cost of making a nickel is 13.78 cents. It is apparently the 19th consecutive year that the cost of making pennies and nickels has been above their face value.
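The loss per coin is simple arithmetic: production cost minus face value. Here is a minimal sketch in Python, using the Mint’s unit costs quoted above (the function and variable names are my own illustration, not anything from the Mint’s report):

```python
# Loss the Mint incurs on each coin: production cost minus face value.
# The unit costs (3.69 and 13.78 cents) are the 2024 Annual Report figures
# cited above; the code itself is just illustrative arithmetic.

def loss_per_coin(unit_cost_cents: float, face_value_cents: float) -> float:
    """Cents lost for each coin produced."""
    return unit_cost_cents - face_value_cents

penny_loss = loss_per_coin(3.69, 1.0)    # about 2.69 cents lost per penny
nickel_loss = loss_per_coin(13.78, 5.0)  # about 8.78 cents lost per nickel
```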

There’s been a long-standing concern that if pennies and nickels were eliminated, then prices would jump up to the nearest multiple of 10 cents. But as inflation has gradually raised price levels, the effect of such a change–if it was ever all that likely or large to begin with–doesn’t seem major. Also, with cash in rapid decline as a share of all transactions and credit cards rising, it’s not clear why vendors would need to raise prices at all (except perhaps for those paying cash?).

What is perhaps odd here is that during the pandemic in 2021, there was a “coin shortage.” What seemed to be happening was that while people were continuing to receive coins as change for cash purchases, they were not recirculating those coins for future purchases. A lot of people (like me) seem to have a coin jar accumulating loose change on a shelf somewhere. Even during the coin shortage, the number of coins officially in circulation, including my coin jar, remained fairly high. But in 2024, the number of coins in circulation plummeted. My guess is that a change in spending patterns, with buyers using credit and debit cards for a larger share of all purchases, means that economic transactions can now happen just fine with a smaller number of coins.

It seems as if, whether or not the US government makes an official decision to phase out pennies and nickels, the actual use of those coins is in decline.

A Meditation: An Academic Journal Goes Paperless

The American Economic Association publishes a suite of 10 academic journals: one of them is the Journal of Economic Perspectives, where I have worked as Managing Editor for 39 years. Starting in February 2025, none of these journals is going to be printed on paper. For more than a decade now, the JEP has been freely available on-line, including the current issue and all the archives. Articles in the other AEA journals require access through an AEA membership or via a library subscription.

I remember when we were designing the first physical issue of JEP back in 1986. In those pre-Internet days, one issue was what kind of paper to use: it went without saying that we would use library-grade paper that would last more than 100 years, but we still needed to think about the thickness and whiteness of the paper. Those discussions feel weirdly anachronistic now.

Back in the 1980s, the JEP sought to take advantage of the newfangled personal computer technology. We were one of the first journals in economics (maybe the first?) to require all authors to send us their first draft on a floppy disk, which was then a 5 1/4-inch square, fragile enough that when sending it through the mail in those pre-Internet days, you first slipped the disk into a cardboard envelope so it wouldn’t be injured in transit. I would edit the paper on the actual computer file. We would send the edited file back to the author on another disk for further revisions. We spent several thousand dollars a year on overnight mail, shipping physical disks, and we shipped physical galley proofs on paper between typesetters, authors, and the JEP editorial offices as well. But with the arrival of email attachments, we haven’t spent a dime on overnight mail for years now.

In a number of ways, this shift to a paperless journal is not only efficient and effective, but even poetic. The JEP started with the idea that we didn’t need to ship paper back and forth as part of receiving articles, doing comments and editing, and getting revisions. Now, we don’t need to ship paper for readers to see the finished journal, either.

Back in those early issues of JEP in the late 1980s, the printer created about 28,000 copies of each issue. It was roughly a tractor-trailer’s worth of paper. But over the years, fewer and fewer readers wanted to pay to receive paper copies. In the last few years, it turns out that even most academic libraries don’t want the paper copies, either. For the final paper issue in Fall 2024, we were printing only a little more than 2,000 copies.

Of course, accessing the articles in the JEP via the internet is vastly easier than having to track down one of those 28,000 paper copies from the old days. A rough count suggests that the average JEP article is downloaded in PDF form more than 60,000 times. Those who want to download an entire issue can do so as well.

The cost of producing the journal is lower, too. When looking at the JEP annual budget up into the early 2000s, it was a reasonable rule-of-thumb that printing and postage costs were about half of the total. Those costs have been diminishing over time, and now will vanish. The total budget for the JEP is published each year in the “Report of the Treasurer” of the American Economic Association. Total cost of the JEP was projected in May 2024 at $772,000 (final audited figures will be available later in 2025). Back in 2009, the total cost of the journal was projected at $882,000.

The comparison of JEP costs between 2009 and the present isn’t quite apples-to-apples: for example, the allocation of some internal costs of the AEA production process is now treated differently. But that said, the general increases in salaries and other costs for those working at the journal over the last 15 years have been more than offset by the decline in printing and mailing costs. Thus, the JEP is now both far more easily and cheaply accessible, because it’s freely online, and also less expensive to produce.

But as the kind of person who squints up at every sunny sky, wondering about storms, I find myself pondering two potential costs that these calculations may not be taking into account.

In the short-run, there is a shift from physical to digital proximity. When I was on the high school debate team, a half-century ago, we would go to the University of Minnesota libraries after school to do research. For me, a common pattern was to find the book or report or article that I was looking for–but then also to find that the other books on the same shelf, or other reports in the same series, or other articles in that issue or in a more recent issue were even more useful to me. (Yes, I was definitely a fun teenager.) There was a physical serendipity to research and to learning.

It’s possible to mimic this physical closeness with online tools. I’ve seen a “virtual library shelf” where you can see the books that would have been shelved side-by-side. If you look up a report or an article, it’s usually straightforward to look up the series of reports, or the other articles in the issue, or other issues of a publication. Perhaps the 21st century version of teenage (and adult) me will both search for a document and skim the “neighboring” ones.

But at least in the current state of technology, skimming seems harder to me. Clicking through neighboring volumes, and chapters in those volumes, remains harder for me than yanking volumes off shelves. Also, when I have a physical copy of a journal or report or book, it sits on my desk for a few days (or more!). I’m reminded of what’s in the issue multiple times as I see it, and reminded again even when I decide to throw the physical copy away. As the journal goes paperless, readers no longer have a paper version sitting in their in-basket, or their desk, or a coffee-table in the economics department lounge. Instead, readers receive an email that a new issue has been published, mixed in with the deluge of other emails that arrive each day, and soon buried under tomorrow’s deluge of emails.

Another way to phrase this tradeoff is that the extraordinary accessibility of so many articles, journals, and reports via the internet can make it harder to set priorities over what might be important to keep around for a longer look. Not everything can have top priority, after all.

In the long-run, the issue is that when the JEP was starting and we were worrying about library-grade paper, we had no doubt that the paper would actually last at least 100 years. After all, I have personally looked up books and reports that were more than 100 years old. Digitized records of old books and journals are based on the paper copies held in libraries.

But will the digital record of the journal still be available in 50 or 100 years? Will readers still be using Adobe Acrobat PDF files five or ten decades from now? Will companies still be supporting the necessary software, and the underlying operating systems? Will the hardware of that time, 100 years from now, be well-suited to reading this text? It’s easy to say “yes,” but ongoing digital access doesn’t happen without ongoing investment. There are plenty of examples of digital data and information from decades ago that either aren’t easily accessible, or aren’t accessible at all, because the ongoing investment in hardware and software to keep them accessible didn’t happen.

Maxwell Neely-Cohen of the Library Innovation Lab at Harvard Law School tackles this issue in “Century-Scale Storage,” that is, “If you had to store something for 100 years, how would you do it?” There are some commonsensical rules.

For example, the Smithsonian endorses a “3-2-1 Rule” when it comes to data storage: “3 copies of the data, stored on 2 different media, with at least 1 stored off-site or in the cloud.” Or as archivist Trevor Owens puts it in his seminal text Theory and Craft of Digital Preservation, “In digital preservation we place our trust in having multiple copies. We cannot trust the durability of digital media, so we need to make multiple copies.” When storing digital data, archivists recommend utilizing file formats that are widespread and not dependent on a single commercial entity—in the words of the Smithsonian, “non-proprietary, platform-independent, unencrypted, lossless, uncompressed, [and] commonly used.” But at the century scale, even our most widely adopted file formats are completely untested.
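The 3-2-1 Rule is mechanical enough to check in code. Here is a minimal sketch of such a check, where the `Copy` record and the example backup plan are hypothetical illustrations of my own, not anything from the Smithsonian or Owens:

```python
# Check whether a set of stored copies satisfies the "3-2-1 Rule":
# at least 3 copies, on at least 2 different media, at least 1 off-site.
from dataclasses import dataclass

@dataclass
class Copy:
    medium: str    # e.g. "hdd", "tape", "cloud"
    offsite: bool

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    return (len(copies) >= 3
            and len({c.medium for c in copies}) >= 2
            and any(c.offsite for c in copies))

plan = [Copy("hdd", False), Copy("tape", False), Copy("cloud", True)]
# satisfies_3_2_1(plan) holds; drop the off-site cloud copy and it fails.
```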

When it comes to very long-term storage and retrieval, there is also a tendency to assume that someone else is thinking about it, and somewhere up in “the cloud,” it’s all being taken care of. For the ordinary dangers like an occasional power outage, this assumption seems reasonable. But when thinking about safe storage and access over a century, the question becomes whether there is protection against extreme and unexpected events. There are three main companies that rule the market for cloud storage. There’s no historical reason to believe that those same companies will exist 100 years from now. There’s no reason to believe that those companies view their cloud storage operations like a long-term library for future generations. Archives can and do disappear, or become inaccessible. Neely-Cohen writes:

The cloud’s current data center regime is only designed for conditions of utter stability. The physical threats to data centers are not dissimilar to the threats faced by traditional libraries, with a few additions: fire, water, physical destruction, neglect of maintenance, power failures, connection failures, theft, vandalism, and the constant forever need for software that works. During the writing of this piece, in July 2024, a Crowdstrike update bug caused archives that were using Microsoft Azure’s cloud storage services to lose access to their holdings. Natural disasters, wars, and political upheavals are all capable of causing immediate and irrevocable disruptions. … [I]t’s fairly certain any substantive nuclear exchange would render the cloud unusable. Even aside from such nightmare scenarios, the cloud is made possible by a relatively small number of undersea cables that require constant maintenance. Any blue water naval power already has the firepower and capability to severely damage global access to the internet, and thus the cloud. The global geographic distribution of data centers heavily tilts toward the U.S. and Europe. The cloud is fairly centralized, because the companies that run it are fairly centralized.

Cloud storage requires paying someone, an outside entity, for as long as you are engaged in the act of storing. … Amazon S3 has tried to combat this by offering a storage class for slower but more permanent storage, “Glacier,” designed to be competitive with offline cold storage options, by separating storage pricing from retrieval pricing (their “Flexible Retrieval” option is $3.60 per TB as of November 2024). But you still have to pay them. Every month or every year. Forever. You can turn off the machines that you own for a while and then turn them back on, and everything you stored will still be there, but if you stop paying your cloud storage fee the data is gone, probably forever.
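The pay-forever point lends itself to rough arithmetic. Here is a sketch under two assumptions of my own: that the quoted $3.60 is per terabyte per month, and that the price never changes over the century (it surely will); retrieval fees are ignored:

```python
# Naive cumulative cost of renting cloud cold storage for a century,
# holding the quoted price constant and ignoring retrieval fees.

def century_storage_cost(price_per_tb_month: float, terabytes: float,
                         years: int = 100) -> float:
    """Total dollars paid over `years` of continuous monthly storage fees."""
    return price_per_tb_month * terabytes * 12 * years

cost = century_storage_cost(3.60, 1.0)  # roughly $4,320 for 1 TB over 100 years
```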

Of course, these issues of how information is accessed, consumed, and stored are much larger than my own individual journal, or the journals of the American Economic Association. I don’t think society is going back to paper as a mechanism of information dissemination and long-term storage–and that’s overall a good thing. But in specific contexts, we are very much still working through the habits and practices of interacting with the ever-evolving digital world.

Winter 2025 Journal of Economic Perspectives Freely Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided back in 2011–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Winter 2025 issue, which in the Taylor household is known as issue #151. Below that are abstracts and direct links for all of the papers. I plan to blog more specifically about some of the papers in the next few weeks, as well.

________________

The 2023 Merger Guidelines and Beyond

“The 2023 Merger Guidelines and the Arc of Antitrust History,” by Daniel Francis

In 2023, the federal antitrust agencies rewrote the nation’s flagship merger policy document, as part of a broader “Neo-Brandeisian” effort to bring about a deep reform of the antitrust system. The result—the 2023 Merger Guidelines—has been highly controversial: celebrated by some as a revolutionary advance, and criticized by others as a step back toward a benighted past. This article evaluates the 2023 guidance against the arc of antitrust’s modern history. It argues that the new guidance breaks a long trend of migration from structure toward welfare as the primary orientation of merger enforcement, but that it does so cautiously, by achieving a fraught ambiguity between welfarist and nonwelfarist policies. In inviting both revolutionary and evolutionary readings, the agencies have sacrificed clarity and discouraged beneficial deals, but they have also deferred—at least for now—a sharp conflict between those who would preserve antitrust’s governing paradigm and those who would remake it.

“Improving Economic Analysis in Merger Guidelines,” by Louis Kaplow

Merger review should reflect basic precepts of decision analysis, best practices in industrial organization economics, and teachings from related fields. Unfortunately, the analytical methods in modern merger guidelines fall short. Protocols violate standard prescriptions for information collection and decision-making, rely on a market definition paradigm that deviates significantly from core models of competitive interaction, fail to leverage central advances in understanding the efficiency consequences of mergers, and contravene or ignore fundamental dynamics relating to entry. This article elaborates correct analysis and contrasts it with that embodied in modern merger guidelines generally employed throughout the developed world, including the 2023 Merger Guidelines revision in the United States.

“Acquisitions to Enter New Markets,” by Carl Shapiro

      How should antitrust enforcers treat acquisitions by successful firms to enter new markets? Should a major pharmaceutical company with an extensive sales and distribution network be permitted to acquire a popular drug that does not compete against any of the drugs it already owns? Throughout American economic history, expansion by successful firms into new markets has played a vital role in promoting competition and spurring economic growth. However, acquisitions to enter new markets also can harm competition by enabling monopolists to enlarge their empires. The 2023 Merger Guidelines break new ground by announcing that the US antitrust agencies will challenge mergers and acquisitions that “could enable the merged firm to extend a dominant position from one market into a related market,” but they say very little about how such acquisitions will be evaluated. This article explains how antitrust enforcers can use economic evidence and theory to distinguish between acquisitions to enter new markets that are harmful and those that are beneficial.

The US Safety Net

“Two Histories of the Public Safety Net,” by Christopher Howard

      Although poverty in the United States has declined over the last half century, it remains a serious problem. This article charts the historical development of the public safety net, starting with means-tested programs and then adding inclusive social insurance programs. Over time, programs targeted at people with low incomes gradually shifted from the local to the state to the national level. Nevertheless, they remained politically vulnerable as policymakers questioned the deservingness of recipients and often tried to limit cash welfare. Those concerns were less salient with inclusive programs like Social Security and Medicare, which expanded rapidly between 1950 and 1980, largely to the benefit of older Americans. The concluding section highlights recent trends that challenge the supposed weakness of means-tested programs and strength of inclusive programs.

“Did Welfare Reform End the Safety Net as We Knew It? The Record since 1996,” by Lucie Schmidt, Lara Shore-Sheppard, and Tara Watson

      This paper examines the evolution of the safety net for low-income families since welfare reform in 1996 promised to “end welfare as we know it”. The total package of supports has become substantially more generous, but has changed in character. Support has shifted away from monthly cash transfers towards tax credits and in-kind benefits, and has expanded for working families while declining for those without earnings. Resources available to married-parent families have expanded, whereas those for adults without dependents remain scant. We also document that, despite expanded state flexibilities, variability in generosity across states did not grow due to simultaneous expansions of federal food assistance and tax credits. Overall, these changes reflect ongoing contention over two key policy issues. First, what is the appropriate trade-off between promoting work versus preventing material hardship? Second, what is the appropriate role for states versus the federal government in determining safety net generosity?

“Administrative Burdens in the Social Safety Net,” by Pamela Herd and Donald Moynihan

      Administrative burdens shape people’s experiences of, and access to, social safety net programs. They can undermine the goals these programs are trying to achieve. Such burdens are the experience of policy implementation as onerous, and arise via learning costs (knowing about the existence of and requirements of public services), compliance costs (time and effort spent dealing with bureaucratic demands, such as paperwork and documentation), and psychological costs (emotional responses to citizen-state interactions). Such frictions can substantially limit eligible people’s access to public services they want, would benefit from, and are legally entitled to receive. Those with the fewest resources, and the greatest needs, may struggle more to overcome burdens, with these frictions thereby reinforcing existing inequality. As a research approach, administrative burden offers an intuitive and accessible way for policy actors and researchers to improve state capacity and the delivery of public services.

Articles

“Getting Infrastructure Built: The Law and Economics of Permitting,” by Zachary Liscow

      Given the benefits to economic growth and economic mobility, and the need to transition to green energy, getting infrastructure built is an urgent issue. I first review the evidence on the costs and benefits of the current regime of government approvals for such building: in the US, permitting is slow, infrastructure is expensive, and environmental outcomes are not particularly good. I propose a framework for reform with two dimensions: the power of the executive branch to decide and its capacity to plan. After considering reform possibilities, I propose that reforming both dimensions could lead to a possible “green bargain” that benefits efficiency, the environment, and democracy.

“A Practical Guide to Shift-Share Instruments,” by Kirill Borusyak, Peter Hull, and Xavier Jaravel

      A recent econometric literature shows two distinct paths for identification with shift-share instruments, leveraging either many exogenous shifts or exogenous shares. We present the core logic of both paths and practical takeaways via simple checklists. A variety of empirical settings illustrate key points.

“Tax Privacy,” by Joel Slemrod

      Implementing an equitable and efficient tax system requires that the government have access to certain information about taxpayers. If the demand for privacy implies limiting government’s access to relevant information, it constrains the extent to which a tax system can achieve these goals. In this way, demand for limiting government access to information imposes social costs. This article discusses the aspects of privacy that matter, including leaks, and explores certain countries’ public disclosure of taxpayer information. It then discusses what is known about, and the difficulties of ascertaining, how taxpayers value tax privacy, whether offering choices to taxpayers about information revelation can ease the tension between privacy and otherwise optimal tax policy, and uses the wealth tax as an example of the policy tradeoffs that arise.

“Philipp Strack, 2024 Clark Medalist,” by Drew Fudenberg

        The 2024 John Bates Clark Medal of the American Economic Association was awarded to Philipp Strack, Professor of Economics at Yale University, for his pathbreaking contributions to the study of individual decision making, which have introduced new techniques, improved our understanding of important economic phenomena, and helped spark a new wave of research on the economics of information while building bridges between modern economic theory and a wide range of adjacent disciplines. This article summarizes some of Philipp’s papers, and explains how they build on and improve previous work.

“Recommendations for Further Reading,” by Timothy Taylor