The 2023 Merger Guidelines Will Remain: What Does That Mean?

Under current law, US companies considering a merger or acquisition above $125 million in size must first report it to the government. The most recent data show that 1,805 such transactions were reported in 2023, a relatively low number by recent standards: in 2021 and 2022, for example, more than 3,000 proposed transactions were reported each year.

Government antitrust enforcers at the Federal Trade Commission and the Antitrust Division of the US Department of Justice don’t have the time or people to scrutinize all of these proposed transactions deeply. In 2023, for example, the FTC challenged 16 deals and the DOJ challenged another 12; in total, that’s less than 2% of the proposed transactions. It’s appropriate that this number be relatively low: after all, the role of the government authorities isn’t to decide whether the transaction is likely to be profitable (we can assume the firms involved have analyzed that question), nor whether it is “socially desirable” in some broad and nebulous sense (it’s a fairly free-market economy, after all), but only whether the transaction is likely to reduce competition in the economy.

Going back to 1968, the antitrust authorities at the FTC and the DOJ have published a set of Merger Guidelines describing which transactions are likely to get the closest scrutiny, and have updated these guidelines once or twice a decade. It’s important to recognize that these Guidelines are not legally binding; actual antitrust law is the legislation passed by Congress and the case law established by court precedents. Thus, the evolving Merger Guidelines have always been a combination of summarizing what the laws and the courts actually say, and what the enforcement agencies think the courts should say.

When President Biden took office in 2021, he appointed people to the key antitrust policy-making positions at FTC and DoJ who were strongly on the record that the Merger Guidelines of 1968 were largely correct, but that the updates starting in 1982 were on the wrong track, and continued on the wrong track with the updates of 1984, 1992, 1997, 2010, and 2020. As you might expect, those who had been involved in the updates of Merger Guidelines through the Reagan, Bush, Clinton, Obama, and Trump presidencies were mostly unimpressed.

A new set of Merger Guidelines was published in December 2023. I won’t try to summarize all the pros and cons here. But as a rough summary, the prevailing bipartisan perspective had been that the purpose of competition is to make sure that consumers benefit. From this view, the goal of antitrust authorities was to evaluate whether a given merger would benefit consumers through lower prices and/or quality improvements. The critics of this position in the Biden administration argued that the law required preservation of “competition” itself–which could involve blocking mergers and acquisitions just because they increased the size of existing firms. One justification for this view is that it takes a longer view of competition: if smaller firms are not absorbed into larger ones, some of them later develop into full-fledged competitors. Another justification is that large firms (and their rich owners) should be viewed as politically dangerous because of their power to influence government.

Here, I’ll add three thoughts on the situation.

First, one possibility was that the 2023 Merger Guidelines would turn out to be extremely short-lived, because they would be overturned by the incoming Trump administration. However, Andrew N. Ferguson, the new head of the Federal Trade Commission, has written a “Memorandum” saying that the 2023 Merger Guidelines will remain in place. Ferguson notes:

Insofar as there is any ambiguity, let me be clear: the FTC’s and DOJ’s joint 2023 Merger Guidelines are in effect and are the framework for this agency’s merger review analysis. … Stability across administrations of both parties has thus been the name of the game. President Clinton retained the 1992 Guidelines promulgated by the George H.W. Bush Administration until 1997. President George W. Bush retained the 1997 Guidelines unchanged. And President Trump retained unchanged the 2010 Guidelines issued by the Obama Administration. … I think the clear lesson of history is that we should prize stability and disfavor wholesale rescission. … A recriminatory cycle of partisan rescissions will not help the economy. If merger guidelines change with every new administration, they will become largely worthless to businesses and the courts. No business can plan for the future on the basis of guidelines they know are one election away from rescission, and no court will rely on guidance that is so obviously partisan.

Stability is also good for the enforcement agencies. The wholesale rescission and reworking of guidelines is time consuming and expensive. We should undertake this process sparingly. We have limited resources to patrol the beat and constant turnover undermines agency credibility. By and large, the 2023 Merger Guidelines are a restatement of prior iterations of the guidelines, and a reflection of what can be found in case law. That is good reason to retain them. That is not to say that the 2023 Merger Guidelines are perfect. No guidelines are perfect. If experience teaches that revisions are appropriate, then the agencies can consider revisions as they have done in the past. This iterative and transparent revision process promotes the stability that the guidelines need to succeed. For the foreseeable future, and until any such revisions are adopted, the FTC will use the 2023 Merger Guidelines as the framework to do our important merger-enforcement work.

The second question is whether or to what extent the Biden antitrust authorities have “won” in overturning the earlier doctrines and transforming merger guidelines back to their version of the good old days. The answer here is unclear. The Winter 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor) has a three-paper symposium on “The 2023 Merger Guidelines and Beyond.”

I’ll focus here on the essay by Francis. He sketches the evolution of the merger guidelines over time. He points out that the Biden antitrust appointees argued vehemently that they intended to overturn the existing guidelines, with comments like “the era of lax enforcement is over, and the new era of vigorous and effective antitrust law enforcement has begun,” and promised to reverse “decades of lax . . . enforcement” and “broad government inaction.” They strongly criticized the idea that antitrust decisions should focus on consumer welfare, saying that while consumer harm “might matter in some contexts,” “to say it always matters, and is indeed the lodestone of the law, is . . . unsupportable and can . . . border on the ridiculous.”

Francis argues that the “draft” version of the 2023 Merger Guidelines released in June 2023 was fairly radical. It eliminated language from earlier guidelines about whether firms had “market power” to raise wages and the importance of consumer welfare. Instead, it focused on a set of 13 rules phrased in terms of “Mergers Should Not…” For example, as Francis notes, mergers should not “significantly increase concentration in highly concentrated markets,” “eliminate a potential entrant in a concentrated market,” or “entrench or extend a dominant position.” None of these 13 rules in the draft version mentioned whether the actions by a firm might benefit consumers or not.

The draft Merger Guidelines received 3,000 public comments. Many of them focused on the issue of whether antitrust should focus on consumer welfare: some in favor, some against. Francis writes:

Above all, the post-draft debate centered on the draft’s relationship with welfarism. The draft had sketched a merger policy that, to a significant extent, would diverge from modern welfarist antitrust. Some commenters demanded that the agencies go further and entirely reject the welfarist paradigm; others demanded a clear recommitment to it. The agencies were at a crossroads. … On my reading, the final document effectively declines to make the choice between welfarism and non-welfarism that commenters had demanded. Instead, it opts for ambiguity: it can plausibly bear both a welfarist and a non-welfarist reading. Like a Rorschach test, the meaning of the 2023 Merger Guidelines depends upon what one expects, hopes, or fears to find there. Above all, it invites, but does not answer, a basic question: Is a tendency to harm consumers (or other trading partners) necessary for condemnation under the 2023 guidelines?

Thus, Francis argues that the meaning of the new guidelines will only become apparent as they are asserted by government antitrust authorities and as those arguments are accepted or rejected by courts.

Francis makes the interesting point that in the 2023 Merger Guidelines, and indeed going back more than a century in antitrust law, “competition” has served as a term of compromise, because it allows alternative interpretations. Most Americans favor “competition” between firms as a general concept, rather than monopoly. But imagine that a town starts with a bunch of locally-owned restaurants, grocery stores, and drug stores. Then a bunch of national chains and superstores move in. A number of local customers turn to the new options, where prices are often lower, and some of the locally-owned stores go out of business. Is this a reduction in competition, because competitors have been driven out of business? Or an increase in competition that benefits the consumers who choose the new options? What if online firms with home delivery drive local merchants and malls into financial distress? What if online streaming of movies and shows drives movie theaters into financial distress? Many people are in favor of competition when it offers them new options, but they become dubious if and when some of the competitors lose out.

I would add that Americans have a similar ambivalence about large firms. On one hand, many people look back with some fondness on the days when giant American firms making cars or steel or chemicals had a dominant role in the US and global economy. The US political system becomes seriously concerned if US producers in a certain industry don’t seem dominant enough, and there is often political support for putting tariffs on foreign producers and subsidizing domestic firms (as in semiconductors) to close the gap. But in the areas where the US does in fact have the dominant firms, like many of its big tech companies, along with firms like Walmart, ExxonMobil, and CVS Health, we then worry that their size is a public concern and that antitrust authorities should take a close look. In short, the lack of large and dominant US firms in certain industries is a public concern needing a policy response, and the presence of large and dominant US firms in certain industries is also a public concern needing a policy response.

Finally, for those who would like a range of expert opinions about the 2023 Merger Guidelines, a useful starting point is the 11-paper symposium organized by the Review of Industrial Organization and published in its August 2024 issue.

I’ve commented on this blog from time to time about the Biden antitrust team, the new merger guidelines, some current antitrust cases, and the historical changes in merger law over time.

Interview with Alan Auerbach: Federal Debt and Social Security

David A. Price of the Federal Reserve Bank of Richmond interviews Alan Auerbach “On the federal debt, the Social Security trust fund, and how Uncle Sam discourages seniors from working” (Econ Focus, First/Second Quarter 2025). Here are a few of the points that caught my eye.

The US Has Lost its Debt-Restraint Religion

It’s the case, as I’ve said in recent years, that the U.S. doesn’t pay any attention to the national debt. That was not true if you go back, say, 20, 25 years or more. If you look, for example, during the Reagan administration as well as the first Bush and Clinton administrations, it was the case that when debt or projected deficits went up, government undertook actions to reduce them, either by increasing taxes or by cutting spending.

That ended sometime in the early 2000s. In the last 20 years or so, it’s just not there. If we went back to the way we were behaving then, the kinds of shocks that are going to keep hitting the budget, either because of interest rates or pandemics or financial crises or other things, could be dealt with by those kinds of government reactions.

So it’s both good news and bad news. It’s good news in the sense that we’ve been there before. It’s not as though we have to undertake an approach that’s never been contemplated or practiced. But on the other hand, we lost religion sometime in the last 20 to 25 years. And it’s not exactly clear how we’re going to get that back because we lost it in a bipartisan way. There used to be bigger constituencies in Congress and in the White House for dealing with national debt, at least when problems became more apparent.

How Inflation, and Limited Inflation Adjustments, Reduce US Deficits

[T]here are different ways in which inflation interacts with the fiscal system to affect the taxes that people pay and the benefits that they receive. It could help them or hurt them; it mostly hurts them.

Some things are not indexed for inflation at all. … [T]he threshold over which you’re taxed on your Social Security benefits … has been fixed in nominal terms since it was implemented. That means that the more inflation we have, the more people are going to be subject to tax on some or all of their Social Security benefits.

Where we do have indexing for a lot of elements of the tax system and benefits, there are delays before the system catches up. For example, once you’re receiving Social Security, your benefits go up every year because of inflation. On the tax side, the federal tax brackets are indexed for inflation so that if your income goes up by 10 percent because inflation is 10 percent, it’s not going to change your bracket because the bracket’s indexed for inflation. However, there’s a delay in the indexing. What that means is that if there’s a sudden surge in inflation, the first year or so is going to happen before the brackets and the benefits start reacting to it. For example, if we went from an inflation rate of zero to an inflation rate of 10 percent on a permanent basis, that would cause a 10 percent decline in people’s Social Security benefits because it would happen once and then we’d be forever one year behind.

The final thing is that capital income — interest, capital gains, things like that — are mismeasured because of inflation. For example, if I buy an asset for $100 and the price level doubles over the period that I hold it, and I sell the asset for $200, my real gain is zero. But I’d be taxable on a gain of $100, because we don’t index capital gains for inflation. We don’t index interest income. If the inflation rate is 4 percent and I’m getting 4 percent nominal interest, my real interest is zero, yet I’m still taxable on the 4 percent.

So through lack of indexing, delayed indexing, mismeasurement of capital income, as well as similar effects on the benefits side in terms of delayed indexing, people in general — not every person — have a reduction in resources as a result of inflation. In one sense, that makes inflation a more effective tool for dealing with the deficit. … I don’t think it’s a particularly attractive way to do it because it’s quite arbitrary. If you look at the distribution of effects, it varies a lot across households depending on the type of income they have. We wouldn’t say it’s very well designed.
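The mismeasurement arithmetic in Auerbach’s two capital-income examples can be sketched in a few lines. The 25% tax rate below is a hypothetical illustration, not a figure from the interview; the dollar amounts and rates are the ones he uses.

```python
# Sketch of how inflation distorts taxes on capital income,
# following Auerbach's two examples. The 25% tax rate is hypothetical.

TAX_RATE = 0.25

# Capital gains: buy an asset at $100, the price level doubles over the
# holding period, and the asset sells for $200.
purchase_price = 100.0
price_level_growth = 2.0
sale_price = 200.0

nominal_gain = sale_price - purchase_price                     # taxed: $100
real_gain = sale_price - purchase_price * price_level_growth   # actual: $0
tax_on_nominal_gain = TAX_RATE * nominal_gain                  # tax owed on a zero real gain

# Interest income: 4% nominal interest with 4% inflation.
nominal_rate = 0.04
inflation = 0.04
real_rate = (1 + nominal_rate) / (1 + inflation) - 1           # ~0: no real return
taxable_interest_on_100 = TAX_RATE * nominal_rate * 100        # yet the full 4% is taxed

print(nominal_gain, real_gain, tax_on_nominal_gain)
print(round(real_rate, 6), taxable_interest_on_100)
```

Either way, the tax falls on nominal income, so a saver with zero real income still owes tax, which is the sense in which inflation quietly raises revenue.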

Might Social Security Switch to Relying on General Tax Revenues?

It’s true that in 1983, which was the last time the Social Security trust fund was nearing exhaustion, we had the Greenspan Commission that recommended changes in Social Security, which were then adopted, which raised the retirement age very slowly and increased payroll taxes. That put the Social Security system on a better financial footing for many decades.

That could happen again. But it could also be the case that Congress and the government don’t have the appetite for providing this kind of bad news to people in the Social Security system. They could just say, well, we’ll use general revenue funding to cover the shortfalls of Social Security. We already do that for Medicare Part B, the health insurance, and Medicare Part D, the drug benefit. They are not self-sustaining; we have premiums paying for a small part of the benefits and the rest comes from general revenues.

Some of the traditional supporters of Social Security say it’s good to have it be a self-financing system because it makes people feel that they have a stake in it when they’re paying their payroll taxes and so forth. But if the choice of the government is to cut benefits, raise payroll taxes, or use general revenue funding, given their behavior in recent years, I’m fearful that they’ll choose general revenue funding and just kick the can down the road.

Re-Interpreting US Law for Environmental Permitting

The National Environmental Policy Act, commonly known as NEPA, became law in 1970. The law itself remains intact, but the way in which the law is implemented is undergoing major changes.

The back-story goes like this. In 1977, President Jimmy Carter signed an Executive Order that gave the Council on Environmental Quality (CEQ)–which is an office in the White House administrative structure–the authority to interpret the provisions of the law. Thus, the law requires environmental review and permitting for any “major federal action” with a “significant effect,” but under the CEQ interpretations, this has turned into requiring environmental review for pretty much any action with even an arguable environmental effect. Over the decades, courts have often treated the CEQ interpretations as functionally the same as the law itself.

This arrangement began to come apart last November, when the US Court of Appeals held in Marin Audubon Society v. Federal Aviation Administration that the CEQ lacked authority to interpret the provisions of NEPA. The admittedly thin legal line here seems to be that CEQ could issue “guidelines,” which did not have the force of law, but did not have the power to issue “regulations,” which would have the force of law. I’ll add that this distinction is not unusual: for economists, a better-known example might be the Merger Guidelines issued by the Federal Trade Commission and the Antitrust Division of the US Department of Justice. Courts are welcome to read the Guidelines for input, but courts are not bound by how the Guidelines have chosen to interpret the law.

A presidential Executive Order is not a law, and can be overridden by any future president. Thus, emboldened by the Court of Appeals decision, President Trump signed his own Executive Order revoking Carter’s 1977 order. The order essentially sets aside all previous CEQ rulemaking with regard to NEPA, and thus also calls into question all court decisions since 1977 that were based on the CEQ rules. Instead, in the next 30 days, all federal agencies with plans affected by NEPA rules are required to “develop and begin implementing action plans to suspend, revise, or rescind all agency actions identified as unduly burdensome.” The CEQ has now published an Interim Final Rule to implement Trump’s Executive Order, which is open for public comment.

Setting aside the legalities, what policy choices are on the table here? It seems clear that the Trump administration would like to roll back the reach of NEPA, so that it is focused on a smaller group of major actions, with less need for agencies to submit (and to defend in court) Environmental Impact Statements. For those with environmentalist leanings, this general stance may seem obviously regressive. But in a country where green energy projects and infrastructure projects can be held up for years at a time by permitting requirements, the issues are not clear-cut. Zachary Liscow lays it out in “Getting Infrastructure Built: The Law and Economics of Permitting” (Journal of Economic Perspectives, Winter 2025, pp. 151-180).

(Full disclosure: I’m Managing Editor of the JEP, and thus predisposed to find the articles of particular interest.)

As Liscow points out, NEPA and other rules requiring environmental permitting emerged in the 1960s, in response to examples where government had approved and facilitated large infrastructure and energy projects with little or no public input, which often involved imposing costs on those with little political power. However, under the environmental permitting regime as it has evolved, the blocking power of even small groups has been magnified. As Liscow writes: “In the 1960s, the United States did big things with little public consultation. Now, even smaller things can be held up by small opposition groups.”

If you support, say, substantial building of low-carbon energy projects along with the transmission lines to get that energy to market, or substantial building of mass transit projects, then it needs to be a concern that anyone who can pay to hire lawyers can drive up the costs of such projects, delay them, and even block them entirely. By Liscow’s calculation, the average environmental impact statement in 2022 took 4.2 years to prepare.

Is there a way to strike a more functional balance between concern over environmental protection and the importance of public feedback, on one hand, and allowing projects that have large positive expected value to proceed somewhat expeditiously, on the other? As Liscow points out, the US has been deciding these questions through a process of “adversarial legalism,” in which opposing parties slug it out in the courts. Obvious difficulties with this approach are that it can involve severe delays, and that those with the most lawyers may have an outsized chance of winning.

For example, many countries have a mechanism for producing a long-term plan for infrastructure, and once the plan has been debated and approved, it becomes much harder for anyone to file a lawsuit to block it. One study compared these plans across countries–except that the United States was not included in the study, because it doesn’t have a long-term infrastructure plan. Energy companies in Europe are required to cooperate with national infrastructure plans. In Canada, province-level governments regulate energy companies inside the province, but the federal government controls decisions about energy infrastructure between provinces.

The US system spends much, much more on lawyers to draw up rules and to argue them in court than it spends on planners who would actually get down into the details, obtain public feedback, and think about how projects might be adjusted to keep their benefits but minimize their costs. Liscow is full of nuggets like this:

In Italy, a country with low transit construction costs, Milan’s transit agency has built up so much planning and design capacity that it consults not only on other Italian projects, but also projects abroad (Goldwyn et al. 2023). In contrast, when Boston began building its Green Line mass transit extension, its transit agency had only four to six full-time employees “managing the largest capital project in the agency’s history” (Goldwyn et al. 2023, p. 24), leading to poor design choices, reliance on consultants, and high costs. Similarly, “[i]n New York, where consultants largely designed and managed construction for Phase 1 of the Second Avenue Subway, the project management and design contracts were 21 percent of construction costs,” whereas in countries with more in-house capacity for planning, like France, “the typical range is 5–10%, with 7–8% most common, and in Italy and Istanbul, it is typically 10%” (Goldwyn et al. 2023, p. 25).

Liscow raises the idea of a “green bargain.” The notion is that the US would seek a dramatic increase in resources for planning and public participation in large infrastructure projects. The counterbalance would be that when a project was completed, courts would then be generally disposed to accept the outcome and to let the project proceed without further investigation. As Liscow writes:

After all, broadening public participation and moving it upstream could well be a better way of generating outcomes that reflect public preferences than a series of not-in-my-backyard lawsuits brought by small special-interest groups. There is little “democratic” about a small handful of people using the courts to hold up projects that have been thoroughly evaluated, with issues widely aired.

In the present setting, the steps being taken by the Trump administration in response to NEPA and CEQ seem to be all about loosening constraints, and not about improved planning and public participation. But it’s important to remember that the problem to which the Trump administration is responding–the way in which the US method of environmental permitting delays and drives up costs of transportation and energy infrastructure–is a real problem. Liscow offers a vision for how to address the problem, with a number of detailed policy suggestions, while balancing all the values at stake.

Europe’s Internal Trade Barriers: A Long Way From a Single Market

One remarkable advantage for the US economy is the large size of its internal market. US firms can make investments in new goods and services knowing that they can potentially sell, with only a few limitations rooted in state laws, to a large number of customers across a broad area.

Indeed, the openness of the US internal market is rooted in the US Constitution. Article 1, Section 8, lists the powers of Congress, and the third clause gives Congress the power to “regulate Commerce … among the several States.” By giving that power to Congress and the federal government, the Constitution blocked states from setting up barriers to trade with each other–for example, although US states can pass laws that may create indirect costs for companies buying and selling across state lines, they can’t impose tariffs or quotas on goods and services imported from other US states.

A primary goal in creating the European Union was to replicate this “single market,” and thus to give European firms the incentives for innovation, investment, and expansion that result from wide-open access to a large internal market. But according to the IMF Regional Economic Outlook report on “Europe: A Recovery Short of Europe’s Full Potential” (October 2024), the EU “single market” project has a long way to go. Here’s a sample (references to text “boxes” have been cut):

Europe’s productivity gap with the global frontier can be traced back to a more limited market size, capital market constraints, skilled labor shortages, and stalled structural reforms. Firm-data analysis shows that Europe’s segmented goods and services markets are keeping businesses from becoming larger, spending more on R&D, and exploiting economies of scale. Moreover, fragmented capital markets mean that firms do not draw enough on equity financing. As a result, business dynamics are dampened especially in the services sector where start-ups tend to operate with large intangible capital. …

There is widespread agreement on the sources of Europe’s growth weakness. Recently released expert studies (Letta 2024; Draghi 2024) come to a similar conclusion that Europe’s low productivity is related to lack of market depth and scale. Both reports link Europe’s lack of competitiveness to Europe’s incomplete single market in the trade of goods, services, and factors of production (capital, labor). Remaining barriers are considered to be still substantial and have resulted in less investment and innovation than necessary to accelerate growth and productivity to levels seen in other advanced regions.

A deeper and larger single market offers the potential for a resurgence in productivity growth. European integration delivered tangible growth benefits in the past and could do so again. Following the two EU enlargement waves in 1995 and 2004, EU member countries began trading more with each other (Figure 15, panel 1). As a consequence, in the decade following accession, regions in new member states saw on average GDP per capita rise by more than 30 percent relative to comparable non-accession regions and existing member states gained too.

It is important to note that regions within Europe that were better integrated through value chains and transport networks registered higher gains. However, value chain integration has stalled since the last decade … and substantial barriers to goods and trade flows remain … New IMF analysis finds that in 2020 trade costs within Europe were equivalent to a sizable ad-valorem tariff of 44 percent for the average manufacturing sector compared to 15 percent between US states, and as high as 110 percent in the case of services sectors …

Here is a figure to which the IMF is referring. The darker blue line shows intra-EU trade in goods; the lighter blue line shows intra-EU trade in services. As you can see, intra-EU trade in goods had risen substantially up to about 2008, but has only crept a little further since. Intra-EU trade in services remains less than 10% of their value, 30 years after the birth of the “single market” initiative back in 1993.

Again, the IMF estimates that remaining barriers to trade within the countries of the EU are equivalent to a 44% tariff on trade in goods, and a 110% tariff on trade in services. These high tariffs are bad for economic growth in Europe, just as similar state-level tariffs would be bad for US growth. Of course, the underlying economic reasoning also explains why a global outbreak of tariffs would be disadvantageous for both US and global growth.
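As a rough sketch of what a “tariff-equivalent” trade cost means: the delivered price a buyer faces scales up by the estimated rate, just as it would under an actual ad-valorem tariff. The rates below are the IMF estimates quoted above; the 100-unit producer price is a hypothetical illustration.

```python
# "Ad-valorem tariff equivalent": within-region trade costs act as if each
# cross-border sale carried a tariff at the stated rate. Rates are the IMF
# estimates cited in the text; the producer price of 100 is hypothetical.

def delivered_price(producer_price: float, tariff_equivalent: float) -> float:
    """Price faced by a buyer when trade costs equal an ad-valorem tariff."""
    return producer_price * (1 + tariff_equivalent)

price = 100.0
eu_goods = delivered_price(price, 0.44)     # intra-EU manufacturing (44%)
us_goods = delivered_price(price, 0.15)     # between US states (15%)
eu_services = delivered_price(price, 1.10)  # intra-EU services (110%)

print(eu_goods, us_goods, eu_services)
```

The gap between the EU and US figures is the sense in which Europe remains far from a single market: the same good effectively costs a cross-border European buyer substantially more than it would cost a buyer in another US state.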

Costs of Pennies and Nickels

The US Mint, in its 2024 Annual Report, includes a table showing the cost of producing pennies and nickels. As the bottom row shows, the total cost of making a penny is now 3.69 cents; the total cost of making a nickel is 13.78 cents. It is apparently the 19th consecutive year that the cost of making pennies and nickels has been above their face value.
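The per-coin loss implied by these unit costs is simple arithmetic: a coin loses money whenever its production cost exceeds its face value. The sketch below uses only the costs reported above; total minted volumes would require the report’s production tables, so none are assumed.

```python
# Per-coin loss implied by the Mint's reported unit costs
# (2024 Annual Report): cost of production minus face value, in cents.

face_value = {"penny": 1.0, "nickel": 5.0}    # cents
unit_cost = {"penny": 3.69, "nickel": 13.78}  # cents, from the report

loss_per_coin = {c: unit_cost[c] - face_value[c] for c in face_value}
# penny: about 2.69 cents lost per coin; nickel: about 8.78 cents

for coin, loss in loss_per_coin.items():
    print(f"{coin}: costs {unit_cost[coin]:.2f}c, loses {loss:.2f}c per coin minted")
```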

There’s been a long-standing concern that if pennies and nickels were eliminated, then prices would jump up to the nearest multiple of 10 cents. But as inflation has gradually raised price levels, the effect of such a change–if it was ever all that likely or large to begin with–doesn’t seem major. Also, with cash in rapid decline as a share of all transactions and credit cards rising, it’s not clear why vendors would need to raise prices at all (except perhaps for those paying cash?)

It’s perhaps odd that during the pandemic in 2021, there was a “coin shortage.” What seemed to be happening was that while people were continuing to receive coins as change for cash purchases, they were not recirculating those coins for future purchases. A lot of people (like me) seem to have a coin jar accumulating loose change on a shelf somewhere. Even during the coin shortage, the number of coins officially in circulation, including my coin jar, remained fairly high. But in 2024, the number of coins in circulation plummeted. My guess is that a change in spending patterns, with buyers using credit and debit cards for a larger share of all purchases, means that economic transactions can now happen just fine with a smaller number of coins.

It seems as if, whether or not the US government makes an official decision to phase out pennies and nickels, the actual use of those coins is in decline.

A Meditation: An Academic Journal Goes Paperless

The American Economic Association publishes a suite of 10 academic journals: one of them is the Journal of Economic Perspectives, where I have worked as Managing Editor for 39 years. Starting in February 2025, none of these journals is going to be printed on paper. For more than a decade now, the JEP has been freely available on-line, including the current issue and all the archives. Articles in the other AEA journals require access through an AEA membership or via a library subscription.

I remember when we were designing the first physical issue of JEP back in 1986. In those pre-Internet days, one issue was what kind of paper to use: it went without saying that we would use library-grade paper that would last more than 100 years, but we still needed to think about the thickness and whiteness of the paper. Those discussions feel weirdly anachronistic now.

Back in the 1980s, the JEP sought to take advantage of the newfangled personal computer technology. We were one of the first journals in economics (maybe the first?) to require all authors to send us their first draft on a floppy disk, which was then a 5 1/4-inch square, fragile enough that when sending it through the mail, you first slipped the disk into a cardboard envelope so it wouldn’t be injured in transit. I would edit the paper on the actual computer file. We would send the edited file back to the author on another disk for further revisions. We spent several thousand dollars a year on overnight mail, shipping physical disks, and we shipped physical galley proofs on paper between typesetters, authors, and the JEP editorial offices as well. But with the arrival of email attachments, we haven’t spent a dime on overnight mail for years now.

In a number of ways, this shift to a paperless journal is not only efficient and effective, but even poetic. The JEP started with the idea that we didn’t need to ship paper back and forth as part of receiving articles, doing comments and editing, and getting revisions. Now, we don’t need to ship paper for readers to see the finished journal, either.

Back in those early issues of JEP in the late 1980s, the printer created about 28,000 copies of each issue. It was roughly a tractor-trailer’s worth of paper. But over the years, fewer and fewer readers wanted to pay to receive paper copies. In the last few years, it turns out that even most academic libraries don’t want the paper copies, either. For the final paper issue in Fall 2024, we were printing only a little more than 2,000 copies.

Of course, accessing the articles in the JEP via the internet is vastly easier than tracking down one of those 28,000 paper copies from the old days. A rough count suggests that the average JEP article is downloaded in PDF form more than 60,000 times. Those who want to download an entire issue can do so as well.

The cost of producing the journal is lower, too. When looking at the JEP annual budget up into the early 2000s, it was a reasonable rule-of-thumb that printing and postage costs were about half of the total. Those costs have been diminishing over time, and now will vanish. The total budget for the JEP is published each year in the “Report of the Treasurer” of the American Economic Association. Total cost of the JEP was projected in May 2024 at $772,000 (final audited figures will be available later in 2025). Back in 2009 the total cost of the journal was projected at $882,000.

The comparison of JEP costs between 2009 and the present isn’t quite apples-to-apples: for example, some internal costs of the AEA production process are now allocated differently. But that said, the general increases in salaries and other costs for those working at the journal over the last 15 years have been more than offset by the decline in printing and mailing costs. Thus, the JEP is now both far more easily and cheaply accessible, because it’s freely online, and also less expensive to produce.

But as the kind of person who squints up at every sunny sky, wondering about storms, I find myself pondering two potential costs that these calculations may not be taking into account.

In the short-run, there is a shift from physical to digital proximity. When I was on the high school debate team, a half-century ago, we would go to the University of Minnesota libraries after school to do research. For me, a common pattern was to find the book or report or article that I was looking for–but then also to find that the other books on the same shelf, or other reports in the same series, or other articles in that issue or in a more recent issue were even more useful to me. (Yes, I was definitely a fun teenager.) There was a physical serendipity to research and to learning.

It’s possible to mimic this physical closeness with online tools. I’ve seen a “virtual library shelf” where you can see the books that would have been shelved side-by-side. If you look up a report or an article, it’s usually straightforward to look up the series of reports, or the other articles in the issue, or other issues of a publication. Perhaps the 21st century version of teenage (and adult) me will both search for a document and skim the “neighboring” ones.

But at least in the current state of technology, skimming seems harder to me. Clicking through neighboring volumes, and chapters in those volumes, remains harder for me than yanking volumes off shelves. Also, when I have a physical copy of a journal or report or book, it sits on my desk for a few days (or more!). I’m reminded of what’s in the issue multiple times as I see it, and reminded again even when I decide to throw the physical copy away. As the journal goes paperless, readers no longer have a paper version sitting in their in-basket, or their desk, or a coffee-table in the economics department lounge. Instead, readers receive an email that a new issue has been published, mixed in with the deluge of other emails that arrive each day, and soon buried under tomorrow’s deluge of emails.

Another way to phrase this tradeoff is that the extraordinary accessibility of so many articles, journals, and reports via the internet can make it harder to set priorities over what might be important to keep around for a longer look. Not everything can have top priority, after all.

In the long-run, the issue is that when the JEP was starting and we were worrying about library-grade paper, we had no doubt that the paper would actually last at least 100 years. After all, I have personally looked up books and reports that were more than 100 years old. Digitized records of old books and journals are based on the paper copies held in libraries.

But will the digital record of the journal still be available in 50 or 100 years? Will readers still be using Adobe Acrobat PDF files five or ten decades from now? Will companies still be supporting the necessary software, and the underlying operating systems? Will the hardware of that time, 100 years from now, be well-suited to reading this text? It’s easy to say “yes,” but ongoing digital access doesn’t happen without ongoing investment. There are plenty of examples of digital data and information from decades ago that either aren’t easily accessible, or aren’t accessible at all, because the ongoing investment in hardware and software to keep them accessible didn’t happen.

Maxwell Neely-Cohen of the Library Innovation Lab at Harvard Law School tackles this issue in “Century-Scale Storage”: that is, “If you had to store something for 100 years, how would you do it?” There are some commonsensical rules.

For example, the Smithsonian endorses a “3-2-1 Rule” when it comes to data storage: “3 copies of the data, stored on 2 different media, with at least 1 stored off-site or in the cloud.” Or as archivist Trevor Owens puts it in his seminal text Theory and Craft of Digital Preservation, “In digital preservation we place our trust in having multiple copies. We cannot trust the durability of digital media, so we need to make multiple copies.” When storing digital data, archivists recommend utilizing file formats that are widespread and not dependent on a single commercial entity—in the words of the Smithsonian, “non-proprietary, platform-independent, unencrypted, lossless, uncompressed, [and] commonly used.” But at the century scale, even our most widely adopted file formats are completely untested.

When it comes to very long-term storage and retrieval, there is also a tendency to assume that someone else is thinking about it, and somewhere up in “the cloud,” it’s all being taken care of. For ordinary dangers like an occasional power outage, this assumption seems reasonable. But when thinking about safe storage and access over a century, the question becomes whether there is protection against extreme and unexpected events. There are three main companies that rule the market for cloud storage. There’s no historical reason to believe that those same companies will exist 100 years from now. There’s no reason to believe that those companies view their cloud storage operations like a long-term library for future generations. Archives can and do disappear, or become inaccessible. Neely-Cohen writes:

The cloud’s current data center regime is only designed for conditions of utter stability. The physical threats to data centers are not dissimilar to the threats faced by traditional libraries, with a few additions: fire, water, physical destruction, neglect of maintenance, power failures, connection failures, theft, vandalism, and the constant forever need for software that works. During the writing of this piece, in July 2024, a Crowdstrike update bug caused archives that were using Microsoft Azure’s cloud storage services to lose access to their holdings. Natural disasters, wars, and political upheavals are all capable of causing immediate and irrevocable disruptions. … [I]t’s fairly certain any substantive nuclear exchange would render the cloud unusable. Even aside from such nightmare scenarios, the cloud is made possible by a relatively small number of undersea cables that require constant maintenance. Any blue water naval power already has the firepower and capability to severely damage global access to the internet, and thus the cloud. The global geographic distribution of data centers heavily tilts toward the U.S. and Europe.
The cloud is fairly centralized, because the companies that run it are fairly centralized.

Cloud storage requires paying someone, an outside entity, for as long as you are engaged in the act of storing. … Amazon S3 has tried to combat this by offering a storage class for slower but more permanent storage, “Glacier,” designed to be competitive with offline cold storage options, by separating storage pricing from retrieval pricing (their “Flexible Retrieval” option is $3.60 per TB as of November 2024). But you still have to pay them. Every month or every year. Forever. You can turn off the machines that you own for a while and then turn them back on, and everything you stored will still be there, but if you stop paying your cloud storage fee the data is gone, probably forever.

Of course, these issues of how information is accessed, consumed, and stored are much larger than my own individual journal, or the journals of the American Economic Association. I don’t think society is going back to paper as a mechanism of information dissemination and long-term storage–and that’s overall a good thing. But in specific contexts, we are very much still working through the habits and practices of interacting with the ever-evolving digital world.

Winter 2025 Journal of Economic Perspectives Freely Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided back in 2011–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Winter 2025 issue, which in the Taylor household is known as issue #151. Below that are abstracts and direct links for all of the papers. I plan to blog more specifically about some of the papers in the next few weeks, as well.

________________

The 2023 Merger Guidelines and Beyond

“The 2023 Merger Guidelines and the Arc of Antitrust History,” by Daniel Francis

In 2023, the federal antitrust agencies rewrote the nation’s flagship merger policy document, as part of a broader “Neo-Brandeisian” effort to bring about a deep reform of the antitrust system. The result—the 2023 Merger Guidelines—has been highly controversial: celebrated by some as a revolutionary advance, and criticized by others as a step back toward a benighted past. This article evaluates the 2023 guidance against the arc of antitrust’s modern history. It argues that the new guidance breaks a long trend of migration from structure toward welfare as the primary orientation of merger enforcement, but that it does so cautiously, by achieving a fraught ambiguity between welfarist and nonwelfarist policies. In inviting both revolutionary and evolutionary readings, the agencies have sacrificed clarity and discouraged beneficial deals, but they have also deferred—at least for now—a sharp conflict between those who would preserve antitrust’s governing paradigm and those who would remake it.

“Improving Economic Analysis in Merger Guidelines,” by Louis Kaplow

Merger review should reflect basic precepts of decision analysis, best practices in industrial organization economics, and teachings from related fields. Unfortunately, the analytical methods in modern merger guidelines fall short. Protocols violate standard prescriptions for information collection and decision-making, rely on a market definition paradigm that deviates significantly from core models of competitive interaction, fail to leverage central advances in understanding the efficiency consequences of mergers, and contravene or ignore fundamental dynamics relating to entry. This article elaborates correct analysis and contrasts it with that embodied in modern merger guidelines generally employed throughout the developed world, including the 2023 Merger Guidelines revision in the United States.

“Acquisitions to Enter New Markets,” by Carl Shapiro

How should antitrust enforcers treat acquisitions by successful firms to enter new markets? Should a major pharmaceutical company with an extensive sales and distribution network be permitted to acquire a popular drug that does not compete against any of the drugs it already owns? Throughout American economic history, expansion by successful firms into new markets has played a vital role in promoting competition and spurring economic growth. However, acquisitions to enter new markets also can harm competition by enabling monopolists to enlarge their empires. The 2023 Merger Guidelines break new ground by announcing that the US antitrust agencies will challenge mergers and acquisitions that “could enable the merged firm to extend a dominant position from one market into a related market,” but they say very little about how such acquisitions will be evaluated. This article explains how antitrust enforcers can use economic evidence and theory to distinguish between acquisitions to enter new markets that are harmful and those that are beneficial.

The US Safety Net

“Two Histories of the Public Safety Net,” by Christopher Howard

Although poverty in the United States has declined over the last half century, it remains a serious problem. This article charts the historical development of the public safety net, starting with means-tested programs and then adding inclusive social insurance programs. Over time, programs targeted at people with low incomes gradually shifted from the local to the state to the national level. Nevertheless, they remained politically vulnerable as policymakers questioned the deservingness of recipients and often tried to limit cash welfare. Those concerns were less salient with inclusive programs like Social Security and Medicare, which expanded rapidly between 1950 and 1980, largely to the benefit of older Americans. The concluding section highlights recent trends that challenge the supposed weakness of means-tested programs and strength of inclusive programs.

“Did Welfare Reform End the Safety Net as We Knew It? The Record since 1996,” by Lucie Schmidt, Lara Shore-Sheppard, and Tara Watson

This paper examines the evolution of the safety net for low-income families since welfare reform in 1996 promised to “end welfare as we know it.” The total package of supports has become substantially more generous, but has changed in character. Support has shifted away from monthly cash transfers towards tax credits and in-kind benefits, and has expanded for working families while declining for those without earnings. Resources available to married-parent families have expanded, whereas those for adults without dependents remain scant. We also document that, despite expanded state flexibilities, variability in generosity across states did not grow due to simultaneous expansions of federal food assistance and tax credits. Overall, these changes reflect ongoing contention over two key policy issues. First, what is the appropriate trade-off between promoting work versus preventing material hardship? Second, what is the appropriate role for states versus the federal government in determining safety net generosity?

“Administrative Burdens in the Social Safety Net,” by Pamela Herd and Donald Moynihan

Administrative burdens shape people’s experiences of, and access to, social safety net programs. They can undermine the goals these programs are trying to achieve. Such burdens are the experience of policy implementation as onerous, and arise via learning costs (knowing about the existence of and requirements of public services), compliance costs (time and effort spent dealing with bureaucratic demands, such as paperwork and documentation), and psychological costs (emotional responses to citizen-state interactions). Such frictions can substantially limit eligible peoples’ access to public services they want, would benefit from, and are legally entitled to receive. Those with the fewest resources, and the greatest needs, may struggle more to overcome burdens; the frictions thereby reinforcing existing inequality. As a research approach, administrative burden offers an intuitive and accessible way for policy actors and researchers to improve state capacity and the delivery of public services.

Articles

“Getting Infrastructure Built: The Law and Economics of Permitting,” by Zachary Liscow

Given the benefits to economic growth and economic mobility, and the need to transition to green energy, getting infrastructure built is an urgent issue. I first review the evidence on the costs and benefits of the current regime of government approvals for such building: in the US, permitting is slow, infrastructure is expensive, and environmental outcomes are not particularly good. I propose a framework for reform with two dimensions: the power of the executive branch to decide and its capacity to plan. After considering reform possibilities, I propose that reforming both dimensions could lead to a possible “green bargain” that benefits efficiency, the environment, and democracy.

“A Practical Guide to Shift-Share Instruments,” by Kirill Borusyak, Peter Hull, and Xavier Jaravel

A recent econometric literature shows two distinct paths for identification with shift-share instruments, leveraging either many exogenous shifts or exogenous shares. We present the core logic of both paths and practical takeaways via simple checklists. A variety of empirical settings illustrate key points.

“Tax Privacy,” by Joel Slemrod

Implementing an equitable and efficient tax system requires that the government have access to certain information about taxpayers. If the demand for privacy implies limiting government’s access to relevant information, it constrains the extent to which a tax system can achieve these goals. In this way, demand for limiting government access to information imposes social costs. This article discusses the aspects of privacy that matter, including leaks, and explores certain countries’ public disclosure of taxpayer information. It then discusses what is known about, and the difficulties of ascertaining, how taxpayers value tax privacy, whether offering choices to taxpayers about information revelation can ease the tension between privacy and otherwise optimal tax policy, and uses the wealth tax as an example of the policy tradeoffs that arise.

“Philipp Strack, 2024 Clark Medalist,” by Drew Fudenberg

The 2024 John Bates Clark Medal of the American Economic Association was awarded to Philipp Strack, Professor of Economics at Yale University, for his pathbreaking contributions to the study of individual decision making, which have introduced new techniques, improved our understanding of important economic phenomena, and helped spark a new wave of research on the economics of information while building bridges between modern economic theory and a wide range of adjacent disciplines. This article summarizes some of Philipp’s papers, and explains how they build on and improve previous work.

“Recommendations for Further Reading,” by Timothy Taylor


Dangers of Rising Federal Debt

When talking about dangers of rising US government debt, I’ve found that at least some people who are concerned about the large debt want to hear a “sword of Damocles” story: that is, the federal debt is poised above our economy, held only by a thread, and even a small change could cause it to fall and wreak havoc on us all. In less picturesque terms, these folks want a plausible story about how the US economy is about to follow in the path of Argentina or Greece.

The converse is that if you don’t have a “sword of Damocles” story, then others will argue that concern about federal debt is overrated. This view seems based on a belief that if there isn’t the immediate threat of a dire catastrophe, then the problem can be ignored for now.

But of course, there are lots of real-world problems that come upon a person, or a nation, more slowly. You can spend a decade or two not exercising and overeating, often with no catastrophic effects in that time–but the negative consequences for health are nonetheless real. A nation can spend a few decades underperforming in some area, perhaps K-12 education or national defense preparedness, and while the effects may not be catastrophic in the near-term, negative consequences over time will be real as well.

In this spirit, the costs and dangers of rising federal debt can be divided into the ordinary and the extraordinary. Wendy Edelberg, Benjamin Harris, and Louise Sheiner provide such a perspective in “Assessing the Risks and Costs of the Rising US Federal Debt” (Economic Studies at Brookings, February 2025).

For perspective, here’s the standard figure showing the trajectory of the US government debt/GDP ratio. It’s now approaching the previous all-time high, which was the level of debt to finance World War II, and it’s projected to keep going up. Looking at data for the last couple of decades, you can see the jump in debt in response to the Great Recession, and also in response to the pandemic recession. The baseline for the future path of debt is based on current law–that is, it doesn’t include an event like an economic, health, or political crisis in the next two decades that leads to an additional surge of deficit spending.

The ordinary dangers of high and rising debt happen because higher government debt leads to higher consumption and less saving. The Brookings authors explain:

Deficits are costly to future generations to the extent they reduce national saving. A reduction in saving can reduce private investment, leaving a smaller capital stock (known as “crowd out”), higher interest rates, and lower GDP in the future. A reduction in national saving can also induce an influx of foreign capital; these foreign flows offset the impact of deficits on the domestic capital stock, GDP, and interest rates but increase the foreign ownership of U.S. assets. In either case, deficits mean that national wealth (and the net present value of future national income) is lower than it otherwise would be. … Put differently, much of today’s government borrowing benefits current taxpayers at the expense of future ones.

If these lower levels of national saving also bring with them a lower rate of productivity growth, then the economy will grow more slowly for this reason as well. The result of growing, say, 0.5% slower each year over a period of 20 years means that the US economy would continue to grow, but at the end of that period it would be about 10% smaller than otherwise.

To put that percentage in more concrete terms, that’s the equivalent of several trillion dollars not available for some combination of higher pay to workers and additional government programs. Also, if other countries in the global economy don’t make the same mistakes, then the US economy will be relatively smaller compared to its competitors a decade or two down the road.
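The compounding arithmetic behind that rough estimate is easy to check. Here is a back-of-the-envelope sketch; the 2% baseline growth rate is my own illustrative assumption, not a figure from the Brookings paper:

```python
# Back-of-the-envelope check: an economy that grows 0.5 percentage points
# slower per year for 20 years ends up roughly 10% smaller than otherwise.
# The 2% baseline growth rate is an illustrative assumption.
baseline_growth = 0.020
slower_growth = 0.015  # 0.5 percentage points slower
years = 20

baseline_size = (1 + baseline_growth) ** years
slower_size = (1 + slower_growth) ** years
shortfall = 1 - slower_size / baseline_size

print(f"After {years} years, the economy is {shortfall:.1%} smaller than otherwise")
```

With these assumed numbers, the shortfall comes out to roughly 9-10%, consistent with the “about 10% smaller” figure above; the result is not very sensitive to the assumed baseline rate.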

I would also add that sustained high levels of government borrowing can feed other problems as well. The high levels of government deficits during the pandemic were one of the causes feeding the surge of inflation in 2021-22. The high interest payments on past borrowing reduce future budgetary flexibility: for example, what the federal government pays in interest on past borrowing already exceeds what is collected from the corporate income tax, and in a few years will probably exceed the defense budget.

The extraordinary consequences of high government debt involve scenarios of a crisis. Edelberg, Harris, and Sheiner write:

What could spark a fiscal crisis? We see four main sources of risk. …

  1. Market disruptions unrelated to default: Demand or supply of Treasuries could abruptly shift for reasons unrelated to inflation or default risk such that interest rates spike, causing financial market disruptions that the Federal Reserve is unable to mitigate.
  2. Political brinkmanship and missed payments: Investors may fear the U.S. Treasury will miss payments due to political gridlock or brinkmanship, leading to a loss of credibility and default concerns.
  3. Loss of inflation control: The Federal Reserve could be perceived as abandoning its mandate to preserve price stability and instead allowing for hyperinflation.
  4. Strategic default amid a dramatic deterioration in the fiscal outlook: The long-term fiscal outlook could deteriorate so significantly and so sharply that investors abruptly worry about some form of strategic default, leading them to abandon Treasuries until policymakers make conditions more stable.

As we discuss below, we think that these scenarios are unlikely to occur, but it would be foolhardy to suggest that they couldn’t happen. In each case, the depth of the resulting crisis would depend critically on the ensuing response of policymakers.

As the Brookings authors point out, the mighty US economy is not Argentina (where the national economy is about the same size as that of the US state of Virginia) or Greece (where the national economy is about the same size as that of the US state of Nevada). For me, the ordinary costs of high budget deficits, including risks of moderate inflation and lack of budgetary flexibility, are sufficient reason to believe that a gradual effort to moderate and phase down the projected rise in the federal debt/GDP ratio is a good idea.

But I would not be too quick to dismiss more extraordinary and extreme scenarios. If you had asked me circa 2000 or 2005 if the US economy would experience a near-meltdown in September 2008, I would have put an extremely low probability on such an event. But a low probability at any given time, especially over a period of decades, doesn’t mean the risk can be prudently ignored.

Structure-Conduct-Performance: An Earlier Generation of Antitrust

The birth of US antitrust law dates back to the Sherman Anti-Trust Act of 1890. That law was so vague and poorly worded that it had only modest effects–although it did provide sufficient force to break up the Standard Oil Trust in 1911. The Clayton Antitrust Act of 1914, along with the creation of the Federal Trade Commission that same year, put teeth into antitrust enforcement. But at this time the issues of antitrust were typically discussed one firm or one industry at a time. It wasn’t until the 1930s that “industrial organization” developed as a field of economics, with the notion that concerns about how the structure of an industry could lead to a lack of competition, manifesting itself in higher prices, could be formulated in a general framework.

Although a number of economists were involved in developing these insights, the work of Joe S. Bain was especially prominent. Back when I was entering graduate school in economics in 1982, Bain was named a Distinguished Fellow of the American Economic Association. The prize citation read:

Joe S. Bain is the undisputed father of modern Industrial Organization Economics. (Edward S. Mason and Edward H. Chamberlin were its two grandparents; but Joe Bain was the father.) His classic text, Industrial Organization, published twenty-three years ago, gave the field the rationale and structure that it retains to this day. … Bain’s theoretical and empirical work on market concentration and the condition of entry, culminating in his “barnbuster,” Barriers to New Competition, offered the possibility of new, determinate solutions to the oligopoly problem, and added important new insights into the relationship between industry structure, behavior and performance …

The Structure-Conduct-Performance paradigm, as it was broadly known, was the starting point for industrial organization analysis from the 1950s up into the 1970s. There are some, including antitrust authorities in the Biden administration, who seem to believe it should still be the main starting point. But even back in the early 1980s, we were being taught that the “SCP paradigm” had become outdated. Matthew T. Panhans provides an overview of this evolution in “The Rise, Fall, and Legacy of the Structure-Conduct-Performance Paradigm” (Journal of the History of Economic Thought, 46: 3, September 2024).

The basic idea of SCP involved doing comparisons across industries. The theory suggested that as the structure of an industry became more concentrated, with fewer firms, the result would be less competition. The relatively small number of firms would find it easier to raise prices, either with implicit or explicit agreement. These firms would earn higher profits, while consumers would pay higher prices.

The theory surely seems plausible enough to justify investigation, and industrial organization economists back in the 1950s and ’60s spent much of their time trying to measure and estimate these relationships. But in seeking a common pattern across all industries, they soon ran into trouble.

One issue discussed by Panhans, and earlier by Bain, involved the rise of grocery store chains in the 1940s and 1950s and how these chains displaced small independent grocery stores. But as Bain pointed out, this displacement happened in substantial part because the large chains were more efficient. They had the scale to invest in supply chains that led to lower prices for consumers. In turn, the remaining independent groceries responded to incentives and became more efficient as well. As Bain recognized, it clearly wasn’t automatic that an industry structure of fewer firms automatically led to higher prices for consumers.

Thus, Bain supported an antitrust policy that would allow active competition between medium-sized firms that could take advantage of economies of scale. However, he also suggested that antitrust authorities should be empowered to break up very large firms just on the grounds that they were very large, without any particular evidence that the firm was raising prices. In turn, the large firm could offer as a legal defense that it was large because of economies of scale or technological efficiencies–and benefiting consumers as a result. The antitrust lawsuit to break up IBM, initiated in 1969, was a classic example of this approach. However, Supreme Court decisions in the 1960s commonly interpreted the antitrust law to mean that any movement toward greater concentration, even a merger between two small shoe companies or two small grocery store chains, should be presumptively illegal.

    There had always been academic challenges to the SCP approach, but two main concerns emerged in force by the 1970s. As Panhaus explains, one concern was that “structure-conduct-performance” has the causality backward: that is, it wasn’t that concentration of industry led to certain conduct by firms, but instead that innovative firms tended to succeed and gain market share. From this view, concentration should often be viewed as a sign of success, not a concern about exploitation of consumers. The other critique was that if a successful firm was earning high profits, it would typically attract new entry, which would tend to restore competition. From this point of view, antitrust authorities should focus on explicit price-fixing agreements between firms and on large mergers that led to near-monopoly outcomes, but otherwise get out of the way.

    My own sense is that while the old-school structure-conduct-performance approach focused heavily on “structure,” and specifically on whether a firm had a large market share, the new-school antitrust approach has come to focus on “conduct.” Thus, the Microsoft antitrust case from around the turn of the 21st century wasn’t primarily about whether Microsoft was large (spoiler alert: it was) but instead about whether Microsoft was taking advantage of its dominant position in operating systems to pressure people into using the Microsoft Internet Explorer browser, thus blocking the use of the Netscape Navigator browser. Of course, that particular browser battle does not look especially significant in retrospect.

    Similarly, the current antitrust case against Google is not primarily about whether Google is big (it is), but about whether Google is blocking competition from other search engines by entering into agreements with firms like Apple to become the default search engine on Apple’s smartphones. The recent antitrust case against Amazon is not over whether Amazon is big (it is), but over whether the ways in which Amazon lists third-party firms in its search results, and pressures them to use Amazon’s delivery service, should be viewed as anticompetitive practices. Still another issue is whether existing successful firms should be able to buy firms in different but nearby industries, as in the antitrust case scheduled to go to trial in a few months over whether Facebook acted in an anticompetitive manner by purchasing Instagram and WhatsApp.

    I don’t mean to take a position here on the merits of these cases, which I suspect may ultimately lead to negotiated settlements over details of the agreements at issue. My point here is that when people harken back to antitrust authorities breaking up Standard Oil, or IBM, or AT&T, they are channelling the old structure-conduct-performance paradigm, where the focus was to break up a structure. But modern antitrust is focused on the idea that society should want large and technologically successful firms to keep making innovative investments, and the goal of antitrust is to encourage investments in greater productivity by discouraging the large successful firms from a focus on blocking new competitors. It is reasonable to have controversy about just how to draw that line in particular cases.

    What Share of Federal Spending is Borrowed?

    In a recent year, or a typical year, what share of federal spending is borrowed deficit spending? Here’s a figure constructed from the ever-useful FRED website run by the Federal Reserve Bank of St. Louis. The numbers on the vertical axis can be read as percentages: that is, .2 is 20 percent, and so on.

    In 2024, 27% of all federal spending–a little more than a quarter–was borrowed as deficit spending. This is not an all-time high. In 2020, at the depth of the pandemic recession, 48% of all federal spending was borrowed. In 2009, at the depth of the Great Recession, 40% of all federal spending in that year was borrowed.
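The calculation behind these shares is straightforward: the borrowed share is just the deficit divided by total outlays. A quick sketch using approximate FY2024 figures (roughly $1.83 trillion deficit against $6.75 trillion in outlays):

```python
def borrowed_share(deficit, outlays):
    """Share of federal spending financed by borrowing:
    deficit divided by total outlays."""
    return deficit / outlays

# Approximate FY2024 values, in trillions of dollars.
deficit_2024 = 1.83
outlays_2024 = 6.75

print(round(borrowed_share(deficit_2024, outlays_2024), 2))  # 0.27
```

That ratio, about 0.27, is the “27% of all federal spending” figure quoted above.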

    To find higher shares before those episodes, you have to go back further in time. In 1943, at the depth of World War II, 70% of US government spending that year was borrowed. In 1932, in the Great Depression, 59% of government spending was borrowed.

    However, the 27% of federal spending that was borrowed in 2024 was a greater share than in any year from the end of World War II up to the Great Recession. In addition, the graph above suggests a worrisome trend: the annual deficits as a share of spending during bad years are getting larger, while the deficits during good years are not bouncing back as much. Remember, 2024 was not a year of a national emergency like a pandemic or a Great Recession. It was a year with a growing economy and relatively low unemployment. It’s a time when reflexive political demands for tax cuts and/or spending increases deserve a gimlet eye.