Dangers of Rising Federal Debt

When talking about dangers of rising US government debt, I’ve found that at least some people who are concerned about the large debt want to hear a “sword of Damocles” story: that is, the federal debt is poised above our economy, held only by a thread, and even a small change could cause it to fall and wreak havoc on us all. In less picturesque terms, these folks want a plausible story about how the US economy is about to follow in the path of Argentina or Greece.

The flip side is that if you don’t have a “sword of Damocles” story, then others will argue that concern about the federal debt is overrated. This view seems based on a belief that if there isn’t an immediate threat of dire catastrophe, then the problem can be ignored for now.

But of course, there are lots of real-world problems that come upon a person, or a nation, more slowly. You can spend a decade or two not exercising and overeating, often with no catastrophic effects in that time–but the negative consequences for health are nonetheless real. A nation can spend a few decades underperforming in some area, perhaps K-12 education or national defense preparedness, and while the effects may not be catastrophic in the near-term, negative consequences over time will be real as well.

In this spirit, the costs and dangers of rising federal debt can be divided into the ordinary and the extraordinary. Wendy Edelberg, Benjamin Harris, and Louise Sheiner provide such a perspective in “Assessing the Risks and Costs of the Rising US Federal Debt” (Economic Studies at Brookings, February 2025).

For perspective, here’s the standard figure showing the trajectory of the US government debt/GDP ratio. It’s now approaching the previous all-time high, which was the level of debt to finance World War II, and it’s projected to keep going up. Looking at data for the last couple of decades, you can see the jump in debt in response to the Great Recession, and also in response to the pandemic recession. The baseline for the future path of debt is based on current law–that is, it doesn’t include an event like an economic, health, or political crisis in the next two decades that leads to an additional surge of deficit spending.

The ordinary dangers of high and rising debt happen because higher government debt leads to higher consumption and less saving. The Brookings authors explain:

Deficits are costly to future generations to the extent they reduce national saving. A reduction in saving can reduce private investment, leaving a smaller capital stock (known as “crowd out”), higher interest rates, and lower GDP in the future. A reduction in national saving can also induce an influx of foreign capital; these foreign flows offset the impact of deficits on the domestic capital stock, GDP, and interest rates but increase the foreign ownership of U.S. assets. In either case, deficits mean that national wealth (and the net present value of future national
income) is lower than it otherwise would be. … Put differently, much of today’s government borrowing benefits current taxpayers at the expense of future ones.

If these lower levels of national saving also bring with them a lower rate of productivity growth, then the economy will grow more slowly for this reason as well. Growing, say, 0.5% slower each year over a period of 20 years means that the US economy would continue to grow, but at the end of that period it would be about 10% smaller than otherwise.
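The arithmetic behind that estimate is simple compounding. A quick sketch (treating the 0.5 percentage-point drag as independent of the baseline growth rate, which is a close approximation):

```python
# How much smaller is an economy that grows 0.5 percentage points
# slower each year for 20 years? To a close approximation, the ratio
# of the slower path to the baseline is (1 - 0.005)^20.
years = 20
drag = 0.005  # 0.5 percentage points of growth lost each year

ratio = (1 - drag) ** years   # slower-growth economy relative to baseline
shortfall = 1 - ratio         # fraction of GDP forgone
print(f"After {years} years, the economy is about {shortfall:.1%} smaller")
# → about 9.5%, i.e., roughly the 10% figure in the text
```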

To put that percentage in more concrete terms, that’s the equivalent of several trillion dollars not available for some combination of higher pay to workers and additional government programs. Also, if other countries in the global economy don’t make the same mistakes, then the US economy will be relatively smaller compared to its competitors a decade or two down the road.

I would also add that sustained high levels of government borrowing can feed other problems as well. The high levels of government deficits during the pandemic were one of the causes feeding the surge of inflation in 2021-22. The high interest payments on past borrowing reduce future budgetary flexibility: for example, what the federal government pays in interest on past borrowing already exceeds what is collected from the corporate income tax, and in a few years will probably exceed the defense budget.

The extraordinary consequences of high government debt involve scenarios of a crisis. Edelberg, Harris, and Sheiner write:

What could spark a fiscal crisis? We see four main sources of risk. …

  1. Market disruptions unrelated to default: Demand or supply of Treasuries could abruptly shift for reasons unrelated to inflation or default risk such that interest rates spike, causing financial market disruptions that the Federal Reserve is unable to mitigate.
  2. Political brinkmanship and missed payments: Investors may fear the U.S. Treasury will miss payments due to political gridlock or brinkmanship, leading to a loss of credibility and default concerns.
  3. Loss of inflation control: The Federal Reserve could be perceived as abandoning its mandate to preserve price stability and instead allowing for hyperinflation.
  4. Strategic default amid a dramatic deterioration in the fiscal outlook: The long-term fiscal outlook could deteriorate so significantly and so sharply that investors abruptly worry about some form of strategic default, leading them to abandon Treasuries until policymakers make conditions more stable.

As we discuss below, we think that these scenarios are unlikely to occur, but it would be foolhardy to suggest that they couldn’t happen. In each case, the depth of the resulting crisis would depend critically on the ensuing response of policymakers.

    As the Brookings authors point out, the mighty US economy is not Argentina (where the national economy is about the same size as that of the US state of Virginia) or Greece (where the national economy is about the same size as that of the US state of Nevada). For me, the ordinary costs of high budget deficits, including risks of moderate inflation and lack of budgetary flexibility, are sufficient reason to believe that a gradual effort to moderate and phase down the projected rise in the federal debt/GDP ratio is a good idea.

    But I would not be too quick to dismiss more extraordinary and extreme scenarios. If you had asked me circa 2000 or 2005 if the US economy would experience a near-meltdown in September 2008, I would have put an extremely low probability on such an event. But a low probability at any given time, especially over a period of decades, doesn’t mean the risk can be prudently ignored.

    Structure-Conduct-Performance: An Earlier Generation of Antitrust

    The birth of US antitrust law dates back to the Sherman Anti-Trust Act of 1890. That law was so vague and poorly worded that it had only modest effects–although it did provide sufficient force to break up the Standard Oil Trust in 1911. The Clayton Antitrust Act of 1914, along with the creation of the Federal Trade Commission that same year, put teeth into antitrust enforcement. But at this time the issues of antitrust were typically discussed one firm or one industry at a time. It wasn’t until the 1930s that “industrial organization” developed as a field of economics, with the notion that concerns about how the structure of an industry could lead to a lack of competition, manifesting itself in higher prices, could be formulated in a general framework.

    Although a number of economists were involved in developing these insights, the work of Joe S. Bain was especially prominent. Back when I was entering graduate school in economics in 1982, Bain was named a Distinguished Fellow of the American Economic Association. The prize citation read:

    Joe S. Bain is the undisputed father of modern Industrial Organization Economics. (Edward S. Mason and Edward H. Chamberlin were its two grandparents; but Joe Bain was the father.) His classic text, Industrial Organization, published twenty-three years ago, gave the field the rationale and structure that it retains to this day. … Bain’s theoretical and empirical work on market concentration and the condition of entry, culminating in his “barnbuster,” Barriers to New Competition, offered the possibility of new, determinate solutions to the oligopoly problem, and added important new insights into the relationship between industry structure, behavior and performance …”

    The Structure-Conduct-Performance paradigm, as it was broadly known, was the starting point for industrial organization analysis from the 1950s up into the 1970s. There are some, including antitrust authorities in the Biden administration, who seem to believe it should still be the main starting point. But even back in the early 1980s, we were being taught that the “SCP paradigm” had become outdated. Matthew T. Panhans provides an overview of this evolution in “The Rise, Fall, and Legacy of the Structure-Conduct-Performance Paradigm” (Journal of the History of Economic Thought, 46:3, September 2024).

    The basic idea of SCP involved doing comparisons across industries. The theory suggested that as the structure of an industry became more concentrated, with fewer firms, the result would be less competition. The relatively small number of firms would find it easier to raise prices, either with implicit or explicit agreement. These firms would earn higher profits, while consumers would pay higher prices.

    The theory surely seems plausible enough to justify investigation, and industrial organization economists back in the 1950s and ’60s spent much of their time trying to measure and estimate these relationships. But in seeking a common pattern across all industries, they soon ran into troubles.

    One issue discussed by Panhans, and earlier by Bain, involved the rise of grocery store chains in the 1940s and 1950s and how these chains displaced small independent grocery stores. But as Bain pointed out, this displacement happened in substantial part because the large chains were more efficient. They had the scale to invest in supply chains that led to lower prices for consumers. In turn, the remaining independent groceries responded to incentives and became more efficient as well. As Bain recognized, it clearly wasn’t automatic that an industry structure of fewer firms led to higher prices for consumers.

    Thus, Bain supported an antitrust policy that would allow active competition between medium-sized firms that could take advantage of economies of scale. However, he also suggested that antitrust authorities should be empowered to break up very large firms just on the grounds that they were very large, without any particular evidence that the firm was raising prices. In turn, the large firm could offer as a legal defense that it was large because of economies of scale or technological efficiencies–and benefiting consumers as a result. The antitrust lawsuit to break up IBM, initiated in 1969, was a classic example of this approach. However, Supreme Court decisions in the 1960s commonly interpreted the antitrust law to mean that any movement toward greater concentration, even a merger between two small shoe companies or two small grocery store chains, should be presumptively illegal.

    There had always been academic challenges to the SCP approach, but two main concerns emerged in force by the 1970s. As Panhans explains, one concern was that “structure-conduct-performance” had the causality backward: that is, it wasn’t that concentration of industry led to certain conduct by firms, but instead that innovative firms tended to succeed and gain market share. From this view, concentration should often be viewed as a sign of success, not a concern about exploitation of consumers. The other critique was that if a successful firm was earning high profits, it would typically attract new entry, which would tend to restore competition. From this point of view, antitrust authorities should focus on explicit price-fixing agreements between firms and on large mergers that led to near-monopoly outcomes, but otherwise get out of the way.

    My own sense is that while the old-school structure-conduct-performance approach focused heavily on “structure,” and specifically on whether a firm had a large market share, the new-school antitrust approach has come to focus on “conduct.” Thus, the Microsoft antitrust case from early in the 21st century wasn’t primarily about whether Microsoft was large (spoiler alert: it was) but instead whether Microsoft was taking advantage of its dominant position in operating systems to pressure people to use the Microsoft Internet Explorer browser, and thus blocking the use of the Netscape Navigator browser. Of course that particular browser battle does not look especially significant in retrospect.

    Similarly, the current antitrust case against Google is not primarily about whether Google is big (it is), but whether Google is blocking competition from other search engines by entering into agreements with firms like Apple to make Google the default search engine on Apple’s smartphones. The recent antitrust case against Amazon is not over whether Amazon is big (it is), but whether the ways in which Amazon lists third-party firms in its search results, and whether it pressures them to use Amazon’s delivery service, should be viewed as an anticompetitive practice. Still another issue is whether existing successful firms should be able to buy firms in different but nearby industries, like the antitrust case scheduled to go to trial in a few months over whether Facebook acted in an anticompetitive manner by purchasing Instagram and WhatsApp.

    I don’t mean to take a position here on the merits of these cases, which I suspect may ultimately lead to negotiated settlements over details of the agreements at issue. My point here is that when people harken back to antitrust authorities breaking up Standard Oil, or IBM, or AT&T, they are channelling the old structure-conduct-performance paradigm, where the focus was to break up a structure. But modern antitrust is focused on the idea that society should want large and technologically successful firms to keep making innovative investments, and the goal of antitrust is to encourage investments in greater productivity by discouraging the large successful firms from a focus on blocking new competitors. It is reasonable to have controversy about just how to draw that line in particular cases.

    What Share of Federal Spending is Borrowed?

    In a recent year, or a typical year, what share of federal spending is financed by borrowing? Here’s a figure constructed from the ever-useful FRED website run by the Federal Reserve Bank of St. Louis. The numbers on the vertical axis can be read as percentages: that is, .2 is 20 percent, and so on.

    In 2024, 27% of all federal spending–a little more than a quarter–was borrowed as deficit spending. This is not an all-time high. In 2020 at the depth of the pandemic recession, 48% of all federal spending was borrowed. In 2009 at the depth of the Great Recession, 40% of all federal spending in that year was borrowed.
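As a rough check on the 2024 figure, divide the deficit by total outlays. The dollar amounts below are approximate fiscal-year 2024 totals, stated here as assumptions; the figure in the text is built from the exact FRED series.

```python
# Share of federal spending financed by borrowing = deficit / outlays.
# FY2024 totals are approximations, in trillions of dollars.
deficit_2024 = 1.83   # approximate FY2024 deficit
outlays_2024 = 6.75   # approximate FY2024 net outlays

share_borrowed = deficit_2024 / outlays_2024
print(f"{share_borrowed:.0%} of federal spending was borrowed")  # → 27%
```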

    To find higher shares before those events, you have to go back much further in time. In 1943, at the depth of World War II, 70% of US government spending that year was borrowed. In 1932, in the Great Depression, 59% of government spending was borrowed.

    However, the 27% of federal spending that was borrowed in 2024 was a greater share than in any year from the end of World War II up to the Great Recession. In addition, the graph above suggests a worrisome trend: that is, the annual deficits as a share of spending during bad years are getting larger, while the deficits during good years are not bouncing back as much. Remember, 2024 was not a year of a national emergency like a pandemic or a Great Recession. It was a year with a growing economy and relatively low unemployment. It’s a time when reflexive political demands for tax cuts and/or spending increases deserve a gimlet eye.

    Warnings About Air Traffic Safety Two Months Before the Crash over the Potomac

    On January 29, a commercial flight collided with an Army helicopter over the Potomac River in Washington, DC, killing 67 people. Less than two months earlier, on December 12, 2024, the US Senate Committee on Commerce, Science, and Transportation held a “Subcommittee Hearing on U.S. Air Traffic Control Systems, Personnel and Safety.” The testimony offered at the hearing makes for grim reading, even if you didn’t know that a tragedy was around the corner.

    For example, Kevin Warsh of the Government Accountability Office described a just-published report called “Air Traffic Control: Urgent FAA Actions Are Needed to Modernize Aging Systems.” Here’s a flavor of the report:

    After a shutdown of the national airspace in 2023 due to an aging air traffic control (ATC) system outage, the Federal Aviation Administration (FAA) conducted an operational risk assessment to evaluate the sustainability of all ATC systems. The assessment determined that of FAA’s 138 systems, 51 (37 percent) were unsustainable and 54 (39 percent) were potentially unsustainable. Of the 105 unsustainable and potentially unsustainable systems, 58 (29 unsustainable and 29 potentially unsustainable systems) have critical operational impacts on the safety and efficiency of the national airspace … FAA had 64 ongoing investments aimed at modernizing 90 of the 105 unsustainable and potentially unsustainable systems; however, the agency has been slow to modernize the most critical and at-­risk systems. Specifically, when considering age, sustainability ratings, operational impact level, and expected date of modernization for each system, as of May 2024, FAA had 17 systems that were especially concerning. The investments intended to modernize these systems were not planned to be completed for at least 6 years. In some cases, they were not to be completed for at least 10 years. In addition, FAA did not have ongoing investments associated with four of these critical systems.

    Or here’s a sample of commentary from Dean Iacopelli, Chief of Staff of the National Air Traffic Controllers Association:

    FAA telecommunications are the backbone of the air traffic control system. The FAA needs extensive telecommunications services and networking capabilities to support the operation of the NAS [National Airspace System] and other agency functions. The FAA Telecommunications Infrastructure (FTI) program currently provides these services and networking capabilities through a service-based contract, in which the service provider continually updates the underlying technologies. The majority of FTI’s telecommunication lines function on an aging copper wire infrastructure, which is outdated and no longer readily supported, as many local phone companies are discontinuing service to copper wire equipment throughout the country. As a result, air traffic controllers throughout the U.S. are experiencing a steady increase in unexpected outages of air traffic systems. Recent ground stops at airports in the New York and Washington, D.C., areas highlight the risks and consequences of telecommunication network failures. To date, there are over 30,000 services at over 4,600 FAA sites that must transition away from copper wire and onto a fiber optic cable network in order to avoid severe service disruptions and extensive flight delays. …

    Even before the FAA’s telecommunications crisis, the FAA was working to mitigate the risks associated with its faltering Notice to Airmen (NOTAM) system, which has been the source of significant disruptions throughout the NAS. The NOTAM system is vital for sharing and disseminating safety-critical flight information between both air traffic controllers and pilots. However, in early 2023, a complete failure of the NOTAM system caused a nationwide ground stop, causing significant flight delays. … Much like the FAA’s looming telecommunications crisis, the NOTAM crisis was not at the top of any F&E [facilities and equipment] priority lists until after the 2023 collapse resulted in cascading nationwide delays and ground stops. …

    Automation platforms such as ERAM and STARS deliver flight plan and surveillance information to air traffic controllers on a real-time basis. These platforms are the foundational systems that keep our NAS operating safely 24-hours a day, 7-days a week, 365-days a year. Over the past four years, air traffic levels have continued to grow at a rate of 6.2% per year post-COVID, excluding new entrant operations. Air traffic automation systems have components reaching end-of-life that need to be replaced. Due to historically flat F&E funding, as a result of the FAA requesting less than it needs to maintain the system, air traffic automation has been unable to meet the growing needs of the NAS reducing the efficiency of the system. In the near future, controllers will have to rely on this inadequate technology to maintain the safety and efficiency of the NAS.

    Or here is one example from David Spero, representing the Professional Aviation Safety Specialists who “install, maintain, repair and certify the radar, navigation, communication and power equipment that comprises the U.S. National Airspace System (NAS).”

    [T]he results of the survey [of PASS members] indicate top concerns are related to aging equipment, cumbersome procedures, parts that are unreliable or unavailable, system complexity, and staffing and training of the workforce. At the rapid pace with which technology changes, the FAA is getting further behind in replacing aging systems. … The biggest challenge is a lack of vision on behalf of the agency. The length of time it takes the FAA to implement new systems is directly related to the fact that current NAS systems and equipment are becoming obsolete. … For instance, many facilities are still relying on an aging communications technology known as Time Division Multiplexing (TDM). TDM is a method of combining multiple data streams into a single communication channel by allocating specific time slots for each data stream. Use of this antiquated technology is not only inefficient, but it is unnecessarily costly. Telecommunication companies now use carrier ethernet and are not required by the Federal Communications Commission to support TDM technology. … Unfortunately, the FAA is still relying on TDM and is being charged a premium by communications companies that no longer regularly use the technology.

    There are many additional complaints. The underlying issue here may be a structural one: the FAA is the provider of air navigation services, but it is also responsible for overseeing its own performance. In addition, the FAA budget goes through a political process, so instead of deciding what needs to be done, raising the money, and doing it, the FAA is at the mercy of the budget churned out by Congressional committees. Marc Scribner, Senior Transportation Policy Analyst at the Reason Foundation, points out that many other countries have decided in recent decades that this model doesn’t work. Instead, these other countries set up the provider of air traffic navigation services as a separate company: sometimes government-owned, sometimes a nonprofit, sometimes a for-profit. The company is a separate financial entity: it has the power to charge fees to airlines and airplanes, and the power to sell bonds to raise capital if needed. The role of the government is then limited to overseeing this company. Scribner argues:

    The United States was once the global leader in airspace management. However, in recent decades, we have fallen behind peer countries that have modernized their air traffic control practices and technologies. … The status-quo ANSP [air navigation services provider] model in the United States was historically the dominant model globally, whereby air traffic control was provided by a civil aviation authority within the transport ministry. That model has undergone major change since 1987 outside of the United States, starting when the government of New Zealand removed its air traffic control system from the transport ministry by restructuring it as Airways New Zealand, a self-supporting government corporation. Within 10 years, more than a dozen other countries had followed suit.

    Separating the provision of air navigation services from the civil aviation authority and putting the ANSP at arm’s length from its safety regulator, like all the other key players in aviation—airlines, business aviation, general aviation, airframe manufacturers, engine producers, pilots, mechanics, and so forth—is now the globally recognized best practice. For more than two decades, this has been International Civil Aviation Organization (ICAO) policy. The United States is among the last industrialized countries that have not taken this step to eliminate the fundamental conflict of interest of having an aviation regulator also operate a service it is tasked with regulating.

    The revenue source for ANSPs operated as public utilities is globally accepted cost-based user fees in accordance with the airport and air traffic control charging principles promulgated by ICAO. Prior to the conversion of these ANSPs to public utilities, those revenues were nearly always paid by airlines and other airspace users to the respective national governments. In most cases, once an ANSP has been converted to a utility, the user-fee revenue flows directly to the ANSP as its primary source of revenue. This makes it possible for the ANSPs to issue revenue bonds based on their projected revenue streams, just as airports do today in the United States and elsewhere. It is through their predictable streams of revenue that come directly from users that ANSPs outside the United States can successfully finance large-scale capital modernization efforts.

    Globally, three ANSPs have been moved out of the government entirely under either an independent nonprofit user cooperative model or as partially privatized companies. Another 55 operate as wholly owned government corporations. Just 19—mostly developing countries, but also including the United States, Japan, and Singapore—operate as part of legacy civil aeronautics authorities that also regulate aviation safety. ANSPs that operate as public utilities funded by user fees now number 62 and serve 83 countries globally.

    My understanding is that past efforts to reorganize the FAA on this alternative model have failed in Congress not because of opposition from large airlines, but because of concerns from smaller airlines and those who fly smaller planes, who fear that running the FAA on user fees would increase their costs. Whatever the political reason, one would hope that the tragic mid-air collision over Washington could serve as a wake-up call. On this issue, it’s time to run over some special interests and create the organizational structure that will allow rapid modernization of America’s air traffic control systems.

    OECD Survey of Adult Skills: Where the US Stands

    The Survey of Adult Skills was carried out across 31 mainly high-income economies in 2023. It’s a survey that’s done by having actual interviewers meet people in their homes. For the US, the sample size was 3,765, which may not sound like much, but it’s worth remembering that a typical Gallup poll is only about 1,000 people. As long as you remember that the results should be interpreted as plus-or-minus a few percentage points, you can learn something from them.

    The results of the survey were published by the OECD as “Do Adults Have the Skills They Need to Thrive in a Changing World? Survey of Adult Skills 2023” in December 2024. Here, I’ll focus on putting some of the US results in context.

    The survey focuses on three types of skills: numeracy, literacy, and problem-solving. The skills tend to be correlated across categories: that is, if a country is good at one skill, it tends to be good at others. Here’s a figure showing numeracy and literacy scores. As you can see, the US falls well below average on numeracy scores, and slightly below average on literacy.

    Perhaps more troubling is that within the US scores, the gap between the 90th and the 10th percentile is either widest, or close to widest, across countries. In other words, the US average score is made up of both exceptionally high-performing and low-performing scores. The bars in the figure show the gap between 90th and 10th percentile scores, and you can see the US bar graphs on the far left.

    This matters. The world economy is evolving toward higher skills, which then can be combined with improved technology. A country in which adults are highly unequal in skills will be unequal in other ways as well. Here’s one more figure from the report, this one showing patterns of the tasks performed in US jobs over time. The intensity of “routine” tasks is falling. The intensity of “social” and “nonroutine analytical” tasks has generally been rising. Those who are only equipped for routine tasks are going to have a hard time in the US job market.

    Why Does February (Usually) Have 28 Days?

    I understand why the calendar adds an extra day to February every four years. The revolution of the earth around the sun is approximately 365 and one-quarter days. Every four years, that adds up to one additional day, plus some extra minutes. The modest rounding error in this calculation is offset by steps like dropping the extra day of leap year for years ending in “00.”
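The full Gregorian rule behind those offsetting steps, including the “00” exception (century years count as leap years only when also divisible by 400), fits in a few lines. A minimal sketch:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: every fourth year is a leap year, except
    century ("00") years, which qualify only if divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2024))  # True: divisible by 4, not a century year
print(is_leap_year(1900))  # False: century year not divisible by 400
print(is_leap_year(2000))  # True: century year divisible by 400
```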

    But my question is why February has only 28 days in other years. After all, January has 31 days and March has 31 days. If those two months each donated a day to February, then all three months could be 30 days long, three years out of four, and February could be 31 days in leap years. Every other month is either 30 or 31 days. Why does February only get 28 days?

    (This post is republished, with minor changes, from February 29 a year ago.)

    The answer to such questions leads to a digression back into the history of calendars. In this case, Jonathan Hogeback writing at the Britannica website tells me, it seems to settle on the Roman king Numa Pompilius back around 700 BCE, before the start of the Roman Empire. The ancient Roman calendar of that time had a flaw: it didn’t have nearly enough days. As Hogeback writes:

    The Gregorian calendar’s oldest ancestor, the first Roman calendar, had a glaring difference in structure from its later variants: it consisted of 10 months rather than 12. In order to fully sync the calendar with the lunar year, the Roman king Numa Pompilius added January and February to the original 10 months. The previous calendar had had 6 months of 30 days and 4 months of 31, for a total of 304 days. However, Numa wanted to avoid having even numbers in his calendar, as Roman superstition at the time held that even numbers were unlucky. He subtracted a day from each of the 30-day months to make them 29. The lunar year consists of 355 days (354.367 to be exact, but calling it 354 would have made the whole year unlucky!), which meant that he now had 56 days left to work with. In the end, at least 1 month out of the 12 needed to contain an even number of days. This is because of simple mathematical fact: the sum of any even amount (12 months) of odd numbers will always equal an even number—and he wanted the total to be odd. So Numa chose February, a month that would be host to Roman rituals honoring the dead, as the unlucky month to consist of 28 days.

    This discussion does explain why February would be singled out, since it was the month of rituals honoring the dead. In Numa’s calendar, the 355-day year would be made up of 11 months that had the lucky odd numbers of 29 or 31 days, plus unlucky February.
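Numa’s totals check out arithmetically. The standard reconstruction, stated here as an assumption since the passage gives only the 355-day total, has four 31-day months, seven 29-day months, and a 28-day February:

```python
# Numa's 355-day year: the common reconstruction is four months of 31
# days, seven of 29, and February at 28. (This 4/7 split is assumed
# here; the quoted passage gives only the 355-day total.)
month_lengths = [31] * 4 + [29] * 7 + [28]

assert len(month_lengths) == 12
assert sum(month_lengths) == 355  # matches the lunar-year total above

# The parity point in the quote: twelve odd month-lengths would sum to
# an even number, but Numa wanted an odd total, so one month had to be
# even. February alone took the "unlucky" even length.
odd_months = sum(1 for d in month_lengths if d % 2 == 1)
print(odd_months)  # 11
```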

    The discussion also explains why months that start with the prefix “Oct-” or eight, “Nov-” or nine, and “Dec-” or ten, are actually months 10, 11, and 12 in the calendar. Those names were originally part of a 10-month calendar year.

    But one question remains unanswered: Why did the Romans of that time view odd numbers as lucky, compared with unlucky even numbers? I suppose that explaining any superstition is hard, but I’ve never seen a great explanation. A Dartmouth course on “Geometry in Art and Architecture” describes Pythagorean feelings about odd and even numbers. For those of you keeping score at home, Pythagoras lived about two centuries after Numa Pompilius. The Dartmouth course material summarizes aspects of “Pythagorean Number Symbolism”:

    Odd numbers were considered masculine; even numbers feminine because they are weaker than the odd. When divided they have, unlike the odd, nothing in the center. Further, the odds are the master, because odd + even always give odd. And two evens can never produce an odd, while two odds produce an even. Since the birth of a son was considered more fortunate than birth of a daughter, odd numbers became associated with good luck.

    Various mentions of the luckiness of odd numbers recur over time. A few centuries later in the first century BCE, the poet Virgil has the character Alphesiboeus (a shepherd who sings about love rituals) say in Eclogue VIII (from the A.S. Kline translation):

    Bring Daphnis home, my song, bring him home from town.

    First I tie three threads, in three different colours, around you

    and pass your image three times round these altars:

    the god himself delights in uneven numbers.

    Bring Daphnis home, my song, bring him home from town.

    Or leaping ahead a millennium and a half, at the start of Act V of The Merry Wives of Windsor, Shakespeare has Falstaff say:

    Prithee, no more prattling. Go. I’ll hold. This
    is the third time; I hope good luck lies in odd numbers.
     Away, go. They say there is divinity in odd
     numbers, either in nativity, chance, or death.
    Away.

    While I acknowledge this history of a belief in odd numbers, as a person born on an even day of an even month in an even year, I’m not predisposed to accept it. But it’s interesting that modern photographers have a guideline for composing photographs called the “rule of odds.” Rick Ohnsman at the Digital Photography School, for example, describes it this way:

    This is where the rule of odds comes into play, a deceptively simple yet powerful tool in your photographic arsenal. It’s all about arranging your subjects in odd numbers to craft compositions that are naturally more pleasing to the eye. Unlike more static guidelines, the rule of odds offers a blend of structure and organic flow, making your images both aesthetically pleasing and impressively compelling.

    The revised calendar of Numa Pompilius couldn’t last. With only 355 days, it didn’t reflect the actual period of the earth revolving around the sun, and thus led to further revisions which are a story in themselves.

    But when you think about it, the question of February having 28 days all goes back to Numa Pompilius and the superstitions about odd numbers. The modern calendar has 365 days in a typical year. You might think that the obvious way to divide this up would be to start off with 12 months of 30 days, and then add five days. Indeed, the ancient Egyptians had a calendar of this type, with five “epagomenal” or “outside the calendar” days added each year.

    The preference over the last two millennia, at least since the time of Julius Caesar, is to have 12 months, with a few of them being a day longer. But even so, why not in a typical year have five months of 31 days, and the rest with 30? The “problem,” I think, is that most months would then have unlucky totals of an even number of days. By holding February to 28 days rather than 30, you can redistribute two days from February and have 31 days in January and March. Thus, you can have only four months with an even total of 30 days every year (“Thirty days hath September, April, June, and November …”), and seven months always with the luckier odd total of 31 days. In leap years, when February has 29 days, then eight months have an odd number of days. I think this makes February 29 a lucky day?
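    The day-count arithmetic in the paragraph above can be checked the same way (again a small illustration of my own):

```python
# Month lengths in a common (non-leap) year of the modern calendar.
common_year = {
    "January": 31, "February": 28, "March": 31, "April": 30,
    "May": 31, "June": 30, "July": 31, "August": 31,
    "September": 30, "October": 31, "November": 30, "December": 31,
}

total = sum(common_year.values())
odd_months = [m for m, days in common_year.items() if days % 2 == 1]

print(total)            # 365
print(len(odd_months))  # 7 months of 31 days; only 4 of 30, plus February

# In a leap year, February's 29 days push the odd-length count to 8.
leap_year = dict(common_year, February=29)
odd_in_leap = sum(1 for days in leap_year.values() if days % 2 == 1)
print(odd_in_leap)      # 8
```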

    Modern China, the Old USSR, and American Attitudes about Trade

    The GATT, formally known as the General Agreement on Tariffs and Trade but informally known as the Gentleman’s Agreement to Talk and Talk, was first signed by 23 countries back in 1947. Over the decades, all that talking led to a substantial decrease in tariffs all around the world. By 1994, the GATT morphed into the World Trade Organization. At that time, it had about 125 countries, accounting for about 90% of world trade. From a free trade perspective, it was a considerable success.

    Here’s my hypothetical question: Would the GATT have been able to expand the parameters of free trade around most of the world if it had also included the USSR?

    Of course, this did not actually happen. The old Soviet Union perceived GATT as a club of geopolitical and capitalist opponents. In 1949, it started COMECON, the Council for Mutual Economic Assistance, as its counterbalance to the GATT. The original members were in eastern Europe: along with the Soviet Union, it included Bulgaria, Czechoslovakia, Hungary, Poland, Romania and later expanded to include Albania, East Germany, Mongolia, Cuba, and Vietnam. The Soviet Union had its own notion of “comparative advantage” and “gains from trade,” which was that it would organize global trade with non-Soviet countries having only a few major export industries, thus making it harder for those countries to become independent.

    Back in the day, the US had a fairly small amount of trade in a fairly small number of goods with the old Soviet Union: for example, we bought Soviet oil and sold them grain. But even though some prominent economists argued back in the 1960s and 1970s that the Soviet economy would outgrow the US economy, I don’t know of a time when American manufacturing workers felt as if their jobs were endangered by a flood of lower-cost imported cars or appliances or steel from Russia. The US worries of the 1970s and 1980s were about trade with Japan, or maybe Korea, but not the Soviet Union.

    Nonetheless, imagine an alternative global economy in which the USSR was part of the GATT during the Cold War: say, after Russia invaded Hungary in 1956, or after the Sputnik launch in 1957, or after the Cuban missile crisis in 1962. Would it have been politically possible to sustain a global free trade movement with a growing global membership, like GATT, with the US and the Soviet Union both as members? I suspect not.

    Now make the leap to the current day. The US and China have not yet had the equivalent of the old Soviet invasion of Hungary, nor a Sputnik moment (although the recent DeepSeek AI from China may come close), nor a direct confrontation like the Cuban missile crisis. But as the level of geopolitical confrontation rises, the pressures on international trade are rising as well.

    Indeed, there’s evidence that for many Americans, worries about international trade in general are actually worries about conflict with China in particular. Germany has had enormous trade surpluses for decades, and Japan has continued to have substantial trade surpluses as well, but it is China’s trade surpluses that have gotten the attention.

    In survey data on feelings about trade, Americans are something of a muddle. The answers we give depend on the questions that are asked. For example, a national survey last summer found that a strong majority of 63% favors increasing global trade, and a similarly strong majority of 58% favors having US firms “manufacture and make everything that we need within this country.” However, Americans don’t want to pay substantially higher prices for US products, if imports are cheaper. A majority of Americans are worried about the trade deficit, but if told that the trade deficit represents money invested in the United States, they are OK with it.

    In particular, Americans are likely to report that trade with China is “unfair.”

    In her own surveys, Laura Alfaro at Harvard Business School has also found that when trade with China is mentioned, any other positive attitudes about trade more-or-less vanish, because all people think about is loss of jobs.

    Back in the early 1990s, when Vice-President Al Gore was defending the proposed North American Free Trade Agreement in a prime-time televised debate with Ross Perot, one of the arguments was that Mexico’s economy was just so much smaller than that of the United States, so that fears of trade with Mexico were overblown. Back in 1980, when China began its economic reforms, the US economy was 15 times as large as that of China; by 2001, when China became a member of the World Trade Organization, the US economy was about 8 times as large as that of China; by 2023, the US economy was less than twice as large as China’s–about 60% larger (measured by current US dollars).

    On a per capita basis, the US per capita GDP is still about six times as large as per capita GDP in China. But China’s population is of course much larger, and in global affairs, size matters. At least some of the official pronouncements from China suggest that it would like to jolt its economy out of the doldrums with a renewed surge of exporting. But a rate of Chinese export growth that was possible back in 1980 or 2001, given the relatively small size of China’s economy at that time, would be wildly disruptive to the rest of the global economy today. Moreover, the size of China’s economy is correlated with its military spending and defense posture.

    I’m in general a big supporter of free trade, as readers of this blog know. But economics happens against a backdrop of politics. I don’t think the GATT could have survived back in the 1950s and 1960s and 1970s if it had included both the US and the USSR. Perhaps if China took steps toward emphasizing domestic-driven economic growth, took steps to enforce intellectual property agreements, and backed off on threatening sounds about the South China Sea, then a trade agreement including the US and China could be sustained. But that feels unlikely.

    Looking ahead, it seems like advances for free trade will be driven by technology, both ongoing reductions in transportation costs and in particular how the internet has both connected buyers and sellers around the world and made it possible to buy and sell services across national borders. When it comes to trade agreements, China’s presence in the World Trade Organization is one more difficulty for an organization that already was hobbled. Thus, trade agreements seem likely to be regional and bilateral, instead–and the agreements may be as full of rules and restrictions as they are attempts to reduce barriers to trade.

    The Child Penalty: An International View

    It’s well-known that when a couple has a child, the average woman experiences a “child penalty” in labor market outcomes, while outcomes for the man are largely unchanged. For a discussion of this pattern using US data, here’s an article by Jane Waldfogel from back in 1998 in the Journal of Economic Perspectives. As that paper points out: “As the gender gap in pay between women and men has been narrowing, the ‘family gap’ in pay between mothers and nonmothers has been widening.”

    This pattern is widespread around the world. Henrik Kleven, Camille Landais, and Gabriel Leite-Mariante consider data for 134 countries in “The Child Penalty Atlas” (published online in The Review of Economic Studies). For those who don’t have enough caffeine in their system at present to tackle the academic paper, the authors have set up a “Child Penalty Atlas” website, with a useful overview of method and findings.

    Here’s the data problem they face. For a number of countries, there is fairly comprehensive annual data on labor market outcomes and births. Thus, a researcher can track a basic labor market outcome like whether someone is in the labor force or not, and how the pattern shifts when a couple’s first child is born. Here’s a relatively common pattern using data from Chile. In the years leading up to a first child, both men and women are more likely to be holding jobs (perhaps because they are leaving school). But when the first child is born, the employment rate for women drops off, while that for men continues rising a bit, but mainly levels off.

    However, many countries do not have annual data. Instead, they have occasional data from a government census or household survey. The researchers then take this approach:

    In those cases, we know the age of people’s oldest child, so we know what happens to women and men’s employment after having children. But because we do not observe the same people over time, we do not know what their outcomes were before they had children. How do we address this? In a nutshell, we ‘match’ each observed individual who has just had a first child (i.e., they are at t=0) to a childless person with similar characteristics who is n (n varying from 1 to 5) years younger. We then assume that this childless person will have a child in n years from now. Effectively, we create a population of “future parents” from the population of people who don’t have children and who are very similar to the actual parents we observe.

    Is this approach a sensible one? You can check it. Take the countries like Chile that have both kinds of data: annual data and occasional census/survey data. Apply this method of choosing people who are similar in observed characteristics based on the occasional data. Then look at the annual data and check whether this method offers accurate projections. It turns out that the method works pretty well.
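    The matching step the authors describe can be sketched in pseudo-panel form. The following is a toy illustration of the idea only, not the authors’ code; the variable names and matching characteristics (sex, education, age) are my own assumptions:

```python
# Sketch of the "future parents" pseudo-panel idea: match each new parent
# observed at the birth of a first child (event time t=0) to a similar
# childless person n years younger, whose current outcomes stand in for
# the parent's own outcomes at event time t = -n.

def match_future_parents(new_parents, childless, n):
    """Match on sex and education, requiring the childless person to be
    exactly n years younger; their employment proxies the parent's
    employment n years before the first birth."""
    matches = []
    for p in new_parents:
        for c in childless:
            if (c["sex"] == p["sex"]
                    and c["education"] == p["education"]
                    and c["age"] == p["age"] - n):
                matches.append({
                    "sex": p["sex"],
                    "employed_before": c["employed"],  # proxy outcome at t=-n
                    "employed_after": p["employed"],   # observed at t=0
                })
                break  # take the first suitable match for this sketch
    return matches

# Toy data: a mother who left employment at the birth, a father who did not.
new_parents = [
    {"sex": "F", "education": "college", "age": 30, "employed": 0},
    {"sex": "M", "education": "college", "age": 32, "employed": 1},
]
childless = [
    {"sex": "F", "education": "college", "age": 28, "employed": 1},
    {"sex": "M", "education": "college", "age": 30, "employed": 1},
]

print(match_future_parents(new_parents, childless, n=2))
```

    In the toy data, the matched pre-birth employment is 1 for both, while post-birth employment drops to 0 for the mother only, which is exactly the kind of before/after contrast the child penalty estimates are built from.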

    The result suggests that child penalties vary a lot across countries. As the map shows, the reduction in women’s labor force participation after a first birth is very low in parts of Africa as well as China and parts of east Asia; at an intermediate level in the US, Canada, Russia, and parts of Europe like France; and higher in Latin America, parts of the Middle East, and parts of Europe.

    In many high-income countries, the child penalty explains nearly 100% of the gap in labor force participation between men and women: for example, it explains 84% of the gap in the United States, 95% in Canada, 97% in Germany, and more than 100% of the gap in Sweden. For many other countries around the world, the child penalty is only part of the labor force participation gap: in some cases, because the child penalty is so small (as in certain countries in Africa and Asia), and in other cases because the gender gap in labor force participation is so large (as in Latin America and the Middle East).

    There is of course an ongoing argument in the United States over the extent to which government programs that support first-time parents, from work leave to child care, might reduce the gender gap in labor force participation. The evidence here doesn’t speak to that point directly. After all, many countries across Europe have considerably more extensive parental leave policies and child care support than the United States, but also a greater child penalty. Policies to support new parents probably have a different effect depending on broader social expectations: if the social expectation is that most mothers will return to the labor force, these policies might help the transition out of the labor force and back in; but if the social expectation is that mothers not return to the labor force soon, or at all, then parental leave and other supports may just smooth the path out of the labor force.

    Hayek on Decentralized Information in Markets

    Friedrich von Hayek won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel in 1974. For the 50th anniversary of the prize, the IEA published a short collection of essays called Hayek’s Nobel: 50 Years On, edited by Kristian Niemietz. It includes Hayek’s speech upon acceptance of the Nobel Prize, “The Pretence of Knowledge,” with three essays placing the essay in historical and modern context by Bruce Caldwell, Peter J. Boettke, and Donald J. Boudreaux.

    Hayek is perhaps best-known today for the line of argument famously laid out in his 1945 essay, “The Use of Knowledge in Society,” which is also the focus of his Nobel address. He points out that the operation of prices in a market offers a way of coordinating actions. One example focuses on the price of tin. He wrote:

    Fundamentally, in a system where the knowledge of the relevant facts is dispersed among many people, prices can act to coordinate the separate actions of different people in the same way as subjective values help the individual to coordinate the parts of his plan. It is worth contemplating for a moment a very simple and commonplace instance of the action of the price system to see what precisely it accomplishes. Assume that somewhere in the world a new opportunity for the use of some raw material, say tin, has arisen, or that one of the sources of supply of tin has been eliminated. It does not matter for our purpose–and it is very significant that it does not matter–which of these two causes has made tin more scarce. All that the users of tin need to know is that some of the tin they used to consume is now more profitably employed elsewhere, and that in consequence they must economize tin. There is no need for the great majority of them even to know where the more urgent need has arisen, or in favor of what other needs they ought to husband the supply. If only some of them know directly of the new demand, and switch resources over to it, and if the people who are aware of the new gap thus created in turn fill it from still other sources, the effect will rapidly spread throughout the whole economic system and influence not only all the uses of tin, but also those of its substitutes and the substitutes of these substitutes, the supply of all the things made of tin, and their substitutes, and so on; and all this without the great majority of those instrumental in bringing about these substitutions knowing anything at all about the original cause of these changes. The whole acts as one market, not because any of its members survey the whole field, but because their limited individual fields of vision sufficiently overlap so that through many intermediaries the relevant information is communicated to all.

    Thus, the coordinating action of a market is tightly related to how the price provides signals to producers and users. But Hayek’s point about markets and information operates at a more subtle level as well.

    Imagine that an economic planner observes that a supply of tin has been eliminated, and wants to adjust economic outcomes accordingly. Presumably, there should be some mixture of efforts to expand production of tin in some ways, and to reduce the use of tin in other ways. In turn, those who reduce the use of tin may wish to turn to other materials, and so production of those other materials should be increased as well. But what would be the appropriate mixture of these (and other) changes?

    Hayek argues that it is literally impossible for an economic planner to answer this question. The reason is that consumers of tin literally don’t know how much they might conserve on tin (or switch to substitutes) until they are actually forced to experiment with different methods of doing so. Similarly, alternative producers of tin (or substitutes) literally don’t know about how they might adjust production in response to a shortage of tin that happens elsewhere until they actually try to do it. The knowledge of how future adjustments might take place if conditions change is predictable in terms of broad patterns–that is, in response to a shutdown of a supply of tin, users of tin will try to conserve and alternative producers of tin will try to increase output–but specifically who will be able to take these steps most easily and cost-effectively is not known in advance.

    In Hayek’s Nobel address, he writes:

    Unlike the position that exists in the physical sciences, in economics and other disciplines that deal with essentially complex phenomena, the aspects of the events to be accounted for about which we can get quantitative data are necessarily limited and may not include the important ones. While in the physical sciences it is generally assumed, probably with good reason, that any important factor which determines the observed events will itself be directly observable and measurable, in the study of such complex phenomena as the market, which depend on the actions of many individuals, all the circumstances which will determine the outcome of a process, for reasons which I shall explain later, will hardly ever be fully known or measurable. And while in the physical sciences the investigator will be able to measure what, on the basis of a prima facie theory, he thinks important, in the social sciences often that is treated as important which happens to be accessible to measurement.

    There is much to be said about the strengths and weaknesses of Hayek’s theory, which I won’t try to do here. But I will point out one consequence of his theory, which is that it is common for politicians to speak as if economic outcomes are just a matter of political will. Maybe the discussion is about saving a factory at risk of closing down, or saving or creating an industry, creating more well-paid jobs, making housing more affordable, reducing the price of eggs, cutting interest rates, and so on and so on. The language of politics often makes it sound as if these and other economic outcomes are just a matter of whether your favorite politician or party is “fighting” for you. It’s just a matter of whether they “want it enough,” as the sportscasters say of the winning team, suggesting that losing and other unwanted outcomes are just about a weakness of desire.

    In his 1988 essay, Hayek referred to this belief in the malleability of economic outcome as The Fatal Conceit: he wrote of “the fatal conceit that man is able to shape the world around him according to his wishes.”

    More to the present point, Hayek argued that many people have a tendency to emphasize the role that government plays in economic outcomes, because government actions are large-scale, associated with prominent leaders, and commemorated by writers. The actions of government are the facts that we have written down. But we typically don’t write down, or even observe, the myriad small-scale reactions of individual consumers and producers across an economy as they continually react to changes and shifts. Hayek wrote:

    The role played by governments is greatly exaggerated in historical accounts because we necessarily know so much more about what organised government did than about what the spontaneous coordination of individual efforts accomplished. This deception, which stems from the nature of those things preserved, such as documents and monuments …

    Government economic policy, whatever its announced goals, doesn’t create outcomes. Instead, it changes the context in which economic actors make decisions, which in turn leads to economic outcomes. The distinction matters.

    Levels of Industrial Policy

    In arguments over industrial policy, there’s often a moment where someone makes an assertion like: “Every nation has industrial policy. Even not having an industrial policy is a type of industrial policy. The only relevant question is what kind of industrial policy we should choose.” In my experience, the people who make this argument then jump immediately to why a specific kind of industrial policy should be very aggressive indeed, including tools like subsidies and constraints on imports aimed at assisting specific domestic industries or companies.

    It’s true, of course, that every nation has some type of industrial policy, if that term is very broadly understood. But I find it useful to think of economic policy and its effects on industry in layers.

    The most basic layer is an economy with a legal system that enforces contracts, a functioning financial system, functional bankruptcy laws, low inflation, moderate government borrowing, good transportation and communications infrastructure, and a solid educational system from K-12 up through colleges and universities, workforce training for adults, and so on. These features surely support a more robust development of industry, but without taking sides in which industries will emerge.

    As a next step, one can add the insight that long-run growth in the standard of living has, in the last 2-3 centuries, been closely related to advances in science and technology. It’s a standard belief among economists that an unfettered free market will tend to underinvest in innovation, in large part because innovations can be copied, and much of the benefit of an innovation goes to users rather than to the inventor. Thus, high-income countries subsidize innovation in a number of ways: through protection of patents and intellectual property rights to help raise the reward for successful innovators, through tax breaks for research and development done by firms, and through direct funding of science and innovation at research institutions. These kinds of steps seek to shape the direction of an economy toward a greater emphasis on technology-based growth. I have argued that despite a recent moderate increase in US R&D spending, there is a plausible case for increasing these incentives with an aim to doubling US research and development spending.

    However, one can draw a conceptual line between general support for R&D and targeted support by industry. For example, a society might identify certain technological priorities: say, carbon-free energy production, anti-cancer drugs, stronger domestic production of semiconductors, artificial intelligence, and others. A certain amount of government support of R&D might be aimed at the desired areas. In addition, government might take other steps: perhaps prizes for certain kinds of inventions (think Operation Warp Speed for creating the COVID vaccines), or allowing firms to cooperate, without fear of antitrust laws, to fund research jointly, or building up joint ventures with the highest-performing firms in other countries. But all of these steps are focused on support for research and development of knowledge.

    The next level is direct support for industries, or even for certain specific companies. This support might take the form of direct government subsidies or tax breaks for certain firms and/or industries. It might also involve government becoming involved in transportation infrastructure or workforce training that is aimed quite specifically at industrial development in a particular location.

    The final layer of “industrial policy” is not just to build up domestic firms and industries with subsidies, infrastructure, and workforce development–as well as support for the underlying technological and scientific expertise–but to hinder international competition with tariffs and import quotas.

    There are probably other sensible ways to divide up these categories, but the point I’m trying to make is that using the term “industrial policy” to refer to all of these steps seems to me to stretch the term so far that it stops being useful. My sense is that most of the economists who would view themselves as against “industrial policy” are also supporters of at least the first two or three levels of policy above–that is, the basic underpinnings of a strong economy including support for research and development. Instead, I would focus the term “industrial policy” on subsidies or trade barriers aimed at certain companies or industries.

    Sometimes this kind of industrial policy has worked. There are plenty of local examples where support (or at least not active opposition) from government was necessary for a large-scale firm to thrive, including specialized training for workers, infrastructure investment, making land available, a local research center, local tax breaks (“tax-increment financing”) and so on. Of course, there are also plenty of cases where local government tried to roll out the red carpet for a firm, and blew a lot of money without much success. As one of many examples, some will remember back in 2018 when President Trump announced with much fanfare that Foxconn was going to build a giant manufacturing facility in southeastern Wisconsin, which never happened.

    Similarly, there are some examples around the world where countries used tariffs and import quotas–along with all the other technology, workforce, and infrastructure steps mentioned here–to help build a domestic industry, which over time became a global leader. But in the cases that seemed to work, like certain industries in South Korea, the government support for these industries was tied to the industry meeting certain goals for exports that would be cost-competitive in world markets. If industries did not meet the goals, the subsidies were cut off. And there are many examples of countries that blocked imports simply to support domestic producers.

    But all of these types of industrial policy happen through politics, and thus are more likely to be responsive to a combination of powerful incumbent special interests and to wishful thinking (after all, politicians aren’t putting their own money on the line). A lot of prominent industrial policy efforts have turned out badly. I wrote a few years ago about my qualms about industrial policy:

    For example, back in 1991 Linda Cohen and Roger Noll published a book called The Technology Pork Barrel, which was based on case studies of US attempts to build infant industries in supersonic planes, communications satellites, a space shuttle, breeder reactors, photovoltaics, and synthetic fuels. I remember back in the 1980s when Japan announced with great fanfare the “Fifth Generation” computer project, which then went away without fanfare. I remember when Japan was the shining example of how industrial policy worked in the 1970s and into the 1980s, but somehow it abruptly stopped being a shining example when Japan’s economy entered three decades of stagnation starting in the 1990s. I remember when Brazil decided that it would become a computer-producing power in the 1970s and 1980s, and when Argentina decided that it would become a global electronics superpower. I remember the economic disaster that was the industrial policy of the Soviet Union. I remember the places around the world that have tried to be the next “Silicon XXXX,” generally without success.

    Ultimately, every proposal for industrial policy must grapple with the problem of political discipline. As the levels of industrial policy move beyond the basic steps like healthy institutions and support of research and development, and start to focus on particular industries and companies, how likely is the policy to work? What are the intermediate goals that will be used to judge whether the policy is affecting the industry as desired? Will the policy be cut off if the intermediate goals are not being met? The more closely industrial policy is targeted at a certain company or industry, the more easily it can be captured by those firms, and the greater the political tensions.

    There is often a heavy dose of irony in industrial policy. Back in the 1950s, the head of General Motors was nominated to become Secretary of Defense. The story goes that when he was asked if he could separate the interests of General Motors from the broader national interest, he answered: “What’s good for General Motors is good for the country.” The line was quoted for decades as an example of an excessively pro-business attitude. (The story isn’t accurate, as I described here.) But when General Motors needed a government subsidy to survive during the Great Recession, a lot of people then argued that what was good for General Motors was good for the country. Similarly, current US industrial policy favors multi-billion dollar subsidies directly for companies like Intel and TSMC, on the grounds that “the interests of domestic semiconductor manufacturers are good for the country.”

    There’s an old line that “government should steer, not row.” The idea is that the useful role of government is to set up policies like appropriate institutions, as well as incentives for innovation in general and for specific industries. But when government gets into the business of direct subsidies and tariffs, it has moved into rowing rather than steering, and the danger grows that political incentives will override sensible economic policy.

    These issues and others have been top-of-mind for me lately, because the most recent issue of the Journal of Economic Perspectives, where I work as Managing Editor, published a “Symposium on Industrial Policy” in the Fall 2024 issue. As with all JEP content and archives, the papers are freely available online:

    Want more? The most recent Annual Review of Economics also includes a couple of articles on industrial policy: