Who is Buying All the Global Debt?

Global debt is at an all-time high, and the buyers of that debt are shifting to players who are more sensitive to interest rates and risks. The OECD tells the story in its Global Debt Report 2026, subtitled “Sustaining Debt Market Resilience Under Growing Pressure” (March 2026). Here are some snapshots.

Total government and corporate bond debt is now about $109 trillion. More bonds are being issued. As a share of global GDP, the combination of government and corporate debt issued during a given year peaked during the pandemic at 28%. However, bond issuance has been on a longer-term rise, from 15% of global GDP back in 2007 to a projected 23% of global GDP this year. The jump in debt for the higher-income countries that are part of the OECD is especially apparent in recent years.

This figure shows the gross borrowing by governments on the horizontal axis, and the “yield” or expected interest rate to be paid on the vertical axis. The green dots show a combination of high borrowing and high yield in recent years.

The share of longer-term bonds being issued is down, which is typically a sign that the risks of issuing such bonds (and the interest rates that would need to be paid on them) are up.

In this report, what especially caught my eye was a shift in the economic players that hold bonds. This figure seemed useful for organizing one’s thoughts on the subject. It shows the big categories of bond holders. The left-hand figure compares “duration appetite”–or the preference for long-term bonds–relative to whether the bonds are likely to be held to maturity. For example, life insurers like to purchase long-duration bonds; hedge funds and commercial banks are less likely to hold bonds to maturity. The right-hand figure shows that retail investors, exchange-traded funds, commercial and investment banks, and hedge funds are the most price-sensitive about purchasing bonds.

The underlying story here is that the holders of bonds are shifting. During the pandemic, central banks often bought bonds, as can be seen in the figure below. Central banks are not very price-sensitive, especially when buying bonds from their home country. But more recently, the share of bonds bought by price-sensitive investors like households, money market funds, hedge funds, and others is on the rise. If these investors perceive more risk–say, as a result of geopolitical tensions–they will want higher returns to compensate.

The overall message here is that debt markets are growing, looking riskier (higher yields and shorter maturities), and increasingly reliant on investors who, unlike central banks, will be highly sensitive to price and risk and not planning to hold bonds to maturity. This doesn’t add up to impending catastrophe, nor anything close, but it’s something to watch. The OECD report notes: “These risks must be carefully managed to ensure that sovereign and corporate bond markets, with a combined size of USD 109 trillion, continue to provide stable financing to governments and corporations. This is especially important as they are set to play an increasing role in funding AI investment and defence spending, at a time when decisions on monetary policy, public debt and pension fund asset allocation are coming under growing pressure.”

When Fiscal and Monetary Policy Row Together–and Not

There are times when the direction for fiscal and monetary policy is obvious. During the Great Recession in 2007-09, it was clear to most that the Federal Reserve should be reducing interest rates and the federal government should be running larger budget deficits, to counter the effects of the recession. Perhaps this seems obvious? But during the Great Depression in 1932, the federal government reacted to lost tax revenue from higher unemployment with a large tax increase. A year earlier, in 1931, the Federal Reserve had raised interest rates out of a desire to maintain the gold standard (that is, to keep the same value between US dollars and gold). Fiscal and monetary policy in the early 1930s were rowing together, but in the wrong direction.

Christina D. Romer discusses these and other episodes in “Rowing Together: Lessons on Policy Coordination from American History” (Hutchins Center Working Paper #105, February 2026). She writes:

It is not enough for monetary and fiscal policy to be well coordinated. They also need to be moving toward the appropriate goal. To put it another way: Rowing together is great when the boat is headed in the right direction; it can be a disaster when the boat is headed in the wrong direction. Coordinated policy was a godsend in 2009; it was a tragedy in 1931. A corollary to this fundamental point is that sometimes rowing in opposite directions can be preferable. At least then, the boat stays where it is rather than move in the wrong direction. If monetary or fiscal policy is going astray, it is vitally important that the other tool of macropolicy be uncoordinated.

The current policy issue is that the federal government is running an expansionary fiscal policy with large budget deficits, and President Trump would like the Federal Reserve to run a more expansionary monetary policy as well with dramatic interest rate cuts. But as Romer points out in her review of historical examples, the US economy has some precedents here worth considering.

First, in the late 1960s and early 1970s, fiscal and monetary policy were coordinated on a substantial stimulus. There was a big tax cut in 1964, then spending increases related to the Vietnam War and social programs (“guns and butter,” it is sometimes called), followed by more tax cuts and spending increases when President Nixon assumed office. Meanwhile, the Federal Reserve was cutting interest rates. The new head of the Federal Reserve under Nixon, Arthur Burns, viewed himself as a political ally for Nixon and cut interest rates further in 1971 to stimulate the economy in the lead-up to the 1972 election.

A prevailing economic theory of that time held that stimulating the economy in this way could lead to faster growth and only modest inflation. That theory went badly off the tracks by the mid-1970s as inflation and recession combined in what was called “stagflation.”

A second example runs from the late 1970s into the early 1980s: the federal government kept running large budget deficits, in part in response to the deep recession of 1974-75 and the double-dip recessions of 1980-1982. However, the Federal Reserve under Paul Volcker did not coordinate with an expansionary monetary policy, and instead raised interest rates by six percentage points (!), and kept rates that high for two years until inflation came down.

A third example comes from the mid-1990s: tax increases and minimal spending increases early in the Clinton administration, combined with the “dot-com” economic boom of the 1990s, led not only to lower budget deficits but to actual budget surpluses for a couple of years. During this time, the Federal Reserve did not raise interest rates, thus keeping a monetary stimulus in place. The overall result of this 1990s policy–contractionary fiscal policy and expansionary monetary policy–was that the US economy managed to dramatically reduce its budget deficits while continuing to grow.

These kinds of examples are what economists have in mind as they consider the current mix of fiscal and monetary policy. Here’s a figure showing the inflation rate on which the Federal Reserve focuses: core PCE inflation. “Core” means that price changes in food and energy are not included, because these fluctuate a lot, and the Fed is trying to focus on longer-term inflationary momentum. PCE refers to the “personal consumption expenditures” index, which includes more of consumer spending, and uses a more flexible formula to allow for substitution between goods, than does the better-known Consumer Price Index measure of inflation.
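
To see what that flexibility means in practice, here is a toy calculation–my own illustration with invented numbers, not the BEA’s actual methodology. A fixed-basket (Laspeyres) index prices the old basket at new prices; a Fisher-type chained index, of the kind underlying PCE, also accounts for consumers substituting toward goods whose prices rose less:

```python
# Toy comparison of a fixed-basket index vs. a Fisher (chained-style) index.
# All prices and quantities are invented for illustration.
from math import sqrt

p0 = {"chicken": 4.0, "beef": 8.0}   # period-0 prices
p1 = {"chicken": 4.2, "beef": 10.0}  # beef becomes relatively expensive
q0 = {"chicken": 10, "beef": 10}     # period-0 quantities
q1 = {"chicken": 14, "beef": 6}      # consumers substitute toward chicken

def basket_cost(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

laspeyres = basket_cost(p1, q0) / basket_cost(p0, q0)  # fixed old basket
paasche = basket_cost(p1, q1) / basket_cost(p0, q1)    # new basket
fisher = sqrt(laspeyres * paasche)                     # chained-style average

print(f"Fixed basket: {laspeyres:.3f}, Fisher: {fisher:.3f}")
# Fixed basket: 1.183, Fisher: 1.163 -- the fixed basket overstates
# inflation because it ignores the shift toward chicken.
```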

Inflation spiked during the pandemic, under pressure from coordinated strong expansions of fiscal and monetary policy, along with supply chain disruptions. Although core PCE inflation has come down since then, it’s still not yet down to pre-pandemic levels. In this situation, the Federal Reserve is going to be hesitant to cut interest rates dramatically. Among central bankers, the remembrance of what happened when Arthur Burns cut rates in the early 1970s and inflation took off remains crystal-clear.

As best I can tell, the Federal Reserve’s strong preference would be to re-run the 1990s, in which the government made a substantial effort to reduce budget deficits, and the Fed could then make sure that economic growth remained solid by counterbalancing the tighter fiscal policy with looser monetary policy. However, the Fed had also been gritting its teeth back around 2022 for a possible repeat of the early 1980s, when the central bank might need to fight inflation all by itself with a large jump in interest rates. Inflation has come down enough that a large jump no longer seems needed, but it remains high enough that a large interest rate cut doesn’t make sense either. The lesson from the early 1970s about not letting a president prod a central bank into interest rate cuts for his own political purposes remains clear-cut as well.

The Marketplace Exchanges for Health Insurance

One provision of the Patient Protection and Affordable Care Act of 2010 created what are commonly known as the “Marketplaces”: health insurance exchanges run at the state level, intended to allow those with medium and low incomes to purchase health insurance with the aid of federal subsidies. How are they working out?

Drew DeSilver provides a fact-based overview in “What the data says about Affordable Care Act health insurance exchanges” (Pew Research Center, January 22, 2026). Before the pandemic, the exchanges were a mechanism for health insurance coverage for about 10 million people. During the pandemic in 2021, Congress and President Biden passed into law a substantial expansion of the subsidies, and the number of people receiving insurance through the Marketplaces more than doubled to 23 million. “Data from the Centers for Medicare & Medicaid Services shows that the government spent $116.6 billion on marketplace tax credits and subsidies in the 2024 calendar year.” The additional subsidies enacted in 2021 had been scheduled to sunset at the end of 2025, and after considerable political acrimony, they did so.

Here is DeSilver’s quick overview of the health insurance offered by these plans:

All plans sold on ACA exchanges have to cover a set of “essential health benefits,” though the precise services vary by state.

As in the regular insurance market, exchange plans can have different premiums, deductibles, copayments, covered services and reimbursement rates. To help consumers sort through all that, exchange plans are sorted into four tiers, based on how much of patients’ covered health care costs they pay on average (that is, not for any specific customer or claim):

  • Platinum: 90% of costs covered, on average
  • Gold: 80%
  • Silver: 70% (or possibly more depending on income)
  • Bronze: 60%

In general, Platinum and Gold plans have the highest premiums and lowest deductibles, while Bronze plans have the lowest premiums and highest deductibles. There also are “Catastrophic” plans, with very low premiums and very high deductibles, but they’re only available to certain people. Fewer than 1% of exchange customers opt for Catastrophic plans.

In some cases, people who choose Silver plans – but no others – can get extra federal subsidies that lower their copayments and other out-of-pocket costs. Those subsidies, which are known as “cost-sharing reductions” and vary depending on income, can raise Silver plans’ payout shares to as much as 96%. Perhaps for that reason, Silver plans are by far the most popular. More than half (56%) of all plans selected on all exchanges during the pre-2025 open enrollment period were Silver …

In 2021, Congress made the premium tax credits more generous and made more people eligible for them. Before these changes, the required contribution percentages ranged from 2.07% to 9.86% of income. Afterward, they ranged from zero percent to 8.5%. The law also extended subsidies to people whose income exceeded 400% of the federal poverty level (which itself varies by household type). Previously, 400% was the upper limit.
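
To make the subsidy mechanics concrete, here is a minimal sketch of how a sliding-scale premium tax credit of this general type is computed–my own stylized version, not the statutory formula, with invented schedules and household numbers. The credit fills the gap between a benchmark premium and a required contribution set as a percentage of income:

```python
# Stylized premium tax credit calculation. Schedules and household figures
# below are invented for illustration, not the actual statutory values.

def premium_tax_credit(income, fpl, benchmark_premium, schedule):
    """schedule: list of (upper bound as % of FPL, required % of income)."""
    pct_of_fpl = 100 * income / fpl
    for upper_bound, required_pct in schedule:
        if pct_of_fpl <= upper_bound:
            required_contribution = income * required_pct / 100
            return max(0.0, benchmark_premium - required_contribution)
    return 0.0  # income above the top of the schedule: no subsidy

# Pre-2021-style schedule: 2.07% to 9.86% of income, hard cutoff at 400% FPL.
pre_2021 = [(150, 2.07), (250, 6.5), (400, 9.86)]
# 2021-25-style schedule: 0% to 8.5% of income, no cutoff above 400% FPL.
post_2021 = [(150, 0.0), (250, 4.0), (400, 8.5), (float("inf"), 8.5)]

income, fpl, benchmark = 60_000, 14_600, 9_000  # hypothetical single adult
print(premium_tax_credit(income, fpl, benchmark, pre_2021))   # 0.0 (~411% FPL)
print(premium_tax_credit(income, fpl, benchmark, post_2021))  # 3900.0
```

The hypothetical household above sits just over 400% of the poverty level, which is exactly the group the 2021 changes brought into the program: no subsidy under the old cutoff, a meaningful one under the expanded schedule.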

Thus, some key questions include how much the failure to renew the additional federal subsidies added in 2021 will affect the number of people without health insurance, how much money the government will save, and to what extent the spike in Marketplace enrollment came from people not actually eligible for the program. The Bipartisan Policy Center summarizes a range of evidence on these points in “Enhanced Premium Tax Credits: Who Benefits, How Much, and What Happens Next?” (October 15, 2025).

It cites estimates that the decision to extend the 2021 subsidies by a year, through 2026, would have led to 2.0 million more people with health insurance. As noted a moment ago, because the subsidies already reached people up to 400% of the poverty line in some states, and higher after 2021, many of those losing health insurance would have incomes above the poverty line. Extending the subsidies by a year would also have increased the federal deficit by $23.4 billion. Long division tells me that extending the 2021 expansion of benefits through 2026 would have led to an additional 2.0 million people with health insurance at an average cost to the federal government of roughly $11,700 per person.

In addition, the Congressional Budget Office estimated that about 1.3 million of those receiving health insurance subsidies were not, in fact, eligible for them. The subsidies depend on reported income, but the specific level of income that makes one eligible varies across states. The CBO conclusion is based on observing that the share of people reporting incomes in a specific band–for example, between 105-110% of the poverty level–was much higher in the states where that level of income was necessary to receive the subsidy.
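
The logic of that bunching comparison can be written down in a few lines. This is my own stylized illustration with invented numbers, not CBO’s actual method:

```python
# Stylized bunching check: if reporting income in a narrow band (say
# 105-110% of the poverty level) unlocks subsidies only in some states,
# excess reporting in that band, concentrated in exactly those states,
# points to income misreporting. All numbers are invented.

share_reporting_in_band = {
    # state: (does the band matter for subsidies?, share of enrollees
    #         reporting income inside the band)
    "State A": (True, 0.09),
    "State B": (True, 0.08),
    "State C": (False, 0.03),
    "State D": (False, 0.04),
}

treated = [s for matters, s in share_reporting_in_band.values() if matters]
control = [s for matters, s in share_reporting_in_band.values() if not matters]
excess = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Excess share reporting in the band: {excess:.3f}")  # 0.050
```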

My own sense is that before the pandemic, the state-run health insurance marketplaces were working better than many of their critics had expected. However, a number of the policies enacted in haste during the pandemic deserve reconsideration, and the 2021 expansion that more than doubled the program size is one of them. I hold out no particular hope that a careful reconsideration is politically possible, but research in the next year or two may offer some guidance.

A Rising Gap Between Older and Younger Workers

The US economy, like much of the rest of the world, is headed into the teeth of what economists sometimes bloodlessly call the “demographic transition”: that is, birthrates are falling as life expectancies continue to rise, so that the population is on average aging. Melissa A. Kearney and Luke Pardue have edited a four-paper collection of essays on the subject in Demographic Headwinds: The Economic Consequences of Lower Birth Rates and Longer Lives (Aspen Economic Strategy Group, February 2026).

Some of the consequences are relatively well-known, like the effects on the federal budget as the share of the population receiving Social Security and Medicare rises, as discussed in “Low Fertility and Fiscal Sustainability: The Effects of Past and Future Fertility Rates on the US Federal Budget Outlook,” by Lisa Dettling and Luke Pardue. Effects on budgets of state and local governments have been less discussed, but as Jeffrey Clemens points out in “Implications of Low Fertility and Declining Populations for the Operations of US State and Local Governments”: “Education accounts for a substantial portion of direct expenditure by both [state and local] layers of government, at 19 and 38 percent, respectively. For states, this spending is primarily on higher education, while local governments spend primarily on K–12 education.” As the number of children declines, one challenge will be to shrink the physical and employment footprint of the K-12 sector.

But here, I’ll focus on two other implications of the US demographic transition that seem to me even less discussed. One is the shift in the age of workers, and in particular, the ways in which a rising number of older workers has contributed to large gaps between older and younger workers. Nicola Bianchi and Matteo Paradisi discuss “The Age Divide in the American Workplace.” They write:

In the late 1970s, workers over 50 earned roughly 35 percent more per week than workers under 30. Over the following four decades, this gap widened steadily, reaching a peak in the early 2010s, when older workers earned about 55 percent more. It has narrowed somewhat in the last decade, as the oldest baby boomers have begun to retire and leave the very top of the wage distribution, but even in 2024, older workers still earned 47 percent more per week than younger workers on average. And because working lives are now longer, with many employees remaining in senior roles well into their late sixties, there is little reason to expect this pay gap to return to the more modest levels observed in the 1970s without further intervention from firms or policymakers.

There are two additional pieces of evidence that describe how lopsided the career outcomes of younger and older workers have become over the past five decades. First, … over time, younger workers have become considerably more likely than their 1970s counterparts to be in the bottom quarter of the wage distribution and less likely to be in the top quarter. For workers over 50, the pattern is broadly reversed. Relative to 1976, their probability of being in the bottom quartile has gradually fallen, reaching around 15–17 percent below its baseline level during the 2010s, while their probability of being in the top quartile has typically been between 5 and 15 percent higher.

Second, older workers have pulled further ahead of younger workers in access to desirable, higher-paying managerial jobs, as displayed in figure 4. In the mid-1970s, workers over 50 were about 5 percentage points more likely than workers under 30 to be employed in management occupations in the top quarter of the wage distribution. By 2024, this gap had widened to almost 8.3 percentage points. Over the same period, these higher-paying managerial roles grew from just over 5 percent to a little more than 7 percent of full-time private-sector jobs. Therefore, the inability of younger workers to capture a larger share of these positions was not simply due to a shrinking number of these career opportunities.

In conclusion, at a time when their numbers in the labor force were rising dramatically, older workers increasingly occupied higher-paying jobs and leadership positions. Younger workers, by contrast, faced declining representation at the top and limited access to managerial pathways. Bianchi and Paradisi (2024) show that these patterns are common to most high-income economies, rather than being unique to the United States.

My strong suspicion is that these patterns help to explain why so many young adults feel grim about the state of the US economy. They are, in fact, further behind their parents’ generation. Also, there is a considerable body of evidence that lower earnings earlier in life will continue to echo through a lifetime.

Another less-discussed aspect of the US demographic transition is a claim I’ve heard a few times that “at least lower population growth will reduce environmental harms.” But this belief is not likely to be true, as Kevin Kuruc explains in “The Environmental Benefits of Low Fertility and Population Decline are Overstated”:

The discussion of impending population decline is often dismissed or minimized by arguments that downplay its urgency – or even welcome this development – because of the proposed environmental benefits. This paper argues that the environmental benefits of depopulation are far smaller than widely believed, and that complacency about population decline may be counterproductive to climate goals. First, there is a fundamental issue of timing mismatch. Demographic change unfolds over generations, while effective responses to emissions and environmental harm require immediate action. Second, effective climate strategies, such as carbon capture, require high fixed capital and labor costs. The smaller the economy, the larger the share of national income required to achieve climate goals. Beyond the climate, there is little evidence to suggest that increases in per-capita resource availability from depopulation would materially improve living standards, as modern natural-resource constraints on well-being are limited and declining. In contrast, sustainability depends on policy, human ingenuity, and fiscal capacity, none of which are aided by a shrinking and aging population.

The Status of Stablecoins

Although I hear a lot about “stablecoins,” at present they remain small relative to the US or world economy, and in terms of usage they remain primarily a tool for facilitating other crypto-asset transactions–rather than a medium of exchange for everyday buying and selling.

The European Systemic Risk Board offers an overview in “Crypto-assets and decentralised finance: Report on stablecoins, crypto-investment products and multifunction groups” (October 2025). A “stablecoin” is a kind of cryptocurrency, run on a blockchain, which is more-or-less guaranteed to keep its value because the company issuing the stablecoins holds safe assets like US Treasury debt to back the currency.

Total issuance of stablecoins seems to be around $300 billion in early 2026. Here’s a graph showing the rise up through mid-2025. As you can see, the market has been dominated pretty much since its inception by two companies: Tether (USDT) and Circle (USDC). Essentially all of the stablecoin market is backed by US dollar assets.

Given that the value of a stablecoin is, well, stable, stablecoins aren’t of much interest to speculative investors searching for high returns. Moreover, stablecoins are barely used for ordinary transactions. The ESRB writes (footnotes omitted):

Despite widespread attention, stablecoins continue to play only a limited role in the global payment landscape, although their presence is steadily expanding. According to Worldpay’s 2025 Global Payments Report, stablecoins accounted for just 0.2% of global e-commerce transaction value in 2024. While interest in initiatives such as PayPal’s USD stablecoin via Xoom is rising, stablecoin adoption as a means of payment remains limited. A 2024 BIS survey found that over half of central banks viewed stablecoin use in their jurisdictions as negligible, confined mainly to niche remittance and retail users

Instead, stablecoins are mostly used as a gateway to the rest of the crypto-asset universe: that is, they are used for buying and selling other crypto-assets. As the report notes: “Stablecoins are still primarily used as a bridge between fiat and crypto-asset trading and as providers of liquidity in decentralised finance and lending. They are essential to decentralised finance ecosystems such as decentralised exchanges and lending protocols, where they help users reduce their exposure to price volatility by providing a more stable settlement asset and are also used as collateral for loans. They are also key to fiat-crypto conversions on centralised platforms.”

Might something go wrong in the stablecoin world that causes problems for the rest of the financial ecosystem? The ESRB suggests some possibilities that deserve attention. For example, stablecoins are held anonymously. What happens if a company is sold for a payment in stablecoins, so that the new owner is not legally visible? What happens if the stablecoin company does not own US Treasury debt directly, but instead holds deposits in a regular bank–and what if those bank deposits are large enough that they aren’t covered by deposit insurance? For example, it turned out that Circle was holding large deposits in Silicon Valley Bank when that bank cratered in 2023. What if stablecoins start draining deposits out of the banking system, so that banks are less able to make loans? On the other side, if stablecoins start to play a substantially larger role outside the crypto-asset world, then regulators will presumably step up, and banks and other financial institutions will have a strong incentive to innovate with comparable products.

All in all, I don’t worry much about stablecoins. But then, I don’t play in the crypto sandbox, either.

Fiscal Rules in Emerging Market Economies

Government debt isn’t just up in the US economy; it’s a common pattern across emerging market and developing economies, too. Thus, the most recent Global Economic Prospects report (World Bank, January 2026) includes a special-topic chapter focused on these countries, “Rebuilding Fiscal Space: The Case for Fiscal Rules.” Here’s a flavor:

At a time when global shocks have become more frequent and government debt among emerging market and developing economies (EMDEs) has climbed to a 55-year high, fiscal rules are an important policy tool for promoting fiscal discipline. More than half of EMDEs have at least one fiscal rule, up from about 15 percent in 2000. Fiscal rules are associated with improvements in budget balances that extend to the medium and long term. Among EMDEs, improvements in the cyclically adjusted primary balance (CAPB) peak five years after fiscal rules are adopted, reaching a cumulative 1.4 percentage points of trend GDP. The gains are more pronounced when institutions are strong and economic conditions are favorable at the time of adoption, and the use of a deficit rule is central to durable improvements. Fiscal rules are also associated with a greater likelihood of fiscal adjustment episodes—multiyear periods of improvement in the CAPB as a percent of trend GDP. During a fiscal adjustment episode, the CAPB in the typical EMDE improves by 1.6 percentage points of trend GDP per year. Fiscal rules with credible enforcement provisions are associated with a higher likelihood of expenditure-based adjustment. Further, fiscal rules need not be complex to be effective. Simple rule frameworks are associated with a higher likelihood of revenue-based adjustment

One of the obvious and intriguing questions here is whether countries that are already in weak economic shape, perhaps already with high levels of debt, are more likely to adopt fiscal rules. Or to put it another way, are fiscal rules a possible way of reducing the risk of high future debt? Or are they a response to already-high government debt?

Surprisingly to me, at least, about half the emerging market and developing economies that have adopted a fiscal rule did so when their existing level of debt was low, rather than high, and when their economies were weak, rather than strong. The effect of adopting a fiscal rule on future debt turns out to be larger for the already healthy economies with relatively little debt. It also turns out that about half of the countries adopting fiscal rules did so with slim rather than large parliamentary majorities–but this difference did not seem to affect the ability of the rule to reduce future deficits.

When Rubber Was the Critical Imported Good

At the start of World War II, the US economy relied almost exclusively on imported rubber as the key material for making, among other things, tires for cars and airplanes. The dependency was well-known, but in April 1942, when Japan cut off the foreign supply, the US was unprepared. Synthetic rubber ended up being part of the answer, but the rest of the answer involved a lot of muddling through. Alexander J. Field tells the story in “The World War II US Rubber Famine” (Business History Review, Autumn 2025, 99: 365-390). Field writes:

There were simply no satisfactory substitutes for rubber in a variety of critical uses, particularly tire carcasses and treads, the ultimate end use of 70% of rubber inputs. The severe shortage of natural rubber that resulted adversely affected the ability of the US military to project force and contributed to the disappointing record of wartime manufacturing productivity. … During World War II, the United States never escaped the threat of running out of rubber, which stood as a sword of Damocles over the entire economic and military effort. In 1944 the country almost ran out of natural rubber and would have, had the war continued into 1946.

One fun fact: Apparently the original World War II plan had been to launch a counterattack across the English Channel in 1943, rather than the D-Day invasion of June 1944. But in 1943, the lack of rubber meant that there weren’t enough landing vehicles for a counterattack to work.

A second fun fact: During World War II, “The rubber famine led directly to the imposition of a 35-mph speed limit and nationwide gas rationing in a country that, in the aggregate, was awash in petroleum. The intent was not to save fuel but to reduce tread wear on the tires installed on the nation’s 27 million automobiles and 5 million trucks, almost all of which were otherwise forecast to be off the roads within two years. These restrictions made it more difficult for people to get to and from work, contributing to absenteeism, and impacted the distribution of products by truck.”

The rubber famine is of interest for its own sake. But in addition, it echoes modern concerns about the risks of excessive dependence on key imported products–and thus may offer some food for thought on those issues as well. There were three possibilities for dealing with the rubber famine: grow rubber domestically, build up a stockpile of imported rubber, or produce synthetic rubber. Here’s what happened with each one:

Efforts to grow rubber plants from the Far East in the western hemisphere had not worked well in the 1920s, but there was a plant called guayule that was a promising alternative natural source of latex–substitutable for natural rubber in many applications, and better than rubber for some of them. In 1930, a then-obscure Army major named Dwight D. Eisenhower toured the guayule plantations of Mexico, and wrote in his report: “Should our sea communications with [Southeast Asia] be cut in an emergency, shortage of rubber in the United States would rapidly become acute.” Eisenhower proposed in his 1930 report a US program to subsidize guayule, but at the time, nothing happened. On March 5, 1942, the US passed the Emergency Rubber Act, sometimes known as the Guayule Act, to subsidize farming of guayule in the United States. But since guayule plants take four years of growth before they mature and produce latex, the law made no contribution to the wartime effort.

The US made efforts to build up its rubber stockpile starting in 1939, as war loomed, but by 1942 the total stockpile was equal to about one year of use. One problem was that the powers-that-be in Washington, DC, insisted that imported rubber travel by ship from the Far East to New York, by way of the Panama Canal, rather than offloading in San Francisco and having the rubber cross the US by train. Having ships go all the way to New York was slightly cheaper than trains, but the ships carrying rubber were then unable to go back-and-forth across the Pacific as often–which limited the growth of the rubber stockpile.

The powers-that-be in Washington viewed research efforts in synthetic rubber as an overly costly sop to petrochemical companies up through February 1941, and then created a program that ended up costing as much as the Manhattan Project to develop the atomic bomb. As it turned out, total production of synthetic rubber during World War II was roughly two years of wartime quantity demanded–or about twice as much as the stock of natural rubber entering the war.

As Field writes, the synthetic rubber program has been “uncritically lionized” in most writing about World War II, as if it just burst into existence and solved the problems. In fact: “The eventual availability of synthetic rubber in quantity did not end the rubber famine. Synthetic had to be blended with natural in the manufacture of almost all products, and in some cases synthetic could not be used at all.”

Moreover, the synthetic rubber program depended on a gas called butadiene, which could be produced from petroleum or from alcohol. Standard Oil and other big petroleum companies favored a petroleum feedstock, although alcohol-based plants were also started. Field writes:

The first butadiene plant was not completed until April 1943. All three of the big alcohol plants opened that year, and they produced far more than their rated annual capacity. Of the five petroleum-based butadiene plants, the three largest did not begin production until 1944, and consistently produced below their rated capacity. … Without alcohol-based butadiene, it is hard to see how D-Day could have gone forward in June 1944. In retrospect, it is not clear that petroleum-based butadiene was needed at all to win the war. Butadiene from isobutylene was cheaper in the long run because, even though plants using this input were more expensive to construct, required more complex engineering, and relied on untested processes, the feedstock (petroleum) was ultimately cheaper. Due to huge agricultural surpluses accumulated during the Depression, however, the opportunity cost of ethanol was far lower during the war years, an advantage augmented by the much lower capital requirements of the process using it to produce butadiene.

The US government ended up establishing the foundations for a commercially successful synthetic rubber industry in the postwar period, one using petroleum as the principal feedstock, as Standard [Oil] intended. Given that synthetic rubber would be needed during the war, it would have been cheaper and faster to have focused from the outset on ethanol as the feedstock. Standard Oil bears responsibility for the emphasis on petroleum. Whether petroleum or ethanol was to be the feedstock, however, construction on the butadiene plants should have started earlier …

If Eisenhower’s recommendation had been followed back in 1930, or if the US rubber stockpile had been built up more expeditiously, then synthetic rubber would not have been needed to win World War II. If production of synthetic rubber had been alcohol-based from the start, it could have ramped up more quickly. And if more rubber had been available, the D-Day invasion might have happened in 1943 and the war might have ended sooner.

The broader lesson is that a serious discussion of critical imported materials should never be a last-minute, against-the-deadline affair. Moreover, a serious discussion will need to resist pressure from domestic companies with a financial interest in the choices made. The US and Allied forces muddled through the rubber famine during World War II, but it was a nearer thing than most people realize. Field notes: “In 1945 the War Production Board forecast that, in the event of an invasion of Japan, the US would simply run out of natural rubber in 1946. Among other consequences, the US would then have been unable to manufacture airplane tires.” That fact surely played a role in the decision to use the atomic bomb.

Can AI Tools Be Pro-Worker?

There are certainly examples where new technology has replaced jobs. The US had 350,000 telephone switchboard operators in 1950, and the job just went away. The tractor played a big role in reducing the number of US farmers in the first half of the 20th century. Almost every village of any size had a blacksmith through much of the 19th century, but with the rise of industrial production, there weren’t enough of them left to be counted as a separate employment category in the 1900 census. And now here we are with the new artificial intelligence tools, and warnings that all manner of jobs that use computers–across a wide array of industries–could be at risk.

It seems obvious to me that many jobs will change as new technologies appear. Many of the tasks involved in my own job, running an academic economics journal since the late 1980s, changed substantially with the arrival of the internet, for example. But the job itself didn’t go away; indeed, the internet probably made me better at my job. For example, it’s a lot easier for me to look up cited articles from my desk than it was to walk over into the library stacks to find the article–and so I check many more articles as a result.

Thus, a key issue here is the extent to which the new AI tools replace workers outright, like telephone switchboard operators, or instead allow workers to be more productive and effective–or even create the possibility for new and previously unimagined jobs. Daron Acemoglu, David Autor, and Simon Johnson work through these distinctions in “Building pro-worker artificial intelligence” (Hamilton Project at the Brookings Institution, February 2026). They write: “We define pro-worker technologies—including AI—as technologies that make human skills and expertise more valuable by expanding worker capabilities.”

They emphasize a key point about AI tools: such tools may actually be more useful when collaborating with humans. They write:

A modern AI system can ingest drone imagery and soil sensor data from a farm’s every acre, the complete sensor logs from a building’s HVAC system, or the detailed vital signs of a single patient observed over many months to support workers making high-stakes decisions. Drawing on this pretraining, AI tools can think alongside workers, identify relevant context, generate well-informed responses to questions, and present lucid, well-structured data to support decisionmaking. This is collaboration.

You may object: “If AI can behave like an expert, can’t it simply replace experts, thus automating their expertise into irrelevance?” In some cases, the answer is yes. But in many more cases, we think the answer is no: AI will prove more effective at collaboration than at automation. Precisely because AI is not rule-bound, it is less trustworthy as an autonomous actor than a conventional computer system, and more valuable as a collaborator (Narayanan and Kapoor 2025).

To be useful, an automation tool must deliver near-flawless performance almost all the time. You would not tolerate a spreadsheet that hallucinated values, a robotic surgeon that glitched-out during bypass surgery, an agentic investing tool that squandered your money while you were not paying attention, or an AI-powered vending machine that gave away PlayStations and stocked live fish at the behest of persuasive customers (Stern 2025). For most of these tasks, the stakes are too consequential and the decisions too nuanced to be fully delegated to an automatic system that acts on its own discretion. The AI needs human expertise.

A collaboration tool does not need to be anywhere close to infallible to be useful. A doctor with a stethoscope can better diagnose a patient than the same doctor without one, and a contractor can pitch a squarer house frame with a laser level than they could by eyeballing it. These tools do not need to work flawlessly, because they do not promise to replace the expertise of their user. They make experts better at what they do—and extend their expertise to places it could not go unassisted. Rather than making expertise unnecessary, they render expertise more valuable by extending its efficacy and scope. It is this complementarity between machine capacity and human expertise that we believe imbues AI with vast pro-worker potential.

The authors provide concrete examples of pro-worker uses of AI for teachers, electricians’ assistants, patent examiners and others. They point out that the use of AI-assisted hearing aids might enable some workers to expand their on-the-job capabilities. However, they also worry that economic incentives and business habits may tend to emphasize AI applications that substitute for current jobs, rather than complement them. Thus, they argue for the public sector to nudge the incentives for pro-worker AI tools where possible. One of their examples stuck with me:

Indeed, the public sector already heavily shapes the path of technology in health care and education. For example, the federal Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 dramatically accelerated electronic health record adoption in U.S. hospitals through financial incentives and penalties. Within less than a decade, the United States went from approximately 10 percent of hospitals with electronic health records to near-universal adoption (Office of the National Coordinator for Health Information Technology 2017). In a similar vein, the federal schools and libraries universal support (E-Rate) program, established by the Telecommunications Act of 1996, provides ongoing subsidies to schools and libraries for Internet connectivity. As of 2021, 95 percent of U.S. public school classrooms had WiFi (Munson 2023).

They also point out that the US corporate tax code treats investment in machinery more favorably than investment in, say, worker training and skills. That policy difference could at least be equalized. In this and other ways, the future uses of AI are not purely determined by technological advance, but instead by the incentives and beliefs of economic players–firms that develop AI technologies, firms that use them, managers thinking about how work should evolve, and the willingness of workers to build new skills.

Laying Off Workers: Cheap vs. Expensive

When thinking about what makes an economy flourish, many of us tend to focus on the success stories of innovation and growth. After all, success stories involve an element of risk, which means a chance of failure. When it’s more expensive to fail, then avoiding the risk of failure–by avoiding innovative but risky business choices–starts to make sense. Yann Coatanlem and Oliver Coste put some meat on the bones of this idea in “Cost of Failure, Disruptive Innovation and Targeted Flexicurity: More evidence supporting targeted reforms” (Institute for European Policymaking at Bocconi University Working Paper, November 2025).

The authors focus on the situation of a large company that took a business risk that did not work out, and wants to restructure significantly–which in turn involves the costs of laying off a large number of employees. They have data on 250 such restructurings across a number of European countries and the United States. For each major restructuring, they go through financial and government reports to estimate the cost of laying off a worker, expressed in terms of “months of average employee compensation.” Thus, the illustrative figures show that in Germany, each layoff cost 31 months of average employee compensation; in France, 38 months; in Italy, 52 months; and in Spain, 63 months.

In comparison, when a similar exercise was carried out for US firms, the average cost was 7 months of employee compensation.
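
As a back-of-the-envelope illustration of what those figures imply, here is a quick sketch using the months-of-compensation numbers above; the workforce size and average monthly pay are invented:

```python
# Back-of-the-envelope sketch of what the Coatanlem-Coste figures imply.
# The months-of-compensation numbers are those quoted above; the layoff
# count and average monthly pay are invented for illustration.

MONTHS_PER_LAYOFF = {"Germany": 31, "France": 38, "Italy": 52, "Spain": 63,
                     "United States": 7}

def restructuring_cost(country, layoffs, avg_monthly_comp):
    """Total layoff cost = layoffs x months of compensation x monthly pay."""
    return layoffs * MONTHS_PER_LAYOFF[country] * avg_monthly_comp

# Hypothetical restructuring: 10,000 layoffs at $8,000/month average pay.
for country in MONTHS_PER_LAYOFF:
    cost = restructuring_cost(country, 10_000, 8_000)
    print(f"{country}: ${cost / 1e9:.2f} billion")
# Germany $2.48B, France $3.04B, Italy $4.16B, Spain $5.04B, US $0.56B:
# the same strategic pivot costs roughly four to nine times more in these
# European countries than in the United States.
```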

Here’s a concrete US example:

In the United States, the responsiveness of tech companies to a technological shock such as the unexpected success of ChatGPT is quite staggering. OpenAI released ChatGPT to the public on 30 November 2022. This breakthrough AI solution reached 1 million users within five days and 100 million users within two months — taking even the most seasoned stakeholders in Silicon Valley by surprise. The industry’s reaction unfolded within weeks: On 18 January 2023, Microsoft announced 10,000 layoffs, representing 5% of its workforce. The restructuring plan was completed by end of March 2023. On 20 January 2023, Google followed with 12,000 layoffs, or 6% of its workforce. The U.S. based employees were notified individually (by email) on the same day. On 14 March 2023, Meta announced 10,000 additional layoffs — a second round following the 11,000 job cuts announced on 9 November 2022 — bringing the total reduction to approximately 25% of its workforce. Most of the restructuring plan was completed by May 2023. This kind of technological shock is by no means unique. It is a recurring pattern in the tech industry—seen with the advent of cloud computing, smartphones, social networks, e-commerce, mobile phones and the internet.

Agility in laying off also means agility in hiring. U.S. tech companies did not cut tens of thousands of engineering jobs to scale back investment—quite the opposite. These workforce reductions were aimed at reallocating resources to accelerate innovation where most promising. They immediately began hiring thousands of AI engineers and invested heavily in AI computing capacity. Meta provides a striking example: after reducing its workforce by 25% in just six months, as seen before, the company hired approximately 10,000 engineers and ramped up its investment in AI supercomputers from around $1 billion in 2022 to $20 billion in 2023, $37 billion in 2024, and a projected $65 billion in 2025. Microsoft and Google are committing even larger sums to AI infrastructure. … In practice, European Employment Protection Law (EPL) often makes such strategic shifts nearly impossible.

A broader discussion of this subject by Pieter Garicano was titled “Why Europe doesn’t have a Tesla” (Works in Progress, February 7, 2026). If you are fortunate enough to be hired as a worker for Volkswagen in Germany, you are essentially guaranteed a job for life–because the costs of laying off such workers are so very high. But the tradeoff is that no company is likely to start a new auto manufacturing facility in Germany–not because the potential rewards are too low, but because the potential costs of failure and restructuring are so high. More broadly, if the costs of restructuring a large technology company will be very high, then spending R&D money to seek out new innovations and products will be less attractive as well.

So are we forced into a harsh choice between protecting workers from layoffs and economic dynamism? Perhaps the tradeoff need not be as severe as one might fear. In the Coatanlem and Coste study, they note that the costs of restructuring are at US levels, or even lower, in Denmark, Sweden, and Switzerland. The reason seems tied to a set of policies they call “flexicurity.” (Michael Svarer and Claus Thustrup Kreiner provide an overview of “Danish Flexicurity: Rights and Duties” in the Fall 2022 issue of the Journal of Economic Perspectives, where I work as Managing Editor).

The basic “flexicurity” idea is that the government provides a package of higher unemployment benefits combined with support for job search and job training–and if a displaced worker finds an alternative job relatively soon, the higher unemployment benefits start to decline. To put it another way, a company that wishes to restructure bears a relatively modest share of the total costs of the adjustment, and the government steps in with policies and incentives to hasten the adjustment. These steps are also called “active labor market policies,” which differentiates them from “passive labor market policies” like just paying unemployment benefits for a time; the US government has lagged far behind other high-income countries in such efforts.

Did Negative Interest Rates Work?

When recessions hit, the US Federal Reserve lowers its target interest rate–the “federal funds interest rate.” This interest rate applies to extremely safe borrowing: essentially, to overnight borrowing by large and safe financial institutions. The idea is that by altering this ultra-safe interest rate, other riskier interest rates will also come under pressure to adjust, so the lower federal funds rate will be passed through by the country’s financial sector to lower rates for mortgage lending, car lending, and the like (as well as to lower interest rates paid to those with deposits in banks or money market funds).

But in the Great Recession and its aftermath, the Fed ran into a problem: it took the federal funds interest rate to just barely above zero percent. Here, it ran into what economists sometimes call the “zero lower bound.” The Fed felt that it couldn’t make interest rates negative (instead of the bank paying interest to depositors, the depositors would pay interest to the bank?). Thus, the Fed felt as if it needed to try other policies for stimulating the economy with names like “quantitative easing,” “forward guidance,” and the like.

But the European Central Bank, along with central banks in Switzerland, Sweden, Denmark, and Japan, decided to break through the “zero lower bound” (often abbreviated as the ZLB) and try out a negative policy interest rate. I wrote about this when it was happening: for example, “Negative Interest Rates: Practical, but Limited” (March 15, 2021), “Negative Interest Rates: Evidence and Practicalities” (August 8, 2017), “What Else Can Central Banks Do?” (October 24, 2016), and “Fed Policy: Negative Interest Rates, Neo-Fisherian, or No Change” (August 26, 2016).

But now we have the advantage of a few years to look back at the experience. Michael McLeay, Silvana Tenreyro, and Lukas von dem Berge look back at the evidence in “Negative Rates and the Effective Lower Bound: Theory and Evidence” (Journal of the European Economic Association, 24:1, February 2026, pp. 1–57). The paper is based on the FBBVA Lecture delivered at the 2025 meetings of the European Economic Association. The second half of the paper is technical stuff (that is, it’s a new model of a heterogeneous, oligopolistic banking sector, with high- and low-deposit banks, embedded in an open-economy macroeconomic model featuring exchange-rate and capital market transmission channels). But the basic fact patterns at the start of the paper, which the model seeks to capture, are an easy read.

The short answer is that when central banks moved the key policy interest rate (slightly) below zero, it did seem to provide additional economic stimulus without weird or disastrous consequences. Here are some top-line takeaways; a stylized sketch of the first observation follows the list.

  • “Empirical Observation 1. Pass-through of policy rates to household deposit rates is bounded by the ZLB. Meanwhile, corporate deposit and wholesale funding rates can fall below zero.”
  • “Empirical Observation 2. At low or negative policy rates, aggregate pass-through to bank lending rates and volumes still occurs, though it is typically reduced and potentially delayed.” However, the authors cite studies that the pass-through rate of negative interest rates to lower interest rates on loans in the economy is not 1:1. The pass-through of negative interest rates in the euro area seems to have been more than 50%, but in some other countries, it was less than that.
  • “Empirical Observation 3. Aggregate banking sector profitability is not necessarily adversely affected by negative policy rates, and may even improve owing to general equilibrium effects.” A concern with negative interest rates was that if banks were pressured to lend at extremely low interest rates, then the banks might go broke. But it turns out that the macroeconomic benefits banks received from lower interest rates stimulating the economy tended to offset the lower bank profits from lending at lower rates.
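
To make Empirical Observation 1 concrete, here is a minimal sketch of the asymmetry in pass-through–my own stylization, not the paper’s model, with invented spread parameters:

```python
# Stylized pass-through: household deposit rates are floored at zero, while
# corporate deposit and wholesale funding rates can follow the policy rate
# below zero. Spread parameters are invented for illustration.

def household_deposit_rate(policy_rate, spread=0.25):
    # Banks are reluctant to charge retail depositors, so the rate is
    # floored at zero: pass-through stops once the floor binds.
    return max(0.0, policy_rate - spread)

def corporate_deposit_rate(policy_rate, spread=0.10):
    # Corporate and wholesale rates can pass through into negative territory.
    return policy_rate - spread

for policy_rate in [1.0, 0.5, 0.0, -0.5]:
    print(f"policy {policy_rate:+.2f}%: household "
          f"{household_deposit_rate(policy_rate):+.2f}%, corporate "
          f"{corporate_deposit_rate(policy_rate):+.2f}%")
# Once the policy rate falls below the retail spread, the household rate
# sticks at zero (the ZLB binds) while the corporate rate keeps falling.
```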

Of course, none of this means that it would necessarily be sensible policy for a central bank to make its policy interest rate deeply negative. But it turns out that businesses in particular can function with a negative interest rate at their bank–that is, they pay interest to the bank on their deposits at the bank. Also, as we have seen in the US, people can function with banks paying essentially zero interest on their deposits as well. I suspect the reason is that it’s worth something to businesses and to people to make and receive payments through the banking system (say, a business making payroll, or a person having paychecks automatically deposited). As a result, when a bank is paying zero or even slightly negative interest on deposits, the bank will not lose all of its deposits.