How Does Japan Sustain Such High Government Debt?

Hovering over discussions of high levels of US government debt is a question: How does Japan do it? Japan’s ratio of gross government debt to GDP is remarkably high. Here’s a figure using data from the OECD:

Sure, there are some other countries on this figure with high debt/GDP figures, like Greece and Italy, but they have had well-publicized debt crises, where it became hard for the government to issue additional debt without promising to pay very high interest rates. Moreover, when debt from Greece and Italy looked riskier, then all the Greek and Italian banks holding that government debt as an asset also looked riskier, which was one way that the risks of government debt propagated into the rest of the economy. But there’s Japan with a gross debt/GDP ratio of about 230%, and no especially visible debt crisis.

Yili Chien, Wenxin Du, and Hanno Lustig offer an answer to this puzzle in “Japan’s Debt Puzzle: Sovereign Wealth Fund from Borrowed Money” (Journal of Economic Perspectives, Fall 2025, 39:4, 3–26). (For the record, I work at the JEP as Managing Editor.)

A starting point of their explanation is that there is a difference between “gross” and “net” government debt–and how the difference matters. In a US context, for example, the Social Security Trust Fund, which is part of the US government, by law holds US Treasury debt. Thus, in a US context it is common to distinguish between “debt held by the public”–not including this debt held by the Social Security Trust Fund and certain other government trust funds–and gross or total US debt. The logic here is that if the government was only lending money to itself, there wouldn’t be much to worry about; it’s when government debt is drawing on savings from the public (including both domestic and foreign investors) that issues can arise. Here’s a figure contrasting gross federal debt with debt held by the public for the US economy.

Chien, Du, and Lustig take this basic insight and run with it. What if instead of just looking at gross debt, as in the OECD bar graph above, we were to look at the “consolidated” debt of Japan, including gross government borrowing, but also various government funds including pension funds and also funds held through the Bank of Japan?

Short answer: When you take all of this into account, the consolidated debt across all of Japan’s government is only 77% of GDP, even though the gross debt, by the adjusted measure that Chien, Du, and Lustig calculate, is more like 270% of GDP. When the authors do a parallel calculation for the US economy, they find that the US has consolidated debt of about 83% of GDP. In other words, what looks like Japanese gross debt far above US levels actually translates into consolidated Japanese government debt below US levels.
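The netting itself is simple arithmetic. Here is a minimal sketch of it (the function and the round percentages are my own illustration based on the figures quoted above, not the authors’ exact line items):

```python
# Gross-vs-consolidated netting: subtract government-held financial assets
# from gross debt, with everything expressed as a percent of GDP.

def consolidated_debt(gross_pct_gdp, gov_held_assets_pct_gdp):
    """Net government-held financial assets out of gross debt (all % of GDP)."""
    return gross_pct_gdp - gov_held_assets_pct_gdp

# Japan: the 270% gross and 77% consolidated figures imply that pension
# funds, Bank of Japan holdings, and other government assets net out to
# roughly 193% of GDP.
print(consolidated_debt(270, 193))  # 77
```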

You can read the Chien, Du, Lustig paper for a full explanation of how this happened, but here’s a basic version. For three decades or more, Japan has been able to borrow for its government debt at very low interest rates. In particular, the Bank of Japan buys a large share of Japanese government bonds, pays Japan’s government only a low interest rate for that lending, and then in turn pays Japanese banks a low interest rate on their bank reserves. This cheap borrowing for Japan’s government is supported and favored by an array of regulations, including rules that cause Japanese households to accept putting much of their money in low-interest accounts. Indeed, the authors write: “Japanese households hold half of their wealth in low-return demand deposits, providing the government with an abundant and inexpensive funding source. In contrast, the US household allocates a much larger share of its wealth to stocks, bonds, and other long-duration assets rather than deposits. For Japanese households, a much higher share in deposits is mirrored in a much lower financial market participation rate (around 23 percent own stocks, bonds, or mutual funds), compared to US households (well over 60 percent).”

Japan’s government then takes the funds that it has borrowed at low interest rates and invests in stocks. Thus, by 2024, Japan’s government owned Japanese stock equal in value to about 42% of Japan’s GDP, and in addition owned foreign investment (much of it US stock) equal in value to 62% of Japan’s GDP.

In short, Japan’s government has been borrowing at low interest rates and running up a huge gross government debt, but then investing much of that money in the stock market, building up financial assets that make its consolidated debt much lower. For a rough US analogy, imagine that the US government had made a decision to invest the $1 trillion that was in the Social Security Trust Fund back around 2001 in the US stock market, instead of holding Treasury debt. The total return in the US stock market since 2001 (price and dividends) is more than 600%, so that original $1 trillion would have climbed to $7 trillion. Or imagine that the US government had run even larger budget deficits during the last 15 years or so, when interest rates were low, and invested that money in the US stock market.
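The back-of-the-envelope arithmetic here is worth making explicit: a cumulative total return of 600% means each original dollar gains six more, for seven in total.

```python
# Cumulative-return arithmetic for the Social Security thought experiment
# in the text: $1 trillion invested in 2001 at a 600% total return.

initial = 1.0       # starting value, in trillions of dollars
total_return = 6.0  # 600% cumulative price-plus-dividend return
final = initial * (1 + total_return)
print(final)  # 7.0 -> the $1 trillion grows to about $7 trillion
```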

So has Japan found both the solution to government debt and a magic money machine, all in one? Not quite. Japan’s fiscal strategy has relied on using financial and banking rules to assure that it could finance its debt cheaply, through the Bank of Japan, which has meant that Japan’s households have received much lower returns on their saving. In addition, borrowing money at shorter maturities to purchase stock market assets that are only expected to pay off in the longer term carries “duration risk.” And for Japan’s government, borrowing in Japanese yen and then using the funds to purchase US dollar assets comes with the “exchange rate risk” that movements in exchange rates could wipe out the gains.

In contrast, when the US government issues debt, it borrows at market-level interest rates determined by global financial markets–not interest rates set artificially low by the Federal Reserve. Unlike Japan, the US economy does not have a high level of domestic savings, and relies instead on net inflows of financial capital from the rest of the world. If the US government tried to follow a policy of borrowing even more, and then investing the additional funds in the US stock market, US debt would appear riskier to the rest of the world, driving up borrowing costs for the US economy. In addition, while the US stock market has been an excellent investment over the long run of the last few decades, there have been some notable downs along with the ups–and linking the riskiness of US government debt to the volatility of the stock market would be a risky financial strategy with some severe downside risks.

In short, the government of Japan imposed substantial costs on its citizens and took relatively high financial risks–but at least up through 2024, the risks have worked out for them! (And no, Japan’s government was not hedging against its duration and exchange rate risks.) Similarly, if the US government had been willing to take the risk of putting some share of the Social Security Trust Fund into the US stock market at the start of the 21st century, it would have moderately reduced (by about one-quarter) the projected shortfall of US Social Security revenues compared to promised benefits over the next 75 years. But the fact that Japan’s high-risk strategy has worked in the past few decades doesn’t mean that a similarly high-risk strategy is a sensible choice for other countries–and especially not for the US government–in the future.

The Late 1800s: High Tariffs and Expanding Trade Together?

The world economy has been in the midst of a second age of globalization for the last few decades–although, at present, it is not altogether clear whether the globalization trendline is up, flat, or down. However, Marc-William Palen takes a look back at the end of the first age of globalization, which ran from the closing decades of the 19th century up through World War I, in “When Free Trade First Faltered” (Finance & Development, September 2025). This short article covers some of the themes in Palen’s recent book about the same time period, Pax Economica: Left-Wing Visions of a Free-Trade World. Here are a couple of themes that jumped out at me:

The first era of globalization happened at a time of tariff wars and geopolitical conflict. Palen writes:

The “first age” of globalization was beset by contradictions. In the 60 years or so before World War I, global trade grew rapidly despite the ever-higher tariff walls built by the rising protectionist empires of the United States, Germany, Russia, France, and Japan. Geopolitical conflicts and trade wars grew more common even as markets became more integrated. These contradictions were at the heart of heated debates over free trade and economic nationalism that dominated the industrializing world at the time.

So how is it possible that global trade was rising dramatically at a time when trade barriers were also very high? President Trump is seeking to move US tariff levels back to where they were in the 1920s, or even the 1890s–but the 1890s were actually a heyday of globalization. The answer, worth remembering today, is that international trade is affected by government policies but perhaps even more by its economic underpinnings. And the late 19th century saw technological shifts that dramatically increased the potential for international trade. Palen writes:

Transoceanic steamship lines drastically lowered transportation costs and travel times. The transatlantic cable, successfully laid in 1866, meant that messages between Wall Street and the City of London took mere minutes. The opening of the Suez Canal in Egypt and the completion of the US transcontinental railway in 1869 shrank the world even further. …

But globalization’s unprecedented interdependence soon landed the industrializing world in an unpredictable boom-bust economic cycle. Low transportation costs, mass industrialization, and trade liberalization cut costs for consumers, but the steep fall in prices also meant tighter profit margins, or even losses, for many of the world’s exporters. … The first age of globalization was facing the first Great Depression (1873–96), and protectionism and colonialism were the policies of choice of the industrializing world. Globalization’s protesters grew louder. As is common during economic crisis, cries for national self-sufficiency drowned out calls for cosmopolitan comity. Free trade fell out of fashion among Britain’s imperial rivals …

In our own time, technologies of transportation and communication have of course also developed rapidly around the world: container shipping, giant tankers, air freight, extended rail lines and ports, along with the ability to manage and oversee production and shipping of international trade in real time over the internet. Moreover, the ratio of value-to-weight has been rising for many technology goods, so the costs of shipping them become relatively lower. The share of services trade that can be done in one place and shipped online to another country keeps rising.

In short, one lesson from the first wave of globalization is that when the economic forces are expanding the potential for gains from trade, pushback from political forces is likely–but the pushback may have to be quite extreme if it is to reshape the economic trend toward additional flows of goods, services, and information across national borders.

Another of Palen’s main themes is that the forces of globalization are often regarded as synonymous with the forces of market capitalism. But back in the late 19th century, the political support for free trade came mainly from the political left. Palen emphasizes the pro-protectionism writings of Friedrich List, whose philosophy was that countries should close their borders to imports, build up their own industries, and then sell to the expanding but captive markets of their own colonies. As Palen writes:

Imperial-minded economic nationalists across the globe began to revere List’s national system as economic divination. Free trade was seen as part of a vast British conspiracy to thwart the industrialization projects of rivals—a self-serving trick to undermine emerging industries elsewhere. List-inspired economic nationalists saw geopolitics as a zero-sum game in which only the fittest would survive.

The technological tools of globalization that not so long ago promised to tie the world together in benign universalism now seemed better suited for binding colonies to imperial metropoles. Tariff walls grew ever higher, turning infant industries into monopolies, cartels, and trusts. Monopoly-induced market inefficiencies at home soon sparked an interimperial search for new markets to export surplus capital and acquire raw materials. Trade wars, military interventions, and the scramble for colonies in Africa and Asia picked up pace.

By 1880, economic nationalists had the upper hand. Their imperial protectionist politics moved ever more to the right. In the US, the Republican Party rebranded itself as the party of protectionism and big business, reversing the freer trade trend of the preceding decades. The 1890 McKinley Tariff, which imposed an unprecedented average rate of about 50 percent, plunged the country into trade wars with European trading partners.

An intriguing parallel with modern times is that, once again, the forces of high tariffs and trade protectionism are often discussed (as in the late 19th century) as if they were the embodiment of hard-headed national toughness. Palen argues that back in the late 19th century, it was the liberal radicals, socialists, internationalists, feminists, and Christians who advocated for free trade as part of their overall critique of imperialism and militarism. In modern US politics, it’s hard to identify a mainstream popular movement that openly contests President Trump’s vision that national prosperity can and should be built behind tariff walls. But it does seem as if people are noticing that the tariffs (as predicted) are helping to keep prices higher, while not triggering the promised renaissance in US manufacturing jobs or solving the US trade deficit.

Some Snapshots of Global Urbanization

During the last half-century or so, one of the biggest changes in how humans live is the greater share of people around the world who live in cities. The UN Department of Economic and Social Affairs describes some of the patterns in its report on World Urbanization Prospects 2025: Summary of Results (November 2025).

The report defines terms in this way: Cities “are areas with a high density (at least 1,500 inhabitants per km²) and a large population (at least 50,000 inhabitants).” Towns “are urban clusters outside of cities with a moderate density (at least 300 inhabitants per km²) and a population of at least 5,000 inhabitants.” Rural areas have a density of less than 300 inhabitants per square kilometer. With these definitions in mind:

Close to 500 million people lived in cities in 1950, equivalent to just one fifth of the world’s 2.5 billion total population. The remaining four fifths were evenly split between towns and rural areas (figure 1.1). In the decades that followed, the number of people living in cities and towns grew rapidly, while the rural population increased only slowly. By the mid-1970s, city dwellers outnumbered those in rural areas, but towns remained the most common living environment globally. The balance tipped again some twenty years later, in 1996, when the collective population of cities overtook that of towns. Since then, both the number of city dwellers and their share of the global population have continued to grow. In 2025, 45 per cent of the world’s 8.2 billion people lived in cities, 36 per cent lived in towns, and the remaining 19 per cent lived in rural areas. Most of the future growth of the world’s population will occur in cities. Cities are expected to account for two thirds of the projected growth of the world’s population by 2050, with most of the remaining one third of growth concentrated in towns. The global rural population is expected to peak sometime during the 2040s and then begin to gradually decline.
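For readers who want to apply these thresholds themselves, the report’s definitions can be written as a small classifier. This is my own illustrative sketch of the quoted thresholds, not the UN’s actual methodology code:

```python
# Classify a settlement using the density and population thresholds from the
# World Urbanization Prospects definitions quoted above. A dense, large area
# is a "city"; an urban cluster outside of cities is a "town"; the rest is
# "rural".

def settlement_type(density_per_km2, population):
    if density_per_km2 >= 1500 and population >= 50_000:
        return "city"
    if density_per_km2 >= 300 and population >= 5_000:
        return "town"
    return "rural"

print(settlement_type(2000, 100_000))  # city
print(settlement_type(500, 10_000))    # town
print(settlement_type(100, 2_000))     # rural
```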


Around the world, a large share of the growth in city-dwellers over these 75 years has come in Latin America; east, south, and west Asia; and Northern Africa. But some high-income countries continue to have relatively low densities: “In several major high-income countries, including Germany, Italy and the United States of America, towns remain the predominant settlement type as of 2025. Approximately 40 per cent of the total population of these three countries resides in towns, compared to about one third living in cities. … Notably, rural areas have been the most common place of residence in France over the past half-century, a trend that is projected to continue through 2050. A similar pattern is observed in several countries across Central and Eastern Europe, including Austria, Bulgaria, Croatia, Poland and Romania, where rural living remains prevalent.”

The definition of “city” used here is not based on legal boundaries of a city, but on actual population patterns: thus, if a “city” extends beyond its legal boundaries with a high enough population density, it is all counted as a single “city” in these metrics. Here are the 10 highest-population cities around the world in 2000, 2025, and projected for 2050.

This report is about counting population, density, and looking at trends and patterns around the world, not in a direct way about policy implications. But the report does take care to note and emphasize that lower-density towns and rural areas play important social, economic, and development roles along with high-density cities.

In addition, cities always involve a tradeoff (in economic terms) between the benefits and disadvantages of agglomeration. Benefits include having expertise, workers, and consumers clustered together in a way that generates productivity and new ideas. However, that same clustering can also create costs of congestion and networks of crime. Environmental and health effects of cities can be dramatically different depending on how they are organized, what infrastructure is built, and how regulation and geography shape their growth. For 4 billion people, the day-to-day facts of their standard of living, broadly understood, are powerfully shaped by the realities of the urban area in which they live.

AI is Coming: Interview with Anton Korinek

Tim Sablik interviews Anton Korinek “On how rapid advances in AI might reshape the nature of work and how economists can help society prepare” (Econ Focus: Federal Reserve Bank of Richmond, Fourth Quarter 2025).

How AI can be used in economic research:

I use it essentially at all stages of the research process. It starts with ideation, the brainstorming part. I use it to help me with background research. I use it a lot as a writing assistant, giving it bullet points to steer it in a direction and letting it write a few paragraphs based on the points that I provide. I use it to derive economic models because, by methodology, I’m an applied theorist. More recently, since around fall 2024, the latest generation of reasoning models have become very powerful at doing formal math, and that has saved me a lot of time in performing derivations and proving results in economic models.

I use AI quite a bit for coding as well. I’m not currently working on any computational project, but I’m using it to code AI tools to perform text analysis, for example. In some sense, the line between making the AI do work and coding is blurring because I ask AI systems to perform all kinds of tasks, and in some cases, the AI writes code and then executes it. Since I’ve been a programmer for more than three decades now, it’s nice to let the AI do the lower-level stuff and for me to direct where it is going at a higher level in natural language — what people call “vibe coding.” …

Almost all economists I talk to have come to appreciate that these tools can be very helpful. Economics is a very instrumentalist discipline. When economists realize that something is economically useful, they won’t put up a lot of barriers against it. That said, it’s important to acknowledge that we need to be careful with these tools because they do sometimes produce mistakes. They need to be overseen. It’s kind of like working with a research assistant. We would not take everything that research assistants produce for us without checking it, and it’s the same with our AI systems.

How AI may shape the future of taxation:

[I]f we get AI systems that transform society at the same scale as the Industrial Revolution, the economy is going to be a big part of that transformation. Economists are well positioned to provide insights into how our economy might be reshaped.

One thing to consider is that we may have to redesign our systems of taxation. Right now, roughly two-thirds of all income derives from labor, and probably more than two-thirds of all tax revenue comes from taxing that labor income. If the value of labor suddenly falls dramatically because of transformative AI, then we’re going to have to tax differently. I prepared a paper for an NBER meeting on public finance in the age of AI in September where my co-author and I argue that if labor becomes a less important part of the economy, we may want to switch to more consumption taxation. And then if human consumption becomes a less important part of the economy, we may ultimately have to switch to taxing the capital behind the AI systems themselves.

On teaching students about AI:

I would certainly say that we want everyone to be fluent in AI, no matter which level of education we’re speaking about. I’m currently teaching this to Ph.D. students, but it’s also true for undergraduates, high school students, and younger students. We want everybody to know how to use AI because it is such a force multiplier. … I have two kids, they’re 8 and 10. Together with another dad, I’m going to teach an AI course at our kids’ primary school starting in mid-October because we want them to be exposed to this technology. We want them to understand how it works and how to use it responsibly.

If AI advances very rapidly, it may turn out that we are spending a lot of time right now educating the next generation of proverbial spinners and weavers at the beginning of the Industrial Revolution. 

Government Data: Report of the American Statistical Association

Imagine for a moment that you are the sort of American who would like to know facts about the US economy, population, employment and unemployment, energy, crime and criminal justice, health, education, transportation, agriculture, science and engineering, income, consumption, wealth, foreign trade, and stuff like that. As I see it, you have several options.

You can generalize wildly from personal experience. You can repeat assertions from your preferred social media. You can assume that what your preferred politician tells you is correct. You can assume that what your preferred publications tell you is correct. You can assume that the information given to the public by business is correct.

Or, along with these tried-and-true methods, you can turn to government statistical agencies. They are imperfect, of course, as are all human organizations. But the methods they use to collect and to calculate statistics are publicly described, so that they can be questioned and more deeply understood. Also, they try to collect data in the same way over time, making changes only gradually and after public consideration, so that when a government statistic changes, you can understand why. Those who produce the statistics are paid salaries, so they are not selling you something, they do not get paid for clicks, and they are not asking for your vote.

Yes, government statistical agencies are imperfect. But consider the alternatives! If your flashlight is imperfect, throwing it away will not help you see in the dark; if your data are imperfect, having less data is not an improvement. At a bare minimum, having government statistics as a cross-check on those other oh-so-reliable sources of information seems valuable.

As I have noted some years ago, the arguments for the value of government statistics are old and well-known; indeed, they date back to the legislation involved in the first Census, back in 1790. Article I, Section 2 of the just-adopted US Constitution called for an enumeration of people to determine the number of members each state would have in the House of Representatives. But when the bill to enact the first Census came before Congress in 1790, James Madison (then a member of the House of Representatives) argued that there was a great opportunity here to do more than just counting heads, and that it would be useful to gather more information. Our records of Congressional debates from that time do not quote exactly verbatim, but instead are paraphrased. Here are some highlights of what Madison had to say when the topic of just-count-the-people or gather-more-information was debated on January 25 and then on February 2, 1790 (emphasis added):

This kind of information, he observed, all Legislatures had wished for; but this kind of information had never been obtained in any country. … If gentlemen have any doubts with respect to its utility, I cannot satisfy them in a better manner, than by referring them to the debates which took place upon the bills, intended, collaterally, to benefit the agricultural, commercial, and manufacturing parts of the community. Did they not wish then to know the relative proportion of each, and the exact number of every division, in order that they might rest their arguments on facts, instead of assertions and conjectures?

But the level of support for government statistics has been diminishing over time. The American Statistical Association has just published “Nation’s Data at Risk” (December 10, 2025). The report has a number of sensible recommendations for funding and independence of statistical agencies. Here, I just want to point out that budgets for the main statistical agencies have been declining over time in real dollars, and give a sense of what “government statistical agencies” actually means.

These figures show budgets for three main government statistical agencies since 2009: the Bureau of Labor Statistics (which does the surveys and calculations behind official unemployment and inflation rates), the National Agricultural Statistics Service, and the National Center for Health Statistics. Adjusted for inflation, their budgets are down about 14% since 2009.

For broader perspective, here are the 13 main federal statistical agencies. It’s worth emphasizing that the people running the statistical agencies do not make policy. They report the results of data collection.

Finally, I’ll add that US government spending in fiscal year 2025 is about $7 trillion. Add up all 13 of the federal statistical programs, and it’s under $4 billion in a typical year, going up to maybe $8 billion total in a year where the decennial Census is conducted. Thus, total government statistical spending is roughly one one-thousandth of federal spending. Paying for government data is considerably less expensive for the US economy than the costs of not having government data.
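To make that rough fraction concrete, using only the round numbers quoted above:

```python
# Statistical-agency spending as a share of total federal spending,
# using the round figures from the text.

federal_spending = 7_000  # total federal spending, FY 2025, in $ billions
stats_typical = 4         # all 13 statistical programs, typical year, $ billions
stats_census = 8          # a year with the decennial Census, $ billions

print(stats_typical / federal_spending)  # ~0.00057, i.e. under 1/1000
print(stats_census / federal_spending)   # ~0.00114, i.e. just over 1/1000
```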

Antitrust Enforcement Around the World

Most discussions of antitrust enforcement focus on the US and the European Union, but laws about enforcing competition have been spreading around the world. The October 2025 issue of the Review of Industrial Organization includes articles on the history and experience of the enforcement of competition policy in Brazil, China, Egypt, India, and five countries of Central and Eastern Europe. Russell Pittman offers some broader perspective in his “Editor’s Introduction to the Special Issue on Competition Law Enforcement in Developing Countries.” Pittman writes (citations omitted):

There are at least 125 countries and jurisdictions in the world with competition laws—perhaps more. Some readers may not be aware of the relatively recent nature of this state of affairs: There were only 12 competition law regimes worldwide in 1970, and two of these—the Japanese and German—were forced upon the losers by the victors in World War II. Countries with market economies gradually adopted competition laws in the post-war period, to the point that there were about 40 competition laws by the time of the fall of the Berlin Wall (1989). Over the 20 years that followed, the number exploded, to at least 110 by 2010; and, according to one authoritative source, there are 135 today: 129 countries and 6 regional organizations.

What accounts for this explosion? In the most economically advanced of the Central and Eastern European (CEE) countries, reformers had included competition laws in their legal and regulatory agendas from the beginning. In other CEE countries, the desire for an invitation to membership in the European Community clearly played more of a role … More broadly, many developing countries found that loans from the World Bank or the International Monetary Fund, as well as bilateral and multilateral trade agreements with wealthier countries, were conditioned on the writing and implementation of competition laws.

On the side of carrots—as opposed to sticks—developing countries around the world have received technical assistance in writing and enforcing competition laws and training agency staffs from sources as diverse as the U.S. Department of Justice, Antitrust Division, and U.S. Federal Trade Commission, the European Commission, OECD, UNCTAD, the World Bank, and successful younger competition agencies such as the Hungarian Competition Authority, the Japanese Fair Trade Commission, and the Korea Fair Trade Commission.

The experience of competition law varies widely across countries. In Brazil, the laws don’t seem to have had much effect. In China, when antitrust authorities took action against big platform firms like Alibaba, there were strong effects on the stock prices of firms in the same industry–that is, investors in Chinese stocks saw government antitrust as having considerable power. In India, the enforcement authorities have sometimes imposed fines for improper pre-merger notification, but have almost never blocked an actual merger. In my own reading, the evidence suggests that antitrust authorities in these countries do not have much independent power. They rarely confront powerful incumbent firms, although in some cases (China), the competition authority can be used as a club by the central government.

$9.5 Trillion Per Day: Foreign Exchange Markets

The size of the global economy in 2025 is about $117 trillion, according to the IMF. The volume of trading in foreign exchange markets is now up to $9.5 trillion per day, according to the Bank for International Settlements (BIS). Clearly, the volume of foreign exchange trading is much, much larger than the size of international trade in goods and services; for example, exports of goods and services are about 30% of world GDP.

Every three years since 1986, BIS (working with central banks around the world) has sought to measure the amount of trading. The 2025 survey covered more than 1,100 dealers in foreign exchange markets, mostly banks, in 52 jurisdictions. Some results and analysis from the 2025 survey appear in the December 2025 issue of the BIS Quarterly Review. Here, I’ll focus on the overview article by Wenqian Huang, Ingomar Krohn, and Vladyslav Sushko, “Global FX markets when hedging takes centre stage.”

This figure shows the growth of FX markets over time. Overall, turnover is up 27% from the April 2022 data in the previous survey. The colors show the different kinds of financial instruments involved: comparing 2025 to 2022, you can see that the spot market and swaps market are the largest components.

The FX market can be viewed as reflecting several main motives. One is to facilitate transactions, like trade or investment with other countries. But these transactions are far too small to account for $9.5 trillion per day. Thus, the two bigger motivations are either traders trying to make money by trading back-and-forth in different currencies, or those with exposures in different currencies seeking to reduce risk by hedging those exposures. Huang, Krohn, and Sushko describe the recent drivers of the market in this way:

Since the 2022 Triennial Survey, growth in FX volumes was primarily driven by reporting dealers’ trading with financial customers. Both spot transactions and trading of forwards and options with these counterparties rose noticeably. These instruments can be used to adjust exposures to currency risk on existing positions or to speculate on future currency moves. The growth of FX swap turnover was mainly due to trading with institutional investors, reflecting their funding and hedging needs across currencies. Overall, however, FX swap trading has grown only modestly since 2022, reflecting a stagnation in interbank activity.

Several forces shaped FX volumes in April 2025. Announcements of major shifts in US trade policy early in the month and an unexpected depreciation of the US dollar, including a sudden flip in the dollar’s correlations with major asset classes, roiled markets. Market participants rushed to hedge existing dollar exposures against further dollar depreciation (Shin et al (2025); Shin (2025)). This boosted turnover of forwards and options. 

Despite the extraordinary size of the FX market, it seemed to function smoothly, without any obvious signs of stress, even as trading volumes rose dramatically in April 2025.

The foreign exchange markets also show the continuing role of the US dollar as the global reserve currency. Every foreign exchange transaction involves buying one currency and selling another. Thus, it’s conventional to think of the total volume of currencies as equal to 200% (that is, adding 100% of the buyers and 100% of the sellers). Out of that 200%, the US dollar accounted for 89% of all foreign exchange transactions in 2025, followed by the euro at 29%, the Japanese yen at 17%, the UK pound at 10%, and the Chinese yuan at 9%. What often seems to happen, in fact, is that when two non-US currencies are being traded, currency A is first turned into US dollars, and then the US dollars are turned into currency B–so what looks like a single transaction is split into two transactions, with the US dollar involved in both. The share of the US dollar in foreign exchange markets has been roughly the same for the last 25 years, even as these markets have grown very quickly.
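The 200% convention can be confusing at first, so here is a small sketch using the currency shares from the text. The residual “other” share is just the arithmetic leftover for all remaining currencies combined, not a figure from the survey.

```python
# BIS convention: every FX trade involves two currencies, so shares
# across all currencies sum to 200% rather than 100%.
# The five shares below are from the text; "other" is the residual.
shares = {"USD": 89, "EUR": 29, "JPY": 17, "GBP": 10, "CNY": 9}

top_five = sum(shares.values())
other = 200 - top_five
print(f"Top five currencies: {top_five}% of the 200% total")
print(f"All other currencies combined: {other}%")
```

This convention also makes the dollar’s vehicle-currency role visible: when currency A is converted to dollars and the dollars then converted to currency B, the dollar is counted on one side of each of the two trades, boosting its share.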

Medicaid: What It Has Become

As Craig Garthwaite and Timothy Layton point out: “Originally a small, inexpensive safety-net program, Medicaid has grown into a major national health-insurance provider, covering nearly one in four Americans and more people than the public health insurance programs of the United Kingdom, Germany, or France.” They review the program and offer some recommendations in “Coverage Isn’t Care: An Abundance Agenda for Medicaid” (forthcoming in Advancing America’s Prosperity, edited by Melissa S. Kearney and Luke Pardue, published by the Aspen Economic Strategy Group).

I would add that whether you favor government-run national health insurance or oppose it, Medicaid is a major example of such a program in actual operation, and thus worthy of your attention. A few facts:

  • Total Medicaid spending by federal and state governments was $880 billion in 2024. “Medicaid is jointly financed by state and federal tax dollars while being designed and administered by each state. This setup leads to remarkable variation in the program’s structure across the country. … The program’s growth in size and scale means that it now comprises a substantial fraction of state budgets, with the average state spending almost one-third of its budget on Medicaid …” Indeed, a certain number of proposed changes to Medicaid from federal-level politicians focus on reducing federal spending by shifting a greater share of Medicaid spending to states.
  • Medicaid “has expanded gradually from a program of categorical eligibility, restricted to specific low-income groups (such as pregnant women or the disabled), to—with the passing of the Affordable Care Act (ACA)—a broad-based entitlement for nearly all low-income adults.” Medicaid covered about 20 million people during its first two decades, up through the 1980s, but a series of expansions since the 1990s has roughly quadrupled Medicaid enrollment in the last three decades, reaching 78.5 million by December 2024.
  • “This growth has been coupled with a structural shift, with roughly 75 percent of beneficiaries now receiving care through private managed-care organizations rather than government-operated insurance programs. These firms include familiar names from other health insurance markets such as United, Aetna, Humana, and Centene, making the modern version of Medicaid quite different from the classic perception of a safety-net healthcare program run and operated by legions of government bureaucrats.”
  • “Medicaid both pays for 41 percent of births in the US and is the largest single payer for long-term care services in the US. It is the nation’s only true cradle-to-grave insurer. The medical requirements of these many different types of beneficiaries are meaningfully different, and it is therefore likely that the optimal insurance design differs, perhaps greatly, across these groups. Despite this fact, the program largely takes a one-size-fits-all approach and attempts to provide a single comprehensive set of benefits to all enrollees.”
  • “Medicaid involves relatively little expenditure per enrollee. Medicaid accomplishes this feat by paying very low rates to all medical providers. This frugality does not come without meaningful consequences for enrollees. Many providers simply refuse to accept Medicaid enrollees. Others consider treating these patients as a form of charity care. For example, many hospitals declare ‘underpayments’ from Medicaid as part of their contribution to the public good. … Beyond payment rates, state Medicaid programs also often make it fairly difficult for providers to actually get paid. Data suggests that fee-for-service (FFS) Medicaid is the biggest denier of bills from providers, with a “denial rate 17.8 percentage points higher than fee-for-service Medicare” (Gottlieb et al. 2018). Medicaid managed care is the second-most likely to deny, denying just under 10 percent of bills and challenging around 13 percent. Both FFS and managed-care Medicaid also have much longer times to payment, making working with Medicaid a much bigger hassle for providers than working with Medicare or commercial insurers.”

This last point is a central focus of the proposals offered by Garthwaite and Layton. As they say in their title, being covered by Medicaid is not the same as receiving actual health care through that coverage. On the subject of Medicaid reform, they write:

The current [Medicaid] program is defined by a stark economic tension—it promises access to the mainstream medical system while only providing the funding that can support a two-tiered one. This contradiction was manageable when Medicaid was a small program, but now that it covers a quarter of Americans, there is potential for an access crisis. Policymakers must therefore confront a fundamental choice: Continue to chase the mirage of equal access, or build a system that delivers abundant care to all Medicaid beneficiaries within its budget. We argue for the latter. An honest assessment reveals that an implicit—and dysfunctional—two-tiered system is already the reality. …

This effort should begin by explicitly acknowledging the existence of an implicit two-tiered system whereby Medicaid beneficiaries have coverage but lack access to high-quality medical care. Productive reforms should focus on a redesigned program that fosters an abundant supply of providers of basic care for the Medicaid tier. Our proposal focuses on targeted regulatory relief and the integration of new artificial-intelligence technologies (AI) to create lower-cost, sustainable business models for providers who primarily serve Medicaid patients, with the goal of ensuring abundant access to basic care. While some might argue that these types of reforms provide a lower standard of care for low-income Americans and confine them to lower-quality healthcare services, we emphasize that the goal is not to diminish the quality of care received by Medicaid enrollees. Instead, our proposals aim to help the large number of Medicaid patients who currently have access to no care (or very limited care) under the current system to have easy and abundant access to (at least) basic healthcare services.

In that spirit, Garthwaite and Layton argue for allowing the immigration of additional internationally-trained health care providers to serve Medicaid patients, allowing intermediate-level health care practitioners like nurse practitioners and physician assistants to have greater autonomy in providing certain kinds of care, and developing methods for AI-augmented care. They write: “For a beneficiary whose alternative is no access to care, the use of a new, well-designed technology is a clear improvement.” Frankly, I’d be happy to see these kinds of reforms implemented across the entire US health care system. But using them in Medicaid would at least be a start.

The US Financial Services Industry

It’s a standard pattern that as an economy develops, its financial sector becomes a larger share of GDP. After all, banks and bond and stock markets have less of a presence in low-income countries. But in addition, the shape of the financial services industry changes over time. Robin Greenwood, Robert Ialenti, and David Scharfstein explore these shifts and others in “The Evolution of Financial Services in the United States” (Annual Review of Financial Economics, 2025, 17: 189-206). The abstract says:

This article surveys the literature on the historical growth and transformation of the US financial sector. The sector expanded rapidly between 1980 and 2006, during which its contribution to GDP rose from 4.8% to 7.6%. After the global financial crisis, the size of the sector stabilized at approximately 7% of GDP. After reviewing this literature, we examine recent developments, including the continued growth of high-fee alternative asset management and the shift away from banks to lending by nonbank financial intermediaries. We interpret both the growth and recent evolution of the sector as reflecting a continued transition to a more market-based financial system, with risk migrating away from banks and into markets.

To illustrate, here’s a figure showing the US financial sector as a share of GDP going back to 1950, using a bunch of different measures. NIPA stands for “national income and product accounts,” which is how GDP is calculated. The references to “compensation” are because, for an economist, the output of a sector can be measured by how much compensation is paid to that sector. You can see the long rise leading up to the Great Depression, another long rise leading up to the 1990s, and then a levelling out since the Great Recession.

Much of the paper focuses on how the internal structure of the US financial sector has shifted since about 2006, although the total sector has remained much the same. There are two interrelated changes here. The authors write:

First, the growth of alternative investing (hedge funds, private equity) has continued to drive income in the securities and asset management subsector, with the distribution of fees becoming even more of a barbell—high fees for alternative investing and very low fees for traditional asset management. Second, there has been a notable shift in credit intermediation away from commercial banks. We argue that these two developments are likely connected. On the demand side, continued growth of pension funds has fueled demand for high-fee alternative investments, including private credit funds, as well as securities. On the supply side, post-GFC [global financial crisis] financial regulation has supported the development of the nonbank lending sector.

In short, it’s become easier and cheaper to invest using basic tools like a stock market fund. However, more specialized ways of investing like private equity funds or hedge funds have expanded and are still able to charge high fees. Moreover, banks have faced increasing regulatory restrictions since the Great Recession. Thus, many banks no longer make money by lending for home mortgages and business purposes, and using the interest received to pay expenses and some return to savers. Instead, more banks make money by charging fees, and when they do lend money, they often resell the loan to another financial sector player for collection. But as banks take on less risk, those who want to borrow are turning to alternative nonbank sources of finance. Businesses that want to borrow are more likely to do so through the bond market, or through private credit funds and business development companies. Nonbanks owned 40% of home mortgages in 2007, but 65% of home mortgages by 2023.

A big player behind these shifts is large pension funds. The authors explain: “The growth of securities markets and nonbank credit comes alongside the growth of pension funds. And as banks have increased the allocation of their balance sheets to safer assets, pension funds and other investors like insurance companies have stepped in to hold risky credit assets. Thus, both a reduction in the demand for risky credit from banks and an increase in the demand for risky credit by institutional investors can help explain the growth of alternatives, securities, and nonbank credit.”

The broad pattern of the changes is that the investments that most people have in banks or stock market funds are getting easier, cheaper, and probably safer, but investments with a higher degree of risk are being sectioned off into hedge funds, private equity funds, corporate bonds, private credit funds, and others. Ultimately, it’s not clear to the authors (or to me) whether the evolving shape of the US financial system is better for supporting US economic growth, or less likely to melt down during a crisis.

How Much are US Firms Using AI Tools?

A question I’m getting asked a lot recently is whether all this fuss about AI is just a bubble. My standard answer is that two things are happening here, and it’s wise not to confuse them. One is the sky-high stock prices for companies closely involved in AI. The other is whether AI itself will actually end up mattering a great deal to the US economy.

I have no particular insight into the short- or medium-term dynamics of the stock market, much less individual stocks. But I wouldn’t be much surprised if stock prices for some of the currently leading AI companies drop substantially at some point in the next few years, nor would I be surprised if some AI companies I’ve never heard of see their stock price boom. It’s also perfectly possible that the stock price of some leading AI companies falls, but the reality of AI in the production of goods and services rises.

The stock market isn’t actually part of the real economy of goods and services. This is literally true: the stock market isn’t part of the gross domestic product, because when stock is bought and sold, there is just an exchange of an asset, but nothing is actually produced. For the same reason, building a new house is counted as part of GDP (that is, something is produced), but selling an existing house is not part of GDP (it’s just an exchange of an asset). There’s a saying that helps to clarify the separation between stock market prices and the real economy: “The dot-com stock market boom of the 1990s was a bubble, but the Internet was not a bubble.”

Of course, although the stock market and the real economy are not the same, there are feedbacks between them. If the stock market drops, I’ll see it in my retirement account, and the signal of a declining stock market will make it more expensive for firms to raise capital. Conversely, the stock price for AI companies will depend on the extent to which their products create value for firms and individuals. Here, I’ll focus on some recent survey evidence on how broadly AI tools are being adopted in corporate America–and for all the talk about AI, the adoption rate so far is lower than you might expect.

As one example, the US Census Bureau does a Business Trends and Outlook Survey based on responses from 1.2 million US businesses. Here are some results from the BTOS as updated on November 20, 2025, in a figure showing businesses that are using AI tools. The blue line shows those that have used such tools in the last two weeks, which has risen from 5% of firms at the start of 2024 to about 10% near the end of 2025. The orange line shows firms that expect to use AI tools in the next six months. The lines are trending up, but not skyrocketing. A more detailed breakdown by industry shows higher levels of use in information industries, finance, and in professional/scientific/technical services, and near-zero levels in manufacturing and retail.

Here’s a breakdown of the Census data by size of firm. As you can see, the use of AI by the biggest firms (top line) seems to have levelled off or even dropped a bit in the last six months or so. The category of firms with the biggest ongoing rise in AI use are the small firms (lightest blue line) with 1-4 employees.

Here are some other survey results, these from the National Opinion Research Center at the University of Chicago. Malihe Alikhani, Ben Harris, and Sanjay Patnaik report the results in “How are Americans using AI? Evidence from a nationwide survey” (Brookings Institution, November 25, 2025). Sample size here is about 1,100 people, selected to be a nationally representative sample. They emphasize that those with higher education levels are more likely to be using AI tools.

This figure shows how AI tools are being used in the workplace, broken down by education level. The most common use is those with a BA degree using it for writing and editing documents. But many of these groups are reporting 10% or fewer people who use AI tools, “supported by your institution,” in the workplace. Again, the current use of these tools is lower than one might expect.

Indeed, the NORC survey asks some follow-up questions on whether workers feel that the AI tools are improving their productivity, and the results are decidedly mixed:

The impact of generative AI on worker productivity is often unclear, even to the workers themselves. Only 19% of all respondents report that AI increased their productivity in their daily tasks, and only 4% say it increased their productivity significantly. Even among respondents with a bachelor’s degree or more, just 28% say that AI increased their productivity in daily tasks. More than one in five respondents report that their daily productivity remained the same (22%) and over half of all respondents say they are either not sure about the effect of AI on their productivity or say it does not apply to them (53%).

Putting these kinds of results together, it’s perhaps no surprise to see a story in the Economist magazine (November 26 issue) headlined “Investors expect AI use to soar. That’s not happening” and subtitled “Recent surveys point to flatlining business adoption.” The article whimsically suggests that the GPT in ChatGPT might stand for “Generally Paused Technology.” Using the Census data above on the size of businesses using AI tools as well as other survey and research results, the article estimates that the share of Americans using AI at work has recently declined slightly.

Of course, the uncertainties around how to design AI tools for business and personal use are very large at this stage. Sometimes when a remarkable new technology is being adopted, there is a period of several years where many people are learning about the technology and multiple applications are being developed, but this doesn’t show up in the big-picture statistics. Then at some point, some of those applications gain traction with a broad group of users and the technology takes off. But at least so far, with the current abilities of AI and the AI applications currently available to US firms, that launching point hasn’t happened yet.