Winter 2026 Journal of Economic Perspectives Freely Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided back in 2011, to my delight, that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Winter 2026 issue, which in the Taylor household is known as issue #155. Below that are abstracts and direct links for each of the papers. I plan to blog more specifically about some of the papers in the next few weeks, as well.

____________

Symposium on Fertility Rates

“The Likelihood of Persistently Low Global Fertility,” by Michael Geruso and Dean Spears

     For the world as a whole, average birth rates have been falling for decades—from about 5 in 1950 to a little above 2 today. Two-thirds of people today live in a country where the birth rate is below an average of two children per two adults, which means below the fertility level needed to sustain population sizes (without net migration). In this paper, we assess whether low fertility is likely to persist as a global phenomenon. We distinguish cohort birth rates, which matter for generation-to-generation population change, from period birth rates, which present a snapshot of birth rates at a point in time, but may offer less insight on longer-run possibilities. Where cohort birth rates have fallen low, they have not subsequently rebounded. We show that both increasing rates of lifetime childlessness and smaller family sizes among parents have contributed to falling cohort birth rates. Pronatal policies, we discuss, can have large effects on the annual fertility data without substantially changing the average number of children women have over their lifetimes. Although future birth rates remain uncertain, we conclude from the evidence that, over a long horizon, persistent low fertility is a likely future.

“How Much Would Continued Low Fertility Affect the US Standard of Living?,” by David N. Weil

     I assess the effect of continued sub-replacement fertility on age-adjusted consumption per capita. Channels assessed include transfers from working-age adults to children and the elderly, the effect of the labor force growth rate on required capital investment, sustainability of government debt, the interaction of population size with fixed natural resources (including a clean environment), and the effect of population size on the speed of technological progress. To isolate the effect of low fertility from other ongoing demographic changes, I use simulation models as well as projections from the United Nations and Social Security Administration that vary fertility rates while holding other factors constant. My main finding is that the impact of low fertility is likely to be negative but small. In addition, this negative impact arrives only after a long adjustment period. An increase in fertility back to the replacement rate would lower the standard of living for several decades.

“Family Institutions and the Global Fertility Transition,” by Paula E. Gobbi, Anne Hannusch, and Pauline Rossi

     Much of the observed cross-country variation in fertility aligns with the predictions of classic theories of the fertility transition: countries with higher levels of human capital, higher GDP per capita, or lower mortality rates tend to exhibit lower fertility. However, when examining changes within countries over the past 60 years, larger fertility declines are only weakly associated with greater improvements in human capital, per capita GDP, or survival rates. To understand why, we focus on the role of family institutions, particularly marriage and inheritance customs. We argue that, together with the diffusion of cultural norms, they help explain variations in the timing, speed and magnitude of the fertility decline. We propose a stylized model integrating economic, health, institutional and cultural factors to study how these factors interact to shape fertility transition paths. We find that family institutions can mediate the effect of economic development by constraining fertility responses.

“Global Labor Mobility between Shrinking and Growing Labor Forces,” by Lant Pritchett

     Falling fertility and improved mortality create a powerful and inexorable demographic arithmetic of ageing in the coming decades around the world, with three patterns. The richest countries, along with China and the Former Soviet Union, will see absolute declines in the labor force aged (15–64) population and absolute rises in those 65 plus. All major developing country regions except for Africa: Latin America, South-East Asia and Pacific, South Asia, and West Asia/Middle East, will experience modest growth of the labor force aged population to 2050 (less than 30 percent), combined with rapid growth of those over 65 (doubling or tripling). The fall in fertility in Africa (Sub-Saharan and North) started later and has fallen much less and hence, in standard scenarios for 2050, Africa will account for 80 percent of all global net growth in the world’s labor force aged population. A fundamental feature of the global economy over the medium-run to 2050 is that the highest labor productivity countries will have absolutely fewer native-born workers and Africa, home to many of the world’s lowest productivity countries, will have 800 million more labor force aged people. The combination makes possible gains on the order of trillions of dollars from policies creating legal pathways to allow people, particularly youth, to move from low productivity, labor abundant places to high productivity, labor scarce places. But, so far, politics has not found the way to “yes” for this win-win scenario.

Symposium on Competition in Labor Markets

“Labor Market Power: From Micro Evidence to Macro Consequences,” by David Berger, Kyle Herkenhoff, and Simon Mongey

     The traditional theoretical and empirical “micro approach” to studying labor market power (or monopsony) requires that firms are small and atomistic. This is at odds with the reality of labor markets in which monopsony potentially matters most. Empirically, many markets are concentrated and characterized by large, dominant employers. The actions of large employers in an occupation or industry affect local and national wages, employment and output. Employers that understand their largeness may then act strategically when hiring and setting wages, generating misallocation and harming workers. This paper advocates for a “macro approach”: (1) directly model equilibrium behavior of large employers, (2) combine macro data and empirical estimates of employers’ responses to policy changes—obtained using the “micro approach”—to estimate the model, (3) use the model to compute the aggregate costs of monopsony, and optimal policies. This approach provides new perspectives on minimum wage and antitrust policy.

“Antitrust Enforcement in Labor Markets,” by Elena Prager

     Until recently, antitrust laws were rarely enforced in labor markets. Although the existence of labor market power has long been recognized, evidence only recently emerged that such market power regularly arises from sources that are actionable under antitrust law. Since 2010, antitrust agencies have substantially increased labor market enforcement actions. However, many questions relevant to enforcement remain unanswered, such as how to conduct market definition for labor markets and how best to incorporate concentration into models of the labor market. This article reviews how antitrust is beginning to be used in labor markets, the evidence for and against its use, and the remaining evidence gaps standing in the way of more effective use.

“The Economics of Noncompete Clauses,” by Evan Starr

     For over 600 years, debates over noncompete clauses have centered on whether they function as efficient contracting tools or anticompetitive restraints on workers. This article reassesses that debate in light of recent policy attention and new empirical and theoretical research. Proponents argue that noncompetes are necessary to protect investments in training and trade secrets, increasing productivity and wages. However, recent studies indicate that the widespread use of noncompetes—frequently extending beyond roles involving sensitive information—and their enforceability lower mobility, wages, innovation, and entrepreneurship. Moreover, in many cases, less restrictive contractual terms appear to safeguard firm interests. Evidence of spillovers to other workers and across state boundaries, as well as behavioral effects even when noncompetes are unenforceable, raises questions about whether existing state-level enforcement regimes adequately address their observed impacts.

“Occupational Licensing in the United States,” by Janna E. Johnson

     Occupational licensing—the requirement that individuals attain a license to legally perform a specific job—is now necessary for over a fifth of the US workforce. The policy is intended to protect consumers by ensuring members of licensed occupations meet a minimum quality standard but comes at the cost of higher prices for their services. Economic theory and research support the argument that at least in some cases the costs of licensure exceed its benefits. Incumbent members of licensed occupations gain from the higher wages caused by licensure policies, creating a strong incentive for them to push for stricter regulations and resist any efforts to remove or loosen licensure requirements. However, despite bipartisan interest in licensure reform, data limitations and vast heterogeneity in licensure policies limit the usefulness of existing research in guiding its design.

Symposium on Asian Americans

“Asian Immigration to the United States in Historical Perspective,” by Hannah M. Postel

     Asian Americans are the fastest-growing immigrant group in the United States, yet Asian immigration remains relatively understudied in quantitative social science. This paper reviews the historical evolution of Asian immigration, focusing on six major origin countries—China, Japan, India, the Philippines, Korea, and Vietnam—to show how US immigration and foreign policy shaped the size and composition of immigrant arrivals. It then examines subsequent patterns of demographic composition, geographic settlement, and socioeconomic characteristics. Taken together, the evidence highlights the enduring influence of US policy regimes on Asian immigration over time.

“From Asia, with Skills,” by Gaurav Khanna

       This paper examines the rise of high-skill migration from Asia to the United States since 1990 and its consequences for sending and receiving economies. Over 1990–2019, migrants from India, China, South Korea, Japan, and the Philippines accounted for over one-third of US growth in software developers and a quarter of the increase in scientists, engineers, and physicians. Using census microdata, visa records, and administrative sources, I show how growing US demand for talent in information technology, higher education, and healthcare interacted with Asia’s demographic and educational transformations. Policy reforms in the H-1B, F-1, and J-1 programs and sectoral shifts—such as the internet revolution and aging-related healthcare demand—generated persistent needs for foreign students and workers. Asian economies were uniquely positioned to meet this demand through tertiary expansion, strong STEM institutions, English proficiency, and diaspora networks. These inflows boosted US innovation while fostering “brain gain” and “brain circulation” in Asia.

Features

“Recommendations for Further Reading,” by Timothy Taylor

When Social Security Went Haywire in the 1970s

A useful way to look at the generosity of Social Security payments is to look at the “replacement rate” of benefits. This can be calculated in various ways, but a standard approach is to compare the Social Security payments that would be received by someone retiring at the “normal retirement age” with an average of that person’s top 35 years of pre-retirement earnings, with those earnings adjusted for the growth rate of national average wages over time (and in this way capturing an aspect of both inflation and growth over time). In the early 1970s, the replacement rate went haywire, which is part of what caused the Social Security system to need rescuing in the early 1980s, and what will cause it to need rescuing again by the early 2030s. Andrew Biggs tells the story in “It’s Fine to Embrace FDR’s Vision of Social Security, But You Don’t Need to Embrace Nixon’s and Carter’s” (Little Known Facts, January 29, 2026).
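To make the mechanics of a replacement-rate calculation concrete, here is a minimal sketch in Python. The earnings path, wage index, and benefit amount are invented for illustration; the actual Social Security benefit formula involves bend points and other rules not shown here.

```python
# Stylized replacement-rate calculation (illustrative numbers only; the actual
# Social Security formula uses bend points and other rules not shown here).

def wage_indexed_average(earnings_by_year, wage_index_by_year, retirement_year, top_n=35):
    """Index each year's earnings to retirement-year wage levels, then average the top 35."""
    indexed = [
        earnings * wage_index_by_year[retirement_year] / wage_index_by_year[year]
        for year, earnings in earnings_by_year.items()
    ]
    top = sorted(indexed, reverse=True)[:top_n]
    return sum(top) / len(top)

def replacement_rate(annual_benefit, earnings_by_year, wage_index_by_year, retirement_year):
    """Annual benefit as a share of wage-indexed average career earnings."""
    avg = wage_indexed_average(earnings_by_year, wage_index_by_year, retirement_year)
    return annual_benefit / avg

# Hypothetical 40-year career: earnings and a national wage index that both grow 4% a year.
career = {1986 + t: 20_000 * 1.04**t for t in range(40)}
wage_index = {1986 + t: 1.04**t for t in range(41)}

# Suppose the annual benefit is $25,000 (made-up); print the implied replacement rate.
print(f"{replacement_rate(25_000, career, wage_index, 2026):.0%}")
```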

This graph tells the basic story of what happened. Focus first on the line showing replacement rates for a “medium” wage-earner. At the start of the system, the medium wage-earner had a replacement rate of about 24%, but Congress raised this level in the 1950s, and by the late 1960s it was about 29%. Then there is a dramatic spike: the Social Security replacement rate for a medium earner rises from 29% around 1970 to 51% by 1980. It then sags back down to about 40% by the mid-1980s, where it has stayed since.

Since the beginning of Social Security, the system has been set up so that while those with high incomes did and do receive higher monthly payments, the share of income replaced is higher for those with low incomes. The graph above shows the higher replacement rates for those with lower levels of income. It also shows how the spike in Social Security replacement rates applied to all groups.

So what happened? Biggs lays out details of the timeline, and who knew what. His basic story is that Congress wanted to adjust Social Security for rising inflation, but the formula that was enacted into law (obviously) did a lot more than just keep benefits aligned with inflation. He argues that many members of Congress literally didn’t know what they were doing.

He digs into the background staff reports at the time. For example, who can forget that timeless 1973 blockbuster from the actuaries, “Some Aspects of the Dynamic Projection of Benefits Under the 1972 Social Security Amendments”? Biggs makes the case that, behind the scenes, many staffers knew that the new formula would spike the Social Security replacement rate, but because the staffers wanted that to happen, they didn’t spell it out to Congress. The chief actuary for Social Security said in 1970: “Certain of the top policy-making officials at the Social Security Administration (who are holdovers from the Johnson Administration) have strong beliefs in the desirability—even the necessity—of the public sector taking over virtually all economic security provisions for the entire population and thus eliminating private efforts in this area.”

But the dramatic rise in replacement rates had consequences. According to calculations by Biggs: “Had Social Security replacement rates simply been kept at their 1969 levels, and therefore missed the large benefit increases of the 1970s, the average benefit for a new retiree in 2026 would have been about $20,375, about 25 percent less. That difference alone would have almost certainly avoided Social Security’s near-death experience in 1977, its brush with insolvency in 1983 and the projected exhaustion of the trust funds in 2032.”

Biggs also grasps the nettle of a more difficult policy question: Does this history have lessons for the necessary reform of Social Security finances by the early 2030s? After all, the Social Security reforms of the early 1980s put the system on a path it has followed for more than four decades, and the reforms of the early 2030s may establish a path all the way to 2100. There are shorter-term and longer-term issues here.

In the shorter-term, there are elderly folks already relying on Social Security, and others who have been planning their retirement based on often-repeated promises of the Social Security income they will receive after retirement. It would seem grossly unfair, even cruel, to break those promises.

But now consider those born since, say, 2020. That generation will not enter the workforce in a significant way until around 2040, and some of them (say, those who attend college and graduate school) not until later than that. Even by the early 2030s, they will not have paid any taxes into the Social Security system, and they will not have built their financial futures based on, say, what their retirement income will look like when they turn age 65 in 2085. It would not be unfair or cruel to redesign Social Security for that generation.

Biggs puts forward his own view of what should happen. He points out that when Social Security was first enacted in the 1930s, it was in an economy where very few people had private pensions, and self-owned contributory retirement accounts like IRAs and 401k accounts didn’t exist. As he writes, it is “far, far easier for middle- and high-income Americans to save for retirement on their own today than it was in the world of 1935.” Thus, Biggs suggests that for this next generation, Social Security reforms should transition to a flat benefit, received by every elderly person, that would provide a robust above-the-poverty-line safety net for all seniors.

I’m uncertain about that recommendation. Part of the allure of Social Security is that it has not been a pure safety-net program for the elderly poor. Instead, even though it has always involved some redistribution from those with higher incomes to those with lower incomes, the benefits you receive are linked, to some extent, to what you paid into the system.

However, Biggs seems clearly correct on three points. First, the financial ecosystem that helps those with middle and high incomes save for retirement is quite a bit different than in the 1930s. Second, the current replacement rates and tax rates in Social Security are not the original vision of the program. Instead, the current replacement rates were foisted on the public without discussion in the early 1970s, and the current tax rates were enacted in the early 1980s as necessary to support those higher replacement rates. Third, when Congress gets around to redesigning the future of Social Security–probably in the early 2030s–for future generations who were born only recently, or not yet born at all, we should not treat the current structure of Social Security as sacrosanct.

Tariffs and Inflation: Where Are We?

One of the predictions made by economists when President Trump announced the start of his freewheeling tariff policies in April 2025 was that the costs of the tariffs would ultimately be passed through to consumers, leading to overall higher inflation. Well, President Trump has been tossing out tariff threats, keeping some and withdrawing others. However, the Consumer Price Index showed a rise in total prices of 2.7% for the 12 months through December 2025. So where’s the inflation?

The short answer goes like this: Imports are about 14% of the US economy. Say that the tariffs, with all exceptions and delays factored in, are imposed at an average rate across all imports of about 10%. If the 14% of the economy made up of imports experiences a 10% price increase from tariffs, then the effect on overall consumer prices would be about 1.4%. The evidence suggests that’s roughly what’s happening.
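That back-of-the-envelope arithmetic can be written out explicitly. Here is a minimal sketch using the rough figures above (a 14% import share and roughly a 10% average tariff increase), with the pass-through rate treated as a parameter rather than a known quantity:

```python
# Back-of-the-envelope effect of tariffs on the overall consumer price level.
# Numbers are the rough figures from the text, not precise estimates.

import_share = 0.14         # imports as a share of the US economy
avg_tariff_increase = 0.10  # average effective tariff increase across imports

for pass_through in (0.8, 1.0):  # share of the tariff passed on to consumer prices
    price_level_effect = import_share * avg_tariff_increase * pass_through
    print(f"pass-through {pass_through:.0%}: ~{price_level_effect:.1%} added to consumer prices")

# With full pass-through this is about 1.4 percentage points; with 80% pass-through,
# about 1.1 points -- roughly the range attributed to tariffs later in the post.
```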

For an overall picture of the rising US tariffs, the Yale Budget Lab has published an update of its “State of Tariffs” report, dated January 19, 2026. The “average effective tariff” is calculated across all imports of goods. You can see the high tariff rates of the 19th century, and the much lower rates since the end of World War II. In the graph, the “pre-substitution rate” refers to the effective tariff rate before patterns of imports adjust to the higher tariffs, while the “post-substitution rate” refers to the average rate after the adjustment of imports has happened.

For a focus on what happened in 2025 in particular, Gita Gopinath and Brent Neiman have published “The Incidence of Tariffs: Rates and Reality” (University of Chicago, Becker-Friedman Institute, BFI Working Paper 2025-151, January 2026). They differentiate between the announced or “statutory” rates and the actual rates with exceptions and timing lags included. The “weights” and “moving weights” are their version of the pre-substitution and post-substitution lines in the previous figure. As they point out: “A key reason why the price impact of the tariffs remains below many forecasts made in April is that the implemented policy remains much smaller than the announced policy.”

Gopinath and Neiman also look at the extent to which tariffs are being passed through in US prices. Their method involves looking at detailed breakdowns of US prices charged for specific goods. They can then trace whether the goods that had tariffs imposed on them rose in price by more than other goods that did not have tariffs imposed. They carry out this exercise in a bunch of ways, focusing in some examples on tariffs on goods and in others on tariffs by country. They find: “When a 10 percent tariff is imposed on an imported good, U.S. importers appear to pay 8-10 percent more, including the tariff, for that good.”

The Yale Budget Lab calculates that the additional costs of the tariffs so far work out to about $1,400 per year for the median household. However, if the cost of the tariffs is expressed as a percent of income, rather than as a dollar amount, the negative effect is biggest for those with the lowest income levels, because they rely more heavily on less-expensive imported products.

The Yale Budget Lab also points out that the Federal Reserve has easy access to these kinds of estimates, and thus when the Fed looks at the 2.7% annual rate of inflation in December 2025, it mentally does the calculation: imports are 14% of the economy, tariffs are up about 10%, and 80-100% of the tariffs are passing through to prices. Put it together, and about 1.1-1.4 percentage points of the 2.7% inflation rate is probably due to the tariffs. To put it another way, without President Trump’s tariff policy in place, the measured inflation rate would probably be below the Fed’s target rate of 2% per year. Affordability concerns for the general public would be slightly diminished, and the Fed would be more willing to reduce policy interest rates more quickly. Meanwhile, President Trump’s bigger promises about how tariffs would save the US economy are not aging well.

The Liminal Status of US Immigration Policy

The US Congress has not enacted major changes to US immigration policy since the 1990s, three decades ago. Presidents of both parties have thus become accustomed to enacting immigration policy by decree, or “executive authority.” However, such policies are easily amended or erased. The result is that millions of immigrants live in a liminal status–that is, an uncertain space where the rules are unclear, where someone crossing the border may be allowed to enter the country under one set of rules, but then have those rules change. Pia M. Orrenius delivered the Presidential Address of the Southern Economic Association in 2025 on the subject “Temporary Fixes, Permanent Problems: Implications of the Growing Reliance on Liminal Status in U.S. Immigration Policy” (co-authored with Madeline Zavodny, Southern Economic Journal, October 2025, pp. 181-193).

This figure shows the number of monthly “encounters” between the US Border Patrol and potential border-crossers with the blue line, measured on the left-hand axis. The red line and the right-hand axis show the share of potential border-crossers released into the United States. Perhaps unsurprisingly, a rising share of border-crossers being released into the US (red line) was accompanied by a rising number of border encounters (blue line).

Set aside questions over the merits of this policy for a moment. My theme here is that the Biden administration chose to alter US immigration rules not as a result of a law debated and passed by Congress, but purely as a matter of “executive authority.” The Trump administration has pushed back with its own alteration of immigration rules, again through its own “executive authority.” In the US Constitution, Article 1 lays out the structure and powers of Congress. In Section 8, Clause 4, Congress is given the authority “To establish an uniform Rule of Naturalization.” Presidents and the executive branch are supposed to be carrying out the rules, not making them.

Orrenius and Zavodny offer some estimates of the size of this liminal population of immigrants, which has risen dramatically in the last decade or so. The categories are a reminder of how immigration policy has been evolving over time.

Temporary Protected Status (TPS) was created by the Immigration Act of 1990 “and was first designated to help migrants who had fled civil war in El Salvador remain in the U.S.” The Department of Homeland Security can designate countries where it would be unsafe to return. “As of the end of fiscal year (FY) 2024, almost 1.1 million immigrants from 17 countries were protected under TPS.”

Deferred Action for Childhood Arrivals (DACA) was created by the Obama administration in 2012 “in order to provide temporary legal presence to unauthorized immigrants who were children when they entered the U.S.” The Trump administration partially rescinded it, but courts blocked them from going further; the Biden administration tried to reinstate the original version, but courts blocked them from going further. “Almost 835,000 young people have been approved for DACA since 2012, including over 537,000 active beneficiaries as of the end of FY 2024.”

Under the Special Immigrant Juvenile (SIJ) program, “child immigrants who are unauthorized and have been abused, neglected, or abandoned may be eligible for a permanent resident visa.”

The Nonimmigrant T/U adjustment group refers to “immigrants who have been victims of criminal activity or trafficking,” who under certain circumstances “are eligible for a U or T nonimmigrant (temporary) visa, which allows them to apply for permanent residence after 3 years and to work in the meantime. The number of U visas for principal applicants is capped at 10,000 a year, and there is a substantial backlog—almost 400,000 petitions by victims and their family members were pending at the end of FY 2024. While they wait, victims with bona fide claims and their family members are protected from deportation and can apply for work authorization, but even that process takes years …”

Asylum seekers and the paroled are, at least in my mind, closely related groups: “A growing number of people with temporary legal presence in the U.S. are asylum seekers. Some of them presented themselves at the border, asked for asylum, and were paroled into the U.S. Others were pre-approved for parole status and sought asylum upon arrival. Yet others asked for asylum after illegally entering the U.S. and being apprehended by U.S. Border Patrol and were released into the U.S. to await determination of their asylum claim. There were so many of the latter that the Biden administration effectively eliminated the ability of apprehended migrants to ask for asylum in the summer of 2024. Regardless, it will take years for asylum officers and immigration courts to weigh the merits of pending asylum claims. As of the end of FY 2024, there were over 2.4 million asylum claims awaiting adjudication. In the meantime, asylum applicants can apply for work authorization. If their asylum claim is approved, they can adjust to permanent resident status. Typically, only about 20% to 30% of asylum claims are approved. If their claim is denied, they are supposed to leave the U.S., but evidence suggests many remain in the U.S. without legal status, adding to the unauthorized population.”

In addition to those with liminal status under these various rules, the number of those with explicit status under a temporary visa is also on the rise.

As the authors write: “The other large category of temporary migrants is those with temporary work visas, who make up the great majority of employment-based immigration. Over the last 10 years, the annual average ratio of temporary to permanent work-based visa issuances is 35 to 1. The U.S. issues temporary worker visas in an alphabet soup of categories. The most notable temporary worker visa categories are the H-1B visa for high-skilled professionals in specialty occupations; the H-2A visa for agricultural workers; the H-2B visa for non-agricultural workers; and the J-1 visa for exchange visitors. Together, they accounted for over 800,000 workers each year since 2018.”

In addition, there are F-1 visas for students: “International students who hold an F-1 visa can work under certain circumstances, including for up to 3 years after graduation under the Optional Practical Training (OPT) program if they majored in a STEM field. That post graduation work period gives STEM graduates multiple shots at winning a visa in the H-1B lottery. In 2023, almost 400,000 international students who had graduated from a U.S. university had work authorization, and another 140,000 were authorized to work off-campus while completing their studies.”

In a big-picture sense, the US population is entering a period of demographic transition, with lower birthrates and longer life expectancies. The ratio of retirees per worker is likely to rise. The US is overdue for a rethinking of immigration policy that doesn’t depend on temporary categories or on the caprice of whoever is president. Orrenius and Zavodny argue for a rise in the total levels of legal and permanent immigration, and I agree.

I’m confident that a Biden-style immigration policy of just releasing most people who show up at the US border into the US economy for indeterminate periods of time is not optimal. About two-thirds of all legal and permanent US immigration is based on family unification, with the other one-third employment-related. As a starting point, I would suggest keeping the levels of immigration related to family unification at the same level, but doubling or tripling the employment-related immigration. One specific step would be to make it more straightforward for foreign students at US colleges and universities to find a pathway to US citizenship. There is a global contest to attract talent, and the current US immigration system is a hindrance in that competition.

China’s Belt and Road Initiative: The Final Act

Lending from state-owned Chinese banks to developing countries took off around 2010. From China’s view, there were several justifications. The public name given to the wave of lending was the Belt and Road Initiative, with the broad idea of building infrastructure and trade connections across Asia, the Middle East, and Africa (for some posts in the last few years on the initiative, see here, here and here). The increasing trade ties would help to assure China’s economy of access to raw materials, while also providing additional markets for China’s exports. In addition, these deals often involved other countries borrowing money from China in substantial part to hire Chinese companies to come and build the infrastructure. Hovering just behind these economic arguments was a hope or belief that China would deepen its political connections with the borrowing nations, and its influence over these nations.

Sebastian Horn, Carmen M. Reinhart, and Christoph Trebesch bring the story up to date in “China’s Lending to Developing Countries: From Boom to Bust” (Journal of Economic Perspectives, Fall 2025). Here are two figures to show the trends:

This figure shows net lending by China and its state-owned banks to developing countries. As you can see, lending takes off around 2010, dries up around 2019, and in the last few years turns into repayments.

This companion figure shows total “official” debt of developing countries–that is, not debt to private banks or companies. The red line showing “debt to China” exceeds total debt to the IMF by around 2007 and has been roughly equal to total debt to the World Bank since about 2016.

A recent report by Mengdi Yue, Diego Morro, Nicolo Capirone, and Yiyuan Qi, published by the Global China Initiative at Boston University, focuses on Chinese loan commitments to Africa (“Selective Engagement and Strategic Retooling: The Chinese Loans to Africa Database, 2000–2024,” January 2026).

As the United States learned back in the 1980s and 1990s, it’s easy to be popular during the opening act of international lending. But when the loans aren’t getting repaid in time, the lender can become unpopular pretty quickly.

When countries are having a hard time repaying debt, the standard policy response is to get all the lenders together in a room, and have them all agree that they will refinance the loan and take less money. (This is essentially the “Paris Club” shown in the figure above.) The logic is that no single lender will agree to take less if everyone else is being repaid in full. However, if everyone insists on getting repaid in full, and the result is an economic collapse in the borrowing country, then the lenders could end up with very little indeed. Thus, better to agree on taking a moderate loss now, rather than risking a much larger loss later. However, China’s lenders have been largely refusing to participate in these arrangements, instead holding the position that everyone else should take the losses instead.
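The coordination logic can be illustrated with a toy example. The payoff numbers below are entirely made up; they simply capture the idea that joint restructuring beats a joint refusal, while each individual lender would still prefer that everyone else take the loss:

```python
# Made-up payoffs illustrating why coordinated debt relief can beat holding out.
# Two lenders are each owed 100. If both insist on full repayment, the borrower
# collapses and each recovers only 30. If both accept a restructured 70, each gets 70.
# If one holds out while the other grants relief, the holdout does best of all --
# which is why each lender wants everyone *else* to take the loss.

payoffs = {  # (lender_1_action, lender_2_action) -> (recovery_1, recovery_2)
    ("hold out", "hold out"): (30, 30),
    ("hold out", "accept"):   (90, 50),
    ("accept",   "hold out"): (50, 90),
    ("accept",   "accept"):   (70, 70),
}

for (a1, a2), (r1, r2) in payoffs.items():
    print(f"lender 1 {a1:>8}, lender 2 {a2:>8}: recoveries {r1}, {r2}")
```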

It will take a few years to sort out the legacy of China’s foreign lending and the Belt and Road Initiative from about 2010 to 2019. Some of the infrastructure plans worked out just fine; others looked better on the drawing board than in reality. Some countries have closer ties to China as a result; others are now looking at Chinese influence and money with a wary and even hostile eye. How the debts to China’s lenders are ultimately resolved, for better or worse, will be the climax of the story.

How Slavery Held Back Growth: Looking Across the River

Imagine a situation where a substantial area is run by a heavy-handed organized crime group. Those at the top of the organized crime pyramid live like royalty, with palatial housing and the highest-end food, drink, and clothing. Those who live in this area pay for this luxury in a variety of ways: payoffs for starting or running a profitable business, limits on jobs, higher prices for goods and services, and an ongoing shadow of violence.

Now imagine that someone argued that organized crime was a great success for the local economy. For evidence, this person adds up the value of economic activity within the organized crime empire, and points to the high incomes, wealth, and political influence of the organized crime leaders.

But of course, the fact that this hypothetical organized crime organization makes money, especially for its leaders, doesn’t make it economically beneficial. Similarly, the fact that slavery was the basis for a large agricultural economy and was profitable for slaveowners doesn’t make it economically beneficial, either. To know if something is “beneficial,” one needs to engage in what economists call “counterfactual” reasoning: that is, what would the economy have looked like otherwise?

As a first example of one way to make this comparison, Hoyt Bleakley and Paul W. Rhode consider “The Economic Effects of American Slavery, Redux: Tests at the Border” (June 2024, NBER Working Paper 32640). They take inspiration from the famous voyage of Alexis de Tocqueville and Gustave de Beaumont to America in 1831-32. When they were travelling down the Ohio River, with the free state of Ohio on one bank and the slave state of Kentucky on the other bank, it seemed obvious that the Ohio side was flourishing, while the Kentucky side was not. De Tocqueville wrote (translated from the French): “It is impossible to attribute those differences to any other cause than slavery. It brutalizes the black population and debilitates the white. One can see its deathly effects, yet it continues and will continue for a long time. […] Man is not made for servitude.”

Many analyses of slavery look at state-level or regional-level averages of South vs. North. Instead, Bleakley and Rhode focus on a long narrow stretch of land on both sides of the border between free and slave states. They write: “We take the testing ground of de Tocqueville and de Beaumont — the upper Ohio River valley — and extend the comparison east to cover the borders dividing Pennsylvania and New Jersey from Virginia, Maryland and Delaware and west to contrast free states of Illinois and Iowa with the slave state of Missouri. The border was the dividing line between slavery and free labor institutions within the same country, with a common language, national laws, and shared heritage.”

The pattern they find is that land is only about half as likely to be utilized on the slave side of the border; on the free side, investments in land-clearing and farm buildings were much higher. This represents a dramatic reduction in potential economic output in slave states–extreme enough that it was visible from the deck of ships going down the Ohio River. (And yes, the authors also do roughly a gazillion statistical checks to see if the results might be accounted for by soil erosion, river access, soil composition, timing of earlier settlement, earlier glacial coverage, the existence of state borders, and more.)

So why didn’t free labor move across the border to the available land in the slave states? Bleakley and Rhode emphasize that a slave society was dominated by rich slaveowners, who were focused on their own source of wealth. These states did not invest in infrastructure or institutions to benefit the middle and lower classes of free workers; instead, they were likely to impose requirements that free workers participate in the enforcement of slavery. Slave-owning states were much more likely to have institutions that also affected free whites: indentured servitude, public whippings, debt bondage (in which someone in debt is legally required to work for another until the debt is repaid–which had a nasty habit of taking a very long time), or the leasing of prisoners to private companies as forced labor. The slaveowners’ idea of “property rights” was very much about their own personal property, not anyone else’s. For free labor, the political, economic, and social institutions of slave-owning states were largely unattractive. The authors quote an 1854 speech by Abraham Lincoln, who said that “slave States are places for poor white people to remove FROM; not to remove TO. New free States are the places for poor people to go to and better their condition.”

An alternative way of comparing slavery to Emancipation can be done with statistical modelling. Treb Allen, Winston Chen, and Suresh Naidu take this approach in “The Economic Geography of American Slavery” (NBER Working Paper 34356, October 2025). They use data from the 1860 Census to “assemble a comprehensive dataset of the spatial and sectoral distribution of economic activity in the U.S. in the year 1860.” This includes a division into agricultural, manufacturing, and service sectors, along with data on measures of output, wages for free workers, prices of slaves, whether the area was likely to have malaria infestations, and much more.

The model then allows a thought-experiment: what if the enslaved workers and their families were emancipated? Where would they relocate, and in what sectors would they work? The authors write: “Combining theory and data, we then quantify the impacts of slavery. We find that complete emancipation has large effects on the U.S. economy, inducing an expansion of manufacturing (26.5%) and services (21.0%) and a contraction of agriculture (-5.4%). The welfare of formerly enslaved workers increases by almost 1,200%, whereas free worker welfare declines 0.7% and slaveholders’ profits are erased.” In their calculations, Emancipation causes overall GDP to rise by 9.1%.

Of course, the authors can then compare the effects of Emancipation in their statistical model with the actual effects of Emancipation after the Civil War: “Finally, we show that the counterfactual changes in labor allocations from emancipation are strongly correlated with observed patterns of White and Black reallocations across all sectors following the Civil War, although the comparison offers suggestive evidence of substantial migration frictions for recently emancipated Black workers.” In other words, the statistical model allowed former slaves to reallocate quite easily, and it wasn’t in fact that easy, so the actual economic effects would have been smaller than the model predictions–but in the same direction.

In the lead-in to the US Civil War, it was common to hear arguments from the pro-South side that slavery had been essential to the growth of the US economy. This argument has been resuscitated in recent years, but has received a markedly unamused response from scholars in the field. The pro-North side argued that the US economy would do just as well, or better, without slavery. These studies, and others, help to explain why that argument was correct. Back in 1804, after Napoleon damaged his own political reputation by executing a political opponent, one commenter remarked: “It’s worse than a crime; it’s a mistake.” In a similar spirit, one might say that slavery was not just a moral abomination; it was also an inefficient and low-growth economic outcome.

Interdependencies in Federal Statistics

Yes, the headline above this post would have to be in the running for “least likely to attract readers.” And yes, I still mourn the decision, made for budgetary reasons 15 years ago, for the federal government to stop publishing an annual Statistical Abstract of the United States, which had been around for 130 years. But my point is that when talking about federal statistics, it’s easy to focus on specific big-picture numbers like unemployment rates and inflation. It’s conversely easy to undervalue the extent to which the value of federal statistical agencies is created by their ability to reach out to a wide array of data sources in a systematic way, so that it is possible to compare the combined results over time.

The American Statistical Association has published “The Nation’s Data at Risk: 2025 Report” (December 2025). It’s full of facts about the federal statistical agencies, including their modest budgets that have been getting cut in real terms for more than a decade, and the value of their output. Here, I’ll just focus on their role in pulling together data from a variety of sources.

For a basic example, consider a report called Science and Engineering Indicators, which is published every two years. It’s pulled together by a branch of the National Science Foundation called the National Center for Science and Engineering Statistics. For example, the 2024 report is available here. As the ASA report notes: “S&E Indicators is widely used and cited across the public and private sectors and viewed as an important input to the measure of U.S. economic competitiveness.” If you care about these issues, it’s a basic resource.

But of course, the underlying data on science and engineering doesn’t just grow on trees, waiting to be picked. Instead, the underlying data comes from an array of government and private sources, which need to be culled and compiled. The ASA report notes:

For the 2024 cycle of S&E Indicators, there were seven indicator areas: K-12 education; higher education; science, technology, engineering, and mathematics (STEM) labor force; research and development (R&D); industry activities; innovation; and public attitudes towards S&E. Figure 2.2 illustrates the dependence of each indicator area, on the right side, on various data providers, on the left side: specific statistical agencies and the broader categories of international data providers, other (non-statistical agency) federal data providers, and private-sector data providers. In this Sankey diagram, the widths of flows linking providers and indicators are proportional to the number of times a provider’s datasets are used, which is the first number in parentheses in the figure’s labels for both providers and indicators. The second number in parentheses is the number of unique datasets for each provider and indicator. For example, 9 NCES datasets are used a total of 65 times in the 2024 cycle. For the K-12 indicator area, three NCES datasets are used 22 times. The [federal] statistical agency nodes and flows are in blue.

Here’s another example closer to my core interests in economics: the data on “personal income,” which is a key part (about three-quarters) of measuring gross domestic product. In the diagram, the list of categories down the right-hand side shows the components of personal income. The list of sources on the left-hand side shows where the data comes from. The top five (in blue) are federal statistical agencies, but obviously, much of the data is generated from other parts of the government and from nongovernment sources as well.

Looking ahead to the future of federal statistics, this capability to reach out to a wide array of data sources is only going to become more important. For many decades, a number of key government statistics have relied on results from household surveys, but the accuracy of these surveys was always disputable and the response rate has been dropping. The statistical agencies (and economic researchers) have responded by trying to shift toward “administrative” data–that is, data already generated for other purposes. For example, firms already need to submit wage data to states for the administration of unemployment insurance programs, and that data on wages is surely more accurate and complete than self-reported survey data on what people earn. As another example, it may be possible to scrape websites for data on prices in a way that allows calculations of inflation to be made more rapidly and accurately. Research projects on these alternative forms of measurement are ongoing.
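As one illustration of the web-scraping idea, here is a minimal sketch of how scraped price quotes for matched products could be chained into a simple price index. The products and prices are invented, and real statistical-agency methods involve far more careful sampling, weighting, and quality adjustment:

```python
# Toy matched-model price index from (hypothetical) scraped price quotes.
# Each month we observe prices for the same set of products; the index is the
# average price relative versus the previous month, chained over time.

monthly_prices = {  # invented data: month -> {product: price}
    "2025-10": {"milk": 3.50, "bread": 2.80, "eggs": 4.20},
    "2025-11": {"milk": 3.55, "bread": 2.90, "eggs": 4.10},
    "2025-12": {"milk": 3.60, "bread": 2.95, "eggs": 4.30},
}

months = sorted(monthly_prices)
index = {months[0]: 100.0}
for prev, curr in zip(months, months[1:]):
    # Use only products observed in both months (a "matched model" comparison).
    common = monthly_prices[prev].keys() & monthly_prices[curr].keys()
    relative = sum(monthly_prices[curr][p] / monthly_prices[prev][p] for p in common) / len(common)
    index[curr] = index[prev] * relative

for month in months:
    print(month, round(index[month], 1))
```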

Ultimately, the importance of federal statistics is whether you want to be able to evaluate past, present, and future policies based on consistent and regularly collected data, or whether you prefer government to be based on whatever charismatic anecdotes bubble to the top of social media.

Snapshots of the US Income Distribution

It takes a couple of years to finalize the income distribution data, and thus the Congressional Budget Office has just published “The Distribution of Household Income, 2022” (January 2026). The report is mainly graphs and data, rather than analysis or policy recommendations. Here are some patterns that caught my eye.

Start with a focus on the income distribution produced by the market: that is, not taking into account how government tax and spending policies affect the distribution. The pattern over time shows the well-known fact that income at the top has grown more rapidly. This figure shows that average income growth for the bottom quintile has been much the same as for the middle three quintiles, while the top quintile has gained faster.

Moreover, within the top quintile it is the top 1%, and indeed the top 0.1% and top 0.01%, that has seen the fastest income growth. This pattern emerged with force in the 1990s and early 2000s, and has remained in place since then.

As a broad pattern, federal income taxes do take a higher share from those with higher incomes, and federal transfer payments and refundable tax provisions do provide greater benefits for those with lower incomes. Thus, these lines will tend to be closer together when looking at after-tax-and-transfers income.

The CBO uses a standard tool called the Gini coefficient as a way of measuring inequality. At an intuitive level, the Gini measures the gap from a completely equal income distribution: thus, a completely equal income distribution would have a Gini of zero, while a completely unequal income distribution (all income goes to one person) would have a Gini of 1. (For a more detailed description of the Gini, this earlier post offers a starting point.) For perspective, countries in highly unequal regions of the world like Latin America and sub-Saharan Africa often have Gini coefficients in the range of 0.4-0.5, while countries in more equal regions like the advanced economies of Europe are closer to 0.3.
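For readers who want to see the measure in action, here is a minimal sketch of one standard way to compute a Gini coefficient from a list of individual incomes (the incomes are made up for illustration):

```python
# Gini coefficient from a list of incomes, using the standard formula for
# sorted data: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
# where incomes x_i are sorted in ascending order and ranks i run from 1 to n.

def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted_sum = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted_sum / (n * total) - (n + 1) / n

print(gini([10_000] * 5))                      # 0.0: perfectly equal incomes
print(round(gini([0, 0, 0, 0, 100_000]), 2))   # 0.8: one person has everything (approaches 1 as n grows)
print(round(gini([20_000, 35_000, 50_000, 80_000, 150_000]), 3))  # something in between
```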

For the US, the top line shows the rise in the Gini coefficient based on market income. The category of “income before transfers and taxes” measures inequality after including income arising from benefits linked directly to earlier employment: Social Security, Medicare, unemployment insurance. The next line shows income inequality after transfers and before taxes, while the bottom line shows income after transfers and taxes.

The specific Gini coefficient number for income after transfers and taxes in 2022 is 0.434. While this level of inequality is toward the higher end of the range, it’s quite comparable to the level of after-taxes-and-transfers inequality in, say, 2018 (0.438), 2012 (0.444), 2007 (0.455), 2000 (0.440), or even 1986 (0.425). At least over the last quarter-century or so, government taxes and transfers have more-or-less offset any rise in market-income inequality.

US-China Competition for AI Markets

Much of what I read about developments in the new artificial intelligence technologies focuses on the capabilities of the new models, like what problems they now seem ready to tackle. But a parallel and not-identical question is what models are actually being used in the global economy. Austin Horng-En Wang and Kyle Siler-Evans offer some evidence on this point in “U.S.-China Competition for Artificial Intelligence Markets: Analyzing Global Use Patterns of Large Language Models” (RAND Corporation, January 14, 2026).

The authors analyze “website traffic data across 135 countries from April 2024 through May 2025,” and in particular, they tracked “monthly website visits from each country to seven U.S.-based and thirteen China-based LLM [large language model] service websites.” The authors readily acknowledge that their numbers are imperfect. As one example, if some organization downloads an open-source AI tool and uses it, this will not be captured by web traffic. The timeframe is interesting in part because in January 2025,  a Chinese company called DeepSeek launched an AI tool with capabilities that considerably exceeded expectations. But did the capabilities of DeepSeek translate into actual use patterns? The answer seems to be that it made a noticeable but not overwhelming difference.

The figure shows monthly visits to prominent LLM websites, with visits to US websites in blue and Chinese websites in red. Chinese websites were getting about 2-3% of the total visits in 2024. With the arrival of DeepSeek, the Chinese share rose as high as 13% in February 2025, but by August 2025 had sagged back to about 6%. As you can see, the growth in use of AI sites in summer 2025 mostly happened at US-based websites.


The authors explore whether the dominance of US AI providers might be explained by looking at factors like pricing, language support, and diplomatic ties, but without much success. For example, while paid subscriptions to US AI services cost more, many users are still relying on free access. Given the lack of other plausible explanations, they suggest that the capabilities of the US-based AI tools are currently better. But the DeepSeek experience shows that lots of users around the world do not view themselves as locked in to AI tools from any given provider or country, and are quite willing to give something else a try.

AI and Jobs: Interview with David Autor

Sara Frueh interviews David Autor on the subject: “How Is AI Shaping the Future of Work?” (Issues in Science and Technology, January 6, 2026). Here are some snippets that caught my eye, but it’s worth reading the essay and even clicking on some of the suggested additional readings:

How broadly are AI tools already being used at work?

At least half of workers, at this point, are using it in their jobs, and probably more. In fact, more workers use it on their jobs than employers even provide it because many people use it even without their employer’s knowledge. So it’s caught on incredibly quickly. It’s used at home, it’s used at work, it’s used by people of all ages, and it’s used now equally by men and women and across education groups.

The problem when people are paid for expertise–but the expertise becomes outdated

People are paid not for their education, not for just showing up, but because they have expertise in something. Could be coding an app, could be baking a loaf of bread, diagnosing a patient, or replacing a rusty water heater. When technology automates something that you were doing, in general, the expertise that you had invested in all of a sudden doesn’t have much market value. … And so my concern is not about us running out of jobs per se. In fact, we’re running out of workers. The concern is about devaluation of expertise. And especially, even if, again, we’re transitioning to something “better,” the transition is always costly unless it happens quite slowly. And that’s because changes in people’s occupations is usually generational. You don’t go from being a lawyer to a computer scientist, or a production worker to a graphic artist, or a food service worker to a lawyer in the course of a career. Most people aren’t going to make that transition because there’s huge educational requirements to making those types of changes. So it’s quite possible their kids will decide, “Well, I’m not going to go into translation, but I will go into data science,” but that doesn’t directly help the people who are displaced.

How is the “China shock” of rising imports from China in the early 2000s likely to be different from the current AI shock?

There are important differences. One of those differences is that the China shock was very regionally concentrated. It was, as I mentioned, in the South and the Deep South, in places that made textiles and clothing and commodity furniture and did doll and tool assembly and things like that. So it’s unlikely that the impacts of AI will be nearly as regionally concentrated. And that makes it less painful because it doesn’t sort of knock out an entire community all at once. We’ve lost millions of clerical and administrative support jobs over the last few decades, but nobody talks about the great clerical shock. Why don’t they? Well, one reason is there was never a clerical capital of the United States where all the clerical work was done. It was done in offices around the country. So it’s not nearly as salient or visible. And it’s also not nearly as devastating because it’s a relatively small number of people in a large set of places. So that’s one difference. The other is that AI will mostly affect specific occupations and roles and tasks rather than entire industries. We don’t expect entire industries to just go away. And so that, again, distributes the pain, as well as the benefits, more broadly.

Can the new AI tools be steered toward collaborating with people to improve their output, rather than displacing existing jobs?

So what does steering it mean? It means using it in ways that collaborate with people to make their expertise more valuable and more useful. Where are there opportunities to do that? They’re dispersed throughout the economy. One place where this could be very impactful is in healthcare. Healthcare is kind of one out of five US dollars at this point, employs a ton of people. It’s the fastest-growing, broadly, employment sector, and there’s expertise all up and down the line. We could, using these tools, enable people who are not medical doctors, but are nurses or nurse practitioners or nurses aides, for example, or x-ray techs, to do more skilled work, to do a broader variety or depth of services using better tools. And the tools are not just about automating paperwork, it’s about supporting judgment because professional expert work is really about decision making where the stakes are high and there’s not usually one correct answer, but it matters whether you get it approximately right or approximately wrong. And so I think that’s a huge opportunity. …

Another is how we educate. We could educate more effectively. We could help teachers be more effective in providing better tools. We could also provide better learning environments using these tools. Another is in areas like skilled repair or construction or interior design or contracting, where there’s a lot of expertise involved. Giving people tools to supplement the work they do could make them more effective at either doing more ambitious projects, doing more complex repairs, or even designing and engineering in a way where they would be able to do tasks that would otherwise require higher certification.