How Slavery Held Back Growth: Looking Across the River

Imagine a situation where a substantial area is run by a heavy-handed organized crime group. Those at the top of the organized crime pyramid live like royalty, with palatial housing and the highest-end food, drink, and clothing. Those who live in this area pay for this luxury in a variety of ways: payoffs for starting or running a profitable business, limits on jobs, higher prices for goods and services, and an ongoing shadow of violence.

Now imagine that someone argued that organized crime was a great success for the local economy. For evidence, this person adds up the value of economic activity within the organized crime empire, and points to the high incomes, wealth, and political influence of the organized crime leaders.

But of course, the fact that this hypothetical organized crime organization makes money, especially for its leaders, doesn’t make it economically beneficial. Similarly, the fact that slavery was the basis for a larger agriculture economy and was profitable for slaveowners doesn’t make it economically beneficial, either. To know if something is “beneficial,” one needs to engage in what economists call “counterfactual” reasoning: that is, what would the economy have looked like otherwise?

As a first example of one way to make this comparison, Hoyt Bleakley and Paul W. Rhode consider “The Economic Effects of American Slavery, Redux: Tests at the Border” (June 2024, NBER Working Paper 32640). They take inspiration from the famous voyage of Alexis de Tocqueville and Gustave de Beaumont to America in 1831-32. When they were travelling down the Ohio River, with the free state of Ohio on one bank and the slave state of Kentucky on the other bank, it seemed obvious that the Ohio side was flourishing, while the Kentucky side was not. De Tocqueville wrote (translated from the French): “It is impossible to attribute those differences to any other cause than slavery. It brutalizes the black population and debilitates the white. One can see its deathly effects, yet it continues and will continue for a long time. […] Man is not made for servitude.”

Many analyses of slavery look at state-level or regional-level averages of South vs. North. Instead, Bleakley and Rhode focus on a long narrow stretch of land on both sides of the border between free and slave states. They write: “We take the testing ground of de Tocqueville and de Beaumont — the upper Ohio River valley — and extend the comparison east to cover the borders dividing Pennsylvania and New Jersey from Virginia, Maryland and Delaware and west to contrast free states of Illinois and Iowa with the slave state of Missouri. The border was the dividing line between slavery and free labor institutions within the same country, with a common language, national laws, and shared heritage.”

The pattern they find is that land is only about half as likely to be utilized on the slave side of the border; on the free side, investments in land-clearing and farm buildings were much higher. This represents a dramatic reduction in potential economic output in slave states–extreme enough that it was visible from the deck of ships going down the Ohio River. (And yes, the authors also do roughly a gazillion statistical checks to see if the results might be accounted for by soil erosion, river access, soil composition, timing of earlier settlement, earlier glacial coverage, the existence of state borders, and more.)

So why didn’t free labor move across the border to the available land in the slave states? Bleakley and Rhode emphasize that a slave society was dominated by rich slaveowners, who were focused on their own source of wealth. These states did not invest in infrastructure or institutions to benefit the middle and lower classes of free workers; instead, they were likely to impose requirements that free workers participate in the enforcement of slavery. Slave-owning states were much more likely to have institutions that also affected free whites: indentured servitude, public whippings, debt bondage (in which someone in debt is legally required to work for another until the debt is repaid–which had a nasty habit of taking a very long time), or the leasing of prisoners to private companies as forced labor. The slaveowners’ idea of “property rights” was very much about their own personal property, not anyone else’s. For free labor, the political, economic, and social institutions of slave-owning states were largely unattractive. The authors quote an 1854 speech by Abraham Lincoln, who said that “slave States are places for poor white people to remove FROM; not to remove TO. New free States are the places for poor people to go to and better their condition.”

An alternative way of comparing slavery to Emancipation can be done with statistical modelling. Treb Allen, Winston Chen, and Suresh Naidu take this approach in “The Economic Geography of American Slavery” (NBER Working Paper 34356, October 2025). They use data from the 1860 Census to “assemble a comprehensive dataset of the spatial and sectoral distribution of economic activity in the U.S. in the year 1860.” This includes a division into agricultural, manufacturing, and service sectors, along with data on measures of output, wages for free workers, prices of slaves, whether the area was likely to have malaria infestations, and much more.

The model then allows a thought-experiment: what if the enslaved workers and their families were emancipated? Where would they relocate, and in what sectors would they work? The authors write: “Combining theory and data, we then quantify the impacts of slavery. We find that complete emancipation has large effects on the U.S. economy, inducing an expansion of manufacturing (26.5%) and services (21.0%) and a contraction of agriculture (-5.4%). The welfare of formerly enslaved workers increases by almost 1,200%, whereas free worker welfare declines 0.7% and slaveholders’ profits are erased.” In their calculations, Emancipation causes overall GDP to rise by 9.1%.

Of course, the authors can then compare the effects of Emancipation in their statistical model with the actual effects of Emancipation after the Civil War: “Finally, we show that the counterfactual changes in labor allocations from emancipation are strongly correlated with observed patterns of White and Black reallocations across all sectors following the Civil War, although the comparison offers suggestive evidence of substantial migration frictions for recently emancipated Black workers.” In other words, the statistical model allowed former slaves to reallocate quite easily, and it wasn’t in fact that easy, so the actual economic effects would have been smaller than the model predictions–but in the same direction.

In the lead-in to the US Civil War, it was common to hear arguments from the pro-South side that slavery had been essential to the growth of the US economy. This argument has been resuscitated in recent years, but has received a markedly unamused response from scholars in the field. The pro-North side argued that the US economy would do just as well, or better, without slavery. These studies, and others, help to explain why that argument was correct. Back in 1804, after Napoleon damaged his own political reputation by executing a political opponent, one commenter remarked: “It’s worse than a crime; it’s a mistake.” In a similar spirit, one might say that slavery was not just a moral abomination; it was also an inefficient and low-growth economic outcome.

Interdependencies in Federal Statistics

Yes, the headline above this post would have to be in the running for “least likely to attract readers.” And yes, I still mourn the decision, made for budgetary reasons 15 years ago, for the federal government to stop publishing an annual Statistical Abstract of the United States, which had been around for 130 years. But my point is that when talking about federal statistics, it’s easy to focus on specific big picture numbers like unemployment rates and inflation. Conversely, it’s easy to undervalue the extent to which the value of federal statistical agencies is created by their ability to reach out to a wide array of data sources in a systematic way, so that it is possible to compare the combined results over time.

The American Statistical Association has published “The Nation’s Data at Risk: 2025 Report” (December 2025). It’s full of facts about the federal statistical agencies, including their modest budgets that have been getting cut in real terms for more than a decade, and the value of their output. Here, I’ll just focus on their role in pulling together data from a variety of sources.

For a basic example, consider a report called Science and Engineering Indicators, which is published every two years. It’s pulled together by a branch of the National Science Foundation called the National Center for Science and Engineering Statistics. For example, the 2024 report is available here. As the ASA report notes: “S&E Indicators is widely used and cited across the public and private sectors and viewed as an important input to the measure of U.S. economic competitiveness.” If you care about these issues, it’s a basic resource.

But of course, the underlying data on science and engineering doesn’t just grow on trees, waiting to be picked. Instead, the underlying data comes from an array of government and private sources, which need to be culled and compiled. The ASA report notes:

For the 2024 cycle of S&E Indicators, there were seven indicator areas: K-12 education; higher education; science, technology, engineering, and mathematics (STEM) labor force; research and development (R&D); industry activities; innovation; and public attitudes towards S&E. Figure 2.2 illustrates the dependence of each indicator area, on the right side, on various data providers, on the left side: specific statistical agencies and the broader categories of international data providers, other (non-statistical agency) federal data providers, and private-sector data providers. In this Sankey diagram, the widths of flows linking providers and indicators are proportional to the number of times a provider’s datasets are used, which is the first number in parentheses in the figure’s labels for both providers and indicators. The second number in parentheses is the number of unique datasets for each provider and indicator. For example, 9 NCES datasets are used a total of 65 times in the 2024 cycle. For the K-12 indicator area, three NCES datasets are used 22 times. The [federal] statistical agency nodes and flows are in blue.

Here’s another example closer to my core interests in economics: the data on “personal income,” which is a key part (about three-quarters) of measuring gross domestic product. In the diagram, the list of categories down the right-hand side shows the components of personal income. The list of sources on the left-hand side shows where the data comes from. The top five (in blue) are federal statistical agencies, but obviously, much of the data is generated from other parts of the government and from nongovernment sources as well.

Looking ahead to the future of federal statistics, this capability to reach out to a wide array of data sources is only going to become more important. For many decades, a number of key government statistics have relied on results from household surveys, but the accuracy of these surveys was always disputable and the response rate has been dropping. The statistical agencies (and economic researchers) have responded by trying to shift toward “administrative” data–that is, data already generated for other purposes. For example, firms already need to submit wage data to states for the administration of unemployment insurance programs, and that data on wages is surely more accurate and complete than self-reported survey data on what people earn. As another example, it may be possible to scrape websites for data on prices in a way that allows calculations of inflation to be made more rapidly and accurately. Research projects on these alternative forms of measurement are ongoing.

Ultimately, the case for federal statistics comes down to whether you want to be able to evaluate past, present, and future policies based on consistent and regularly collected data, or whether you prefer government to be based on whatever charismatic anecdotes bubble to the top of social media.

Snapshots of the US Income Distribution

It takes a couple of years to finalize the income distribution data, and thus the Congressional Budget Office has just published “The Distribution of Household Income, 2022” (January 2026). The report is mainly graphs and data, rather than analysis or policy recommendations. Here are some patterns that caught my eye.

Start with a focus on the income distribution produced by the market: that is, not taking into account how government tax and spending policies affect the distribution. The pattern over time shows a well-known fact that income at the top has grown more rapidly. This figure shows that average income growth for the bottom quintile has been much the same as for the middle three quintiles, while the top quintile has grown faster.

Moreover, within the top quintile it is the top 1%, and indeed the top 0.1% and top 0.01%, that has seen the fastest income growth. This pattern emerged with force in the 1990s and early 2000s, and has remained in place since then.

As a broad pattern, federal income taxes do take a higher share from those with higher incomes, and federal transfer payments and refundable tax provisions do provide greater benefits for those with lower incomes. Thus, these lines will tend to be closer together when looking at after-tax-and-transfers income.

The CBO uses a standard tool called the Gini coefficient as a way of measuring inequality. At an intuitive level, the Gini measures the gap from a completely equal income distribution: thus, a completely equal income distribution would have a Gini of zero, while a completely unequal income distribution (all income goes to one person) would have a Gini of 1. (For a more detailed description of the Gini, this earlier post offers a starting point.) For perspective, countries in highly unequal regions of the world like Latin America and sub-Saharan Africa often have Gini coefficients in the range of 0.4-0.5, while countries in more equal regions like the advanced economies of Europe are closer to 0.3.
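
To make the definition concrete, here is a minimal sketch in Python of how a Gini coefficient can be computed from a list of household incomes; the incomes are made up, purely for illustration:

```python
def gini(incomes):
    """Gini coefficient: 0 for complete equality, approaching 1 for complete inequality."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Equivalent to the mean absolute difference between all pairs of incomes,
    # divided by twice the mean income.
    weighted_sum = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted_sum / (n * total)

print(gini([50_000] * 5))                               # perfectly equal: 0.0
print(gini([0, 0, 0, 0, 250_000]))                      # one household has everything: 0.8
print(gini([15_000, 30_000, 45_000, 80_000, 230_000]))  # something in between: about 0.48
```

The CBO’s published figures rest on more elaborate income definitions and household adjustments described in the report, but the underlying logic is the same.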

For the US, the top line shows the rise in the Gini coefficient based on market income. The category of “income before transfers and taxes” measures inequality after including income arising from benefits linked directly to earlier employment: Social Security, Medicare, unemployment insurance. The next line shows income inequality after transfers and before taxes, while the bottom line shows income after transfers and taxes.

The specific Gini coefficient number for income after transfers and taxes in 2022 is 0.434. While this level of inequality is toward the higher end of the range, it’s quite comparable to the level of after-taxes-and-transfers inequality in, say, 2018 (0.438), 2012 (0.444), 2007 (0.455), 2000 (0.440), or even 1986 (0.425). At least over the last quarter-century or so, government taxes and transfers have more-or-less offset any rise in market-income inequality.

US-China Competition for AI Markets

Much of what I read about developments in the new artificial intelligence technologies focuses on the capabilities of the new models, like what problems they now seem ready to tackle. But a parallel and not-identical question is which models are actually being used in the global economy. Austin Horng-En Wang and Kyle Siler-Evans offer some evidence on this point in “U.S.-China Competition for Artificial Intelligence Markets: Analyzing Global Use Patterns of Large Language Models” (RAND Corporation, January 14, 2026).

The authors analyze “website traffic data across 135 countries from April 2024 through May 2025,” and in particular, they tracked “monthly website visits from each country to seven U.S.-based and thirteen China-based LLM [large language model] service websites.” The authors readily acknowledge that their numbers are imperfect. As one example, if some organization downloads an open-source AI tool and uses it, this will not be captured by web traffic. The timeframe is interesting in part because in January 2025,  a Chinese company called DeepSeek launched an AI tool with capabilities that considerably exceeded expectations. But did the capabilities of DeepSeek translate into actual use patterns? The answer seems to be that it made a noticeable but not overwhelming difference.

The figure shows monthly visits to prominent LLM websites, with visits to US websites in blue and Chinese websites in red. Chinese websites were getting about 2-3% of the total visits in 2024. With the arrival of DeepSeek, the Chinese share rose as high as 13% in February 2025, but by August 2025 had sagged back to about 6%. As you can see, the growth in use of AI sites in summer 2025 mostly happened at US-based websites.


The authors explore whether the dominance of US AI providers might be explained by looking at factors like pricing, language support, and diplomatic ties, but without much success. For example, while paid subscriptions to US AI services cost more, many users are still relying on free access. Given the lack of other plausible explanations, they suggest that the capabilities of the US-based AI tools are currently better. But the DeepSeek experience shows that lots of users around the world do not view themselves as locked in to AI tools from any given provider or country, and are quite willing to give something else a try.

AI and Jobs: Interview with David Autor

Sara Frueh interviews David Autor on the subject: “How Is AI Shaping the Future of Work?” (Issues in Science and Technology, January 6, 2026). Here are some snippets that caught my eye, but it’s worth reading the essay and even clicking on some of the suggested additional readings:

How broadly are AI tools already being used at work?

At least half of workers, at this point, are using it in their jobs, and probably more. In fact, more workers use it on their jobs than employers even provide it because many people use it even without their employer’s knowledge. So it’s caught on incredibly quickly. It’s used at home, it’s used at work, it’s used by people of all ages, and it’s used now equally by men and women and across education groups.

The problem when people are paid for expertise–but the expertise becomes outdated

People are paid not for their education, not for just showing up, but because they have expertise in something. Could be coding an app, could be baking a loaf of bread, diagnosing a patient, or replacing a rusty water heater. When technology automates something that you were doing, in general, the expertise that you had invested in all of a sudden doesn’t have much market value. … And so my concern is not about us running out of jobs per se. In fact, we’re running out of workers. The concern is about devaluation of expertise. And especially, even if, again, we’re transitioning to something “better,” the transition is always costly unless it happens quite slowly. And that’s because changes in people’s occupations is usually generational. You don’t go from being a lawyer to a computer scientist, or a production worker to a graphic artist, or a food service worker to a lawyer in the course of a career. Most people aren’t going to make that transition because there’s huge educational requirements to making those types of changes. So it’s quite possible their kids will decide, “Well, I’m not going to go into translation, but I will go into data science,” but that doesn’t directly help the people who are displaced.

How is the “China shock” of rising imports from China in the early 2000s likely to be different from the current AI shock?

There are important differences. One of those differences is that the China shock was very regionally concentrated. It was, as I mentioned, in the South and the Deep South, in places that made textiles and clothing and commodity furniture and did doll and tool assembly and things like that. So it’s unlikely that the impacts of AI will be nearly as regionally concentrated. And that makes it less painful because it doesn’t sort of knock out an entire community all at once. We’ve lost millions of clerical and administrative support jobs over the last few decades, but nobody talks about the great clerical shock. Why don’t they? Well, one reason is there was never a clerical capital of the United States where all the clerical work was done. It was done in offices around the country. So it’s not nearly as salient or visible. And it’s also not nearly as devastating because it’s a relatively small number of people in a large set of places. So that’s one difference. The other is that AI will mostly affect specific occupations and roles and tasks rather than entire industries. We don’t expect entire industries to just go away. And so that, again, distributes the pain, as well as the benefits, more broadly.

Can the new AI tools be steered toward collaborating with people to improve their output, rather than displacing existing jobs?

So what does steering it mean? It means using it in ways that collaborate with people to make their expertise more valuable and more useful. Where are there opportunities to do that? They’re dispersed throughout the economy. One place where this could be very impactful is in healthcare. Healthcare is kind of one out of five US dollars at this point, employs a ton of people. It’s the fastest-growing, broadly, employment sector, and there’s expertise all up and down the line. We could, using these tools, enable people who are not medical doctors, but are nurses or nurse practitioners or nurses aides, for example, or x-ray techs, to do more skilled work, to do a broader variety or depth of services using better tools. And the tools are not just about automating paperwork, it’s about supporting judgment because professional expert work is really about decision making where the stakes are high and there’s not usually one correct answer, but it matters whether you get it approximately right or approximately wrong. And so I think that’s a huge opportunity. …

Another is how we educate. We could educate more effectively. We could help teachers be more effective in providing better tools. We could also provide better learning environments using these tools. Another is in areas like skilled repair or construction or interior design or contracting, where there’s a lot of expertise involved. Giving people tools to supplement the work they do could make them more effective at either doing more ambitious projects, doing more complex repairs, or even designing and engineering in a way where they would be able to do tasks that would otherwise require higher certification.

Standardization as a Tool for Development

Everyday life is easier because of certain types of standardization: you buy something with an electrical plug, and it fits the socket on your wall. But beyond issues like using common weights and measures, standards can be transformative for economic growth. The World Bank’s World Development Report 2025, “Standards for Development,” explores the big picture role of standards–as well as the danger that standards can be used by incumbents to hinder competition from entrants.

One can make a plausible case that for the wave of globalization in the last half-century or so, the standardization of common containers was more important than all the inter-governmental negotiations about global tariffs. The report notes:

[T]he real revolution came quietly—and relatively recently: from a US trucking entrepreneur named Malcom McLean in the mid-1950s. Until then, goods were transported using methods that had hardly changed over the centuries. Cargo had to be loaded piece by piece, using crates, sacks, or barrels, onto carriages, trucks, trains, and ships. At each stage, everything was hauled off of one vehicle and then reloaded onto the next, usually with different types of specialized equipment. McLean standardized the humble steel box, readying it for easy loading and shipping across all forms of transportation: road, rail, air, and sea. In doing so, he crushed handling costs and delays: The cost of shipping fell by at least 25 percent. The risk of theft and damage eased. If treaties set the stage for the rise of globalization after World War II, McLean’s container made the show possible.

McLean’s standardization did not just tidy up shipping. Standard containers gave the world a common commercial language. A container sealed in Shanghai could roll off a ship in Rotterdam and onto a truck, rarely opened or even touched by human hands. Standards turned chaos into order, unleashing the economic miracles of just-in-time manufacturing. Ships got bigger. Supply chains proliferated. Commerce surged. McLean then turbocharged the process by granting free licenses to his container patents to the International Organization for Standardization (ISO).

In 1965, ISO codified almost everything about the containers: dimensions, stacking rules, twist locks, strength, and lifting. Suddenly, there was a single playbook—and global interoperability.

The payoff was extraordinary. Containers delivered a permanent boost to trade: a 1,240 percent cumulative jump in trade among advanced economies after 15 years: by many estimates, more than the combined effect of all trade agreements of the previous half century. Across 22 industrial countries, standardized containers lifted bilateral trade by 300 percent in just 5 years and nearly 800 percent in 20. That far exceeded the 45 percent from bilateral free trade agreements over the same 20 years and 285 percent from membership in the General Agreement on Tariffs and Trade (GATT), the precursor to the World Trade Organization (WTO).

Love globalization? Hate it? Either way, the underlying impetus is more about Malcom McLean than about WTO negotiations.

The report discusses all kinds of standards: environmental, banking, accounting, interoperability, performance, safety, reliability, testing, and more. Such standards are often an important part of forcing real competition between producers, as well as developing economies of scale. But there is an element of push and pull here. In many countries, the history of economic development is also a history of standardization. On the other side, the same standards may not apply well at all times and places–and can even end up as a tool for giving an advantage to existing firms.

For some historical examples of the link between standards and development, the report points out:

As a new sovereign nation in 1947, India launched its first National Sample Survey of living standards in 1950. The survey revealed a striking lack of standardization of weights and measures in the country’s rural areas: 143 different systems for measuring weight, 150 different systems for measuring volume, and 180 systems for measuring land area. The lack of consistency that was hobbling India’s economic union paralleled the mayhem in France before the metric system established order there; in the 1700s, France had about 250,000 local weights and measures.

In such settings, a common standard of weights and measures enables markets to function at scale. Or here’s a story from the 20th-century US experience:

It was also the government’s drive for “simplification,” initiated during World War I, to push industry toward compatibility: standard (fewer) sizes and mass production. Industrial standards in the early 1900s were mostly in house; fragmentation was rampant, fed by a tangle of state and local rules and custom-made orders. Mattresses came in 78 sizes in 1914; within a decade, that number had fallen to 4 for 90 percent of output. Wartime agencies, working through trade associations, slashed product variety across some 250 lines in 18 months.

President Hoover revived and institutionalized the effort in the 1920s, creating the Division of Simplified Practice as a neutral broker for voluntary, industrywide standards. Early wins—paving bricks, mattresses, bedsprings—cut varieties by more than 90 percent. By the early 1930s, 135 Simplified Practice Recommendations were in place, growing to 173 by 1939 and 267 by 1971. Each one tightened the link between design and efficiency, reducing waste, cutting costs, and freeing up capital for innovation. Compatibility standards powered the US leap in mass production and consumption, turning variety into scale and waste into efficiency. What looked like a technical exercise was in fact an economic policy of uncommon power, one that quietly multiplied productivity across an entire economy.

The authors also point out that standards can end up limiting competition in some cases. A few examples that come to mind: back in the 1970s, Sweden had rules that cars had to have wipers on the headlights, which many foreign producers of cars did not; in Japan, stringent standards have had the effect of limiting imports of rice. This report focuses more on standards adoption in developing countries. It points out that standards are often drawn up by the high-income countries of the world. The costs of complying with these standards–sometimes called a nontariff barrier–are often a bigger hindrance for developing countries to participate in global trade in certain products than actual tariff rates.

Perhaps the best kinds of standards are those that, most of the time, can be taken for granted.

Some Snapshots of the US Demographic Future

Demography is the study of the structure of human populations, including factors like births, deaths, and aging, as well as health and economic factors. Some demographic changes happen slowly, over decades, but in a predictable way. For example, if you want to look at projections for the year 2050 of the ratio of the US working-age population–say, ages 25 to 64–to the US over-65 population, you need to start by recognizing that all of the native-born US population that will be 25-and-over in 2050 has already been born! So your demographic projections can tinker around the edges with possible future immigration rates, or how health and mortality rates may change for the elderly, or how labor force participation rates may evolve for different groups in the next 25 years. But the basic age-group ratio for working-age vs. elderly in 2050 is already baked into the cake.
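
Here is a minimal sketch of that “baked in” logic, using entirely hypothetical cohort sizes and a single rough survival assumption; it ignores immigration and mortality below age 65, so it illustrates the arithmetic rather than reproducing any official projection:

```python
# Hypothetical cohort sizes in 2025, in millions (illustrative only, not Census or CBO data).
pop_2025 = {"age 0-24": 100, "age 25-39": 65, "age 40-64": 105, "age 65+": 60}

# Twenty-five years later, today's 0-24s are 25-49, today's 25-39s are 50-64,
# and today's 40-64s are 65-89. A single rough survival factor stands in for
# mortality among the oldest group; immigration is ignored entirely.
survival_to_65_plus = 0.85

working_age_2050 = pop_2025["age 0-24"] + pop_2025["age 25-39"]
elderly_2050 = pop_2025["age 40-64"] * survival_to_65_plus

print(f"Working-age (25-64) per person 65+ in 2050: {working_age_2050 / elderly_2050:.2f}")
# Changing the survival factor or adding an immigration assumption shifts the ratio
# at the margin, but the cohorts doing the work in 2050 have already been born.
```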

The Congressional Budget Office offers a look at these kinds of projections in “The Demographic Outlook: 2026 to 2056” (January 7, 2026). As you think about the future shape of the US economy–number of people, number of workers, overall population aging, the sustainability of government programs in which working-age people support the elderly, and more–these projections shape what’s possible.

For example, this figure shows how, in the past, US population growth has been a mix of immigration (gray bars) and births-exceeding-deaths (blue bars). But the blue bars are shrinking, and in a few years the number of deaths will exceed births. As a result, all of US population growth will be traceable to immigration–and the US population growth rate is headed for zero in the next 30 years or so. Overall, the US population rises from about 349 million people this year to 364 million in 2056–and then declines after that.

As the US gets older, the ratio of (traditionally) working-age population to over-65 population will shrink. This is a long-term trend, going back 70 years and more. The “baby boom” generation born in the 15 years or so after World War II bolstered the working-age population for a time, but the US is now in the middle of that age group entering retirement. The ratio of working-age to elderly does level off around 2036 (given the assumptions about immigration above).

US fertility rates have been dropping, and have fallen below the “replacement rate” of roughly 2.0. One big shift here is that the fertility rate for 30-and-older women now exceeds the fertility rate for 29-and-younger women. Between these forces, the overall US fertility rate seems to be levelling out.

Meanwhile, the overall mortality rate is falling as life expectancy rises.

The immigration rate spiked during the last few years of the Biden administration, then plummeted. There are rises and falls related to economic factors and the pandemic. The projections expect total immigration in the next few decades to be closer to the average levels prevailing in the first two decades of the 21st century.

Many of these underlying factors are susceptible to policy and happenstance. But such policies and events are likely to be around the edges of the big-picture evolution taking place.

Yellen on Fiscal Dominance

“Fiscal dominance” refers to a situation where government debt grows so large that the nation’s central bank feels that it has little choice except to focus on making sure the government does not default–even if it means a surge of inflation. Janet Yellen described the issue and risks of fiscal dominance concisely in her comments at a session on the future of the Federal Reserve at the recent meetings of the Allied Social Science Associations in Philadelphia (January 6, 2026).

This postwar policy framework is characterized by monetary policy dominance—that is, the Fed is not and must never become the fiscal authority’s financing arm. Fiscal policy’s job is to set taxes and spending, and to finance deficits through issuing debt to the market at prevailing interest rates. It is the responsibility of Congress and the president—not the Federal Reserve—to insure that the government’s intertemporal budget constraint is satisfied. It is their duty to ensure that the path of debt is sustainable.

Fiscal dominance refers to the opposite configuration—a situation where the government’s fiscal position—its deficits and debt—puts such pressure on its financing needs that monetary policy becomes subordinate to those needs.  As a result, the central bank is pressured, implicitly or explicitly, to keep interest rates lower than warranted by macroeconomic conditions; or to purchase large quantities of government debt, not primarily to stabilize inflation and employment but to ease the government’s financing burden. In a fiscally dominant world, the government’s intertemporal budget constraint drives the price level. If markets don’t expect future primary surpluses to cover the debt, the adjustment eventually comes via inflation or default. This is the “fiscal theory of the price level.”

Fiscal dominance is dangerous because it typically results in higher and more volatile inflation or politically driven business cycles. When the central bank is constrained from raising rates or shrinking its balance sheet because that would increase debt service or trigger fiscal stress, inflation expectations may become unanchored. Households and firms may come to expect that inflation is the path of least resistance for managing high debts. Once such expectations take hold, stabilizing prices becomes significantly more costly. If inflation is firmly under control, the Fed has more flexibility to respond to labor market weakness. Fiscal dominance is also likely to raise term premia and borrowing costs as investors become concerned that the government will rely on inflation or financial repression to manage its debt. In addition, a central bank that is perceived as an arm of the Treasury may have less space to act forcefully in a crisis. For all of these reasons, avoiding fiscal dominance has been a central objective of modern central banking frameworks. 

Yellen does not believe that the US Federal Reserve currently faces a situation of “fiscal dominance.” But with the federal government running high annual deficits, while having already accumulated historically high levels of total debt, political pressures are building in that direction. Yellen says:

What would keep the U.S. out of fiscal dominance? First and foremost, this requires credible medium-term fiscal adjustment—not abrupt austerity, but a believable path that stabilizes debt/GDP; for example, through gradual changes to taxes and entitlements or reforms that tilt growth and productivity higher. Unfortunately, however, the revealed preference of both parties has been toward deficit-increasing policy. … I doubt that Americans will end up on the fiscal dominance course, but I definitely think the dangers are real.  

AEA Distinguished Fellow 2025: That’s Me

I almost always steer away from the personal in this space, but it feels like time to make an exception. Last weekend, at the 2026 Annual Meetings of the Allied Social Science Associations (which includes a number of associations with overlapping memberships joined by economics and finance academics), I was named a Distinguished Fellow of the American Economic Association.

It’s a considerable honor. The Distinguished Fellow award started in 1965, and in the 60 years since then, about 200 people have received it. For comparison, the Nobel Prize in economics started in 1969, and has been awarded to 99 people since then. There is partial but meaningful overlap in the lists of those who have won the two awards.

For me, the honor was quite unexpected. Distinguished Fellows are typically PhD economists who have been prominent in published research. But my job since 1986 has been Managing Editor of the Journal of Economic Perspectives. (All issues of the JEP from the first one to the most recent are freely available online.) As the prize citation notes: “Steering the JEP and ensuring continuity in its unique approach and voice has been Taylor’s primary contribution to economics over his four-decade career.”

Thus, being named a Distinguished Fellow reminded me of the time a decade ago when Bob Dylan was awarded the Nobel Prize in literature. Yes, Bob in general deserves awards. Me, too. But this particular award was not one I ever expected would come my way. I am more pleased about it than I can easily say.

You can read the prize citation here. Here is a picture of me receiving the award from Katherine Abraham, the current president of the American Economic Association.

What is Actually the Problem with the Current US Labor Market?

By conventional big-picture measures, the US labor market looks pretty good. However, the mood about the US labor market feels undeniably grim. Is this a “vibe-session,” based on little more than gloomy moods? Or can we dig a little deeper into the data and find some reasons for concern?

Let’s start with the big-picture good news. The US unemployment rate at 4.4% has edged up a bit from the remarkably low levels that prevailed in the late days of the pandemic, but remains quite low by the standards of the last half century–for example, less than half the level at the worst of the Great Recession.

Overall real wages have been edging up. The figure shows inflation-adjusted median wages for wage and salary workers. The median means that half of workers are above this level and half below–so while a wage increase that only affects upper-wage workers would cause the average to rise, it would not cause the median wage to rise.
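
A tiny numerical example (the hourly wages are made up) shows why the median is insensitive to gains concentrated at the top:

```python
import statistics

wages = [12, 15, 18, 22, 30, 45, 60]                # hypothetical hourly wages
boosted = wages[:-2] + [w * 2 for w in wages[-2:]]  # double only the two highest wages

print(statistics.mean(wages), statistics.median(wages))      # about 28.9, and 22
print(statistics.mean(boosted), statistics.median(boosted))  # about 43.9, and still 22
```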

The rise in real wages applies across the distribution of skill levels. This figure may look messy, but its message is straightforward. Divide up the labor force according to level of education. Focus again on workers over age 25, and the median wage. In the figure, the level of wages for each group has been set equal to 100 in the year 2000. Thus, the graph shows which education groups have received the highest increase in (nominal) wages over this time.

The orange line at the top shows that the biggest wage gains have gone to workers with less than a high school education. The purple line shows that the lowest wage gains have gone to those with “some college or associate degree.” The other three education categories–high school degree only, bachelor’s degree only, and bachelor’s degree and above–have seen median wages grow at about the same rate in the last 25 years.

Of course, these big picture labor market measures don’t mention everything. But they surely don’t suggest that the US labor market is in dire straits. So what is causing the feelings of gloom? Jeff Horwich of the Minneapolis Federal Reserve takes a deeper look at the labor market data in “Off the sidelines and into the low-hire economy: More Americans are diving back into the job hunt despite ‘ugliest’ labor market in years” (December 15, 2025). He points to several factors worth pondering.

First, the “hiring rate” is measured by the number of new hires divided by the number of current employees. Even as the unemployment rate nudges up, the hiring rate is sagging.
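
As a quick illustration of the arithmetic (the figures below are made up, not actual BLS data), the hiring rate, like the quit rate discussed later, is just a ratio to current employment:

```python
# Made-up monthly figures, in thousands of workers (illustrative only).
hires = 5_200
quits = 3_100
employment = 159_000

hiring_rate = 100 * hires / employment  # new hires as a share of current employment
quit_rate = 100 * quits / employment    # voluntary quits, computed the same way

print(f"hiring rate: {hiring_rate:.1f}%, quit rate: {quit_rate:.1f}%")  # about 3.3% and 1.9%
```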

Second, the proportion of the unemployed who are long-term unemployed–that is, unemployed for 27 weeks or more–is on the rise. In the aftermath of the Great Recession, the long-term unemployment rate remained stubbornly high for years. It spiked again after the pandemic recession. And in the last few years, it has been on the rise again.

Third, labor market economists divide up adults into three groups: 1) those who are employed, 2) those who are in the labor force in the sense that they are actively looking for jobs, but are unemployed, and 3) those who are out of the labor force, not looking for a job, and thus not counted as unemployed. Horwich points out that people who were counted as “out of the labor force” are reentering the labor force (red line). The common pattern is that people move from out-of-the-labor-force straight into a job. But the number of people moving from out of the labor force into actively looking for a job but ending up unemployed is edging up (blue line).

Fourth, of those who remain out of the labor force in the sense that they are not actively searching for a job, a rising number say that they “want a job now.” As Horwich points out, about two million people each month are entering the labor force and looking for a job, but not finding one (the figure above), but another six million of those who are out of the labor force would like a job, with that number rising.

Fifth, as the share of those out of the labor force drops, the “labor force participation rate” (which counts both the employed and those actively looking for jobs but currently unemployed) is rising. This figure shows the proportion for “prime age workers,” the 25-54 age bracket, but it’s rising for most age and demographic groups other than the elderly.

Finally, when existing workers perceive that the labor market is strong, they are more likely to quit an existing job to take another one. But when existing workers are more worried about finding an alternative job, the quit rate falls. For example, the quit rate plummeted during the Great Recession of 2008-2010. After some big oscillations related to the pandemic and its aftermath (including changes in work-from-home rules), the quit rate has been dropping for several years now.

An overall picture begins to emerge from this data. A rising number of people are reentering the labor market seeking jobs–some after being absent from the labor market for several years–but hiring is down. Among the unemployed, long-run unemployment is on the rise. Existing workers don’t perceive that alternative jobs are plentiful, and quit rates have fallen. These issues aren’t apparent in the basic unemployment or median wage data, but they are nonetheless very real.

Observing the labor market data doesn’t reveal how to interpret it. Changes in immigration patterns may have some effect, but it’s not obvious how fewer immigrants looking for jobs would lead to lower hiring rates, fewer quits, or greater long-run unemployment. I’ve heard it speculated that employers perceive a rise in the uncertainty of the economic environment, with different reasons applying to different groups: for some firms, it’s the seesaw pattern of tariffs threatened, coming, and going; for others, the rapid advance and potential disruption of artificial intelligence technologies; and for still others in certain areas, the ongoing push for substantially higher minimum wages. When employers are in doubt, they become more likely to default toward not hiring, at least not immediately.

As Horwich points out, burdens of consumer and mortgage debt are on the rise, in part because of higher interest rates, which can make getting a job feel even more urgent. In addition, I’ve heard it speculated that finding a job in the modern online labor market can feel forbiddingly dicey. You look at a website, fill out lots of online forms, maybe get a form letter notification back, or even a few interviews, but the sense is that every position that is posted online gets many, many applicants. Your chance of standing out from the crowd of other applicants feels small, unless you know someone or have a personal connection. To me, some of the grimness in the current labor market is about a justified and all-too-real feeling that navigating through the modern labor market feels like a long run up an icy hill–with the possibility of no reward waiting at the top.