The North American Trade Bloc at Risk

The “multipolar” world has been a reasonably popular framework for thinking about the global economy. The idea was that the world economy was sorting itself into three geographically defined regions: a North American group, a European group, and an East Asian group. Each group of countries had a combination of traits: a sufficient amount of industry and technological leadership in many areas, a mix of skilled and unskilled labor, access to natural resources. In addition, the three regions had the advantages of geographic proximity, and relatively free trade within the area, so that goods and services could flow back and forth with some ease.

For some earlier discussions of this idea here, see “NAFTA in a Multipolar World Economy” (August 11, 2017) and “A North American Vision” (November 5, 2014).

From this perspective, a main concern with President Trump’s threats to set off a trade war with Canada and Mexico is that such a war threatens to fracture the North American bloc. Meanwhile, Germany and western Europe will remain at the heart of the European bloc, while a combination of China, Japan, and Korea will remain at the heart of the Asian bloc.

After all, President Trump negotiated and signed the United States-Mexico-Canada Agreement (USMCA) in 2020 during his first term, to address his concerns about the earlier North American Free Trade Agreement. It’s worth quoting some of Trump’s comments during the signing ceremony in 2020:

The USMCA is the largest, fairest, most balanced, and modern trade agreement ever achieved.  There’s never been anything like it.  Other countries are now looking at it, but there can’t be a border like that because, believe it or not, that is by far the biggest border anywhere in the world, in terms of economy, in terms of people.  There’s nothing even close.

This is a colossal victory for our farmers, ranchers, energy workers, factory workers, and American workers in all 50 states … The USMCA is estimated to add another 1.2 percent to our GDP and create countless new American jobs.  It will make our blue-collar boom — which is beyond anybody’s expectation — even bigger, stronger, and more extraordinary, delivering massive gains for the loyal citizens of our nation.

For the first time in American history, we have replaced a disastrous trade deal that rewarded outsourcing with a truly fair and reciprocal trade deal that will keep jobs, wealth, and growth right here in America.  And, in a true sense, it’s also a partnership with Mexico and Canada and ourselves against the world.  It’s really a trade partnership, if you look at it that way.  And it’s a day of great celebration in all three countries.

For some perspective on what can be lost by fracturing the North American trading bloc, the Brookings Institution has published USMCA Forward 2025, a collection of seven essays and additional short comments about some of the positive gains from Trump’s trade agreement. As Joshua P. Meltzer and Brahima Sangafowa Coulibaly point out in their introduction to this volume, U.S. exports to Mexico and Canada have increased by 46% since the USMCA agreement was signed in 2020.

This set of essays focuses on a few key areas of interest for the North American trade bloc. One is critical minerals. Meltzer and Coulibaly note:

Critical minerals and rare earths are key inputs into the production of many technologies, such as batteries, mobile phones, and semiconductors and needed for defense purposes. … The challenge for North America is the heavy dependence on many of these minerals from China, particularly when it comes to processing. The Trump administration has also made secure supply chains a focus, and this will require addressing the heavy reliance on China for critical minerals. China’s recent announcement that it will restrict exports of various critical minerals to the U.S. in response to U.S. tariffs further underscores the strategic need for the U.S. to reduce this dependency. … [T]he U.S. is 100% reliant on imports of 16 critical minerals such as graphite and more than 50% reliant on imports for another 29 critical minerals, including rare earths, zinc, and nickel. About 40% of U.S. import of critical minerals come from Canada and Mexico. Moreover, the U.S., Canada, and Mexico have largely compl[e]mentary resources, meaning that U.S. support for the development of critical minerals and rare earths in Canada and Mexico does not compete with U.S. production but can replace existing dependencies on China.

In short, if the US is going to be a world leader in technologies like batteries, mobile phones, and semiconductors, it needs easy access to minerals across the North American trade bloc. Moreover, if US manufacturing in these and other areas is to set the standard for global productivity, the US business sector needs to be able to locate different pieces of the goods and services supply chain across the US, Canada, and Mexico in ways that can boost efficiency.

I was pleased back in 2020 by the passage of the USMCA, since it seemed to assuage President Trump’s worries about the NAFTA agreement, while still supporting the North American trading bloc. Import tariffs aimed at China at least have the rationale of being aimed at a country where the US is involved with economic and geopolitical competition. But in Trump’s words from 2020, USMCA is “a partnership with Mexico and Canada and ourselves against the world” and “a truly fair and reciprocal trade deal that will keep jobs, wealth, and growth right here in America.”

Interview with Paul Krugman: Trade and Industrial Policy

Chad Bown interviews Paul Krugman on the Trade Talks podcast on a range of subjects related to trade and industrial policy (Trade Talks, March 16, 2025, “Paul Krugman talks trade, industrial policy, and Trump”). Here are a few comments that caught my eye:

If you are worried about dependence of the US economy on foreign supply chains for certain products, the appropriate answer might be industrial policy, but not tariffs.

Max Corden’s 1974 book Trade Policy and Economic Welfare remains relevant. And what Corden and others said was, if there’s something that you think you need to be producing, then encourage production. The answer is industrial policy. The answer is to subsidize or otherwise promote. In general, a tariff has side effects that may not be what you want. If you were worried that too many of the world’s semiconductors are being produced within striking range of China, then you want to subsidize production of high-end semiconductors in the United States. But that’s not a good reason to raise the cost of semiconductors to the U.S. downstream industry. So, there’s a really pretty strong case for industrial policy here. That’s the generic principle.

Now actually implementing it is tricky, by the way. The thing about these agglomeration economies is that, once they’re well established, they’re really hard to break. … And so if you want to develop rival agglomerations to the existing agglomerations that you think are in the wrong place, it’s going to be expensive and hard, which doesn’t mean you shouldn’t do it, but you should realize that it’s not something you do by throwing a few dollars at the problem.

On Europe’s competitiveness problem:

I would suspect that the Europeans would be feeling relatively okay about their economic performance if it weren’t for the comparison with the United States. The old Eurosclerosis of persistent high unemployment is gone. In general, prime age workers are more likely to be working in Europe than they are in the U.S. In a lot of ways, the quality of life is decent. Their life expectancy is several years longer than ours. So Europe looks pretty good, except that they have clearly fallen behind in some advanced technologies and a significant productivity gap has opened up. …

A significant part of that gap in productivity between the U.S. and Europe is really very localized. It’s a reasonable guess that roughly half of the U.S.-European productivity differential reflects very high value-added per worker in Silicon Valley and also Seattle, which operates in somewhat the same way, on one side of the continent, and greater New York on the other. That it’s really the agglomerations of the tech industry in Silicon Valley and the agglomeration of the financial industry, on the East Coast, that are the difference.

On why tariffs aren’t the answer to reducing the US trade deficit:

It’s also probably not the case that tariffs will do much to reduce the trade deficit. There are some subtleties there, but the basic point in textbook economics says that the trade deficit is really determined by the capital account. It is the fact that foreigners want to invest in the United States – so there’s a net inflow of capital – and just as a matter of accounting that means that we have to have a trade deficit on the other side.

If you ask, “So what happens if you put on tariffs?” The answer is, even if other countries don’t retaliate, what happens is that the dollar rises. And we have lower imports, but we also have lower exports because we have a stronger dollar. And of course, if other countries do retaliate we don’t need as strong a dollar. But one way or another, exports fall to pretty much offset the effect on imports. …

Within the range that we’re talking about, tariffs are really unlikely to have an impact on the trade deficit. At the same time, they will raise costs. What’s really striking … was this disproportionate concentration of tariffs on intermediate goods rather than consumer goods, which meant that even manufacturing was probably not benefiting. You were probably actually reducing manufacturing employment. And we’re doing it again. As we’re holding this conversation, the tariffs that have already gone into effect are on steel and aluminum. That’s good for steel and aluminum manufacturer, I guess, and apparently lawn furniture, which for some reason is covered by this as well. But it’s pretty bad for everybody else who’s downstream. These are not tariffs that look like they’re going to achieve even their ostensible goals.
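The accounting point in this passage, that a net capital inflow and a trade deficit are two sides of the same ledger, can be sketched in a few lines. The numbers below are purely hypothetical, and income and transfer flows are ignored so that the current account is just the trade balance:

```python
# Stylized balance-of-payments accounting: a net capital inflow must be
# matched by a trade deficit of the same size (income and transfer
# flows ignored for simplicity). All numbers are hypothetical.

exports = 2_500   # billions of dollars
imports = 3_000

trade_balance = exports - imports          # -500: a trade deficit
net_capital_inflow = -trade_balance        # +500 flows in to finance it

# The two accounts sum to zero by construction.
assert trade_balance + net_capital_inflow == 0
print(trade_balance, net_capital_inflow)   # -500 500
```

Nothing in this arithmetic depends on tariffs, which is exactly Krugman’s point: so long as foreigners want to make net investments in the United States, the offsetting trade deficit has to show up somewhere.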

Private Credit: Replacing Banks for Business Loans?

In discussions of how businesses borrow money, there used to be essentially two choices: firms could either issue bonds or borrow through banks. But during the financial crisis of 2008-09, as well as in other episodes, it seemed that a number of banks were taking too much risk, which in bad times meant that their solvency was threatened and sometimes emergency action from the Federal Reserve was needed to keep them going. A wave of additional regulations on bank lending followed, to limit risky loans and bolster bank safety.

But as banks pulled back from lending, a number of businesses found it useful to borrow money elsewhere–specifically, via “private credit.” Fernando Avalos, Sebastian Doerr and Gabor Pinter tell the story in “The global drivers of private credit” (BIS Quarterly Review, March 2025, pp. 13-30).  For additional background, the IMF devoted Chapter 2 of its semiannual Global Financial Stability Report in April 2024 to “The Rise and Risks of Private Credit.”

Avalos, Doerr and Pinter write: “Private credit funds have increased their assets under management (AUM) from about $0.2 trillion in the early 2000s to over $2,500 billion today.” My personal rule is that quantities measured in trillions of dollars deserve some attention.
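To put that growth in annualized terms, here is a rough compound-growth calculation. The 22-year span is my own assumption about what “the early 2000s” means; the BIS article has the precise series:

```python
# Rough compound annual growth rate (CAGR) of private credit AUM,
# from ~$0.2 trillion to ~$2.5 trillion over an assumed ~22 years.
# Both endpoints and the time span are approximations, not official data.

start_aum = 0.2    # trillions of dollars, early 2000s
end_aum = 2.5      # trillions of dollars, today
years = 22         # assumed length of "the early 2000s" to now

cagr = (end_aum / start_aum) ** (1 / years) - 1
print(f"{cagr:.1%}")   # roughly 12% per year
```

Growth on the order of 12% per year, sustained for two decades, is how an obscure corner of finance turns into a multi-trillion-dollar asset class.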

“Private credit” is usually set up as an investment fund, where investors put money in and borrowers–typically medium- and small-sized firms–get funding. The funds are “closed end,” meaning that they raise a fixed amount of money and then stop. Some of these funds just do direct lending; some offer more complex loans, which can include provisions for converting the loan into equity ownership in the firm; and some invest in the debt of “distressed” companies, which they can buy at a low price. The firms that borrow through private credit are, as one would expect, firms that aren’t able to borrow what they want through banks or bond markets. Thus, they can often be younger firms that don’t yet have the steady profits that risk-averse bank lenders are looking for.

Avalos, Doerr and Pinter write:

Most funds operate as closed-end structures that lock in capital for their life cycle, which typically ranges from five to eight years. They do not trade on exchanges and are not available to retail investors, which makes them illiquid and subject to lighter regulation. The life cycle of funds usually matches the average maturity of their loan portfolios … Some fund structures, however, offer investors more frequent redemption windows. An important example is BDCs [business development companies] in the United States, many of which list their shares on stock exchanges and are accessible to retail investors. They are subject to federal regulation and have disclosure requirements similar to those of mutual funds, providing transparency and investor protection. With over $300 billion in AUM [assets under management], BDCs represent 20% of the private credit market in the United States today. Attempts to bring retail investors into the fold have been a general trend in the private credit space.

If you aren’t a finance wheeler-dealer, private credit may seem like just another exotic fact about the economy. The lenders in private credit have, to this point, mostly been big firms with long time horizons, like pension funds, insurance companies, and sovereign wealth funds. The private credit funds often specialize in a certain type of industry or firm, and the managers of the fund often have deep knowledge about the industry and firms to which they are lending. In a way, the purpose of stricter bank regulation was to get riskier loans out of the banking system, so it shouldn’t be a surprise when such loans end up being organized in an alternative form.

But of course, financial regulators and international organizations like the International Monetary Fund and the Bank for International Settlements stay awake nights thinking not about how loan arrangements are working just fine in the present, but about what the effects would be if such arrangements took a turn for the worse in the future–especially as retail-level investors with the ability to zoom in and out of markets become more common in this area.

For example, what if broader economic or financial conditions lead to a much higher risk of default in these funds? As a result, holders of these funds start trying to sell these not-very-liquid investments, and panicky selling drives down the price. Regulated pension funds and insurance companies–even some banks that invest in these funds–see that the value of their investment in these funds is falling. They start to draw on lines of credit and other sources of short-term funding, but with the increased risk and falling prices, those other sources of short-term funding start drying up. Yes, all of this is not at all a likely near-term scenario. But it’s why the IMF wrote last year:

Given the potential risk private credit poses to financial stability, authorities could consider a more proactive supervisory and regulatory approach to this fast-growing, interconnected asset class. … Several jurisdictions have already undertaken initiatives to enhance their regulatory framework in order to more comprehensively address potential systemic risks and challenges related to investor protection. The US Securities and Exchange Commission (SEC) is making substantial efforts to enhance regulatory requirements for private funds, including enhancing their reporting requirements. The European Union has recently amended the Alternative Investment Fund Managers Directive—commonly referred to as AIFMD II—to include enhanced reporting, risk management, and liquidity risk management. … Regulatory authorities in other countries (such as China, India, and the United Kingdom) have also enhanced the regulation and supervision of private funds. With the growth of the private funds sector in general, supervisors have also increased scrutiny over various aspects of private funds, particularly on conflicts, conduct, valuation, and disclosures.

Thus, the race between financial innovation and regulation continues to evolve.

Snapshots about the Federal Workforce

I saw some mention that there has been a sharp rise in federal civilian employment in 2023 and 2024. When I tracked down the numbers at the FRED website run by the Federal Reserve Bank of St. Louis, it looked like this:

Just to be clear, this is civilian employees only, and doesn’t count the 1.3 million or so in the armed forces. However, it does count about 600,000 postal workers, although the US Postal Service is a semi-autonomous agency. The spikes in federal employment every 10 years are temporary employment related to the decennial Census. If you squint a little, you can also see a pattern where government employment tends to rise in times of recession (shaded areas) or immediately after.

However, it also looks as if federal employment had settled at under 2.8 million workers during non-recession, non-Census periods in the late 1990s, the first decade of the 2000s, and from about 2013-2016. From this view, the increase of about 140,000 federal jobs from the start of 2023 to the present does look like a break with past patterns.

I know that the rise in federal employment since 2023 is not about additional post office workers. I’ve seen comments in the press that the higher federal employment relates to implementation of infrastructure and green energy grants. But I confess that I haven’t done the work of tracking the rise in federal employment in the last couple of years back to individual agencies. Someone who wants to spend the time digging around at the Office of Personnel Management website could do so.

But if you focus on the 2.4 million non-postal civilian employees, the breakdown across agencies looks like this, according to Drew Desilver at the Pew Research Center (“What the data says about federal workers,” January 7, 2025). One example of a small-employment agency among the small boxes at the bottom right of the figure would be the US Department of Education, with fewer than 5,000 employees. But truly substantial cuts in federal employment would require truly substantial cuts from the big boxes.

I would not expect federal employment to be a constant share of the US workforce. After all, a substantial part of government work involves working with information, and the leaps and bounds of information technology should make it possible, in a broad sense, to accomplish similar tasks with fewer workers. Indeed, that seems to be the pattern over time. The figure below takes the number of federal employees and divides by total employees in the US economy. Back in the early 1990s, federal employees were almost 3% of the workforce. It’s now about 1.9% of the workforce–basically the same as before the federal employment spikes from the Census and the pandemic recession. Also, when you look at federal employment relative to total employment, the recent jump in federal jobs goes away; in other words, the last two years are a time when federal employment has been rising at about the same rate as total employment, but not faster.
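The ratio behind that figure is simple division. A back-of-the-envelope sketch, using rounded figures that only approximate the FRED series (treat the numbers as illustrative, not official statistics):

```python
# Federal civilian employment as a share of total US employment.
# Figures are rounded approximations of the FRED series discussed
# in the text, in thousands of workers; not official statistics.

def share(federal_thousands, total_thousands):
    """Federal employment as a percent of total employment."""
    return 100 * federal_thousands / total_thousands

early_1990s = share(3_100, 108_000)   # ~3.1M federal, ~108M total
today = share(3_000, 158_000)         # ~3.0M federal, ~158M total

print(round(early_1990s, 1))   # ~2.9 (percent)
print(round(today, 1))         # ~1.9 (percent)
```

The point of the exercise: the federal headcount has stayed roughly flat while total employment grew by some 50 million, which is why the share has drifted down even as the raw count ticked up.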

Of course, these kinds of overall numbers don’t offer evidence that certain parts of the US government should have fewer workers, more workers, or the same number. The rise in federal employment in 2023-24 suggests that more attention might be paid to who was hired in what departments. The ratio of federal employment to total employment suggests that we have not (yet) seen a radical break with past federal employment patterns.

Mistaken Identities: The International Trade Version

It is common in current US political discourse to hear it asserted, as an incontrovertible truth, that the US economy is smaller because of the US trade deficit–or equivalently, that tariffs to reduce imports will cause the US economy to grow. Such claims are generally not well-founded. But here, I want to point out one of the arguments for this claim that reflects a more fundamental misunderstanding.

As you will learn from any introductory economic text, there are several ways of measuring the size of an economy, and one of the standard approaches is:

GDP = C + I + G + X – M.

This “national income accounting identity,” as it is sometimes called, is based on the idea that a nation’s economic output can be used in several main ways: it can be consumed (C), it can be invested (I), and it can be part of government consumption (G) (where this term includes only government use of goods and services, not government spending that only represents a pass-through of income to households or firms). The final two terms cover international trade: some portion of a nation’s economic output can be exported (X) to other countries, but we also need to subtract imports (M), which were produced elsewhere.

This equation is not a “theory” about how the economy works. Instead, an “identity” is a statement that is true by the definition of the terms. This is one of the ways in which GDP is defined. If you go to the website of the Bureau of Economic Analysis and look at a press release for recent estimates of GDP, these are the categories that you see being estimated.
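Because it is an identity, the equation balances for any set of numbers. A minimal sketch, with purely hypothetical figures in trillions of dollars (not actual BEA estimates):

```python
# GDP expenditure identity: GDP = C + I + G + X - M.
# All figures are hypothetical (trillions of dollars), chosen only
# to illustrate the accounting, not to match BEA data.

def gdp(c, i, g, x, m):
    """Expenditure-side GDP: consumption + investment +
    government purchases + exports - imports."""
    return c + i + g + x - m

C, I, G, X, M = 15.0, 4.0, 3.5, 2.5, 3.0

print(gdp(C, I, G, X, M))   # 22.0
print(X - M)                # -0.5: net exports, here a trade deficit
```

The function just adds and subtracts; there is no behavior in it. That is the whole point of calling it an identity rather than a theory.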

The problem arises when someone takes an accounting identity and believes you can just move the numbers around to achieve a goal. Maurice Obstfeld explains the issues in “Mistaken Identities Make for Bad Trade Policy” (Peterson Institute for International Economics 24-13, October 2024). He writes:

The national income and product (NIP) identity is often the basis of claims that a trade deficit—an excess of import spending over export earnings—causes reduced economic growth and job losses. The identity reflects that a nation’s total production output (gross domestic product, or GDP) must be consumed by households, invested by businesses, purchased by the government, or exported abroad.

GDP = consumption + investment + government purchases + net exports.

The last term on the right is net exports (export receipts minus import expenditures), the balance of trade. It is included because some parts of national consumption, investment, and government purchases are imported from abroad, and these components (which add up to total imports) must therefore be subtracted from the right-hand side above to make the identity a true representation of how GDP is allocated among its possible uses. The preceding relationship is an identity because every product within GDP that is sold on the market is purchased for some use: double-entry bookkeeping.

The claim that trade deficits (negative levels of net exports) cost production and jobs follows immediately from a superficial application of the NIP identity. Suppose net exports fall further, causing the trade deficit to grow, but nothing else on the right-hand side changes. Then the identity implies that GDP must be lower by the same amount. This opens a faulty line of reasoning through which bigger trade deficits are necessarily a drag on output and employment.

Perhaps the key phrase in that explanation is “but nothing else on the right-hand side changes.” To be more specific, say that imports fall (set aside for the moment why they fall), and say that 100% of that decline in imports is matched by increased domestic output, so that GDP rises. However, “nothing else on the right-hand side changes”–that is, even though domestic production goes up, neither private nor government consumption rises, nor does investment, nor exports. Obstfeld puts it this way:

The prediction that implicitly underlies their calculations, however, is that if imports fall by some amount (for example), an equal amount of consumption or investment demand will automatically be redirected toward domestic products, leaving the sum of total consumption and investment spending unchanged. In terms of the NIP [national income and product] identity, they argue that net exports on the right-hand side will rise without any accompanying changes in the other right-hand side quantities, necessarily leading to higher GDP in precisely the amount of the trade balance improvement.

The flaw in this argument is that the trade deficit rarely if ever changes without some accompanying movement in consumption, investment, or government spending—and the way in which the trade balance interacts with other economic activity depends critically on why it is changing.

Notice the rhetorical shift that often happens here. We started with a statistical definition of GDP, which will always be true, because it’s the definition. It is true that if imports fall, something else in that definition of GDP will have to change, to preserve the identity. But the statement that the change will entirely happen in the form of greater domestic production is a specific theory about what will change–and it’s not at all obvious that the theory is correct. Here are some alternative theories about effects of import tariffs:

If the US imposes tariffs on imported goods, US imports will decline. However, other countries will retaliate with tariffs on US exports, so US exports will decline as well. If these two effects exactly offset one another, so that the lower US imports and lower US exports are the same, the trade deficit does not change and GDP does not change. Instead, there is a dislocation and reallocation in the US economy in which export-oriented industries take a hit, while US production for the US domestic market rises.

Or say that the US imposes tariffs on imported goods, so that US imports decline. This will necessarily mean that foreign producers who used to be selling into the US market are earning fewer US dollars. In the foreign exchange market, the supply of US dollars declines, and the exchange rate of the US dollar rises. As a result, US exports become more expensive in global markets, and exports end up falling as well.

Or say the US imposes tariffs on imported goods, so that US imports decline. Many of those imported goods are used by US firms as inputs to production. The reason US firms import these inputs is that they are either less expensive or higher quality (or both) than the same product would be if produced within US borders (if indeed they are even produced at all within US borders). Thus, all the US firms that have been depending on imported inputs (which includes most large and successful US multinationals) are facing a rise in their costs. As a result, they may decide to cut back on their levels of investment.

Or say the US imposes tariffs on imported goods, so that US imports decline. Many of these imports are purchased by consumers, who choose these items because they are either less expensive or higher quality (or both) than the same product would be if produced within US borders (if indeed the product is even produced at all within US borders). Thus, when these consumers face the necessity to purchase alternative goods, they will be buying something that they would have preferred less–based either on higher price or a difference in quality. As a result, consumers may decide to cut back.
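These scenarios can be made concrete with arithmetic. A minimal sketch, using purely hypothetical numbers: the identity GDP = C + I + G + X − M holds in every case, but GDP rises only in the naive story where “nothing else changes.”

```python
# The expenditure identity constrains the accounting but does not
# predict outcomes. All numbers are hypothetical (billions of dollars).

def gdp(c, i, g, x, m):
    return c + i + g + x - m

C, I, G, X, M = 15_000, 4_000, 3_500, 2_500, 3_000
base = gdp(C, I, G, X, M)                      # baseline: 22,000

# Naive story: imports fall by 500 and *nothing else changes*.
naive = gdp(C, I, G, X, M - 500)               # 22,500

# Retaliation story: exports also fall by 500; GDP is unchanged.
retaliation = gdp(C, I, G, X - 500, M - 500)   # 22,000

# Input-cost story: firms facing costlier inputs cut investment by 300.
input_cost = gdp(C, I - 300, G, X, M - 500)    # 22,200

print(base, naive, retaliation, input_cost)
```

Every line satisfies the identity; only a theory of behavior, not the identity itself, can say which line describes the actual economy.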

I want to emphasize two points here.

One is that all of these possibilities, as I have laid them out here, remain consistent with the basic definition of GDP. The basic definition of GDP does not tell you which of these outcomes are more or less likely–it only tells you how GDP is calculated. The definition of GDP does not tell you that if tariffs are imposed on imports, GDP will rise, or fall, or remain the same. The definition doesn’t tell you whether a change in tariffs will affect exports, or consumption, or investment. It’s just a definition, not a theory of how the economy will react. Anyone who starts with the statistical definition of GDP, and then asserts that lower imports will necessarily lead to equivalently higher domestic production, is pulling a fast one. In Obstfeld’s phrasing, they are assuming that “nothing else on the right-hand side changes.” It’s a whale of an assumption.

The other point is that to distinguish between possible theories, one needs to look at evidence. Obstfeld goes into considerably more detail about what theories are likely to play out in response to restrictions on imports, and why. For example, the “theory” that other countries will respond to tariffs by retaliating is happening in real time. The “theory” that tariffs on imports lead to a stronger exchange rate, and thus depress sales of exports, has happened in practice. Firms and households do suffer when their access to the imported goods they would prefer to have purchased is restricted.

There are lots of other arguments about import tariffs: I’ve discussed some of them in the past, and am sure to discuss more in the future. But the argument that import tariffs will increase total domestic production, when based on the definition of GDP and the national income and product identity, should be an embarrassment to anyone making it, and it should be ridiculed and laughed down wherever it is encountered.

Will the Next Generation Be Better Off? International Pessimism

In the US economy, since the modern pattern of economic growth started back in the early 19th century, average annual growth has been remarkably close to 2% per year on a per capita basis (as I have noted here and here). It would be an extraordinary reversal of fortune for this process to stop and then to reverse itself. But polling data suggests that about 75% of Americans believe that when today’s children grow up, they will be worse off than their parents.
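Compounding makes clear what 2% annual per capita growth implies over a generation. A quick check, assuming a 30-year generation (an illustrative choice of span, not a statistic from the source):

```python
# At 2% annual per capita growth, income per person nearly doubles
# over a 30-year generation. Purely illustrative arithmetic.

growth_rate = 0.02
generation_years = 30

multiple = (1 + growth_rate) ** generation_years
print(round(multiple, 2))   # ~1.81: children ~81% richer than parents
```

Against that arithmetic baseline, a widespread expectation that children will end up worse off than their parents is a striking belief indeed.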

It isn’t just America, either. Across high-income countries of the world, and a number of middle-income countries as well, majorities or near-majorities believe that when children in their country grow up, they will be worse off than their parents. Marta Doroszczyk compiles some polling data for a short article on “Generational Concerns” in the March 2025 issue of Finance & Development from the IMF. Here’s an illustrative figure:

What to make of this?

1) Polling data is rarely simple to interpret. My guess is that when many people are asked about economic prospects for the long-run and the next generation, they have a tendency to react based on medium-run or even short-run concerns–and probably not just economic concerns, either. My guess is that few people have recently looked up the per capita growth statistics before answering.

2) It’s not hard to understand why people in Japan, which has had an economy stuck in slow growth since the early 1990s, or Greece, which has been struggling through one economic crisis after another for a couple of decades now, might be pessimistic about the economic future.

3) But even those types of issues and patterns duly noted, there’s a widespread economic pessimism here, which reaches beyond the particulars of any single country.

4) Back in 2019, the OECD put out a report on what it means to be “middle class.” A central theme in the report was that, across many countries, “middle class” status involved a sense that consumption was within reach in three main areas: housing, health care, and higher education. In a lot of countries, those are areas where prices have been rising rapidly.

5) It’s interesting to consider some of the more optimistic countries at the bottom of the figure: India, Bangladesh, Indonesia, Israel, Philippines, Poland.

6) If someone is determined to be pessimistic, it can be hard to talk them out of it. But pessimism affects politics. If we are heading into a time when future generations are actually worse off, we are in a zero-sum or negative-sum economy, in which the only way to benefit some group–or to pursue objectives like environmental protection–is to cause equivalent losses for other groups. The underlying politics of that setting will be full of bitterness and suspicion. In a US political context, and perhaps a European one as well, it feels to me as if there is room for a politics of optimism and abundance, but it needs to be backed up by actual public and private investments, accomplishments, and observable progress.

Three Questions on the US Safety Net

Here are three often-asked questions about the US safety net:

1) If your goal is to expand the social safety net, are you more likely to be successful with a focus on universal programs (like Social Security) or means-tested programs (like food stamps or welfare)?

2) Back in 1996, President Bill Clinton signed what is often called the “welfare reform act,” following up on his campaign promise to “end welfare as we know it.” For better or worse, did that legislation keep Clinton’s promise?

3) Those who need social safety net programs are often at a point where their lives are unstable: perhaps an unstable job, unstable housing, unstable family arrangements, unstable income, even unstable health. For people in this situation, finding out about available programs, signing up for them, and keeping track of changes in requirements so that they can remain eligible, may be difficult. Are the administrative burdens that safety net programs place on recipients a necessary evil, or something else?

For answers to these and many related questions, I recommend the three-paper symposium on the US safety net in the Winter 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

Howard discusses the poverty-reduction history of means-tested and broader social insurance programs over the last century or so. He reminds readers of the old saying “programs for the poor are poor programs.”

Given the stark contrasts between means-tested “welfare” and inclusive Social Security, it is not surprising when advocates for paid family and medical leave (Romig and Bryant 2021) or long-term care (Powell 2019) endorse a social insurance model. Nor is it surprising when they endorse Medicare for All (Scott 2019). Part of me is attracted to these proposals, for reasons that range from lower administrative costs to social solidarity. I am also attracted by the possibilities of giving extra help to lower-income households within broadly inclusive programs. However, recent history does not point decisively in favor of inclusive over means-tested programs. In retrospect, the biggest achievements of social insurance, politically and programmatically, occurred between the 1930s and 1970s. Since then, Social Security and Medicare have had more success preventing retrenchment than accomplishing expansion.

Conversely, the last 30 years or so have seen dramatic increases in spending on a number of means-tested programs: the Earned Income Tax Credit, Medicaid, the Children’s Health Insurance Program (CHIP), the Child Tax Credit, and food stamps (now called the Supplemental Nutrition Assistance Program). Meanwhile, the trust funds that support Social Security and the hospital portion of Medicare are dwindling fast, and those programs face sharp pressure to constrain the projected future rise in spending. The evidence of the last three decades or so is that the US political system may be better at expanding means-tested programs, and less willing to expand universal programs, than previously thought.

Schmidt, Shore-Sheppard, and Watson dig into the evolution of specific safety net programs since 1996 in more detail. They point out that when President Clinton signed the welfare reform act into law in 1996, he was fulfilling a campaign promise to “end welfare as we know it.” The legislation in fact did so along several dimensions:

First, spending on conventional “welfare”–that is, the Aid to Families with Dependent Children (AFDC), which was converted into Temporary Assistance to Needy Families (TANF) in the 1996 welfare reform act–has dwindled in real dollars and has pivoted away from cash support to families and toward programs related to reducing poverty (for example, programs to improve high school graduation rates and reduce teen pregnancies). However, as noted a moment ago, total spending on means-tested programs has increased, not decreased.

Second, the earlier “welfare” program was a mixed federal-state program. Indeed, many safety-net programs were designed on a shared federal-state basis, including AFDC, Medicaid, and unemployment insurance. But the new surge of means-tested spending has in substantial part been federal spending–which also has the effect of equalizing support for the poor across states.

Third, many of the expanded means-tested programs have linkages to working parents, or to benefits aimed at children. As a result of this targeting, low-income working families have become better off. However, low-income single people, or children in families where the adults do not have a job (and thus can be the poorest of the poor), may well be worse off.

Finally, Herd and Moynihan discuss administrative burdens of the safety net. Here’s an illustrative anecdote from the start of their paper:

By the age of five, Abel Sewell had survived cancer and was receiving monthly blood tests to ensure his leukemia had not returned. The tests were covered by TennCare, Tennessee’s Medicaid program, until, at a regular doctor’s visit, his mother discovered that the coverage had lapsed. Abel’s mother spent months fighting to get their coverage restored, ultimately taking out a second mortgage on their home to manage the mounting health care debt that came from being uninsured. The family very much wanted health services, and were willing to endure significant hardships to get it.

The Sewells were not an anomaly. Between 2016 and 2019, almost 250,000 children in Tennessee lost coverage. What happened? The Sewells, like many others, said they never received the TennCare renewal forms. Even those who did receive the renewal packets often struggled to complete the 47 pages (Kelman and Reicher 2019). Failure to return forms accounted for 67 percent of those who lost coverage, and it seems likely that many of them did not receive the forms due to outdated mailing addresses. Late or incomplete forms also resulted in a loss of coverage (Arbogast, Chorniy, and Currie 2022).

Full disclosure: The issue of administrative burdens is personal for me. My wife and I are the legal guardians for one of my adult children, who is disabled, and thus receives federal Disability Insurance (which means he is also on Medicare for health insurance) and state-level support through a Minnesota program. We are deeply grateful that such support is available, and the benefits make it possible for our son to live independently (although we live close by). But the paperwork burden is heavy, even for a family with two college-educated parents. We strongly suspect that a number of people who are eligible for the same benefits as my son are not actually receiving these benefits, because the legal guardians of those people can’t handle the paperwork burden.

As Herd and Moynihan point out, part of the problem here is “sludge,” the gradual accumulation of rules and forms in a bureaucracy. As a simple example, imagine that there are two safety-net programs, and a large majority of those who qualify for one will also qualify for the other. One program has a work requirement; one does not. Should we add a work requirement to the second program? As a practical matter, the additional work requirement is largely redundant: most of those in the second program already face a work requirement through the first. But in a bureaucracy, there is a natural tendency to reason along the lines of, “they already meet the work requirement for one program, so why not add it for the other program, too?” Multiply this tendency, and you end up mailing out 47-page forms for annual renewals. For our family, a single 47-page form per year would be an enormous improvement over the administrative burden we actually face.

On average, those families with the lowest level of resources will have the hardest time meeting these administrative burdens. In that sense, adding to the paperwork burden for safety-net programs is a regressive method of rationing their benefits.

For the US population as a whole, 11.1% of the population is below the poverty line, and an additional 15.8% are above the poverty line, but have income of less than twice the poverty line. For those under 18 years of age–for whom the advice to “get a job” has zero practical relevance–15.2% of the population lives in a household below the poverty line, and an additional 19.8% are in a household above the poverty line but with an income less than twice the poverty line. At its best, the safety net is about helping this substantial part of the US population to have sufficient resources to persevere through hard times, and to have the space to envision and work toward better prospects in the future.

Adam Smith on Those Who Wish to Dominate Others

One of my long-ago professors–not an economist, and not a political conservative–sometimes said that Adam Smith was just flat out deeper and more interesting than many of his critics, who often try to reduce him to a cardboard cutout disciple of free-market fundamentalism. For example, I’ve heard (uninformed) criticisms of Smith that he assumes everyone wants to buy and sell, when a number of people instead would prefer to dominate and take. Paolo Santori engages with this corner of Smith’s work in “Domination vs. Persuasion: The Role of Libido Dominandi in Adam Smith’s Thought” (The Review of Politics, 2025, 1–18).

Santori seems to prefer the Latin libido dominandi, but as he points out, Smith writes of the “love of domination” and the “love to domineer.” Here’s Smith’s discussion of the desire to dominate, in the context of masters who want to dominate slaves, from The Wealth of Nations (Book III, Chapter 2). Smith wrote:

The experience of all ages and nations, I believe, demonstrates that the work done by slaves, though it appears to cost only their maintenance, is in the end the dearest of any. A person who can acquire no property, can have no other interest but to eat as much, and to labour as little as possible. Whatever work he does beyond what is sufficient to purchase his own maintenance can be squeezed out of him by violence only, and not by any interest of his own. In ancient Italy, how much the cultivation of corn degenerated, how unprofitable it became to the master when it fell under the management of slaves, is remarked by both Pliny and Columella. In the time of Aristotle it had not been much better in ancient Greece. …

The pride of man makes him love to domineer, and nothing mortifies him so much as to be obliged to condescend to persuade his inferiors. Wherever the law allows it, and the nature of the work can afford it, therefore, he will generally prefer the service of slaves to that of freemen. The planting of sugar and tobacco can afford the expence of slave-cultivation. The raising of corn, it seems, in the present times, cannot. In the English colonies, of which the principal produce is corn, the far greater part of the work is done by freemen. … In our sugar colonies, on the contrary, the whole work is done by slaves, and in our tobacco colonies a very great part of it. The profits of a sugar-plantation in any of our West Indian colonies are generally much greater than those of any other cultivation that is known either in Europe or America; and the profits of a tobacco plantation, though inferior to those of sugar, are superior to those of corn, as has already been observed. Both can afford the expence of slave-cultivation, but sugar can afford it still better than tobacco. The number of negroes accordingly is much greater, in proportion to that of whites, in our sugar than in our tobacco colonies.

Santori traces Smith’s ideas about the “love to domineer” across Smith’s other works, like The Theory of Moral Sentiments and the Lectures on Jurisprudence. He argues that other authors have sometimes interpreted the “pride” that Smith speaks of as the root of a “love to domineer” as a kind of vanity or a desire for the recognition of others.

Santori argues that a more persuasive interpretation is to think of “pride” in this context as a sin. He quotes Smith in The Theory of Moral Sentiments: “The proud man does not always feel himself at ease in the company of his equals, and still less of that of his superiors.” Santori argues that for this kind of pride, there is a pleasure in not needing to spend time or energy persuading or obtaining consent. Indeed, this “love to domineer” is strong enough, in Smith’s argument, that those who hold slaves are willing to give up some of the material benefits they could have from hiring free labor.

In this part of the Wealth of Nations, Smith is discussing the historical transition from feudal to commercial society. In that context, Santori argues:

We read in the Lectures on Jurisprudence (LJ) and Wealth of Nations (WN) that masters’ love of domination is what will make slavery or servitude perpetual, in contrast with masters’ real interest that would be fostered by having free men rather than enslaved people working for them. … Smith argued that the emergence of European commercial society, grounded on free-market exchanges between individuals based on persuasion, marginalized and undermined libido dominandi. However, he knew that commercial society could not eliminate libido dominandi and that, whenever socio-economic circumstances allow, human beings will try to dominate each other. He saw this in the colonies and in specific markets (colliers and salters). …

To Smith, commercial society is a more mature way of conceiving life in common and civil society. In contrast, love of domination expresses a childish wanting to obtain everything without effort. Human beings can flourish when they learn to live in a society where they cannot impose their aims. They must deal with others’ aims and opinions in relations based on persuasion rather than domination. Adult life in a commercial society requires something better than the love of domination. Here, I am expanding Smith’s argument, but hope to have remained faithful to his spirit.

A common complaint about so-called “free markets” is that there are times they don’t feel especially “free,” like when it’s time to go to work in the morning or when the bills are due. Moreover, hierarchies in commercial firms and markets do provide some scope for those who “love to domineer” to do so. But the “love to domineer” doesn’t go away in countries where markets and politics are not free–and can manifest itself in even more distasteful ways.

Economic Uncertainty in the US Economy

It’s intuitively obvious that “uncertainty” matters in economic decision-making. If the risks of making a choice–starting a company, making an investment, buying a house–look especially big in the present, then there is reason to postpone that decision. As a result, higher uncertainty can lead to a drop in economic activity. Thus, it’s a concern that, by some measures, economic uncertainty is on the rise.

For example, here’s the Economic Policy Uncertainty Index for the United States, as reported by the FRED website run by the St. Louis Fed. You can see the recent spike on the far right.

Or here’s the Global Economic Policy Uncertainty Index:

These graphs are surely a reason for concern. Whatever the merits of a “move fast and break things” approach in certain contexts, it obviously will increase uncertainty. But how does one measure uncertainty? What is being measured here?

The US uncertainty index is not official government data. It is based on a method developed by three economists, Scott R. Baker, Nick Bloom, and Steven J. Davis. I mentioned their approach here when it was first being developed back in 2012. They combine three sources of data: “the frequency of newspaper articles that reference economic uncertainty and the role of policy; the number of federal tax code provisions that are set to expire in coming years; and the extent of disagreement among economic forecasters about future inflation and future government spending on goods and services.” The average value from 1985-2010 is arbitrarily set at 100. Thus, you can see spikes during the Great Recession, the pandemic, and now early in 2025.
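For readers curious about the mechanics, the base-period normalization works like rebasing any index number. Here is a minimal sketch (not the official Baker-Bloom-Davis code; the function name and the toy article counts are hypothetical) of rescaling a series so that its 1985-2010 average equals 100:

```python
# Illustrative sketch: rebase a series so its mean over the 1985-2010
# base period equals 100, the convention used for the EPU index.
# The data below are invented toy numbers, not the actual index inputs.

def normalize_to_base(values, years, base_start=1985, base_end=2010):
    """Rescale `values` so their average over the base years equals 100."""
    base = [v for v, yr in zip(values, years) if base_start <= yr <= base_end]
    base_mean = sum(base) / len(base)
    return [100 * v / base_mean for v in values]

# Toy data: counts of newspaper articles mentioning policy uncertainty.
years = [1985, 1990, 2008, 2020, 2025]
counts = [40, 50, 90, 200, 160]
index = normalize_to_base(counts, years)
# The 1985-2010 observations (40, 50, 90) average to 60, so each value
# is scaled by 100/60; a spike to 200 articles reads as an index of ~333.
```

The choice of base period is arbitrary in the sense that it only sets the scale; spikes relative to the historical average look the same regardless of the base chosen.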

For a discussion from Nicholas Bloom about this and other ways of measuring uncertainty, and how they relate to actual economic outcomes, a useful starting point is his article, “Fluctuations in Uncertainty,” in the Spring 2014 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

An obvious concern about measures of uncertainty based (at least partly) on news reports is that there may be a divergence between how the media is covering the news of the day and what actual investors and business people are doing and saying. At least at the moment, there appears to be such a divergence.

For example, a standard measure of uncertainty in financial markets is the CBOE Volatility Index from the Chicago Board Options Exchange, commonly called the VIX. The basic idea is to infer expectations of stock market volatility from the options that investors are buying on future values of the S&P 500 index. For example, if more investors are buying options to protect themselves against especially large falls in stock prices, then expected volatility would be up. But the VIX isn’t showing a rise in uncertainty just now.

Another way to measure uncertainty is to ask businesses about their sales and employment forecasts 12 months in the future, and how much uncertainty they feel about those forecasts. The Federal Reserve Bank of Atlanta carries out a Survey of Business Uncertainty with this approach, and it does not show a prominent recent uptick in business uncertainty.

I don’t quite know what to make of these various measures. The Baker-Bloom-Davis measure of uncertainty has been tested and used in research, and it cannot be casually dismissed. However, other measures of uncertainty are not spiking in the same way. At the moment, it seems fair to say that there’s uncertainty about uncertainty, which isn’t the same thing as greater uncertainty, but perhaps headed in that direction.

High and Low Stress in the Workplace: A World War II Example

Will workers in an inherently high-stress environment perform better if their bosses seek to defuse that stress? Or if their bosses play up and emphasize the stress? The answer probably depends on specific contexts of the workforce and the boss. But for some evidence and a speculative answer in one context, Oded Stark offers “Stress in the air: A conjecture” (Economics and Human Biology, December 2024). From the abstract:

The 1949 study The American Soldier: Combat and Its Aftermath, Volume II, by Stouffer et al. presents detailed accounts of the attitudes of American fighter pilots toward the stress experienced by them and of the policies and practices of the American Air Force command in addressing this stress during WWII. The 2022 study “Killer incentives” by Ager et al. documents an aspect and a repercussion of the stress of German fighter pilots and can be used to identify the response to that stress by the German Air Force command during WWII. Drawing on these two studies, in this paper I construct fighter pilot stress profiles in the two air forces. The picture that emerges is that there is a stark difference between the approaches of the two commands. This diversity leads me to conjecture that the American Air Force command explicitly sought to forestall and curtail fighter pilots’ stress, whereas the German Air Force command implicitly cultivated and engineered fighter pilots’ stress.

Stark points out that the American Air Force command was very aware of the stress experienced by fighter pilots, and how performance tended to diminish with additional missions. They tried to address the stress in various ways. One approach was to set a limit: “The limit to the tour of duty of fighter pilots was 300 hours of combat flying, which was typically achieved in six or seven months of active combat duty.” This limit was pre-announced and socially sanctioned. In addition, Stark quotes Stouffer about ongoing evaluation of fighter pilots: “All fighter pilots were systematically examined throughout the entire period that they were on operational duty; as soon as any … anxiety reaction to combat flying was detected, the man was immediately removed from combat duty as a fighter pilot”–although those removed from combat duty could be reassigned to less risky flights. Finally, American flight crews were rewarded in terms of the total missions they completed, and medals were typically awarded after the tour of duty was complete, not based on whether a particular mission had been especially risky.

The German Air Command during World War II took a different approach. It encouraged rivalry between fighter pilots, and gave decorations and promotions based in part on whether a mission was especially risky. Stark adds: “As noted many times in the Stouffer et al. study, the American army was well aware of and sympathetic to the problem of psychiatric combat breakdown (by 1943 providing treatment for psychiatric casualties, either at forward stations near the front or in dedicated hospitals closer to the rear), whereas the German army was generally hostile to the idea of psychiatric breakdown and those who were considered guilty of malingering or cowardice were not treated well.”

One can easily hypothesize reasons why organizations might take different approaches to stress management. In certain contexts (financial markets?), some kinds of risk-taking might be especially remunerative. In wartime, perhaps an aggressor has reason to encourage risk-taking, while the party fighting back (and expecting a surge of wartime production to arrive) will want to deal with stress differently. Even before the war started, the culture that generated US fighter pilots in World War II might have been quite different from the culture that generated German fighter pilots.

Thus, Stark’s point is not that there is one best approach that organizations should follow for motivating workers in stressful environments. But it can be useful to think explicitly about whether a given work environment seeks to be stress-reducing or stress-increasing–and what tradeoffs can arise.