Why So Many Shareholders of US Firms are Untaxed

Over the last half-century or so, the share of corporate stock that is owned by investors with taxable mutual funds or brokerage accounts has fallen dramatically. Steven M. Rosenthal and Livia Mucciolo tell the story in “Who’s Left to Tax? Grappling With a Dwindling Shareholder Tax Base” (Tax Notes, April 1, 2024).

Here’s their figure showing a breakdown of who owns stock in US publicly traded corporations. Back in the 1960s, 80% of this ownership was in the form of taxable accounts. But the share of US corporate stock held by foreign investors and retirement accounts has risen substantially, and nonprofits own a chunk of US corporate stock as well. So over the last two decades, only 20-30% of US corporate stock has been held in taxable accounts.

Rosenthal and Mucciolo offer some additional discussion of how these groups are taxed. For example, dividends paid by US firms are taxable, even when paid to foreign investors, but these payments are governed by international treaties. They explain: “However, the rate is often reduced by tax treaties between the United States and the home country of the foreign investor: from 30 percent to 15 percent on portfolio investment dividends, for example, and 5 percent or even 0 percent on dividends from direct investments.” Foreign investors do not pay capital gains on stocks to the US government–instead, such gains are taxable in their home country. If US firms use the increasingly common practice of distributing funds to their investors by repurchasing their shares, then such payments are treated as capital gains, not dividends.

For retirement accounts, the common practice is that the money is not taxed when it goes into the account, and the returns are not taxed as they occur over time. Instead, retirement money is taxed as income to the taxpayer when it is received after retirement. Nonprofits, of course, are not subject to income taxes.

With these patterns in mind, proposals for taxing owners of corporate stock as a group–not just the minority who hold their investments in taxable brokerage and mutual fund accounts–are going to run into complexities. Dramatic changes in retirement accounts or international tax treaties are not a simple matter, in politics or economics. Jacking up taxes on the 20-30% of shareholders who are taxable would create incentives to push their share even lower. One can make an argument that a reason for an explicit tax on corporate income is that it has become so difficult to tax the gains to shareholders of those firms.

The authors describe the challenges without trying to spell out policy recommendations. They note: “The transformation over the past 60 years in the nature of U.S. stock ownership from overwhelmingly domestic taxable accounts to overwhelmingly foreign and tax-exempt investors has many important policy implications, including how we can most effectively tax corporate profits; who is affected by changes in corporate taxation; and the form of corporate payouts to shareholders. Policymakers must continue the process, only now beginning, of grappling with the dwindling shareholder tax base.”

Statistics is (Literally) Statecraft

Statistics are not reality, but they are a map to reality, and that map is central to the basic knowledge needed for modern government. Or at least so argues Michel Foucault in Security, Territory, Population: Lectures at the Collège de France, 1977-1978 (edited by Michel Senellart, translated by Graham Burchell, originally published 2004, translation into English published in 2007). For example, he argues: “[T]his knowledge of things that comprise the very reality of the state is precisely what at the time was called `statistics.’ Etymologically, statistics is knowledge of the state, of the forces and resources that characterize the state at a given moment.”

Here’s a passage from Foucault’s lecture of March 15, 1978. (Hat tip: I was introduced to this essay by a LinkedIn post from Noah Williams at the University of Miami. My previous readings of Foucault did not take me this deep into his writings!)

[A]t the level of content, what must be known in order to be able to govern? I think we see an important phenomenon here, an essential transformation. In the images, the representation, and the art of government as it was defined up to the start of the seventeenth century, the sovereign essentially had to be wise and prudent. What did it mean to be wise? Being wise meant knowing the laws: knowing the positive laws of the country, the natural laws imposed on all men, and, of course, the commandments of God himself. Being wise meant knowing the historical examples, the models of virtue, and making them rules of behavior. On the other hand, the sovereign had to be prudent, that is to say, to know in what measure, when, and in what circumstances it was actually necessary to apply this wisdom. When, for example, should the laws of justice be rigorously applied, and when, rather, should the principles of equity prevail over the formal rules of justice? Wisdom and prudence, that is to say, in the end an ability to handle laws.

At the start of the seventeenth century I think we see the appearance of a completely different description of the knowledge required by someone who governs. What the sovereign or someone who governs, the sovereign inasmuch as he governs, must know is not just the laws, and it is not even primarily or fundamentally the laws (although one always refers to them, of course, and it is necessary to know them). What I think is new, crucial, and determinant is that the sovereign must know those elements that constitute the state … That is to say, someone who governs must know the elements that enable the state to be preserved in its strength, or in the necessary development of its strength, so that it is not dominated by others or loses its existence by losing its strength or relative strength. That is to say, the sovereign’s necessary knowledge (savoir) will be a knowledge (connaissance) of things rather than knowledge of the law, and this knowledge of things that comprise the very reality of the state is precisely what at the time was called “statistics.” Etymologically, statistics is knowledge of the state, of the forces and resources that characterize the state at a given moment. For example, knowledge of the population, the measure of its quantity, mortality, natality; reckoning of the different categories of individuals in a state and of their wealth; assessment of the potential wealth available to the state, mines and forests, etcetera; assessment of the wealth in circulation, of the balance of trade, and measure of the effects of taxes and duties, all this data, and more besides, now constitute the essential content of the sovereign’s knowledge. So, it is no longer the corpus of laws or skill in applying them when necessary, but a set of technical knowledge that describes the reality of the state itself.

Walter O’Leary, a Managing Partner at South Pointe Capital and a colleague of Noah Williams at the University of Miami, pointed out the origins of the terminology of “statistics” from the Online Etymology Dictionary:

1770, “science dealing with data about the condition of a state or community” [Barnhart], from German Statistik, popularized and perhaps coined by German political scientist Gottfried Achenwall (1719-1772) in his “Vorbereitung zur Staatswissenschaft” (1748), from Modern Latin statisticum (collegium) “(lecture course on) state affairs,” from Italian statista “one skilled in statecraft,” from Latin status “a station, position, place; order, arrangement, condition,” figuratively “public order, community organization,” noun of action from past-participle stem of stare “to stand” (from PIE root *sta- “to stand, make or be firm”).

OED points out that “the context shows that [Achenwall] did not regard the term as novel,” but current use of it seems to trace to him. Sir John Sinclair is credited with introducing it in English use.

The broader meaning “numerical data of any sort collected and classified systematically” is from 1829; hence the study of any subject by means of extensive enumeration. Abbreviated form stats is recorded by 1961.

This early notion of statistics as statecraft is (of course) appealing to me. Indeed, it helped me to crystallize one form of my discontent with how modern politics is often practiced. I would probably be uncomfortable with a government run by economists and other technocrats. But I would like to feel that a larger share of politicians have more than a passing and outdated acquaintance with the statistics that provide a map of “the forces and resources that characterize the state at a given moment.”

What Are the Objectives of First-Year College Students?

For more than a half-century, a UCLA-based research group has been carrying out surveys of incoming first-year college students. There are lots of questions about the decision process the students went through in applying, and about their expectations and priorities. The data tables from the 2022 survey, from the Higher Education Research Institute at UCLA, are available here.

I’ll focus here on a single question, which asks about what life objectives are important. The first column shows the overall answer; the other two columns break out answers from males and females. (These figures are taken from several different tables: for those who want to dig deeper, the underlying tables offer a number of other breakdowns.)

It’s worth remembering that survey responses are always a mixture of what the person actually believes and what they feel is a desired or appropriate answer. With that noted, it’s interesting to consider some of the gender gaps here. For example, incoming first-year college students who are male are notably more likely to list “raising a family” and “becoming successful in a business of my own” as essential or very important. Females are notably more likely to emphasize “working to achieve greater gender equity,” along with a variety of other social goals like “working to correct economic inequalities,” “working to correct social inequalities,” “improving my understanding of other cultures and countries,” “helping to promote racial understanding,” “participating in a community action program,” “keeping up to date in political affairs,” “becoming involved in programs to clean up the environment,” and “helping others who are in difficulty.”

With that difference in mind, it’s interesting that “being very well off financially” is by far the highest value for both male and female incoming first-year students (although I’m not sure how the overall average can be lower than it is for males and females taken separately). And it’s interesting that despite the emphasis on social goals in female responses, males are slightly more likely to emphasize the goal of “developing a meaningful philosophy of life.”

One substantial swing over the decades has been in these two answers concerning “being well-off financially” and “developing a meaningful philosophy of life.” The reporting of the 1985 survey includes this figure. In the 1966 survey, “developing a meaningful philosophy of life” was a high priority for a much larger share of first-year college students. But priorities shift, the lines cross in the late 1970s, and by the 1980s “be financially very well off” is well in the lead.

What’s going on here? Here are some thoughts and reactions:

1) One reason is a dramatic shift in the gender mix. Back in the 1966 survey, only 31.6% of females listed “be financially very well off” as a top priority, compared with 54.1% of men.

2) The US economy suffered “stagflation” of high inflation and unemployment in the 1970s, which probably made financial concerns more salient. In recent decades, the Great Recession and the pandemic recession have kept financial concerns salient.

3) The cost of higher education has risen dramatically. As I’ve pointed out in the past, when I was thinking about college in the late 1970s, I had a lot of friends attending the big local state university–in my case, the University of Minnesota. At that time, it was possible to cover all of U of M tuition and a share of living expenses by working at a minimum-wage job full-time over the summer and 10 hours/week during school. That’s no longer even close to true at the University of Minnesota, much less at the pricier private colleges and universities. When colleges and universities have a high price, students are going to become more focused on financial goals.

4) The share of high school students who go immediately to a post-secondary program in the next year was around 45% in the 1960s. Before the pandemic, it had reached nearly 70%, before dropping off. My guess is that a substantial share of this expansion of enrollment was from people who were more interested in economic goals than in a “meaningful philosophy of life.”

5) The priorities of students will shape the intellectual climate of a college or university.

6) It’s interesting to me that the survey question doesn’t just ask about being “well-off,” but about being “very well-off.” There will be a distribution of economic outcomes. Perhaps being in the middle of that distribution–say, from the 40th to the 60th percentile–can be counted as “well-off.” But when people talk about being “very well-off,” it seems to me that they are thinking about being in the upper reaches of economic outcomes. It is statistically not possible for all college students to end up in the upper reaches of economic outcomes. Developing a philosophy of life that you find meaningful is possible for everyone; in contrast, making 85% of college and university students very well-off is statistically impossible.

The State of Globalization: Both More and Less Than You May Think

The widespread current belief about globalization, I would say, is that it has been in decline since the Trump presidency, as a result of increased tariffs, a sharpening of global conflicts with China and Russia, and disruptions from the pandemic. But even with this perceived decline, a common belief is that globalization is an overwhelming force in both the US and global economy. Both of these beliefs are likely incorrect. Steven A. Altman and Caroline R. Bastian provide evidence and discussion in the DHL Global Connectedness Report 2024.

Altman and Bastian argue that globalization, fully understood, reached an all-time high in 2022 and remained near that level in 2023. They look at a variety of globalization trends. For example, this figure shows patterns of globalization in goods, investment, migration, and travel.



As you might expect, international growth in data flows has been remarkable: “The amount of data crossing national borders over the internet has nearly tripled since 2019, fueling dramatic increases in international information flows.” However, as the report points out, growth in data flows has also been very rapid within countries, so it’s not clear if the international share of internet traffic is rising.

Some of this growth in internet traffic is being driven by global commerce in service industries, where jobs are performed in one country and the work product is transmitted digitally.

Information about science, intellectual property, and patents is globalizing as well.

An underlying theme here is that people sometimes talk about “globalization” as if it was purely a policy choice, and fully under the control of political actors. The rules about globalization make a difference, of course, but globalization is also driven by economic forces. In particular, the digital revolution has made it much easier to hear about distant products, to manage far-flung supply chains, to have services provided elsewhere, and to cooperate in matters of science and innovation. In some ways, the physical disruptions of the pandemic supercharged the importance of these digital connections.

But despite the growth of globalization to at or near all-time highs, it is paradoxically true that the extent of globalization is often overstated. Especially for the world’s largest economies, like the United States, what happens in the economy is overwhelmingly influenced by domestic factors. Even for the world economy as a whole, most flows are primarily domestic.

One might rejoice in the extent of globalization or lament it. That’s a topic for another day. But either way, it’s useful to see the extent and trends of globalization more clearly.

Snapshots of Corporate Bonds in the Long Run

Certain basic investment models are based on just two investment options: a safe asset like US Treasury bonds and a risky asset like a stock-market index fund. The underlying idea is that you can choose the riskiness of your preferred portfolio by having a larger or smaller share of the safe asset. For example, a person who reaches retirement can decide to take less risk by reducing what share of their portfolio is invested in stocks.
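In the simplest version of that two-asset framework (my own stylized notation, not drawn from any particular source), both the expected return and the risk of the portfolio scale with the share placed in the risky asset, which is why shifting toward the safe asset dials down risk:

$$r_p = w\,r_{\text{stocks}} + (1-w)\,r_{\text{safe}}, \qquad \sigma_p = w\,\sigma_{\text{stocks}} \ \text{(treating the safe asset as riskless)}.$$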

But where do corporate bonds fit in this scenario? There is something like $66 trillion outstanding in corporate debt around the world. Some of that debt is highly rated “investment grade” bonds, which in some ways resemble government debt–that is, safe but with lower interest rates. Another part is high-yield debt, sometimes called “junk bonds,” which seems closer to equities in having more risk and higher returns. (A professor of mine used to refer to “junk bonds” as “equity in drag.”) How do corporate bonds fit into the ecosystem of finance?

Elroy Dimson, Paul Marsh, and Mike Staunton offer a long-run view in “Corporate bonds and the credit premium.” It appears in the publicly available part of the UBS Global Investment Returns Yearbook 2024, which is subtitled: “Leveraging deep history to navigate the future.” They begin with a wry comment: “Traditionally, bonds have been seen as boring, relative to stocks. In choosing the name James Bond, Ian Fleming said, `I wanted the simplest, dullest, plainest-sounding name I could find.’”

However, there’s big money involved in bonds: “[D]ebt securities worldwide have a value of some USD 136 trillion compared with around USD 100 trillion for global equities. The debt total comprises some USD 70 trillion in government debt and USD 66 trillion of debt securities issued by corporations. Of this amount, corporate bonds account for around USD 45 trillion, the remainder being other corporate issues.”

Here’s a figure showing the distribution of debt securities around the world. The US dominates corporate bond markets in part because of the sheer size of its economy, but also because of the depth of its financial markets. Also, the role of corporate bonds is different in the US economy than in many other countries. In “bank-centered” economies, corporations often have very close ties (including cross-ownership) to one or several banks, and thus can raise money through those connections. In contrast, US financial markets are more likely to push companies that want to raise money to go “to the market” and raise funds from outside investors.

What do returns on corporate bond investments look like over time? The figure shows returns on US Treasury bonds as the light-gray line, and returns on US corporate bonds as the dark-gray line. The red line shows the gap between the two: that is, the extra return or “premium” that an investor gets for taking on the extra risk of corporate bonds: “Over the 158 years spanned by Figure 77, the credit spread has averaged 1.58% with a standard deviation of 0.73%. The lowest spread was 0.42% in 1965, while the highest was 4.53% in 1931.”
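For clarity, the credit spread quoted here is typically measured as the yield on corporate bonds minus the yield on Treasuries of comparable maturity; the numbers below are my own hypothetical illustration, chosen only to land near the long-run average, not figures from the Yearbook:

$$\text{credit spread}_t = y^{\text{corporate}}_t - y^{\text{Treasury}}_t, \qquad \text{e.g.}\ 5.6\% - 4.0\% = 1.6\%.$$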

Corporate bonds have higher returns because of their higher risk. As a result, they have tended to outperform safe US Treasuries over the long run.

One obvious risk for corporate bonds, as compared to US Treasury bonds, is that corporations are more likely to default than the US government. Of course, a default doesn’t mean that investors lose 100% of the value, but they can often lose half or even more. However, the share of the total value of US nonfinancial corporate bonds ending up in default has been declining over time.

To return to the question at the top, what role should corporate bonds be playing for investors? In a situation with a safe asset like Treasury bonds and a risky asset like stock market index funds, are corporate bonds more-or-less superfluous? The UBS authors emphasize that they are not offering advice about future investing. As the advertisements for investments are required to say: “Past performance is no guarantee of future results.”

However, they do cite several studies that looked back in time and calculated what would have been an optimal portfolio if you could choose between government debt, corporate debt, and stocks. Looking back at the past, at least, the optimal long-run portfolio would have included corporate bonds. Indeed, one study found that over the long run, the optimal portfolio from 1936-2014 would have included a larger share of corporate bonds than either government debt or stocks.

What about strategies to make money in bonds short-term? Again, the authors are careful to point out that it’s easy to point to the investments that would have worked in the past. But especially after such strategies have been publicized, there’s no guarantee that a strategy will continue to work in the future. Still, for completeness, the authors also mention a few limited and intriguing exceptions. One involves “fallen angels,” corporate debt that was once rated as safe and investment-grade, but whose credit rating has since declined to the point that it has become a high-yield “junk bond.” Here’s the dynamic that can unfold:

Most mandates for IG [investment-grade] corporate bond managers require them to sell bonds within a relatively short time-span if they get downgraded from IG to HY [high yield] status (typically bonds being downgraded to BB). These bonds are commonly referred to as “fallen angels”. The need for a substantial number of investors to divest within a limited window appears to have created price pressures that temporarily reduce prices below their fair values. Historically, it has been profitable to buy these fallen angels. Ben Dor et al. (2021) analyzed the fallen angels effect and report an extensive pattern of strong price reversals. Their results suggest that investors start selling in anticipation of the downgrade before it happens and continue to sell until around three months after. The price falls are then reversed and fallen angels outperform by a total of 6.6% in the two years post downgrade. The greater the initial under-performance, the higher the subsequent returns. They conclude that this is due to price pressure rather than an overreaction to the information implied by the downgrade.

This “fallen angel” price dynamic has now been publicized in the literature for some years, but at last check, it does not yet seem to have disappeared. Of course, it could easily disappear in the future.

Why Aren’t More Patents Leading to More Productivity?

The number of US patents granted has been rising rapidly. However, US productivity growth has not risen along with them. Why aren’t more patents leading to more productivity?

Aakash Kalyani of the St. Louis Federal Reserve discusses research that suggests an intriguing possibility in “The Innovation Puzzle: Patents and Productivity Growth” (Economic Synopses: Federal Reserve Bank of St. Louis, March 29, 2024). The underlying research paper, “The Creativity Decline: Evidence from US Patents,” is available here.

There’s a new wave of research in economics that finds ways to use text as data, and Kalyani’s research offers an interesting example. The idea is that some patents are more creative, while other patents have more of a me-too flavor. Kalyani suggests that one way to distinguish them is that more creative patents will be more likely to use new terms. As the research paper describes it:

A patent describes in detail the working or features of an invention, and to do so uses a range of technical terminology. To construct my measure, I decompose the text of each patent (beginning in 1930) into one-word (unigrams), two-word (bigrams) and three-word (trigrams) combinations, and subsequently remove those that are commonly-used in everyday English language to obtain a list of technical terms. I then classify these technical terms into ones that were previously unused in the five years before the patent was filed. This process yields the share of new technical terms in a patent that is my measure of patent creativity. For the baseline version, I consider the measure with bigrams, and I show that my empirical results are unchanged for a measure that uses all three–unigrams, bigrams and trigrams.

An intuitive example is that “cloud computing” was first used in a patent application in 2007. Thus, patents using the term “cloud computing” in 2007 would be counted as “creative,” but later patents would not be counted as creative merely for using “cloud computing”–they would only be counted as creative if they used some new term.
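To make the mechanics concrete, here is a minimal sketch of how a measure like this might be computed. The tiny everyday-word list, the example patent texts, and the function names are my own illustrative assumptions, not Kalyani’s actual code or data.

```python
import re

# A tiny stand-in for "commonly used everyday English" words; a real
# implementation would use a much larger reference corpus.
EVERYDAY = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "with"}

def technical_bigrams(text):
    """Break patent text into lowercase two-word combinations (bigrams),
    dropping any bigram that contains an everyday English word."""
    words = re.findall(r"[a-z]+", text.lower())
    return {(w1, w2) for w1, w2 in zip(words, words[1:])
            if w1 not in EVERYDAY and w2 not in EVERYDAY}

def creativity_scores(patents, lookback=5):
    """patents: list of (year, text) pairs, assumed sorted by filing year.
    For each patent, returns (year, share of its technical bigrams that did
    not appear in any patent filed within the previous `lookback` years)."""
    last_seen = {}  # bigram -> most recent year it appeared
    scores = []
    for year, text in patents:
        terms = technical_bigrams(text)
        if terms:
            new = [t for t in terms
                   if t not in last_seen or year - last_seen[t] > lookback]
            scores.append((year, len(new) / len(terms)))
        for t in terms:
            last_seen[t] = year  # record this patent's usage for later filings
    return scores

# Illustrative, made-up example: the 2007 filing introduces "cloud computing,"
# so it scores higher than the 2010 filing that merely reuses the term.
example = [
    (2007, "A system for cloud computing with distributed storage"),
    (2010, "An improved cloud computing system with distributed storage"),
]
print(creativity_scores(example))  # [(2007, 1.0), (2010, 0.5)]
```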

Of course, one can think of various things that might be right or wrong with defining “creative” patents in this way, but if we just go with the distinction, what patterns do we see? Here’s a figure from Kalyani. Three points seem worth emphasizing here: 1) The total number of patents per capita gradually declines from the 1930s up through the 1980s, and productivity growth generally declines at the same time. 2) The number of patents starts rising quickly in the 1980s, but productivity does not rise at the same rate. 3) Up to the 1990s, the number of “creative” patents shown by the green dashed line follows a general trend similar to total patents; starting around 2000, creative patents drop off sharply.

One issue with text-based evidence is that one can imagine reasons why the use of language might change: for example, perhaps the US Patent Office became more likely to grant patents if the applications avoided new terms and used pre-existing language. But one would of course have to back up that alternative hypothesis! Kalyani looks at other patterns as well and finds that “creative patents” seem to matter. He writes:

Through examination of top scoring creative patents, I observe that creative terminology in these patents captures the description of new products, processes and features. I undertake a series of validation exercises to further bolster this observation. First, I show that top scoring creative patents tend to cite recent academic research rather than previously filed patents. Second, top scoring patents also score higher on ex-post measures of patent quality. These patents receive more citations and higher valuation. Third, I show that creative patents are costlier investments for a firm, and that a creative patent is associated with about 24.9% higher R&D expenditure than a derivative patent. These findings together suggest that creative patents are costly investments that tend to originate from recent academic research and generate higher ex-post value and follow-on innovation than derivative patents.

What factors might explain the co-existence of a decline in “creative” patents and a sharp rise in more derivative “me-too” patents? One factor may be that the age of inventors is rising.

I use patents matched to inventors to show that inventors are about three-times (3.2x) as likely to file a creative patent on entry compared to later on in their career. This number falls to 1.52x for the second patent, 1.25x for the third patent, and so on. In the aggregate, I find that percentage of patents filed by first time inventors have dropped from about 50% in 1980 to 27% in 2016. This drop in share of patents by new inventors reflects the overall demographic shifts in the US. The share of 20-35 year olds in the US workforce dropped from about 47.6% to 28.5% during the same time period.

Another reason for the sharp rise of derivative me-too patents may involve business strategies. At least to me, it seems possible that firms in some industries are using “patent thickets”–the name given to a group of patents on a very similar subject. It’s hard for other firms to enter a market if they need to worry about not just one patent, but many. In addition, if a firm keeps adding new patents to the thicket as older patents expire, this barrier to entry from new firms may not go away. In this scenario, patents have become less of a marker of genuinely new technological improvements that boost productivity.

Follow R-Star?

For economists, r* refers to the “natural rate of interest” that emerges from economic theory. It’s the “Goldilocks” interest rate that is not too high and not too low: that is, the interest rate that would occur “naturally” in the economy when the economy is at potential output and inflation is stable. A “tight” or restrictive monetary policy would involve the central bank setting interest rates above r*; conversely, a “loose” or stimulative monetary policy would involve the central bank setting interest rates below r*.
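To put that last sentence in symbols (a stylized illustration in my own notation, not a formula from the paper): with a nominal policy rate i and expected inflation π^e, the real policy rate is i − π^e, and the stance can be read off its gap relative to r*:

$$\text{stance gap} = (i - \pi^{e}) - r^{*}, \qquad \text{gap} > 0 \ \text{restrictive}, \quad \text{gap} < 0 \ \text{stimulative}.$$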

It would obviously be useful to have clear estimates of r*. Do such estimates exist? Gianluca Benigno, Boris Hofmann, Galo Nuño, and Damiano Sandri investigate in “Quo vadis, r*? The natural rate of interest after the pandemic” (BIS Quarterly Review: Bank for International Settlements, March 2024, pp. 17-30). For those whose Latin is rusty or nonexistent, like me, a modern translation of “quo vadis” would be “where are you going,” while an older translation would be “whither goest thou.” The authors write:

[We are] assessing the natural rate of interest, commonly known as r*, in the post-pandemic era. The natural rate refers to the short-term real interest rate that would prevail in the absence of business cycle shocks, with output at potential, saving equating investment and stable inflation. Hence, the natural rate serves as a yardstick for where real policy interest rates are headed. It is also a benchmark for assessing the monetary policy stance “looking through” business cycle fluctuations. … Together with the long-run inflation rate, defined by the central bank inflation target, it pins down the long-run level of the nominal policy rate.

The challenge is that it’s not obvious how to estimate r*. After all, historical data tells us what interest rates were as the economy and monetary policy fluctuated, but for r*, you need an estimate of what the interest rate would have been if the economy had remained at potential GDP, with low unemployment and low inflation. It’s also quite possible that r* shifts over time, which makes estimating it even harder. Moreover, in a globalized capital market, r* will be affected by global factors, not just factors within the domestic economy. The BIS authors describe the main factors that would influence the natural rate of interest:

The natural rate is commonly thought to be determined by real forces that structurally affect the balance between actual and potential output, or equivalently between saving and investment. Specifically, factors that increase saving or decrease investment lower the natural rate. These include potential growth, demographic trends, inequality, shifts in savers’ and investors’ risk aversion and fiscal policy. Lower potential growth lowers investment by reducing the marginal return on capital and increases saving by lowering expected income. Longer life expectancy raises saving as households need to support a longer retirement. A lower dependency ratio – reflecting a higher share of working age people in the population – increases saving as those in the workforce typically save more than the young and the elderly. Higher inequality raises saving as richer households save a larger share of their income. Higher risk aversion induces higher saving, in particular in safe assets, and at the same time lowers investment. Finally, persistent fiscal deficits reduce aggregate saving. In a globalised world economy, with free capital flows, the same considerations apply but at the global level.

For example, a common set of beliefs about r* in the last decade or so is that there has been a “global savings glut”–and a higher supply of savings will tend to drive down natural interest rates. The global savings glut comes partly from very high saving in countries like China, partly from greater inequality of income and wealth because those with higher incomes and wealth tend to save more, and from other factors as well.

Economists seeking to estimate r* usually build a model of the economy. They set up the model so that it does a reasonably good job of following what happened in the actual economy. Then, when the economy is out of balance, the model can be used to project what the interest rate would be if the economy moves back into balance. (In a very broad sense, this is like looking at a single market that has been shocked by events–like the crop harvest problems in the cacao market that have driven up chocolate prices–and projecting the price to which cacao will return when the shock is over.)
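As a purely illustrative caricature of that filtering idea, and emphatically not one of the modeling approaches the BIS authors actually use, one could smooth the realized real policy rate over a long window so that business cycle swings wash out; the data and function below are hypothetical.

```python
import numpy as np

def crude_rstar_proxy(policy_rate, inflation, window=40):
    """Smooth the ex-post real policy rate with a long moving average,
    on the (very rough) idea that cyclical fluctuations average out.
    This is a caricature of filtering, not a semi-structural, VAR,
    or DSGE estimate of the natural rate."""
    real_rate = np.asarray(policy_rate) - np.asarray(inflation)
    kernel = np.ones(window) / window
    # mode="same" keeps the output aligned with the input; the endpoints are
    # zero-padded and therefore biased, so only interior values are meaningful.
    return np.convolve(real_rate, kernel, mode="same")

# Hypothetical quarterly data, in percent.
rng = np.random.default_rng(0)
inflation = 2 + rng.normal(0, 1, 200)
policy_rate = 3 + rng.normal(0, 1.5, 200)
print(crude_rstar_proxy(policy_rate, inflation)[80:84])
```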

The BIS economists discuss estimates of r* based on several modeling approaches: a “semi-structural” model, a “vector autoregressive” model, a “dynamic stochastic general equilibrium” model, a model that looks at differences between shorter-term and long-term interest rates and how they evolve over time, and plain old surveys of key market participants. I will not try to explain the differences across these models here: suffice it to say that they are built on differing theoretical perspectives. Here’s a set of estimates of r* for the US dollar and for the euro:

As you can see, estimates of the natural rate of interest have declined over time. For the US, estimates before the Great Recession of 2008-09 were in the range of 2-3%. Since then, estimates have often been 1% or lower–with a noticeable upward movement in the last year or so. The estimates for the euro area show a broadly similar movement from higher to lower, but a number of the estimates suggest that the natural interest rate in euro markets was negative for substantial parts of the last few years, which as a guide to policy raises some complications of its own.

For present purposes, the main concern here is whether these estimates of r* can offer practical guidance on whether, say, the Fed should be raising or lowering interest rates. It seems dubious. It’s not just that the range of estimates for the US market is wide, which it is, but also that each of these individual estimates is not precise either, representing instead a range of uncertainty. Moreover, the basic theory of the natural rate of interest r* suggests that it should be independent of monetary policy, because it represents the interest rate for an economy in balance. But is it really just a coincidence that estimates of r* plunged after the Great Recession, when monetary authorities were cutting interest rates? Were central banks cutting interest rates because r* had fallen, or do the estimates of r* from the economic models reflect to some extent that central banks had cut rates? It’s not easy to know.

The BIS authors argue: “The uncertainty surrounding r* suggests that it is a blurry guidepost for assessing the monetary policy stance and hence the tightness of monetary policy, in particular at the current juncture. In this context, it appears advisable to guide policy decisions based more firmly on observed inflation rather than on highly uncertain estimates of the natural rate.”

Productivity Syndrome and the Investment Prescription

Economic productivity is about growing the size of the pie. I sometimes point out that no matter what your goal–spending increases, tax cuts, greater support for the poor, environmental protection–that goal is easier when the economic pie is growing. When the economic pie isn’t growing, after all, then all priorities have to pit potential winners against potential losers in a zero-sum game.

Thus, a global slowdown in productivity is bad news all around. For context and policy advice, the McKinsey Global Institute has published “Investing in productivity growth” (March 24, 2024, by Jan Mischke, Chris Bradley, Marc Canal, Olivia White, Sven Smit, and Denitsa Georgieva). As they point out: “Advanced-economy productivity growth has slowed by about one percentage point since the global financial crisis (GFC).”

In a given year, 1 percent isn’t much, but remember that it’s a cumulative effect. If productivity growth had been 1% higher since, say, the end of the Great Recession in 2009, then over those 15 years the US economy would already be about 15% larger. In 2024, US GDP is $28 trillion, so 15% larger would have meant an additional $4.2 trillion. As the McKinsey folks note: “Today the world needs productivity growth more than ever. It is the only way to raise living standards amid aging, the energy transition, supply chain reconfiguration, and inflated global balance sheets.”
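Spelling out that back-of-the-envelope arithmetic (my own calculation, not McKinsey’s): compounding pushes the cumulative effect slightly above the simple 15 × 1% ≈ 15% figure.

$$1.01^{15} \approx 1.16, \qquad 0.15 \times \$28\ \text{trillion} \approx \$4.2\ \text{trillion}, \qquad 0.16 \times \$28\ \text{trillion} \approx \$4.5\ \text{trillion}.$$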

The report offers some interesting context on global productivity. In this figure, the horizontal axis shows the level of productivity for various countries and regions, so that the lower-productivity areas like China and India are on the left, while the high-productivity areas like North America are on the right. As you can see, there’s a general pattern that lower-income places have the potential to grow at faster rates. In part, this is because lower-income places can take advantage of technologies that are already developed and can sell into higher-income markets. In part, it’s essentially a matter of arithmetic: when you start very small, doubling in size is easier than when you start very large. The “productivity frontier” is really a thought experiment, suggesting that certain areas of the world, like sub-Saharan Africa, Latin America, and Western Europe, may have potential for substantially more rapid productivity growth.

When it comes to China and India, I’m often asked about whether their pattern of growth is about to level off and top out. It might! In China, in particular, the current government seems to have decided that economic growth is less important than other priorities like military power and social control. But there is no law of economics which says that these countries have topped out.

This figure shows some notable historical growth experiences. On the far left, all countries start at a situation where their per capita GDP was roughly $2,800–which the graph sets equal to 100. As you can see, growth in China and India is really just following a path already blazed by South Korea, and before that Japan, as well as Malaysia and Thailand. Given the basic ingredients for productivity growth–the average worker is gaining in education and skills, the average worker has more capital equipment to work with, technology is improving, and there are incentives for firms to improve and innovate–growth in India and China could potentially still have decades to run.

What about productivity in high-income countries, like the United States? The McKinsey report suggests several main reasons why growth has slowed down. Two of the reasons are that factors driving growth in the early 2000s have shifted.

While many drivers affect productivity growth, two stand out for explaining the performance of advanced economies in recent years. First, manufacturing experienced waves of productivity advances fueled by the effects of Moore’s law and a burst of offshoring and restructuring. (Moore’s law, which holds that the number of transistors in a microchip doubles every two years, signals more broadly that computers become more powerful and efficient while coming down in cost.) These waves yielded productivity gains before the GFC [global financial crisis] but petered out over time. The second major factor is a secular decline in investment across multiple sectors … These two trends explain the slump in advanced economies almost entirely. Digitization was much discussed as the main candidate to rev up productivity again, but its impact failed to spread beyond the information and communications technology (ICT) sector.

The main prescription for additional economic growth from the McKinsey analysis is to raise the level of investment: to be clear, this advice is meant to include both investment in actual physical capital as well as investment in “intangible” capital that leads to gains in knowledge, management, and skills. The report notes:

The slump in capital investment slowed productivity growth beyond manufacturing by 0.5 percentage point in the United States, 0.3 point in our Western European sample economies, and 0.2 point in Japan … This decline spanned almost all sectors: in the United States, the only exceptions were mining and agriculture; in Europe, only mining, construction, and finance and insurance generally remained stable, while real estate accelerated.

More specifically, slowing growth in tangible capital (for example, machines, equipment, and buildings) explains almost 90 percent of the drop in the United States and 100 percent in Europe. From 1997 to 2019, gross fixed capital formation in tangibles fell from 22 to 14 percent of gross value added in the United States and from 25 to 17 percent in Europe. Intangible capital growth (for example, R&D and software) was more resilient but could not make up for falling investment in the material world. Gross fixed capital formation in intangibles increased from 12 to 16 percent in the United States and from 10 to 12 percent in Europe. Investment in intangibles is needed to boost corporate performance and labor productivity, but it may face barriers (skills needed to scale up, limited collateralization and recovery value), and the productivity benefits can take longer to materialize.

Economic growth doesn’t happen purely from the invention of technology: instead, it happens when that technology moves into widespread use. There’s a gap between the invention and the application, sometimes called the “valley of death,” because moving from the conceptual idea to the practical application can be so hard. “Investment” is how an economy bridges the gap. The McKinsey writers note: “Post-GFC investment declined sharply and persistently, failing to generate anything to take their place. But today, directed investment in areas such as digitization, automation, and artificial intelligence could fuel new waves of productivity growth.” I’m a little less certain than they are about the directions of future growth: for example, I think genetics and material science may have big roles to play as well. But without a rise in investment, we aren’t even going to know what we’re missing.

If Not Unemployment, How To Measure the Labor Market?

Economic statistics are all useful, and all imperfect. They must be consumed with care. Consider the unemployment rate, a headline indicator of the US labor market. The US unemployment rate has been below 4% since December 2021. As you can see from the graph, which shows the unemployment rate going back to 1948, there was a sustained period back in the early 1950s, and then another in the late 1960s, when the unemployment rate was this low for this long. But for the half-century from 1970 up through 2020, the US economy could only dream of an unemployment rate below 4% for 15 consecutive months.

Does it make sense to interpret this unemployment rate as a sign of a historically wonderful US labor market? Or does it make more sense to think about whether, for one reason or another, the unemployment rate at present isn’t capturing the essence of what’s happening in the US labor market?

In January 2024, the Hutchins Center on Fiscal and Monetary Policy at Brookings brought together 40 labor market economists to talk about this issue and others. Louise Sheiner, David Wessel, and Elijah Asdourian wrote an overview paper describing the discussion in “The U.S. Labor Market Post-Covid: What’s Changed, and What Hasn’t?” The first question they tackle is “What is the best measure of labor market slack?”

As Sheiner, Wessel, and Asdourian write: “Many economists are no longer confident about the adequacy of the unemployment rate as the only important measure. Although unemployment in 2023 was at about the same level as it was in 2019, other measures of slack suggested that the labor market was much tighter.” Specifically, one of these “other measures” is the number of job vacancies (also called the number of job openings) divided by the number of unemployed–sometimes abbreviated as V/U.

As you can see, at the worst of the Great Recession back in 2009, as well as in the pandemic recession, there were about 0.2 job vacancies for each unemployed person. Just before the pandemic recession, there were about 1.2 job vacancies for every unemployed person. Just after the pandemic recession, the number of job vacancies per person spiked as high as 2.0, before dropping back to 1.4.
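For concreteness, the ratio is simply the count of job openings divided by the count of unemployed workers; the magnitudes below are hypothetical, chosen only to match a ratio of about 1.4.

$$V/U = \frac{\text{job vacancies}}{\text{unemployed}}, \qquad \text{e.g.}\ \frac{8.4\ \text{million}}{6.0\ \text{million}} = 1.4.$$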


This figure helps explain why a sub-4% unemployment rate before the pandemic isn’t the same as a sub-4% unemployment rate after the pandemic–that is, there are notably more job vacancies per unemployed person after the pandemic. The spike in the V/U ratio after the pandemic may also help to explain why inflation started rising at about that time, just as the drop in V/U may help to explain the easing of inflationary pressures in the last year or so.

But there are also doubts about just what is being captured by the “job vacancy” measure. It is based on data about job openings posted by employers. But in the age of the internet job search, as it has become cheaper for firms to post job openings, perhaps firms have become more likely to post openings. The pandemic-related shifts in employment, especially for firms now willing to hire remote workers, may have changed the underlying meaning of “job vacancy” statistics as well. The total number of job openings seems to be trending up.

The discussion among the labor economists made similar points:

Other participants argued against relying on V/U, largely because of skepticism about the reliability of the vacancy measure. Julia Coronado of MacroPolicy Perspectives and others pointed out that the recent rise in V/U is hard to separate from the upward trend in vacancies that began around 2008. Erica Groshen, former commissioner of the Bureau of Labor Statistics, said that vacancies are increasing across the board because digital technology makes vacancies much easier to post. “When I applied to colleges, my high school told us, ‘You can apply to five colleges,’” she remarked. “…My kids were told 12 colleges, because it was electronic, and I think the next generation is being told something like 20.” Without accounting for the long-term increase in vacancies, V/U’s detractors argued that the data as is could not inform the ongoing conversation about labor market tightness.

Yet another way to look at the labor market is based on how people leave their jobs. Basically, there are two broad reasons for leaving a job: a voluntary separation called a “quit” (blue line) or an involuntary separation called a “layoff/discharge” (red line). As the figure shows, layoffs rise during recessions, and spiked in the pandemic. However, the number of quits was rising before the pandemic, and after the pandemic spiked to new highs. This is sometimes called the “Great Resignation”–that is, people choosing to quit jobs.

The higher number of quits suggests another pattern for the modern US labor market. Many of us are used to a mental model where workers move to being unemployed, and then back to being employed. But what about people who quit for a new job, and thus don’t go through a spell of unemployment? Or people who leave the labor market for a time and then re-enter, but are not counted as “unemployed” in the meantime? Statistics from the Current Population Survey let you look at flows into jobs, and the statistics suggest: 1) the number of people who move from being employed at one place to employed somewhere else is on the rise (top figure); 2) the number moving from unemployed to employed is about the same (second figure); and 3) the number moving from out-of-the-labor-force to employed has risen a bit.

Some of these patterns blend together. The US economy seems to be exhibiting more people who already have jobs shifting to jobs with other employers. Faced with that situation, a rational response for employers is to post more job openings. In some cases, the firm may not feel it is necessary to hire in the near-term, but they want to have an available pool of applicants if they lose workers, and if the right candidate comes along, they are willing to hire.

Taken together, these statistics suggest that the US economy is indeed performing well in terms of availability of jobs. But it also suggests that a lot of workers are looking for something different, or better, or higher-paying in a way that will help to offset the accumulated inflation of the last few years and the higher interest rates that they are facing for consumer and home loans.

What Should Intro Econ Include?

For many college students, and high school students as well, a single introductory economics course is the only course in the field they are ever going to take. This is not their fault! People are allowed to be interested in subjects other than economics! Perhaps alternative interests should even be encouraged! But for those of us who inhabit econo-land, it raises a real question: If we only get one crack at many students, maybe for a single academic quarter or semester, what content is it most important to teach?

The Journal of Economic Education has just published a six-paper symposium on the topic “What should go into the only economics course students will ever take?”, edited by Avi J. Cohen, Wendy Stock and Scott Wolla.

An essay by Wendy Stock, “Who does (and does not) take introductory economics?”, sets the stage. From the abstract:

Among students who began college in 2012, 74 percent never took economics, up from 62 percent in 2004. Fifteen percent of beginning college students in 2012 took some economics, and 12 percent were one-and-done students. About half of introductory economics students never took another economics class, and only about 2 percent majored in economics. The characteristics of one-and-done and some economics students are generally similar and closer to one another than to students with no economics.

In his paper, Avi Cohen makes the case for a “literacy-targeted” principles of economics course: “The LT approach argues that it is far more valuable for students to learn and be able to apply a few core economic concepts well than to be exposed to a wide range of concepts and techniques that the majority of students are unlikely to use again.”

Apparently, there was an American Economic Association Committee on the teaching of undergraduate economics back in 1950. Cohen writes:

Eighty-five members from 50 educational institutions met between 1944 and 1950 and produced a 230-page special issue of the American Economic Review (AER) in 1950. Two recommendations in the report “Elementary Courses in Economics” (Hewitt et al. 1950, 52–71) were:

The number of objectives and the content of the elementary course should be reduced….[T]he content of the elementary course has expanded beyond all possibility of adequate comprehension and assimilation by a student in one year of three class hours a week (56, italics in original).

Students should receive more training in the use of analytical tools.…[T]he typical course in elementary economics tends to concentrate attention on the elucidation of economic principles, rather than on training the student to make effective use of the principles he has learned. Examination questions test the student’s ability to explain, rather than his ability to use principles (59, italics in original).

This concern, that the intro course tries to cover too much and ends up with the typical student able to do too little, has been a regular critique of intro econ ever since, as Cohen describes with a brisk review of commentary on the intro class since 1950. A comment from George Stigler in 1963 has often been quoted:

The watered-down encyclopedia which constitutes the present course in beginning college economics does not teach the student how to think on economic questions. The brief exposure to each of a vast array of techniques and problems leaves with the student no basic economic logic with which to analyze the economic questions he will face as a citizen. The student will memorize a few facts, diagrams, and policy recommendations, and ten years later will be as untutored in economics as the day he entered the class (657). An introductory-terminal course in economics makes its greatest contribution to the education of students if it concentrates upon a few subjects which are developed in sufficient detail and applied to a sufficient variety of actual economic problems to cause the student to absorb the basic logic of the approach (658, emphasis added).

I’ve taught the intro econ course with some success and been involved in the writing of several principles textbooks, so I’ve watched the evolution of these arguments over the years with interest. Perhaps the fundamental problem, as Cohen describes, is that many econ departments want to have a single principles of economics class, they want that class to count toward the economics major, and they want that class to prepare students for the courses that follow: especially intermediate micro and intermediate macro. Departments have some confidence that the existing principles of economics textbooks and classes more-or-less accomplish this goal. The incentives of departments to adjust the existing courses–and then perhaps also need to adjust the intermediate courses–are low.

Given these realities, any substantial rethinking of the existing intro course is going to face an uphill battle for widespread acceptance. Some of the subjects that could be cut from standard intro courses, Cohen suggests, include cost curves, comparisons of imperfectly competitive industries, formulas for elasticities (beyond, for example, % change in quantity/% change in price), details of national income accounting, and formulas for fiscal and money multipliers (beyond, for example, 1/% leakages from circular flow). Moreover, other papers in the JEE symposium emphasize how different types of pedagogy, generally aimed at getting away from the exclusive use of classroom lectures, multiple choice exams, and a heavy emphasis on graphs, can help the intro course evolve. I have no strong objections to this approach, but at the end of the day, I think its ultimate destination is a better-taught version of the existing course.

Over the years, my own thoughts along these lines have been running more toward an intro course that dramatically de-emphasizes the textbook, but does not eliminate it, because a textbook is a useful tool for basic terminology and graphs: opportunity cost, supply and demand, perfect and imperfect competition, externalities and public goods, fiscal and monetary policy, comparative advantage, trade balances, and others.

But when it comes to examples, it seems peculiar and anachronistic to me to rely overmuch on textbooks in the internet age. An intro course needs to provide conceptual guidance and curate examples, of course. But the web is full of real-world examples of economic reasoning and data: indeed, many of the links at this website go to such articles. If the goal is economic literacy and functionality for students, pointing introductory students at, say, the websites of the Bureau of Labor Statistics and the Bureau of Economic Analysis, the Congressional Budget Office, the Energy Information Administration, the Social Security Administration, the World Development Indicators, and others seems to me a useful starting point. It seems to me quite possible to develop a set of exercises and readings where students could even choose among different questions and exercises–and discuss what they found with each other.

In short, pick a slightly shorter list of concepts and tools that you want introductory students to have, with a textbook to explain them, but for examples and illustrations, give the students both questions to answer and a list of web addresses.

Of course, jumping straight into real-world events, without underlying disciplinary structure, isn’t a fair intro to the subject. But focusing only on disciplinary structure, and treating the intro course as just a prelude to the rest of the economics major, isn’t going to be productive for the half of intro econ students who won’t ever take another economics course, and isn’t going to be attractive for the roughly three-quarters of college students who never take an intro course. When I ask people who had that single long-ago intro econ course what they remember today, they often shrug at me, grin ruefully, and say something about “there were a whole bunch of graphs.” We can do better.