Foreign Exchange Markets: $7.5 Trillion Per Day

Once every three years, the Bank for International Settlements carries out a survey on the dimensions of markets for foreign exchange, as well as for various financial derivatives products. The December 2022 issue of the BIS Quarterly Review includes five articles discussing results from the latest survey. Here, I’ll focus on “The global foreign exchange market in a volatile time,” by Mathias Drehmann and Vladyslav Sushko.

The headline finding of the most recent survey is that turnover in foreign exchange markets has reached $7.5 trillion per day–on an annual basis, about 60 times the level of world GDP. Clearly, most of this market has nothing to do with trading foreign exchange to finance imports and exports. Instead, it’s about hedging and arbitrage in financial markets.

In this figure, the total height of the bars in the left-hand panel shows the rise in the FX market over time. The shading within the bars shows the types of trades: spot market; FX swaps (in which two parties who are each receiving a future payment in different currencies agree to “swap” these payments and receive the other currency instead); forward contracts (which can be thought of as customized contracts traded over-the-counter, unlike futures contracts, which are standardized and traded on an exchange); options; and currency swaps (similar to an FX swap, except that what is swapped is not just two payments in different currencies, but also a stream of interest payments over time). The right-hand panel shows that the share of this market accounted for by spot markets has been declining over time, while the share in FX swaps and forward contracts has been rising.
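To make the mechanics of the forward leg a bit more concrete: forwards and the far legs of FX swaps are typically priced off covered interest parity, which ties the forward rate to the spot rate and the two currencies’ interest rates. The sketch below uses invented rates and an invented spot price purely for illustration; it is not market data.

```python
# Illustrative covered-interest-parity pricing of a forward (or the far leg
# of an FX swap). All numbers here are hypothetical.

def forward_rate(spot: float, r_domestic: float, r_foreign: float, years: float = 1.0) -> float:
    """Forward price of one unit of foreign currency, in domestic currency.

    Covered interest parity: F = S * (1 + r_domestic)^t / (1 + r_foreign)^t
    """
    return spot * (1 + r_domestic) ** years / (1 + r_foreign) ** years

# Suppose 1 euro costs 1.05 dollars spot, dollar rates are 4%, euro rates 2%.
spot_usd_per_eur = 1.05
f = forward_rate(spot_usd_per_eur, r_domestic=0.04, r_foreign=0.02)
print(f"1-year forward: {f:.4f} USD/EUR")  # above spot: the euro trades at a forward premium
```

If the parity relationship failed, a trader could borrow in one currency, lend in the other, and lock in a riskless profit with a forward–which is why arbitrage, not trade finance, drives so much of this turnover.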

The survey results also emphasize the central importance of the US dollar in foreign exchange markets. Every foreign exchange transaction involves two different currencies: in almost 90% of all the trades in this market, the US dollar is on one side of the transaction. In practice, this means that if someone wants to trade two non-US-dollar currencies A and B, what actually happens behind the scenes is that A is first turned into US dollars, and then US dollars are turned into B. There has been an ongoing debate in recent years over whether the US dollar is losing, or might lose, its status as the global “reserve currency,” but this evidence suggests that the US dollar continues to play an extremely central role. However, one shift in the role of the US dollar, which I have noted here in the past and which is discussed in these survey results, is that when central banks around the world hold reserves of foreign exchange, the US dollar plays a smaller role than in the past.

The raw size of this market suggests a certain degree of concern. What happens, for example, if one party to a currency swap has agreed to make a payment in, say, US dollars in the future, but when the time comes to make that payment, the party is unable to do so? Two of the papers in this issue look at potential flashpoints. For example, Marc Glowka and Thomas Nilsson write in “FX settlement risk: an unsettled issue”:

FX settlement risk, the risk that one party to a trade of currencies fails to deliver the currency owed, can result in significant losses for market participants, sometimes with systemic consequences. The failure of Bankhaus Herstatt in 1974, the best known example, eroded confidence in interbank relations and caused a freeze in money market lending (Galati (2002)). Recent examples include KfW Bankengruppe’s €300 million loss when Lehman Brothers collapsed in 2008 (Hughes (2009)), and Barclays’ $130 million loss to a small currency exchange in March 2020 (Parsons (2021)). Almost 50 years after the Herstatt bankruptcy, nearly a third of deliverable FX turnover remains subject to settlement risk, according to new data from the 2022 BIS Triennial Survey.

In a related spirit, Claudio Borio, Robert N McCauley and Patrick McGuire look at that issue in “Dollar debt in FX swaps and forwards: huge, missing and growing.” They write:

FX swap markets are vulnerable to funding squeezes. This was evident during the Great Financial Crisis (GFC) and again in March 2020 when the Covid-19 pandemic wrought havoc. For all the differences between 2008 and 2020, swaps emerged in both episodes as flash points, with dollar borrowers forced to pay high rates if they could borrow at all. To restore market functioning, central bank swap lines funnelled dollars to non-US banks offshore, which on-lent to those scrambling for dollars. This off-balance sheet dollar debt poses particular policy challenges because standard debt statistics miss it. The lack of direct information makes it harder for policymakers to anticipate the scale and geography of dollar rollover needs. Thus, in times of crisis, policies to restore the smooth flow of short-term dollars in the financial system (eg central bank swap lines) are set in a fog.

At least so far, the main “answer” when these problems arise has been for central banks to carry out their own currency swaps, to make a scarce currency more available. This approach has proven workable, but as these gigantic markets grow in size and come under stresses beyond those currently imagined (a combination of negative economic events and a cybercrime attack, perhaps?), some additional advance consideration of fail-safe mechanisms seems worthwhile.

Three Patterns in World Trade Since the Pandemic

The World Trade Organization has released its World Trade Statistical Review 2022, which includes an overview of world trade patterns in 2021 and the first half of 2022. In the big-picture sense, it’s no surprise that global trade slumped hard in the pandemic year of 2020 and then rebounded briskly in early 2021. Here, I wanted to pass along three of the figures from later in the report that had something interesting to say about shifts in the underlying patterns of world trade.

This first figure shows trade in “digitally delivered services (mode 1),” which refers to cross-border trade in “insurance and pension services, financial services, charges for the use of intellectual property n.i.e., telecommunications, computer and information services, other business services, and personal, cultural and recreational services.” Notice that from 2005 up through about 2012, digitally delivered trade was growing at about the same rate as trade in goods. But since then, trade in goods has increased only modestly, while trade in digitally delivered services has nearly doubled. The gap seems to have widened in 2021, as the initial force of the pandemic eased.

Just to be clear, this graph does not show the levels of trade in digitally delivered services and in goods. The level for each category is set at an index number of 100 for the year 2005, so the graph only lets you compare rates of growth since then. For a perspective on levels, global exports of digitally delivered services were $3.7 trillion in 2021, while merchandise trade was roughly six times as large, at $21.7 trillion. But the trendlines strongly suggest that digitally delivered services will be a growth area for international trade in the future, while trade in goods may not be.

A second figure shows the quantity of container shipping from January 2015 through August 2022. The drop in container shipping around April 2020 captures one aspect of the supply chain shocks that were happening at that time. Measured by the adjusted (orange) line, the quantity of containers shipped has reached an all-time high. But as the WTO notes: “Shipping indices showed global container throughput at an all-time high in September 2022 but not much higher than in 2021, suggesting weak trade growth in 2022.”

Finally, the third figure shows the number of international commercial flights from January 2020 through August 2022. You can see the 80% fall in such flights in the pandemic, and the gradual recovery since then. The WTO describes the patterns this way: “International commercial flights are classified as transport services and are closely associated with travel expenditure by international tourists. Both cargo and passenger flights also carry significant quantities of goods, so they are linked to both merchandise and commercial services trade. Daily commercial flights, including flights within the European Union, finally exceeded their level at the start of 2020 during the summer of 2022. However, flights excluding those within the EU remained below their prepandemic level in August 2022. Rising fuel costs and increased economic uncertainty are expected to weigh on commercial flights in the remaining months of 2022 and in 2023.”

When Lower-Income Countries Face Rising Global Interest Rates

Imagine that you are a policymaker in a low- or middle-income country, and interest rates go up in high-income nations like the United States. When firms and the government in your country borrow, they have often done so in US dollars–that is, US dollars were borrowed, then converted to home currency and spent in the domestic economy, but repayment must happen after converting home currency back into US dollars. The reason for this practice is that there hasn’t been much demand in international capital markets to own debt denominated in your home currency.

But when global interest rates rise, this creates a double-whammy for the debt of the low- or middle-income country. Higher interest rates for US dollars mean that the dollar tends to appreciate on foreign exchange markets, which is indeed happening. As a result, when it comes time for a low- or middle-income country to convert its own currency back into US dollars to repay its debts, it’s more costly to do so. In addition, higher interest rates from the US Federal Reserve often pressure central banks around the world to raise interest rates as well.
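The first half of the double whammy is simple arithmetic: a dollar debt costs more in home currency after a depreciation. The exchange rates below are invented round numbers, chosen only to show the mechanics.

```python
# Home-currency cost of repaying a fixed dollar debt, before and after a
# depreciation of the home currency. All figures are hypothetical.

def repayment_in_home_currency(debt_usd: float, usd_per_home_unit: float) -> float:
    """Home-currency amount needed to buy the dollars owed."""
    return debt_usd / usd_per_home_unit

debt = 100.0  # owe $100 million

before = repayment_in_home_currency(debt, usd_per_home_unit=0.050)  # 20 home units per dollar
after  = repayment_in_home_currency(debt, usd_per_home_unit=0.040)  # depreciation: 25 per dollar

print(before, after)  # 2000.0 vs 2500.0
```

A 20% fall in the home currency’s dollar value raises the home-currency repayment bill by 25%, before any effect of the higher interest rates themselves.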

What is to be done? Gita Gopinath offers a framework for thinking about these questions in “Managing a Turn in the Global Financial Cycle,” delivered as the 2022 Martin S. Feldstein Lecture at the National Bureau of Economic Research (NBER Reporter, September 2022).

A key policy question therefore is how emerging and developing economies should respond to this tightening cycle that is driven to an important degree by rising US monetary policy rates. The textbook answer would be to let the exchange rate be the shock absorber. An increase in foreign interest rates lowers domestic consumption. By letting the exchange rate depreciate, and therefore raising the relative price of imports to domestic goods, a country can shift consumption toward domestic goods, raise exports in some cases, and help preserve employment.

However, many emerging and developing economies find this solution of relying exclusively on exchange rate flexibility unsatisfying. This is because rising foreign interest rates come along with other troubles. They can trigger so-called “taper tantrums” and sudden stops in capital flows to their economies. In addition, the expansionary effects of exchange rate depreciations on exports in the short run are modest, consistent with their exports being invoiced in relatively stable dollar prices. …

Consequently, several emerging and developing economies have in practice used a combination of conventional and unconventional policy instruments to deal with turns in the global financial cycle. Unlike the textbook prescription, they not only adjust monetary policy rates but also rely on foreign exchange intervention (FXI) to limit exchange rate fluctuations, capital controls to regulate cross-border capital flows, and domestic macroprudential policies to regulate domestic financial flows. This common practice, however, lacks a welfare-theoretic framework to guide the optimal joint use of these tools. This shortcoming limited the policy advice the IMF could give to several of its members. Accordingly, to enhance IMF advice, David Lipton, the former first deputy managing director of the fund, championed the need to develop an Integrated Policy Framework that jointly examines the optimal use of conventional and unconventional instruments.

In the longer run, many of these issues could be ameliorated if at least some low- and middle-income countries around the world developed a greater ability to borrow in their own home currency, a pattern which does seem to be slowly emerging. But in the immediate present, developing economies must be concerned that if their exchange rates move too much, it could become difficult or impossible to repay existing loans. Thus, many countries are resorting to policies that would manage exchange rate fluctuations and limit international capital flows.

For long-time watchers of international macroeconomics, what’s perhaps most interesting here is not just that these alternative policy tools are being used, but that they are being used with the blessing of mainstream organizations like the International Monetary Fund. Gopinath was chief economist at the IMF from 2019 to 2022, and is now First Deputy Managing Director of the IMF. Up to about 2005, the official position of the IMF was basically to avoid or phase out international capital controls wherever possible. But that conventional wisdom had already shifted substantially a decade ago, and under the Integrated Policy Framework that Gopinath mentions, discussed in some detail in this 2020 white paper, the costs and tradeoffs of interventionist macroeconomic policies are to be weighed and balanced in the context of imperfect and potentially overreacting global financial markets.

Cities and Pandemics

Economists view the existence of urban areas as a balancing act between economies and diseconomies of agglomeration. The benefits (“economies”) of agglomeration include both aspects of production, like the gains from having workers, firms, and suppliers geographically close together, and also aspects of consumption like the clustering of entertainment activities (like restaurants, theater, sports, fireworks, and so on). The diseconomies of agglomeration include congestion, crime, and aspects of health including pandemics.

David M. Cutler and Edward Glaeser remind us that this tradeoff is not a new one in “Cities After the Pandemic” (Finance & Development, December 2022). They write:

[T]here are downsides to density; contagious disease is the most terrible of these. Humans have millennia of experience with urban epidemics. The first well-documented urban plague struck Athens in 430 BCE. It helped Sparta defeat Athens in the Peloponnesian War and brought an end to Athens’ golden age. … The Plague of Justinian, which hit Constantinople in 541 CE, may have done even more harm. It helped plunge Europe into centuries of darkness, widespread poverty, and political chaos. … The beginnings of globalization in the 19th century hastened the spread of diseases like yellow fever and cholera. Each killed a vastly higher share of the population than COVID-19.

But as they point out, the attractions of urban areas are such that they have continued to expand for several centuries now, especially when public investments in clean water and other infrastructure pushed back against the risks of disease. They write: “Yet despite the deaths, cities continued to attract migrants by the millions. Rural life was difficult and not rewarding economically. The very poor will do most anything to escape poverty, which explains why COVID-19 will likely do little to deter urbanization in poor countries.”

Cutler and Glaeser readily admit that the pandemic poses a challenge for the existing structures and interconnections of urban areas, and that sizeable adjustments are likely to occur. But they also suggest four main reasons that “cities as a whole—in both rich and poor countries—will survive and even thrive.”

First, the hypothesis that technology will make face-to-face contact obsolete is old and has been discredited many times. The late journalist Alvin Toffler predicted empty offices in 1980, but for most of the past 40 years, the problem has been too few offices, not too many. Technological change does more than just enable long-distance communication. It radically increases the returns to learning, which is fostered by being around other people.

For an overview of the research literature on how proximity increases productivity, see the three-paper symposium on “Productivity Advantages of Cities” in the Summer 2020 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

Cutler and Glaeser argue that a common pattern during the “work-from-home” period of the pandemic was that while many organizations managed to keep existing processes moving forward, ongoing problems arose around issues like generating new ideas or processes and spreading them through an organization, or making promotion decisions. Here are their other three reasons:

Second, cities thrive as places of consumption as well as production. Urban agglomeration produces better restaurants as well as better accountants. Cities allow people to share the fixed costs of museums or concert venues. Between the 1970s and the 2000s, urban prices went up much faster than urban wages, which is compatible with the view that people increasingly wanted to be in cities for the amenities they provide. While some older people have decided never to return to in-person office work, plenty of younger people have shown enormous hunger to get back to face-to-face social interactions; a job can be a source of enjoyment as well as income.

Third, prices will adjust to ensure that offices don’t remain permanently empty, at least in cities where there is reasonable demand for office space. Before the pandemic, commercial real estate was in very short supply in cities like New York, San Francisco, and London, and many smaller, newer, or less profitable businesses were priced out of these markets. Landlords with unoccupied offices will cut rents and eventually find firms eager for that space. Of course, in some lower-end markets, which were near the edge of survival before COVID, demand may fall to the point where landlords prefer to walk away from their buildings rather than rent them out at bargain-basement prices. They can be turned into housing or, worse, left empty.

Fourth, much of the world remains poor, and for the poor, the economic appeal of urbanization easily overwhelms fears of health costs. Google mobility data show that workplace visits are substantially higher now than they were before the pandemic in cities such as São Paulo, Brazil, and Lagos, Nigeria. Moreover, skilled workers in poorer cities will actually benefit because videoconferencing makes it easier to connect to the wealthy world. The slowdown in business travel may, however, reduce foreign direct investment in developing-world cities. Before the pandemic, air links between cities were significant predictors of financial ties (Campante and Yanagizawa-Drott 2018).

More broadly, Cutler and Glaeser argue that the world is engaged in a neglectful and apathetic sort of science experiment, in which we wait for the next global pandemic to hit rather than think about preventing it in advance.

In the past six decades, the bulk of “spillover events”—health-related events that spread disease beyond a country’s borders—have originated in some of the poorest parts of the planet. In regions plagued by poverty, people often have more contact with disease-carrying wildlife, vectors such as mosquitos survive longer, and sanitation is more limited. Consequently, the world seems to be engaging in a deadly science experiment in which it is waiting to see what new plague will emerge from the relatively unmonitored and under-resourced regions and spread globally.

What can be done to reduce the risk of another pandemic? … A natural path forward is for the rich world to engage in a massive health exchange with the poor world. In exchange for significant aid for public health infrastructure, recipient countries would agree to measures that keep humans away from animal carriers of disease, better monitor new illnesses, and commit to rapid response and containment. Fortunately, the world and its cities seem to have survived COVID-19 largely intact. We may not be so lucky next time. The result of complacency in 2020 was millions of deaths and enormous economic disruption. The world must heed this warning and invest in the entire world’s hygiene or risk being hit by a pandemic that is even worse. 

Practicalities of a Regulatory Budget

A “regulatory budget” begins with the notion that, just as governments write down their planned and actual taxes and spending, they should also write down the costs that are imposed by regulation. To take it a step further, one can imagine the government setting an overall “budget” for the costs of regulation that might be imposed in a given year, and then requiring that regulatory agencies operate within that budget.

In general, developing a good sense of the costs of regulation–and comparing costs to benefits–seems a worthwhile program. But the parallel from taxes and spending to regulation is obviously not exact. The costs and benefits of regulation are both estimated with considerably less precision than, say, the taxes and spending involved in the Social Security system. The costs of implementing regulations are often not as simple as writing a check for a certain new item of pollution-control equipment or financial management software. Instead, producers will seek out ways to change how they operate, sometimes subtly, to reduce the costs of complying with regulations. Some benefits of regulation, like improved worker productivity or reduced health care costs, can be converted to monetary terms relatively easily, but gains in human health or environmental protection may be harder to put in monetary terms.

The ultimate costs and benefits of a series of regulations–that is, a comparison to what costs and benefits would have looked like if the series of regulations had never been implemented–is often unclear, especially five or 10 or 20 years down the road. And at a basic level, if a given regulation seems likely to have benefits greatly in excess of costs, do you really want to postpone the regulation because the current costs exceed some “regulatory budget”?

Where do these practical difficulties leave the issue of “regulatory budgeting”? The Harvard Journal of Law & Public Policy offers “A Symposium on Regulatory Budgeting” in its online-only “Per Curiam” Summer 2022 issue.

The Trump administration set a regulatory budget in 2017, thus making the idea anathema to many of the non-Trumpists. But as the symposium reminds us, the idea of a regulatory budget has deep nonpartisan roots. James Broughel writes in “The Regulatory Budget in Theory and Practice: Lessons from the U.S. States” (footnotes omitted):

Before the Trump administration’s actual implementation of a regulatory budget, interest in regulatory budgeting likely peaked in the United States in the late 1970s and early 1980s. Robert Crandall of the Brookings Institution has been credited as “probably the first proponent” of a regulatory budget. Democratic Senator Lloyd Bentsen introduced the Federal Regulatory Budget Act of 1978, which would have created a role for Congress in setting regulatory cost allocations for agencies, akin to the role it plays in making fiscal appropriations. At that time, there was considerable support for a regulatory budget throughout the U.S. federal government. President Jimmy Carter’s 1980 Economic Report of the President references a regulatory budget as a potential means of improving priority setting. The Joint Economic Committee of Congress issued a subsequent report endorsing a regulatory budget. Thereafter, OMB circulated a draft Regulatory Cost Accounting Act in 1980. Later, in 1992, John Morrall III, an OMB official, wrote a report for the Organisation for Economic Co-operation and Development endorsing a regulatory budget. These early proponents of regulatory budgets were noticeably bipartisan.

I took a closer look at Trump’s regulatory budget proposal a few years back when it was enacted. One component that got a lot of publicity was the requirement that two regulations be eliminated for each new regulation introduced. This requirement was probably of more symbolic than practical significance, given the possibilities for eliminating small-scale, outdated regulations, or for “eliminating” two regulations while replacing them with what would be counted as a single new regulation. The more interesting component was, as Broughel writes: “The major requirement was that each new dollar of regulatory cost was to be offset by the elimination of one existing dollar of regulatory cost.” In effect, regulatory agencies were asked to identify regulations where a given level of costs provided low or negative benefits, and to replace them with regulations where that same level of costs provided higher benefits.

This approach makes the most sense if you believe that many regulatory agencies are predisposed to add new regulations, rather than reconsider the effects of older regulations. The “regulatory budget” idea is intended to create some pushback, by making it necessary for regulatory agencies to continually reconsider their past regulations–especially those that may have worked less well or become outdated. Indeed, those who worked on these issues for the Trump administration (like Broughel) argue that they succeeded in putting a cap on regulatory costs during the Trump presidency.

While I am skeptical of the bigger claims made for a regulatory budget (for example, it’s the equivalent of saving thousands of dollars for every family every year, and in addition will unleash a surge of productivity growth), I do think that some form of pushback against the expansionary bias of regulators makes sense. In the symposium, Andrea Renda discusses the spread of regulatory budgeting around the world in “Regulatory Budgeting: Inhibiting or Promoting Better Policies?” She writes (footnotes omitted):

Over the past two decades, several governments have introduced tools to incentivize regulators to become more aware of the costs they impose on businesses and citizens when they propose new rules. In some European countries, such as the Netherlands and Germany, this cost-focused approach has taken priority over more comprehensive better regulation strategies such as the use of ex ante regulatory impact analysis (RIA), or comprehensive retrospective reviews of the costs and benefits of individual regulations. In a dozen European Union Member States, plus Canada, Korea, Mexico and the United States, governments of various political orientations have introduced forms of regulatory budgeting, which require administrations to identify, every time they introduce new regulation entailing significant regulatory costs, provisions to be repealed or revised, so that the net impact on overall regulatory costs is (at least) offset. These rules are generically referred to as “One-In-X-Out” (OIXO). … In their most common form of “One-In-One-Out” (OIOO), these rules amount to a commitment not to increase the estimated level of burdens over the chosen timeframe. The OECD refers to these commitments as “regulatory offsetting.”

Depending on the circumstances, the OIXO rule may explicitly refer to the number of regulations, and thus require that for every regulation introduced, one or more existing regulations are eliminated; or to the corresponding volume of regulatory costs, and hence require that when a new regulation is introduced, one or more regulations are modified or repealed, such that the overall change in regulatory costs is zero or negative. Most countries adopted the latter version, based on cost offsetting rather than on avoiding increases in the number of regulatory provisions. …

There are at least twenty countries in the world that have adopted an OIXO rule. These include ten EU member states (Austria, Finland, France, Germany, Hungary, Italy, Latvia, Lithuania, Spain and Sweden) as well as Canada, Mexico and Korea. In the past, three countries have had a similar rule in place (Denmark, the UK, and the United States), but later decided to gradually phase it out … . Four other countries were reportedly introducing similar regulatory budgeting systems in 2020: Poland, Romania, Slovakia, Slovenia.
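The cost-offsetting version of these rules boils down to a simple accounting check: a new regulation is admissible only if the repeals or revisions paired with it offset its estimated cost. The sketch below is a stylized assumption about how such a gate might be expressed, not any country’s actual procedure, and the cost figures are invented.

```python
# Stylized check of a cost-offsetting "one-in-one-out" (OIOO) rule:
# the net change in estimated regulatory cost must be zero or negative.

def oioo_admissible(new_cost: float, repealed_costs: list[float]) -> bool:
    """True if repeals (at least) offset the new regulation's estimated cost."""
    net_change = new_cost - sum(repealed_costs)
    return net_change <= 0

# Hypothetical costs, in some common monetized unit:
print(oioo_admissible(50.0, [30.0, 25.0]))  # True: net change is -5
print(oioo_admissible(50.0, [30.0]))        # False: net change is +20
```

The count-based variant (“for every rule in, X rules out”) would compare numbers of provisions instead of costs; as Renda notes, most countries adopted the cost-offsetting version.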

Renda argues that many of these countries, of varying political persuasions, believe that the OIXO formulation has been a genuine success. She emphasizes, via a list of 10 lessons, that a regulatory budget is only one part of an overall approach to thinking about regulatory reform. One of her 10 lessons seemed worth repeating here:

Lesson 5. If carefully designed, regulatory budgeting rules are not incompatible with an ambitious policy agenda. Some countries have introduced OIXO rules and burden reduction targets in the context of a deregulatory effort. But the fact that these rules have been used in the context of a deregulatory attempt does not mean that they are, per se, incompatible with a more far-reaching and proactive approach to deregulation. In Germany, for example, the OIOO rule was adopted in a context in which ambitious programs such as Energiewende are in place, and a systematic scrutiny of the impact of new legislation on sustainable development is carried out. In France, the government uses the OI2O rule but at the same time adopts ambitious proposals in terms of social and environmental benefits. In short, there is no incompatibility per se between the adoption of a cost reduction or regulatory budgeting system and an ambitious regulatory and policy agenda in the social and environmental …

Clearly, some supporters of an activist and aggressive regulatory policy, in countries around the world, are recognizing that the best way to build public support for such policies is not by passing a lot of rules, but by reassuring the public that rules are being considered and reconsidered with appropriate care.

Warren Buffett on the Ovarian Lottery

Warren Buffett used to talk from time to time about the implications of what he called “the ovarian lottery”–the random accident of why you were born into one time, place, and identity rather than another.

It’s not a new idea. Some of you will know it as a version of what the philosopher John Rawls was talking about in his 1971 book A Theory of Justice, when he discussed how justice requires making decisions behind a “veil of ignorance.” Going back further, it’s a version of what Adam Smith was talking about in 1759 in his first classic book, The Theory of Moral Sentiments, when he discusses morality as evaluated by an “impartial spectator”–a hypothetical someone who is not personally involved with the specific situation under discussion. There are of course differences in how the idea is used in each case, but the overall notion is that to make a fair or moral evaluation, you need to remove yourself personally from the situation, so that you can instead imagine what would be fair or ethical if you did not know what role you might end up playing.

Buffett’s telling is vivid in its own way, and focuses on his feelings of gratitude and thankfulness. Here, I’m quoting from the transcript of a question-and-answer session Warren Buffett had with a group of business school students at the University of Florida in 1998:

I have been extraordinarily lucky. I mean, I use this example and I will take a minute or two because I think it is worth thinking about a little bit. Let’s just assume it was 24 hours before you were born and a genie came to you and he said, “Herb, you look very promising and I have a big problem. I got to design the world in which you are going to live in. I have decided it is too tough; you design it. … You say, “I can design anything? There must be a catch?” The genie says there is a catch. You don’t know if you are going to be born black or white, rich or poor, male or female, infirm or able-bodied, bright or retarded. All you know is you are going to take one ball out of a barrel with 5.8 billion (balls). You are going to participate in the ovarian lottery. And that is going to be the most important thing in your life, because that is going to control whether you are born here or in Afghanistan or whether you are born with an IQ of 130 or an IQ of 70. It is going to determine a whole lot. What type of world are you going to design?

I think it is a good way to look at social questions, because not knowing which ball you are going to get, you are going to want to design a system that is going to provide lots of goods and services because you want people on balance to live well. And you want it to produce more and more so your kids live better than you do and your grandchildren live better than their parents. But you also want a system that does produce lots of goods and services that does not leave behind a person who accidentally got the wrong ball and is not well wired for this particular system.

I am ideally wired for the system I fell into here. I came out and got into something that enables me to allocate capital. Nothing so wonderful about that. If all of us were stranded on a desert island somewhere and we were never going to get off of it, the most valuable person there would be the one who could raise the most rice over time. I can say, “I can allocate capital!” You wouldn’t be very excited about that. So I have been born in the right place. [Bill] Gates says that if I had been born three million years ago, I would have been some animal’s lunch. He says, “You can’t run very fast, you can’t climb trees, you can’t do anything.” You would just be chewed up the first day. You are lucky; you were born today. And I am.

The question getting back, here is this barrel with 6.5 billion balls, everybody in the world, if you could put your ball back, and they took out at random a 100 balls and you had to pick one of those, would you put your ball back in? Now those 100 balls you are going to get out, roughly 5 of them will be American, 95/5. So if you want to be in this country, you will only have 5 balls, half of them will be women and half men–I will let you decide how you will vote on that one. Half of them will be below average in intelligence and half above average in intelligence. Do you want to put your ball in there? Most of you will not want to put your ball back to get 100. So what you are saying is: I am in the luckiest one percent of the world right now sitting in this room–the top one percent of the world.

Well, that is the way I feel. I am lucky to be born where I was because it was 50 to 1 in the United States when I was born. I have been lucky with parents, lucky with all kinds of things and lucky to be wired in a way that in a market economy, pays off like crazy for me. It doesn’t pay off as well for someone who is absolutely as good a citizen as I am (by) leading Boy Scout troops, teaching Sunday School or whatever, raising fine families, but just doesn’t happen to be wired in the same way that I am. So I have been extremely lucky.

I suppose it’s easy to feel “extremely lucky” if your name is near the top of the list of the world’s richest people! But I too have been lucky in parents, lucky in marriage, lucky in good health, lucky in job, lucky to live in a time and place when being near-sighted and bookish could lead to upper middle-class economic security, lucky to live in a time and place with considerable freedom and not wrecked by war. My list of “meaningful life advice” is pretty short. But for me, “count your blessings” is high on the list.

An Economist Chews over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there’s anything wrong with that. [This is an updated, amended, rearranged, and cobbled-together version of a post that was first published on Thanksgiving Day 2011.]

Maybe the biggest news about Thanksgiving dinner this year is the rise in the cost of the traditional meal. For the economy as a whole, the starting point for measuring inflation is to define a relevant “basket” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical US household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner rose 20% from 2021 to 2022. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The lower line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been an OK measure of the overall inflation rate over long periods of time, but you can see the distinct rise in the real price of Thanksgiving dinner since 2020.
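The inflation adjustment behind the lower line is a standard deflation by a general price index: the real price is the nominal price scaled by the ratio of the base-year index to the current-year index. Here is a minimal sketch of that arithmetic in Python, using hypothetical index values rather than the Farm Bureau’s or BLS’s actual figures:

```python
def real_price(nominal_price, cpi_current, cpi_base):
    """Deflate a nominal price into base-year dollars using a price index."""
    return nominal_price * (cpi_base / cpi_current)

# Hypothetical illustration: a dinner basket costing $64.05 in a year when
# the CPI stands at 298, expressed in dollars of a base year when it was 233.
print(round(real_price(64.05, 298.0, 233.0), 2))  # → 50.08
```

If the real price stays roughly flat over time, as in the lower line of the graph, the basket’s price is rising at about the same rate as the overall price level.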

At least part of the reason for the overall rise in the price of Thanksgiving is a supply shock affecting the turkey industry: a surge of Highly Pathogenic Avian Influenza (HPAI). Margaret Cornelius and Grace Grossen of the US Department of Agriculture offer a short overview in the “Livestock, Dairy, and Poultry Outlook: November 2022.” They write: “This year, the turkey industry has faced a particular challenge in supplying Thanksgiving dinner due to an outbreak of Highly Pathogenic Avian Influenza (HPAI), in addition to challenges common to all food industries this year—increased costs of production, a tight labor supply, and transportation constraints.”

The outbreak of HPAI in 2022 has led to a loss of about 8 million turkeys from US production this year: for comparison, this is about 4% of the total number of turkeys “slaughtered” (the USDA term) in 2021. Moreover, turkey farmers have an incentive to slaughter turkeys earlier than usual, to protect against the risk that the turkeys might become infected with HPAI, so the average weight of turkeys has also declined in 2022.

Of course, for economists the price is only the beginning of the discussion of the turkey industry supply chain. This is just one small illustration of the old wisdom that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. The last time the U.S. Department of Agriculture did a detailed “Overview of the U.S. Turkey Industry” appears to be back in 2007, although an update was published in April 2014. Some themes about the turkey market waddle out from those reports on both the demand and supply sides.

On the demand side, the quantity of turkey consumed per person rose dramatically from the mid-1970s up to about 1990, then declined somewhat, and appears to have flattened out. The figure below was taken from the website run by the National Turkey Federation a couple of years ago.

The USDA reports that overall US consumption of turkey has been falling in recent years, from 5.38 billion pounds in 2016 to 5.1 billion pounds in 2021.

On the supply side, turkey companies are what economists call “vertically integrated,” which means that they either carry out all the steps of production directly, or control these steps with contractual agreements. Over time, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

U.S. agriculture is full of examples of remarkable increases in yields over periods of a few decades, but such examples always drop my jaw. I tend to think of a “turkey” as a product that doesn’t have a lot of opportunity for technological development, but clearly I’m wrong. Here’s a graph showing the rise in size of turkeys over time from the 2007 report.

A more recent update from a news article shows this trend has continued. Indeed, most commercial turkeys are now bred through artificial insemination, because the males are too heavy to do otherwise.

The production of turkey is not a very concentrated industry, with three relatively large producers (Butterball, Jennie-O, and Cargill Turkey & Cooked Meats) and then more than a dozen mid-sized producers. Given this reasonably competitive environment, it’s interesting to note that the price markups for turkey–that is, the margin between the wholesale and the retail price–have in the past tended to decline around Thanksgiving, which obviously helps to keep the price lower for consumers. However, this pattern may be weakening over time, as margins have been higher in the last couple of Thanksgivings. Kim Ha of the US Department of Agriculture spells this out in the “Livestock, Dairy, and Poultry Outlook” report of November 2018. The vertical lines in the figure show Thanksgiving. She writes: “In the past, Thanksgiving holiday season retail turkey prices were commonly near annual low points, while wholesale prices rose. … The data indicate that the past Thanksgiving season relationship between retail and wholesale turkey prices may be lessening.”

If this post whets your appetite for additional discussion, here’s a post on the processed pumpkin industry and another on some economics of mushroom production. Good times! Anyway, Thanksgiving is my favorite holiday. Good food, good company, no presents–and all these good topics for conversation. What’s not to like?

Thanksgiving Origins

Thanksgiving is a day for a traditional menu, and part of my holiday is to reprint this annual column on the origins of the day.

The first presidential proclamation of Thanksgiving as a national holiday was issued by George Washington on October 3, 1789. But it was a one-time event. Individual states (especially those in New England) continued to issue Thanksgiving proclamations on various days in the decades to come. But it wasn’t until 1863, after 15 years of letter-writing by a magazine editor named Sarah Josepha Hale, that Abraham Lincoln was prompted to designate the last Thursday in November as a national holiday–a pattern which then continued into the future.

An original and thus hard-to-read version of George Washington’s Thanksgiving proclamation can be viewed through the Library of Congress website. The economist in me was intrigued to notice that some of the causes for giving of thanks included “the means we have of acquiring and diffusing useful knowledge … the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.”

Also, the original Thanksgiving proclamation was not without some controversy and dissent in the House of Representatives, where some saw it as an example of unwanted and inappropriate federal government interventionism. As reported by the Papers of George Washington website at the University of Virginia:

The House was not unanimous in its determination to give thanks. Aedanus Burke of South Carolina objected that he “did not like this mimicking of European customs, where they made a mere mockery of thanksgivings.” Thomas Tudor Tucker “thought the House had no business to interfere in a matter which did not concern them. Why should the President direct the people to do what, perhaps, they have no mind to do? They may not be inclined to return thanks for a Constitution until they have experienced that it promotes their safety and happiness. We do not yet know but they may have reason to be dissatisfied with the effects it has already produced; but whether this be so or not, it is a business with which Congress have nothing to do; it is a religious matter, and, as such, is proscribed to us. If a day of thanksgiving must take place, let it be done by the authority of the several States.”

Here’s the transcript of George Washington’s Thanksgiving proclamation from the National Archives.

Thanksgiving Proclamation

By the President of the United States of America. a Proclamation.

Whereas it is the duty of all Nations to acknowledge the providence of Almighty God, to obey his will, to be grateful for his benefits, and humbly to implore his protection and favor—and whereas both Houses of Congress have by their joint Committee requested me “to recommend to the People of the United States a day of public thanksgiving and prayer to be observed by acknowledging with grateful hearts the many signal favors of Almighty God especially by affording them an opportunity peaceably to establish a form of government for their safety and happiness.”

Now therefore I do recommend and assign Thursday the 26th day of November next to be devoted by the People of these States to the service of that great and glorious Being, who is the beneficent Author of all the good that was, that is, or that will be—That we may then all unite in rendering unto him our sincere and humble thanks—for his kind care and protection of the People of this Country previous to their becoming a Nation—for the signal and manifold mercies, and the favorable interpositions of his Providence which we experienced in the course and conclusion of the late war—for the great degree of tranquillity, union, and plenty, which we have since enjoyed—for the peaceable and rational manner, in which we have been enabled to establish constitutions of government for our safety and happiness, and particularly the national One now lately instituted—for the civil and religious liberty with which we are blessed; and the means we have of acquiring and diffusing useful knowledge; and in general for all the great and various favors which he hath been pleased to confer upon us.

and also that we may then unite in most humbly offering our prayers and supplications to the great Lord and Ruler of Nations and beseech him to pardon our national and other transgressions—to enable us all, whether in public or private stations, to perform our several and relative duties properly and punctually—to render our national government a blessing to all the people, by constantly being a Government of wise, just, and constitutional laws, discreetly and faithfully executed and obeyed—to protect and guide all Sovereigns and Nations (especially such as have shewn kindness unto us) and to bless them with good government, peace, and concord—To promote the knowledge and practice of true religion and virtue, and the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.

Given under my hand at the City of New-York the third day of October in the year of our Lord 1789.

Go: Washington

Sarah Josepha Hale was editor of a magazine first called Ladies’ Magazine and later called Ladies’ Book from 1828 to 1877. It was among the most widely-known and influential magazines for women of its time. Hale wrote to Abraham Lincoln on September 28, 1863, suggesting that he set a national date for a Thanksgiving holiday. From the Library of Congress, here’s a PDF file of Hale’s actual letter to Lincoln, along with a typed transcript for 21st-century eyes. Here are a few sentences from Hale’s letter to Lincoln:

“You may have observed that, for some years past, there has been an increasing interest felt in our land to have the Thanksgiving held on the same day, in all the States; it now needs National recognition and authoritive fixation, only, to become permanently, an American custom and institution. … For the last fifteen years I have set forth this idea in the “Lady’s Book”, and placed the papers before the Governors of all the States and Territories — also I have sent these to our Ministers abroad, and our Missionaries to the heathen — and commanders in the Navy. From the recipients I have received, uniformly the most kind approval. … But I find there are obstacles not possible to be overcome without legislative aid — that each State should, by statute, make it obligatory on the Governor to appoint the last Thursday of November, annually, as Thanksgiving Day; — or, as this way would require years to be realized, it has ocurred to me that a proclamation from the President of the United States would be the best, surest and most fitting method of National appointment. I have written to my friend, Hon. Wm. H. Seward, and requested him to confer with President Lincoln on this subject …”

William Seward was Lincoln’s Secretary of State. In a remarkable example of rapid government decision-making, Lincoln responded to Hale’s September 28 letter by issuing a proclamation on October 3. It seems likely that Seward actually wrote the proclamation, and then Lincoln signed off. Here’s the text of Lincoln’s Thanksgiving proclamation, which characteristically mixed themes of thankfulness, mercy, and penitence:

Washington, D.C.
October 3, 1863
By the President of the United States of America.
A Proclamation.

The year that is drawing towards its close, has been filled with the blessings of fruitful fields and healthful skies. To these bounties, which are so constantly enjoyed that we are prone to forget the source from which they come, others have been added, which are of so extraordinary a nature, that they cannot fail to penetrate and soften even the heart which is habitually insensible to the ever watchful providence of Almighty God. In the midst of a civil war of unequaled magnitude and severity, which has sometimes seemed to foreign States to invite and to provoke their aggression, peace has been preserved with all nations, order has been maintained, the laws have been respected and obeyed, and harmony has prevailed everywhere except in the theatre of military conflict; while that theatre has been greatly contracted by the advancing armies and navies of the Union. Needful diversions of wealth and of strength from the fields of peaceful industry to the national defence, have not arrested the plough, the shuttle or the ship; the axe has enlarged the borders of our settlements, and the mines, as well of iron and coal as of the precious metals, have yielded even more abundantly than heretofore. Population has steadily increased, notwithstanding the waste that has been made in the camp, the siege and the battle-field; and the country, rejoicing in the consciousness of augmented strength and vigor, is permitted to expect continuance of years with large increase of freedom. No human counsel hath devised nor hath any mortal hand worked out these great things. They are the gracious gifts of the Most High God, who, while dealing with us in anger for our sins, hath nevertheless remembered mercy. It has seemed to me fit and proper that they should be solemnly, reverently and gratefully acknowledged as with one heart and one voice by the whole American People.
I do therefore invite my fellow citizens in every part of the United States, and also those who are at sea and those who are sojourning in foreign lands, to set apart and observe the last Thursday of November next, as a day of Thanksgiving and Praise to our beneficent Father who dwelleth in the Heavens. And I recommend to them that while offering up the ascriptions justly due to Him for such singular deliverances and blessings, they do also, with humble penitence for our national perverseness and disobedience, commend to His tender care all those who have become widows, orphans, mourners or sufferers in the lamentable civil strife in which we are unavoidably engaged, and fervently implore the interposition of the Almighty Hand to heal the wounds of the nation and to restore it as soon as may be consistent with the Divine purposes to the full enjoyment of peace, harmony, tranquillity and Union.

In testimony whereof, I have hereunto set my hand and caused the Seal of the United States to be affixed.

Done at the City of Washington, this Third day of October, in the year of our Lord one thousand eight hundred and sixty-three, and of the Independence of the United States the Eighty-eighth.

By the President: Abraham Lincoln
William H. Seward,
Secretary of State

Some Economics of Dominant Superstar Firms

A range of evidence suggests that in recent decades, the leading firms in a given industry have attained a more dominant position than in the past. I’ve noted some of this accumulating evidence over time.

For example, back in 2015 the OECD published a report on “The Future of Productivity,” arguing that the productivity slowdown problems of many countries were occurring not because high-productivity firms were slowing down in their productivity growth, but because the firms with median and lower productivity weren’t keeping up. That year, Jae Song, David J. Price, Fatih Guvenen, and Nicholas Bloom wrote about how the pattern of diverging productivity across firms also led to diverging wages across firms. They argued that within a given firm, wage inequality has not changed much. But some high-productivity, high-profit firms were paying notably higher wages than other firms in the same industry, which was a major driver of growing inequality in labor income. Nicholas Bloom summarized this evidence in a cover story in March 2017 for the Harvard Business Review.

The McKinsey Global Institute took up the mantle in 2018 with a report summarizing past evidence and offering new evidence in Superstars: The Dynamics of Firms, Sectors, and Cities Leading the Global Economy (October 2018). It looks at about 6000 of the world’s largest public and private firms: “Over the past 20 years, the gap has widened between superstar firms and median firms, and also between the bottom 10 percent and median firms. … The growth of economic profit at the top end of the distribution is thus mirrored at the bottom end by growing and increasingly persistent economic losses …”  In 2019, the US Census Bureau and the Bureau of Labor Statistics created an experimental database called Dispersion Statistics on Productivity, which let researchers look at how productivity was distributed across firms in a given industry: for example, firms in a certain industry at the 75th percentile of productivity are about 2.4 times as productive as those at the 25th percentile, on average. Again, there was some evidence that this gap is widening, and that best-practice methods of improving productivity are not spreading as well as they used to.
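The dispersion statistic cited here is a ratio of percentiles of the firm-level productivity distribution within an industry. As a minimal sketch of how such a ratio is computed, here is a Python example using a small, entirely hypothetical set of firm-level productivity figures (not the Census/BLS data):

```python
import statistics

def dispersion_ratio(productivities):
    """Ratio of 75th- to 25th-percentile productivity across firms.

    statistics.quantiles(data, n=4) returns the three quartile cut
    points [Q1, Q2, Q3], using the exclusive method by default.
    """
    q1, _, q3 = statistics.quantiles(productivities, n=4)
    return q3 / q1

# Hypothetical output-per-worker figures for ten firms in one industry.
firms = [40, 55, 60, 70, 80, 95, 110, 130, 150, 190]
print(round(dispersion_ratio(firms), 2))  # → 2.3
```

A ratio of 2.4, as in the Dispersion Statistics on Productivity data, means a firm at the 75th percentile produces about two-and-a-half times as much per unit of input as a firm at the 25th percentile in the same industry.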

In short, an array of evidence suggests that the edge of dominant firms over their competitors has increased in a variety of industries. Jan Eeckhout reviews this evidence, and also looks at causes and effects, in his essay on “Dominant firms in the digital age” (UBS Center Public Paper #12, November 2022).

Eeckhout argues that the edge of dominant firms can be achieved in several ways in the digital era. The better-known approach, I think, is in the idea of network effects. For example, many buyers go to Amazon because many sellers are also at Amazon, and vice versa. Once such a network exists, it can be hard for a new firm to gain a foothold.

The more subtle approach is for firms to make investments that fall under the accounting category of “`Selling, General and Administrative expenses’ (SG&A). Those include expenditures on Research and Development (R&D), advertising, manager salaries, etc. and are often interpreted as fixed costs or intangibles. The observed rise in SG&A is a source of economies of scale as the fixed cost of production leads to declining average costs even with moderately decreasing returns in the variable inputs.” To put the point another way, some firms make substantial investments in technologies, brand names, and managers who can build on these capabilities. Eeckhout argues:

The rise of dominant firms that we have seen during the advent of the digital age is built on cost-reducing and efficiency-enhancing innovations that create increasing returns to scale. This implies a winner-takes-all market with a dominant firm achieving a long-lasting monopoly position. And while monopoly is often associated with higher prices, most of these firms achieve this position by doing the opposite, that is lowering prices. They can do this because their innovations and investments lead to an even larger reduction in costs. And that is why the digital technology is so attractive for customers: technological innovation is the hero. But because costs decline more than prices due to scale economies, technological change is also the villain.

(For those interested in digging deeper here, the Summer 2022 issue of the Journal of Economic Perspectives includes a three-paper symposium on the rising importance of intangible capital in the US economy, including everything from innovations to brand names. The Summer 2019 issue includes a three-paper symposium on the issue of the extent to which price markups over cost have been changing over time, and the implications for labor markets and the macroeconomy. As has been true for more than a decade now, all JEP articles back to the first issue are freely available. Full disclosure: I work as Managing Editor of the JEP, and thus am predisposed to think the articles are of wider interest!)

As Eeckhout points out, potential consequences of this rise in dominant superstar firms include greater inequality of wages created by these lasting differences across firms; a slowdown in new business startups as entrepreneurs face a more challenging environment; a shift in the flow of national income going to capital, rather than labor; and in general, a greater ability of more-dominant firms, less concerned about competition, to charge higher prices.

What is an appropriate policy solution? One approach is higher taxes on the profits of dominant firms, but without staking out a position here on the extent to which this is desirable, it’s worth noting that higher taxes would not alter the dominance of these firms, and many of the negative consequences would persist.

An alternative approach would be to recognize the phenomenon, but to take more of a hands-off attitude. After all, if the dominant firms are achieving success by making productivity-enhancing investments that reduce costs, this is broadly speaking a desirable goal, rather than something to be penalized. Besides, today’s dominant firms are not invulnerable, as anyone tracking the current performance of Meta (Facebook) or Twitter will attest. Not that long ago, companies like America Online and MySpace seemed to have dominant positions.

Besides, to what extent are consumers being “harmed” by, say, free access to email, word-processing, and spreadsheets offered by Google? Preston McAfee put it this way in an interview a few years ago:

First, let’s be clear about what Facebook and Google monopolize: digital advertising. The accurate phrase is ”exercise market power,” rather than monopolize, but life is short. Both companies give away their consumer product; the product they sell is advertising. While digital advertising is probably a market for antitrust purposes, it is not in the top 10 social issues we face and possibly not in the top thousand. Indeed, insofar as advertising is bad for consumers, monopolization, by increasing the price of advertising, does a social good. 

Amazon is in several businesses. In retail, Walmart’s revenue is still twice Amazon’s. In cloud services, Amazon invented the market and faces stiff competition from Microsoft and Google and some competition from others. In streaming video, they face competition from Netflix, Hulu, and the verticals like Disney and CBS. Moreover, there is a lot of great content being created; I conclude that Netflix’s and Amazon’s entry into content creation has been fantastic for the consumer. …

A more active approach would be to look for targeted opportunities to ensure greater competition. For example, McAfee suggests that consumers may well be harmed in a meaningful way by the Android-Apple duopoly in the market for smartphones, as well as by the very limited competition to provide home internet services.

Eeckhout emphasizes the general issue of “interoperability”–that is, the ability of consumers to shift between companies. He writes:

Interoperability has many applications. It is the regulation that ensures that a hardware producer cannot change the charger plug from product to product thus forcing users to buy an expensive new one each time, or whenever they need to replace an existing plug. And the concept of interoperability was at the heart of the development of the internet where the founding fathers of the world wide web ensured that the accessibility of different services was built in. They ensured that an email message for example could be sent from one provider (say Gmail) to another (say your company email servers). Similarly with the access to web pages that are hosted by different providers. This generates a lot of entry and competition of internet service providers. But this concept of interoperability does not come without regulation. For example, interoperability is not engrained in messaging services. It is impossible to send a message from WhatsApp to Snapchat since messaging services are closed. None of the services has an incentive to open their messaging platform to the messages of their competitors. As a result, compared to the number of service providers for email and the world wide web, the number of messaging services is very small.

If people can choose to transfer their personal information, or to offer access to that information, from one setting to another, competition can be expanded. This goal isn’t a simple one. But if people could move their preferences and past shopping lists, even their financial and banking records and their health data, from one provider to another, competition in a number of areas could become easier. Another suggestion is that antitrust regulators should be skeptical when a dominant firm seeks to buy up smaller firms that have the potential to grow into future large-scale competitors.

The most active approach would go beyond specific situations of anticompetitive behavior and seek to use antitrust regulation in more aggressive ways, perhaps even with the goal of breaking up dominant firms. I don’t see a strong case for this kind of action. When the underlying issue is strong network effects, such effects are not going to go away. When the underlying issue is firms making major productivity-enhancing investments, that’s a good thing, not a bad one. Perhaps rather than figure out how to slow down the productivity leaders, we should be thinking more about what kinds of market structures and institutions might help to diffuse what they are already doing across the rest of the economy. Finding ways to level up the laggards is often harder than levelling down the leaders, but also ultimately more productive.

Income Inequality for US Households

I sometimes say that I feel as if I have a pretty good grasp on the US economy–except that my understanding has about a 2-3 year lag. For example, right now I feel as if I’ve got a pretty good understanding of events up through about May 2020, but I’m still trying to develop a satisfactory understanding of what has happened since then.

When it comes to inequality of household incomes, the Congressional Budget Office is on a similar timeline. CBO has just published The Distribution of Household Income, 2019 (November 2022). But CBO has a better excuse for the time lag than I do. A good chunk of the underlying data behind this report is from income tax data. This data has the great advantage that it isn't from a survey asking people about their incomes, but is from what people actually filed with the Internal Revenue Service, which in turn is cross-checked with data from employers, financial institutions, and other types of income (like royalty payments). But data for 2019 incomes doesn't get sent in until 2020, and the pandemic led to delays on when taxes were due. Thus, anyone working with full tax data is always a couple of years behind the times.

The real strength of the report is not that it is up to the minute, but rather that it offers a snapshot in time along with a useful sense of trends in income inequality since the late 1970s, when it began to rise. Here are a few of the graphs that caught my eye. This is a snapshot of inequality across income levels for 2019. The breakout panel on the right shows that while average income for the top 1% was about $2 million, this breaks down into average income of $1.2 million for the 99th to 99.9th percentiles (that is, the top 1% excluding the top 0.1%), average income of $5.7 million for the 99.9th to 99.99th percentiles (that is, the top 0.1% excluding the top 0.01%), and average income of $43 million for the top 0.01%.
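As a quick sanity check on these figures (a sketch of my own, not a calculation from the report), the slice averages should reproduce the roughly $2 million average for the whole top 1% once each slice is weighted by its share of households:

```python
# Slice averages from the CBO breakout panel, in millions of dollars.
# Weights are each slice's share of all households; together they make up the top 1%.
slices = [
    (0.90, 1.2),   # 99th-99.9th percentile: 0.90% of households, avg $1.2M
    (0.09, 5.7),   # 99.9th-99.99th:         0.09% of households, avg $5.7M
    (0.01, 43.0),  # top 0.01%:              0.01% of households, avg $43M
]

weighted_avg = sum(w * avg for w, avg in slices) / sum(w for w, _ in slices)
print(f"Top 1% average income: ${weighted_avg:.2f} million")  # about $2 million
```

The weighted average comes out just above $2 million, consistent with the breakout panel.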

This figure shows where the income comes from for each group. In particular, the black lines show that the highest share of income is from labor income–that is, being paid for work done in the previous year–for everyone up to the 99.9th percent. For the very tip-top, capital income and capital gains (think rising prices of assets like stocks and real estate) are the biggest share.

It’s easy to gabble about whether inequality is too high or too low, but many people are better gabblers than I am, so I won’t do that here. It’s perhaps worth saying that no one should expect income levels to be equal in a given year: there’s no reason for a 19 year-old high school dropout to be earning the same income as a 50 year-old physician–or as someone who started a company that hires dozens or hundreds of people. In addition, most people move between income levels over time as their skills and experience and savings increase.

But the CBO is a just-the-numbers organization. Thus, the report is a place to get information about the degree of redistribution of income in the US, and how that has evolved over time.

For example, here’s a figure showing average federal taxes by income group: this measure includes all federal taxes (say, including payroll taxes for Social Security and Medicare), but does not include state and local income or sales taxes.

Here’s the trend in average federal taxes paid at the top of the income distribution in the last 40 years or so. You will notice that while the subject has been the source of considerable political controversy, the ups and downs have pretty much levelled out over time.

This figure shows trends in what households in the lowest quintile of the income distribution receive in federal redistribution programs. Notice that assistance in the form of Medicaid has risen substantially, but of course, Medicaid can’t be used to pay the rent or buy groceries. The other main transfers–Supplemental Security Income, food stamps (SNAP), and “other”–have been flat or trending down.

So with federal taxes and benefits taken into account, how much redistribution happened in 2019? The figure shows the share of income for various groups before and after taxes and spending. The CBO writes: “The lowest quintile received 8 percent of income after transfers and taxes, compared with 4 percent of income before transfers and taxes. … In contrast, the share of income after transfers and taxes for the highest quintile was about 6 percentage points less than the share of income before transfers and taxes. Because those households paid more in taxes than they received in transfers, the transfer and tax systems combined to reduce their share of income from 55 percent to 48 percent. Much of that decline was experienced by households in the top 1 percent of the distribution, whose share of income after transfers and taxes was 13 percent, 3 percentage points lower than their share of income before transfers and taxes.”

The Gini coefficient is a standard way of compressing the distribution of income into a single number. The Gini ranges from 0 to 1, where a Gini of 0 would imply a completely equal distribution of income, and a Gini of 1 would imply that a single person received all the income. Here’s the Gini for the US income distribution over time. You’ll notice that the top line, income inequality based on market incomes, is rising over time. However, the bottom line, which is income inequality after taxes and transfers, shows a Gini that has been essentially flat since 2000. The Gini in 2019 is higher than in most of the 1970s and 1980s, but similar to the peak years of those decades, like 1986.
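For readers who want to see the mechanics, here is a minimal sketch of how a Gini coefficient can be computed from a list of incomes, using the standard weighted-rank formula (this is an illustration of the concept, not the CBO's own procedure, which works from detailed tax and survey microdata):

```python
def gini(incomes):
    """Gini coefficient via the weighted-rank formula on sorted incomes."""
    x = sorted(incomes)               # ascending order
    n = len(x)
    total = sum(x)
    # G = (2 * sum of rank_i * income_i) / (n * total) - (n + 1) / n
    rank_weighted = sum((i + 1) * xi for i, xi in enumerate(x))
    return 2 * rank_weighted / (n * total) - (n + 1) / n

# Perfect equality: everyone earns the same, so the Gini is 0.
print(gini([50, 50, 50, 50]))   # 0.0
# One person earns everything: the Gini is (n - 1) / n, approaching 1.
print(gini([0, 0, 0, 100]))     # 0.75
```

With only four people, the "one person has it all" case gives 0.75 rather than exactly 1; as the population grows, that extreme case approaches 1, which is why the Gini is described as ranging from 0 to 1.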

The overall pattern is that as market income has become more unequal, taxation and redistribution have pushed more strongly toward equality of after-tax, after-transfer income, and since about 2000 those forces have broadly balanced each other out. Of course, nothing in these numbers is an argument that the US should not do more (or less) to redistribute. But the factual claim that after-tax, after-transfer income inequality has been rising substantially over time is often overstated–and has not been true for the last two decades.