Hard and Soft Landings: The Federal Reserve’s Record

When the Federal Reserve raises interest rates to fight inflation, a “hard landing” refers to the possibility that inflation is reduced at the cost of a significant recession, while a “soft landing” refers to the possibility that inflation is reduced with only a minor recession–or perhaps even no recession at all. Perhaps the canonical example of a hard landing happened in the late 1970s and early 1980s, when the Fed under chair Paul Volcker broke the back of the inflation of the 1970s by raising interest rates, but at the cost of back-to-back recessions in 1980 and 1981-82.

What is the historical record of the Federal Reserve in raising interest rates and managing a soft landing? Alan S. Blinder tackles that question in the just-published Winter 2023 issue of the Journal of Economic Perspectives in “Landings, Soft and Hard: The Federal Reserve, 1965–2022.” (Full disclosure: I’ve been the Managing Editor of JEP for 36 years, so I am perhaps predisposed to find the articles of interest.)

From Blinder’s paper, here’s a figure showing the federal funds interest rate over time. There are some challenges in interpreting the figure when there are jagged jumps up and down in a short time, but Blinder argues that it is fair to read the historical record as involving 11 episodes since 1965 in which the Fed raised interest rates substantially.

What jumps out from the figure is that there are a number of times where the Fed raised interest rates and either it was not followed by a recession (1, 6, and 8), or the recession was very short (9), or the recession that followed was not caused by the higher interest rates (10 and 11). The Fed raising interest rates to nip inflation in the bud in 1994 (episode 8) is perhaps the best-known example of a landing so soft that a recession didn’t even occur. As another example, the Fed was gradually raising interest rates in the lead-up to the pandemic (episode 11), but the pandemic recession was clearly not caused by higher interest rates!

Here’s a table showing Blinder’s evaluation of the type of landing that followed each of the 11 episodes of monetary tightening. When he asks “was it a landing?”, he is raising the possibility that the higher interest rates didn’t actually bring inflation down at that time. For “would have been soft” (episode 7), Blinder argues that the Fed might have pulled off a soft landing with its interest rate increase in 1988-89, except for Iraq’s invasion of Kuwait in 1990.

The Fed has raised its key policy interest rate (the “federal funds rate”) from near-zero in March 2022 to about 4.6%–with talk of additional increases to come. Based on the historical record, what insights are possible about whether a hard or soft landing is likely?

  1. There are clearly examples where higher interest rates from the Fed, in the service of fighting inflation, were followed by recessions.
  2. The Fed has faced two main challenges in the last few years: the pandemic recession, where the policy response led to a huge burst of disposable income along with supply chain problems, and then the Russian invasion of Ukraine in early 2022, which caused a new burst of higher prices for energy and food along with additional supply chain disruptions. History doesn’t offer do-overs. But it’s at least possible that the inflation which started in 2021 might have faded on its own if it had not been reinforced by the Russian invasion.
  3. One can argue that the last three recessions were not caused by higher Fed interest rates, but instead by the pandemic (2020), the implosion of financial instruments related to the housing price bubble (2007-2009), and the end of the dot-com boom in stock prices and investment levels (2001). Thus, perhaps the key question about the risks of a recession in 2023 may be less about interest rate policy and more about whether the US or world economy experiences a severe negative shock this year.
  4. Several of the Fed’s interest rate increases over time can be thought of as readjusting back to a more reasonable long-run level. For example, the rising Fed interest rates pre-pandemic were in some ways just based on a belief that the rate shouldn’t and couldn’t stay near zero percent forever. Or going back to Blinder’s episode 6, this was a time of chaotic shifts after a severe recession, with a plummeting price of oil and falling inflation, and the Fed at that time seemed to believe that it had gone a little too far in cutting interest rates, so it adjusted back. Part of the Fed increasing interest rates in the last year or so is surely a belief that although it made sense to take the policy interest rate down to zero percent in the pandemic recession, the rate again shouldn’t and couldn’t stay there forever.
  5. Macroeconomics is hard because the key factors driving the economy shift over time. It’s not obvious, for example, that the same lessons which applied to the stagflationary period of the 1970s should apply equally well and in the same ways to the dot-com boom-and-bust of the 1990s, or the housing price bubble of the early 2000s, and also to a short-and-sharp recession caused by a pandemic.
  6. Some of the arguments about inflation are really about momentum. When inflation rises, does it have a tendency to fade out? Or does it have a tendency to maintain the higher rate of inflation? Or does it have a tendency to build momentum, like a rock rolling downhill? Which scenario prevails probably depends on how the causes of the inflation are perceived; the recent historical record and the credibility of the central bank in fighting inflation; and what expectations firms, workers, and consumers have about future inflation.

My own sense, for what it’s worth, is that the US economy is unlikely to escape this episode of higher Federal Reserve interest rates without experiencing a recession, by which I mean a period of higher unemployment and lower production. It seems to me that the pressures and tensions unleashed by the higher interest rates are working their way into reduced borrowing and credit, as well as tensions in bond markets. The Fed seems to be taking a middle road here, with a belief that part of the 6.5% inflation rate from December 2021 to December 2022 was temporary–in the sense that pandemic-related spending will fall, supply chains issues will resolve, and tensions from the Russian invasion of Ukraine will be manageable. Thus, the Fed is trying to raise interest rates only by as much as necessary to be clear that inflation will not gain a permanent foothold, while minimizing the risk of a hard landing.
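The 6.5% figure mentioned above is a standard year-over-year calculation from a price index like the Consumer Price Index. A minimal sketch of that arithmetic, using illustrative index values rather than the official CPI numbers:

```python
# Year-over-year inflation from a price index: (P_t / P_{t-12}) - 1.
# The index values below are illustrative, not official CPI figures.
index_dec_2021 = 280.1
index_dec_2022 = 298.3

inflation_pct = (index_dec_2022 / index_dec_2021 - 1) * 100
print(f"Year-over-year inflation: {inflation_pct:.1f}%")
```

With these made-up values the calculation comes out to roughly 6.5%, matching the December 2021 to December 2022 rate discussed in the text.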

Winter 2023 Journal of Economic Perspectives Free Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Winter 2023 issue, which in the Taylor household is known as issue #143. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.

______________________

Symposium: Trade Sanctions and International Relations

“Economic Sanctions: Evolution, Consequences, and Challenges,” by T. Clifton Morgan, Constantinos Syropoulos and Yoto V. Yotov

Taking an interdisciplinary perspective, we examine the evolution of economic sanctions in the post-World War II era and reflect on the lessons that could be drawn from their features and patterns of use. We observe that, during this time, there has been a remarkable increase in the use of sanctions as an instrument of foreign policy. We classify this period into four ‘eras’ and discuss, in this context, how the evolution of sanctions may be linked to salient features of the contemporaneous international political and economic orders. Our review of the related literatures in economics and political science suggests, among other things, that our understanding of sanction processes could be significantly advanced by marrying these perspectives. We conclude by identifying several questions and challenges, and by discussing how interdisciplinary research could address them.

Full-Text Access | Supplementary Materials

“Financial Sanctions, SWIFT, and the Architecture of the International Payment System,” by Marco Cipriani, Linda S. Goldberg and Gabriele La Spada

Financial sanctions, alongside economic sanctions, are components of the toolkit used by governments as part of international diplomacy. The use of sanctions, especially financial, has increased over the last 70 years. Financial sanctions have been particularly important whenever the goals of the sanctioning countries were related to democracy and human rights. Financial sanctions restrict entities—countries, businesses, or even individuals—from purchasing or selling financial assets, or from accessing custodial or other financial services. They can be imposed on a sanctioned entity’s ability to access the infrastructures that are in place to execute international payments, irrespective of whether such payments underpin financial or real activity. This article explains how financial sanctions can be designed to limit access to the international payment system and, in particular, the SWIFT network, and provides some recent examples.

Full-Text Access | Supplementary Materials

Symposium: Monetary Policy

“Monetary Policy When the Central Bank Shapes Financial-Market Sentiment,” by Anil K Kashyap and Jeremy C. Stein

Recent research has found that monetary policy works in part by influencing the risk premiums on both traded financial-market securities and intermediated loans. Research has also shown that when risk premiums are compressed, there is an increased likelihood of a reversal that damages the credit-supply mechanism and the real economy. Together these effects create an intertemporal tradeoff for monetary policy, as stimulating the economy today can sow the seeds of a future downturn that might be difficult to offset. We draw out some implications of this tradeoff for the conduct of monetary policy.

Full-Text Access | Supplementary Materials

“Risk Appetite and the Risk-Taking Channel of Monetary Policy,” by Michael D. Bauer, Ben S. Bernanke and Eric Milstein

Monetary policy affects financial markets and the broader economy in part by changing the risk appetite of investors. This article provides new evidence for this so-called risk-taking channel of monetary policy by revisiting and extending event-study analysis of Federal Open Market Committee announcements. We document significant effects of unexpected monetary policy changes on risk indicators drawn from equity, fixed-income, credit, and foreign exchange markets. We develop a new index of risk appetite based on the common component of these indicators. Surprise monetary easing leads to strong and persistent increases in our index, and vice versa for tightening surprises, consistent with the view that monetary policy affects asset prices in large part through its effects on risk appetite. We discuss the implications of the risk-taking channel for monetary policy transmission, optimal monetary policy, and financial stability.

Full-Text Access | Supplementary Materials

“Landings, Soft and Hard: The Federal Reserve, 1965–2022,” by Alan S. Blinder

“Soft landings,” that is, cases in which the central bank tightens monetary policy to fight inflation but does not cause a recession (which would be a “hard landing”), are thought to be difficult to achieve and extremely rare. According to the conventional wisdom, the Federal Reserve has managed to achieve only one soft landing in the past 60 years—in 1994–1995. This paper studies the eleven episodes of monetary policy tightening by the Fed since 1965, and concludes that the central bank has a better record than that—that as long as the criteria for softness are not too stringent, and the Fed was actually trying to land the economy softly, the Fed has succeeded several times. Achieving a soft landing, however, requires both skill in managing monetary policy and the absence of adverse external shocks.

Full-Text Access | Supplementary Materials

“Monetary Policy and Inequality,” by Alisdair McKay and Christian K. Wolf

We ask three questions about the connection between monetary policy and inequality. First, does monetary policy affect inequality? While different households respond to changes in monetary policy for different reasons, we argue that the overall consumption effects are relatively evenly distributed across households. Second, does household heterogeneity change our understanding of monetary policy transmission? A more careful account of microeconomic consumption behavior materially alters our understanding of transmission channels, but has rather limited effect on our general view of the aggregate effects of monetary policy. Third, does inequality affect the optimal conduct of monetary policy? Since monetary policy is a rather blunt distributional tool, we argue that even a central bank with an explicit distributional mandate would not deviate much from conventional policy prescriptions.

Full-Text Access | Supplementary Materials

Symposium: Hispanic Americans

“Unraveling the Hispanic Health Paradox,” by José Fernandez, Mónica García-Pérez and Sandra Orozco-Aleman

In 2019, Hispanics in the US had a life expectancy advantage of 3.0 years and 7.1 years over non-Hispanic Whites and non-Hispanic Blacks, respectively, despite having real-household income values 26 percentage points lower than non-Hispanic White households. Hispanics appear to have equal or even better health outcomes relative to non-Hispanic Whites across various health measures. This is known as the Hispanic health paradox. This paper underscores the importance of disaggregating Hispanics by ancestry and age profile when discussing the paradox across key health outcomes. It also provides an overview of the leading explanations, such as the salmon bias and the healthy immigrant effect. Further, it highlights the role of healthcare access and usage in this discussion. Ignoring these sources of bias has important consequences for how morbidity and mortality among Hispanics are measured within widely used national datasets.

Full-Text Access | Supplementary Materials

“Hispanic Americans in the Labor Market: Patterns over Time and across Generations,” by Francisca M. Antman, Brian Duncan and Stephen J. Trejo

This article reviews evidence on the labor market performance of Hispanics in the United States, with a particular focus on the US-born segment of this population. After discussing critical issues that arise in the US data sources commonly used to study Hispanics, we document how Hispanics currently compare with other Americans in terms of education, earnings, and labor supply, and then we discuss long-term trends in these outcomes. Relative to non-Hispanic Whites, US-born Hispanics from most national origin groups possess sizeable deficits in earnings, which in large part reflect corresponding educational deficits. Over time, rates of high school completion by US-born Hispanics have almost converged to those of non-Hispanic Whites, but the large Hispanic deficits in college completion have instead widened. Finally, from the perspective of immigrant generations, Hispanics experience substantial improvements in education and earnings between first-generation immigrants and the second-generation consisting of the US-born children of immigrants. Continued progress beyond the second generation is obscured by measurement issues arising from high rates of Hispanic intermarriage and the fact that later-generation descendants of Hispanic immigrants often do not self-identify as Hispanic when they come from families with mixed ethnic origins.

Full-Text Access | Supplementary Materials

“US Immigration from Latin America in Historical Perspective,” by Gordon Hanson, Pia Orrenius and Madeline Zavodny

The share of US residents who were born in Latin America and the Caribbean plateaued recently, after a half century of rapid growth. Our review of the evidence on the US immigration wave from the region suggests that it bears many similarities to the major immigration waves of the nineteenth and early twentieth centuries, that the demographic and economic forces behind Latin American migrant inflows appear to have weakened across most sending countries, and that a continued slowdown of immigration from Latin America post-pandemic has the potential to disrupt labor-intensive sectors in many US regional labor markets.

Full-Text Access | Supplementary Materials

Articles

“Oleg Itskhoki: 2022 John Bates Clark Medalist,” by Andrew Atkeson and Gita Gopinath

The 2022 John Bates Clark Medal of the American Economic Association was awarded to Oleg Itskhoki, Professor of Economics at the University of California, Los Angeles, for his pathbreaking contributions in international economics. This article summarizes Oleg Itskhoki’s work, places it in the context of the broader literature, and emphasizes how it has shed new light on a number of long-standing puzzles regarding the behavior of exchange rates and international relative prices more generally, and their connection to macroeconomic fluctuations and governments’ choices of monetary and fiscal policies.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

Biologics and Biosimilars: A Test for Intellectual Property

The fundamental tradeoff of intellectual property rights–like patents and copyrights–is that the inventor gets a government-protected monopoly for a period of time, as an incentive for innovation, but then the innovation passes into the public domain. For example, this is the moment when generic equivalents of pharmaceuticals can start competing with brand-name drugs.

The period when a lucrative invention shifts into the public domain is primed for politics and strategy, as the incumbent firm tries to hold on to its leading competitive position. In one famous example, the US Congress passed in 1998 what became known as the “Mickey Mouse Protection Act,” which extended copyright protection for Mickey and many lesser-known creations to last for 95 years–an extension of 21 years from the previous rule. A few decades ago, when generic drugs started to take over from previously patented drugs, there was a storm of litigation and antitrust action around questions like whether generics had to go through similar testing for safety and efficacy by the Food and Drug Administration, and how to judge whether a generic drug was the same as–or perhaps just a little different from–the previously patented drug.

Now, 91% of US prescriptions are for generic drugs. By one estimate, this saves over $370 billion per year for US health care consumers. But almost all of those generic drugs are “small molecule” drugs, which essentially means that they are created by chemistry. Starting a couple of decades ago, a new wave of “biologic” drugs arrived. These begin by isolating certain components from humans, animals, or microorganisms, and then growing them through biotechnology or related techniques. They include “vaccines, blood and blood components, allergenics, somatic cells, gene therapy, tissues, and recombinant therapeutic proteins.” As one example, the COVID vaccines are biologics. The highest-revenue prescription drug of all time, AbbVie’s Humira–an injectable treatment for autoimmune conditions like rheumatoid arthritis–is a biologic. It can cost some patients $70,000 per year.

A number of biologics are remarkable advances in health care, and bring substantial benefits to patients. Many of them also have very high prices. Indeed, the rough estimates seem to be that biologics account for 2% of US prescriptions for drugs, but 40% of total spending on prescription drugs. Of the top 10 prescription drugs in the US in 2021 by revenue, seven are biologics–including two versions of the COVID vaccine. Industry projections are that in a few years, annual revenues from biologics will exceed those for small-molecule drugs by $100 billion per year or more. Again, part of the reason is that so many small-molecule drugs are now available in generic form, while many leading biologics are still on patent. Humira is just about to go off patent this year.

The generic equivalents for biologic drugs are called “biosimilars,” because the chemistry of these (“large molecule”) biologics often isn’t easy to define. They are often hard to manufacture, and susceptible to heat or contamination. As a result, the producers of the original biologics often obtain patents not just on a therapeutic molecule, but on a variety of manufacturing techniques, each of which produces slightly different chemical outcomes that can be patented as well.

For example, the AbbVie patent on adalimumab, the key ingredient in Humira, actually expired back in 2016. But given new manufacturing techniques and variations on the original formulation, AbbVie has actually obtained more than 100 patents related to Humira. In the antitrust biz, this is sometimes called a “patent thicket”–a term which refers to an ever-evolving body of patents, with old ones expiring and new ones coming into play, thus blocking new competition on an ongoing basis.

Back in 2010, the Biologics Price Competition and Innovation Act became law, with the goal of defining a path for biosimilar drugs to replace the original brand-name biologics, as patents expire, in the same way that generics have replaced so many small-molecule drugs in the last couple of decades. The law is based on language about whether the biosimilar is “highly similar” to the original, with no “clinically meaningful differences.” But of course, terms like “highly similar” and “meaningful differences” are basically catnip for lawyers, especially when billions of dollars of sales are at stake.

I don’t have any deep insights into how to write the rules governing when biosimilars can replace the original biologics. I readily acknowledge that there are some hard questions here, because we aren’t dealing with precise chemistry and small molecule drugs in this situation. There are legitimate questions about biosimilars being produced in safe ways.

But the overall goal here seems fairly clear: when patent protection expires, reasonably easy entry should be possible. If the rules for new biosimilars require extensive re-testing and long-term tests on patients, it will be much harder for biosimilars to gain a foothold. In some cases, makers of the original brand-name biologics also managed to sign exclusive deals with health care providers, where the providers agreed to use only the original biologic and to shun biosimilars. There are now 22 biosimilars approved for patients, with another seven scheduled to launch this year. But there have been essays in medical journals for several years now lamenting the slow pace at which biosimilars have been approved.

At present, Humira is the biggest test. At least eight other drug companies have plans to launch biosimilars. But along with the patent thicket that AbbVie has constructed around Humira, AbbVie is introducing two new biologics with similar effects, different active ingredients–and patent protection. Under current rules, a pharmacist can swap in a generic drug for a brand-name without needing a new prescription, but for biologics, a new prescription naming the biosimilar drug is needed. In some ways, the question of whether biosimilar competitors for Humira can become established in the market is a test case for the biosimilar industry as a whole, in the sense that it will affect whether firms see the biosimilar market as worth pursuing. But one big health care system is apparently ready to switch: “David Chen, who directs specialty drug use for Kaiser Permanente, said the insurer plans to stop covering Humira by the end of 2023. He expects at least 90 percent of patients to switch to the biosimilar alternative, and said Kaiser should save hundreds of millions of dollars a year.”

“A golden opportunity for a beleaguered biosimilars market” (Tradeoffs, January 26, 2023).

Retirement Ages: Some International Comparisons

At what age does the average person retire in high-income countries? How long is the average period of retirement? The OECD collects this information. Here’s a trimmed down table with a selection of countries.

You can see from the first two columns of data that the average age of labor market exit for US men and women is a shade under 65 years. This is lower than Japan and Korea, and interestingly, lower than Sweden as well. The countries with the earliest average ages of labor market exit seem to be France, Spain and Greece at less than 61 years, with Italy also on the low side.

The last two columns of data show expected years in retirement. For the United States, this is 18.6 years for men and 21.3 years for women, mostly reflecting the longer average life expectancies for women. The shortest expected retirements are in Japan and Korea. Interestingly, Sweden has a later average age of retirement than the US but also a longer expected retirement–which is possible because of longer life expectancies in Sweden. The longest retirement periods seem to be in the countries with the lowest retirement ages, like France, Greece, and Spain, where it is common for men to have 23 years in retirement and women 27 years.

Over the last 50 years, the average age of retirement in the US has followed a U-shaped pattern, first dropping by about three years and then rising back close to the earlier level. For men, the OECD data shows that the average age of retirement was 65.5 years in 1970, 63.8 years in 1980, 62.4 years in 1990, 62.5 years in 2000, 62.9 years in 2010, and then 64.9 years in 2020. For the expected time in retirement, the US shows a substantial rise from 1970 up through 2012, but a gradual decline since then. For men, the OECD data shows 12.8 years of expected retirement in 1970, 15.0 years in 1980, 17.0 years in 1990, 18.2 years in 2000, 19.6 years in 2010, and then–after peaking at 20.1 expected years of retirement in 2012–a gradual decline to 18.6 expected years of retirement in 2020.
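The U-shaped pattern in the OECD series quoted above can be made explicit with a quick sketch that uses only the numbers stated in the text:

```python
# OECD figures for US men, as quoted in the text above.
retirement_age = {1970: 65.5, 1980: 63.8, 1990: 62.4,
                  2000: 62.5, 2010: 62.9, 2020: 64.9}
years_retired = {1970: 12.8, 1980: 15.0, 1990: 17.0,
                 2000: 18.2, 2010: 19.6, 2020: 18.6}

# The bottom of the U: the decade with the lowest average retirement age.
trough = min(retirement_age, key=retirement_age.get)
print(f"Retirement age bottomed out around {trough} at {retirement_age[trough]} years")

# Net changes over the half century.
print(f"1970-2020 change in retirement age: {retirement_age[2020] - retirement_age[1970]:+.1f} years")
print(f"1970-2020 change in expected retirement: {years_retired[2020] - years_retired[1970]:+.1f} years")
```

The trough falls around 1990, and over the full half century the retirement age barely changed while expected years in retirement rose substantially–the longer retirements are driven mostly by rising life expectancy, not earlier retirement.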

In a big-picture sense, this is consistent with a long-term pattern of US men over age 65 decreasing their labor force participation in the long run, but with an upward shift in labor market participation in the last 20 years or so. From the Our World in Data website:

Given that the US has a relatively late age of expected retirement and relatively short period of expected retirement, one might expect that the US Social Security system of government pensions would be in relatively good financial shape compared to some other countries, but this doesn’t seem to be correct. Consider this table from the Mercer CFA Institute Global Pension Index 2022, which ranks pension systems across 44 countries for adequacy, sustainability, and integrity. (“Integrity” refers to a combination of issues related to regulation, governance, protection, communication, and operating costs.) The US system does fairly well for adequacy, but not so well on the other measures. Of the countries listed above, the Netherlands and Denmark are thought to be grade A systems. The US overall falls into the same category as countries like France and Spain. Those countries rank higher than the US on “adequacy” of benefits, with their lower retirement ages and longer expected periods of retirement, but rank lower than the US on the financial “sustainability” of the benefits.

As the US deals with the financial consequences of an aging population, it seems appropriate that part of the answer will involve people working longer on average. But other parts of the answer will also involve additional financing for the system as a whole and additional support for the elderly poor.

Global Commodity Markets

Modern economic growth is a mixture of physical objects and the ideas that can be embodied in those objects. Wheat makes flour which makes a cake. Sand can be used to make concrete or computer chips. The original basic objects of the world economy are called “commodities.” Broadly speaking, they include fossil fuels, agriculture, and minerals. Many of the concerns about economic growth and environmental sustainability are essentially arguments about economic or environmental aspects of future production of commodities. For an overview of the economic issues, John Baffes and Peter Nagle have edited a four-chapter book on Commodity Markets: Evolution, Challenges, and Policies (World Bank, 2022).

Here’s a long-term overview of commodity markets from the first chapter, “The Evolution of Commodity Markets over the Past Century,” by Baffes and Nagle together with Wee Chian Koh.

Economic expansion after World War II (WWII), and more recently the emergence of EMDEs [emerging markets and developing economies] as important players in the global economy, has increased commodity demand, especially for energy commodities and metals and minerals. Even though the world’s population rose from 2 billion in 1920 to 8 billion in 2020, the production of commodities to feed, clothe, and support the rising population has more than kept pace. Expanding production was possible because of technological innovations, the discovery of new reserves of commodities, and more intensive agricultural production.

On the energy front, crude oil became the most important commodity, replacing coal. Known reserves of crude oil and natural gas have increased substantially even as production has risen. For example, the development of shale technology during the early 2000s enabled producers to exploit deposits that had previously been considered unprofitable; as a result, the United States became once again the largest producer of crude oil. Mineral resource development expanded because of advances in technology and new discoveries.

Metal production has become more efficient as innovations and productivity improvements became widespread in mining, smelting, and refining. Improved fabrication and new alloys have allowed less metal to be used without loss of strength. Despite radical changes in supply and consumption, metals prices, in real terms, have seen cycles around a quite flat trend over the past century. …

Food production has increased faster than population, and most of the world’s consumers have better access to adequate food supplies today than they did a century ago. This improvement is due to technological advances in the 1900s, especially the Green Revolution. In large part because of increasing productivity, prices of agricultural commodities have experienced a downward trend over the past 100 years.

The bottom line here is that large increases in population and GDP have been matched by large increases in commodity products including energy, metals, and agriculture.

However, while the quantity demanded of commodities has risen dramatically, prices of commodities do not show much upward trend–and some show a downward trend. The figure illustrates with two examples from energy, two from agriculture, and two from metals.
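The “real terms” comparisons above work by deflating nominal commodity prices with a general price index, so that a rising nominal price can still represent a flat or falling real price. A minimal sketch of that conversion, with entirely made-up numbers:

```python
# Convert a nominal price series to real (inflation-adjusted) terms by
# dividing by a general price index and rescaling to a base year.
# All numbers here are made up for illustration.
years = [1920, 1970, 2020]
nominal_price = [1.0, 3.5, 25.0]   # hypothetical commodity price
price_level = [1.0, 4.0, 30.0]     # hypothetical general price index

# Express every year's price in base-year (1920) dollars.
base_index = price_level[0]
real_price = [p / idx * base_index for p, idx in zip(nominal_price, price_level)]

for y, r in zip(years, real_price):
    print(f"{y}: real price {r:.2f}")
```

In this illustration a twenty-five-fold nominal increase turns into a slight real decline once general inflation is stripped out–the same logic behind the flat-to-falling real price trends in the figure.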

The figures of prices also show some large fluctuations in prices over time, in what are sometimes called “commodity cycles.” For countries where the economy is heavily reliant on the production or export of one or a few commodities, the effects of these price fluctuations can be severe–and the volume discusses causes and effects of these commodity cycles at some length. But the overall pattern of higher quantities and flat or falling price levels remains.

Some readers may also be interested in “Commodity Prices and Growth in Africa,” by Angus Deaton (Nobel ’15), in the Summer 1999 Journal of Economic Perspectives.

The Bounce in Disposable Personal Income

When the pandemic hit, the general sense was that the US government couldn’t do too much to help, whether the assistance came in the form of stimulus checks, expanded unemployment payments, help to businesses (via the Paycheck Protection Program), tax cuts, and so on. Now we can look back and see the patterns in disposable personal income–that is, the income that people have after they have paid taxes and received government benefits.

The top panel shows total personal disposable income for the economy as a whole, adjusted for inflation, measured in billions of dollars. The bottom panel shows the same data on a per capita basis–that is, adjusted for the size of the US population. The data is monthly. Both figures are from the extraordinarily useful FRED website maintained by the Federal Reserve Bank of St. Louis.

What jumps out from these two figures is how dramatic the rise in personal income was, both right after the pandemic in spring 2020, and then after President Biden’s stimulus package was enacted into law early in 2021. Compare the shifts in real per capita income in 2020 and 2021 to what happened in the previous three recessions, and there’s just nothing remotely like it.

The sharp rises in disposable personal income help to explain why inflation started rising in mid-2021. The level of disposable personal income was spiking at a time when many parts of the economy (like restaurants, travel, and entertainment) were still shut down or quite constrained in many places, and at a time when supply chains were backed up. A working definition of inflation is “too much money chasing too few goods,” and that’s what happened. However, this impetus for inflation has faded in recent months, which has surely contributed to the rate of inflation sagging downward.

It also helps explain the “Great Resignation,” the pattern in which a number of people of working age dropped out of the workforce, and were not looking for jobs. Again, it will be interesting to see if some of those who left the labor market during the boom in disposable personal income return in the next year or so.

Finally, it also explains some of the economic stress that shows up in public opinion polls and news stories. After the much higher disposable personal income levels of the last two years, the economy has returned to 2019 levels of disposable personal income. That up and down is bound to be unsettling.

I find it hard to be too critical of decisions made in the teeth of the pandemic in early 2020. The level of uncertainty was just so very high. But it also seems to me that what the federal government knows how to do is send out checks–so that’s what it did. Meanwhile, policy questions like how to make more COVID tests available, how to facilitate the most widespread and rapid distribution of vaccines, or whether to re-open schools in fall 2020 got considerably less attention.

Explaining College Attendance Gaps: Academic Preparation

There are large gaps in college attendance between men and women and across ethnic groups. To what extent might these differences reflect the academic preparation of students? Sarah Reber and Ember Smith provide some baseline information on these issues in “College Enrollment Disparities: Understanding the Role of Academic Preparation” (January 2023, Center on Children and Families at Brookings). They point out:

In 2022, young men [age 25-29] were nine percentage points less likely to have a bachelor’s degree than young women (35% and 44%). … Disparities in bachelor’s degree attainment by race and ethnicity are large: 68% of Asian or Pacific Islander adults aged 25 to 29 have a bachelor’s degree, compared with 45% of white, 28% of Black, and 25% of Hispanic young adults.

But what do these gaps look like if one takes academic preparation into account? In the figure, the top set of bars shows the likelihood of men and women enrolling in college at all–including two-year and four-year colleges–while the second set of bars shows only the likelihood of enrolling in a four-year college. The “No Controls” bars show the overall averages for males and females. But notice that when you compare those with similar grade point averages or levels of academic preparation, the gap goes away.

The authors write: “Taken together, the results by gender suggest that most or all gender gaps in college enrollment are explained by differences in academic preparation. However, … GPA explains essentially all, and math test score explains none, of the gaps.”

What are the patterns by ethnicity? Again, the top row of bars shows all colleges, and the bottom row shows just four-year colleges. The “No controls” bars show the overall averages for each group. The striking pattern is that on average, Asians are more likely to attend college. But if one adjusts for grade point average or for overall academic achievement, Blacks become the most likely group to attend college. The final two sets of bars show an adjustment for “socioeconomic status,” which clearly reduces the differences across groups, and then a joint adjustment for socioeconomic status and academic preparation, which looks a lot like the adjustment for academic preparation alone.

These types of results are just descriptions of patterns in the data. They are not studies that dig into cause-and-effect relationships or offer policy recommendations. In addition, the grade point average or academic preparation of a high school student is partly about the performance of K-12 schools, but also about differences across families, peer groups, and neighborhoods. But in my reading, this evidence strongly suggests that college attendance gaps across men and women, or across ethnic groups, largely reflect the academic preparation of high school students.

Getting Serious about Carbon Dioxide Removal

Perhaps the simplest way of removing carbon dioxide from the atmosphere is to manage forests in such a way that they soak up more carbon. But there are other ways, like capturing carbon directly from the air and then storing it deep underground, or in the form of mineral deposits. It’s perhaps not widely known that climate change models describing potential paths to reduce the risks of climate change typically assume that carbon dioxide removal will rise dramatically, and that it will be an important part of any ultimate solution. The University of Oxford’s Smith School of Enterprise and the Environment provides an overview of the science, policy, and public opinion in “The State of Carbon Dioxide Removal Report, 2023.” The lead contributors are Stephen M. Smith, Oliver Geden, Jan C. Minx, and Gregory F. Nemet.

Here’s a chart showing the various approaches to carbon dioxide removal in the first column and the route by which each works in the second column. The third column, headed “TRL,” stands for “Technology Readiness Level,” ranked from theoretically possible at 1 to operationally ready at 9. The last two columns show an estimate of what the cost of removing carbon might be if the technology were developed at large scale, and the potential for how much carbon it could remove (measured in gigatons of CO2).

Broadly speaking, these approaches can be summarized into three categories, based on how the carbon is stored.

Biological storage (on land and in oceans). While annual plants do not retain carbon durably, trees can retain their carbon for decades, centuries or more. Soils and wetlands are a further store of carbon, derived from compounds exuded by roots and dead plant matter. In the oceans, aquatic biomass may sink to the ocean floor and become marine sediment. Carbon can be retained durably in these ecosystems, especially if managed carefully to reduce disturbances.
Product storage. Many carbon-based products do not constitute durable storage. However, construction materials and biochar (a carbon-rich material produced by heating biomass in an oxygen-limited environment) can store carbon for decades or more. These carbon-based products can be made from conversion of harvested biomass (in the cases of biochar and wood in construction), from concentrated CO2 streams or even from CO2 from ambient air (in the case of aggregates).
Geochemical storage. Concentrated CO2 can be stored in geological formations, using depleted oil and gas fields or saline aquifers, or reactive minerals such as basalt. Geochemical capture leads directly to long-term storage of CO2 in the form of carbonate minerals or bicarbonate in the ocean.

The report emphasizes that it is extraordinarily unlikely that carbon dioxide removal can address atmospheric carbon levels on its own. The notion is that it can supplement other efforts. After all, approaches that involve reduced use of fossil fuels only slow the speed at which carbon is being added to the atmosphere, while the effect of carbon dioxide removal is actually to reduce pre-existing levels of carbon to lower levels than they would otherwise reach. The report argues:

Virtually all scenarios that limit warming to 1.5°C or 2°C require “novel” CDR, such as BECCS, biochar, DACCS, and enhanced rock weathering. However, only a tiny fraction (0.002 GtCO2 per year) of current CDR results from novel CDR methods. Closing the CDR gap requires rapid growth of novel CDR. Averaging across scenarios, novel CDR increases by a factor of 30 by 2030 (and up to about 540 in some scenarios) and by a factor of 1,300 (up to about 4,900 in some scenarios) by mid-century. Yet no country so far has pledged to scale novel CDR by 2030 as part of their Nationally Determined Contribution, and few countries have so far published proposals for upscaling novel CDR by 2050.

Indeed, if one looks at the present amount of carbon dioxide removal, more than 99% is happening through reforestation, and only about 0.1% involves the more novel forms of carbon dioxide removal listed in the table.
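Taking the report’s figures at face value, the implied scale-up is easy to check with back-of-the-envelope arithmetic, using only the averages quoted above:

```python
# Back-of-the-envelope check of the scale-up implied by the report:
# current novel CDR is about 0.002 GtCO2 per year, and the scenarios
# average a 30-fold increase by 2030 and a 1,300-fold increase by 2050.
current_novel_cdr = 0.002  # GtCO2 per year

by_2030 = current_novel_cdr * 30     # average scenario, 2030
by_2050 = current_novel_cdr * 1300   # average scenario, mid-century

print(f"Implied novel CDR by 2030: {by_2030:.2f} GtCO2/year")
print(f"Implied novel CDR by 2050: {by_2050:.1f} GtCO2/year")
```

Even the average-scenario factors imply moving from thousandths of a gigaton per year of novel removal today to roughly 2.6 GtCO2 per year by mid-century–an enormous industrial build-out from a nearly standing start.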

The other key point is that if at least some of these technologies are to be workable at scale, a lot of innovation and learning-by-doing is going to be needed over a sustained period of time. If countries aren’t starting a wide range of experimental projects in carbon dioxide removal very soon, then the necessary knowledge base won’t exist for large-scale use of carbon dioxide removal 2-3 decades from now. And to repeat myself, the main scenarios for mitigating the risks of climate change all include the assumption that this technology will become developed and workable. Without carbon dioxide removal technologies, the already very difficult task of dealing with rising levels of atmospheric carbon becomes much harder.

Is Globalization Fading?

Chad Bown tackles this question and other trade-related topics in an interview with Janet Bush of the McKinsey Global Institute (January 18, 2023, “Forward Thinking on the complicated and contentious state of global trade with Chad P. Bown”).

If globalization means trade agreements …

If deglobalization means less nondiscriminatory trade policies, most all the major economies of the world are members of the World Trade Organization. And one of the fundamental rules, pillars, of the World Trade Organization is the most-favored-nation, MFN, rule. You’re supposed to apply nondiscriminatory policy, basically the same tariffs toward everyone. Well, that is now changing.

We saw that really beginning to change in 2018, 2019, in the context of the US–China trade war, where those two countries went from having tariffs toward each other that were the nondiscriminatory types that they applied toward trading partners in the rest of the world. US toward China was about 3 percent. China toward the US was about 8 percent. Well, nowadays those countries are applying tariffs on the order of 20, 21 percent toward each other. Toward everyone else they’re still in the 3 to 8 percent range, so still relatively low. But they’re very much applying discriminatory trade policies, tariffs, toward one another.

With the conflict, Russia’s invasion of Ukraine, we have seen not only financial sanctions but a number of countries, the United States, EU, UK, Canada, the G-7, applying much higher tariffs against Russia. Discriminating, essentially breaking one of the central tenets of the WTO system—now, for good reason, obviously. But if that is what we mean by deglobalization, meaning no longer applying nondiscriminatory policies toward each other, trying to shape economic activity for noneconomic reasons, say, I do think some of that is going on. At the end of all of that, we still may have just as much trade as we had before, just as much cross-border movement of goods and services, but the patterns of that may look fundamentally different than the way it was before all this new stuff started happening over the last four or five years.

The tradeoffs between economies of scale and a wider distribution of global suppliers

Companies, their job is to reduce costs and provide goods and services to consumers for as low a price as possible. And if governments reduce trade barriers and make it seem as though trade relations with trading partners are secure, reduce uncertainty, then it makes sense for companies to make big investments and build supply chains to try to reduce those costs. What we have seen happen is you do, at the end of the day, have certain types of goods where you do have really geographically concentrated sources of supply. …

Another one that we have seen is the semiconductor story, and high-end semiconductors in particular. For the United States, globally, the main [sources] of high-end semiconductors, the fastest, fanciest chips, are essentially Taiwan and South Korea. Companies like TSMC and Samsung produce the vast majority of high-end semiconductors. For the United States, those are not countries or entities of concern. Those are places that are friends, allies.

And yet it doesn’t make a lot of sense in a new world where we have not just geopolitical shocks and concerns, but you’ve got pandemics and you’ve got climate-induced shocks. Whether that’s incredible storms, floods, droughts, it really doesn’t make sense to have incredibly geographically concentrated sources of supply, even though those may be incredibly economically efficient and may be the result of very good economic policies, which led firms to achieve economies of scale and build out really impressive supply chains. … Additional geographic diversification could ultimately be beneficial.

Now, it may end up being more costly. There are massive economies of scale of having all of that production locally sourced there in Taiwan or in South Korea. And provided nothing ever goes wrong, then great. But the concern is, we now live in a world where we’re more likely to be exposed to things that could go wrong, and we have to plan for that accordingly. And we have to convince the companies to do more for their supply chains, and some of that likely needs to come about through policy.

American Doubts about Health Care

It’s not surprising that some Americans would have increased doubt about the US health care system since early 2020 and the arrival of the COVID pandemic. Of course, the pandemic was not caused by the US health care system, but with over one million deaths attributed to COVID so far, the health care system was not likely to escape a share of the blame. But that said, what is most striking about the attitudes of Americans toward the US health care system is not their more recent doubts, but rather the steadiness of their doubts during the last two decades, according to results from just-released Gallup polling (January 19, 2023).

For example, this measure suggests that only about one-third of Americans think the US health care system has minor or no problems–and that percentage hasn’t varied much in the last 20 years. Consider for a moment all the changes in the US health care system over the last 20 years, including the passage and enactment of the Patient Protection and Affordable Care Act of 2010. In the longer-term pattern of satisfaction with the overall US health care system, none of these changes seems to have had a major positive or negative effect on how Americans see health care.

Similarly, while about 60% of Americans are consistently satisfied with the cost of their own health care, only about 20-25% are satisfied with the total cost of health care for the country.

Perhaps the most interesting pattern in these survey results is that while about 60-65% of the over-55 age group view US health care quality as “excellent” or “good,” the satisfaction of younger age groups with the quality of US health care has been diminishing for the last decade or so. Unless the opinions of younger Americans about US health care undergo a substantial shift, the pressures to “do something” about the quality of US health care seem likely to rise.