Mergers and Enforcement in 2021: Hart-Scott-Rodino

The first step in US antitrust enforcement is the requirement, under the Hart-Scott-Rodino Antitrust Improvements Act of 1976, that all mergers above a certain size–now $92 million–must be reported to the federal government before they occur. This gives the authorities at the Federal Trade Commission and the Antitrust Division at the US Department of Justice a chance to challenge mergers before they occur. How is that working out? The Hart-Scott-Rodino law also requires an annual report on the state of antitrust in the previous year, and the report for fiscal 2021 has just been published.

Here’s the headline graph showing the number of mergers reported to the federal government for each year in the last decade.

There was an enormous merger boom in 2021. But by the middle of 2022, when the stock market was flattening and dipping and interest rates were rising, mergers started slowing down.

In a market-oriented economy, it makes some sense that a lot of mergers should be allowed to proceed. Of course, private firms will sometimes make mistakes in merger decisions, just as they sometimes do in investment decisions, new product decisions, hiring and firing, and so on. But the firms and their managers are the ones closest to the ground with detailed information. There’s no reason to think that the government will be in a better position to figure out if a certain deal will improve a company’s efficiency or productivity. But if the merger threatens to injure consumers by limiting competition, antitrust authorities may have a role to play.

So out of the 3,520 mergers reported in 2021, how many would you guess were challenged by the antitrust authorities? The Federal Trade Commission challenged 18: five settled by consent orders (that is, the companies proceeded after adjusting the deal); seven in which the transaction was abandoned or restructured; and six that led to litigation. The Antitrust Division at the US Department of Justice challenged another 14 mergers: two led to lawsuits; nine to consent decrees; and three in which the transaction was restructured without a formal consent decree.

Overall, less than 1% of the mergers were challenged, during a giant boom year for mergers. Even for someone like me, who believes that companies should often be allowed to proceed and to make mistakes, that’s not a big number.
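As a quick back-of-the-envelope check on that figure, here is a small Python sketch totaling the challenge counts quoted above from the HSR report:

```python
# Merger challenges reported for fiscal 2021, by outcome category.
ftc_challenges = 5 + 7 + 6   # consent orders + abandoned/restructured + litigation
doj_challenges = 2 + 9 + 3   # lawsuits + consent decrees + informal restructurings
total_challenges = ftc_challenges + doj_challenges

reported_mergers = 3520
challenge_rate = total_challenges / reported_mergers

print(f"{total_challenges} of {reported_mergers} mergers challenged "
      f"= {challenge_rate:.2%}")
# prints: 32 of 3520 mergers challenged = 0.91%
```

Thirty-two challenges against 3,520 reported transactions works out to just over nine-tenths of one percent, consistent with the "less than 1%" figure in the text.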

Some of the mergers that were blocked seem like relatively straightforward cases. For example, Aon plc was blocked from acquiring Willis Towers Watson plc, which would have combined two of the three largest insurance brokers in the world. CoStar was blocked from acquiring RentPath; the two companies operate two of the major websites that match renters with apartments. Some hospitals in Memphis were blocked from merging.

But some of the more interesting cases in 2021 were situations in which, rather than two well-established firms merging, the case involved the antitrust authorities seeking to improve possibilities for future competition. For example, Visa had proposed buying a company called Plaid. The antitrust authorities argued that Visa is effectively a monopolist in online debit card services, and that while Plaid is currently a small firm, it has some possibility of becoming a future competitor. In another case:

Illumina’s $7.1 billion proposed acquisition of Grail, a maker of non-invasive, early detection liquid biopsy that screens for multiple types of cancer using DNA sequencing. Illumina was the only provider of DNA sequencing that is a viable option for these multi-cancer early detection (MCED) tests. The complaint alleged that the proposed merger would likely harm innovation in the market for MCED tests.

These antitrust efforts, which turn on possibilities of future competition or of harms to future innovative efforts (after all, perhaps the combined company would have the resources to make a stronger innovative effort?), are a gray area in the law, but one that the current antitrust authorities seem eager to pursue.

In the last week, the Federal Trade Commission lost a case to block Facebook from buying a company called Within, which is a virtual reality fitness startup. The argument from the FTC was that this merger could inhibit future competition in the market for virtual reality fitness apps. I have no strong opinion on the legalities of the ruling. I’ve read that this case was viewed as a borderline call, even within the FTC. But I will note that if, out of thousands of mergers per year, the antitrust authorities choose to focus their limited efforts and resources on competition within the market for virtual reality fitness apps, then they seem to be implicitly saying that anti-competitive concerns for the US economy as a whole are not especially severe.

Trade Sanctions: How Well Do They Work?

For a country with a large economy like the United States, trade sanctions have obvious attractions. They offer what looks like a muscular pursuit of foreign policy goals–that is, more than giving worthy speeches or recalling ambassadors from foreign embassies. They don’t involve declaring war or sending troops to fight. They can seem cost-free, in the sense that declaring that certain kinds of trade will be blocked doesn’t involve a direct budgetary cost. Trade sanctions also appeal to protectionists who view trade with suspicion, anyway.

So what’s the available evidence on trade sanctions and how well they work? Two papers take up this issue in the just-published Winter 2023 issue of the Journal of Economic Perspectives. T. Clifton Morgan, Constantinos Syropoulos and Yoto V. Yotov provide an overview in “Economic Sanctions: Evolution, Consequences, and Challenges,” while Marco Cipriani, Linda S. Goldberg and Gabriele La Spada focus on financial sanctions and the workings of the SWIFT system in “Financial Sanctions, SWIFT, and the Architecture of the International Payment System.” (Full disclosure: I’ve been the Managing Editor of JEP for 37 years, and thus am probably predisposed to think the articles are of interest. But no money is being made here. These articles, like all JEP articles back to the first issue, are freely available online courtesy of the American Economic Association.)

Syropoulos, Yotov, and a group of co-authors have created the Global Sanctions Data Base, which provides systematic data about 1,325 sanction cases during the period 1950–2022. A figure from the JEP paper shows the sharp rise in sanctions in the last couple of decades. In particular, financial and travel sanctions have risen especially quickly.

How well do the sanctions work? Here, Morgan, Syropoulos, and Yotov point out an interesting disjunction in how economists and political scientists answer this question. They write: “[E]conomists have tended to interpret ‘effectiveness’ in terms of the economic damage that sanctions cause, while political scientists have considered sanctions ‘effective’ only if they achieve their declared political objectives.” For example, economists tend to judge the recent sanctions on Russia in terms of how much they hurt Russia’s economy, while political scientists tend to judge sanctions in terms of whether they lead Russia to stop its war in Ukraine. In addition, the success rate of trade sanctions is doubtless linked to factors like whether they are internationally coordinated, how well they are targeted, and whether they are reinforced with other policies. Or perhaps sanctions in the past have only been adopted when they seemed especially likely to work, which means that their past performance cannot be casually extrapolated to future scenarios.

Morgan, Syropoulos, and Yotov work through these various issues and offer this thought: “[E]ven in their worst light, sanctions have been shown to be effective in a modest fraction of cases. Even a 25 percent success rate for sanctions may be considerably higher than doing nothing, and the costs may be substantially lower than other alternatives, like overt military interventions. Perhaps the ‘sanctions glass’ should be viewed as one-quarter full, not three-quarters empty.”

What about financial sanctions? In particular, the policy decision to cut off Russia from the SWIFT system received considerable attention, although even among economists, a fair number could not tell you in any detail–at least pre-2022–just what the SWIFT system actually did. Cipriani, Goldberg and La Spada discuss the history of financial sanctions, with some prominent examples of how they have worked or not in the past, and with some emphasis on the Society for Worldwide Interbank Financial Telecommunication, more commonly known as SWIFT.

When two banks need to transfer funds inside a country, they can do so through the central bank for that country–which is why the Federal Reserve is sometimes called the “bank for banks.” But most central banks (Switzerland is an exception) don’t facilitate transactions between domestic and foreign banks. Instead, there is a network of “correspondent” banks that operate in more than one country. If a Russian bank wants to transfer or receive funds with a bank in another country, it must typically operate through one of these correspondent banks. Messages must be sent back and forth between the banks involved to carry out these transactions.

Back in the 1950s, 1960s, and 1970s, those messages were typically sent by Telex (for those of you old enough to remember that term). But Telex messages had no special formatting and were comparatively high-cost. Thus, banks and bank regulators around the world set up SWIFT as a nonprofit financial institution based in Belgium–and thus directly under the financial laws of Belgium and the European Union. The idea was to have a set of computer protocols for a wide array of financial transactions. The correspondent banks were still needed, but messages about international financial transactions could be sent quickly and efficiently. Here’s a figure showing the number of financial institutions connected to SWIFT and the number of messages sent over the system.

The key point to remember here is that SWIFT doesn’t actually move any money or hold any money. It is just a messaging system between banks. In addition, the computer protocols that SWIFT has created for financial transactions are in the public domain. If a financial institution wants to use those protocols to move money, but to send the messages outside of the SWIFT system, it is quite possible to do so. Thus, cutting off Russia’s access to SWIFT surely raises the cost to Russia of making international financial transactions, but it does not block the transactions themselves.

The SWIFT system is far and away the dominant system for sending messages between international financial institutions. But countries around the world have noticed that SWIFT is being used for trade sanctions–for Iran for a period of time in 2012, for North Korea in 2017, and now for Russia–and some of them are setting up alternative systems using the SWIFT protocols that don’t do much now, but could be ramped up if needed. As Cipriani, Goldberg, and La Spada point out, Russia has set up such a system:

Russia developed its own financial messaging system, SPFS (System for Transfer of Financial Messages). SPFS can transmit messages in the SWIFT format, and more broadly messages based on the ISO 20022 standard, as well as free-format messages. More than 400 banks have already connected to SPFS, most of them Russian or from former Soviet Republics. A few banks from Germany, Switzerland, France, Japan, Sweden, Turkey, and Cuba are also connected. By April 2022, the number of countries with financial institutions using SPFS had grown from 12 to 52, at which point the Central Bank of Russia decided not to publish the names of SPFS users. Due to its limited scale, SPFS mainly processes financial messages within Russia; in 2021, roughly 20 percent of all Russian domestic transfers were done through SPFS, with the Russian central bank aiming to increase this share to 30 percent by 2023 (Shagina 2021).

And so has China:

In 2015, the People’s Bank of China launched the Chinese Cross-Border Interbank Payment System (CIPS) with the purpose of supporting the use of the renminbi in international trade and international financial markets. In contrast to SWIFT, … CIPS is not only a messaging system but also offers payment clearing and settlement services for cross-border payments in renminbi. … [A]t the end January 2022, there were 1,280 participants from 103 countries. Among the direct participants, eleven are foreign banks, including large banks from the United States and other developed countries. The system is overseen and backed by People’s Bank of China. Similarly to Russia’s SPFS, CIPS uses the SWIFT industry standard for syntax in financial messages. Indirect participants can obtain services provided by CIPS through direct participants.

India is developing an interbank messaging system based on the SWIFT protocols as well, and there are some reports in the business press of plans for Russia, China, and India to merge these systems, although at present SWIFT remains far and away the dominant system for sending these international interbank messages.

Economic sanctions can impose costs. But whether they can achieve the desired political goals is a more complicated issue. The answer seems to be “sometimes,” depending heavily on specific circumstances and on the possibilities for evading the sanctions.

Hard and Soft Landings: The Federal Reserve’s Record

When the Federal Reserve raises interest rates to fight inflation, a “hard landing” refers to the possibility that inflation is reduced at the cost of a significant recession, while a “soft landing” refers to the possibility that inflation is reduced with only a minor recession–or perhaps even no recession at all. Perhaps the canonical example of a hard landing happened in the late 1970s and early 1980s, when the Fed under chair Paul Volcker broke the back of the inflation of the 1970s by raising interest rates, but at the cost of back-to-back recessions in 1980 and 1981-82.

What is the historical record of the Federal Reserve in raising interest rates and managing a soft landing? Alan S. Blinder tackles that question in the just-published Winter 2023 issue of the Journal of Economic Perspectives in “Landings, Soft and Hard: The Federal Reserve, 1965–2022.” (Full disclosure: I’ve been the Managing Editor of JEP for 36 years, so I am perhaps predisposed to find the articles of interest.)

From Blinder’s paper, here’s a figure showing the federal fund interest rate over time. There are some challenges in interpreting the figure when there are jagged jumps up and down in a short time, but Blinder argues that it is fair to read the historical record as involving 11 episodes where the Fed raised interest rates substantially since 1965.

What jumps out from the figure is that there are a number of times where the Fed raised interest rates and either it was not followed by a recession (1, 6, and 8), or the recession was very short (9), or the recession that followed was not caused by the higher interest rates (10 and 11). The Fed raising interest rates to nip inflation in the bud in 1994 (episode 8) is perhaps the best-known example of a landing so soft that a recession didn’t even occur. As another example, the Fed was gradually raising interest rates in the lead-up to the pandemic (episode 11), but the pandemic recession was clearly not caused by higher interest rates!

Here’s a table showing Blinder’s evaluation of the type of landing that followed each of the 11 episodes of monetary tightening. When he asks “was it a landing?”, he is raising the possibility that the higher interest rates didn’t actually bring inflation down at that time. For “would have been soft” (episode 7), Blinder argues that the Fed might have pulled off a soft landing with its interest rate increase in 1988-89, except for Iraq’s invasion of Kuwait in 1990.

The Fed has raised its key policy interest rate (the “federal funds rate”) from near-zero in March 2022 to about 4.6%–with talk of additional increases to come. Based on the historical record, what insights are possible about whether a hard or soft landing is likely?

  1. There are clearly examples where higher interest rates from the Fed, in the service of fighting inflation, were followed by recessions.
  2. The Fed has faced two main challenges in the last few years: the pandemic recession, where the policy response led to a huge burst of disposable income along with supply chain problems, and then the Russian invasion of Ukraine in early 2022, which caused a new burst of higher prices for energy and food along with additional supply chain disruptions. History doesn’t offer do-overs. But it’s at least possible that the inflation which started in 2021 might have faded on its own if it had not been reinforced by the shock from the Russian invasion.
  3. One can argue that the last three recessions were not caused by higher Fed interest rates, but instead by the pandemic (2020), the implosion of financial instruments related to the housing price bubble (2007-2009), and the end of the dot-com boom in stock prices and investment levels (2001). Thus, perhaps the key question about the risks of a recession in 2023 may be less about interest rate policy and more about whether the US or world economy experiences a severe negative shock this year.
  4. Several of the Fed’s interest rate increases over time can be thought of as readjusting back to a more reasonable long-run level. For example, the rising Fed interest rates pre-pandemic were in some ways just based on a belief that the rate shouldn’t and couldn’t stay near zero percent forever. Or going back to Blinder’s episode 6, this was a time of chaotic shifts after a severe recession, with a plummeting price of oil and falling inflation, and the Fed at that time seemed to believe that it had gone a little too far in cutting interest rates, so it adjusted back. Part of the Fed increasing interest rates in the last year or so is surely a belief that although it made sense to take the policy interest rate down to zero percent in the pandemic recession, again, it shouldn’t and couldn’t stay there forever.
  5. Macroeconomics is hard because the key factors driving the economy shift over time. It’s not obvious, for example, that the same lessons which applied to the stagflationary period of the 1970s should apply equally well and in the same ways to the dot-com boom-and-bust of the 1990s, or the housing price bubble of the early 2000s, and also to a short-and-sharp recession caused by a pandemic.
  6. Some of the arguments about inflation are really about momentum. When inflation rises, does it have a tendency to fade out? Or does it have a tendency to maintain the higher rate of inflation? Or does it have a tendency to build momentum, like a rock rolling downhill? Which scenario prevails probably depends on how the causes of the inflation are perceived; on the recent historical record and the credibility of the central bank in fighting inflation; and on what expectations firms, workers, and consumers have about future inflation.

My own sense, for what it’s worth, is that the US economy is unlikely to escape this episode of higher Federal Reserve interest rates without experiencing a recession, by which I mean a period of higher unemployment and lower production. It seems to me that the pressures and tensions unleashed by the higher interest rates are working their way into reduced borrowing and credit, as well as tensions in bond markets. The Fed seems to be taking a middle road here, with a belief that part of the 6.5% inflation rate from December 2021 to December 2022 was temporary–in the sense that pandemic-related spending will fall, supply chain issues will resolve, and tensions from the Russian invasion of Ukraine will be manageable. Thus, the Fed is trying to raise interest rates only by as much as necessary to be clear that inflation will not gain a permanent foothold, while minimizing the risk of a hard landing.

Winter 2023 Journal of Economic Perspectives Free Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Winter 2023 issue, which in the Taylor household is known as issue #143. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.

______________________

Symposium: Trade Sanctions and International Relations

“Economic Sanctions: Evolution, Consequences, and Challenges,” by T. Clifton Morgan, Constantinos Syropoulos and Yoto V. Yotov

Taking an interdisciplinary perspective, we examine the evolution of economic sanctions in the post-World War II era and reflect on the lessons that could be drawn from their features and patterns of use. We observe that, during this time, there has been a remarkable increase in the use of sanctions as an instrument of foreign policy. We classify this period into four ‘eras’ and discuss, in this context, how the evolution of sanctions may be linked to salient features of the contemporaneous international political and economic orders. Our review of the related literatures in economics and political science suggests, among other things, that our understanding of sanction processes could be significantly advanced by marrying these perspectives. We conclude by identifying several questions and challenges, and by discussing how interdisciplinary research could address them.

Full-Text Access | Supplementary Materials

“Financial Sanctions, SWIFT, and the Architecture of the International Payment System,” by Marco Cipriani, Linda S. Goldberg and Gabriele La Spada

Financial sanctions, alongside economic sanctions, are components of the toolkit used by governments as part of international diplomacy. The use of sanctions, especially financial, has increased over the last 70 years. Financial sanctions have been particularly important whenever the goals of the sanctioning countries were related to democracy and human rights. Financial sanctions restrict entities—countries, businesses, or even individuals—from purchasing or selling financial assets, or from accessing custodial or other financial services. They can be imposed on a sanctioned entity’s ability to access the infrastructures that are in place to execute international payments, irrespective of whether such payments underpin financial or real activity. This article explains how financial sanctions can be designed to limit access to the international payment system and, in particular, the SWIFT network, and provides some recent examples.

Full-Text Access | Supplementary Materials

Symposium: Monetary Policy

“Monetary Policy When the Central Bank Shapes Financial-Market Sentiment,” by Anil K Kashyap and Jeremy C. Stein

Recent research has found that monetary policy works in part by influencing the risk premiums on both traded financial-market securities and intermediated loans. Research has also shown that when risk premiums are compressed, there is an increased likelihood of a reversal that damages the credit-supply mechanism and the real economy. Together these effects create an intertemporal tradeoff for monetary policy, as stimulating the economy today can sow the seeds of a future downturn that might be difficult to offset. We draw out some implications of this tradeoff for the conduct of monetary policy.

Full-Text Access | Supplementary Materials

“Risk Appetite and the Risk-Taking Channel of Monetary Policy,” by Michael D. Bauer, Ben S. Bernanke and Eric Milstein

Monetary policy affects financial markets and the broader economy in part by changing the risk appetite of investors. This article provides new evidence for this so-called risk-taking channel of monetary policy by revisiting and extending event-study analysis of Federal Open Market Committee announcements. We document significant effects of unexpected monetary policy changes on risk indicators drawn from equity, fixed-income, credit, and foreign exchange markets. We develop a new index of risk appetite based on the common component of these indicators. Surprise monetary easing leads to strong and persistent increases in our index, and vice versa for tightening surprises, consistent with the view that monetary policy affects asset prices in large part through its effects on risk appetite. We discuss the implications of the risk-taking channel for monetary policy transmission, optimal monetary policy, and financial stability.

Full-Text Access | Supplementary Materials

“Landings, Soft and Hard: The Federal Reserve, 1965–2022,” by Alan S. Blinder

“Soft landings,” that is, cases in which the central bank tightens monetary policy to fight inflation but does not cause a recession (which would be a “hard landing”), are thought to be difficult to achieve and extremely rare. According to the conventional wisdom, the Federal Reserve has managed to achieve only one soft landing in the past 60 years—in 1994–1995. This paper studies the eleven episodes of monetary policy tightening by the Fed since 1965, and concludes that the central bank has a better record than that—that as long as the criteria for softness are not too stringent, and the Fed was actually trying to land the economy softly, the Fed has succeeded several times. Achieving a soft landing, however, requires both skill in managing monetary policy and the absence of adverse external shocks.

Full-Text Access | Supplementary Materials

“Monetary Policy and Inequality,” by Alisdair McKay and Christian K. Wolf

We ask three questions about the connection between monetary policy and inequality. First, does monetary policy affect inequality? While different households respond to changes in monetary policy for different reasons, we argue that the overall consumption effects are relatively evenly distributed across households. Second, does household heterogeneity change our understanding of monetary policy transmission? A more careful account of microeconomic consumption behavior materially alters our understanding of transmission channels, but has rather limited effect on our general view of the aggregate effects of monetary policy. Third, does inequality affect the optimal conduct of monetary policy? Since monetary policy is a rather blunt distributional tool, we argue that even a central bank with an explicit distributional mandate would not deviate much from conventional policy prescriptions.

Full-Text Access | Supplementary Materials

Symposium: Hispanic Americans

“Unraveling the Hispanic Health Paradox,” by José Fernandez, Mónica García-Pérez and Sandra Orozco-Aleman

In 2019, Hispanics in the US had a life expectancy advantage of 3.0 years and 7.1 years over non-Hispanic Whites and non-Hispanic Blacks, respectively, despite having real-household income values 26 percentage points lower than Non-Hispanic White households. Hispanics appear to have equal or even better health outcomes relative to non-Hispanic Whites across various health measures. This is known as the Hispanic health paradox. This paper underscores the importance of disaggregating Hispanics by ancestry and age profile when discussing the paradox across key health outcomes. It also provides an overview of the leading explanations, such as the salmon bias and the healthy immigrant effect. Further, it highlights the role of healthcare access and usage in this discussion. Ignoring these sources of bias has important consequences for how morbidity and mortality among Hispanics are measured within widely used national datasets.

Full-Text Access | Supplementary Materials

“Hispanic Americans in the Labor Market: Patterns over Time and across Generations,” by Francisca M. Antman, Brian Duncan and Stephen J. Trejo

This article reviews evidence on the labor market performance of Hispanics in the United States, with a particular focus on the US-born segment of this population. After discussing critical issues that arise in the US data sources commonly used to study Hispanics, we document how Hispanics currently compare with other Americans in terms of education, earnings, and labor supply, and then we discuss long-term trends in these outcomes. Relative to non-Hispanic Whites, US-born Hispanics from most national origin groups possess sizeable deficits in earnings, which in large part reflect corresponding educational deficits. Over time, rates of high school completion by US-born Hispanics have almost converged to those of non-Hispanic Whites, but the large Hispanic deficits in college completion have instead widened. Finally, from the perspective of immigrant generations, Hispanics experience substantial improvements in education and earnings between first-generation immigrants and the second-generation consisting of the US-born children of immigrants. Continued progress beyond the second generation is obscured by measurement issues arising from high rates of Hispanic intermarriage and the fact that later-generation descendants of Hispanic immigrants often do not self-identify as Hispanic when they come from families with mixed ethnic origins.

Full-Text Access | Supplementary Materials

“US Immigration from Latin America in Historical Perspective,” by Gordon Hanson, Pia Orrenius and Madeline Zavodny

The share of US residents who were born in Latin America and the Caribbean plateaued recently, after a half century of rapid growth. Our review of the evidence on the US immigration wave from the region suggests that it bears many similarities to the major immigration waves of the nineteenth and early twentieth centuries, that the demographic and economic forces behind Latin American migrant inflows appear to have weakened across most sending countries, and that a continued slowdown of immigration from Latin America post-pandemic has the potential to disrupt labor-intensive sectors in many US regional labor markets.

Full-Text Access | Supplementary Materials

Articles

“Oleg Itskhoki: 2022 John Bates Clark Medalist,” by Andrew Atkeson and Gita Gopinath

The 2022 John Bates Clark Medal of the American Economic Association was awarded to Oleg Itskhoki, Professor of Economics at the University of California, Los Angeles for his path breaking contributions in international economics. This article summarizes Oleg Itskhoki’s work and places it in the context of the broader literature and emphasizes how it has shed new light on a number of long-standing puzzles regarding the behavior of exchange rates and international relative prices more generally and their connection to macroeconomic fluctuations and government’s choices of monetary and fiscal policies.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

Biologics and Biosimilars: A Test for Intellectual Property

The fundamental tradeoff of intellectual property rights–like patents and copyrights–is that the inventor gets a government-protected monopoly for a period of time, as an incentive for innovation, but then the innovation passes into the public domain. For example, this is the moment when generic equivalents of pharmaceuticals can start competing with brand-name drugs.

The period when a lucrative invention shifts into the public domain is primed for politics and strategy, as the incumbent firm tries to hold on to its leading competitive position. In one famous example, the US Congress passed in 1998 what became known as the “Mickey Mouse Protection Act,” which extended copyright protection for Mickey and many lesser-known creations to last for 95 years–an extension of 21 years from the previous rule. A few decades ago, when generic drugs started to take over from previously patented drugs, there was a storm of litigation and antitrust action around questions like whether generics had to go through similar testing for safety and efficacy by the Food and Drug Administration, and how to judge whether a generic drug was the same as–or perhaps just a little different from–the previously patented drug.

Now, 91% of US prescriptions are for generic drugs. By one estimate, this saves over $370 billion per year for US health care consumers. But almost all of those generic drugs are “small molecule” drugs, which essentially means that they are created by chemistry. Starting a couple of decades ago, a new wave of “biologic” drugs arrived. These begin by isolating certain components from humans, animals, or microorganisms, and then growing them through biotechnology or related techniques. They include “vaccines, blood and blood components, allergenics, somatic cells, gene therapy, tissues, and recombinant therapeutic proteins.” As one example, the COVID vaccines are biologics. The highest-revenue prescription drug of all time, AbbVie’s Humira–an injectable treatment for autoimmune conditions like rheumatoid arthritis–is a biologic. It can cost some patients $70,000 per year.

A number of biologics are remarkable advances in health care, and bring substantial benefits to patients. Many of them also have very high prices. Indeed, the rough estimates seem to be that biologics account for 2% of US prescriptions for drugs, but 40% of total spending on prescription drugs. Of the top 10 prescription drugs in the US in 2021 by revenue, seven are biologics–including two versions of the COVID vaccine. Industry projections are that in a few years, annual revenues from biologics will exceed those for small-molecule drugs by $100 billion per year or more. Again, part of the reason is that so many small-molecule drugs are now available in generic form, while many leading biologics are still on patent. Humira is just about to go off patent this year.
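To see how lopsided that spending is, a quick back-of-the-envelope calculation helps. This is my own arithmetic: the 2% and 40% shares are the rough estimates quoted above, and treating everything outside of biologics as a single non-biologic category is a simplification.

```python
# Back-of-the-envelope sketch using the rough shares quoted above:
# biologics are ~2% of US prescriptions but ~40% of prescription spending.
# Treating the remainder as one non-biologic category is my simplification.

biologic_rx_share = 0.02     # share of prescriptions
biologic_spend_share = 0.40  # share of spending

other_rx_share = 1 - biologic_rx_share
other_spend_share = 1 - biologic_spend_share

# Spending per prescription, relative to the overall average
biologic_relative_cost = biologic_spend_share / biologic_rx_share  # 20x average
other_relative_cost = other_spend_share / other_rx_share           # ~0.6x average

ratio = biologic_relative_cost / other_relative_cost
print(f"An average biologic prescription costs roughly {ratio:.0f}x "
      f"an average non-biologic prescription")
```

On these rough shares, the average biologic prescription costs on the order of 30 times the average non-biologic prescription, which is why the Humira patent fight matters so much.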

The generic equivalents for biologic drugs are called “biosimilars,” because the chemistry of these (“large molecule”) biologics often isn’t easy to define. They are often hard to manufacture, and susceptible to heat or contamination. As a result, the producers of the original biologics often obtain patents not just on a therapeutic molecule, but on a variety of manufacturing techniques, each of which produces slightly different chemical outcomes that can be patented as well.

For example, the AbbVie patent on adalimumab, the key ingredient in Humira, actually expired back in 2016. But given new manufacturing techniques and variations on the original formulation, AbbVie has actually obtained more than 100 patents related to Humira. In the antitrust biz, this is sometimes called a “patent thicket”–a term which refers to an ever-evolving body of patents, with old ones expiring and new ones coming into play, thus blocking new competition on an ongoing basis.

Back in 2010, the Biologics Price Competition and Innovation Act became law, with the goal of defining a path for biosimilar drugs to replace the original brand-name biologics, as patents expire, in the same way that generics have replaced so many small-molecule drugs in the last couple of decades. The law is based on language about whether the biosimilar is “highly similar” to the original, with no “clinically meaningful differences.” But of course, terms like “highly similar” and “meaningful differences” are basically catnip for lawyers, especially when billions of dollars of sales are at stake.

I don’t have any deep insights into how to write the rules governing when biosimilars can replace the original biologics. I readily acknowledge that there are some hard questions here, because we aren’t dealing with the precise chemistry of small-molecule drugs in this situation. There are legitimate questions about biosimilars being produced in safe ways.

But the overall goal here seems fairly clear: when patent protection expires, reasonably easy entry should be possible. If the rules for new biosimilars require extensive re-testing and long-term tests on patients, it will be much harder for biosimilars to gain a foothold. In some cases, makers of the original brand-name biologics also managed to sign exclusive deals with health care providers, where the providers agreed to use only the original biologic and to shun biosimilars. There are now 22 biosimilars approved for patients, with another seven scheduled to launch this year. But there have been essays in medical journals for several years now lamenting the slow pace at which biosimilars have been approved.

At present, Humira is the biggest test. At least eight other drug companies have plans to launch biosimilars. But along with the patent thicket that AbbVie has constructed around Humira, AbbVie is introducing two new biologics with similar effects, different active ingredients–and patent protection. Under current rules, a pharmacist can swap in a generic drug for a brand-name without needing a new prescription, but for biologics, a new prescription naming the biosimilar drug is needed. In some ways, the question of whether biosimilar competitors for Humira can become established in the market is a test case for the biosimilar industry as a whole, in the sense that it will affect whether firms see the biosimilar market as worth pursuing. But one big health care system is apparently ready to switch: “David Chen, who directs specialty drug use for Kaiser Permanente, said the insurer plans to stop covering Humira by the end of 2023. He expects at least 90 percent of patients to switch to the biosimilar alternative, and said Kaiser should save hundreds of millions of dollars a year.”

“A golden opportunity for a beleaguered biosimilars market” (Tradeoffs, January 26, 2023).

Retirement Ages: Some International Comparisons

At what age does the average person retire in high-income countries? How long is the average period of retirement? The OECD collects this information. Here’s a trimmed down table with a selection of countries.

You can see from the first two columns of data that the average age of labor market exit for US men and women is a shade under 65 years. This is lower than in Japan and Korea, and, interestingly, lower than in Sweden as well. The countries with the earliest average ages of labor market exit seem to be France, Spain, and Greece at less than 61 years, with Italy also on the low side.

The last two columns of data show expected years in retirement. For the United States, this is 18.6 years for men and 21.3 years for women, mostly reflecting the longer average life expectancies for women. The shortest expected retirements are in Japan and Korea. Interestingly, Sweden has a later average age of retirement than the US but also a longer expected retirement–which is possible because of longer life expectancies in Sweden. The longest retirement periods seem to be in the countries with the lowest retirement ages, like France, Greece, and Spain, where it is common for men to have 23 years in retirement and women 27 years.

Over time, the average age of retirement in the US has followed a U-shaped pattern over the last 50 years, first dropping by about three years and then rising back close to the earlier level. For men, the OECD data shows an average age of retirement of 65.5 years in 1970, 63.8 years in 1980, 62.4 years in 1990, 62.5 years in 2000, 62.9 years in 2010, and then 64.9 years in 2020. Expected time in retirement in the US shows a substantial rise from 1970 up through 2012, but a gradual decline since then. For men, the OECD data shows 12.8 years of expected retirement in 1970, 15.0 years in 1980, 17.0 years in 1990, 18.2 years in 2000, 19.6 years in 2010, and then–after peaking at 20.1 expected years of retirement in 2012–a gradual decline to 18.6 expected years of retirement in 2020.
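Restating the OECD series for US men from the paragraph above as data makes the U-shape easy to check. This is a minimal sketch using only the numbers quoted there; nothing new is added.

```python
# The OECD series for US men quoted above, restated as data.
# Values are copied directly from the text.

avg_retirement_age = {      # average age of labor market exit
    1970: 65.5, 1980: 63.8, 1990: 62.4,
    2000: 62.5, 2010: 62.9, 2020: 64.9,
}
expected_years_retired = {  # expected years in retirement
    1970: 12.8, 1980: 15.0, 1990: 17.0,
    2000: 18.2, 2010: 19.6, 2020: 18.6,
}

# The U-shape: retirement age falls about three years, then climbs back.
trough_year = min(avg_retirement_age, key=avg_retirement_age.get)
drop = avg_retirement_age[1970] - avg_retirement_age[trough_year]
print(f"Retirement age bottomed out in {trough_year}, "
      f"{drop:.1f} years below its 1970 level")

# Expected retirement length rose for four decades before its recent dip.
rise = expected_years_retired[2010] - expected_years_retired[1970]
print(f"Expected retirement grew {rise:.1f} years between 1970 and 2010")
```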

In a big-picture sense, this is consistent with a long-term decline in labor force participation among US men over age 65, but with an upward shift in labor market participation in the last 20 years or so. From the Our World in Data website:

Given that the US has a relatively late age of expected retirement and relatively short period of expected retirement, one might expect that the US Social Security system of government pensions would be in relatively good financial shape compared to some other countries, but this doesn’t seem to be correct. Consider this table from the Mercer CFA Institute Global Pension Index 2022, which ranks pension systems across 44 countries for adequacy, sustainability, and integrity. (“Integrity” refers to a combination of issues related to regulation, governance, protection, communication, and operating costs.) The US system does fairly well for adequacy, but not so well on the other measures. Of the countries listed above, the Netherlands and Denmark are rated as grade A systems. The US falls into an overall category with countries like France and Spain. Those countries rank higher than the US on “adequacy” of benefits, with their lower retirement ages and longer expected periods of retirement, but rank lower than the US on the financial “sustainability” of the benefits.

As the US deals with the financial consequences of an aging population, it seems appropriate that part of the answer will involve people working longer on average. But other parts of the answer will also involve additional financing for the system as a whole and additional support for the elderly poor.

Global Commodity Markets

Modern economic growth is a mixture of physical objects and the ideas that can be embodied in those objects. Wheat makes flour which makes a cake. Sand can be used to make concrete or computer chips. The original basic objects of the world economy are called “commodities.” Broadly speaking, they include fossil fuels, agriculture, and minerals. Many of the concerns about economic growth and environmental sustainability are essentially arguments about economic or environmental aspects of future production of commodities. For an overview of the economic issues, John Baffes and Peter Nagle have edited a four-chapter book on Commodity Markets: Evolution, Challenges and Policies (World Bank, 2022).

Here’s a long-term overview of commodity markets from the first chapter, “The Evolution of Commodity Markets over the Past Century,” by Baffes and Nagle together with Wee Chian Koh.

Economic expansion after World War II (WWII), and more recently the emergence of EMDEs [emerging markets and developing economies] as important players in the global economy, has increased commodity demand, especially for energy commodities and metals and minerals. Even though the world’s population rose from 2 billion in 1920 to 8 billion in 2020, the production of commodities to feed, clothe, and support the rising population has more than kept pace. Expanding production was possible because of technological innovations, the discovery of new reserves of commodities, and more intensive agricultural production.

On the energy front, crude oil became the most important commodity, replacing coal. Known reserves of crude oil and natural gas have increased substantially even as production has risen. For example, the development of shale technology during the early 2000s enabled producers to exploit deposits that had previously been considered unprofitable; as a result, the United States became once again the largest producer of crude oil. Mineral resource development expanded because of advances in technology and new discoveries.

Metal production has become more efficient as innovations and productivity improvements became widespread in mining, smelting, and refining. Improved fabrication and new alloys have allowed less metal to be used without loss of strength. Despite radical changes in supply and consumption, metals prices, in real terms, have seen cycles around a quite flat trend over the past century. …

Food production has increased faster than population, and most of the world’s consumers have better access to adequate food supplies today than they did a century ago. This improvement is due to technological advances in the 1900s, especially the Green Revolution. In large part because of increasing productivity, prices of agricultural commodities have experienced a downward trend over the past 100 years.

The bottom line here is that large increases in population and GDP have been matched by large increases in commodity production, including energy, metals, and agriculture.

However, while the quantity demanded of commodities has risen dramatically, prices of commodities do not show much upward trend–and some show a downward trend. The figure illustrates with two examples from energy, two from agriculture, and two from metals.

The price figures also show some large fluctuations over time, in what are sometimes called “commodity cycles.” For countries where the economy is heavily reliant on production or imports of one or a few commodities, the effects of these price fluctuations can be severe–and the volume discusses causes and effects of these commodity cycles at some length. But the overall pattern of higher quantities and flat or falling price levels remains.

Some readers may also be interested in “Commodity Prices and Growth in Africa,” by Angus Deaton (Nobel ’15), in the Summer 1999 Journal of Economic Perspectives.

The Bounce in Disposable Personal Income

When the pandemic hit, the general sense was that there was little risk of the US government doing too much to help, whether the assistance came in the form of stimulus checks, expanded unemployment payments, help to businesses (via the Paycheck Protection Program), tax cuts, and so on. Now we can look back and see the patterns in disposable personal income–that is, income that people have after they have paid taxes and received government benefits.

The top panel shows total personal disposable income for the economy as a whole, adjusted for inflation, measured in billions of dollars. The bottom panel shows the same data on a per capita basis–that is, adjusted for the size of the US population. The data is monthly. Both figures are from the extraordinarily useful FRED website maintained by the Federal Reserve Bank of St. Louis.

What jumps out from these two figures is how dramatic the rise in personal income was, both right after the pandemic in spring 2020, and then after President Biden’s stimulus package was enacted into law early in 2021. Compare the shifts in real per capita income in 2020 and 2021 to what happened in the previous three recessions, and there’s just nothing remotely like it.

The sharp rises in disposable personal income help to explain why inflation started rising in mid-2021. Disposable personal income was spiking at a time when many parts of the economy (like restaurants, travel, and entertainment) were still shut down or quite constrained in many places, and at a time when supply chains were backed up. A working definition of inflation is “too much money chasing too few goods,” and that’s what happened. However, this impetus for inflation has faded in recent months, which has surely contributed to the rate of inflation sagging downward.

It also helps explain the “Great Resignation,” the pattern in which a number of people of working age dropped out of the workforce, and were not looking for jobs. Again, it will be interesting to see if some of those who left the labor market during the boom in disposable personal income return in the next year or so.

Finally, it also explains some of the economic stress that shows up in public opinion polls and news stories. After the much higher disposable personal income levels of the last two years, the economy has returned to 2019 levels of disposable personal income. That up and down is bound to be unsettling.

I find it hard to be too critical of decisions made in the teeth of the pandemic in early 2020. The level of uncertainty was just so very high. But it also seems to me that what the federal government knows how to do is send out checks–so that’s what it did. Meanwhile, policy questions like how to make more COVID tests available, or how to facilitate the most widespread and rapid distribution of vaccines, or whether to re-open schools in fall 2020 got considerably less attention.

Explaining College Attendance Gaps: Academic Preparation

There are large gaps in college attendance between men and women and across ethnic groups. To what extent might these differences reflect academic preparation of students? Sarah Reber and Ember Smith provide some baseline information on these issues in “College Enrollment Disparities: Understanding the Role of Academic Preparation” (January 2023, Center on Children and Families at Brookings). They point out:

In 2022, young men [age 25-29] were nine percentage points less likely to have a bachelor’s degree than young women (35% and 44%). … Disparities in bachelor’s degree attainment by race and ethnicity are large: 68% of Asian or Pacific Islander adults aged 25 to 29 have a bachelor’s degree, compared with 45% of white, 28% of Black, and 25% of Hispanic young adults.

But what do these gaps look like if one takes academic preparation into account? In the figure, the top set of bars shows the likelihood of men and women enrolling in college at all–including two-year and four-year colleges–while the second set of bars shows only the likelihood of enrolling in a four-year college. The “No Controls” shows the overall average for males and females. But notice that when you compare those with similar grade point averages or levels of academic preparation, the gap goes away.

The authors write: “Taken together, the results by gender suggest that most or all gender gaps in college enrollment are explained by differences in academic preparation. However, … GPA explains essentially all, and math test score explains none, of the gaps.”

What are the patterns by ethnicity? Again, the top row of bars shows all colleges, and the bottom row shows just four-year colleges. The “No controls” shows the overall averages for each group. The striking pattern is that on average, Asians are more likely to attend college. But if one adjusts for grade point average or for overall academic achievement, Black students become the most likely group to attend college. The final two sets of bars show an adjustment for “socioeconomic status,” which clearly reduces the differences across groups, and then a joint adjustment for socioeconomic status and academic preparation, which looks a lot like the adjustment for academic preparation alone.

These types of results are just descriptions of patterns in the data. They are not studies that dig into cause-and-effect relationships or offer policy recommendations. In addition, the grade point average or academic preparation of a high school student is partly about the performance of K-12 schools, but also about differences across families, peer groups, and neighborhoods. But in my reading, this evidence strongly suggests that college attendance gaps across men and women, or across ethnic groups, largely reflect the academic preparation of high school students.

Getting Serious about Carbon Dioxide Removal

Perhaps the simplest way of removing carbon dioxide from the atmosphere is to manage forests in such a way that they soak up more carbon. But there are other ways, like capturing carbon directly from the air and then storing it deep underground, or in the form of mineral deposits. It’s perhaps not widely known that climate change models describing potential paths to reduce the risks of climate change typically assume that carbon dioxide removal will rise dramatically, and that it will be an important part of any ultimate solution. The University of Oxford’s Smith School of Enterprise and the Environment provides an overview of the science, policy, and public opinion in “The State of Carbon Dioxide Removal Report, 2023.” The lead contributors are Stephen M Smith, Oliver Geden, Jan C. Minx, and Gregory F. Nemet.

Here’s a chart showing the various approaches to carbon dioxide removal in the first column and the route by which each works in the second column. The third column headed “TRL” stands for “Technology Readiness Level,” ranked from theoretically possible at 1 to operationally ready at 9. The last two columns show an estimate of what the cost of removing carbon might be if the technology were developed at large scale, and the potential for how much carbon it could remove (measured in gigatons of CO2).

Broadly speaking, these can be summarized into three categories of how the carbon is stored.

Biological storage (on land and in oceans). While annual plants do not retain carbon durably, trees can retain their carbon for decades, centuries or more. Soils and wetlands are a further store of carbon, derived from compounds exuded by roots and dead plant matter. In the oceans, aquatic biomass may sink to the ocean floor and become marine sediment. Carbon can be retained durably in these ecosystems, especially if managed carefully to reduce disturbances.

Product storage. Many carbon-based products do not constitute durable storage. However, construction materials and biochar (a carbon-rich material produced by heating biomass in an oxygen-limited environment) can store carbon for decades or more. These carbon-based products can be made from conversion of harvested biomass (in the cases of biochar and wood in construction), from concentrated CO2 streams or even from CO2 from ambient air (in the case of aggregates).

Geochemical storage. Concentrated CO2 can be stored in geological formations, using depleted oil and gas fields or saline aquifers, or reactive minerals such as basalt. Geochemical capture leads directly to long-term storage of CO2 in the form of carbonate minerals or bicarbonate in the ocean.

The report emphasizes that it is extraordinarily unlikely that carbon dioxide removal can address atmospheric carbon levels on its own. The notion is that it can supplement other efforts. After all, approaches that involve reduced use of fossil fuels only reduce the speed at which carbon is being added to the atmosphere, while the effect of carbon dioxide removal is actually to reduce pre-existing levels of carbon to lower levels than they would otherwise reach. The report argues:

Virtually all scenarios that limit warming to 1.5°C or 2°C require “novel” CDR, such as BECCS, biochar, DACCS, and enhanced rock weathering. However, only a tiny fraction (0.002 GtCO2 per year) of current CDR results from novel CDR methods. Closing the CDR gap requires rapid growth of novel CDR. Averaging across scenarios, novel CDR increases by a factor of 30 by 2030 (and up to about 540 in some scenarios) and by a factor of 1,300 (up to about 4,900 in some scenarios) by mid-century. Yet no country so far has pledged to scale novel CDR by 2030 as part of their Nationally Determined Contribution, and few countries have so far published proposals for upscaling novel CDR by 2050.
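The annual growth rates implied by those scale-up factors are worth spelling out. Here is a back-of-the-envelope sketch–my own arithmetic, assuming a 2022 baseline year; the scale-up factors and the 0.002 GtCO2 figure come from the report.

```python
# Implied compound annual growth rates for novel CDR, assuming a 2022
# baseline of 0.002 GtCO2/yr (from the report) and the report's average
# scale-up factors. Treating 2022 as the baseline year is my assumption.

def implied_cagr(factor: float, years: int) -> float:
    """Annual growth rate implied by scaling up by `factor` over `years`."""
    return factor ** (1 / years) - 1

base = 0.002  # GtCO2 per year of novel CDR today

# Average scenario: factor of 30 by 2030 (8 years from 2022)
g_2030 = implied_cagr(30, 8)
# Average scenario: factor of 1,300 by mid-century (28 years from 2022)
g_2050 = implied_cagr(1300, 28)

print(f"2030: {base * 30:.2f} GtCO2/yr, ~{g_2030:.0%} growth per year")
print(f"2050: {base * 1300:.1f} GtCO2/yr, ~{g_2050:.0%} growth per year")
```

Sustaining growth rates of this order for decades is roughly what solar photovoltaics managed during their fastest expansion, which gives a sense of how ambitious the scenarios are relative to the current starting point.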

Indeed, if one looks at present at the amount of carbon dioxide removal, more than 99% is happening with reforestation, and about 0.1% involves the more novel forms of carbon dioxide removal listed in the table.

The other key point is that if at least some of these technologies are to be workable at scale, a lot of innovation and learning-by-doing is going to be needed over a sustained period of time. If countries aren’t starting a wide range of experimental projects in carbon dioxide removal very soon, then the necessary knowledge base won’t exist for large-scale use of carbon dioxide removal 2-3 decades from now. And to repeat myself, the main scenarios for mitigating the risks of climate change all include the assumption that this technology will become developed and workable. Without carbon dioxide removal technologies, the already very difficult task of dealing with rising levels of atmospheric carbon becomes much harder.