The Case for More Activist Antitrust Policy

The University of Pennsylvania Law Review (June 2020) has published a nine-paper symposium on antitrust law, with contributions by a number of the leading economists in the field who tend to favor more aggressive pro-competition policy in this area. Whatever your own leanings, it’s a nice overview of many of the key issues. Here are snippets from three of the papers. Below, I’ll list all the papers in the issue with links and abstracts.

C. Scott Hemphill and Tim Wu write about “Nascent Competitors,” which is the concern that large firms may seek to maintain their dominant market position by buying up the kinds of small firms that might have developed into future competitors. The article is perhaps of particular interest because Wu has just accepted a position with the Biden administration to join the National Economic Council, where he will focus on competition and technology policy. Hemphill and Wu write (footnotes omitted):

Nascent rivals play an important role in both the competitive process and the process of innovation. New firms with new technologies can challenge and even displace existing firms; sometimes, innovation by an unproven outsider is the only way to introduce new competition to an entrenched incumbent. That makes the treatment of nascent competitors core to the goals of the antitrust laws. As the D.C. Circuit has explained, “it would be inimical to the purpose of the Sherman Act to allow monopolists free rei[]n to squash nascent, albeit unproven, competitors at will . . . .” Government enforcers have expressed interest in protecting nascent competition, particularly in the context of acquisitions made by leading online platforms.

However, enforcers face a dilemma. While nascent competitors often pose a uniquely potent threat to an entrenched incumbent, the firm’s eventual significance is uncertain, given the environment of rapid technological change in which such threats tend to arise. That uncertainty, along with a lack of present, direct competition, may make enforcers and courts hesitant or unwilling to prevent an incumbent from acquiring or excluding a nascent threat. A hesitant enforcer might insist on strong proof that the competitor, if left alone, probably would have grown into a full-fledged rival, yet in so doing, neglect an important category of anticompetitive behavior.

One main concern with a general rule that would block entrenched incumbents from buying smaller companies is that, for entrepreneurs who start small companies, the chance of being bought out by a big firm is one of the primary incentives for starting a firm in the first place. Thus, there is a concern that more aggressive antitrust enforcement against such acquisitions could reduce the incentive to start such firms at all. Hemphill and Wu tackle the question head-on:

The acquisition of a nascent competitor raises several particularly challenging questions of policy and doctrine. First, acquisition can serve as an important exit for investors in a small company, and thereby attract capital necessary for innovation. Blocking or deterring too many acquisitions would be undesirable. However, the significance of this concern should not be exaggerated, for our proposed approach is very far from a general ban on the acquisition of unproven companies. We would discourage, at most, acquisition by the firm or firms most threatened by a nascent rival. Profitable acquisitions by others would be left alone, as would the acquisition of merely complementary or other nonthreatening firms. While wary of the potential for overenforcement, we believe that scrutiny of the most troubling acquisitions of unproven firms must be a key ingredient of a competition enforcement agenda that takes innovation seriously.

In another paper, William P. Rogerson and Howard Shelanski write about “Antitrust Enforcement, Regulation, and Digital Platforms.” They raise the concern that the tools of antitrust may not be well-suited to some of the competition issues posed by big digital firms. For example, if Alphabet were forced to sell off Google, or some other subsidiaries, would competition really be improved? What would it even mean to, say, try to break Google’s search engine into separate companies? When there are “network economies,” where many users want to be on a given website precisely because so many others are already there, perhaps a relatively small number of firms is the natural outcome.

Thus, while certainly not ruling out traditional antitrust actions, Rogerson and Shelanski argue the case for using regulation to achieve pro-competitive outcomes. They write:

[W]e discuss why certain forms of what we call “light handed procompetitive” (LHPC) regulation could increase levels of competition in markets served by digital platforms while helping to clarify the platforms’ obligations with respect to interrelated policy objectives, notably privacy and data security. Key categories of LHPC regulation could include interconnection/interoperability requirements (such as access to application programming interfaces (APIs)), limits on discrimination, both user-side and third-party-side data portability rules, and perhaps additional restrictions on certain business practices subject to rule of reason analysis under general antitrust statutes. These types of regulations would limit the ability of dominant digital platforms to leverage their market power into related markets or insulate their installed base from competition. In so doing, they would preserve incentives for innovation by firms in related markets, increase the competitive impact of existing competitors, and reduce barriers to entry for nascent firms. 

The regulation we propose is “light handed” in that it largely avoids the burdens and difficulties of a regime—such as that found in public utility regulation—that regulates access terms and revenues based on firms’ costs, which the regulatory agency must in turn track and monitor. Although our proposed regulatory scheme would require a dominant digital platform to provide a baseline level of access (interconnection/interoperability) that the regulator determines is necessary to promote actual and potential competition, we believe that this could avoid most of the information and oversight costs of full-blown cost-based regulation …  The primary regulation applied to price or non-price access terms would be a nondiscrimination condition, which would require a dominant digital platform to offer the same terms to all users. Such regulation would not, like traditional rate regulation, attempt to tie the level or terms of access to a platform’s underlying costs, to regulate the company’s terms of service to end users, or to limit the incumbent platform’s profits or lines of business. Instead of imposing monopoly controls, LHPC regulation aims to protect and promote competitive access to the marketplace as the means of governing firms’ behavior. In other words, its primary goal is to increase the viability and incentives of actual and potential competitors. As we will discuss, the Federal Communication Commission’s (FCC) successful use of similar sorts of requirements on various telecommunications providers provides one model for this type of regulation.

Nancy L. Rose and Jonathan Sallet tackle a more traditional antitrust question in “The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right.” A “horizontal” merger is one between two firms selling the same product. This is in contrast to a “vertical” merger, where one firm merges with a supplier, or a merger where the two firms sell different products. When two firms selling the same product propose a merger, they often argue that the combined firm will be more efficient, and thus able to provide a lower-cost product to consumers. Rose and Sallet offer this example:

Here is a stylized example of the role that efficiencies might play in an antitrust review. Imagine two paper manufacturers, each with a single factory that produces several kinds of paper, and suppose their marginal costs decline with longer production runs of a single type of paper. They wish to merge, which by definition eliminates a competitor. They justify the merger on the ground that after they combine their operations, they will increase the specialization in each plant, enabling longer runs and lower marginal costs, and thus incentivizing them to lower prices to their customers and expand output. If the cost reduction were sufficiently large, such efficiencies could offset the merger’s otherwise expected tendency to increase prices.

In this situation, the antitrust authorities need to evaluate whether these potential efficiencies exist and are likely to benefit consumers. Or, alternatively, is the talk of “efficiencies” a way for top corporate managers to build their empires while eliminating some competition? Rose and Sallet argue, based on the empirical evidence of what has happened after past mergers, that antitrust enforcers have been too willing to believe in the possibility of efficiencies that don’t seem to happen. They write:

As empirically-trained economists focused further on what data revealed about the relationship between mergers and efficiencies, the results cast considerable doubt on post-merger benefits. As discussed at length by Professor Hovenkamp, “the empirical evidence is not unanimous, however, it strongly suggests that current merger policy tends to underestimate harm, overestimate efficiencies, or some combination of the two.” The business literature is even more skeptical. As management consultant McKinsey & Company reported in 2010: “Most mergers are doomed from the beginning. Anyone who has researched merger success rates knows that roughly 70 percent of mergers fail.”
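
Just to put rough numbers on the stylized paper example quoted above, here is a back-of-the-envelope sketch in Python (my own illustration, not Rose and Sallet's model): a textbook linear-demand market where two Cournot competitors merge into one firm with a lower marginal cost. All the parameter values are hypothetical.

```python
# Back-of-the-envelope sketch (not Rose and Sallet's model): with linear
# inverse demand P = a - b*Q, how far must a merger cut marginal cost before
# the post-merger price is actually lower than the pre-merger price?
# All parameter values below are hypothetical.

def cournot_duopoly_price(a, b, c):
    # Two symmetric Cournot competitors with marginal cost c each produce
    # (a - c) / (3b), so the market price works out to (a + 2c) / 3.
    total_output = 2 * (a - c) / (3 * b)
    return a - b * total_output

def monopoly_price(a, b, c):
    # A single post-merger firm with marginal cost c produces (a - c) / (2b),
    # so the market price works out to (a + c) / 2.
    total_output = (a - c) / (2 * b)
    return a - b * total_output

a, b = 10.0, 1.0          # hypothetical demand intercept and slope
pre_merger_cost = 4.0     # marginal cost of each firm before the merger

print(f"Pre-merger duopoly price: {cournot_duopoly_price(a, b, pre_merger_cost):.2f}")
for post_merger_cost in (4.0, 3.0, 2.0, 1.0):
    print(f"Post-merger price with marginal cost {post_merger_cost}: "
          f"{monopoly_price(a, b, post_merger_cost):.2f}")
```

In this particular example, the pre-merger price is 6.00, and the merged firm's marginal cost has to fall from 4.0 to below 2.0 before consumers see any price decline at all. That is one way to see why enforcers want more than a vague promise of efficiencies.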

Here’s the full set of papers from the June 2020 issue of the University of Pennsylvania Law Review, with links and abstracts:

“Framing the Chicago School of Antitrust Analysis,” by Herbert Hovenkamp & Fiona Scott Morton
The Chicago School of antitrust has benefitted from a great deal of law office history, written by admiring advocates rather than more dispassionate observers. This essay attempts a more neutral examination of the ideology, political impulses, and economics that produced the School and that account for its durability. The origins of the Chicago School lie in a strong commitment to libertarianism and nonintervention. Economic models of perfect competition best suited these goals. The early strength of the Chicago School was that it provided simple, convincing answers to everything that was wrong with antitrust policy in the 1960s, when antitrust was characterized by over-enforcement, poor quality economics or none at all, and many internal contradictions. The Chicago School’s greatest weakness is that it did not keep up. Its leading advocates either spurned or ignored important developments in economics that gave a better accounting of an economy that was increasingly characterized by significant product differentiation, rapid innovation, networking, and strategic behavior. The Chicago School’s protest that newer models of the economy lacked testability lost its credibility as industrial economics experienced an empirical renaissance, nearly all of it based on models of imperfect competition. What kept Chicago alive was the financial support of firms and others who stood to profit from less intervention. Properly designed antitrust enforcement is a public good. Its beneficiaries—consumers—are individually small, numerous, scattered, and diverse. Those who stand to profit from nonintervention were fewer in number, individually much more powerful, and much more united in their message. As a result, the Chicago School went from being a model of enlightened economic policy to an economically outdated but nevertheless powerful tool of regulatory capture.

“Nascent Competitors,” by C. Scott Hemphill & Tim Wu
A nascent competitor is a firm whose prospective innovation represents a serious threat to an incumbent. Protecting such competition is a critical mission for antitrust law, given the outsized role of unproven outsiders as innovators and the uniquely potent threat they often pose to powerful entrenched firms. In this Article, we identify nascent competition as a distinct analytical category and outline a program of antitrust enforcement to protect it. We make the case for enforcement even where the ultimate competitive significance of the target is uncertain, and explain why a contrary view is mistaken as a matter of policy and precedent. Depending on the facts, troubling conduct can be scrutinized under ordinary merger law or as unlawful maintenance of monopoly, an approach that has several advantages. In distinguishing harmful from harmless acquisitions, certain evidence takes on heightened importance. Evidence of an acquirer’s anticompetitive plan, as revealed through internal communications or subsequent conduct, is particularly probative. After-the-fact scrutiny is sometimes necessary as new evidence comes to light. Finally, our suggested approach poses little risk of dampening desirable investment in startups, as it is confined to acquisitions by those firms most threatened by nascent rivals.

“Antitrust Enforcement, Regulation, and Digital Platforms,” by William P. Rogerson & Howard Shelanski
There is a growing concern over concentration and market power in a broad range of industrial sectors in the United States, particularly in markets served by digital platforms. At the same time, reports and studies around the world have called for increased competition enforcement against digital platforms, both by conventional antitrust authorities and through increased use of regulatory tools. This Article examines how, despite the challenges of implementing effective rules, regulatory approaches could help to address certain concerns about digital platforms by complementing traditional antitrust enforcement. We explain why introducing light-handed, industry-specific regulation could increase competition and reduce barriers to entry in markets served by digital platforms while better preserving the benefits they bring to consumers.

“The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right,” by Nancy L. Rose & Jonathan Sallet
The extent to which horizontal mergers deliver competitive benefits that offset any potential for competitive harm is a critical issue of antitrust enforcement. This Article evaluates economic analyses of merger efficiencies and concludes that a substantial body of work casts doubt on their presumptive existence and magnitude. That has two significant implications. First, the current methods used by the federal antitrust agencies to determine whether to investigate a horizontal merger likely rests on an overly-optimistic view of the existence of cognizable efficiencies, which we believe has the effect of justifying market-concentration thresholds that are likely too lax. Second, criticisms of the current treatment of efficiencies as too demanding—for example, that antitrust agencies and reviewing courts require too much of merging parties in demonstrating the existence of efficiencies—are misplaced, in part because they fail to recognize that full-blown merger investigations and subsequent litigation are focused on the mergers that are most likely to cause harm.

“Oligopoly Coordination, Economic Analysis, and the Prophylactic Role of Horizontal Merger Enforcement,” by Jonathan B. Baker and Joseph Farrell
For decades, the major United States airlines have raised passenger fares through coordinated fare-setting when their route networks overlap, according to the United States Department of Justice. Through its review of company documents and testimony, the Justice Department found that when major airlines have overlapping route networks, they respond to rivals’ price changes across multiple routes and thereby discourage competition from their rivals. A recent empirical study reached a similar conclusion: It found that fares have increased for this reason on more than 1000 routes nationwide and even that American and Delta, two airlines with substantial route overlaps, have come close to cooperating perfectly on routes they both serve.

“The Role of Antitrust in Preventing Patent Holdup,” by Carl Shapiro and Mark A. Lemley
Patent holdup has proven one of the most controversial topics in innovation policy, in part because companies with a vested interest in denying its existence have spent tens of millions of dollars trying to debunk it. Notwithstanding a barrage of political and academic attacks, both the general theory of holdup and its practical application in patent law remain valid and pose significant concerns for patent policy. Patent and antitrust law have made significant strides in the past fifteen years in limiting the problem of patent holdup. But those advances are currently under threat from the Antitrust Division of the Department of Justice, which has reversed prior policies and broken with the Federal Trade Commission to downplay the significance of patent holdup while undermining private efforts to prevent it. Ironically, the effect of the Antitrust Division’s actions is to create a greater role for antitrust law in stopping patent holdup. We offer some suggestions for moving in the right direction.

“Competition Law as Common Law: American Express and the Evolution of Antitrust,” by Michael L. Katz & A. Douglas Melamed
We explore the implications of the widely accepted understanding that competition law is common—or “judge-made”—law. Specifically, we ask how the rule of reason in antitrust law should be shaped and implemented, not just to guide correct application of existing law to the facts of a case, but also to enable courts to participate constructively in the common law-like evolution of antitrust law in the light of changes in economic learning and business and judicial experience. We explore these issues in the context of a recently decided case, Ohio v. American Express, and conclude that the Supreme Court, not only made several substantive errors, but also did not apply the rule of reason in a way that enabled an effective common law-like evolution of antitrust law.

“Probability, Presumptions and Evidentiary Burdens in Antitrust Analysis: Revitalizing the Rule of Reason for Exclusionary Conduct,” by Andrew I. Gavil & Steven C. Salop
The conservative critique of antitrust law has been highly influential. It has facilitated a transformation of antitrust standards of conduct since the 1970s and led to increasingly more permissive standards of conduct. While these changes have taken many forms, all were influenced by the view that competition law was over-deterrent. Critics relied heavily on the assumption that the durability and costs of false positive errors far exceeded the costs of false negatives. Many of the assumptions that guided this retrenchment of antitrust rules were mistaken and advances in law and economic analysis have rendered them anachronistic, particularly with respect to exclusionary conduct. Continued reliance on what are now exaggerated fears of “false positives,” and failure adequately to consider the harm from “false negatives,” has led courts to impose excessive burdens of proof on plaintiffs that belie both sound economic analysis and well-established procedural norms. The result is not better antitrust standards, but instead an unwarranted bias towards non-intervention that creates a tendency toward false negatives, particularly in modern markets characterized by economies of scale and network effects. In this article, we explain how these erroneous assumptions about markets, institutions, and conduct have distorted the antitrust decision-making process and produced an excessive risk of false negatives in exclusionary conduct cases involving firms attempting to achieve, maintain, or enhance dominance or substantial market power. To redress this imbalance, we integrate modern economic analysis and decision theory with the foundational conventions of antitrust law, which has long relied on probability, presumptions, and reasonable inferences to provide effective means for evaluating competitive effects and resolving antitrust claims.

“The Post-Chicago Antitrust Revolution: A Retrospective,” by Christopher S. Yoo
A symposium examining the contributions of the post-Chicago School provides an appropriate opportunity to offer some thoughts on both the past and the future of antitrust. This afterword reviews the excellent papers presented with an eye toward appreciating the contributions and limitations of both the Chicago School, in terms of promoting the consumer welfare standard and embracing price theory as the preferred mode of economic analysis, and the post-Chicago School, with its emphasis on game theory and firm-level strategic conduct. It then explores two emerging trends, specifically neo-Brandeisian advocacy for abandoning consumer welfare as the sole goal of antitrust and the increasing emphasis on empirical analyses.

What’s Happening with Global Connectedness in the Pandemic?

The last few decades have been a time of economic globalization. But that trend has been faltering for a few years now, with rising political obstacles to trade and migration. Will the global pandemic recession be a force that pulls the world economy apart, or will it perhaps reinforce some patterns of globalization while hindering others? Steven A. Altman and Phillip Bastian tackle these questions in the DHL Global Connectedness Index 2020, subtitled “The State of Globalization in a Distancing World” (December 2020).

Here are some long-run patterns of globalization. The first panel shows exports (as a measure of trade in goods) starting a sharp rise after World War II. The second panel shows the rise in foreign direct investment since about 1980. The third panel shows the rise in migration since about 1970.

But although the long-run trend toward globalization is clear, Altman and Bastian point out that many people have heard so much about globalization that they tend to overestimate its prevalence. They write: 

[M]ost business and personal activity is still domestic rather than international. Roughly 21% of all goods and services end up in a different country from where they were produced. Companies buying, building, or reinvesting in foreign operations via FDI [foreign direct investment] accounted for only 7% of gross fixed capital formation last year. Just 7% of voice call minutes, including calls over the internet, were international. And a mere 3.5% of people lived outside of the countries where they were born. …

If many of these global “depth” measures are lower than you expected, you are in good company. Surveys of managers, students, and the general public have consistently shown that most people think international flows are larger than they really are. This pattern shows up across countries, as well as respondent characteristics such as level of education, age, gender, and political leanings. … In public policy, people who overestimate these types of measures tend to presume that globalization is a much bigger factor in joblessness, wage stagnation, and climate change than evidence suggests.

What about the effects of the pandemic on globalization? The report looks at four “pillars” of globalization: trade in goods and services, flows of people, international flows of information, and international capital flows. The overall theme is that while all the pillars of globalization took a hit earlier in 2020, all except flows of people have been starting to bounce back. Here are a few points about these that caught my eye.

On trade in goods, the report notes: 

Covid-19 is on track to cause a much smaller decline in trade intensity than the 2008-09 global financial crisis. This reflects both how quickly trade recovered during 2020 and the fact that many of the industries that were hit hardest by the pandemic (e.g. restaurants) provide local services rather than heavily traded goods. It is also notable that a significant part of the forecasted decline in trade intensity in 2020 is due to lower commodity prices. … [R]emoving the effects of price changes …  the share of real global output that is exported is expected to remain above its 2009 level in 2020. … 

Covid-19 has also accelerated the growth of international e-commerce. According to one study, cross-border discretionary e-commerce sales soared 53% year-on-year during the second quarter of 2020. Cross-border sales, nonetheless, accounted for only 10% of all consumer e-commerce transactions in 2018, suggesting ample headroom for additional growth. The share of online shoppers who made purchases from foreign suppliers rose from 17% in 2016 to 23% in 2018. An analysis by the McKinsey Global Institute forecasts that international business and consumer e-commerce could expand trade in manufactured goods by 6 – 10% by 2030.

For international flows of capital, the report looks at foreign direct investment and portfolio equity investment: “The distinction between the two is that FDI gives the investor (typically a multinational corporation) a voice in the management of a foreign enterprise, whereas portfolio equity investment does not. For statistical purposes, if the investor owns at least 10% of the foreign company, it is normally classified as FDI; below 10% it is deemed portfolio investment.”
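
The statistical convention in that quoted passage boils down to a one-line classification rule. Here is a minimal sketch of it (the function name and threshold constant are mine, purely for illustration):

```python
# Minimal sketch of the convention quoted above: a cross-border equity stake
# of at least 10% is normally classified as foreign direct investment (FDI);
# anything smaller is deemed portfolio investment.

FDI_OWNERSHIP_THRESHOLD = 0.10  # 10% of the foreign company's equity

def classify_cross_border_equity(ownership_share: float) -> str:
    """Classify a cross-border equity stake under the 10% convention."""
    if not 0.0 <= ownership_share <= 1.0:
        raise ValueError("ownership_share must be a fraction between 0 and 1")
    return "FDI" if ownership_share >= FDI_OWNERSHIP_THRESHOLD else "portfolio investment"

print(classify_cross_border_equity(0.25))  # FDI
print(classify_cross_border_equity(0.03))  # portfolio investment
```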

On foreign direct investment: 

The UN Conference on Trade and Development (UNCTAD) forecasts that FDI flows will decline 30 – 40% in 2020, and that they are likely to slip another 5 – 10% in 2021, before starting to recover in 2022. The pandemic has crimped FDI flows through various channels: reductions in earnings available to invest, worsening business prospects, restrictions on business travel, uncertainty both in general and specifically about global supply chains, and so on. Nonetheless, double-digit drops in FDI flows are not uncommon, and certainly not as alarming as a similar drop in trade would be. For example, FDI flows shrank 43% in 2001 and 35% over a two-year period during the global financial crisis, and they have swung widely both up and down over the last five years due to changes in US tax policy.

For international investment in stocks, the general pattern in recent years has been a shift away from “home bias,” where investors stick to their own country, and toward greater international diversification. “Portfolio equity stocks closed out 2019 at 37% of world stock market capitalization, just shy of the record level reported in 2018.” This graph shows the monthly pattern of portfolio investment in emerging markets in 2020: a big dip in March, but then mostly a resumption of the usual pattern.

When it comes to flows of information, 2020 has been a remarkable year. As businesses, schools, and social lives shifted online with the pandemic, international internet traffic rose dramatically. As the figure shows, it has been rising about 20% per year, but in 2020 it rose almost 50%. 

But on the other side, domestic internet traffic must be rising very substantially, too. The report notes: “The Internet continues to power large increases in information flows within and across borders, but international information flows are no longer consistently growing faster than domestic information flows. Covid-19 has caused data traffic to boom in 2020, but it remains unclear whether the pandemic has made the nature of information flows more—or less—global.”

Flows of people across international borders have dropped dramatically, with no real sign yet of a bounceback. In terms of tourism, “The UNWTO [UN World Tourism Organization] does not foresee widespread lifting of travel restrictions until mid-2021 and forecasts that it will take 2.5 to 4 years for international tourist arrivals to rebound to their 2019 levels.”

Flows of people who build economic and educational connections, such as business travelers and international students, are way down, too:

Though business travel makes up just a fraction of international travel, it is an important enabler of international trade, investment, and economic development. New research highlights how business travel facilitates knowledge transfer from countries with strengths in certain industries to other economies. According to this study, a permanent shutdown of international business travel would shrink global economic activity by an order of magnitude more than the amount that was spent on business travel before the pandemic. …

Early data show large declines in international student enrollment in the United States, the top destination for foreign students. According to National Student Clearinghouse Research Center’s tracking of student enrollment during the Covid-19 pandemic, international student enrollment is down 14% at the undergraduate level and 8% at the graduate level in fall 2020. The American Council on Education estimates that international student enrollment could fall by as much as 25%. New students deciding not to begin their studies in the US during the pandemic are the primary driver of falling enrollments.

Of course, if new international students keep staying away, the decline will accumulate over the next few years. In the world of academia, my sense is that lots of professors are experiencing a quality/quantity tradeoff: yes, attending a conference via Zoom has lower value-added, because of the loss of the personal touch, but on the other side, one can “attend” many more conferences.

The overall takeaway here is that globalization is resilient. Yes, it is taking a hit in 2020. But even if globalization were to decline somewhat, it would still be an important force in the economy. Here are a few more comments from Altman and Bastian:

Despite recent trade turbulence, the ratio of gross exports to world GDP is still remarkably close to its all-time high. Even after falling from a peak of 32% in 2008 to 29% in 2019, this measure of global trade integration is still 20% higher than it was in 2000, twice as high as it was in 1970, and almost six times higher than in 1945. … Globalization can go into reverse—as demonstrated by the trendlines between the 1920s and 1950s—but recent data do not depict a similar reversal. …

[T]he world is—and will remain—only partially globalized. Globalization can rise or fall significantly without getting anywhere close to either a state where national borders become irrelevant or one where they loom so large that it is best to think of a world of disconnected national economies. All signs point to a future where international flows will remain so large that decision-makers ignore them at their peril, even as borders and cross-country differences continue to make domestic activity the default in most areas.

Intergenerational Mobility and Neighborhood Effects

Raj Chetty delivered a keynote address titled “Improving Equality of Opportunity: New Insights from Big Data” to the annual meetings of the Western Economic Association International. It has now been published in the January 2021 issue of Contemporary Economic Policy (39:1, pp. 7–41; a subscription is needed for access). The lecture gives a nice overview of some of Chetty’s work in the last few years.

Chetty offers this nugget from previous research as a starting point:

In particular, let’s think about the chance that a child born to parents in the bottom quintile—the bottom fifth of the income distribution—has of rising up to the top fifth of the income distribution. If you look at the data across countries, there are a number of recent studies that have computed statistics like this, called intergenerational transition matrices. You see that in the United States, kids born to families in the bottom fifth have about a seven and a half percent chance of reaching the top fifth. That compares with nine percent in the United Kingdom, 11.7% in Denmark, and 13.5% in Canada. … One way you might think about this is that your chance of achieving the American dream are in some sense almost two times higher if you’re growing up in Canada rather than the United States. I think these international comparisons are motivating. It’s a little bit difficult to figure out how to make sense of them.

Start with a basic question: What do we need to know if we want to measure intergenerational economic mobility? Basically, you need a measure of income for two separate generations, choosing an age at which both the older and younger generations are reasonably well-established in a career. Then you can compare where in the income distribution the younger generation falls, relative to their parents, and in that way get a sense of how likely someone from one part of the income distribution is to move (higher or lower) to a different part of the income distribution. Best of all, you would like a really big, nationally representative sample of this data, so that you can look at how such effects might vary across locations, sexes, and races/ethnicities.
There is no single data set that will give you this kind of information. So Chetty and a team of researchers that has now grown to a couple of dozen people figured out how to combine existing datasets to get the necessary data.  Specifically, as Chetty describes it: 
In particular we use the 2000 and 2010 decennial census, as well as the American Community Survey—the ACS—which covers about 1% of Americans each year. We take that census data and link it to federal income tax returns from 1989 to 2015, creating a longitudinal data set. … 
Those of you who have kids and live in the United States know that you have to write down your kid’s Social Security number on your tax return to claim him or her as a dependent. That allows us to link 99% of children in America back to their parents. What I’m going to describe here, the set of kids we’re seeking to study, are children born between 1978 and 1983 who were born in the United States or came to the United States as authorized immigrants while they were kids. That’s our target sample.

In practice, we have an analysis sample of about 20.5 million children once we’ve done the linkage of those various data sets, and that covers about 96% of the kids we expect to be in our target sample. It’s not 100% because there are some kids for whom you’re not able to find a tax return or who weren’t claimed or who you weren’t able to link the census data to the tax data. But it’s still pretty good—at 96% you have essentially everybody you’re looking to study.

Just to be clear, this data is “de-identified”: that is, stringent precautions are taken so that researchers don’t see anyone’s actual name or Social Security number or other personal information. Constructing this dataset is a substantial task, and it’s a task that would not have been possible for researchers of a previous generation.

Here’s one result. The horizontal axis shows the percentile of the income distribution for the older generation; the vertical axis shows the average percentile where the next generation ended up in the income distribution. The marked point shows that when the older generation was at the 25th percentile, the average outcome for the younger generation is the 41st percentile of the income distribution. Overall, the flatter this line, the more intergenerational income mobility exists: a perfectly flat line would mean that no matter where the older generation started, the expected result for the younger generation is the same.
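
For readers who like to see the mechanics, here is a toy sketch of the rank-rank calculation behind a figure like this one. It uses simulated data rather than Chetty's linked census-tax records, and every number in it is made up for illustration; the point is just that you convert parent and child incomes into percentile ranks, fit a line, and read off the expected child rank for parents at, say, the 25th percentile.

```python
# Toy sketch of the rank-rank approach (simulated data, not Chetty's records):
# convert parent and child incomes to percentile ranks within each generation,
# fit a line, and read off the expected child rank at the 25th parent percentile.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical log-income data with some intergenerational persistence.
parent_log_income = rng.normal(10.5, 0.8, n)
child_log_income = 0.35 * parent_log_income + rng.normal(7.0, 0.7, n)

def percentile_rank(x):
    # Rank each observation from roughly 0 to 100 within its own generation.
    return 100.0 * (np.argsort(np.argsort(x)) + 0.5) / len(x)

parent_rank = percentile_rank(parent_log_income)
child_rank = percentile_rank(child_log_income)

# Least-squares fit of child_rank on parent_rank: a flatter line (smaller
# slope) means more relative mobility.
slope, intercept = np.polyfit(parent_rank, child_rank, 1)

print(f"rank-rank slope: {slope:.2f}")
print(f"expected child rank, parents at 25th percentile: {intercept + slope * 25:.1f}")
```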

But while a lot of the previous research on intergenerational mobility used nationally representative data, the US-based research in particular has not had enough data to make plausible statements about intergenerational mobility at more local levels.

As a concrete example to help the exposition, Chetty sticks to the case where a parent’s household is at the 25th percentile of the income distribution. Then he can ask: in different metropolitan areas across the United States, where is the average income of the younger generation higher or lower?

Here\’s a map showing the pattern across the US where the younger generation is white and black men. The blue areas in the \”heat map\” show where the income of the younger generation is higher than the older generation; the white areas show where it\’s the same; the red areas show where it is lower. For black men, on the left, the overall pattern is mostly red and white: that is, the younger generation of black men usually have the same or lower income than their parents. For white men, on the right, the overall pattern is mostly blue and white; that is, the younger generation of white men are usually doing  the same or better than their parents.

There is of course lots to chew on in why these patterns differ across metropolitan areas. For example, if one looks at the same graph for black women and white women, there is very little difference in this measure of intergenerational mobility. But the Chetty research group has so much data that they can look at much smaller geographic areas, including Census “tracts.” There are about 70,000 tracts in the US, with about 4,000 people each. In certain high-population cities, the Chetty data lets a researcher look at intergenerational mobility at the level of a city block.

Thus, Chetty and his team can tackle the question: What do patterns of intergenerational mobility look like not at the national level, or at the metro-area level, but at the neighborhood level? Sticking to the earlier concrete example of children born to a family at the 25th income percentile, does their income mobility differ depending on the neighborhood in which they live? Chetty writes:

[T]he geographic scale on which we should think about neighborhoods as they matter for economic opportunity and upward mobility is incredibly narrow, like a half mile radius around your house. We find this not just for poverty rates, but many other characteristics. If you look at differences in characteristics outside that half mile radius, they have essentially no predictive power at all. I think that’s extremely useful from a policy perspective. We started this talk with the American dream. We now see that its origins, its roots, seem to actually be extremely hyperlocal.

Do the hyperlocal neighborhoods that have more intergenerational income mobility tend to share certain characteristics? Yes. For example, the share of households in the neighborhood with two-parent families makes a difference, as does the “social capital” of the neighborhood as measured by whether it has community gathering places like churches or even bowling alleys.

At some level, all of this may sound like “common sense for credit,” as economics has been described. Certain neighborhoods, not far apart from one another, are more correlated with intergenerational mobility than others. But are the neighborhood effects something that could potentially be used for public policy? To put it another way, if the neighborhood changes, does someone from the younger generation have basically the same level of intergenerational mobility–because mobility is mostly determined by family–or perhaps a higher or lower level of mobility–because the neighborhood has a separate causal effect on eventual economic outcomes?

There are a variety of ways one might address this question. There’s a major federal study called Moving to Opportunity. It was a randomized study: that is, a randomly selected half of the sample got a certain program and the other half didn’t, so you can then compare outcomes between the two groups. Chetty describes:

Let me start with the moving to opportunity (MTO) experiment. This experiment was conducted in five large cities around the United States. I’m going to focus on the case of Chicago, just to pick one illustration. In this experiment, researchers gave families living in very high poverty public housing projects, for instance, the Robert Taylor Homes in Chicago, one of two different kinds of housing vouchers through a random lottery. One group received Section 8 vouchers, which were vouchers that enabled them to move anywhere they could find housing. This assistance was on the order of about $1,000 per month in today’s dollars. The other group, called the experimental group, was given the same vouchers worth exactly the same amount, but they came with the restriction that users had to move to a low poverty area. These areas were defined as census tracts with a poverty rate below 10%.

It turns out that the randomly selected group who moved to lower-poverty areas as children did indeed have higher incomes as adults. The exact numbers of course have some statistical uncertainty built in. But as a bottom line, Chetty writes: 

That is to say, if I moved to a place where I see kids growing up to earn $1,000 more, on average, than in my hometown, I myself pick up about $700 of that. Just to put it differently, something like 70% of the variation in the maps that I’ve been showing you seems to reflect, if you take this point estimate directly, causal effects of place, and 30% reflects selection. So a good chunk of it seems to actually be the causal effects of place.

Another statistical approach is just to look at families who move for any reason. Look at families with several children. When such a family moves, some of the children will grow up in the new neighborhood for longer than others, and it turns out that, within the same family, the income gains from the move line up with how long each child lived in the new neighborhood. Chetty describes various other approaches to demonstrating that the neighborhood in which you grow up, where that is defined as the area within about a half-mile of your home, has a lasting effect on economic prospects.

What public policy recommendations might flow from this finding? It would be impractical and costly to pay for vast numbers of people to relocate from their current neighborhoods. But there’s a smaller-scale policy that might well be workable, which involves providing information and connections to families with an interest in moving. Chetty writes:

A different view is maybe this is about some sort of barriers, frictions that are preventing families from getting to these places. Maybe they lack information, maybe landlords in those neighborhoods don’t want to rent to them, maybe they don’t have the liquidity they need to get to those places, and so on.  We are conducting a randomized trial where we’re trying to address a bunch of those barriers by providing information, and by simplifying the process for landlords by providing essentially brokerage services, like search assistance, in the housing search process. We take that for granted in the high-end of the housing market, but it basically doesn’t exist at the low-end of the housing market. We take about 1,000 families, 500 of which receive the services, and 500 don’t, randomly chosen. …

We found that this was an incredibly impactful intervention; we were extremely surprised by how much impact our services had on families’ likelihoods of moving to higher-opportunity neighborhoods. In the control group, less than one-fifth of families moved to higher opportunity areas. Eighty percent of families that received these vouchers chose to live in places that are relatively low mobility. In the treatment group, this completely changed. The vast majority of families in the treatment group are now living in these high mobility places. I was just in Seattle talking to some of the families who’ve moved. They’re incredibly happy and describe how this small set of services, which only comes at a 2% incremental cost relative to the baseline cost of the housing voucher program, dramatically changed their choices and their kids’ experience. …
There are a couple elements. First, from an economic perspective, we provide damage mitigation funds. This is basically an insurance fund that says that if anything goes wrong, we will cover it. In practice, the amount of expenses incurred are essentially zero, but I think it gives landlords some peace of mind. Second, there’s a simplification in the inspection process, which traditionally involves a lot of red tape and delays. We shortened the inspection process to 1 day, making it much simpler. Third—this actually surprised me—apparently telling landlords that their units can provide a pathway to opportunity for low-income kids actually makes them much more motivated to rent their units to certain families. …
In fact, now we find landlords coming to the housing authority saying things like: “I heard about this program,” or, “I had a really good experience with your previous tenant, I want to now rent again.” I think we can change that equilibrium if we do it thoughtfully.

What about taking steps to improve the prospects for intergenerational mobility in neighborhoods in a direct way? It’s worth remembering that Chetty’s evidence suggests that what really matters is the half-mile around where people live, so what would seem to be called for is projects that improve neighborhoods at a local level. Chetty writes:

What specific investments can be useful? Of course, that’s the question you’d want to know the answer to. That could range from things like, most obviously, improving the quality of schools in an area to things like mentoring programs, and changing the amount of social capital, if we can figure out ways to measure and manipulate things like connectedness, reducing crime, and physical infrastructure. There are many such efforts that have been implemented over the years by local governments, nonprofits, and other practitioners.
You might ask which of those things is actually most effective; what’s the recipe for increasing upward mobility in a given place? I think the honest answer is that we just do not know yet. The reason for that is that there are lots of these place-based efforts where someone invests a lot of money in a given neighborhood. The neighborhood looks completely different 10 years down the road, but you have no idea whether that’s because new people moved in and displaced the people who were living there before, so the neighborhood gentrified, or if the people who lived there to begin with benefitted. And again, I think resolving that question comes back to having historical longitudinal data and being able to follow the people who lived there to begin with.
As you read this brief overview of a much larger body of work, you may find your mind raising questions about other possible connections or policies. That’s natural. It’s a genuinely exciting area of research that is opening up before our eyes.

Debt and Deficits: Nostalgia for the 1980s

Back in the mid-1980s, the federal government under the Reagan administration ran what were widely considered to be excessive and risky budget deficits: from 1983 to 1986, the annual deficit was between 4.7% and 5.9% of GDP per year. The accumulated federal debt held by the public rose from 21.2% of GDP in 1981 to 35.2% of GDP by 1987. I cannot exaggerate how much ink was spilled over this problem, some of it by me, back in those innocent and carefree times, before we learned to stop worrying and love the deficit.

The Congressional Budget Office has just released “The 2021 Long-Term Budget Outlook” (March 2021). There’s nothing deeply new in it, but it made me think about how attitudes about budget deficits and government debt have evolved. The report notes:

At an estimated 10.3 percent of gross domestic product (GDP), the deficit in 2021 would be the second largest since 1945, exceeded only by the 14.9 percent shortfall recorded last year. … By the end of 2021, federal debt held by the public is projected to equal 102 percent of GDP. Debt would reach 107 percent of GDP (surpassing its historical high) in 2031 and would almost double to 202 percent of GDP by 2051. Debt that is high and rising as a percentage of GDP boosts federal and private borrowing costs, slows the growth of economic output, and increases interest payments abroad. A growing debt burden could increase the risk of a fiscal crisis and higher inflation as well as undermine confidence in the U.S. dollar, making it more costly to finance public and private activity in international markets.

Here are a few illustrative figures. The first one shows accumulated federal debt over time since 1900. You see the bumps for debt accumulated during World Wars I and II, and during the Great Depression of the 1930s. If you look at the 1980s, you can see the Reagan-era rise in debt/GDP. But after the debt/GDP ratio had sagged back to 26.5% by 2001, you can see the big jump for debt incurred during the Great Recession, then debt incurred during the pandemic recession, and then where the projections under current law would take us.

From an historical point of view, you can think of fiscal policy during the Great Recession and the pandemic recession as similar to what happened during the Great Depression and World War II. In both cases, there were two huge stresses within a period of about 15 years, and the federal government addressed both of them with borrowed money. In historical perspective, those Reagan-era deficits that caused such a fuss were just a minor speed bump. However, what’s projected for the future has no US historical equivalent.
This figure shows projections for annual budget deficits, rather than for accumulated debt. The figure separates out the amount of deficits attributable to interest payments on past borrowing (blue area). The “primary deficit” (purple area) is the deficit due to all non-interest spending. You’ll notice that the primary deficit doesn’t get crazy-high: it grows steadily from about 2.5% of GDP in the mid-2020s to 4.6% of GDP by the late 2040s. The problem is that the federal government gets on what I’ve called the “interest payments treadmill,” where high interest payments help to create large annual deficits, and then large annual deficits keep leading to higher future interest payments.
If the government could take actions to hold down the rise in the primary deficit over time, with some mixture of spending cuts and tax increases (or does it sound better to say spending “restraint” and tax “enhancements”?), then this could also keep the US government from stepping onto the interest payments treadmill.
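
To see the treadmill arithmetic concretely, here is a minimal sketch of the standard debt-dynamics identity: next year's debt ratio is this year's ratio grown by the gap between the interest rate and GDP growth, plus the primary deficit. The starting values and rates below are hypothetical round numbers, not CBO projections.

```python
# Minimal sketch of the "interest payments treadmill": with a fixed primary
# deficit, interest on past borrowing feeds back into the debt, so both the
# debt ratio and the interest bill keep climbing. All parameters here are
# hypothetical round numbers, not CBO projections.

debt_to_gdp = 1.00       # debt held by the public, as a share of GDP
primary_deficit = 0.03   # non-interest deficit, as a share of GDP
interest_rate = 0.03     # average nominal interest rate on the debt
gdp_growth = 0.02        # nominal GDP growth rate

for year in range(0, 31, 5):
    interest_bill = interest_rate * debt_to_gdp
    total_deficit = primary_deficit + interest_bill
    print(f"year {year:2d}: debt/GDP = {debt_to_gdp:5.2f}, "
          f"interest = {interest_bill:.2%} of GDP, "
          f"total deficit = {total_deficit:.2%} of GDP")
    # Advance five years of the debt-dynamics identity:
    # d_next = d * (1 + r) / (1 + g) + primary_deficit
    for _ in range(5):
        debt_to_gdp = debt_to_gdp * (1 + interest_rate) / (1 + gdp_growth) + primary_deficit
```

Even with the primary deficit held fixed, the interest bill climbs because it is charged on an ever-larger stock of debt; in the CBO projections the primary deficit itself also grows, which makes the treadmill spin faster.
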
This figure shows projected trends for spending and taxes, under current law. You can see the spending jump during the Great Recession, and then the jump during the pandemic recession. Assuming current law, projected tax revenues as a share of GDP don’t change much going forward. However, projected outlays do rise.

CBO explains the rise in outlays: 
Larger deficits in the last few years of the decade result from increases in spending that outpace increases in revenues. In particular:
  • Mandatory spending increases as a percentage of GDP. Those increases stem both from the aging of the population, which causes the number of participants in Social Security and Medicare to grow faster than the overall population, and from growth in federal health care costs per beneficiary that exceeds the growth in GDP per capita.
  • Net spending for interest as a percentage of GDP is projected to increase over the remainder of the decade as interest rates rise and federal debt remains high. 
There\’s been some talk in recent years about how, in a time of low interest rates, it could be an excellent time for the US government to make long-run investments that would pay off in future productivity. This case has some merit to me, but it\’s not what is actually happening. Instead, the fundamental purpose of the US government has been shifting. Back in 1970, about one-third of all federal spending was direct payments to individuals: now, direct payments to individuals are 70% of all federal spending. The federal government use to have missions like fighting wars and putting a person on the moon: now, it cuts checks. The CBO has this to say about the agenda of using federal debt to finance investments: 
Moreover, the effects on economic outcomes would depend on the types of policies that generate the higher deficits and debt. For example, increased high-quality and effective federal investment would boost private-sector productivity and output (though it would only partially mitigate the adverse consequences of greater borrowing). However, in CBO’s projections, the increasing deficits and debt result primarily from increases in noninvestment spending. Notably, net outlays for interest are a significant component of the increase in spending over the next 30 years. In addition, federal spending for Social Security, Medicare, and Medicaid for people age 65 or older would account for about half of all federal noninterest spending by 2051, rising from about one-third in 2021.
For decades now, we have known that the aging of the post-World War II “baby boom” generation, combined with rising life expectancies, was going to raise the share of elderly Americans. We have also known for decades that the primary programs for meeting the needs of this group–like Social Security and Medicare–have made promises that their current funding sources can’t support. We have been watching US health care costs rise as a share of GDP for decades. Meanwhile, the US economy has been experiencing slow productivity growth, which makes addressing all problems closer to a zero-sum game.

Neither during the Great Recession of 2007-2009 nor during the heart of the pandemic recession in 2020 and early 2021 was an appropriate time to focus on the long-term future of government debt. But averting our eyes from the trajectory of the national debt is not a long-term strategy. 

Teaching Current Monetary Policy

Most of the ingredients of the standard intro econ class have been pretty stable for a long time: opportunity cost and budget constraints, supply and demand, perfect and imperfect competition, externalities and public goods, unemployment and inflation, growth and business cycles, benefits and costs of international trade. In contrast, the topic of how the central bank conducts monetary policy shifted substantially more than a decade ago, in a way that makes earlier textbooks literally obsolete. Jane Ihrig and Scott Wolla offer an overview of the changes in “Monetary Policy Has Changed, Has Your Instruction?” (Teaching Resources for Economics at Community Colleges Newsletter, March 2021). For a more in-depth discussion, Ihrig and Wolla wrote “Let’s close the gap: Revising teaching materials to reflect how the Federal Reserve implements monetary policy” (Federal Reserve Finance and Economics Discussion Series 2020-092, October 2020).

(The TRECC Newsletter has financial support from the NSF and other sources. It currently has about 1,500 subscribers. If you are involved in community college teaching, or you teach a substantial number of lower-level undergraduate courses, you may want to put yourself on the subscription list.)

As Ihrig and Wolla point out, the traditional pedagogy of how monetary policy works focuses on how the Federal Reserve can raise or lower a specific targeted interest rate: the “federal funds” rate. It goes like this:

One big change is in the “policy implementation” arrow. When I was taking intro econ in the late 1970s, or when I was teaching big classes in the late 1980s and into the 1990s, the textbooks all discussed three tools for conducting monetary policy: open market operations, changing the reserve requirement, or changing the discount rate.

Somewhat disconcertingly, when my son took AP economics in high school last year, he was still learning this lesson–even though it does not describe what the Fed has actually been doing in the decade-plus since the Great Recession. Perhaps even more disconcertingly, when Ihrig and Wolla looked at the latest revisions of some prominent intro econ textbooks with 2021 publication dates, like the widely used texts by Mankiw and by McConnell, Brue, and Flynn, they found that these books still emphasize open market operations as the main tool of Fed monetary policy.

The Fed still targets the federal funds interest rate. But as for the earlier methods of policy implementation, there literally is no reserve requirement any more–no requirement that banks hold reserves at the Fed against a certain portion of their deposits. The discount rate, at which the Fed makes direct loans to institutions, has not been a commonly used tool of monetary policy, because it was seen as a sign of financial weakness, perhaps even looming insolvency, if a bank could not get credit from other sources and needed to borrow from the Fed. The Fed still conducts open market operations for various reasons, but they are not the tool used to target the federal funds interest rate.
What changed? In the Great Recession, the Fed reduced its target interest rate–the federal funds interest rate–essentially to zero percent, as shown in the figure. The Fed had hit the "zero lower bound," as economists say, but it still wanted and needed to try to boost the economy.

In particular, during the Great Recession there was deep uncertainty about the true value of certain mortgage-backed securities, and with very few buyers willing to purchase such securities, their value had plunged. Banks and financial institutions that were holding such securities as part of their assets were threatened with insolvency as a result. Thus, the Fed started a program of "quantitative easing," in which the Fed purchased both these mortgage-backed securities and Treasury debt directly. Ihrig and Wolla describe it this way:

This shift in framework was spurred on by actions during the global financial crisis. Between 2008 and 2014 the Fed conducted a series of large-scale asset purchase programs to lower longer-term interest rates, ease broader financial market conditions, and thus support economic activity and job creation. These purchases not only increased the Fed's level of securities holdings but also increased the total level of reserves in the banking system from around $15 billion in 2007 to about $2.7 trillion in late 2014. At this point, reserves were no longer limited but instead became quite plentiful, or "ample."

Bank reserves expanded enormously, as shown in this figure. In turn, this shift from "limited reserves" to the new era of "ample reserves" opened up the possibility for a new tool of monetary policy: the Fed could pay interest on these bank reserves.

In short, the primary tool that the Fed now uses for the conduct of monetary policy is "interest on reserves," often abbreviated as IOR. This isn't hard to explain. The level of reserves moves for two main reasons: whether the Fed is using quantitative easing to buy Treasury debt or mortgage-backed securities directly, and whether banks prefer to hold reserves at the Fed and receive the IOR payments. By moving the IOR up or down, the Fed still seeks to target the federal funds interest rate. For example, the rise in the fed funds rate from about 2016-2018, shown above, was the result of a rise in the interest rate paid on reserves.
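To see the mechanics, here is a minimal sketch in Python of the stylized arbitrage logic behind interest on reserves; the interest rates are hypothetical placeholders, not actual Fed settings.

```python
# Minimal sketch (hypothetical rates): in an ample-reserves world, a bank compares
# lending in the federal funds market with simply parking reserves at the Fed and
# earning interest on reserves (IOR).

def willing_to_lend(offered_rate: float, ior_rate: float) -> bool:
    """A bank only lends reserves overnight if the offer beats the risk-free IOR rate."""
    return offered_rate > ior_rate

ior = 0.015           # 1.5% interest on reserves (illustrative only)
offered_rate = 0.012  # 1.2% offered in the federal funds market

# No bank accepts 1.2% when it can earn 1.5% at the Fed, so the market rate gets
# pulled up toward the administered IOR rate; raising IOR raises the fed funds rate.
print(willing_to_lend(offered_rate, ior))  # False
```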

Thus, if you (or your textbook) are not emphasizing interest paid on reserves as the primary tool of monetary policy, or if (heaven forbid) you are still emphasizing open market operations as the main tool of monetary policy, you are more than a decade out of date. That said, here are some of my remaining questions about how to teach the conduct of monetary policy at the intro level.
1) Should the previous pedagogy from before the ample-reserves era be eliminated from intro-level textbooks? The case for keeping it is partly that many intro-level books teach some recent economic history of the last few decades, and one can explain the shift to the "ample reserves" period. Also, some central banks around the world, like the People's Bank of China, still use tools like altering reserve requirements. But as time passes, the case for dropping the earlier pedagogy grows.
2) Although interest on reserves is the primary tool that the Fed has been using for targeting the federal funds interest rate, there is also a secondary tool: the overnight reverse repurchase agreement facility, which the Fed can use to put a floor under the federal funds interest rate if it does not rise as desired with changes in the interest paid on reserves. Frankly, I despair of teaching the reverse repo market in an intro-level class. Given that the Fed does not see it as the primary tool, it's one of those topics where I would wait for an appropriate student question and be ready to talk about it during office hours. Or maybe it deserves a "box" in a textbook as some additional terminology worth mentioning. But maybe there's a smart teacher out there with a way to make this work for the intro-level class?
3) Although interest on reserves is the Fed's main policy tool for normal times, neither of the last two recessions has been "normal." One big change mentioned above was the shift to quantitative easing. Another change is the shift to "forward guidance," or trying to affect current economic conditions by announcing the future path of Fed interest rate policy. Yet another is emergency lending facilities aimed at certain parts of financial markets, to make sure they will keep functioning even under economic stress. Here's Ihrig and Wolla in the Fed working paper:

For example, during both crises, the Fed conducted large-scale asset purchases to either deliberately push down longer-term interest rates (the motive during the 2007-2009 financial crisis and subsequent recession) or aid market functioning and help foster accommodative financial conditions (the motive during the COVID-19 pandemic). The Fed also made adjustments to existing lending facilities and introduced new, emergency lending facilities to help provide short-term liquidity to banks and other financial institutions. For example, the Fed expanded its currency swap program where it loans dollars to foreign central banks to alleviate dollar funding stresses abroad. It also introduced the Primary Dealer Credit Facility during both stress events, which provided overnight loans to primary dealers and helped foster improved conditions in financial markets.

The current standard practice for teaching the conduct of monetary policy at the intro level is still emerging. At present, my own take is to discuss the shift from limited reserves to ample reserves, to give some sense of how these issues have evolved; as time passes, it will eventually be time to drop the old limited-reserves approach entirely. For now, the tools of monetary policy to teach at the intro level would be interest on reserves, quantitative easing, forward guidance, and emergency (but temporary) lending facilities.

Putting Monetary Values on Health Costs of Coronavirus

W. Kip Viscusi delivered the Presidential Address at the (virtual) Southern Economic Association meetings last November on the subject "Economic Lessons for Coronavirus Risk Policies." The paper is forthcoming in the Southern Economic Journal; for now, it's available at the SSRN website as a Vanderbilt University Law School Working Paper (Number 21-04, January 21, 2021).

Viscusi is known for, as he says early in the paper, attempting to "strike a meaningful balance between risk and costs," even though "[e]conomic analyses in these domains are sometimes challenging and necessarily involve treading on controversial terrain."

Thus, his analysis starts with a standard estimate for the "value of a statistical life" (VSL) of $11 million. The idea of a VSL is discussed here and here. But the key point to understand is that the number is built on actual real-world choices about risk, like how much more people get paid for a riskier job, or what safety requirements for a car or a consumer product are judged to be "acceptable" or "too expensive." For example, say the government proposes a regulation that costs $110 million, but it is projected to reduce risks in a way that saves 11 lives in a city of 1.1 million people. That decision is implicitly saying that it's worthwhile for government to spend $10 million per life saved. This reduction in risk is referred to as a "statistical" life saved. Using the standard measure of $11 million, mortality costs alone for COVID-19 in the US were $3.9 trillion through December 2020.
It\’s a little harder to use these same statistical methods to measure morbidity, rather than mortality, because morbidity covers everything from feeling crappy for a few days to a long and life-threatening hospital stay. But Viscusi uses some studies based, for example,  on what people are willing to spend to avoid hospitalization, and argues that the health costs of morbidity may add 40-50% to costs of mortality.
As one more health-related cost, Viscusi has long argued that when income losses (or costs) are high enough, the reactions to those lower income levels will themselves raise the risks of death, and he cites some of his own recent work to this effect:

[A] loss in income of just over $100 million will lead to one expected death …  This relationship highlights the important link between the performance of the economy and individual health. Shutting down economic activity to limit the pandemic has adverse health consequences in addition to the favorable reduction in risk due to increased social contacts. Income transfers such as unemployment compensation consequently serve a health-related function in dampening these losses.
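As a quick illustration of that income-mortality link (the $1 trillion decline below is my own hypothetical number, chosen only to show the scale of the effect):

```python
# Hypothetical illustration of the income-mortality link quoted above:
# roughly one expected death per ~$100 million of lost income.

income_loss_per_expected_death = 100e6   # ~$100 million, from the passage above
hypothetical_income_decline = 1e12       # a hypothetical $1 trillion loss of income

print(hypothetical_income_decline / income_loss_per_expected_death)  # 10,000 expected deaths
```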

Again, value of a statistical life studies are not based on a social or political judgment about fairness or equity: they are based on empirical studies of what people are actually willing to pay to reduce certain risks (or what compensation they need to receive for taking on those risks). Thus, the calculations lead to Viscusi "treading on controversial terrain," as he puts it.

For example, it is generally true that those with lower incomes will need less compensation for taking on risk. He writes:

There are strong income-related dependencies of the VSL. Many estimates of the income elasticity of the U.S. VSL are in the range of 0.6, and some available estimates are in excess of 1.0. Proper application of these income-related variations in the VSL would reduce the VSL applied to the poor. Many of those most affected by COVID-19 have below average income levels, including essential workers in grocery stores and other retail establishments, as well as those engaged in the production and delivery of food. Similarly, people whose jobs do not permit them to work at home and who must rely on public transit for their commute also are subject to considerable COVID-19 risks. Policies that protect their lives would be less highly valued if there were income adjustments. Given the strong societal interest in maintaining the efforts of such individuals, it is unlikely that there would be support for applying a lower value to their lives, which in turn would lead to less policy emphasis on protecting their well-being.
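To see how an income elasticity of the VSL plays out, here is a minimal sketch; the elasticity values are the 0.6 and 1.0 estimates quoted above, while the reference and worker incomes are hypothetical placeholders.

```python
# Sketch of an income-adjusted VSL: scale the reference VSL by the ratio of a
# group's income to the reference income, raised to the income elasticity.
# The two income figures below are hypothetical placeholders.

def income_adjusted_vsl(base_vsl: float, income: float,
                        reference_income: float, elasticity: float) -> float:
    return base_vsl * (income / reference_income) ** elasticity

base_vsl = 11e6            # $11 million reference VSL
reference_income = 60_000  # hypothetical reference income
worker_income = 30_000     # hypothetical income of a lower-paid essential worker

for elasticity in (0.6, 1.0):
    adjusted = income_adjusted_vsl(base_vsl, worker_income, reference_income, elasticity)
    print(f"elasticity {elasticity}: adjusted VSL = ${adjusted / 1e6:.1f} million")
# With elasticity 0.6, halving income cuts the VSL to about $7.3 million;
# with elasticity 1.0, it falls to $5.5 million.
```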

Similarly, value of a statistical life studies tend to find an inverse U-shape with age. Here's an illustrative figure from Viscusi, based on the empirical evidence:

As Viscusi notes, the question of how to interpret the empirical finding of a different VSL by age has led to political controversies.

Embarking on any age variation in the VSL is a controversial undertaking. In its analysis of the Clear Skies Initiative, the U.S. Environmental Protection Agency (EPA) adopted a downward age adjustment for those age 65 and older, reducing their applicable VSL by 37%. The result was a political firestorm against the “senior discount,” leading to headlines in critical articles such as “Seniors on Sale, 37% off” and “What’s a Granny Worth?” EPA abandoned this approach given the public outcry.

Notwithstanding the potential controversy that arises from even considering the prospect of downward adjustments in the VSL for people who are older, what would the effect of adopting an age-adjusted VSL be on the estimates of the mortality costs of COVID-19? Instead of a VSL of $11 million, for those age 85 the VSL declines to $3 million. The average age-adjusted VSL for the COVID-19 age distribution is $6.3 million if there is no discounting of the VSLY stream and $5.2 million if a discount rate of 3% is applied to the stream of VSLY values. The total effect of accounting for age adjustments to the VSL is to cut the mortality cost estimate roughly in half.
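Here is a minimal sketch of the mechanics behind an age-adjusted VSL built from a stream of values per statistical life-year (VSLY); the VSLY and remaining-life-year numbers are hypothetical placeholders, and only the 3% discount rate is taken from the passage above.

```python
# Sketch: treat the VSL as the discounted sum of a constant value per statistical
# life-year (VSLY) over the remaining expected life years. The VSLY and the
# remaining-life-year figures are hypothetical; only the 3% rate is from the text.

def age_adjusted_vsl(vsly: float, remaining_years: int, discount_rate: float) -> float:
    return sum(vsly / (1 + discount_rate) ** t for t in range(1, remaining_years + 1))

vsly = 500_000   # hypothetical value per statistical life-year

for age, remaining_years in ((45, 35), (85, 6)):
    vsl = age_adjusted_vsl(vsly, remaining_years, 0.03)
    print(f"age {age}: roughly ${vsl / 1e6:.1f} million")
# Fewer (and more heavily discounted) remaining life years is what pushes the
# age-adjusted VSL down sharply for the oldest age groups.
```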

To be clear, Viscusi is not advocating that public policy should place a different value on people's lives by age, income, or any other metric. He writes:

[T]he risk equity concept that I have found to be attractive in many contexts is what I term 'equitable risk tradeoffs.' Thus, rather than equalizing an objective measure of risk or life expectancy levels, the task is to adjust policies to set the risk-money tradeoff equal to a common population-wide VSL.

Some people find the VSL approach to thinking about risk just a useful way of codifying the choices that we are already making; others find it one more example of economics run amok. If you fall into the latter category, you might want to at least consider that when it comes to COVID-19 and future pandemics, Viscusi-style monetary values on mortality and morbidity offer a justification for a much more aggressive public spending response than what the United States has actually done. As one example, Viscusi writes:

Given the tremendous benefits that could be derived by having more adequate medical resources, it is preferable from a benefit-cost standpoint to make provision before health crises arise so that severe rationing is not required for the next pandemic. In anticipation of future pandemics, it is feasible to acquire high-quality ventilators at a cost from $25,000 to $50,000. Adding in the cost of medical support personnel would raise the annual  cost to about $100,000. A reserve supply of ventilators could be a component of an anticipatory pandemic policy. Preparing for future pandemics remains a cost-effective strategy even for annual probabilities of a pandemic on the order of 1/100. However, survey evidence by Pike et al. (2020) suggests that support for protective efforts of this type is unlikely to emerge, as there is a lack of public concern with long-term pandemic risks. As a result, there is likely to be a continued shortfall in preparations for prospective risks, leading to future repetitions of the difficult rationing decisions posed by COVID-19.
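As a rough expected-value check on that argument: the $100,000 annual cost and the 1-in-100 pandemic probability come from the passage above, while the number of statistical lives a stockpiled ventilator might save in a pandemic year is purely my own hypothetical assumption.

```python
# Rough expected-value sketch of the ventilator argument. Annual cost and pandemic
# probability are from the passage above; lives saved per pandemic is hypothetical.

annual_cost = 100_000            # ventilator plus medical support personnel, per year
pandemic_probability = 0.01      # annual pandemic probability on the order of 1/100
vsl = 11e6                       # standard $11 million value of a statistical life
hypothetical_lives_saved = 1.0   # hypothetical statistical lives saved per pandemic

expected_annual_benefit = pandemic_probability * hypothetical_lives_saved * vsl
print(expected_annual_benefit)                  # $110,000 per year
print(expected_annual_benefit > annual_cost)    # True: covers the $100,000 annual cost
```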

In addition, Viscusi points out that a number of medical ethicists in the last year have been talking about how to ration available health care resources in various ways: that is, about who will get certain kinds of care and who won't. Viscusi-style estimates of the value of a statistical life make the case for high levels of government spending to avoid such rationing. As he writes:

If human life is accorded an appropriate monetized value, the application of VSL and efficient principles for controlling risks will lead to greater levels of protection than will result if medical personnel follow the guidance provided by many prominent medical ethicists.

The Coming Evolution of Electric Power in the US

Even readers who are only experiencing the Texas electricity disruptions from afar may wish to consider a new report from an expert panel at the National Academy of Sciences, The Future of Electric Power in the United States (2021, prepublication copy downloadable for free). Here's a summary of a few of the main economic and technological changes facing the US electricity industry.

1) A Potentially Large Increase in Demand for Electricity 

In the last couple of decades, per capita demand for electricity has been fairly flat in the US, while per capita demand for energy has actually declined. This is in part because of greater energy efficiency, and also in part because the US economy has been shifting to service-oriented production that uses less energy for each dollar of output produced.

But looking ahead, at present only about 0.1% of the energy used for US transportation comes from electricity. If the market for electric cars expands dramatically, this will clearly rise–and given that much of the recharging of electric vehicles would happen in homes, the transmission of electricity to residences could rise substantially. Also, while only "12 percent of energy used by industry came from electricity in 2019, almost all deep-decarbonization studies suggest that electrification of certain industrial end uses will be important." More broadly, pretty much all aspects of the digital economy run on electricity. One estimate is that for the world as a whole, "data centers and networks consumed around 380 terawatt hours (TWh) of electricity in 2014–2015, about 2 percent of total electricity demand."
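As a quick consistency check on those data-center numbers (nothing here beyond arithmetic on the figures quoted above):

```python
# Arithmetic check on the data-center figures quoted above.

data_center_twh = 380    # TWh used by data centers and networks, 2014-2015
share_of_total = 0.02    # about 2 percent of total electricity demand

print(data_center_twh / share_of_total)   # implies ~19,000 TWh of world electricity demand
```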

Of course, continual increases in the efficiency of electricity use–say, by large household appliances and new light bulbs–could offset this rise in demand for electricity. On the other side, a shift toward more people using electricity in their homes for all purposes, rather than natural gas or oil, could push electricity demand higher.

2) A Shift in How Electricity is Generated

Even if the demand for electricity doesn't rise, there is an ongoing push toward generating electricity in different ways that produce less or no carbon emissions. As the report says: "An economy that decarbonizes is an economy that electrifies." Here's a graph showing the current sources of electricity in the US.
Obviously, coal and natural gas still dominate electricity generation in the US. As the report notes: "So far, the United States is making modest progress in decarbonizing its power sector primarily through the replacement of coal with natural gas—sometimes called 'shallow decarbonization'—but the path to deep decarbonization is not yet widely agreed upon."

There doesn\’t seem to be much of a push for more nuclear or hydro (although there probably should be), which means that we\’re talking about here is expanding the green and yellow areas to account for a much larger share of electricity generation over the next 2-3 decades. Maybe other technologies like carbon capture and storage or hydrogen can make a difference, too. My sense is that most people (and politicians) have no realistic sense of what would be needed to make solar and wind the main elements of the electrical grid. It\’s not just expanding wind and solar by a factor of maybe 10, with all that implies for building additional solar and wind facilities, but it\’s also building the new transmission lines needed to get the power where we want it to be and building the electricity grid so that it can deal with the intermittent nature of these sources of electricity. 

3) The Changing Grid Edge

The traditional method of producing and selling electricity involved a few big generating facilities: the report calls this the "make it-move it-use it" model. The new method involves decentralized electricity generation from a variety of possible sources, right down to rooftop solar panels that can supply electricity to the grid. In addition, electricity providers are gaining the capability to charge different amounts at different times of day, or for different levels of household usage, or even to cycle your air conditioning or heat on and off at times when demand is especially high. There may also be rising demand for different kinds of electricity provision: for example, homes may want "fast-charging" capability for their electric car, but standard service for the rest of their electricity use. Certain areas may have industrial or information-technology facilities that need especially large quantities of electricity. Issues of physical security of the grid and also cybersecurity of the grid are important, too.
Factors like these mean that the electricity grid itself is becoming enormously more complex to manage, with a much wider range of options for generation, distribution, and usage.  
The report discusses an array of other topics, including jobs that will be lost and gained, global supply chains for the kinds of equipment that will be needed (much of it based in China), effects for people with lower incomes who may have less access to reliable electricity, and so on.
But in my mind, perhaps the key point is recognizing that the evolution of the electricity industry is heavily shaped by physical realities and by regulations at both state and federal levels. With many industries, I'm happy to let private companies struggle in the market to get my business, with a relatively mild background of government regulation. But for most of us, electricity is delivered from a common grid, and I don't have a choice of competing providers. The rules for how my power company will buy power, and how it determines what it will charge me, are heavily regulated and largely opaque to the public. Questions about where new generating facilities and new transmission lines will be built often seem to involve one separate dispute at a time, rather than being part of an overall strategy. The incentives for a given electricity utility to invest in research and development, or to experiment with different kinds of service provision, given the realities of regulation, can be quite limited.
In short, the electricity market is facing needs for transformational change in a regulatory and economic context where change has traditionally been constricted and piecemeal. It's all too easy to imagine how pressures for change and limits on change will collide.