Mission Creep for Bank Regulators and Central Banks

The standard argument for government regulators who supervise bank risk is this: if banks take on too much risk in pursuit of short-term profits, raising the chance that they become insolvent, the dangers extend beyond the banks themselves to bank depositors, the supply of credit in the economy, and other intertwined financial institutions. To put it another way, if the government is likely to end up bailing out individuals, firms, or the economy itself, then the government has a reason to check on how much risk is being taken.

But what if countries start to load up bank regulators with a few other goals at the same time? What tradeoffs might emerge? Sasin Kirakul, Jeffery Yong, and Raihan Zamil describe the situation in “The universe of supervisory mandates – total eclipse of the core?” (Financial Stability Institute Insights on policy implementation No 30, March 2021).

Specifically, they look at bank regulators across 27 jurisdictions. In about half of these, the central bank also has the job of bank supervision; in the other half, a separate regulatory agency has the job. In all these jurisdictions, the bank regulators are to focus on “safety and soundness.” But the authors identify 13 other jobs that are simultaneously being assigned to bank regulators–and they note that most bank regulators have at least 10 of these other jobs. They suggest visualizing the responsibilities with this diagram:

The basic goal of supporting the public interest is at the bottom, with the core idea of safety and soundness of banking institutions right above. This is surrounded by five of what they call “surveillance and oversight” goals: financial stability; crisis management; AML/CFT, which stands for anti-money laundering/combating the financing of terrorism; resolution, which refers to closing down insolvent banks; and consumer protection. The outer semicircle then includes seven “promotional objectives,” which refer to promoting financial sector development, financial literacy, financial inclusion, competition in the financial sector, efficiency, facilitating financial technology and innovation, and positioning the domestic market as an international financial center. Then off to the right you see “climate change,” which can be viewed as either an oversight/surveillance goal (that is, are banks and financial institutions taking these risks into account?) or a promotional goal (is sufficient capital flowing to this purpose?).

There are ongoing efforts to add just a few more items to the list. For example, some economists at the IMF have argued that central banks like the Federal Reserve should go beyond the monetary policy issues of looking at employment, inflation, and interest rates, and also beyond the financial regulation responsibilities that many of them already face, and should also look at trying to address inequality. 

For the United States, the current statutory goals for financial regulators include safety and soundness as well as the first five surveillance and oversight goals–although in the US setting these goals are somewhat divided between different agencies like the Federal Reserve, the Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation. There are also statutory directives for certain agencies to pursue consumer protection and financial inclusion, and non-statutory mandates to promote financial literacy and fintech/innovation and to take climate change concerns into account in some way.

In some situations, of course, these other goals can reinforce the basic goal of safety and soundness in banking. In other situations, not so much. For example, during a time of economic crisis, should the financial regulator be pressing hard to make sure all banks are safe and sound, or should it give them a bit more slack at that time? Does “developing the financial sector” mean building up certain banks to be more profitable, while perhaps charging consumers more? What if promoting fintech/innovation could cause some banks to become weaker, thus reducing their safety and soundness and perhaps leading to less competition? Does the climate change goal involve bank regulators in deciding which particular firms or industries are “safe” or “risky” borrowers, and thus who will receive credit?

There’s a standard problem that when you start aiming at many different goals all at once, you often face tradeoffs between those goals. For example, imagine a person planning a dinner with the following goals: tastes appealing to everyone; also tastes different and interesting; includes fiber, protein, vitamins, and all needed nutrients; low calorie; locally sourced; easily affordable; can be prepared with no more than one hour of cooking time; and freezes well for leftovers. All the goals are worthy ones, and with some effort, one can often find a compromise solution that fits most of them. But you will almost certainly need to do less on some of the goals to make it possible to meet others. (Pre-pandemic, one of the last dinner parties my wife and I gave was for guests who between them were vegetarian, gluten-free, dairy-free, and no beans or legumes. Talk about compromises on the menu!)

In the case of the regulators who supervise banks, the more tasks you give them to do, the less attention and energy they will inevitably have for the core \”safety and soundness\” regulation. Also, more goals typically mean that the regulators have more discretion when trading off one objective against another, and thus it becomes harder to hold them to account. Those who need to aim at a dozen or more different targets are likely to end up missing at least some of them, much of the time. 

Measuring Teaching Quality in Higher Education

For every college professor, teaching is an important part of the job. For most college professors, who are not located at the relatively few research-oriented universities, teaching is the main part of the job. So how can we evaluate whether teaching is being done well or poorly? This question applies both at the individual level and for bigger institutional questions: for example, are faculty with lifetime tenure, who were granted tenure in substantial part for their performance as researchers, better teachers than faculty with short-term contracts? David Figlio and Morton Schapiro tackle such questions in “Staffing the Higher Education Classroom” (Journal of Economic Perspectives, Winter 2021, 35:1, 143-62).

The question of how to evaluate college teaching isn’t easy. For example, there are no annual exams as often occur at the K-12 level, nor are certain classes followed by a common exam like the AP exams in high school. My experience is that the faculty at colleges and universities are not especially good at self-policing of teaching. In some cases, newly hired faculty get some feedback and guidance, and there are hallway discussions about especially awful teachers, but that’s about it. Many colleges and universities have questionnaires on which students can evaluate faculty. This is probably a better method than throwing darts in the dark, but it is also demonstrably full of biases: students may prefer easier graders, classes that require less work, or classes with an especially charismatic professor. There is a developed body of evidence that white American faculty members tend to score higher. Figlio and Schapiro write:

Concerns about bias have led the American Sociological Association (2019) to caution against over-reliance on student evaluations of teaching, pointing out that “a growing body of evidence suggests that their use in personnel decisions is problematic” given that they “are weakly related to other measures of teaching effectiveness and student learning” and that they “have been found to be biased against women and people of color.” The ASA suggests that “student feedback should not be used alone as a measure of teaching quality. If it is used in faculty evaluation processes, it should be considered as part of a holistic assessment of teaching effectiveness.” Seventeen other scholarly associations, including the American Anthropological Association, the American Historical Association, and the American Political Science Association, have endorsed the ASA report …

Figlio and Schapiro suggest two measures of effective teaching for intro-level classes: 1) how many students from a certain intro-level teacher go on to become majors in the subject, and 2) “deep learning,” which is a combination of how many students in an intro-level class go on to take any additional classes in the subject and whether students from a certain teacher tend to perform better in those follow-up classes. The authors are based at Northwestern University, and so they were able to obtain “registrar data on all Northwestern University freshmen who entered between fall 2001 and fall 2008, a total of 15,662 students, and on the faculty who taught them during their first quarter at Northwestern.”

Of course, Figlio and Schapiro emphasize that their approach is focused on Northwestern students, who are not a random cross-section of college students. The methods they use may need to be adapted in other higher-education contexts. In addition, this focus on first-quarter teaching of first-year students is an obvious limitation in some ways, but given that the first quarter may also play an outsized role in the adaptation of students to college, it has some strengths, too. In addition, they focus on comparing faculty within departments, so that econ professors are compared to other econ professors, philosophy professors to other philosophy professors, and so on. But with these limitations duly noted, they offer what might be viewed as preliminary findings that are nonetheless worth considering. 
For example, it seems as if their two measures of teaching quality are not correlated: “That is, teachers who leave scores of majors in their wake appear to be no better or worse at teaching the material needed for future courses than their less inspiring counterparts; teachers who are exceptional at conveying course material are no more likely than others to inspire students to take more courses in the subject area. We would love to see if this result would be replicated at other institutions.” This result may capture the idea that some teachers are “charismatic” in the sense of attracting students to a subject, but that those same teachers don’t teach in a way that helps student performance in future classes.
They measure the quality of research done by tenured faculty using measures of publications and professional awards, but find: “Our bottom line is, regardless of our measure of teaching and research quality, there is no apparent relationship between teaching quality and research quality.” Of course, this doesn’t mean that top researchers in the tenure track are worse teachers; just that they aren’t any better. They cite other research backing up this conclusion as well.
This finding raises some awkward questions, as Figlio and Schapiro note: 

But what if state legislators take seriously our finding that while top teachers don’t sacrifice research output, it is also the case that top researchers don’t teach exceptionally well? Why have those high-priced scholars in the undergraduate classroom in the first place? Surely it would be more cost-efficient to replace them in the classroom either with untenured, lower-paid professors, or with faculty not on the tenure-line in the first place. That, of course, is what has been happening throughout American higher education for the past several decades, as we discuss in detail in the section that follows. And, of course, there’s the other potentially uncomfortable question that our analysis implies: Should we be concerned about the possibility that the weakest scholars amongst the tenured faculty are no more distinguished in the classroom than are the strongest scholars? Should expectations for teaching excellence be higher for faculty members who are on the margin of tenurability on the basis of their research excellence?

Figlio and Schapiro then extend their analysis to the teaching quality of non-tenure track faculty. Their results here do need to be interpreted with care, given that non-tenure contract faculty at Northwestern often operate with three-year renewable contracts, and most faculty in this category are in their second or later contract. They write:

Thus, our results should be viewed in the context of where non-tenure faculty at a major research university function as designated teachers (both full-time and part-time) with long-term relationships to the university. We find that, on average, tenure-line faculty members do not teach introductory undergraduate courses as well as do their (largely full-time, long-term) contingent faculty counterparts. In other words, our results suggest that on average, first-term freshmen learn more from contingent faculty members than they do from tenure track/tenured faculty. 

When they look more closely at the distribution of these results, they find that the overall average advantage of Northwestern’s contingent faculty mainly arises because a certain number of tenured faculty at the bottom tail of the distribution seem to be terrible at teaching first-year students. As Figlio and Schapiro point out, any contract faculty who were terrible and at the bottom tail of the teaching distribution are likely to be let go–and so they don’t appear in the data. Thus, the lesson here would be that institutions should have greater awareness of the possibility that a small share of tenure-track faculty may be doing a terrible job in intro-level classes–and get those faculty reassigned somewhere else.
This study obviously leaves a lot of questions unanswered. For example, perhaps the skills needed to be a top teacher in an intro-level class are different from the skills needed to teach an advanced class. Maybe top researchers do better in teaching advanced classes? Or perhaps top researchers offer other benefits to the university (grant money, public recognition, connectedness to the frontier concepts in a field) that have additional value? But the big step forward here is to jumpstart more serious thinking about how it’s possible to develop some alternative quantitative measures of teacher quality that don’t rely on subjective evaluations by other faculty members or on student questionnaires.
One other study I recently ran across along these lines uses data from the unique academic environment of the US Naval Academy, where students are required to take certain courses from randomly assigned faculty. Michael Insler, Alexander F. McQuoid, Ahmed Rahman, and Katherine Smith discuss their findings in “Fear and Loathing in the Classroom: Why Does Teacher Quality Matter?” (January 2021, IZA DP No. 14036). They write:

Specifically, we use student panel data from the United States Naval Academy (USNA), where freshmen and sophomores must take a set of mandatory sequential courses, which includes courses in the humanities, social sciences, and STEM disciplines. Students cannot directly choose which courses to take nor when to take them. They cannot choose their instructors. They cannot switch instructors at any point. They must take the core sequence regardless of interest or ability. In addition:

Due to unique institutional features, we observe students’ administratively recorded grades at different points during the semester, including a cumulative course grade immediately prior to the final exam, a final exam grade, and an overall course grade, allowing us to separately estimate multiple aspects of faculty value-added. Given that instructors determine the final grades of their students, there are both objective and subjective components of any academic performance measure. For a subset of courses in
our sample, however, final exams are created, administered, and graded by faculty who do not directly influence the final course grade. This enables us to disentangle faculty impacts on objective measures of student learning within a course (grade on final exam) from faculty-specific subjective grading practices (final course grade). Using the objectively determined final exam grade, we measure the direct impact of the instructor on the knowledge learned by the student.
To unpack this just a bit: the researchers can look at test scores specifically, which can be viewed as a “hard” measure of what is learned. But when instructors give a grade for a class, the instructor has some ability to add a subjective component in determining the final grade. For example, one can imagine that a certain student made great progress in improving study skills, or that a student had some reason for underperforming on the final (perhaps relative to earlier scores on classwork), and the professor did not want to overly penalize them.
One potential concern here is that some faculty might “teach to the test,” in a way that makes the test scores of their students look good but doesn’t do as much to prepare the students for follow-up classes. Another potential concern is that when faculty depart from the test scores in giving their final grades, they may be giving students a misleading sense of their skills and preparation in the field–and thus setting those students up for disappointing performance in the follow-up class. Here is the finding from Insler, McQuoid, Rahman, and Smith:
We find that instructors who help boost the common final exam scores of their students also boost their performance in the follow-on course. Instructors who tend to give out easier subjective grades however dramatically hurt subsequent student performance. Exploring a variety of mechanisms, we suggest that instructors harm students not by “teaching to the test,” but rather by producing misleading signals regarding the difficulty of the subject and the “soft skills” needed for college success. This effect is stronger in non-STEM fields, among female students, and among extroverted students. Faculty that are well-liked by students—and thus likely prized by university administrators—and considered to be easy have particularly pernicious effects on subsequent student performance.

Again, this result is based on data from a nonrepresentative academic institution. But it does suggest some dangers of relying on contemporaneous popularity among students as a measure of teaching performance. 

Carbon Capture and Storage: The Negative Carbon Option?

There used to be one coal-fired electricity generating plant in the US using carbon capture and storage (CCS) technology: the Petra Nova plant outside of Houston, Texas. It’s now been shut down. It’s not that the plant was a roaring technology success; for example, the process for scrubbing out the carbon required so much energy that the company had to build a separate natural-gas power plant just for that purpose. Still, I was sorry to see it go. There are other US plants, not coal-fired, learning about carbon capture and storage. But the way to learn about new technologies is to use them at scale.

Here, I’ll take a look at the Global Status of CCS 2020 report from the Global CCS Institute (December 2020) and the Special Report on Carbon Capture Utilisation and Storage: CCUS in clean energy transitions from the International Energy Agency (September 2020). These reports make no effort to oversell carbon capture and storage. Instead, the argument is that in specific locations and for specific purposes, carbon capture and storage technology could be a useful or even a necessary part of reducing carbon emissions.

Brad Page, chairman of the Global CCS Institute, notes: “Just considering the role for CCS implicit in the IPCC 1.5 Special Report, somewhere between 350 and 1200 gigatonnes of CO2 will need to be captured and stored this century. Currently, some 40 megatonnes of CO2 are captured and stored annually. This must increase at least 100-fold by 2050 to meet the scenarios laid out by the IPCC.” Nicholas Stern adds: “We have long known that CCUS will be an essential technology for emissions reduction; its deployment across a wide range of sectors of the economy must now be accelerated.”
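The scale-up implied by those numbers is easy to verify with back-of-envelope arithmetic (my own illustrative calculation, using only the figures quoted above; the 80-year horizon is a rough assumption for "this century" as seen from 2020):

```python
# Figures from Page's quote above.
current_capture_mt = 40        # Mt of CO2 captured and stored per year today
cumulative_low_gt = 350        # low end of cumulative storage needed, Gt
cumulative_high_gt = 1200      # high end, Gt
years = 80                     # rough span from 2020 to 2100

avg_low_mt = cumulative_low_gt * 1000 / years     # required average, Mt/year
avg_high_mt = cumulative_high_gt * 1000 / years

print(f"Average annual capture needed: {avg_low_mt:,.0f} to {avg_high_mt:,.0f} Mt/year")
print(f"Roughly {avg_low_mt / current_capture_mt:.0f}x to "
      f"{avg_high_mt / current_capture_mt:.0f}x today's capacity")
```

Even the low end of the IPCC range works out to roughly 109 times today's annual capture, consistent with Page's "at least 100-fold."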

The basic point here is that even if there can be an enormous jump in non-carbon energy production for most purposes, there are likely to remain a few uses where it is extremely costly to substitute away from fossil fuels. Common examples include the iron, steel, and concrete industries, as well as back-up power-generating facilities that are needed for stabilizing power grids. For those purposes, carbon capture and storage technology can keep the resulting emissions as low as possible. Carbon capture and storage might also have a role to play in a shift to hydrogen technology: hydrogen generates electricity without carbon, but using coal or natural gas to make the hydrogen is not carbon-free. Moreover, it would be useful to have at least a few energy technologies that are carbon-negative. Examples would include combining biofuels with carbon capture and storage technology, or perhaps, in certain locations, using a cheap but local noncarbon energy source (say, geothermal energy) to capture carbon from the air.

The IEA report summarizes the current situation in the US for carbon capture and storage technology this way: 

The United States is the global leader in CCUS development and deployment, with ten commercial CCUS facilities, some dating back to the 1970s and 1980s. These facilities have a total CO2 capture capacity of around 25 Mt/year – close to two-thirds of global capacity. Another facility in construction has a capture capacity of 1.5 Mt/year of CO2, and there are at least another 18-20 planned projects that would add around 46 Mt/year were they all to come to fruition. Most existing CCUS projects in the United States are associated with low-cost capture opportunities, including natural gas processing (where capture is required to meet gas quality specifications) and the production of synthetic natural gas, fertiliser, hydrogen and bioethanol. One project – Petra Nova – captures CO2 from a retrofitted coal-fired power plant for use in EOR though operations were suspended recently due to low oil prices. …  All but one of the ten existing projects earn revenues from the sale of the captured CO2 for EOR operations. There are also numerous pilot- and demonstration-scale projects in operation as well as significant CCUS R&D activity, including through the Department of Energy’s National Laboratories.

I found the IEA discussion of potential options for removing carbon from the atmosphere to be especially interesting. As they state: “Carbon removal is also often seen as a way of producing net-negative emissions in the second half of the century to counterbalance excessive emissions earlier on. This feature of many climate scenarios however should not be interpreted as an alternative to cutting emissions today or a reason to delay action.”

Basically, there are nature-based and technology-based options. The nature-based solutions involve finding ways to absorb more carbon in plants, soil, and oceans. The main technology solutions are bioenergy carbon capture and storage, commonly abbreviated as BECCS, and direct air capture with storage, often abbreviated as DACS. The IEA writes:

While all these approaches can be complementary, technology solutions can offer advantages over nature-based solutions, including the verifiability and permanency of underground storage; the fact that they are not vulnerable to weather events, including fires that can release CO2 stored in biomass into the atmosphere; and their much lower land area requirements. BECCS and DACS are also at a more advanced stage of deployment than some carbon removal approaches. Land management approaches and afforestation/reforestation are at the early adoption stage and their potential is limited by land needs for growing food. Other non-technological approaches – such as enhanced weathering, which involves the dissolution of natural or artificially created minerals to remove CO2 from the atmosphere, and ocean fertilisation/alkalinisation, which involves adding alkaline substances to seawater to enhance the ocean’s ability to absorb carbon – are only at the fundamental research stage. Thus, their carbon removal potentials, costs and environmental impact are extremely uncertain.

Here are a few words from the IEA on BECCS and on DACS:

BECCS involves the capture and permanent storage of CO2 from processes where biomass is converted to energy or used to produce materials. Examples include biomass-based power plants, pulp mills for paper production, kilns for cement production and plants producing biofuels. Waste-to-energy plants may also generate negative emissions when fed with biogenic fuel. In principle, if biomass is grown sustainably and then processed into a fuel that is then burned, the technology pathway can be considered carbon-neutral; if some or all of the CO2 released during combustion is captured and stored permanently, it is carbon negative, i.e. less CO2 is released into the atmosphere than is removed by the crops during their growth. … The most advanced BECCS projects capture CO2 from ethanol production or biomass-based power generation, while industrial applications of BECCS are only at the prototype stage. There are currently more than ten facilities capturing CO2 from bioenergy production around the world. The Illinois Industrial CCS Project, with a capture capacity of 1 MtCO2/yr, is the largest and the only project with dedicated CO2 storage, while other projects, most of which are pilots, use the captured CO2 for EOR [enhanced oil recovery] or other uses. …

A total of 15 DAC plants are currently operating in Canada, Europe, and the United States. … Most of them are small-scale pilot and demonstration plants, with the CO2 diverted to various uses, including for the production of chemicals and fuels, beverage carbonation and in greenhouses, rather than geologically stored. Two commercial plants are currently operating in Switzerland, selling CO2 to greenhouses and for beverage carbonation. There is only one pilot plant, in Iceland, currently storing the CO2: the plant captures CO2 from air and blends it with CO2 captured from geothermal fluid before injecting it into underground basalt formations, where it is mineralised, i.e. converted into a mineral. In North America, both Carbon Engineering and Global Thermostat have been operating a number of pilot plants, with Carbon Engineering (in collaboration with Occidental Petroleum) currently designing what would be the world’s largest DAC facility, with a capture capacity of 1 MtCO2 per year, for use in EOR [enhanced oil recovery] …

Reducing carbon emissions isn’t likely to happen through any single solution, but rather through a portfolio of actions. It seems to me that carbon capture and storage has a small but meaningful place in that portfolio. For a couple of earlier posts on this technology, see:

Will Workers Disperse from Cities?

Predictions that technology shifts will cause urban job concentrations to disperse have been made a number of times in the last half-century or so. The predictions always sound plausible. But up until the pandemic, the predictions kept not happening.

Here’s an example from a 1995 book, City of Bits, by an MIT professor of architecture named William J. Mitchell. He wrote a quarter-century ago, while also making references to predictions a quarter-century before that (footnotes omitted):

As information work has grown in volume and importance, and as increasingly efficient transportation and communication systems have allowed separation of offices from warehouses and factories, office buildings at high-priced central business district (CBD) locations have evolved into slick-skinned, air-conditioned, elevator-serviced towers. These architecturally represent the power and prestige of information-work organizations (banks, insurance companies, corporate headquarters of business and industrial organizations, government bureaucracies, law, accounting, and architectural firms, and so on) much as a grand, rusticated palazzo represented the importance of a great Roman, Florentine, or Sienese family. … 

From this follows a familiar, widely replicated, larger urban pattern–one that you can see (with some local variants) from London to Chicago to Tokyo. The towers cluster densely at the most central, accessible locations in transportation networks. Office workers live in the lower-density suburban periphery and commute daily to and from their work.  … 

The bonding agent that has held this whole intricate structure together (at every level, from that of the individual office cubicle to that of CBDs and commuter rail networks) is the need for face-to-face contact with coworkers and clients, for close proximity to expensive information-processing equipment, and for access to information held at the central location and available only there. But the development of inexpensive, widely distributed computational capacity and of pervasive, increasingly sophisticated telecommunications systems has greatly weakened the adhesive power of these former imperatives, so that chunks of the old structure have begun to break away and then to stick together again in new sorts of aggregations. We have seen the emergence of telecommuting, “the partial or total substitution of telecommunication, with or without the assistance of computers, for the twice-daily commute to/from work.”

Gobs of “back office” work can, for example, be excised from downtown towers and shifted to less expensive suburban or exurban locations, from which locally housed workers remain in close electronic contact with the now smaller but still central and visible head offices. These satellite offices may even be transferred to other towns or to offshore locations where labor is cheaper. (Next time you pay your credit card bill or order something from a mail-order catalogue, take a look at the mailing address. You’ll find that the envelope doesn’t go to a downtown location in a major city, but more likely to an obscure location in the heartland of the country.)

The bedroom communities that have grown up around major urban centers also provide opportunities for establishing telecommuting centers – small, Main Street office complexes with telecommunications links to central offices of large corporations or government departments. As a consequence, commuting patterns and service locations also begin to change; a worker might bicycle to a suburban satellite office cluster or telecommuting center, for example, rather than commute by car or public transportation to a
downtown headquarters. Another strategy is to create resort offices, where groups can retreat for a time to work on special projects requiring sustained concentration or higher intellectual productivity, yet retain electronic access to the information resources of the head office. This idea has interested Japanese corporations, and prototypes have been constructed at locations such as the Aso resort area near Kumamoto …

More radically, much information work that was traditionally done at city-center locations can potentially be shifted back to network-connected, computer-equipped, suburban or even rural homes. Way back in the 1960s, well before the birth of the personal computer, James Martin and Adrian R. D. Norman could see this coming. They suggested that “we may see a return to cottage industry, with the spinning wheel replaced by the computer terminal” and that “in the future some companies may have almost no offices.” The OPEC oil crisis of 1973 motivated some serious study of the economics of home-based telecommuting. Then the strategy was heavily promoted by pop futurologists of the Reaganite eighties, who argued that it would save workers the time and cost of commuting while also saving employers the cost of space and other overhead. The federal Clean Air Act amendments of 1990, which required many businesses with a hundred or more employees to reduce the use of cars for commuting, provided further impetus. …

In the 1960s and early 1970s, as the telecommunications revolution was rapidly gaining momentum, some urbanists leaped to the conclusion that downtowns would soon dissolve as these new arrangements took hold. Melvin Webber, for example, predicted: \”For the first time in history, it might be possible to locate on a mountain top and to maintain intimate, real-time and realistic contact with business or other associates. All persons tapped into the global communications net would have ties approximating those used today in a given metropolitan region.\” …

But the prophets of urban dissolution underestimated the inertia of existing patterns, and the reality that has evolved in the 1980s and 1990s is certainly more complex than they imagined. The changing relative costs of telecommunication and transportation have indeed begun to affect the location of office work. But weakening of the glue that once firmly held office downtowns together turns out to permit rather than determine dispersal; the workings of labor and capital markets and the effects of special local conditions often end up shaping the locational patterns that actually emerge from the shakeup.

I love the passage in part because it starts off in the first paragraph talking about how dense central business districts \”represent the power and prestige of information-work organizations,\” which makes it sound as if downtown urban areas are nothing but an ego trip for top executives, but then ends with some comments about how economic factors (\”labor and capital markets\”) actually end up shaping the results. 
The economic patterns of big cities have changed. I have discussed \”How Cities Stopped Being Ladders of Opportunity\” (January 19, 2021): in recent decades, big cities have remained places where the more-educated could earn higher wages, but they have stopped being places where the less-educated could earn higher wages. 
Moreover, when Mitchell in his 1995 book referred to \”the need for face-to-face contact with coworkers and clients,\” he was seeing only part of the picture. Yes, contact with coworkers and clients within a firm matters, but it\’s also true that firms of a certain type often bunch together geographically. It seems important to be located near the workers and clients of other firms, too. I\’ve written a bit about this \”economics of density,\” and offer some links, in \”Cities as Economic Engines: Is Lower Density in Our Future\” (August 14, 2020). 
 
Hannah Rubinton offers another piece of evidence in \”Business Dynamism and City Size\” (Economic Synopses: Federal Reserve Bank of St. Louis, 2021, Number 4). The points represent data for individual cities. The horizontal axis shows the population of the city. The vertical axis of the top panel shows the \”establishment entry rate,\” which is the rate at which new business establishments are started in a city. An \”establishment\” covers both a new business and a new location of part of an existing firm. In the bottom panel, the vertical axis shows the \”establishment exit rate.\” The payoff for these figures is that if you plot the data for 1982, you can see that larger cities tended to have lower rates of entry and exit (the solid lines slope down), but by 2018 the larger cities tended to have higher rates of entry and exit (the dashed lines slope up).
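As a side note on measurement, the entry and exit rates in Rubinton\’s figures are simple shares of a city\’s stock of establishments, which can be put as a quick calculation (the counts below are hypothetical, for illustration only, not data from her figures):

```python
# Illustrative calculation of establishment entry and exit rates.
# The counts are hypothetical, not data from Rubinton's figures.

def entry_rate(new_establishments: int, total_establishments: int) -> float:
    """Share of a city's establishments newly opened during the year."""
    return new_establishments / total_establishments

def exit_rate(closed_establishments: int, total_establishments: int) -> float:
    """Share of a city's establishments that closed during the year."""
    return closed_establishments / total_establishments

# A hypothetical city with 50,000 establishments, 5,500 openings, 4,800 closings:
print(f"entry rate: {entry_rate(5_500, 50_000):.1%}")  # 11.0%
print(f"exit rate:  {exit_rate(4_800, 50_000):.1%}")   # 9.6%
```

A \”dynamic\” city, in this usage, is simply one where both shares are high.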

This pattern reflects that in the last few decades, a substantial part of economic dynamism, productivity growth, and wage growth has been happening in the larger cities. As Rubinton notes: 

At the same time, large and small cities have diverged on several important dimensions: Large cities increasingly have a more educated workforce and offer higher wage premiums for skilled workers. Given that dynamism is important for productivity and economic growth, the differential changes in dynamism across cities could be important to understanding the divergence in wages and skill-composition between large and small cities. … [T]hese patterns are consistent with competition becoming tougher in large cities relative to small cities. Large cities have become more congested than they were in 1980: As population has grown and technology has improved, rents and wages have increased. Less-productive firms that cannot afford the higher prices are more likely to exit, leaving room for new firms to enter.

Maybe the aftereffects of the pandemic will change all this. I tend to believe that some of the shift to telecommuting in this last year will persist. But I\’m also very aware that predictions about how jobs \”can potentially be shifted back to network-connected, computer-equipped, suburban or even rural homes\” have been around for decades. Yet downtown business districts and other clusters of economic activity continue to persist and grow, which suggests strong underlying economic forces at work. 

Negative Interest Rates: Practical, but Limited

For a lot of people, the idea of negative interest rates sounds as if it must violate some law of nature, like a perpetual motion machine. Why would any depositor put money into an investment that promised a negative return? Well, starting back in 2012, a substantial number of central banks around the world – including the European Central Bank, the Swiss National Bank, the Bank of Japan, and the Sveriges Riksbank (the central bank of Sweden) – have pushed the specific interest rates on which they focus monetary policy into negative territory. Luís Brandão-Marques, Marco Casiraghi, Gaston Gelos, Güneş Kamber, and Roland Meeks offer an overview of the experience in \”Negative Interest Rates: Taking Stock of the Experience So Far\” (IMF Monetary and Capital Markets Department, 21-03, March 2021). 

Perhaps the most obvious question about a negative interest rate is why depositors would put money in the bank at all. The short answer is that banks provide an array of financial services to both businesses and individual customers (ease of electronic payments, not needing to hold large amounts of cash, access to credit, and so on). As a customer, you can pay for those services with some combination of fees and lower interest rates. It\’s easy to imagine a situation where slightly negative interest rates are offset by changes in other fees or contractual arrangements. 
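To make that offset concrete, here is a minimal sketch, with hypothetical numbers, of how a slightly negative deposit rate can leave a depositor in exactly the same place as a zero rate plus an account fee:

```python
# Hypothetical comparison: a zero-rate account with an annual fee versus
# a fee-free account with a slightly negative rate. All numbers illustrative.

deposit = 100_000.0

# Option A: 0% interest, $500 annual account fee
balance_a = deposit * (1 + 0.00) - 500.0

# Option B: -0.5% interest, no fee
balance_b = deposit * (1 - 0.005)

print(balance_a)  # 99500.0
print(balance_b)  # 99500.0 -- the depositor ends up in the same place
```

The economics of the two accounts are identical; only the label on the charge differs.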

It\’s also worth remembering that the fact of a negative interest rate in real terms is not actually new at all. At many times in the past, people have experienced negative real interest rates on their bank deposits–that is, when the inflation rate is higher than the nominal interest rate, the real interest rate is negative. 
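That point is just the Fisher relation: the real rate is approximately the nominal rate minus inflation. A quick check with illustrative numbers (a 1% nominal deposit rate and 3% inflation, both assumptions chosen for the example):

```python
# Real return on a deposit: exact Fisher relation and the common approximation.
# Rates are illustrative: a 1% nominal deposit rate with 3% inflation.

nominal = 0.01
inflation = 0.03

real_approx = nominal - inflation                 # nominal minus inflation
real_exact = (1 + nominal) / (1 + inflation) - 1  # exact Fisher relation

print(f"approximate real rate: {real_approx:.2%}")  # -2.00%
print(f"exact real rate:       {real_exact:.2%}")   # -1.94%
```

Either way, the depositor loses purchasing power, even though the nominal rate is comfortably positive.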

Of course, if the bank interest rates became too negative, then depositors would indeed move away from banks and toward cash or other alternative investments. What is the \”effective lower bound\” for a negative interest rate? The answer will depend on various assumptions about the financial system, including costs of setting up companies that store large amounts of cash, but here\’s a set of estimates from various studies. 

Given that slightly negative bank interest rates are clearly possible, what are their benefits and risks? 

On the benefit side, when a central bank acts to lower interest rates, its goal is to stimulate the economy, both to encourage growth and also to raise inflation if that rate is below the desired target level (often set at 2%). Thus, the key question is whether moving the bank policy interest rate below zero does provide some additional macroeconomic boost. The specific effects of negative interest rates aren\’t easy to sort out on their own, because central banks that have moved their policy interest rate into negative territory have also been carrying out other unconventional monetary policies, like quantitative easing, setting explicit forward guidance for what monetary policy will be in the near-term or middle-term future, or intervening in exchange rate markets. But as this report summarizes the evidence: \”For instance, the transmission mechanism of monetary policy does not appear to change significantly when official rates become negative.\”

As one example of how negative interest rate policies reduce actual interest rates, this figure shows how the returns on euro-denominated government debt have dipped into negative territory. 

On the risk side, perhaps the main danger was that negative interest rates would cause large losses for banks or other major players in the financial system like money market funds. But again, at least so far, these problems have not emerged. The report notes:  

Overall, most of the theoretical negative side effects associated with NIRP [negative interest rate policies] have failed to materialize or have turned out to be less relevant than expected. Economists and policymakers have identified a number of potential drawbacks of NIRP, but none of them have emerged with such an intensity as to tilt the cost-benefit analysis in favor of removing this instrument from the central bank toolbox.  … [O]verall, bank profitability has not significantly suffered so far …  and banks do not appear to have engaged in excessive risk-taking. Of course, these side effects may still arise if NIRP remains in place for a long time or policy rates go even more negative, approaching the reversal rate.

However, it\’s worth noting that the negative interest rates in place have mainly affected large institutional depositors. For households and retail investors, banks have tried to keep the interest rates they receive in slightly positive territory–but have also adjusted other fees and charges in ways that have sustained bank profitability. As the report notes: \”Banks seem to respond to NIRP by increasing fees on retail deposits, while passing on negative rates partly to firms.\”

The IMF report also identifies areas where research on negative interest rates policies has been limited. 

The literature so far has largely overlooked the impact of negative interest rates on financial intermediaries other than banks. Although pension funds and insurance companies do not typically offer overnight deposits and thus the constraint on lowering the corresponding rates below zero is not an issue, other non-linearities may arise when market rates become negative. Among others, legal or behavioral constraints to offering negative nominal returns could affect the profitability of nonbanks. Given the importance of these institutions for the financial system, the absence of empirical evidence on the impact of negative rates on their behavior is surprising. ..

Another interesting direction for future research is to further study the determinants of the corporate channel identified by Altavilla and others (2019b). According to this channel, cash-rich firms with relationships with banks that charge negative rates on deposits are more likely to use their liquidity to increase investment. What drives this channel is still unclear. For instance, the role of multiple bank relationships could be investigated. If cash-rich firms can easily move their liquidity across financial institutions (including nonbanks), then negative rates on corporate deposit may simply lead these firms to reallocate their liquidity across intermediaries, without any significant impact on investment. By contrast, frictions that prevent firms from easily establish new bank relationships, and thus move their funds around, could induce a reallocation from corporate deposits to other less liquid assets, such as fixed capital.

In my reading, the general tone of the IMF report is that negative interest rate policies have been modestly useful and without worrisome negative side effects, but also that central banks have a number of other options for unconventional monetary policy, and while this particular option should be in the toolkit, it perhaps should not be pushed too hard.  

For some additional discussion of negative interest rates, including the previous IMF staff report on the subject back in 2017 and various other sources, starting points include:

Retail Investors Show "Exuberance"

The word \”exuberance\” has a special meaning for investors. Back in 1996, then-Federal Reserve chair Alan Greenspan gave a speech as stock prices rose during the \”dot-com\” boom. He asked: \”But how do we know when irrational exuberance has unduly escalated asset values, which then become subject to unexpected and prolonged contractions ...?\” When the Fed chair starts \”just asking\” questions about exuberance, people take note.

But anyone who took Greenspan\’s speech as a prediction of a near-term drop in the stock market missed out, because the \”exuberance\” had a few more years to run. The S&P 500 index was at about 750 at the time of Greenspan\’s speech in December 1996. It had doubled in value during the previous five years. After Greenspan\’s speech, it would double again, topping out at nearly 1500 in September 2000, before sagging back to about 820 in September 2002.

Still, when those in the financial community use the language of \”exuberance,\” my eyebrows go up. The March 2021 issue of the BIS Quarterly Review from the Bank for International Settlements uses \”exuberance\” a couple of times in a lead article, \”Markets wrestle with reflation prospects\” (pp. 1-16). Here\’s a snippet (references to graphs and boxes omitted): 

Equities and credit gained on the back of a brighter outlook and expectations of greater fiscal support, with signs of exuberance reflected in the behaviour of retail investors. …

Low long-term interest rates have been critical in supporting valuations. Since recent US price/earnings ratios were among the highest on record, they suggest stretched valuations if considered in isolation. However, assessments that also take into account the prevailing low level of interest rates indicate that valuations were in line with their historical average. …[E]quity prices are particularly sensitive to monetary policy in environments akin to the current one, featuring high price/earnings ratios and low interest rates.

Even if equity valuations did not appear excessive in the light of low rates, some signs of exuberance had a familiar ring. Just as during the dotcom boom in the late 1990s, IPOs [initial public offerings] saw a major expansion and stock prices often soared on the first day of trading. The share of unprofitable firms among those tapping equity markets also kept growing. In addition, strong investor appetite supported the rise of special purpose acquisition companies (SPACs) – otherwise known as “blank cheque” companies. These are conduits that raise funds without an immediate investment plan.

The increasing footprint of retail investors and the appeal of alternative asset classes also pointed to brisk risk-taking. An index gauging interest in the stock market on the basis of internet searches surged, eclipsing its previous highest level in 2009. This rise went hand in hand with the growing market influence of retail investors. In a sign of strong risk appetite, funds investing in the main cryptoassets grew rapidly in size following sustained inflows, and the prices of these assets reached all-time peaks …

Here are some illustrative figures. The first shows the number and value of initial public offerings. 

This figure shows the rise of the \”blank cheque\” SPAC companies. 

This figure shows data based on Google searches about interest in the stock market. 

Sirio Aramonte and Fernando Avalos contribute a short \”box\” discussion to this article with more details on \”The rising influence of retail investors.\” They write (the graphs to which they refer are included below):

Telltale signs of retail investors\’ growing activity emerged from patterns in equity trading volumes and stock price movements. For one, small traders seem to be often attracted by the speculative nature of single stocks, rather than by the diversification benefits of indices. Consistent with such preferences gaining in importance, share turnover for exchange-traded funds (ETFs) tracking the S&P 500 has flattened over the past four years, while that for the S&P 500\’s individual constituents has been on an upward trend over the same period, pointing to 2017 as the possible start year of retail investors\’ rising influence (Graph B, left-hand panel). In addition, retail investors are more likely to trade assets on the basis of non-fundamental information. During the late 1990s tech boom, for instance, they sometimes responded to important news about certain companies by rushing to buy the equity of similarly named but distinct firms. Comparable patterns emerged in early 2021 – for instance, when the value of a company briefly quintupled as investors misinterpreted a social media message as endorsing its stock.

In the United States, retail investors\’ sustained risk-taking has been channelled through brokerage accounts, the main tool they have to manage their non-retirement funds. Brokerage accounts allow owners to take leverage in the form of margin debt. In December 2020, the amount of that debt stood at $750 billion, the highest level on record since 1997, both in inflation-adjusted terms and as a share of GDP. Its fast growth in the aftermath of March 2020 exceeded 60% (Graph B, centre panel). There is evidence that retail investors are currently taking risky one-way bets, as rapid surges in margin debt have been followed by periods of stock market declines.

In seeking exposure to individual companies, retail investors trade options. Call (put) options pay off only when the price of the underlying stock rises (falls) past a preset value, with gains potentially amounting to multiples of the initial investment. In this sense, options have embedded leverage that margin debt magnifies further. Academic research has found that option trading tends to be unprofitable in the aggregate and over longer periods for small traders, not least because of their poor market timing.

Reports in early 2021 have suggested that the surge in trading volumes for call options – on both small and large stocks – has indeed stemmed from retail activity. For example, internet searches for options on five technology stocks – a clear sign of retail investors\’ interest – predicted next-day option volumes. This link was particularly strong for searches that took place on days with high stock returns, suggesting that option activity was underpinned by bets on a continuation of positive returns … 

Equity prices rose and fell as retail investors coordinated their trading on specific stocks through social media in January 2021. While online chat rooms were already a popular means of information exchange in the late 1990s, the trebling of the number of US internet users and the rise in no-fee brokerages since then has widened the pool of traders who can combine their efforts. In a recent episode, retail investors forced short-sellers to unwind their positions in distressed companies. A similar move in the more liquid silver market floundered a few days later. These dislocations were short-lived, not least because, in response to collateral requests from clearing houses, some brokerages limited their customers\’ ability to trade. Even so, it has become clear that deliberate large-scale coordination among small traders is possible and can have substantial effects on prices.

Certain actions of retail investors can raise concerns about market functioning. Sudden bursts of trading activity can push prices far away from fundamental values, especially for less liquid securities, thus impairing their information content. In a move that underscored the materiality of this issue, the US Securities and Exchange Commission suspended trading in the shares of companies that had experienced large price movements on the back of social media discussions.
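The \”embedded leverage\” of options that the box describes can be illustrated with a stylized example (the strike, premium, and prices below are hypothetical): a modest move in the underlying stock becomes a much larger percentage gain on the call, or a total loss of the premium.

```python
# Stylized call option payoff at expiration versus buying the stock outright.
# Strike, premium, and prices are all hypothetical.

def call_payoff(stock_price: float, strike: float) -> float:
    """Value of a call at expiration: max(S - K, 0)."""
    return max(stock_price - strike, 0.0)

strike = 100.0
premium = 2.0       # cost of one call
stock_now = 100.0   # cost of one share

for stock_later in (95.0, 105.0, 110.0):
    stock_return = (stock_later - stock_now) / stock_now
    option_return = (call_payoff(stock_later, strike) - premium) / premium
    print(f"S={stock_later:6.1f}  stock: {stock_return:+.0%}  option: {option_return:+.0%}")

# S=95:  stock -5%,  option -100% (the whole premium is lost)
# S=105: stock +5%,  option +150%
# S=110: stock +10%, option +400%
```

Adding margin debt on top of positions like these magnifies the leverage further, which is the combination the BIS box flags.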

Here\’s the figure showing how exchange-traded funds are being used more for individual stocks. 

Here\’s the figure showing the rise in margin debt for retail investors. 

I\’m certainly not in the investment advice business, and I\’m very aware that Greenspan\’s 1996 comments about \”irrational exuberance\” were more in the middle of a stock market rise than at the end. That said, there do seem to be occasional elements of exuberance at play. 

The Case for More Activist Antitrust Policy

The University of Pennsylvania Law Review  (June 2020) has published a nine-paper symposium on antitrust law, with contributions by a number of the leading economists in the field who tend to favor more aggressive pro-competition policy in this area. Whatever your own leanings, it\’s a nice overview of many of the key issues. Here are snippets from three of the papers. Below, I\’ll list all the papers in the issue with links and abstracts.

C. Scott Hemphill and Tim Wu write about “Nascent Competitors,” which is the concern that large firms may seek to maintain their dominant market position by buying up the kinds of small firms that might have developed into future competitors. The article is perhaps of particular interest because Wu has just accepted a position with the Biden administration to join the National Economic Council, where he will focus on competition and technology policy. Hemphill and Wu write (footnotes omitted):

Nascent rivals play an important role in both the competitive process and the process of innovation. New firms with new technologies can challenge and even displace existing firms; sometimes, innovation by an unproven outsider is the only way to introduce new competition to an entrenched incumbent. That makes the treatment of nascent competitors core to the goals of the antitrust laws. As the D.C. Circuit has explained, “it would be inimical to the purpose of the Sherman Act to allow monopolists free rei[]n to squash nascent, albeit unproven, competitors at will . . . .” Government enforcers have expressed interest in protecting nascent competition, particularly in the context of acquisitions made by leading online platforms.

However, enforcers face a dilemma. While nascent competitors often pose a uniquely potent threat to an entrenched incumbent, the firm’s eventual significance is uncertain, given the environment of rapid technological change in which such threats tend to arise. That uncertainty, along with a lack of present, direct competition, may make enforcers and courts hesitant or unwilling to prevent an incumbent from acquiring or excluding a nascent threat. A hesitant enforcer might insist on strong proof that the competitor, if left alone, probably would have grown into a full-fledged rival, yet in so doing, neglect an important category of anticompetitive behavior.

One main concern with a general rule that would block entrenched incumbents from buying smaller companies is that, for entrepreneurs, the chance of being bought out by a big firm is one of the primary incentives for starting a firm in the first place. Thus, there is a concern that more aggressive antitrust enforcement against such acquisitions could reduce the incentive to start these firms at all. Hemphill and Wu tackle the question head-on:

The acquisition of a nascent competitor raises several particularly challenging questions of policy and doctrine. First, acquisition can serve as an important exit for investors in a small company, and thereby attract capital necessary for innovation. Blocking or deterring too many acquisitions would be undesirable. However, the significance of this concern should not be exaggerated, for our proposed approach is very far from a general ban on the acquisition of unproven companies. We would discourage, at most, acquisition by the firm or firms most threatened by a nascent rival. Profitable acquisitions by others would be left alone, as would the acquisition of merely complementary or other nonthreatening firms. While wary of the potential for overenforcement, we believe that scrutiny of the most troubling acquisitions of unproven firms must be a key ingredient of a competition enforcement agenda that takes innovation seriously.

In another paper, William P. Rogerson and Howard Shelanski write about “Antitrust Enforcement, Regulation, and Digital Platforms.” They raise the concern that the tools of antitrust may not be well-suited to some of the competition issues posed by big digital firms. For example, if Alphabet was forced to sell off Google, or some other subsidiaries, would competition really be improved? What would it even mean to, say, try to break Google’s search engine into separate companies? When there are “network economies,” where many agents want to be on a given website because so many other players are on the same website, perhaps a relatively small number of firms is the natural outcome.

Thus, while certainly not ruling out traditional antitrust actions, Rogerson and Shelanski argue the case for using regulation to achieve pro-competitive outcomes. They write:

[W]e discuss why certain forms of what we call “light handed procompetitive” (LHPC) regulation could increase levels of competition in markets served by digital platforms while helping to clarify the platforms’ obligations with respect to interrelated policy objectives, notably privacy and data security. Key categories of LHPC regulation could include interconnection/interoperability requirements (such as access to application programming interfaces (APIs)), limits on discrimination, both user-side and third-party-side data portability rules, and perhaps additional restrictions on certain business practices subject to rule of reason analysis under general antitrust statutes. These types of regulations would limit the ability of dominant digital platforms to leverage their market power into related markets or insulate their installed base from competition. In so doing, they would preserve incentives for innovation by firms in related markets, increase the competitive impact of existing competitors, and reduce barriers to entry for nascent firms.

The regulation we propose is “light handed” in that it largely avoids the burdens and difficulties of a regime—such as that found in public utility regulation—that regulates access terms and revenues based on firms’ costs, which the regulatory agency must in turn track and monitor. Although our proposed regulatory scheme would require a dominant digital platform to provide a baseline level of access (interconnection/interoperability) that the regulator determines is necessary to promote actual and potential competition, we believe that this could avoid most of the information and oversight costs of full-blown cost-based regulation …  The primary regulation applied to price or non-price access terms would be a nondiscrimination condition, which would require a dominant digital platform to offer the same terms to all users. Such regulation would not, like traditional rate regulation, attempt to tie the level or terms of access to a platform’s underlying costs, to regulate the company’s terms of service to end users, or to limit the incumbent platform’s profits or lines of business. Instead of imposing monopoly controls, LHPC regulation aims to protect and promote competitive access to the marketplace as the means of governing firms’ behavior. In other words, its primary goal is to increase the viability and incentives of actual and potential competitors. As we will discuss, the Federal Communication Commission’s (FCC) successful use of similar sorts of requirements on various telecommunications providers provides one model for this type of regulation.

Nancy L. Rose and Jonathan Sallet tackle a more traditional antitrust question in “The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right.” A “horizontal” merger is one between two firms selling the same product. This is in contrast to a “vertical” merger, where one firm merges with a supplier, or a merger where the two firms sell different products. When two firms selling the same product propose a merger, they often argue that the two firms will be more efficient together, and thus able to provide a lower-cost product to consumers. Rose and Sallet offer this example:

Here is a stylized example of the role that efficiencies might play in an antitrust review. Imagine two paper manufacturers, each with a single factory that produces several kinds of paper, and suppose their marginal costs decline with longer production runs of a single type of paper. They wish to merge, which by definition eliminates a competitor. They justify the merger on the ground that after they combine their operations, they will increase the specialization in each plant, enabling longer runs and lower marginal costs, and thus incentivizing them to lower prices to their customers and expand output. If the cost reduction were sufficiently large, such efficiencies could offset the merger’s otherwise expected tendency to increase prices.

In this situation, the antitrust authorities need to evaluate whether these potential efficiencies exist and are likely to benefit consumers. Or alternatively, is the talk of “efficiencies” a way for top corporate managers to build their empires while eliminating some competition? Rose and Sallet argue, based on the empirical evidence of what has happened after past mergers, that antitrust enforcers have been too willing to believe in the possibility of efficiencies that don\’t seem to happen. They write:

As empirically-trained economists focused further on what data revealed about the relationship between mergers and efficiencies, the results cast considerable doubt on post-merger benefits. As discussed at length by Professor Hovenkamp, “the empirical evidence is not unanimous, however, it strongly suggests that current merger policy tends to underestimate harm, overestimate efficiencies, or some combination of the two.” The business literature is even more skeptical. As management consultant McKinsey & Company reported in 2010: “Most mergers are doomed from the beginning. Anyone who has researched merger success rates knows that roughly 70 percent of mergers fail.”

For more on antitrust and the big tech companies, some of my previous posts include:

Here\’s the full set of papers from the June 2020 issue of the  University of Pennsylvania Law Review issue, with links and abstracts:

\”Framing the Chicago School of Antitrust Analysis,\” by Herbert Hovenkamp & Fiona Scott Morton
The Chicago School of antitrust has benefitted from a great deal of law office history, written by admiring advocates rather than more dispassionate observers. This essay attempts a more neutral examination of the ideology, political impulses, and economics that produced the School and that account for its durability. The origins of the Chicago School lie in a strong commitment to libertarianism and nonintervention. Economic models of perfect competition best suited these goals. The early strength of the Chicago School was that it provided simple, convincing answers to everything that was wrong with antitrust policy in the 1960s, when antitrust was characterized by over-enforcement, poor quality economics or none at all, and many internal contradictions. The Chicago School’s greatest weakness is that it did not keep up. Its leading advocates either spurned or ignored important developments in economics that gave a better accounting of an economy that was increasingly characterized by significant product differentiation, rapid innovation, networking, and strategic behavior. The Chicago School’s protest that newer models of the economy lacked testability lost its credibility as industrial economics experienced an empirical renaissance, nearly all of it based on models of imperfect competition. What kept Chicago alive was the financial support of firms and others who stood to profit from less intervention. Properly designed antitrust enforcement is a public good. Its beneficiaries—consumers—are individually small, numerous, scattered, and diverse. Those who stand to profit from nonintervention were fewer in number, individually much more powerful, and much more united in their message. As a result, the Chicago School went from being a model of enlightened economic policy to an economically outdated but nevertheless powerful tool of regulatory capture.

“Nascent Competitors,” by C. Scott Hemphill & Tim Wu
A nascent competitor is a firm whose prospective innovation represents a serious threat to an incumbent. Protecting such competition is a critical mission for antitrust law, given the outsized role of unproven outsiders as innovators and the uniquely potent threat they often pose to powerful entrenched firms. In this Article, we identify nascent competition as a distinct analytical category and outline a program of antitrust enforcement to protect it. We make the case for enforcement even where the ultimate competitive significance of the target is uncertain, and explain why a contrary view is mistaken as a matter of policy and precedent. Depending on the facts, troubling conduct can be scrutinized under ordinary merger law or as unlawful maintenance of monopoly, an approach that has several advantages. In distinguishing harmful from harmless acquisitions, certain evidence takes on heightened importance. Evidence of an acquirer’s anticompetitive plan, as revealed through internal communications or subsequent conduct, is particularly probative. After-the-fact scrutiny is sometimes necessary as new evidence comes to light. Finally, our suggested approach poses little risk of dampening desirable investment in startups, as it is confined to acquisitions by those firms most threatened by nascent rivals.

“Antitrust Enforcement, Regulation, and Digital Platforms,” by William P. Rogerson & Howard Shelanski
There is a growing concern over concentration and market power in a broad range of industrial sectors in the United States, particularly in markets served by digital platforms. At the same time, reports and studies around the world have called for increased competition enforcement against digital platforms, both by conventional antitrust authorities and through increased use of regulatory tools. This Article examines how, despite the challenges of implementing effective rules, regulatory approaches could help to address certain concerns about digital platforms by complementing traditional antitrust enforcement. We explain why introducing light-handed, industry-specific regulation could increase competition and reduce barriers to entry in markets served by digital platforms while better preserving the benefits they bring to consumers.

“The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right,” by Nancy L. Rose and Jonathan Sallet
The extent to which horizontal mergers deliver competitive benefits that offset any potential for competitive harm is a critical issue of antitrust enforcement. This Article evaluates economic analyses of merger efficiencies and concludes that a substantial body of work casts doubt on their presumptive existence and magnitude. That has two significant implications. First, the current methods used by the federal antitrust agencies to determine whether to investigate a horizontal merger likely rests on an overly-optimistic view of the existence of cognizable efficiencies, which we believe has the effect of justifying market-concentration thresholds that are likely too lax. Second, criticisms of the current treatment of efficiencies as too demanding—for example, that antitrust agencies and reviewing courts require too much of merging parties in demonstrating the existence of efficiencies—are misplaced, in part because they fail to recognize that full-blown merger investigations and subsequent litigation are focused on the mergers that are most likely to cause harm.

“Oligopoly Coordination, Economic Analysis, and the Prophylactic Role of Horizontal Merger Enforcement,” by Jonathan B. Baker and Joseph Farrell
For decades, the major United States airlines have raised passenger fares through coordinated fare-setting when their route networks overlap, according to the United States Department of Justice. Through its review of company documents and testimony, the Justice Department found that when major airlines have overlapping route networks, they respond to rivals’ price changes across multiple routes and thereby discourage competition from their rivals. A recent empirical study reached a similar conclusion: It found that fares have increased for this reason on more than 1000 routes nationwide and even that American and Delta, two airlines with substantial route overlaps, have come close to cooperating perfectly on routes they both serve.

“The Role of Antitrust in Preventing Patent Holdup,” by Carl Shapiro and Mark A. Lemley
Patent holdup has proven one of the most controversial topics in innovation policy, in part because companies with a vested interest in denying its existence have spent tens of millions of dollars trying to debunk it. Notwithstanding a barrage of political and academic attacks, both the general theory of holdup and its practical application in patent law remain valid and pose significant concerns for patent policy. Patent and antitrust law have made significant strides in the past fifteen years in limiting the problem of patent holdup. But those advances are currently under threat from the Antitrust Division of the Department of Justice, which has reversed prior policies and broken with the Federal Trade Commission to downplay the significance of patent holdup while undermining private efforts to prevent it. Ironically, the effect of the Antitrust Division’s actions is to create a greater role for antitrust law in stopping patent holdup. We offer some suggestions for moving in the right direction.

“Competition Law as Common Law: American Express and the Evolution of Antitrust,” by Michael L. Katz & A. Douglas Melamed
We explore the implications of the widely accepted understanding that competition law is common—or “judge-made”—law. Specifically, we ask how the rule of reason in antitrust law should be shaped and implemented, not just to guide correct application of existing law to the facts of a case, but also to enable courts to participate constructively in the common law-like evolution of antitrust law in the light of changes in economic learning and business and judicial experience. We explore these issues in the context of a recently decided case, Ohio v. American Express, and conclude that the Supreme Court, not only made several substantive errors, but also did not apply the rule of reason in a way that enabled an effective common law-like evolution of antitrust law.

“Probability, Presumptions and Evidentiary Burdens in Antitrust Analysis: Revitalizing the Rule of Reason for Exclusionary Conduct,” by Andrew I. Gavil & Steven C. Salop
The conservative critique of antitrust law has been highly influential. It has facilitated a transformation of antitrust standards of conduct since the 1970s and led to increasingly more permissive standards of conduct. While these changes have taken many forms, all were influenced by the view that competition law was over-deterrent. Critics relied heavily on the assumption that the durability and costs of false positive errors far exceeded the costs of false negatives. Many of the assumptions that guided this retrenchment of antitrust rules were mistaken and advances in law and economic analysis have rendered them anachronistic, particularly with respect to exclusionary conduct. Continued reliance on what are now exaggerated fears of “false positives,” and failure adequately to consider the harm from “false negatives,” has led courts to impose excessive burdens of proof on plaintiffs that belie both sound economic analysis and well-established procedural norms. The result is not better antitrust standards, but instead an unwarranted bias towards non-intervention that creates a tendency toward false negatives, particularly in modern markets characterized by economies of scale and network effects. In this article, we explain how these erroneous assumptions about markets, institutions, and conduct have distorted the antitrust decision-making process and produced an excessive risk of false negatives in exclusionary conduct cases involving firms attempting to achieve, maintain, or enhance dominance or substantial market power. To redress this imbalance, we integrate modern economic analysis and decision theory with the foundational conventions of antitrust law, which has long relied on probability, presumptions, and reasonable inferences to provide effective means for evaluating competitive effects and resolving antitrust claims.

“The Post-Chicago Antitrust Revolution: A Retrospective,” by Christopher S. Yoo
A symposium examining the contributions of the post-Chicago School provides an appropriate opportunity to offer some thoughts on both the past and the future of antitrust. This afterword reviews the excellent papers presented with an eye toward appreciating the contributions and limitations of both the Chicago School, in terms of promoting the consumer welfare standard and embracing price theory as the preferred mode of economic analysis, and the post-Chicago School, with its emphasis on game theory and firm-level strategic conduct. It then explores two emerging trends, specifically neo-Brandeisian advocacy for abandoning consumer welfare as the sole goal of antitrust and the increasing emphasis on empirical analyses.

What’s Happening with Global Connectedness in the Pandemic?

The last few decades have been a time of economic globalization. But that trend has been faltering for a few years now, with rising political obstacles to trade and migration. Will the global pandemic recession be a force that further separates the world economy, or will it reinforce some patterns of globalization while hindering others? Steven A. Altman and Phillip Bastian tackle these questions in the DHL Global Connectedness Index 2020, subtitled “The State of Globalization in a Distancing World” (December 2020).

Here are some long-run patterns of globalization. The first panel shows exports (as a measure of trade in goods) starting a sharp rise after World War II. The second panel shows the rise in foreign direct investment since about 1980. The third panel shows the rise in migration since about 1970.

But although the long-run trend toward globalization is clear, Altman and Bastian point out that many people have heard so much about globalization that they tend to overestimate its prevalence. They write: 

[M]ost business and personal activity is still domestic rather than international. Roughly 21% of all goods and services end up in a different country from where they were produced. Companies buying, building, or reinvesting in foreign operations via FDI [foreign direct investment] accounted for only 7% of gross fixed capital formation last year. Just 7% of voice call minutes, including calls over the internet, were international. And a mere 3.5% of people lived outside of the countries where they were born. …

If many of these global “depth” measures are lower than you expected, you are in good company. Surveys of managers, students, and the general public have consistently shown that most people think international flows are larger than they really are. This pattern shows up across countries, as well as respondent characteristics such as level of education, age, gender, and political leanings. … In public policy, people who overestimate these types of measures tend to presume that globalization is a much bigger factor in joblessness, wage stagnation, and climate change than evidence suggests.

What about the effects of the pandemic on globalization? The report looks at four “pillars” of globalization: trade in goods and services, flows of people, international flows of information, and international capital flows. The overall theme is that while all the pillars of globalization took a hit earlier in 2020, all except flows of people have been starting to bounce back. Here are a few points that caught my eye.

On trade in goods, the report notes: 

Covid-19 is on track to cause a much smaller decline in trade intensity than the 2008-09 global financial crisis. This reflects both how quickly trade recovered during 2020 and the fact that many of the industries that were hit hardest by the pandemic (e.g. restaurants) provide local services rather than heavily traded goods. It is also notable that a significant part of the forecasted decline in trade intensity in 2020 is due to lower commodity prices. … [R]emoving the effects of price changes …  the share of real global output that is exported is expected to remain above its 2009 level in 2020. … 

Covid-19 has also accelerated the growth of international e-commerce. According to one study, cross-border discretionary e-commerce sales soared 53% year-on-year during the second quarter of 2020. Cross-border sales, nonetheless, accounted for only 10% of all consumer e-commerce transactions in 2018, suggesting ample headroom for additional growth. The share of online shoppers who made purchases from foreign suppliers rose from 17% in 2016 to 23% in 2018. An analysis by the McKinsey Global Institute forecasts that international business and consumer e-commerce could expand trade in manufactured goods by 6 – 10% by 2030.

For international flows of capital, the report looks at foreign direct investment and portfolio equity investment: “The distinction between the two is that FDI gives the investor (typically a multinational corporation) a voice in the management of a foreign enterprise, whereas portfolio equity investment does not. For statistical purposes, if the investor owns at least 10% of the foreign company, it is normally classified as FDI; below 10% it is deemed portfolio investment.”
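The report’s 10% statistical convention is simple enough to state as a rule. Here is a minimal sketch; the function name and interface are mine, not from the report:

```python
def classify_cross_border_equity(ownership_share: float) -> str:
    """Classify a cross-border equity stake using the 10% convention
    described in the report: at least 10% ownership counts as FDI,
    anything below that as portfolio investment."""
    if not 0.0 <= ownership_share <= 1.0:
        raise ValueError("ownership share must be a fraction between 0 and 1")
    return "FDI" if ownership_share >= 0.10 else "portfolio investment"

print(classify_cross_border_equity(0.25))  # prints "FDI"
print(classify_cross_border_equity(0.05))  # prints "portfolio investment"
```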

On foreign direct investment: 

The UN Conference on Trade and Development (UNCTAD) forecasts that FDI flows will decline 30 – 40% in 2020, and that they are likely to slip another 5 – 10% in 2021, before starting to recover in 2022. The pandemic has crimped FDI flows through various channels: reductions in earnings available to invest, worsening business prospects, restrictions on business travel, uncertainty both in general and specifically about global supply chains, and so on. Nonetheless, double-digit drops in FDI flows are not uncommon, and certainly not as alarming as a similar drop in trade would be. For example, FDI flows shrank 43% in 2001 and 35% over a two-year period during the global financial crisis, and they have swung widely both up and down over the last five years due to changes in US tax policy.

For international investment in stocks, the general pattern in recent years has been a shift away from “home bias,” where investors stick to their own country, and toward greater international diversification. “Portfolio equity stocks closed out 2019 at 37% of world stock market capitalization, just shy of the record level reported in 2018.” This graph shows the monthly pattern of portfolio investment in emerging markets in 2020: a big dip in March, but then mostly a resumption of the usual pattern.

When it comes to flows of information, 2020 has been a remarkable year. As businesses, schools, and social lives shifted online with the pandemic, international internet traffic rose dramatically. As the figure shows, it has been rising about 20% per year, but in 2020 it rose almost 50%. 

But on the other side, domestic internet traffic must be rising very substantially, too. The report notes: “The Internet continues to power large increases in information flows within and across borders, but international information flows are no longer consistently growing faster than domestic information flows. Covid-19 has caused data traffic to boom in 2020, but it remains unclear whether the pandemic has made the nature of information flows more—or less—global.”

Flows of people across international borders have dropped dramatically, with no real sign yet of a bounceback. In terms of tourism, “The UNWTO [UN World Tourism Organization] does not foresee widespread lifting of travel restrictions until mid-2021 and forecasts that it will take 2.5 to 4 years for international tourist arrivals to rebound to their 2019 levels.”

Flows of people who build economic and educational connections are way down, too:

Though business travel makes up just a fraction of international travel, it is an important enabler of international trade, investment, and economic development. New research highlights how business travel facilitates knowledge transfer from countries with strengths in certain industries to other economies. According to this study, a permanent shutdown of international business travel would shrink global economic activity by an order of magnitude more than the amount that was spent on business travel before the pandemic. …

Early data show large declines in international student enrollment in the United States, the top destination for foreign students. According to National Student Clearinghouse Research Center’s tracking of student enrollment during the Covid-19 pandemic, international student enrollment is down 14% at the undergraduate level and 8% at the graduate level in fall 2020. The American Council on Education estimates that international student enrollment could fall by as much as 25%. New students deciding not to begin their studies in the US during the pandemic are the primary driver of falling enrollments.

Of course, if new international students keep staying away, the decline will accumulate over the next few years. In the world of academia, my sense is that lots of professors are experiencing a quality/quantity tradeoff: yes, attending a conference via Zoom has lower value-added because of the loss of the personal touch, but on the other side, one can “attend” many more conferences.

The overall takeaway here is that globalization is resilient. Yes, it is taking a hit in 2020. But even if globalization were to decline somewhat, it would still be an important force in the economy. Here are a few more comments from Altman and Bastian:

Despite recent trade turbulence, the ratio of gross exports to world GDP is still remarkably close to its all-time high. Even after falling from a peak of 32% in 2008 to 29% in 2019, this measure of global trade integration is still 20% higher than it was in 2000, twice as high as it was in 1970, and almost six times higher than in 1945. … Globalization can go into reverse—as demonstrated by the trendlines between the 1920s and 1950s—but recent data do not depict a similar reversal. …

[T]he world is—and will remain—only partially globalized. Globalization can rise or fall significantly without getting anywhere close to either a state where national borders become irrelevant or one where they loom so large that it is best to think of a world of disconnected national economies. All signs point to a future where international flows will remain so large that decision-makers ignore them at their peril, even as borders and cross-country differences continue to make domestic activity the default in most areas.

Intergenerational Mobility and Neighborhood Effects

Raj Chetty delivered a keynote address titled “Improving Equality of Opportunity: New Insights from Big Data” to the annual meetings of the Western Economic Association International. It has now been published in the January 2021 issue of Contemporary Economic Policy (39:1, 7–41; needs a subscription to access). The lecture gives a nice overview of some of Chetty’s work in the last few years.

Chetty offers this nugget from previous research as a starting point:

In particular, let’s think about the chance that a child born to parents in the bottom quintile—the bottom fifth of the income distribution—has of rising up to the top fifth of the income distribution. If you look at the data across countries, there are a number of recent studies that have computed statistics like this, called intergenerational transition matrices. You see that in the United States, kids born to families in the bottom fifth have about a seven and a half percent chance of reaching the top fifth. That compares with nine percent in the United Kingdom, 11.7% in Denmark, and 13.5% in Canada. … One way you might think about this is that your chance of achieving the American dream are in some sense almost two times higher if you’re growing up in Canada rather than the United States. I think these international comparisons are motivating. It’s a little bit difficult to figure out how to make sense of them.

Start with a basic question: What do we need to know if we want to measure intergenerational economic mobility? Basically, you need a measure of income for two separate generations, taken at an age when both the older and younger generations are reasonably well-established in a career. Then you can compare where in the income distribution the younger generation falls relative to their parents, and in that way get a sense of how likely someone from one part of the income distribution is to move (higher or lower) to a different part of the distribution. Best of all, you would like to have a really big nationally representative sample of this data, so that you can look at how such effects might vary across different locations, sexes, and races/ethnicities.
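The measurement idea can be made concrete with a small sketch: given paired parent and child incomes, rank each generation, bucket the ranks into quintiles, and tabulate the transition matrix. The simulated incomes below are purely illustrative, not Chetty’s data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Purely illustrative incomes: child income is positively but imperfectly
# related to parent income, so some mobility exists.
parent_income = rng.lognormal(mean=10.0, sigma=0.8, size=n)
child_income = parent_income**0.4 * rng.lognormal(mean=6.0, sigma=0.8, size=n)

def quintile(x):
    # Rank each observation (0..n-1), convert to a [0, 1) rank share,
    # then bucket into fifths: 0 = bottom quintile, 4 = top quintile.
    ranks = x.argsort().argsort() / len(x)
    return np.minimum((ranks * 5).astype(int), 4)

pq, cq = quintile(parent_income), quintile(child_income)

# Transition matrix: T[i, j] = share of children with parents in quintile i
# who end up in quintile j. Each row sums to 1.
T = np.array([[np.mean(cq[pq == i] == j) for j in range(5)] for i in range(5)])
print(np.round(T, 3))
# T[0, 4] is the kind of statistic quoted above: the chance that a child of
# bottom-quintile parents reaches the top quintile.
```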
There is no single data set that will give you this kind of information. So Chetty and a team of researchers that has now grown to a couple of dozen people figured out how to combine existing datasets to get the necessary data.  Specifically, as Chetty describes it: 
In particular we use the 2000 and 2010 decennial census, as well as the American Community Survey—the ACS—which covers about 1% of Americans each year. We take that census data and link it to federal income tax returns from 1989 to 2015, creating a longitudinal data set. … 
Those of you who have kids and live in the United States know that you have to write down your kid’s Social Security number on your tax return to claim him or her as a dependent. That allows us to link 99% of children in America back to their parents. What I’m going to describe here, the set of kids we’re seeking to study, are children born between 1978 and 1983 who were born in the United States or came to the United States as authorized immigrants while they were kids. That’s our target sample.

In practice, we have an analysis sample of about 20.5 million children once we’ve done the linkage of those various data sets, and that covers about 96% of the kids we expect to be in our target sample. It’s not 100% because there are some kids for whom you’re not able to find a tax return or who weren’t claimed or who you weren’t able to link the census data to the tax data. But it’s still pretty good—at 96% you have essentially everybody you’re looking to study.

Just to be clear, this data is “de-identified”: that is, stringent precautions are taken so that researchers don’t see anyone’s actual name or Social Security number or other personal information. Constructing this dataset is a substantial task, and it’s a task that would not have been possible for researchers of a previous generation.

Here’s one result. The horizontal axis shows the percentile of the income distribution for the older generation; the vertical axis shows the average of where the next generation ended up in the income distribution. The marked point shows that when the older generation was in the 25th percentile, the average outcome for the younger generation is the 41st percentile of the income distribution. Overall, the flatter this line, the more intergenerational income mobility exists: a perfectly flat line would mean that no matter where the older generation started, the expected result for the younger generation is the same.
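One common way to summarize a figure like this is a rank-rank regression: regress child percentile on parent percentile, and read off the slope (flatter means more mobility) and the predicted child rank at, say, the 25th parent percentile. The sketch below uses made-up numbers chosen to roughly reproduce the 41st-percentile figure, not Chetty’s actual estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
# Illustrative ranks only: child rank rises about 0.34 percentiles per
# parent percentile, plus noise, clipped to stay inside [0, 100].
parent_rank = rng.uniform(0, 100, n)
child_rank = np.clip(33 + 0.34 * parent_rank + rng.normal(0, 15, n), 0, 100)

# Rank-rank regression: the slope measures persistence; a perfectly flat
# line (slope 0) would mean the parent's position doesn't matter at all.
slope, intercept = np.polyfit(parent_rank, child_rank, 1)
print(f"slope {slope:.2f}, intercept {intercept:.1f}")
print(f"expected child rank at the 25th parent percentile: "
      f"{intercept + 25 * slope:.1f}")
```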

But while a lot of the previous research on intergenerational mobility used nationally representative data, the US-based research in particular has not had enough data to make plausible statements about intergenerational mobility at more local levels.

As a concrete example to help the exposition, Chetty sticks to this case where a parent’s household is in the 25th percentile of the income distribution. Then he can ask: in different metropolitan areas across the United States, where is the average income of the younger generation higher or lower?

Here’s a map showing the pattern across the US for black and white men in the younger generation. The blue areas in the “heat map” show where the income of the younger generation is higher than that of the older generation; the white areas show where it’s the same; the red areas show where it is lower. For black men, on the left, the overall pattern is mostly red and white: that is, the younger generation of black men usually have the same or lower income than their parents. For white men, on the right, the overall pattern is mostly blue and white; that is, the younger generation of white men are usually doing the same or better than their parents.

There is of course lots to chew on in why these patterns differ across metropolitan areas. For example, if one looks at the same graph for black women and white women, there is very little difference in this measure of intergenerational mobility. But the Chetty research group has so much data that they can look at much smaller geographic areas, including Census “tracts.” There are about 70,000 tracts in the US, each including about 4,000 people. In certain high-population cities, the Chetty data lets a researcher look at intergenerational mobility at the level of a city block.

Thus, Chetty and his team can tackle the question: what do patterns of intergenerational mobility look like not at the national level, or at the metro-area level, but at the neighborhood level? Sticking to the earlier concrete example of children born to a family at the 25th income percentile, does their income mobility differ depending on the neighborhood in which they live? Chetty writes:

[T]he geographic scale on which we should think about neighborhoods as they matter for economic opportunity and upward mobility is incredibly narrow, like a half mile radius around your house. We find this not just for poverty rates, but many other characteristics. If you look at differences in characteristics outside that half mile radius, they have essentially no predictive power at all. I think that’s extremely useful from a policy perspective. We started this talk with the American dream. We now see that its origins, its roots, seem to actually be extremely hyperlocal.

Do the hyperlocal neighborhoods that have more intergenerational income mobility tend to share certain characteristics? Yes. For example, the share of households in the neighborhood with two-parent families makes a difference, as does the “social capital” of the neighborhood as measured by whether it has community gathering places like churches or even bowling alleys.

At some level, all of this may sound like “common sense for credit,” as economics has been described. Certain neighborhoods, not far apart, are more correlated with intergenerational mobility than others. But are the neighborhood effects something that could potentially be used for public policy? To put it another way, if the neighborhood changes, does someone from the younger generation have basically the same level of intergenerational mobility (because mobility is mostly determined by family), or perhaps a higher or lower level (because the neighborhood has a separate causal effect on eventual economic outcomes)?

There are a variety of ways one might address this question. There’s a major federal study called Moving to Opportunity. It was a randomized study: that is, a randomly selected half of the sample got a certain program and the other half didn’t, so you can then compare outcomes between the two groups. Chetty describes:

Let me start with the moving to opportunity (MTO) experiment. This experiment was conducted in five large cities around the United States. I’m going to focus on the case of Chicago, just to pick one illustration. In this experiment, researchers gave families living in very high poverty public housing projects, for instance, the Robert Taylor Homes in Chicago, one of two different kinds of housing vouchers through a random lottery. One group received Section 8 vouchers, which were vouchers that enabled them to move anywhere they could find housing. This assistance was on the order of about $1,000 per month in today’s dollars. The other group, called the experimental group, was given the same vouchers worth exactly the same amount, but they came with the restriction that users had to move to a low poverty area. These areas were defined as census tracts with a poverty rate below 10%.

It turns out that the randomly selected group who moved to lower-poverty areas as children did indeed have higher incomes as adults. The exact numbers of course have some statistical uncertainty built in. But as a bottom line, Chetty writes: 

That is to say, if I moved to a place where I see kids growing up to earn $1,000 more, on average, than in my hometown, I myself pick up about $700 of that. Just to put it differently, something like 70% of the variation in the maps that I’ve been showing you seems to reflect, if you take this point estimate directly, causal effects of place, and 30% reflects selection. So a good chunk of it seems to actually be the causal effects of place.
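The arithmetic behind that 70/30 split is straightforward; here is the back-of-the-envelope version, using the round dollar figures from the quote purely for illustration:

```python
# Of a $1,000 gap in children's average adult earnings between two places,
# a child who actually moves picks up about $700, per the estimate quoted.
observed_place_gap = 1_000  # earnings difference visible in the maps
mover_gain = 700            # gain realized by a child who moves

causal_share = mover_gain / observed_place_gap  # attributable to place itself
selection_share = 1 - causal_share              # attributable to who lives there
print(f"causal effect of place: {causal_share:.0%}, "
      f"selection: {selection_share:.0%}")
# prints: causal effect of place: 70%, selection: 30%
```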

Another statistical approach is just to look at families who move for any reason. Look at families with several children. When that family moves, some of their children will grow up in the new neighborhood for longer than others, and it turns out that the income gains from moving to the new neighborhood for children growing up in the same family line up with how long the child lived in that neighborhood. Chetty describes various other approaches to demonstrating that the neighborhood in which you grow up, where that is defined as the area within about a half-mile of your home, has a lasting effect on economic prospects.  
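The sibling-comparison logic can be sketched in simulation: siblings who move at different ages get different years of exposure to the destination neighborhood, and regressing the outcome on exposure interacted with destination quality recovers a per-year effect. Everything below is simulated for illustration; the 0.04-per-year effect is an assumption, not Chetty’s estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
# Simulated movers: children move at ages 0-17, so years of childhood
# exposure to the destination neighborhood range from 1 to 18.
age_at_move = rng.integers(0, 18, n)
exposure_years = 18 - age_at_move
place_quality = rng.normal(0, 1, n)   # destination's mobility advantage
family_effect = rng.normal(0, 1, n)   # component shared within a family

# Assumed data-generating process: each year of exposure to a better place
# adds 0.04 * quality to the child's eventual adult income rank.
adult_rank_gain = 0.04 * exposure_years * place_quality + family_effect

# Regress the gain on (exposure x quality); since the family component is
# independent of the regressor in this simulation, OLS recovers the
# per-year exposure effect.
x = exposure_years * place_quality
per_year, _ = np.polyfit(x, adult_rank_gain, 1)
print(f"estimated gain per year of exposure: {per_year:.3f}")
```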

What public policy recommendations might flow from this finding? It would be impractical and costly to pay for vast numbers of people to relocate from current neighborhoods. But there’s a smaller-scale policy that might well be workable, which just involves providing information and connections to families with an interest in moving. Chetty writes:

A different view is maybe this is about some sort of barriers, frictions that are preventing families from getting to these places. Maybe they lack information, maybe landlords in those neighborhoods don’t want to rent to them, maybe they don’t have the liquidity they need to get to those places, and so on.  We are conducting a randomized trial where we’re trying to address a bunch of those barriers by providing information, and by simplifying the process for landlords by providing essentially brokerage services, like search assistance, in the housing search process. We take that for granted in the high-end of the housing market, but it basically doesn’t exist at the low-end of the housing market. We take about 1,000 families, 500 of which receive the services, and 500 don’t, randomly chosen. …

We found that this was an incredibly impactful intervention; we were extremely surprised by how much impact our services had on families’ likelihoods of moving to higher-opportunity neighborhoods. In the control group, less than one-fifth of families moved to higher opportunity areas. Eighty percent of families that received these vouchers chose to live in places that are relatively low mobility. In the treatment group, this completely changed. The vast majority of families in the treatment group are now living in these high mobility places. I was just in Seattle talking to some of the families who’ve moved. They’re incredibly happy and describe how this small set of services, which only comes at a 2% incremental cost relative to the baseline cost of the housing voucher program, dramatically changed their choices and their kids’ experience. …
There are a couple elements. First, from an economic perspective, we provide damage mitigation funds. This is basically an insurance fund that says that if anything goes wrong, we will cover it. In practice, the amount of expenses incurred are essentially zero, but I think it gives landlords some peace of mind. Second, there’s a simplification in the inspection process, which traditionally involves a lot of red tape and delays. We shortened the inspection process to 1 day, making it much simpler. Third—this actually surprised me—apparently telling landlords that their units can provide a pathway to opportunity for low-income kids actually makes them much more motivated to rent their units to certain families. …
In fact, now we find landlords coming to the housing authority saying things like: “I heard about this program,” or, “I had a really good experience with your previous tenant, I want to now rent again.” I think we can change that equilibrium if we do it thoughtfully.

What about taking steps to improve the prospects for intergenerational mobility in neighborhoods in a direct way? It’s worth remembering that Chetty’s evidence suggests that what really matters is the half-mile around where people live, so what seems to be called for are projects that improve neighborhoods at the local level. Chetty writes: 

What specific investments can be useful? Of course, that’s the question you’d want to know the answer to. That could range from things like, most obviously, improving the quality of schools in an area to things like mentoring programs, and changing the amount of social capital, if we can figure out ways to measure and manipulate things like connectedness, reducing crime, and physical infrastructure. There are many such efforts that have been implemented over the years by local governments, nonprofits, and other practitioners.
You might ask which of those things is actually most effective; what’s the recipe for increasing upward mobility in a given place? I think the honest answer is that we just do not know yet. The reason for that is that there are lots of these place-based efforts where someone invests a lot of money in a given neighborhood. The neighborhood looks completely different 10 years down the road, but you have no idea whether that’s because new people moved in and displaced the people who were living there before, so the neighborhood gentrified, or if the people who lived there to begin with benefitted. And again, I think resolving that question comes back to having historical longitudinal data and being able to follow the people who lived there to begin with.
As you read this brief overview of a much larger body of work, you may find your mind raising questions about other possible connections or policies. That’s natural. It’s a genuinely exciting area of research that is opening up before our eyes. 

Debt and Deficits: Nostalgia for the 1980s

Back in the mid-1980s, the federal government under the Reagan administration ran what were widely considered to be excessive and risky budget deficits: from 1983 to 1986, the annual deficit ranged between 4.7% and 5.9% of GDP. The accumulated federal debt held by the public rose from 21.2% of GDP in 1981 to 35.2% of GDP by 1987. I cannot exaggerate how much ink was spilled over this problem, some of it by me, back in those innocent and carefree times, before we learned to stop worrying and love the deficit.

The Congressional Budget Office has just released “The 2021 Long-Term Budget Outlook” (March 2021). There’s nothing deeply new in it, but it made me think about how attitudes toward budget deficits and government debt have evolved. The report notes: 

At an estimated 10.3 percent of gross domestic product (GDP), the deficit in 2021 would be the second largest since 1945, exceeded only by the 14.9 percent shortfall recorded last year. … By the end of 2021, federal debt held by the public is projected to equal 102 percent of GDP. Debt would reach 107 percent of GDP (surpassing its historical high) in 2031 and would almost double to 202 percent of GDP by 2051. Debt that is high and rising as a percentage of GDP boosts federal and private borrowing costs, slows the growth of economic output, and increases interest payments abroad. A growing debt burden could increase the risk of a fiscal crisis and higher inflation as well as undermine confidence in the U.S. dollar, making it more costly to finance public and private activity in international markets.

Here are a few illustrative figures. The first one shows accumulated federal debt over time since 1900. You see the bumps for debt accumulated during World Wars I and II, and during the Great Depression of the 1930s. If you look at the 1980s, you can see the Reagan-era rise in debt/GDP. But after the debt/GDP ratio had sagged back to 26.5% by 2001, you can see the big jump for debt incurred during the Great Recession, then the debt incurred during the pandemic recession, and then where the projections under current law would take us.

From a historical point of view, you can think of fiscal policy during the Great Recession and the pandemic recession as similar to what happened during the Great Depression and World War II. In both cases, there were two huge stresses within a period of about 15 years, and the federal government addressed both of them with borrowed money. In historical perspective, those Reagan-era deficits that caused such a fuss were just a minor speed bump. However, what’s projected for the future has no US historical equivalent. 
This figure shows projections for annual budget deficits, rather than for accumulated debt. The figure separates out the portion of the deficit attributable to interest payments on past borrowing (blue area). The “primary deficit” (purple area) is the deficit due to all non-interest spending. You’ll notice that the primary deficit doesn’t get crazy-high: it grows steadily from about 2.5% of GDP in the mid-2020s to 4.6% of GDP by the late 2040s. The problem is that the federal government gets on what I’ve called the “interest payments treadmill,” where high interest payments help to create large annual deficits, and large annual deficits then lead to still higher future interest payments. 
If the government could take action to hold down the rise in the primary deficit over time, with some mixture of spending cuts and tax increases (or does it sound better to say spending “restraint” and tax “enhancements”?), then it could also keep the US government from stepping onto the interest payments treadmill.  
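The treadmill dynamic can be sketched with the standard debt-dynamics identity, in which debt/GDP compounds at the gap between the interest rate and GDP growth, plus each year's primary deficit. The parameter values below (interest rate, growth rate, primary-deficit path) are round illustrative assumptions of mine, not CBO figures:

```python
# A minimal sketch of the "interest payments treadmill": when the interest
# rate (r) exceeds GDP growth (g), the existing debt stock grows faster than
# the economy, and interest payments feed back into ever-larger deficits.
# All parameter values below are illustrative assumptions, not CBO figures.

def project_debt(debt0, primary0, primary_growth, r, g, years):
    """Return the debt/GDP path under the standard debt-dynamics identity:

        d_{t+1} = d_t * (1 + r) / (1 + g) + primary_t

    where d is debt as a share of GDP and primary_t is the primary deficit
    (non-interest spending minus revenues) as a share of GDP.
    """
    debt, primary = debt0, primary0
    path = [debt]
    for _ in range(years):
        debt = debt * (1 + r) / (1 + g) + primary
        primary += primary_growth  # primary deficit drifts upward over time
        path.append(debt)
    return path

# Debt starts at 102% of GDP; the primary deficit starts at 2.5% of GDP and
# drifts upward; interest rate 3%, GDP growth 1.8% (all assumed).
path = project_debt(debt0=1.02, primary0=0.025, primary_growth=0.0007,
                    r=0.03, g=0.018, years=30)
print(f"debt/GDP after 30 years: {path[-1]:.0%}")
```

Note what the identity implies: even if the primary deficit were held perfectly flat, a positive gap between r and g alone keeps debt/GDP climbing once the starting stock is large. Restraining the primary deficit slows the treadmill but does not stop it unless GDP growth catches up with the interest rate.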
This figure shows projected trends for spending and taxes under current law. You can see the spending jump during the Great Recession, and then the jump during the pandemic recession. Assuming current law, projected tax revenues as a share of GDP don’t change much going forward. However, projected outlays do rise.

CBO explains the rise in outlays: 
Larger deficits in the last few years of the decade result from increases in spending that outpace increases in revenues. In particular:
  • Mandatory spending increases as a percentage of GDP. Those increases stem both from the aging of the population, which causes the number of participants in Social Security and Medicare to grow faster than the overall population, and from growth in federal health care costs per beneficiary that exceeds the growth in GDP per capita.
  • Net spending for interest as a percentage of GDP is projected to increase over the remainder of the decade as interest rates rise and federal debt remains high. 
There’s been some talk in recent years about how, in a time of low interest rates, it could be an excellent time for the US government to make long-run investments that would pay off in future productivity. This case has some merit, in my view, but it’s not what is actually happening. Instead, the fundamental purpose of the US government has been shifting. Back in 1970, about one-third of all federal spending was direct payments to individuals; now, direct payments to individuals are 70% of all federal spending. The federal government used to have missions like fighting wars and putting a person on the moon; now, it cuts checks. The CBO has this to say about the agenda of using federal debt to finance investments: 
Moreover, the effects on economic outcomes would depend on the types of policies that generate the higher deficits and debt. For example, increased high-quality and effective federal investment would boost private-sector productivity and output (though it would only partially mitigate the adverse consequences of greater borrowing). However, in CBO’s projections, the increasing deficits and debt result primarily from increases in noninvestment spending. Notably, net outlays for interest are a significant component of the increase in spending over the next 30 years. In addition, federal spending for Social Security, Medicare, and Medicaid for people age 65 or older would account for about half of all federal noninterest spending by 2051, rising from about one-third in 2021.
For decades now, we have known that the aging of the post-World War II “baby boom” generation, combined with rising life expectancies, would raise the share of elderly Americans. We have also known for decades that the primary programs for meeting the needs of this group--like Social Security and Medicare--have made promises that their current funding sources can’t support. We have been watching US health care costs rise as a share of GDP for decades. Meanwhile, the US economy has been experiencing slow productivity growth, which pushes the task of addressing all these problems closer to a zero-sum game.  

Neither during the Great Recession of 2007-2009 nor during the heart of the pandemic recession in 2020 and early 2021 was an appropriate time to focus on the long-term future of government debt. But averting our eyes from the trajectory of the national debt is not a long-term strategy.