What are the Current Financial Stability Concerns?

Sometimes you start working on a project, and it turns out to be more timely than you had expected. Back in October 2019, the Hutchins Center on Fiscal & Monetary Policy at the Brookings Institution and the Initiative on Global Markets at the University of Chicago Booth School of Business formed a Task Force on Financial Stability. At the time, I suspect it seemed like a worthy effort that would produce what a friend of mine used to call “good gray books”–where the adjective “gray” referred not to the cover but to the excitement quotient. But then the pandemic hit, and in March 2020 a series of real challenges to financial stability erupted. Thus, when the report came out in June 2021, it could be less about whether the financial stability problems of 2007-9 had been addressed, and more about the evident weaknesses remaining in the US financial system.

(For the record, the members of the task force were Glenn Hubbard, Donald Kohn, Laurie Goodman, Kathryn Judge, Anil Kashyap, Ralph Koijen, Blythe Masters, Sandie O’Connor, and Kara Stein.)

The question of “financial stability” basically focuses on this problem: When bad times hit, as they will do now and then, does the financial system keep performing its tasks pretty well in a way that helps ameliorate the crisis, or is the financial system staggered in a way that can propagate the original crisis and make it worse? For example, do loans still get made? Are financial transactions still carried out briskly? Do prices of financial assets fall by a reasonable amount, given the economic bad news, or is there a “fire sale” rush-for-the-exits dynamic that depresses prices very sharply or even makes some assets nearly impossible to sell for a time, until the market situation clarifies and stabilizes?

These issues of what is sometimes called “financial plumbing” are not intrinsically exciting. As with real-world plumbing, the importance of financial plumbing becomes clearest when it has broken down. Regular maintenance on the financial plumbing, by anticipating possible breakdowns, is of high importance.

The good news from the US financial system in the aftermath of the pandemic recession is that the banking sector looked very solid. In the aftermath of the Great Recession of 2007-9, a wave of laws and regulations “built the banking sector’s strength and resilience through more demanding capital and liquidity requirements as well as rigorous stress tests of the largest banks. … The banking sector, reflecting the effects of earlier reforms, remained resilient and met extraordinary demands for credit created when the interruption of economic activity caused by the pandemic sharply cut business cash flows.” The pandemic recession didn’t lead to plaintive calls from big banks that they needed just one more set of bailouts.

However, one tradeoff of making the banks safer is that an increasing amount of US financial activity now happens outside banks, in the “non-bank” sector. Here’s one symptom of that change: it used to be that companies borrowed from banks, but now that banks are more tightly constrained, companies instead borrow through the bond market or through syndicated loans that are sold to investors.

The report devotes separate chapters to risks in five parts of the non-bank financial system: the market for US Treasury securities, mutual funds, insurance companies, housing finance, and “central clearing counterparties.” The report has details: here, I’ll just offer a few words about each one.

One of the main reasons that investors hold US Treasury debt is that it is a safe asset, which means that when times go bad and you need cash, you can plan to sell off some of the Treasury debt without worrying that it might lose its value. But in March 2020, the US government wanted to dramatically increase its sales of Treasury debt (to fund additional government spending) at the same time that many private investors and central banks around the world also wanted to sell some of their holdings of Treasury debt and move to cash. For a moment or two, there even seemed to be a risk that the US Treasury would not be able to sell debt as planned–which would have led to chaos throughout global financial markets as investors changed how they value the outstanding $22 trillion or so in Treasury debt. The Federal Reserve stepped in:

In response, the Federal Reserve purchased more than $1.5 trillion in Treasuries in March and April. By contrast, in the quantitative easing program it undertook in 2012 and 2013 to spur economic recovery from the Global Financial Crisis, the Fed was purchasing $45 billion of Treasury securities each month. The purchases in 2020 were undertaken specifically to “restore market functioning,” the Fed said.

The market for US Treasuries did not crash. But emergency last-minute rescues by the Fed are not the first-best policy option. There are serious arguments that the entire method of selling Treasury debt through a series of private-sector dealers needs to be revamped.

Insurance companies manage a large pool of assets, so that they will have the funds available to pay claims during the next major natural disaster. They typically invest in assets like corporate bonds and Treasury debt. But when the pandemic hit, and corporate bonds dropped in value, the stock values of the big insurance companies also dropped (after all, their financial assets were worth less). The state-level regulators of insurance companies thus began to sound alarms about the safety and solvency of these firms. As a result, the insurance companies wanted to back away from writing new policies or from providing a supply of finance to capital markets–right at a time when many companies needed to borrow.

A similar problem arose in money market funds in March 2020. Those funds typically invest in corporate bonds (remember, companies don’t raise much money by borrowing from banks any more). But in a sudden pandemic, investors worry that companies are likely to default, and the price of those bonds falls. Investors in the money market funds start pulling out their cash. But there is a “maturity mismatch” here: investors can often pull out their money from a money market fund whenever they wish, but the money market fund is holding longer-term corporate bonds, and in the pandemic recession it can only sell those bonds at a loss. There is a risk of a run on the money market fund, similar to a bank run, where investors sprint to withdraw their funds before the value of the financial assets behind the fund declines. In the meantime, money market mutual funds become less of a funding source for corporations to borrow money.
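
To make the run dynamic concrete, here is a toy numerical sketch in Python. All the numbers are invented for illustration, and real money market funds have features (amortized-cost accounting, redemption gates and fees) that this ignores; the point is only to show the first-mover advantage that makes runs self-reinforcing.

```python
# Toy sketch of the first-mover advantage in a money market fund run.
# All numbers are invented; real funds use amortized-cost accounting and
# can impose gates and fees, which this ignores.

fund_assets = 100.0        # market value of the fund's bond portfolio
shares = 100.0             # shares outstanding, redeemable at a stable $1.00
fire_sale_discount = 0.03  # raising cash fast means selling bonds 3% below value

def redeem(cash_needed, assets, shares_out):
    """Pay redeeming investors $1.00 per share by fire-selling bonds."""
    assets_sold = cash_needed / (1 - fire_sale_discount)  # sell extra to cover the discount
    return assets - assets_sold, shares_out - cash_needed

# Suppose 30% of investors redeem early at the stable $1.00 price.
fund_assets, shares = redeem(30.0, fund_assets, shares)
print(f"Assets left: {fund_assets:.2f} for {shares:.0f} remaining shares")
print(f"True value per remaining share: {fund_assets / shares:.4f}")  # now below $1.00
```

Because early redeemers get out at the full stable value while the fire-sale losses fall on whoever waits, every investor has an incentive to redeem first. That is the run dynamic.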

We learned during the Great Recession of 2007-9 that housing finance is closely entangled with the broader economy as a whole. Most housing finance now proceeds through non-bank firms, which bundle borrowing into mortgage-backed securities and sell the securities to big investors–like the money market mutual funds and insurance companies already mentioned, as well as to pension funds, banks, and others. The pandemic recession raised a risk of a spike in the number of mortgage defaults. For a time, there was less willingness to extend such loans, and the value of mortgage-backed securities–remember, these are the financial assets held by other major players throughout the financial system–dropped. As the report notes: “The effects of the COVID-19 recession were cushioned by the Federal Reserve’s unprecedented intervention in the market for agency mortgage-backed securities. However, liquidity strains in the non-agency securities and loan markets disrupted mortgage credit supply. Widespread failures of nonbank mortgage servicers did not materialize, largely because of emergency government intervention. Emergency actions are not a permanent solution and do not address the underlying issue.”

There are two main ways of organizing markets for financial derivatives: “over-the-counter” markets and centralized clearinghouses. In an “over-the-counter” market, financial firms buy and sell with each other directly. One problem with such a market is that it can be hard to see overall patterns in the market as a whole. Thus, a common response in the aftermath of the Dodd-Frank Act of 2010 was to shift the buying and selling of financial derivatives to central clearinghouses: “The largest derivative CCPs include divisions of CME Group, Intercontinental Group, and the London Stock Exchange Group.” But of course, a central clearinghouse also has a potential problem: what if, under the stress of a negative financial shock, the clearinghouse itself stops being able to operate, or even goes bankrupt? After all, most clearing operations use a business model where sales happen “on margin,” with the buyer putting down only a percentage of the ultimate price, and then making adjustments later. There is a “default fund” if things go wrong. But if prices of financial assets move sharply and unexpectedly, and certain players in the financial markets can’t meet their obligations, and the default fund isn’t big enough, the clearinghouse itself could conceivably go broke, with potentially dire consequences for the smooth operation of the financial sector and the economy as a whole.
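
The “waterfall” logic behind the margin and the default fund can be sketched in a few lines of code. The layer names and sizes below are invented for illustration and simplify real CCP rules, which also include assessment rights on surviving members:

```python
# Toy sketch of a central clearinghouse "default waterfall," with all
# layer sizes invented for illustration.

def absorb_loss(loss, layers):
    """Apply a defaulting member's loss to each waterfall layer in order."""
    for name, size in layers:
        absorbed = min(loss, size)
        loss -= absorbed
        print(f"{name}: absorbs {absorbed}, remaining loss {loss}")
        if loss == 0:
            return
    print(f"Waterfall exhausted; {loss} remains -- the clearinghouse itself is in trouble")

waterfall = [
    ("Defaulter's initial margin", 50),
    ("Defaulter's default-fund contribution", 10),
    ("Clearinghouse capital ('skin in the game')", 5),
    ("Mutualized default fund (other members)", 40),
]

absorb_loss(120, waterfall)  # a loss big enough to blow through every layer
```

The worry in the paragraph above is the final line: a loss large enough to exhaust every layer lands on the clearinghouse itself.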

The basic answer to all these financial problems in March 2020, as I have already hinted, was massive intervention by the Federal Reserve. The Task Force report notes:

Under these circumstances, central banks had to intervene with massive and extraordinary actions to restore the functioning of these markets and preserve the access to credit of households, businesses, and governments. The Federal Reserve purchased huge amounts of Treasury and agency securities and agency mortgage-backed securities; it opened special liquidity facilities for securities dealers, for money market funds, for businesses that had lost access to commercial paper markets, and for businesses and state and local governments that encountered problems accessing bond markets. To meet heightened international demands for dollar liquidity, the Federal Reserve increased the size of its dollar swap lines with foreign central banks and expanded the list of central banks with access. In these facilities, the Fed sends dollars to foreign central banks temporarily and gets foreign currency in return. That enables foreign central banks to meet the dollar needs of their own banks. The Fed also established a repo facility for international monetary authorities, in which it lent for a short period against Treasury collateral so that foreign institutions could avoid selling their Treasury securities outright.

The economic recession part of the pandemic (as opposed to the larger public health and economic readjustment parts) was only two months in length, so the short-term Fed fixes were sufficient to help the US financial system through its turbulence. But it seems important to consider the stability of the non-bank financial sector before the next negative economic shock hits.

Bitcoin’s Remarkably High Electricity Use

Bitcoin uses a lot of electricity by its nature. The reason is that creation of new Bitcoin, in the basic structure of the system, requires solving computationally difficult problems. This characteristic prevents Bitcoin from being created in a helter-skelter or unfettered way. But solving a set of problems that require increasing numbers of calculations over time also takes electricity. Analisa R. Bala provides a short and useful overview in “Cleaning up Crypto,” from the September 2021 issue of Finance & Development. She writes: “A single bitcoin transaction may emit as much carbon as more than 1.8 million Visa purchases.”
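
To see why difficulty translates directly into computation, and therefore electricity, here is a minimal proof-of-work sketch. It is not Bitcoin’s actual protocol (which repeatedly hashes 80-byte block headers with double SHA-256 and adjusts difficulty every 2,016 blocks), but the brute-force structure is the same:

```python
# Minimal proof-of-work sketch: find a nonce so the hash of (data + nonce)
# starts with a given number of zero hex digits.
import hashlib

def mine(data: str, difficulty: int) -> int:
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit multiplies the expected number of hashes by 16.
# Difficulty 4 takes about 16**4 = 65,536 tries on average; Bitcoin's real
# difficulty is on the order of 19 leading zero hex digits.
print(mine("block payload", 4))
```

Each additional required zero multiplies the expected number of hashes, and thus the energy burned, by a factor of 16, which is how the system keeps new coins scarce no matter how much hardware joins the network.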

It’s not straightforward to estimate the amount of energy used by Bitcoin, or the environmental costs of that energy, because it involves making assumptions about how the electricity is generated. But here’s one set of estimates. The red line shows the best estimate of the amount of electricity used in the Bitcoin network. The pink area shows the range of uncertainty around the estimate. The points on the far right offer comparisons with the amount of electricity used by some countries. The best-fit estimate (red line) suggests that Bitcoin uses more electricity than Denmark or New Zealand. The upper end of the pink area suggests that it’s possible that, before the pandemic, Bitcoin was using as much electricity as France.

As far as environmental costs of generating this electricity go, Bala reports:

In October, more than 65 percent of bitcoin miners were based in China, where they could use hydroelectricity in the summer but mostly drew on the country’s coal-fired power stations or ran their own generators on diesel or heavy fuel oil. Now that there is a government clampdown, many miners are relocating to countries like Iran and Kazakhstan, where electricity comes almost entirely from fossil fuels.

As you might expect, there are technical alternatives that might allow Bitcoin and other cryptocurrencies to operate with lower electricity usage. But as Bala notes, “Bitcoin is largely where cryptocurrency’s energy consumption problem lies,” and I don’t yet see signs that Bitcoin is willing to change how it operates in any fundamental way.

Autonomous Vehicles: Eeyore Speaks

Missy Cummings was “one of the first female fighter pilots in the US Navy and now a professor in the Duke University Pratt School of Engineering and the Duke Institute for Brain Sciences, as well as the director of Duke’s Humans and Autonomy Laboratory.” She is interviewed by Michael Chui of the McKinsey Global Institute in “From fighter pilot to robotics pioneer: An interview with Missy Cummings” (September 22, 2021, audio and transcript). I was struck in particular by her comments about autonomous vehicles–and more generally, about the economic role of artificial intelligence and robotics moving forward. Cummings calls herself the “Eeyore” of autonomous vehicles–named after the extra-pessimistic donkey in the Winnie-the-Pooh stories. She says:

I think everyone was blown away with how quickly this technology transitioned out of the academic sphere and into the commercial sphere. It literally did happen overnight. But I think what we’re seeing now are the consequences of that. Because the technology was still extremely immature, still very much experimental. And Silicon Valley decided to try to commercialize a technology that was still very much in its infancy. … They’re still really struggling to try to get the technology to a point of maturation that’s safe enough in commercial settings, in robo-taxi settings.

We’re not going to get this anytime soon. I have yet to see any real advancements that I think can allow these cars to “reason under uncertainty.” That’s where I draw the line. The cars must be able to reason under uncertainty, whether that uncertainty is weather, human behavior, different failure modes in the car. If cars cannot figure out what to do at least to fail gracefully, then it’s going to be a long time before this technology is ready.

We may be able to get slow-speed technology—slow, meaning for autonomous shuttles—out of this. Maybe some very limited geo-fenced operations for robo-taxis. But we’re not going to get the kind of full reach that either car companies or the Silicon Valley companies like Cruise and Waymo think that we’re going to get. We’re going to fall short of that goal. I’m not sure yet what the spinout technologies will be. But I think that one day we are going to look back, and the self-driving race to the bottom, I think it’s going to become a really important set of Harvard Business [School] case studies. …

The right problems, specifically when we’re talking about the surface transportation world, including trucking and cars, is looking at collaboration between sensors and humans. Letting the system prevent humans from doing stupid things like falling asleep at the wheel, texting while driving, because if humans do these things, the car can then react and keep the car in a safe place. If nothing else, pull it over to the side of the road and flash the blinkers until the human can take charge in the way that they need to. So this idea of technology acting as a guardian. …

I know there’s a whole set of parents out there that are with me. I call myself Eeyore sometimes about the status of self-driving cars in the future. What I truly want, being the mother of a 14-year-old, is for self-driving cars to be here. I don’t want my 14-year-old daughter behind the wheel of a car ever. I do research in this field. I do understand how bad human drivers are. What I want would be for this technology to work, but understanding my job is to do research in this field, I recognize that it’s not going to happen.

What I foresee is that there is—not just in automotive transportation, but also in medicine and the military, finance—we are going to see a very distinct shift away from replacing human reasoning to augmenting reasoning. That’s where the real future is. That’s where the money is going to be made. Because people think they’re doing it, but they’re not really doing it, and they’re not doing it in the right way. …

There’s also going to be a huge growth area in the maintenance of any kind of computer system that has underlying AI in it, including robots. I think robot maintenance is going to be one of the biggest growth areas in the next 20 years. We cannot keep all the robots that we have right now working. And we’re not thinking about maintenance in a way that’s streamlined, that can pull from the resources of the typical socioeconomic class that’s doing maintenance now on regular forklifts. We’re going to have to figure out how education needs to change so that we can help people lift themselves up by their bootstraps.

I have no expertise or insight into the underlying technology, but I have written from time to time on this blog about the transformational possibilities of autonomous vehicles (for example, here and here), so I wanted to give an alternative view.

In addition, it seems to me that Cummings is expressing here a view about the future interaction of humans and technology that I’m seeing in several places. In the past, a focus of technology and innovation has often been on the replacement of existing workers–for example, in manufacturing work, or in the ways that information technology combined business records and thereby decimated the ranks of middle managers whose jobs had previously involved being the conduits for that information. In the future, the hope is that technology will be more focused on augmenting the abilities of existing workers, in ways that lead to higher-paying jobs and altogether new jobs. As one example among economists, Daron Acemoglu and some of his co-authors have been writing along these lines (for example, here, here, and here). I’m not sure what policies make it more likely that technology augments jobs rather than replacing them and worsening inequality, but the distinction seems worth some thought.

The Problem of Automated Screening of Job Applicants

There are plenty of good things that happen when firms post their job openings online. When a job is posted publicly, many more people have a chance to learn about it. Jobs are perhaps less likely to be allocated by a social network of who-hears-what-from-whom, opening up opportunities more broadly. Also, potential workers can search jobs and employers with their computer or phone, rather than needing to travel in person from one potential employer to another, gathering up applications. Filling out applications online can be quicker than filling out paper forms in person, too.

But there’s also a long-standing problem with online job applications, which in an essay written 20 years ago David Autor called the problem of “excess application.” That is, people can easily apply for many more jobs, and as a result, employers get many more applications for a given job. Employers need to whittle down the online job applicants to a manageable number, so they turn to automated tools for screening the job applications. Joseph B. Fuller, Manjari Raman, Eva Sage-Gavin, and Kristen Hines describe some of the problems that arise in “Hidden Workers: Untapped Talent,” subtitled “How leaders can improve hiring practices to uncover missed talent pools, close skills gaps, and improve diversity” (published by Accenture and the Harvard Business School, September 2021). They write:

An Applicant Tracking System (ATS) is a workflow-oriented tool that helps organizations manage and track the pipeline of applicants in each step of the recruiting process. A Recruiting Management or Marketing System (RMS) complements the ATS and supports recruiters in all activities related to marketing open positions, sourcing key talent, creating talent pools, and automating aspects of the recruiting process such as automated candidate scoring and interview scheduling. Together, these systems represent the foundation of the hiring process in a majority of organizations. In fact, more than 90% of employers in our survey use their RMS to initially filter or rank potential middle-skills (94%) and high-skills (92%) candidates. These systems are vital; however, they are designed to maximize the efficiency of the process. That leads them to hone in on candidates, using very specific parameters, in order to minimize the number of applicants that are actively considered. For example, most use proxies (such as a college degree or possession of precisely described skills) for attributes such as skills, work ethic, and self-efficacy. Most also use a failure to meet certain criteria (such as a gap in full-time employment) as a basis for excluding a candidate from consideration irrespective of their other qualifications.

As a result, they exclude from consideration viable candidates whose resumes do not match the criteria but who could perform at a high level with training. A large majority (88%) of employers agree, telling us that qualified high-skills candidates are vetted out of the process because they do not match the exact criteria established by the job description. That number rose to 94% in the case of middle-skills workers.

Thus, the problem of excess applications requires that firms do automated screening. But all too often, the automated screening relies on certain keywords, which are only a rough proxy for what the firm is looking for. The report has several suggestions for addressing this problem: for example, firms should avoid job descriptions with a long list of “nice-to-haves” that end up ruling out lots of potential workers, and focus on a shorter list of “must-haves.” But remember, the beginning of the problem is that the firm needs ways of reducing the excess applications to a manageable number, and having fewer job requirements doesn’t address that need. Thus, actual tests of the must-have skills can be a useful way of whittling down the list of applicants.
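
As a concrete (and deliberately crude) illustration of the problem, here is a toy rule-based screen in Python. The fields, thresholds, and knock-out rules are invented for this sketch rather than drawn from any actual ATS product:

```python
# Toy sketch of knock-out screening: each rule is a proxy, and failing any
# single rule silently discards a candidate who might perform well.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    has_degree: bool
    keywords_matched: int       # job-description keywords found in the resume
    employment_gap_months: int

def passes_screen(a: Applicant) -> bool:
    return a.has_degree and a.keywords_matched >= 8 and a.employment_gap_months <= 6

pool = [
    Applicant("A", has_degree=True, keywords_matched=9, employment_gap_months=0),
    Applicant("B", has_degree=True, keywords_matched=12, employment_gap_months=18),  # caregiving gap
    Applicant("C", has_degree=False, keywords_matched=11, employment_gap_months=0),  # skilled, no degree
]

print([a.name for a in pool if passes_screen(a)])  # only ['A'] survives
```

Candidates B and C might be strong hires, but each fails a single proxy rule and is never seen by a human.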

Some human resources experts argue that human resources departments in general often do a poor job in the interview process as well. Interviewers tend to like people who are like themselves. Some of the metrics often used for hiring young adults, like college grades, don’t have a strong correlation with workplace performance. Many companies don’t have a process for looking at basic data about recently hired employees–stuff like absence and quit rates, performance-based raises, or just asking the supervisor if they would hire that person again. With that information, a firm could then work backward to see if some common traits could have been helpful in the hiring process, and perhaps also what “nice-to-have” traits didn’t turn out to be all that important.

In short, the goal is to have firms rethink the automated screening processes they are using, and come up with alternative ways of shaping the applicant pool–broadening it in some ways, but also narrowing it in different ways–so that the result is more likely to lead to a productive and lasting job match. The Accenture/HBS report argues that companies that have developed ways to reach out to the “hidden workers” who often will not make it through current automated screening of job applications “were 36% less likely to face talent and skills shortages compared to companies that do not hire hidden workers.”

Following this line of thought, I was intrigued by “Hiring as Exploration,” by Danielle Li, Lindsey R. Raymond, and Peter Bergman (NBER Working Paper 27736, August 2020). They consider a “contextual bandit” approach. The intuition here, at least as I learned it, refers to the idea of a “one-armed bandit” as a synonym for a slot machine. Say that you are confronted with the problem of which slot machine to play in a large casino, given that some slot machines will pay off better than others. On one side, you want to exploit a slot machine with high payoffs. On the other side, even if you find a slot machine which seems to have pretty good payoffs, it can be a useful strategy to explore a little and see if perhaps some unexpected slot machine might pay as well or better. A contextual bandit model is built on finding the appropriate balance in this exploit/explore dynamic.

From this perspective, the problem with a lot of automated methods for screening job applications is that they do too little exploring. The automated tools take whatever criteria are imposed and narrow down the list of candidates, but they don’t systematically sample those who come close to making the final list but are left off, to learn about other mixtures of personal backgrounds, skills, and traits that might be important.

In this spirit, the authors create several algorithms for screening job applicants, and they define an applicant’s “hiring potential” as the likelihood that the person will be hired, given that they are interviewed. The algorithms all use background information “on an applicant’s demographics (race, gender, and ethnicity), education (institution and degree), and work history (prior firms).” The key difference is that some of the algorithms just produce a point score for who should be interviewed, while the contextual bandit algorithm produces both a point score and a standard deviation around that point score. Then, and here is the key point, the contextual bandit algorithm ranks the applicants according to the upper bound of the confidence interval associated with the standard deviation. Thus, an applicant with a lower score but higher uncertainty could easily be ranked ahead of an applicant with a higher score but lower uncertainty. Again, the idea is to get more exploration into the job search and to look for matches that might be exceptionally good, even at the risk of interviewing some real duds. They apply their algorithms to actual job applicants for professional services jobs at a Fortune 500 firm. They write:

Our data come from administrative records on job applications to these types of professional services positions within a Fortune 500 firm. Like many other firms in its sector, this firm is overwhelmed with applications and rejects the vast majority of candidates on the basis of an initial resume screen. Yet, among those who pass this screen and go on to be interviewed, hiring rates are still relatively low: in our case, only 10% receive and accept an offer. Because recruiting is costly and diverts employees from other productive work, the firm would like to adopt screening tools that improve its ability to identify applicants it may actually hire.
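
Before turning to the results, here is a small sketch of the upper-confidence-bound ranking step described above, with made-up point scores and standard deviations:

```python
# Sketch of ranking applicants by the upper bound of a confidence interval
# around predicted "hiring potential," rather than by the point estimate alone.
# All scores here are invented for illustration.

applicants = [
    ("W", 0.60, 0.02),   # (id, predicted score, standard deviation)
    ("X", 0.55, 0.15),   # lower score, but the model is very unsure about X
    ("Y", 0.58, 0.05),
]

def ucb(score: float, sd: float, z: float = 1.96) -> float:
    """Upper bound of a roughly 95% confidence interval around the point score."""
    return score + z * sd

# Exploitation only: rank by point score -> W, Y, X.
print(sorted(applicants, key=lambda a: a[1], reverse=True))

# Exploration bonus: rank by upper confidence bound -> X jumps to the top.
print(sorted(applicants, key=lambda a: ucb(a[1], a[2]), reverse=True))
```

Applicant X, with the lower point score but the most model uncertainty, jumps to the top of the interview list, and that reshuffling is where the extra exploration comes from.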

They find that several of the algorithms would have the effect of reducing the share of selected applicants who are Black or Hispanic, while the contextual bandit approach looking for employees who are potentially outstanding “would more than double the share of selected applicants who are Black or Hispanic, from 10% to 23%.” They also find that while the previous approach at this firm was leading to a situation where 10% of those interviewed were actually offered and accepted a job, the contextual bandit approach led to an applicant pool where 25% of those who were interviewed were offered and accepted a job.

In short, automated screening of job applications is a practical necessity, but too many firms are using it in a way that limits their applicant pool in undesired ways–and then those same firms are more likely to complain that they don’t have enough qualified applicants. For those firms, perhaps the problem doesn’t rest with the pool of job applicants, but with the thoughtless and counterproductive ways the potential employers are fishing in that pool.

Interview with Luigi Zingales: Social Media and Antitrust

Allison Schrager has a conversation with Luigi Zingales on the subject “Break Up Big Tech? A conversation about the future of the industry” (City Journal website, September 21, 2021). Zingales makes a number of interesting points, but here’s one of them:

I think the problem is that we treat Big Tech as one big issue, and we say we need to break them up. Rather, what we should do depends on what we want to accomplish, and what sector in the industry we’re talking about. Let’s start with social media. I think the government should have tried to stop Facebook’s acquisition of Instagram and WhatsApp, but I am not sure that breaking them up now would make a difference in the long term. If there are big network externalities, separating Facebook from Instagram would be just a temporary measure, because eventually only one of the two will prevail. …

[T]here is one thing I’d love to see. Why can’t I have software that monitors both Signal and WhatsApp and can receive and send data to both at the same time? In 2008, a company called Power Ventures did just that, but Facebook sued the hell out of it and established a principle in U.S. courts that if I give you my Facebook log-in credentials and you download data with my consent, then you are committing a federal crime and should go to jail. I think this is crazy, and it’s one of many legal issues making solutions difficult. …

We should separate the two key functions Facebook performs: sharing of information and editing of information. Facebook and Twitter allow me to share a photo with everyone who follows me. Yet, Facebook also decides whether all my followers will see the picture at the top of their feed, at the bottom, or not at all, if their feed is clogged with other posts. Facebook can also decide whether to promote my picture to lots of people I don’t know. …

First, we should separate the editorial role from the sharing role. In the editorial role, where there are no network externalities, we can have competition. I can have a University of Chicago editor, and another person could have Jacobin as editor. Newspapers can redefine their role as editors. I could subscribe to the Wall Street Journal editorial-selection services: the Wall Street Journal would edit and select from the web the articles or tweets I want to read. For example, I hate it when people talk about their lives on Twitter; other people love that. There should be free competition on curating these information feeds.

By contrast, the sharing function (which benefits from network externalities) should be considered a common carrier, with the restrictions typical of a common carrier, including universal service. Everyone should be allowed to post on Facebook, unless she violates the law. In the same way, the sharing function of Facebook should retain protection from legal liability, while the editorial function should not. …

Consider a phone company. Do you know how many crimes are committed over the phone? Are phone companies responsible for those? You could wiretap every conversation, but no one would even consider that possibility. Unlike phone companies, Facebook, Twitter, and YouTube promote the content posted on their networks. Recently, I wanted to watch a YouTube video of Noam Chomsky, and I immediately got all these recommendations for this strange TV channel. When I started to investigate, I discovered that it came from Venezuela. Venezuela has a very radical, left-wing channel that broadcasts in English for the American market. YouTube promotes the channel, and makes money off promoting it, because it wants to keep viewers like me attached to their service as much as it can. And the way to keep us engaged is to give us more and more radical stuff that stimulates us more and more. The problem isn’t social media; it’s the business model, which is to get people addicted to platforms.

I am not fully confident about the Zingales claim that sharing in social media involves network externalities, while editorial choices about what to promote do not. There is also a degree of irony here, in that a previous round of complaints against social media was that it cannibalized newspapers and other journalism by passing along their content without paying the original publishers for it. Now, the proposal is that social media sites should be required to let others curate and pass along their content?

But I do think it’s useful to think in specific terms about what we’re trying to accomplish by applying antitrust regulation to social media. Just saying “sic ’em” is not a worthy motivation for public policy. For example, is the goal to have a more competitive market for online advertising? Or to protect the privacy of individual data? Or are there issues in how these firms choose what content to promote, and how to promote it, that raise anticompetitive or other policy issues? What part of a social media firm is more like a phone conversation, just passing along what someone says, and what part involves strategy and choices by the firm?

Economic Sanctions: A Reality Check

Economic sanctions are an attempt to carry out foreign policy by economic means. The term is deliberately broad: it includes decisions about not trading certain products with certain countries or companies, or seeking to freeze the bank accounts of countries, companies, or individuals. In political terms, one main attraction of economic sanctions is that they address a demand to “do something” in foreign policy in a way that doesn’t involve ordering soldiers into harm’s way or imposing large budgetary costs. Thus, it’s no surprise that sanctions are quite popular. What’s less clear is whether they are effective.

Daniel W. Drezner makes a case for a degree of skepticism in an essay in the latest issue of Foreign Affairs, “The United States of Sanctions: The Use and Abuse of Economic Coercion” (September/October 2021). He writes:

Sanctions—measures taken by one country to disrupt economic exchange with another—have become the go-to solution for nearly every foreign policy problem. During President Barack Obama’s first term, the United States designated an average of 500 entities for sanctions per year for reasons ranging from human rights abuses to nuclear proliferation to violations of territorial sovereignty. That figure nearly doubled over the course of Donald Trump’s presidency. President Joe Biden, in his first few months in office, imposed new sanctions against Myanmar (for its coup), Nicaragua (for its crackdown), and Russia (for its hacking). He has not fundamentally altered any of the Trump administration’s sanctions programs beyond lifting those against the International Criminal Court. To punish Saudi Arabia for the murder of the dissident Jamal Khashoggi, the Biden administration sanctioned certain Saudi officials, and yet human rights activists wanted more. Activists have also clamored for sanctions on China for its persecution of the Uyghurs, on Hungary for its democratic backsliding, and on Israel for its treatment of the Palestinians. 

We don’t know much about how well these sanctions actually achieve a foreign policy goal. The limited studies on the subject suggest they are effective less than half the time. Moreover, the government actors who impose sanctions often don’t seem to pay much attention to whether they work or not. Drezner writes:

A 2019 Government Accountability Office study concluded that not even the federal government was necessarily aware when sanctions were working. Officials at the Treasury, State, and Commerce Departments, the report noted, “stated they do not conduct agency assessments of the effectiveness of sanctions in achieving broader U.S. policy goals.” 

Drezner argues that the promiscuous overuse of sanctions by the United States results from two interrelated factors: weakness and lack of imagination. The weakness is that US dominance in world economic and military affairs is diminishing. For a number of foreign policy priorities, we want to do more than just give a speech, but less than order a military sortie. We settle on economic sanctions because we lack the ability to envision how foreign policy goals might be pursued in other ways.

There seem to be several conditions for economic sanctions to be effective: precise targeting, a realistic goal, and a degree of international cooperation. As an example, Drezner points out: “In 2005, when the United States designated the Macao-based bank Banco Delta Asia as a money-laundering concern working on behalf of North Korea, even Chinese banks responded with alacrity to limit their exposure.” Some of the efforts to limit flows of funds to terrorist groups seem to have been effective, at least over the short- and medium-term.

But when the US, standing mostly alone, imposes sanctions for general purposes on large economies, the main effect is often to cause suffering to the people of the country, rather than actually to achieve a foreign policy goal. The international sanctions against South Africa may be the best example of a success story in assisting regime change. But economic sanctions that require a country to dismantle its existing political/economic arrangements are not likely to work well.

The United States has imposed decades-long sanctions on Belarus, Cuba, Russia, Syria, and Zimbabwe with little to show in the way of tangible results. The Trump administration ratcheted up U.S. economic pressure against Iran, North Korea, and Venezuela as part of its “maximum pressure” campaigns to block even minor evasions of economic restrictions. The efforts also relied on what are known as “secondary sanctions,” whereby third-party countries and companies are threatened with economic coercion if they do not agree to participate in sanctioning the initial target. In every case, the target suffered severe economic costs yet made no concessions. Not even Venezuela, a bankrupt socialist state experiencing hyperinflation in the United States’ backyard, acquiesced.

The Trump administration was quite aggressive in using economic sanctions to pressure China for economic and foreign policy goals. That policy does not seem to have been effective.

Similarly, the myriad tariffs and other restrictive measures that the Trump administration imposed on China in 2018 failed to generate any concessions of substance. A trade war launched to transform China’s economy from state capitalism to a more market-friendly model wound up yielding something much less exciting: a quantitative purchasing agreement for U.S. agricultural goods that China has failed to honor. If anything, the sanctions backfired, harming the United States’ agricultural and high-tech sectors. According to Moody’s Investors Service, just eight percent of the added costs of the tariffs were borne by China; 93 percent were paid for by U.S. importers and ultimately passed on to consumers in the form of higher prices.

Indeed, it seems to me that we have often developed an odd vocabulary in talking about economic sanctions, where we refer to them as “success” when they cause disruption or stress, not when they actually succeed in accomplishing the foreign policy goal that they were purportedly enacted to address.

I’m reluctant to opine much on foreign policy. It’s not my area of expertise. But even I understand that building and projecting America’s interests needs to be a broad-based project that involves more than just imposing economic and military costs on others, but also includes building connections and offering carrots. Thus, foreign policy can work with economic policy on issues of building trade relations, encouraging investment flows, and providing loans or aid. Building the connections between nations that offer a degree of leverage in foreign policy can also use other tools: cultural exchanges, travel between countries, communication and consultation between governments, helping with training and expertise, and a range of treaty alliances on smaller issues. Individually, many of these are small steps. But together, they build up a reservoir of understanding and connectedness, so that when the tougher and bigger issues come up, US foreign policy goals have a greater chance to succeed.

Drezner reports that Treasury Secretary Janet Yellen has promised to carry out a review of the US use of economic sanctions, which seems overdue. Acting as if economic sanctions are an appropriate part of almost every foreign policy goal, and then watching as other countries do the same in pursuit of all of their own foreign policy goals, doesn’t seem like a pathway to make the world a safer or more flourishing place.

The World Bank Kills Its “Doing Business” Report

The World Bank announced that it is discontinuing its annual Doing Business report. The reason is that World Bank insiders, under pressure from national governments, leaned on the researchers charged with compiling the report to change their findings–which they did. Details are available in a report: “Investigation of Data Irregularities in Doing Business 2018 and Doing Business 2020 – Investigation Findings and Report to the Board of Executive Directors” (September 15, 2021).

Before sketching what happened within the World Bank–a topic admittedly of interest mainly to a small number of insiders–it’s perhaps useful to describe how the Doing Business reports came into existence and what they were trying to do. At least for now you can check out the website of the 2020 Doing Business report for yourself.

Simeon Djankov described the origins of the Doing Business reports in the Winter 2016 issue of the Journal of Economic Perspectives (where I work as Managing Editor). Djankov wrote:

The Doing Business report was first published in 2003 with five sets of indicators for 133 economies. However, the team that created Doing Business had been formed three years earlier, during the writing of the World Development Report 2002: Building Institutions for Markets (World Bank 2001). The focus on the importance of institutions in development was chosen by Joseph Stiglitz, who at the time was the World Bank’s Chief Economist. As a member of the team, I was tasked with authoring the chapters on institutions and firms. At the time, the work by Rafael La Porta, Florencio Lopez de Silanes, Andrei Shleifer, and Robert Vishny on legal origins and various aspects of institutional evolution was generating a great deal of interest. I turned to Shleifer with a request to collaborate on several background papers for the World Development Report. He agreed, on the condition that we used this work as an opportunity to gather and analyze new cross-country datasets on institutions. This is how Doing Business started.

By 2020, the Doing Business report included data for 190 countries on 10 categories: “the processes for business incorporation, getting a building permit, obtaining an electricity connection, transferring property, getting access to credit, protecting minority investors, paying taxes, engaging in international trade, enforcing contracts, and resolving insolvency.” There was also data collected on regulation of employment and contracting with government, although this data was not an official part of the Doing Business Index–and the idea of an index is perhaps where the trouble starts.

After all, researchers all over the world, in international institutions like the World Bank and the IMF as well as in academia and think tanks, do research on these kinds of topics all the time. But the idea of the Doing Business Index was to come up with concrete ways of measuring these 10 categories. Obvious questions arise. For example, say the task is to measure the costs of obtaining an electricity connection in a given country. One cost will just be the cost of the electricity itself, but what about the time and the number of permits required? Doesn’t the reliability of the electricity supply need to be taken into account? Isn’t there likely to be a big difference in a given country between urban and rural areas? Maybe the process and costs will be different by industry, too.

The Doing Business project, to its credit, wanted to be scrupulous about exactly what was being measured. In the case of getting electricity, the 2020 report spells out:

Doing Business records all procedures required for a business to obtain a permanent electricity connection and supply for a standardized warehouse … These procedures include applications and contracts with electricity utilities, all necessary inspections and clearances from the distribution utility as well as from other agencies, and the external and final connection works between the building and the electricity grid. The process of getting an electricity connection is divided into distinct procedures and the study records data for the time and cost to complete each procedure. In addition, Doing Business measures the reliability of supply and transparency of tariffs index … and the price of electricity …

The data for measuring all of these dimensions of “getting electricity” was based on survey evidence from local industry experts. As the report notes: “The data on getting electricity is collected through a questionnaire completed by experts in the electricity sector, including electrical engineers, electricians, electrical installation firms, as well as representatives from utility companies and energy regulators, and other public officials involved in this sector. To make the data comparable across economies, several assumptions about the business, the warehouse and the electricity connection are used.”

I hope this brief description of one category of the Doing Business report gives a sense of the ambition and scope of the project. Surveys were going out to multiple industry experts in 190 countries, across these 10 categories. Then the survey answers were being compiled and combined into a single index number for “getting electricity,” and the single index numbers for all 10 categories were being combined into an overall Doing Business index number for every country.
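
For readers who like to see the mechanics, here is a generic sketch of that two-stage aggregation. Doing Business’s actual methodology (its “distance to frontier” scoring and weighting choices) differs in detail, and every number below is invented:

```python
# Generic two-stage index aggregation: indicators -> category score,
# category scores -> overall country score -> ranking.

country_data = {
    "Country A": {"getting electricity": [0.9, 0.7], "paying taxes": [0.8, 0.6]},
    "Country B": {"getting electricity": [0.5, 0.6], "paying taxes": [0.9, 0.9]},
}

def category_score(indicators):
    # Combine the survey-based indicators within one category.
    return sum(indicators) / len(indicators)

def overall_score(categories):
    # Combine the category scores, equally weighted, into one index number.
    return sum(category_score(v) for v in categories.values()) / len(categories)

ranking = sorted(country_data, key=lambda c: overall_score(country_data[c]), reverse=True)
for c in ranking:
    print(c, round(overall_score(country_data[c]), 3))
```

The averaging choices look innocuous, but as the next paragraphs suggest, every one of them is a place where a disputed judgment, or political pressure, can move a country’s rank.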

For those putting together the reports, and for those like me reading the reports, it was obvious that the index numbers and ranking were in some sense very broadly informative. However, what was really interesting about the report was that you could drill down into the underlying questions and get a more detailed and granular sense of what was causing a given score to be high or low. You could point out what seemed to be strengths and weaknesses in the business climate of countries. And if you disagreed with a given score, the methodology was clear enough that you could often pinpoint your precise area of disagreement. In other words, Doing Business was trying to take the ideas of “business climate” or “institutions that interact with private business” and make them specific, rather than ethereal and rhetorical.

But for national governments, the overall Doing Business scores felt like a judgement. Governments around the world, including India, Russia, Peru, and others, announced as a policy goal that they would perform better in the Doing Business rankings. Of course, all countries prefer to be above average in their business climate. National governments quarreled with the Doing Business methods and findings.

As one of many issues, the Doing Business rankings often looked at the actual formal rules and regulations. But many companies in practice found ways to circumvent those rules, perhaps with political pull or bribes. A system that functions based on such favoritism may not be a good thing–but if you focus on the formal rules, you may not be capturing how the system actually operates. For an overview of the issues and controversies surrounding the Doing Business indicators, a useful starting point is the two-paper symposium on Doing Business in the Spring 2015 issue of the Journal of Economic Perspectives.

For the 2018 Doing Business report, the focus of the outside report is on China. There was deep concern at the World Bank that China might reduce its financial commitments to the Bank, with phrases like “very deep trouble” being tossed around. Overall, China had been ranked #78 in the previous Doing Business report, and China’s government was complaining to the top leaders at the Bank that this ranking was too low and didn’t reflect the progress China had made. The underlying data did show that China had made substantial progress–but it also showed that a number of other countries ranked near China had made even more substantial progress. Thus, in terms of rankings, China was scheduled to drop.

Shenanigans followed. The 2018 report was about to be published, with China having a ranking of #85. Top leadership of the Bank then pulled the report just before publication, and started asking for some way of recalculating China’s numbers. The report documents in painful detail how the pressure was exerted, how options were proposed and then discarded (like, say, using data for Macao to measure business conditions in China), and eventually, how China got its #78 ranking back.

For the next cycle of the Doing Business report scheduled for 2020, a similar cycle occurred, this time with Saudi Arabia as the protagonist. For example, early data ranked Jordan higher than Saudi Arabia as a reformer in the Middle East region. Again, political pressure was brought to bear on the researchers doing the work. Again, the numbers got altered. A similar pattern also happened for Azerbaijan.

One finishes the outside report wondering how many other national governments were negotiating with the World Bank over their scores. Once such doubts are not just rumors, but backed by outside investigators, there may not have been much choice but to end the report, at least for a time. One might also argue that given the imperfections of the report–the difficulties and idiosyncrasies of how it defined terms, gathered information, and combined that information into rankings–perhaps the loss is not an enormous one.

For my part, the demise of Doing Business seems unfortunate, but then, I’m a person for whom more data is pretty much always better. Yes, the rankings were imperfect, but having no one attempting to systematically measure and compare across these categories, so that there is no information at all, doesn’t seem to me an overall gain. More troubling, the saga of the Doing Business report reveals how national governments will push back against research findings and statistics they don’t like. This story shows how researchers were pressured to knuckle under, and did. When that happens, it casts a shadow over research not just at the World Bank, but everywhere.

Mulling Pandemic Advice from September 2019

Two years ago in September 2019, before COVID was on everyone’s lips, the White House Council of Economic Advisers published a report called “Mitigating the Impact of Pandemic Influenza through Vaccine Innovation.” Even committed readers of government reports like me skipped over the report at the time. After all, I’d written about reports that discussed pandemic risks from time to time in the past (for example, here and here), and back in fall 2019, there didn’t seem to be any pressing need to revisit the subject. But Alex Tabarrok referred back to the 2019 report in a recent post at the always-useful Marginal Revolution website, and given where we are today, it’s interesting to read the report again.

The CEA report states the risk quite clearly:

Every year, millions of Americans suffer from seasonal influenza, commonly known as “the flu,” which is caused by influenza viruses. A new vaccine is formulated annually to decrease infections resulting from the small genetic changes that continually occur in the most prevalent viruses and make them less recognizable to the human immune system. There is, however, a 4 percent annual probability of pandemic influenza resulting from large and unpredictable genetic changes leading to an easily transmissible influenza virus for which much of the population would lack the residual immunity that results from prior virus exposures and vaccinations. … [I]n a pandemic year, depending on the transmission efficiency and virulence of the particular pandemic virus, the economic damage would range from $413 billion to $3.79 trillion. Fatalities in the most serious scenario would exceed half a million people in the United States. Millions more would be sick, with between approximately 670,000 to 4.3 million requiring hospitalization. In a severe pandemic, healthy people might avoid work and normal social interactions in an attempt to avert illness by limiting contact with sick persons
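
That 4 percent annual probability compounds in a way worth making explicit. Assuming (as a simplification) that pandemic years are independent draws, a quick calculation shows the chance of at least one influenza pandemic over longer horizons:

```python
# Back-of-the-envelope: with a 4% chance of a pandemic starting in any given
# year, and years treated as independent, how likely is at least one pandemic
# over longer horizons?
for years in (10, 20, 30):
    p_at_least_one = 1 - (1 - 0.04) ** years
    print(f"{years} years: {p_at_least_one:.0%}")
# 10 years: 34%, 20 years: 56%, 30 years: 71%
```

In other words, the report was describing roughly a coin flip over two decades, not a remote tail risk.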

The CEA discussion is framed in terms of the annual flu season. Thus, the question is: if a new and virulent type of flu appeared, how quickly could we get the appropriate flu vaccine shot in place? The 2019 report notes:

Large-scale, immediate immunization is the most effective way to control the spread of influenza, but the predominant, currently licensed, vaccine manufacturing technology would not provide sufficient doses rapidly enough to mitigate a pandemic. Current influenza vaccine production focuses on providing vaccines for the seasonal flu and primarily relies on growing viruses in chicken eggs. Egg-based production can take six months or more to deliver substantial amounts of vaccines after a pathogenic, influenza virus is identified—too slowly to stave off the rapid spread of infections if an unexpected and highly contagious pandemic virus emerges. …

Newer technologies, like cell-based or recombinant vaccines, have the potential to cut production times and improve efficacy compared with egg-based vaccines and are currently priced below the expected per capita value of improved production speeds for pandemic vaccines. But these existing technologies have not yet been adopted on a large scale. Besides improving pandemic preparedness, new vaccine technologies may have an additional benefit of potentially improving vaccine efficacy for seasonal influenza. We estimate the economic benefits that these new technologies could generate for each seasonal influenza vaccine recipient, and find that the benefits are particularly compelling for older adults (65+) who are at high risk of influenza complications and death.

In retrospect, this emphasis on production speed still seems relevant, but it also seems to skip past some other issues: For example, how quickly can we learn about a new flu strain? How can we accelerate the regulatory process? What about steps that we might want to take immediately in the case of a severe future outbreak, to buy time until the vaccine is available? For example, what could we do to have an oversupply of masks, ventilators, and testing kits rapidly available? Under what circumstances does it make sense to start widespread contact tracing? Under what circumstances do lockdowns, travel restrictions, or quarantines make sense?

What is perhaps ironic, or tragic, is that we did manage to pull off COVID vaccinations. Back in spring 2020, it wasn’t clear that it would be possible to develop a COVID vaccine quickly, or perhaps at all. But thanks in part to being able to build on earlier research, like efforts to find a vaccine for HIV, the vaccine arrived faster than many had predicted.

The 2019 report is quite correct that it’s worth laying the groundwork in advance for developing a vaccine. Development and production of vaccines is also a case where speed matters a great deal. If you need a couple of years to get a new vaccine into production, the losses in health and economic production will be enormous–and the benefits of vaccinating those who haven’t already gotten the illness after that delay will be correspondingly small. The CEA report notes that in the much smaller and milder “swine flu” pandemic of 2009, the first cases in humans were identified in April 2009, and the national flu vaccination campaign started six months later in October 2009. The COVID vaccine was admittedly a tougher case, because COVID was less familiar than just another variant of flu, but it took roughly a year from the first human cases to the vaccine becoming available–and well over a year until enough doses were readily available to all.

The $18 billion spent on Operation Warp Speed may already have the highest benefit-cost ratio of any government spending program that has ever existed. The benefits will only expand as vaccines are rolled out around the world, and when you take into account that lessons from COVID vaccines may enable better vaccines in the future, too.

But it also feels to me as if, in our current social controversies about vaccinations, mask-wearing, social distancing, hand-washing, and all the rest, we have not done much to learn the other lessons of the pandemic. We don’t have an early-warning system for a future pandemic, which could be implemented as part of existing medical tests–or, some think, could be implemented with a high degree of anonymity through testing at sewage treatment plants. We don’t have a system for accelerating the development of cheap and widespread tests for a new pandemic, or a system for contact tracing, either. The next pandemic will arrive on its own schedule. Right now, it feels to me as if we are sleepwalking into that next pandemic with the same set of lousy policy options: that is, stop-and-start lockdowns under ever-changing rules in the short term, hoping to develop and produce a vaccine, and then hoping that the public health apparatus can do a better job of persuading people to get the vaccine.

International Corporate Taxation: What to Tax?

There have been news stories in the last month or two about broad-based support across 130 countries for a global minimum corporate tax rate of 15%. The common assertion is that a minimum tax rate will be a powerful discouragement for companies trying to use accounting methods to shift their profits to low-tax countries. But the problem of international corporate taxation is considerably harder than agreeing on a minimum tax rate.
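
To see the intuition behind that assertion, here is a minimal sketch of the basic “top-up” arithmetic. The function name, the flat top-up rule, and the numbers are my own illustrative assumptions–the actual proposals involve many more adjustments and carve-outs.

```python
# A minimal sketch of the arithmetic behind a global minimum tax:
# if profits are booked in a jurisdiction taxing them below the floor,
# the home country collects the shortfall. Illustrative only; the real
# proposals include many adjustments and carve-outs.

MINIMUM_RATE = 0.15

def top_up_tax(profits_booked: float, local_effective_rate: float) -> float:
    """Extra tax collected at home on profits booked in a low-tax jurisdiction."""
    shortfall = max(0.0, MINIMUM_RATE - local_effective_rate)
    return profits_booked * shortfall

# A firm books $100 million of profit where the effective rate is 5%:
# the home country collects the missing 10 percentage points, so shifting
# the profits no longer cuts the firm's total bill below the 15% floor.
print(top_up_tax(100e6, 0.05))  # ~10,000,000: about $10 million
```

Even in this toy version, the hard part is hiding in the second argument: someone has to decide what counts as the “effective rate” and where the profits were really booked.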

Ruud de Mooij, Alexander Klemm, and Victoria Perry have edited a collection of essays that lays out the issues in Corporate Income Taxes Under Pressure: Why Reform is Needed and How it Could be Designed (IMF, 2021).

Imagine a hypothetical multinational company. It’s a US-based firm with management and headquarters in the US. However, the company owns subsidiary firms in a dozen other countries that support its global production chain, and it sells its products, backed by substantial advertising and marketing, in several dozen other countries. When all is said and done, the firm makes a profit. But was the profit generated by the US-based management of the company? By the production units in other countries? By some combination of these two? What about the countries where the actual sales take place?

It’s easy to make this example a little more complex. What if the multinational company also owns a management consulting arm based in non-US country #1, an insurance arm based in non-US country #2, and a research and development facility based in non-US country #3? Again, these different branches operate within the single firm, and all their services to the firm are provided digitally–without any physical product that ships across national borders. The firm will need to make decisions about what it is reasonable to pay each of these parts of the firm–and to decide what part of its overall profits (if any) are attributable to each arm of the company.

Finally, remember that each country along the production chain has two goals: it wants to encourage economic activity to happen within its own borders, and it wants some share of corporate tax revenues for itself. Some countries will put a stronger emphasis on attracting economic activity; others will put a stronger emphasis on collecting revenue. Some countries may reason that if they attract economic activity with low corporate taxes, they can instead collect tax revenue with value-added taxes or payroll taxes as production happens. Each country will write its own corporate tax rules, perhaps following the same general pattern, but also with its own favoritisms and politics built in. For example, countries may impose a certain corporate tax rate, then also have other provisions in the tax code, or other agreements about what kinds of public services will be provided to the firm, which make the effective corporate tax rates lower. Moreover, it is a general rule of international corporate taxation that a company should not be taxed more than once on the same earnings.

In thinking about the appropriate tax rules for multinational corporations, generalized statements of support for a 15% minimum rate (even if that support holds up when tested in the fiery furnace of practical politics) don’t begin to address the issues at hand. The question is not so much the tax rate (15% or another level), but which governments have the right to tax what parts of the production chain.

In Chapter 3 of the book, Narine Nersesyan lays out these issues in “The Current International Tax Architecture: A Short Primer.” She writes (citations and footnotes omitted):

When a business activity crosses national borders, the question arises as to where the profits resulting from that activity should be taxed. In principle, there are at least three possibilities for assigning a taxing right:

• Source: the countries where production takes place
• Residence: the countries where a company is deemed to reside
• Destination: the countries where sales take place

The generally applied tax architecture for determining where profits are taxed is now nearly 100 years old—designed for a world in which most trade was in physical goods, trade made a less significant contribution to world GDP, and global value chains were not particularly complex. … The current international tax framework is based on the so-called “1920’s compromise”. In very basic outline, under the “compromise” the primary right to tax active business income is assigned where the activity takes place—in the “source” country—while the right to tax passive income, such as dividends, royalties and interest, is given up to the “residence” country—where the entity or person that receives and ultimately owns the profit resides. The system has, however, evolved in ways that considerably deviate from this historic “compromise,” and international tax arrangements currently rest on a fragile and contentious balance of taxing rights between residence and source countries. …

While domestic laws of each individual country set out the rules … the international taxation system is—very importantly—overlain with a network of more than 3,000 bilateral double-taxation treaties. These typically add (among other functions) a layer of definitions and income allocation rules that try to bring into alignment, and therefore can alter, the rules imposed by the individual signatories. … The key role of the international tax architecture is to govern the allocation of taxing rights between the potential tax-claiming jurisdictions to avoid both excessive taxation of a single activity and a nontaxation of a business activity.

The problem of how to allocate the profits of a multinational company across the different activities that it carries out in a range of countries is a genuinely sticky one, as each country grabs for a slice of the pie. But there are now companies that are incorporated in one country, with management and control operations in another country, and assets and jobs in still other countries.

Different chapters in the volume look at possible policies for taxing multinational companies, including source-based taxation (although figuring out “the source” of profits is going to be tricky); residence-based taxation (although figuring out the real residence–or residences?–of a multinational firm is going to be tricky); destination-based taxation (which would allocate the worldwide profits of a multinational according to where sales ultimately occur–and you can just imagine how much a country whose big exporters sell elsewhere would like that idea); or a formulary approach (which attempts to resolve all of these issues through a formula that includes all of these elements).
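
To make the formulary idea concrete, here is a minimal sketch of how an apportionment formula might divide profits. The equal weights on sales, payroll, and assets are purely an illustrative assumption (they echo the three-factor formulas some US states have used to divide corporate income among states), not a proposal from the book.

```python
# A minimal sketch of formulary apportionment: a multinational's worldwide
# profit is divided among countries by a formula over observable factors.
# Equal weights on sales, payroll, and assets are an illustrative assumption.

def apportion(total_profit, factors, weights=None):
    """Allocate total profit across countries by weighted factor shares."""
    weights = weights or {"sales": 1/3, "payroll": 1/3, "assets": 1/3}
    totals = {f: sum(country[f] for country in factors.values()) for f in weights}
    return {
        name: total_profit * sum(w * c[f] / totals[f] for f, w in weights.items())
        for name, c in factors.items()
    }

# Hypothetical two-country firm: most sales in country A, production in B.
factors = {
    "A": {"sales": 80, "payroll": 10, "assets": 10},
    "B": {"sales": 20, "payroll": 90, "assets": 90},
}
print(apportion(300.0, factors))
# roughly {'A': 100.0, 'B': 200.0} -- each country then taxes its slice
# at whatever rate it chooses.
```

Even this toy version shows where the fights would start: every choice of weight and every definition of a factor shifts revenue from one treasury to another.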

I don’t mean to offer nothing here but the counsel of despair. I’m sure there are steps that can be taken to discourage companies from booking a large share of their profits in jurisdictions where almost none of their actual operations take place. But rethinking the roots of multinational corporate taxation in a way that would be acceptable to politicians in most countries is a genuinely herculean task.

How Poverty Changed in 2020: Mixed Measures

Each year in September, the US Census Bureau releases a report on income and poverty measures for the previous year. Thus, Income and Poverty in the United States: 2020, by Emily A. Shrider, Melissa Kollar, Frances Chen, and Jessica Semega (September 2021) looks at changes for 2020. As one would expect, given the pandemic, the poverty rate in 2020 rose.

This figure shows the poverty rate in the top graph and the number of people in poverty in the bottom graph.

This figure shows the ongoing pattern of poverty by age: the elderly have the lowest poverty rate, and it didn’t budge much in 2020. The poverty rates for children and working-age adults both rose.

But an obvious question arises here. In an attempt to offer protection against the economic costs of the pandemic, the federal government ran a budget deficit of more than $3 trillion in 2020, equal to about 15% of GDP. How does that amount show up in these numbers? The short answer is that it doesn’t.

The official poverty measure is based on “money income.” Thus, it does include cash payments like Social Security or welfare and unemployment payments. However, it does not include a value for non-cash benefits like food stamps or Medicare and Medicaid. It also does not include the value of payments made to the poor via tax credits, like the Earned Income Tax Credit. And what about the stimulus payments made to individuals in 2020 under the Coronavirus Aid, Relief, and Economic Security Act (CARES Act) and the Coronavirus Response and Relief Supplemental Appropriations Act (CRRSA Act)? They aren’t included in the poverty rate calculations either, because they were implemented as tax credits.

There’s one really good reason not to adjust the poverty line for non-cash and tax credit support, which is that it’s always been done this way. Thus, continuing to do it in the same way allows poverty rates from one year to be compared easily to rates from the past–if what was included in the poverty rate changed each year, then such comparisons would be harder.

However, there are also obvious reasons to use a more inclusive measure of income to get a clearer picture of poverty. Thus, about a decade ago the Census Bureau developed the “Supplemental Poverty Measure,” and along with the poverty rate report, it also published The Supplemental Poverty Measure: 2020, by Liana Fox and Kalee Burns (September 2021). As this figure from the report shows, the “supplemental poverty rate” fell even though the official poverty rate rose.

The difference arises from what is included in the two measures. As the report notes:

Income used for estimating the official poverty measure includes cash benefits from the government (e.g., Social Security, unemployment insurance benefits, public assistance benefits, and workers’ compensation benefits), but does not take into account taxes or noncash benefits aimed at improving the economic situation of the population. The SPM [supplemental poverty measure] incorporates all of these elements, adding cash benefits, noncash transfers, and stimulus payments, while subtracting necessary expenses such as taxes, medical expenses, and expenses related to work.
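
To make the contrast concrete, here is a minimal sketch of the two resource definitions described in that quotation. The field names, dollar amounts, and the single shared threshold are illustrative assumptions on my part–in practice the SPM also uses its own thresholds, which differ from the official ones.

```python
# A minimal sketch contrasting the official and SPM resource definitions,
# following the Census description quoted above. All names, amounts, and
# the single shared threshold are illustrative assumptions; the actual SPM
# also uses its own thresholds, which differ from the official ones.

def official_resources(h):
    """Official measure: pretax money income, cash benefits included."""
    return h["earnings"] + h["cash_benefits"]

def spm_resources(h):
    """SPM: add noncash transfers, tax credits, and stimulus payments;
    subtract taxes and necessary expenses (medical, work-related)."""
    return (official_resources(h)
            + h["noncash_transfers"] + h["tax_credits"] + h["stimulus"]
            - h["taxes"] - h["medical_expenses"] - h["work_expenses"])

# The same household can be poor on one measure but not the other:
household = {
    "earnings": 9_000, "cash_benefits": 3_000,
    "noncash_transfers": 2_500, "tax_credits": 2_000, "stimulus": 3_200,
    "taxes": 500, "medical_expenses": 800, "work_expenses": 400,
}
threshold = 13_000  # illustrative poverty line for this household
print(official_resources(household) < threshold)  # True: poor, officially
print(spm_resources(household) < threshold)       # False: not poor under SPM
```

That mechanical difference is exactly why the two headline numbers could move in opposite directions in 2020: the stimulus payments and expanded benefits show up only in the SPM arithmetic.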

According to this report, the economic impact/stimulus payments reduced the number of people who would otherwise have fallen below the poverty line by 11.7 million. At least by this measure, the poor as a group did not suffer disproportionate economic losses in 2020 during the pandemic recession and its aftermath.