Some Economics of Stablecoins

A “stablecoin” is a kind of cryptocurrency. But unlike (say) Bitcoin, the value of a stablecoin is pegged to a specific currency, often the US dollar, or in theory to some combination of other assets. In other words, no one should be buying a stablecoin because they expect its value to rise! By its nature, it can only be used as a store of value or as a means of payment.

Stablecoin holdings quintupled from October 2020 to October 2021, reaching $127 billion. Thus, the President’s Working Group on Financial Markets (PWG), joined by the Federal Deposit Insurance Corporation (FDIC) and the Office of the Comptroller of the Currency (OCC), released a “Report on Stablecoins” in November 2021. Here’s a figure showing the rise in stablecoins, with Tether and USD Coin leading the way.

Why buy stablecoins at all? Imagine that you are someone who wants to buy and sell another blockchain-based cryptocurrency like Bitcoin, or wants to become involved in what is known as “decentralized finance” or DeFi–which the report defines as “a variety of financial products, services, activities, and arrangements supported by smart contract-enabled distributed ledger technology.” In short, you want to live in a blockchain world and avoid dealing in regular currencies like the US dollar, but you also want to avoid the fluctuations in value of other cryptocurrencies like Bitcoin. Thus, you turn to a blockchain-based stablecoin instead. As the report notes: “At the time of publication of this report, stablecoins are predominantly used in the United States to facilitate trading, lending, and borrowing of other digital assets. For example, stablecoins allow market participants to engage in speculative digital asset trading and to move easily between digital asset platforms and applications, reducing the need for fiat currencies and traditional financial institutions.”

How much should federal financial regulators be worried about stablecoins? As the report points out, there are a number of potential risks. What if the stablecoin is not actually backed with other assets in a way that lets it hold its value? What if payment via stablecoin doesn’t work for reasons ranging from systemic breakdown to honest mistake to fraud?

Because a group of government regulators are writing this report, it is perhaps unsurprising that they recommend a sweeping new set of federal laws to govern stablecoins, with a recommendation that “Congress act promptly to enact legislation to ensure that payment stablecoins and payment stablecoin arrangements are subject to a federal prudential framework on a consistent and comprehensive basis.” This would include steps like: treating all stablecoin companies as insured depository institutions, with standard oversight rules similar to those for banks; regulations for the “digital wallet” companies where people hold their stablecoins, for security and financial stability; standards to ensure smooth interoperability between stablecoin platforms; and rules about ownership or affiliation of stablecoin companies with other commercial firms.

But before we try to sweep stablecoins and all other cryptocurrencies under the warm blankets of federal financial regulation, I find myself tempted by more of a “buyer beware” approach. The whole idea of blockchain-based payments is still at an experimental stage. Maybe we should generally let this experimentation play out awhile longer. After all, a government regulatory apparatus offers reassurances that will reduce the risks of stablecoins and other blockchain-related assets, and in this way will tend to encourage use of such assets. It’s not clear to me that it should be an official government policy goal to encourage large chunks of the financial system to migrate to blockchain-related systems.

Right now, in the context of the US financial system as a whole, $127 billion in stablecoins is not a lot. The report notes:

Unlike most stablecoins, the traditional retail non-cash payments systems—that is, check, automated clearing house (ACH), and credit, debit, or prepaid card transactions—all rely on financial institutions for one or more parts of this process, and each financial institution maintains its own ledger of transactions that is compared to ledgers held at other institutions and intermediaries. Together, these systems process over 600 million transactions per day. In 2018, the number of non-cash payments by consumers and businesses reached 174.2 billion, and the value of these payments totaled $97.04 trillion. Risk of fraud or instances of error are governed by state and federal laws …

In addition, if stablecoins or other cryptocurrencies are being used for money-laundering or other illegal transactions, government regulators and law enforcement have shown an ability to break through that anonymity when they have a strong reason to do so. The US Treasury oversees something called the Financial Crimes Enforcement Network, or FinCEN. The stablecoin report notes in its acronym-heavy way:

In the United States, most stablecoins are considered “convertible virtual currency” (CVC) and treated as “value that substitutes for currency” under FinCEN’s regulations. All CVC financial service providers engaged in money transmission, which can include stablecoin administrators and other participants in stablecoin arrangements, must register as money services businesses (MSBs) with FinCEN. As such, they must comply with FinCEN’s regulations, issued pursuant to authority under the BSA [Bank Secrecy Act], which require that MSBs maintain AML [anti money-laundering] programs, report cash transactions of $10,000 or more, file suspicious activity reports (SARs) on certain suspected illegal
activity, and comply with various other obligations. Current BSA regulations require the transfer of certain specific information well beyond what can be inferred from the blockchain resulting in non-compliance. While the Office of Foreign Assets Control (OFAC) has provided guidance on how the virtual currency industry can build a risk-based sanctions compliance program that includes internal controls like transaction screening and know your customer procedures, there may be some instances where U.S. sanctions compliance requirements (i.e., rejecting transactions) could be difficult to comply with under blockchain protocols.

For now, this kind of financial regulation for stablecoins seems like plenty to me. If you are worried about the other risks of stablecoins and other cryptocurrencies, then you should be figuring out other ways to reduce those risks, not relying on federal regulation to rescue you.

The Slowdown in Agricultural Productivity Growth

Agricultural productivity growth is of central importance. In many of the lowest-income parts of the world, a majority of the local population is involved in subsistence farming. A standard pattern of economic growth is that people move away from agriculture to manufacturing and services, and move away from rural areas to urban areas. In this process, agriculture itself shifts from a focus on food to outputs that serve as sources of fiber, energy, and other industrial inputs, and items like cut flowers. Moreover, current predictions are that global population will peak a little short of 10 billion people in the 2060s, and then drop off to less than 9 billion by the end of the 21st century.

It’s also worth remembering that higher agricultural productivity can mean more output with less strain on resources of soil and water–and potentially with reduced use of pesticides as well.

Given the breakthroughs in understanding the genetics of plants and how to look after them, along with the gradual spread of these insights around the world, one might expect growth in agricultural productivity to be steady, or even robust. But when Keith Fuglie, Jeremy Jelliffe, and Stephen Morgan of the Economic Research Service at the US Department of Agriculture looked at the data, they found otherwise: “Slowing Productivity Reduces Growth in Global Agricultural Output” (Amber Waves, December 28, 2021).

To set the stage, here’s a measure of growth in global agricultural output over the decades. The breakdown shows that agricultural output can rise for a number of reasons: more land, more irrigation, more inputs like fertilizer. The gains in output that can’t be attributed to these other factors are a measure of productivity gains. Thus, you can see that while gains in agricultural output back in the 1960s and 1970s were largely due to higher inputs, there has been a shift over time toward a greater importance of productivity growth. You can also see that the green bar of productivity growth is smaller in the most recent decade.

A stacked bar chart showing the contributions of these factors to agricultural output from 1961 to 2019: Total factor productivity, increased input use, irrigation expansion, and land expansion.
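The decomposition behind a chart like this is standard growth accounting: output growth is split into the contributions of land expansion, irrigation, and other input use, with total factor productivity (TFP) measured as the residual. Here is a minimal sketch of that calculation; the numbers are purely illustrative, not the USDA figures.

```python
# Growth-accounting sketch: decompose output growth into input
# contributions plus a total factor productivity (TFP) residual.
# All figures below are illustrative, not USDA data.

def tfp_residual(output_growth, contributions):
    """TFP growth = output growth minus the summed contributions
    of measured inputs (land, irrigation, fertilizer, etc.),
    all in percentage points per year."""
    return output_growth - sum(contributions.values())

# Hypothetical decade in which output grew 2.5% per year
decade = {
    "land_expansion": 0.3,   # percentage points per year
    "irrigation": 0.2,
    "input_use": 0.9,        # fertilizer, machinery, labor
}
tfp = tfp_residual(2.5, decade)
print(f"TFP contribution: {tfp:.1f} pp per year")  # residual of 1.1 pp
```

Because TFP is a residual, anything that slows measured output growth without a matching fall in input use shows up as slower productivity growth, which is exactly the shrinking green bar in the most recent decade.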

What patterns emerge when looking into this data more closely? Most of the decline in agricultural productivity growth, it turns out, is in the lower-income developing economies, while agricultural productivity growth in higher-income developed countries has stayed strong. The authors write: “In terms of productivity growth, Latin America and the Caribbean experienced the largest slowdown, followed by Asia. In Sub-Saharan Africa, agricultural productivity growth was already low in the 2000s and turned slightly negative in the 2010s.”

Side-by-side stacked bar charts showing contributors to agricultural output in developed countries and developing countries from the 1960s to the 2010s.

Here’s the overall pattern that emerges. The black line shows population, projected forward to 2040. The blue line shows agricultural production rising faster than population. The red line shows that cropland has risen, but not a lot. The green line shows a general (and occasionally interrupted) pattern of falling agricultural prices in the late 20th century, with a price rise starting around 2000.

Line chart comparing trends in agricultural prices, agricultural production, cropland, and world population from 1900 through 2040.

There are theories about why agricultural productivity growth has been faring poorly in the developing countries that seem as if they should have the most room for gains. Perhaps the agricultural sector in those countries is less flexible and responsive to shifts in patterns of weather or crop disease. Some of the agricultural productivity gains in higher-income countries are based on customizing production using information and feedback from satellite, internet, and cellular infrastructure, which is less available in developing economies. Whatever the reason, the need to find ways of improving agricultural productivity in lower-income countries is vital and pressing.

Sectoral Training: Does it Work?

The idea of “sectoral training” is that everyone needs skills for a well-paid career, but not everyone needs or wants to acquire those skills by getting a four-year undergraduate degree. Might training focused on becoming employable in a specific high-demand sector of the economy, preferably with an employer standing by and ready to hire, work better for some young adults? The US Department of Education runs the What Works Clearinghouse, which collects studies on various programs and writes up an overview of the results. In November, the WWC published evaluations of two sectoral training programs, Project Quest and Year Up. Neither set of findings is very encouraging–but there is some controversy over whether the WWC is focusing on the proper outcomes.

For background, Project Quest started in San Antonio, Texas, in 1992, and has since spread to some other locations in Texas and Arizona. The program accepts those who are at least 18, and who have a high school degree (or equivalent). The US Department of Education looks at three studies of Project Quest, and describes the intervention in this way:

All three interventions target their efforts on recruiting individuals who are unemployed, underemployed, meet federal poverty guidelines, or are on public assistance. … Project QUEST is a community-based organization that partners with colleges, professional
training institutes, and employers. Participants enroll full-time in an occupational training program. They attend weekly group meetings led by a counselor that focus on life skills, time management, study skills, test-taking techniques, critical thinking, conflict resolution, and workforce readiness skills. Participants who need to improve their basic reading and math skills can complete basic skills coursework prior to enrolling in the occupational program. … Participants typically complete their occupational program within one to three years, depending on the length of the program.

What do the results of the three studies show, according to the US Department of Education?

The evidence indicates that implementing Project QUEST:
• is likely to increase industry-recognized credential, certificate, or license completion
• may increase credit accumulation
• may result in little to no change in short-term employment, short-term earnings, medium-term employment, medium-term earnings, and long-term earnings
• may decrease postsecondary degree attainment

Obviously, this is not especially encouraging. What about the other program, Year Up? The US Department of Education describes the program this way:

Year Up is an occupational and technical education intervention that targets high school graduates to provide them with six months of training in the information technology and financial service sectors followed by a six-month internship and supports to ensure that participants have strong connections to employment. … The evidence indicates that implementing Year Up:
• is likely to increase short-term earnings
• may result in little to no change in short-term employment
• may result in little to no change in medium-term earnings
• may result in little to no change in industry-recognized credential, certificate, or license completion
• may result in little to no change in medium-term employment

Of course, reports like these don’t prove that some other kind of sectoral training program might not work. But they do suggest that some of the more prominent examples of sectoral training aren’t performing as well as hoped. However, Harry J. Holzer believes that the official reports are too gloomy. He has written “Do sectoral training programs work? What the evidence on Project Quest and Year Up really shows” (Brookings Institution, January 12, 2022). As background, Holzer is an advocate of sectoral training programs (for example, see his “After COVID-19: Building a More Coherent and Effective Workforce Development System in the United States,” Hamilton Project, February 2021). In this essay, he writes:

I argue that the best available evidence still suggests that Project Quest and Year Up, along with other sector-based programs, remain among our most successful education and training efforts for disadvantaged US workers. While major challenges remain in scaling such programs and limiting their cost, the evidence to date of their effectiveness remains strong, and they should continue to be a major pillar of workforce policy going forward.

Holzer points to other reviews of sectoral training programs that reach much more positive conclusions. For example, Lawrence F. Katz, Jonathan Roth, Richard Hendra, and Kelsey Schaberg have written “Why Do Sectoral Employment Programs Work? Lessons from WorkAdvance” (National Bureau of Economic Research Working Paper, December 2020). They discuss four sectoral training programs with randomized control trial (RCT) evaluations. They write:

We first reexamine the evidence on the impacts of sector-focused programs on earnings from four RCT-based major evaluations – the SEIS, WorkAdvance, Project Quest, and Year-Up – of eight different programs/providers (with one provider Per Scholas appearing in two different evaluations). Programs are geared toward opportunity youth and young adults (Year Up) or broader groups of low-income (or disadvantaged) adults. Participants are disproportionately drawn from minority groups (Blacks and Hispanics), low-income households, and individuals without a college degree. The sector-focused programs evaluated in these four RCTs generate substantial earnings gains from 14 to 39 percent the year or so following training completion. And all three evaluations with available longer-term follow-ups (WorkAdvance for six years after random assignment, Project Quest for nine years, and Year Up for three years) show substantial persistence of the early earnings gains with little evidence of the fade out of treatment impacts found in many evaluations of past employment programs. Sector-focused programs appear to generate persistent earnings gains by moving participants into jobs with higher hourly wages rather than mainly by increasing employment rates.

Why the difference in findings? Holzer suggests several reasons:

  1. The US Department of Education review process for publishing its evaluations is sluggish, and so it leaves out at least three recent positive studies of Project Quest and Year Up. These studies also aren’t included in the Katz et al. (2020) review. They are: Roder, Anne and Mark Elliott. 2021. Eleven Year Gains: Project QUEST’s Investment Continues to Pay Dividends. New York: Economic Mobility Corporation; Rolston, Howard et al. 2021. Valley Initiative for Development and Advancement: Three-Year Impact Report. OPRE Report No. 2021-96, US Department of Health and Human Services; and Fein, David et al. 2021. Still Bridging the Opportunity Divide for Low-Income Youth: Year Up’s Longer-Term Impacts. OPRE Report.
  2. The recent studies also have longer follow-up periods, and leaving out these studies means that the WWC summaries don’t include positive long-run effects.
  3. With Project Quest, the WWC summary includes some spin-off programs that are similar, but not the same.
  4. The WWC apparently has a strict rule in its evaluations: either an effect is statistically significant at the 5% level, or it is treated as worthless. Thus, an effect that is significant at, say, the 6% or 7% level is not viewed as a finding that might be worth more investigation with a larger sample size, but as a purely negative result. There are controversies in economics and statistics over how and when to use a 5% level of significance, but both sides of the controversy agree that this kind of black-and-white use of a rigid standard is not sensible.
  5. The WWC also has strict rules about what it counts as evidence. For example, say that the WWC wants to evaluate effects after 3, 5, and 7 years, but a study evaluates the evidence after 4, 6, and 8 years. Holzer says that the WWC would then ignore that study, because it does not “align with WWC’s preferred measures.”
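The problem with a rigid 5% cutoff (point 4 above) is easy to see concretely: two studies with nearly identical evidence can land on opposite sides of the line and be summarized as opposite results. A small sketch, using hypothetical test statistics rather than any figures from the WWC studies:

```python
# Illustration of why a rigid 5% significance cutoff is questionable:
# two hypothetical studies with nearly identical test statistics get
# categorically different verdicts. Numbers are illustrative only.
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

for label, z in [("Study A", 1.97), ("Study B", 1.95)]:
    p = two_sided_p(z)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{label}: z = {z:.2f}, p = {p:.3f} -> {verdict}")
# Study A (p ~ 0.049) clears the bar; Study B (p ~ 0.051) does not,
# even though the underlying evidence is essentially the same.
```

A summary rule that records Study A as a success and Study B as a pure null discards the near-identical strength of the two findings, which is the heart of Holzer’s complaint.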

At a minimum, concerns like these suggest that sectoral training should not be dismissed based on the US Department of Education website evaluations. These programs have a range of different procedures and are focused on different groups, and there remains much to learn about best practices. But there is a strong case for continuing to expand and study such programs.

Public Opinion on Capitalism and Socialism

The Gallup Poll asks Americans about their attitudes toward capitalism and socialism on a semi-regular basis. Some recent results are reported by Jeffrey M. Jones in “Socialism, Capitalism Ratings in U.S. Unchanged” (December 26, 2021, from a poll carried out in October 2021).

This figure shows the share of Americans who have a “positive image” of capitalism or of socialism. The answers over the last decade are, perhaps surprisingly, quite stable.

To put those percentages in context, here’s a ranking of capitalism and socialism relative to some other terms.

Everyone loves small business. If you are a supporter of “capitalism,” it may be a good public relations move to refer instead to “free enterprise,” and thus perhaps to sidestep possibly being associated with “big business.” However, big business generally garners more positive support than either socialism or the federal government. The rankings of these other terms have remained the same in the last decade, too, although levels of positivity associated with both big business and government have declined since about 2012.

Finally, you might think that people view capitalism and socialism as two sides of a coin: that is, you favor one or the other. But this isn’t necessarily true, as Frank Newport points out in “Deconstructing Americans’ Views of Socialism, Capitalism” (December 17, 2021). As the figure shows, about one-fifth of Americans have a favorable image of both capitalism and socialism, and another one-fifth have an unfavorable image of both.


Central Bank Digital Currency: The Fed Speaks

The concept of a “central bank digital currency” is open to all sorts of interpretation, much of it unrealistic. To some, it sounds as if the Federal Reserve is going into competition with Bitcoin, or that the Fed is going to set up accounts for individuals. These steps aren’t going to happen. The Federal Reserve takes a first step at setting expectations, without actually committing to any policy choices, in a discussion paper called “Money and Payments: The U.S. Dollar in the Age of Digital Transformation” (January 2022; if you want to submit feedback, you can go to the link). Near the start of the report, the Fed writes:

For the purpose of this paper, a CBDC is defined as a digital liability of a central bank that is widely available to the general public. In this respect, it is analogous to a digital form of paper money. The paper has been designed to foster a broad and transparent public dialogue about CBDCs in general, and about the potential benefits and risks of a U.S. CBDC. The paper is not intended to advance any specific policy outcome, nor is it intended
to signal that the Federal Reserve will make any imminent decisions about the appropriateness of issuing a U.S. CBDC.

The definition of a CBDC hints at the issues involved. How exactly would a “digital form of paper money” work? Perhaps the key point now is that when you get paid or buy something or make a payment with a check or credit card or e-payment, the payments commonly flow from one bank to another. But when you use currency, no individual bank guarantees your payment. Thus, a CBDC opens up the idea of a mechanism for payments that happens outside the banking system, and where the underlying value is backed by the Fed, rather than a bank.

In thinking about this topic, I found some meditations by Raphael Auer, Jon Frost, Michael Lee, Antoine Martin, and Neha Narula of the New York Fed to be useful. In a blog post called “Why Central Bank Digital Currencies?” (December 2021), they point out that when it comes to facilitating payments, there are four goals:

Payment costs. It costs money to pay money. Costs of payments have generally fallen over time, but surprisingly, not by much – credit card networks still routinely charge merchants service fees of 3 percent, and card revenues make up over 1 percent of GDP in the United States and much of Latin America. …

Financial inclusion. Universal access to payment services is a long-standing policy goal … Inclusion is a major societal concern in both developing economies and in some developed economies with a large unbanked population (the United States and the Euro area, for example). …

Consumer privacy. … Digital payments, including bank accounts, payment cards, and digital wallets, create a data trail. Consumers’ private information is aggregated and distributed for monetization. Recent research suggests that there are public good aspects to privacy; individuals may share too much data, as they do not bear the full cost of failing to protect their privacy when choosing their payment instrument. …

Promoting innovation. New, more convenient and secure payment methods not only benefit consumers but can also spur innovative business opportunities. New technologies also offer an opportunity to potentially automate certain financial practices through “smart contracts,” improving efficiency. 

The recent Fed report also lays out problems along these dimensions. As the Fed writes: “A crucial test for a potential CBDC is whether it would prove superior to other methods that
might address issues of concern outlined in this paper. … The Federal Reserve will continue to explore a wide range of design options for a CBDC. While no decisions have been made on whether to pursue a CBDC, analysis to date suggests that a potential U.S. CBDC, if one were created, would best serve the needs of the United States by being privacy-protected, intermediated, widely transferable, and identity-verified.”

Even in boilerplate writing like this, you can see the issues bubbling under the surface. A CBDC is supposed to be both “privacy-protected” and at the same time “identity-verified.” Hmm.

A CBDC is likely to be “intermediated,” which refers to the idea that it would operate through outside financial institutions, which would not necessarily need to be banks. People or companies might have a “digital wallet” at these companies to hold their central bank digital currency. But once you have added additional outside financial institutions, with their own costs and profit margins, are payment costs actually likely to fall? And are “unbanked” people who currently lack a connection to the financial system likely to build such a connection with “digital wallets”?

When it comes to financial innovation creating different kinds of payments, it seems to me that there are plenty of companies giving this a try already, and it’s not clear to me that a CBDC would either increase innovation or cut costs (and thus drive down prices) for this industry.

At present, the case for a CBDC seems to rest on a lot of “could potentially” statements. The Fed report writes:

A CBDC could potentially offer a range of benefits. For example, it could provide households and businesses a convenient, electronic form of central bank money, with the safety and liquidity that would entail; give entrepreneurs a platform on which to create new financial products and services; support faster and cheaper payments (including cross-border payments); and expand consumer access to the financial system.

On the other side, the practical issues here are substantial. If the Fed decides to regulate the CBDC-based payment networks as if they were banks, then any advantages of such a mechanism largely disappear. On the other side, if these payment networks are regulated differently than banks, then what risks will be accumulating in the CBDC payment system? For example, how much would concerns over cybersecurity or resilience differ across the two systems? And if the monetary value of CBDC digital wallets can be quickly shifted back and forth, into and out of regular bank accounts, how might these risks spread across the financial system as a whole? My bias at the moment would be to focus on other methods of reducing payment costs, improving privacy, and helping the unbanked, and to let the idea of a CBDC simmer on the back burner awhile longer.

For those interested in more detail on the subject, Dirk Niepelt has edited a book of 19 short and readable essays on the subject, Central Bank Digital Currency: Considerations, Projects, Outlook (CEPR, November 2021). Some of the essays focus on overall conceptual questions; others provide some detail on the CBDC experiments that some central banks around the world are already carrying out.

Who Benefits from Canceling Student Loans?

Student loans are packaged together and then sold as financial securities, just like auto loans and credit card debt and home mortgages. These asset-backed securities pay a return to their investors as the debts are repaid. If outstanding student loans were to be cancelled, then the investors in these securities would probably be bailed out by the federal government–which was closely involved in securitizing these loans in the first place, and does not want to get a reputation for reneging on its debts. If such a debt cancellation happened, how would the benefits be distributed? Adam Looney tackles this question in “Student Loan Forgiveness Is Regressive Whether Measured by Income, Education, or Wealth: Why Only Targeted Debt Relief Policies Can Reduce Injustices in Student Loans” (Brookings Institution, Hutchins Center Working Paper #75, January 2022).

One basic insight of the paper is that those who have college degrees will tend to have, on average, more income, education and wealth over their lifetimes than those who do not. An additional insight is that many of the biggest student loan debts are not incurred by someone trying to pick up some career training at the local community college, but rather by those borrowing to finance advanced postgraduate degrees in areas like medicine and law. Thus, Looney writes:

There is no doubt that we need better policies to address the crisis in student lending and the inequities across race and social class that result because of America’s postsecondary education system. But the reason the outcomes are so unfair is mostly the result of disparities in who goes to college in the first place, the institutions and programs students attend, and how the market values their degrees after leaving school. Ex-post solutions, like widespread loan forgiveness for those who have already gone to college (or free college tuition for future students) make inequities worse, not better. That’s clearly true when assessing the effect of loan forgiveness: the beneficiaries are disproportionately higher income, from more affluent backgrounds, and are better educated.

For example, here’s a calculation by Looney of who holds student debt, broken down by lifetime wealth and by race. By far the biggest share of student loans is held by white non-Hispanic borrowers who over their lifetimes will be toward the top of the wealth distribution.

Looney readily acknowledges that student loans can be a substantial burden for certain borrowers. He writes:

There is no doubt that one of the most disastrous consequences of our student lending system is its punishing effects on Black borrowers. Black students are more likely to borrow than other students. They graduate with more debt, and after college almost half of Black borrowers will eventually default within 12 years of enrollment. Whether measured at face value or adjusted for repayment rates, the average Black household appears to owe more in student debt than the average white household (despite the fact that college-going and completion rates are significantly lower among Black than white Americans). One reason for the disparity is financial need. Another is the differences in the institutions and programs attended. But an important reason is also that Black households are less able to repay their loans and are thus more likely to have their balances rise over time. According to estimates by Ben Miller, after 12 years, the average Black borrower had made no progress paying their loan—their balance went up by 13 percent—while the average white borrower had repaid 35 percent of their original balance. These facts certainly constitute a crisis in how federal lending programs serve Black borrowers.
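The mechanics behind a balance that rises 13 percent even as the borrower keeps paying are worth spelling out: whenever payments fall short of accrued interest, the shortfall is added to the principal (negative amortization). A minimal sketch, with hypothetical loan figures rather than data from the studies Looney cites:

```python
# How a student loan balance can rise even while the borrower pays:
# if payments fall short of accrued interest, the balance grows
# (negative amortization). Figures below are illustrative only.

def balance_after(principal, annual_rate, annual_payment, years):
    """Track a loan balance when interest accrues annually and a
    fixed payment is made each year."""
    balance = principal
    for _ in range(years):
        balance += balance * annual_rate  # interest accrues first
        balance -= annual_payment         # then the payment is applied
    return balance

# Hypothetical borrower: $30,000 at 6%, paying $1,500 a year --
# less than the $1,800 of first-year interest.
b = balance_after(30_000, 0.06, 1_500, 12)
print(f"Balance after 12 years: ${b:,.0f}")  # -> $35,061
```

Despite $18,000 in cumulative payments, the balance ends up about 17 percent higher than where it started, which is the same dynamic Looney describes for the average Black borrower.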

But deciding to cancel all student loans–including those taken out by the doctors and lawyers of the future–would be an absurdly expensive and overly broad approach to reducing racial disparities or in general helping lower-income people struggling with student debt. Some targeting is called for.

As one example, Looney suggests expanding Pell grant programs that provide grants, not loans, to low-income students. It is also possible to expand the programs that tie student loan repayment to income. My own sense is that if there is a political imperative for forgiving some student loan debt, it should be focused on debt incurred for undergraduate studies, limited in size, and linked to income: for example, a program for the federal government to pay off up to $10,000 in undergraduate student loan debt for those with the lowest income levels could make a substantial difference to those who (perhaps unwisely) ran up loans they could not readily repay, without being a bailout for those who are going on to well-paid careers.

Some Economics for Martin Luther King Jr. Day

On November 2, 1983, President Ronald Reagan signed a law establishing a federal holiday for the birthday of Martin Luther King Jr., to be celebrated each year on the third Monday in January. As the legislation that passed Congress said: “such holiday should serve as a time for Americans to reflect on the principles of racial equality and nonviolent social change espoused by Martin Luther King, Jr.” Of course, the case for racial equality stands fundamentally upon principles of justice, with economics playing only a supporting role. But here are a few economics-related thoughts for the day clipped from posts in the previous year at this blog, with more detail and commentary at the links.

1) “Some Economics of Black America” (June 18, 2021)

The McKinsey Global Institute has published “The economic state of Black America: What is and what could be” (June 2021). Much of the focus of the report is on pointing out gaps in various economic statistics, including income. While such comparisons are not new, they do not lose their power to shock. For example:

Today the median annual wage for Black workers is approximately 30 percent, or $10,000, lower than that of white workers … We estimate a $220 billion annual disparity between Black wages today and what they would be in a scenario of full parity, with Black representation matching the Black share of the population across occupations and the elimination of racial pay gaps within occupational categories. Achieving this scenario would boost total Black wages by 30 percent … The racial wage disparity is the product of both representational imbalances and pay gaps within occupational categories—and it is a surprisingly concentrated phenomenon.

2) “The Broken Promises of the Freedman’s Savings Bank: 1865-1874” (January 18, 2021)

The Freedman’s Savings Bank lasted from 1865 to 1874. It was founded by the US government to provide financial services to former slaves: in particular, there was concern that if Black veterans of the Union army did not have bank accounts, they would not be able to receive their pay. In terms of setting up branches and receiving deposits, the bank was a considerable success. However, the management of the bank ranged from uninvolved to corrupt, and that mismanagement, together with the Panic of 1873, proved lethal for the bank: tens of thousands of depositors lost most of their money.

Luke C.D. Stein and Constantine Yannelis offer some recent research on lessons from the grim experience and its long-lasting effects on the trust of African-Americans in financial institutions in “Financial Inclusion, Human Capital, and Wealth Accumulation: Evidence from the Freedman’s Savings Bank” (Review of Financial Studies, 33:11, November 2020, pp. 5333–5377, subscription required). Also, Áine Doris offers a readable overview in the Chicago Booth Review (August 10, 2020).

3) Symposium on Criminal Justice, Fall 2021 Journal of Economic Perspectives

The journal where I work as Managing Editor published a five-paper “Symposium on Criminal Justice” in the Fall 2021 issue. The topics include policing, bail decisions, algorithms, and prison conditions. The authors each have their own perspectives, of course, but in different ways they are all seeking to come to grips with the apparent racial disparities in the criminal justice system.

— “The Economics of Policing and Public Safety,” by Emily Owens and Bocar Ba (pp. 3-28)

— “Next-Generation Policing Research: Three Propositions,” by Monica C. Bell (pp. 29-48)

— “The US Pretrial System: Balancing Individual Rights and Public Interests,” by Will Dobbie and Crystal S. Yang (pp. 49-70)

— “Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System” by Jens Ludwig and Sendhil Mullainathan (pp. 71-96)

— “Inside the Box: Safety, Health, and Isolation in Prison,” by Bruce Western (pp. 97-122)

4) “Interview with Rucker Johnson: Supporting Children” (October 19, 2021)

Douglas Clement interviews Rucker Johnson about his research in the Fall 2021 issue of For All, published by the Opportunity & Growth Institute at the Minneapolis Fed (“Rucker Johnson interview: Powering potential,” subtitled “Rucker Johnson on school finance reform, quality pre-K, and integration”). One of the themes of the interview is the potential for gains to both equity and efficiency from supporting socioeconomically disadvantaged children along a variety of dimensions. Johnson notes:

On disparities in spending and opportunity across K-12 schools:

Today, about 75 percent of per pupil spending disparities are between states (rather than between districts within states). And we’ve witnessed that inequality in school spending has risen since 2000. After three decades of narrowing—the ’70s, ’80s, and ’90s—primarily due to the state school finance reforms emphasized in my work with Kirabo Jackson and Claudia Persico, there has been a significant rise in inequality, especially sharply following the Great Recession.

What I want to highlight here is the current disparities nationwide in school resources. School districts with the most students of color have about 15 percent less per pupil funding from state and local sources than predominantly White, affluent areas, despite having much greater need due to higher proportions of poverty, special needs, and English language learners.

Teacher quality is often the missing link that people don’t consider directly when thinking about school resource inequities. For example, schools with a high level of Black and Latino students have almost two times as many first-year teachers as schools with low minority enrollment. And minority students are more likely to be taught by inexperienced teachers than experienced ones in 33 states across the country. … Part of it is that the invisible lines of school district boundaries are powerful tools of segregation. It’s a way of segregating and hoarding access to opportunity. And when I say access to opportunity, I mean quality of teachers, I mean curricular opportunity. For example, only a third of public schools with high Black and Latino enrollment offer calculus. Courses like that are gateways to majoring in STEM in college and having a STEM career. Or simply the fact that less than 30 percent of students in gifted and talented programs are Black or Latino.

5) “The Confidence of Americans in Institutions” (July 24, 2021)

In early July, the Gallup Poll carried out an annual survey in which people are asked about their confidence in various institutions. Here are some of the results, as reported at the Gallup website by Jeffrey M. Jones, “In U.S., Black Confidence in Police Recovers From 2020 Low” (July 14, 2021) and by Megan Brenan, “Americans’ Confidence in Major U.S. Institutions Dips” (July 14, 2021).


6) “The Problem of Automated Screening of Job Applicants” (September 24, 2021)

Employers need to whittle the flood of online job applicants down to a manageable number, so they turn to automated tools for screening the job applications. In “Hiring as Exploration” (NBER Working Paper 27736, August 2020), Danielle Li, Lindsey R. Raymond, and Peter Bergman consider a “contextual bandit” approach. The intuition here, at least as I learned it, refers to the idea of a “one-armed bandit” as a synonym for a slot machine. Say that you are confronted with the problem of which slot machine to play in a large casino, given that some slot machines will pay off better than others. On one side, you want to exploit a slot machine with high payoffs. On the other side, even if you find a slot machine which seems to have pretty good payoffs, it can be a useful strategy to explore a little and see if perhaps some unexpected slot machine might pay as well or better. A contextual bandit model is built on finding the appropriate balance in this exploit/explore dynamic.

From this perspective, the problem with a lot of automated methods for screening job applications is that they do too little exploring. In this spirit, the authors create several algorithms for screening job applicants, and they define an applicant’s “hiring potential” as the likelihood that the person will be hired, given that they are interviewed. The algorithms all use background information “on an applicant’s demographics (race, gender, and ethnicity), education (institution and degree), and work history (prior firms).” The key difference is that some of the algorithms just produce a point score for who should be interviewed, while the contextual bandit algorithm produces both a point score and a standard deviation around that point score. Then, and here is the key point, the contextual bandit algorithm ranks the applicants according to the upper bound of the confidence interval associated with the standard deviation. Thus, an applicant with a lower score but higher uncertainty could easily be ranked ahead of an applicant with a higher score but lower uncertainty. Again, the idea is to get more exploration into the job search and to look for matches that might be exceptionally good, even at the risk of interviewing some real duds. They apply their algorithms to actual job applicants for professional services jobs at a Fortune 500 firm.
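To make the ranking rule concrete, here is a minimal sketch. The scores, standard deviations, and applicant labels are invented for illustration (they are not from the Li, Raymond, and Bergman paper), but the mechanism of sorting by the upper bound of a confidence interval, rather than by the point score, is the one described above.

```python
# Sketch of the exploit/explore ranking behind a contextual bandit screener.
# All numbers and names below are hypothetical, purely for illustration.

# Each applicant has a predicted "hiring potential" score and the model's
# uncertainty (standard deviation) around that score.
applicants = [
    {"name": "A", "score": 0.70, "sd": 0.02},  # high score, low uncertainty
    {"name": "B", "score": 0.55, "sd": 0.20},  # lower score, high uncertainty
    {"name": "C", "score": 0.60, "sd": 0.05},
]

Z = 1.96  # confidence-interval width; an arbitrary choice for this sketch

def upper_bound(a):
    # Rank by the upper bound of the confidence interval, not the point score.
    return a["score"] + Z * a["sd"]

ranked = sorted(applicants, key=upper_bound, reverse=True)
for a in ranked:
    print(a["name"], round(upper_bound(a), 3))
```

Applicant B, with the lowest point score but the most uncertainty, ends up ranked first: the algorithm “explores” candidates about whom the model knows least, at the risk of some bad interviews.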

They find that several of the algorithms would have the effect of reducing the share of selected applicants who are Black or Hispanic, while the contextual bandit approach looking for employees who are potentially outstanding “would more than double the share of selected applicants who are Black or Hispanic, from 10% to 23%.” They also find that while the previous approach at this firm was leading to a situation where 10% of those interviewed were actually offered and accepted a job, the contextual bandit approach led to an applicant pool where 25% of those who were interviewed were offered and accepted a job.

The US Economy as 2022 Begins

The Federal Reserve Bank of New York puts out a monthly publication called “U.S. Economy in a Snapshot,” a compilation of figures and short notes about the most recently available major macroeconomic statistics. As we take a deep breath and head into 2022, it seemed a useful time to pass along some of these figures as a way of showing the path of the US economy since the two-month pandemic recession of March and April 2020.

Here’s the path of GDP growth. It has clearly bounced back from the worst of the recession, but it still remains about 2% below the trend-line from before the recession occurred.

Part of the reason why GDP has not rebounded more fully lies in what is being called the “Great Resignation”–that is, people who left the workforce during the pandemic and have not returned. Just to be clear, to be counted as officially “unemployed” you need to be both out of a job and actively looking for a job. If you are out of a job but not looking, then you are “out of the labor force.” Thus, you can see that while the unemployment rate based on those out of a job and actively looking for work is back down to pre-pandemic levels, the labor force participation rate–which combines those who have jobs and the unemployed who are looking–has not fully rebounded. A smaller share of the population working will typically translate into a smaller GDP. When or if these potential workers return to the workforce will have a big effect on the future evolution of the economy and public policy.
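To see how these two statistics can move apart, here is a small arithmetic sketch with invented round numbers (not actual BLS figures): when jobless people stop looking for work, they drop out of both the numerator and denominator of the unemployment rate, so that rate can look fully “recovered” even as participation falls.

```python
# Illustrative arithmetic with hypothetical round numbers (not BLS data):
# the unemployment rate can return to its old level while labor force
# participation does not.

def rates(employed, unemployed, population):
    # "Unemployed" means jobless AND actively looking;
    # everyone else without a job is "out of the labor force."
    labor_force = employed + unemployed
    unemployment_rate = unemployed / labor_force
    participation_rate = labor_force / population
    return unemployment_rate, participation_rate

POPULATION = 1000  # hypothetical working-age population

# Pre-pandemic: 630 employed, 20 unemployed and looking
u_before, lfpr_before = rates(630, 20, POPULATION)

# Post-pandemic: 20 former workers have left the labor force entirely
u_after, lfpr_after = rates(610, 20, POPULATION)

print(f"before: u = {u_before:.1%}, participation = {lfpr_before:.1%}")
print(f"after:  u = {u_after:.1%}, participation = {lfpr_after:.1%}")
```

In this toy example the unemployment rate barely moves (about 3.1% to 3.2%), while the participation rate falls from 65% to 63%: the “missing” workers are invisible to the unemployment rate.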

Meanwhile, inflation is a worry. The measure of inflation used by the Federal Reserve is called the Personal Consumption Expenditure deflator, which is preferred to the more familiar Consumer Price Index for a variety of technical reasons like broader coverage of consumer spending, although the two numbers move very much in synch. In particular, the Fed focuses on “core” inflation, which means stripping out any effects of energy and food prices. The thinking is that energy and food prices can bounce around a lot for reasons specific to those markets, so if you want to know about inflation spreading over the breadth of the economy, it’s more useful to look at everything else.

What’s interesting here is that inflation in prices of goods is leading the way, as opposed to inflation in prices of services.

During the aftermath of the pandemic recession, a lot of services industries were hindered by the need for greater in-person contact. Thus, while consumer spending on both goods and services has bounced back, the bounceback has been bigger for goods. If inflation can be roughly defined as too much money chasing too few goods, the demand in the economy has been chasing goods with more enthusiasm than it has been chasing services.

The patterns of real investment in the economy are not unexpected, but they are vivid. Business investment in equipment has spiked back to pre-pandemic levels. One suspects that a certain share of this equipment is what was needed for much greater online interaction.

However, business investment in structures has dropped a lot. With vacancies in business real estate apparent everywhere, the reasons to build more have diminished quite a bit.

On the residential side, there has been a boom in new building. One consequence of the aftermath of the pandemic is that a lot of what was formerly classified as “residential” real estate was quickly repurposed as, in effect, “commercial” real estate that was a common workplace. My suspicion is that this change is causing a lot of people to rethink their residential space: if it’s also going to be a workspace for extended periods, then maybe just planning to drop your laptop on the kitchen table is not a sufficient solution.

In the area of international trade, imports have bounced back, while exports have not. I haven’t seen a fully satisfactory breakdown of why this is so, but one reason is related to the resurgence in purchases of goods just mentioned–including goods with an imported component. Also, “imports” is a category that includes foreign tourism: that is, a US tourist buying German-made goods and services while visiting Germany is viewed as “importing” those goods, while a Japanese tourist buying US goods and services while visiting the US is counted as buying US exports. A surge of overseas US travel helped increase US imports, but there has not been a corresponding surge of tourists coming to the US.

Advice for Academic Writing About Data

As the Managing Editor of an economics journal, I’m always intrigued by advice about what goes into writing a good academic paper. Jon Zelner, Kelly Broen, and Ella August take their whack at this piñata in “A guide to backward paper writing for the data sciences” (Patterns, January 2022). As is usual with these kinds of papers, some of the advice is worthy but dull and very basic. But there are also some insights that resonated with me, and I’ll emphasize those here.

From the introduction:

Academic and applied research in data-intensive fields requires the development of a diverse skillset, of which clear writing and communication are among the most important. However, in our experience, the art of scientific communication is often ignored in the formal training of data scientists in diverse fields spanning the life, social, physical, mathematical, and medical sciences. Instead, clear communication is assumed to be learned via osmosis, through the practice of reading the work of others, writing up our own work, and receiving feedback from mentors and colleagues.

It won’t come as a shock to any reader of academic literature that this “learning by osmosis” is at best a partial success in producing clear writing and communication.

A well-crafted data science paper is a pedagogical tool that not only conveys information from author to reader but facilitates the understanding of complex concepts. This works in both directions: The paper-writing process is an opportunity for the writers to learn about and clarify their understanding of the topic in addition to communicating it to someone else. If we can accept the idea of this kind of writing as teaching, we can take a lesson from research and practice in the field of educational development, particularly the backward approach to curriculum design, introduced by Wiggins and McTighe in their book Understanding by Design: “Our lessons, units, and courses should be logically inferred from the [learning outcomes] sought, not derived from the methods, books, and activities with which we are most comfortable. Curriculum should lay out the most effective ways of achieving specific results … the best designs derive backward from the learnings sought.”

This rings true to me in several ways. When you write, you should listen closely to what you are saying, for the sake of understanding yourself. The great author Flannery O’Connor once wrote: “I don’t know so well what I think until I see what I say.” Writing out results should help you to understand yourself. Another part of the task is not just to tell others what you did, but to teach them. (Surely, all academics know the difference between telling and teaching, right?) Finally, the idea that the paper should emerge from the learning outcome that is sought, not from “the methods, books, and activities with which we are most comfortable” seems worth taking to heart. This approach is what the authors refer to as a “backward-design approach” to academic writing.

Under a backward-design approach, the overarching goals of a course are defined first, and then used to motivate and shape everything from the assignments students will complete, to the nature and volume of reading material, to the way class meetings will be used to advance toward these goals. In this way of thinking, a course has a set of standard components—assignments, reading, class time—but the way in which they are devised and arranged is organized around supporting the learning goals of the class. The same approach can be applied to the construction of a research paper: even though most papers have the same sections (introduction, methods, results, discussion) early-career researchers may underestimate the amount of flexibility and room for creativity they have in using these components to achieve their scientific and professional development goals. The backward approach we lay out here is about starting at the end by answering the questions of “What do I want to accomplish with this paper?” and scaffolding each piece to help serve those goals. This is contrasted with the more ad hoc forward approach most of us have learned to live with, in which we begin with the introduction and struggle through to the conclusion with the primary goal of simply finishing the manuscript.

The article goes into the backward-design approach in more detail. Finally, I appreciated these thoughts about figures and tables:

If it can be conveyed visually, do it! Prefer figures over tables and in-text descriptions where you can. …. Reasonably informed readers should be able to get what is going on from looking at your figure and reading the legend, even if they have not read the rest of the paper. This is not a hard-and-fast rule, but if you work toward it you will ensure that the figures convey as much information as possible. … If you must make a results table, keep it small and simple. Big, complex tables are where reader attention goes to die. If information is best conveyed by a table, be sure to include only the most essential information. When a table gets too big, it becomes easy to forget what its purpose is. By keeping it short and cutting out extraneous information, you are better able to keep the focus on your message.

I fully intend to steal the phrase that “big, complex tables are where reader attention goes to die” for future editorial comments.

US Pharmaceuticals: A Bifurcated Market

The US pharmaceutical industry faces two important goals–which conflict with each other. One goal is to provide needed drugs at the lowest possible price. The other is to provide incentives for research and development of new drugs, which requires some form of compensation for the risks of undertaking such efforts. The US patent system allows inventors to have a temporary monopoly on their discovery, so that they can charge higher prices for a time. After the patent expires, it then becomes legal for others to produce generic equivalents. The result is a two-part market for US pharmaceuticals: expensive drugs still under patent and inexpensive generic drugs.

William S. Comanor lays out some of the patterns in his essay, “Why Are (Some) U.S. Drug Prices So High?” which is subtitled, “The Hatch–Waxman Act promotes both pharmaceutical innovation and price competition, confounding simple comparisons of U.S. and foreign drug prices” (Regulation, Winter 2021/2022).

Here’s a table showing how the US pharmaceutical industry has evolved. Notice that about 90% of dispensed prescriptions are generics, accounting for about 20% of total invoice spending, while 10% of prescriptions are branded drugs, accounting for 80% of invoice spending.

[Table 1 from Comanor’s article in Regulation, v44n4]
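One back-of-the-envelope implication of those shares: if generics are roughly 90% of prescriptions but only 20% of invoice spending, while branded drugs are 10% of prescriptions and 80% of spending, then the average branded prescription costs on the order of 36 times as much as the average generic one. A quick sketch of that arithmetic, using the rounded shares quoted in the text rather than the exact table values:

```python
# Back-of-the-envelope: implied spending per prescription, branded vs. generic,
# using the rounded shares from the text (approximations, not exact table values).
generic_rx_share, generic_spend_share = 0.90, 0.20
branded_rx_share, branded_spend_share = 0.10, 0.80

# Average spending per prescription, relative to the overall average:
generic_avg = generic_spend_share / generic_rx_share  # about 0.22x the average
branded_avg = branded_spend_share / branded_rx_share  # 8.0x the average

ratio = branded_avg / generic_avg
print(f"average branded fill costs ~{ratio:.0f}x an average generic fill")
```

The exact multiple depends on the precise shares, but the order of magnitude is the point: the bifurcated market pairs a small number of very expensive patented drugs with a large volume of cheap generics.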

A common complaint about the US pharma industry is that brand-name drugs sell for more in the United States than in other countries, which often impose price controls and other rules on drugs from US firms. What is heard less often is that prices for generic drugs are typically lower in the United States. Here’s a comparison from Comanor. When adjusted for manufacturer discounts (“net pricing correction”), branded drugs cost about twice as much in the US as in Japan, Germany, and the UK. However, generic drugs in the US cost about one-third less than in Germany and the UK, and less than half as much as in Japan.

[Table 3 from Comanor’s article in Regulation, v44n4]

The bifurcated US approach to pharmaceuticals has led to global dominance of US drug companies. Comanor reports:

As disclosed in the RAND report, the United States accounts for 58% of total pharmaceutical sales revenue among nations in the Organisation for Economic Co-Operation and Development, whereas the second and third highest countries, Japan and Germany, are 9% and 5% respectively. Moreover, because U.S. branded prices are higher than elsewhere, the United States accounts for approximately 78% of worldwide industry profits. All other countries, in aggregate, account for less than one-third of that amount. Put simply, U.S. profits incentivize global innovation.

Of course, one can imagine a variety of potentially useful ways to tweak the patent rules or the rules for drug regulation. It also seems to me that the goal of profit-seeking drug companies is to develop expensive drugs for extreme conditions that will generate high profits, along with in-demand drugs for conditions like hair loss. There’s an important role here for government support of R&D to encourage innovation in ways that the market might not reward very well: for example, a focus on important but neglected health conditions; a focus on “orphan drugs” for health problems that affect only a few people and might not be very profitable; and a focus on versions of drugs with lower costs or fewer side effects. It would also be nice if the rest of the world would step up and pay a bigger part of the bill for developing new drugs, too. But in some broad sense, the bifurcated US drug market–an outcome of the Hatch-Waxman Act passed back in 1984–has actually worked out pretty well.