Central Bank Digital Currency: The Fed Speaks

The concept of a “central bank digital currency” is open to all sorts of interpretation, much of it unrealistic. To some, it sounds as if the Federal Reserve is going into competition with Bitcoin, or that the Fed is going to set up accounts for individuals. These steps aren’t going to happen. The Federal Reserve takes a first step toward setting expectations, without actually committing to any policy choices, in a discussion paper called “Money and Payments: The U.S. Dollar in the Age of Digital Transformation” (January 2022; if you want to submit feedback, you can go to the link). Near the start of the report, the Fed writes:

For the purpose of this paper, a CBDC is defined as a digital liability of a central bank that is widely available to the general public. In this respect, it is analogous to a digital form of paper money. The paper has been designed to foster a broad and transparent public dialogue about CBDCs in general, and about the potential benefits and risks of a U.S. CBDC. The paper is not intended to advance any specific policy outcome, nor is it intended
to signal that the Federal Reserve will make any imminent decisions about the appropriateness of issuing a U.S. CBDC.

The definition of a CBDC hints at the issues involved. How exactly would a “digital form of paper money” work? Perhaps the key point is that when you get paid or buy something or make a payment with a check or credit card or e-payment, the payments commonly flow from one bank to another. But when you use currency, no individual bank guarantees your payment. Thus, a CBDC opens up the idea of a mechanism for payments that happens outside the banking system, and where the underlying value is backed by the Fed, rather than a bank.

In thinking about this topic, I found some meditations by Raphael Auer, Jon Frost, Michael Lee, Antoine Martin, and Neha Narula to be useful. In a New York Fed blog post called “Why Central Bank Digital Currencies?” (December 2021), they point out that when it comes to facilitating payments, there are four goals:

Payment costs. It costs money to pay money. Costs of payments have generally fallen over time, but surprisingly, not by much – credit card networks still routinely charge merchants service fees of 3 percent, and card revenues make up over 1 percent of GDP in the United States and much of Latin America. …

Financial inclusion. Universal access to payment services is a long-standing policy goal … Inclusion is a major societal concern in both developing economies and in some developed economies with a large unbanked population (the United States and the Euro area, for example). …

Consumer privacy. … Digital payments, including bank accounts, payment cards, and digital wallets, create a data trail. Consumers’ private information is aggregated and distributed for monetization. Recent research suggests that there are public good aspects to privacy; individuals may share too much data, as they do not bear the full cost of failing to protect their privacy when choosing their payment instrument. …

Promoting innovation. New, more convenient and secure payment methods not only benefit consumers but can also spur innovative business opportunities. New technologies also offer an opportunity to potentially automate certain financial practices through “smart contracts,” improving efficiency. 

The recent Fed report also lays out problems along these dimensions. As the Fed writes: “A crucial test for a potential CBDC is whether it would prove superior to other methods that
might address issues of concern outlined in this paper. … The Federal Reserve will continue to explore a wide range of design options for a CBDC. While no decisions have been made on whether to pursue a CBDC, analysis to date suggests that a potential U.S. CBDC, if one were created, would best serve the needs of the United States by being privacy-protected, intermediated, widely transferable, and identity-verified.”

Even in boilerplate writing like this, you can see the issues bubbling under the surface. A CBDC is supposed to be both “privacy-protected” and also at the same time “identity-verified.” Hmm.

A CBDC is likely to be “intermediated,” which refers to the idea that it would operate through outside financial institutions, which would not necessarily need to be banks. People or companies might have a “digital wallet” at these companies to hold their central bank digital currency. But once you have added additional outside financial institutions, with their own costs and profit margins, are payment costs actually likely to fall? And are “unbanked” people who currently lack a connection to the financial system more likely to build such a connection with “digital wallets”?

When it comes to financial innovation creating different kinds of payments, it seems to me that there are plenty of companies giving this a try already, and it’s not clear to me that a CBDC would either increase innovation or cut costs (and thus drive down prices) for this industry.

At present, the case for a CBDC seems to rest on a lot of “could potentially” statements. The Fed report writes:

A CBDC could potentially offer a range of benefits. For example, it could provide households and businesses a convenient, electronic form of central bank money, with the safety and liquidity that would entail; give entrepreneurs a platform on which to create new financial products and services; support faster and cheaper payments (including cross-border payments); and expand consumer access to the financial system.

On the other side, the practical issues here are substantial. If the Fed decides to regulate the CBDC-based payment networks as if they were banks, then any advantages of such a mechanism largely disappear. But if these payment networks are regulated differently than banks, then what risks will be accumulating in the CBDC payment system? For example, how much would concerns over cybersecurity or resilience differ across the two systems? And if the monetary value of CBDC digital wallets can be quickly shifted back and forth, into and out of regular bank accounts, how might these risks spread across the financial system as a whole? My bias at the moment would be to focus on other methods of reducing payment costs, improving privacy, and helping the unbanked, and let the idea of a CBDC simmer on the back burner awhile longer.

For those interested in more detail, Dirk Niepelt has edited a book of 19 short and readable essays on the subject, Central Bank Digital Currency: Considerations, Projects, Outlook (CEPR, November 2021). Some of the essays focus on overall conceptual questions; others provide some detail on the CBDC experiments that some central banks around the world are already carrying out.

Who Benefits from Canceling Student Loans?

Student loans are packaged together and then sold as financial securities, just like auto loans and credit card debt and home mortgages. These asset-backed securities pay a return to their investors as the debts are repaid. If outstanding student loans were to be cancelled, then the investors in these securities would probably be bailed out by the federal government–which was closely involved in securitizing these loans in the first place, and does not want to get a reputation for reneging on its debts. If such a debt cancellation happened, how would the benefits be distributed? Adam Looney tackles this question in “Student Loan Forgiveness Is Regressive Whether Measured by Income, Education, or Wealth: Why Only Targeted Debt Relief Policies Can Reduce Injustices in Student Loans” (Brookings Institution, Hutchins Center Working Paper #75, January 2022).

One basic insight of the paper is that those who have college degrees will tend to have, on average, more income, education and wealth over their lifetimes than those who do not. An additional insight is that many of the biggest student loan debts are not incurred by someone trying to pick up some career training at the local community college, but rather by those borrowing to finance advanced postgraduate degrees in areas like medicine and law. Thus, Looney writes:

There is no doubt that we need better policies to address the crisis in student lending and the inequities across race and social class that result because of America’s postsecondary education system. But the reason the outcomes are so unfair is mostly the result of disparities in who goes to college in the first place, the institutions and programs students attend, and how the market values their degrees after leaving school. Ex-post solutions, like widespread loan forgiveness for those who have already gone to college (or free college tuition for future students) make inequities worse, not better. That’s clearly true when assessing the effect of loan forgiveness: the beneficiaries are disproportionately higher income, from more affluent backgrounds, and are better educated.

For example, here’s a calculation by Looney of who holds student debt, broken down by lifetime wealth and by race. By far the biggest share of student loans is held by white non-Hispanic borrowers who over their lifetimes will be toward the top of the wealth distribution.

Looney readily acknowledges that student loans can be a substantial burden for certain borrowers. He writes:

There is no doubt that one of the most disastrous consequences of our student lending system is its punishing effects on Black borrowers. Black students are more likely to borrow than other students. They graduate with more debt, and after college almost half of Black borrowers will eventually default within 12 years of enrollment. Whether measured at face value or adjusted for repayment rates, the average Black household appears to owe more in student debt than the average white household (despite the fact that college-going and completion rates are significantly lower among Black than white Americans). One reason for the disparity is financial need. Another is the differences in the institutions and programs attended. But an important reason is also that Black households are less able to repay their loans and are thus more likely to have their balances rise over time. According to estimates by Ben Miller, after 12 years, the average Black borrower had made no progress paying their loan—their balance went up by 13 percent—while the average white borrower had repaid 35 percent of their original balance. These facts certainly constitute a crisis in how federal lending programs serve Black borrowers.

But deciding to cancel all student loans–including those taken out by the doctors and lawyers of the future–would be an absurdly expensive and overly broad approach to reducing racial disparities or in general helping lower-income people struggling with student debt. Some targeting is called for.

As one example, Looney suggests expanding Pell grant programs that provide grants, not loans, to low-income students. It is also possible to expand the programs that tie student loan repayment to income. My own sense is that if there is a political imperative for forgiving some student loan debt, it should be focused on debt incurred for undergraduate studies, limited in size, and linked to income: for example, a program for the federal government to pay off up to $10,000 in undergraduate student loan debt for those with the lowest income levels could make a substantial difference to those who (perhaps unwisely) ran up loans they could not readily repay, without being a bailout for those who are going on to well-paid careers.

Some Economics for Martin Luther King Jr. Day

On November 2, 1983, President Ronald Reagan signed a law establishing a federal holiday for the birthday of Martin Luther King Jr., to be celebrated each year on the third Monday in January. As the legislation that passed Congress said: “such holiday should serve as a time for Americans to reflect on the principles of racial equality and nonviolent social change espoused by Martin Luther King, Jr.” Of course, the case for racial equality stands fundamentally upon principles of justice, with economics playing only a supporting role. But here are a few economics-related thoughts for the day clipped from posts in the previous year at this blog, with more detail and commentary at the links.

1) “Some Economics of Black America” (June 18, 2021)

The McKinsey Global Institute has published “The economic state of Black America: What is and what could be” (June 2021). Much of the focus of the report is on pointing out gaps in various economic statistics, including income. While such comparisons are not new, they do not lose their power to shock. For example:

Today the median annual wage for Black workers is approximately 30 percent, or $10,000, lower than that of white workers … We estimate a $220 billion annual disparity between Black wages today and what they would be in a scenario of full parity, with Black representation matching the Black share of the population across occupations and the elimination of racial pay gaps within occupational categories. Achieving this scenario would boost total Black wages by 30 percent … The racial wage disparity is the product of both representational imbalances and pay gaps within occupational categories—and it is a surprisingly concentrated phenomenon.
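Taking the two quoted numbers at face value–a gap of “approximately 30 percent, or $10,000”–a quick back-of-the-envelope sketch shows the rough median wages they jointly imply. This is only arithmetic on the quote itself, not figures from the McKinsey report:

```python
# If the Black-white median wage gap is both ~30% and ~$10,000,
# those two numbers together pin down approximate medians.
gap_dollars = 10_000
gap_share = 0.30

white_median = gap_dollars / gap_share        # implied white median wage
black_median = white_median - gap_dollars     # implied Black median wage

print(round(white_median), round(black_median))
```

The implied medians (roughly $33,000 and $23,000) are only as precise as the rounded figures in the quote, but they make the scale of the gap concrete.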

2) “The Broken Promises of the Freedman’s Savings Bank: 1865-1874” (January 18, 2021)

The Freedman’s Savings Bank lasted from 1865 to 1874. It was founded by the US government to provide financial services to former slaves: in particular, there was concern that if black veterans of the Union army did not have bank accounts, they would not be able to receive their pay. In terms of setting up branches and receiving deposits, the bank was a considerable success. However, the management of the bank ranged from uninvolved to corrupt, and together with the Panic of 1873, the combination proved lethal for the bank, and tens of thousands of depositors lost most of their money. 


Luke C.D. Stein and Constantine Yannelis offer some recent research on the lessons of this grim experience and its long-lasting effects on African-Americans’ trust in financial institutions in “Financial Inclusion, Human Capital, and Wealth Accumulation: Evidence from the Freedman’s Savings Bank” (Review of Financial Studies, 33:11, November 2020, pp. 5333–5377, subscription required; https://academic.oup.com/rfs/article-abstract/33/11/5333/5732662). Also, Áine Doris offers a readable overview in the Chicago Booth Review (August 10, 2020).

3) Symposium on Criminal Justice, Fall 2021 Journal of Economic Perspectives

The journal where I work as Managing Editor published a five-paper “Symposium on Criminal Justice” in the Fall 2021 issue. The topics include policing, bail decisions, algorithms, and prison conditions. The authors each have their own perspectives, of course, but in different ways they are all seeking to come to grips with the apparent racial disparities in the criminal justice system.

— “The Economics of Policing and Public Safety,” by Emily Owens and Bocar Ba (pp. 3-28)

— “Next-Generation Policing Research: Three Propositions,” by Monica C. Bell (pp. 29-48)

— “The US Pretrial System: Balancing Individual Rights and Public Interests,” by Will Dobbie and Crystal S. Yang (pp. 49-70)

— “Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System” by Jens Ludwig and Sendhil Mullainathan (pp. 71-96)

— “Inside the Box: Safety, Health, and Isolation in Prison,” by Bruce Western (pp. 97-122)

4) “Interview with Rucker Johnson: Supporting Children” (October 19, 2021)

Douglas Clement interviews Rucker Johnson about his research in the Fall 2021 issue of For All, published by the Opportunity & Growth Institute at the Minneapolis Fed (“Rucker Johnson interview: Powering potential,” subtitled “Rucker Johnson on school finance reform, quality pre-K, and integration”). One of the themes of the interview is the potential for gains to both equity and efficiency from supporting socioeconomically disadvantaged children along a variety of dimensions. Johnson notes:

On disparities in spending and opportunity across K-12 schools:

Today, about 75 percent of per pupil spending disparities are between states (rather than between districts within states). And we’ve witnessed that inequality in school spending has risen since 2000. After three decades of narrowing—the ’70s, ’80s, and ’90s—primarily due to the state school finance reforms emphasized in my work with Kirabo Jackson and Claudia Persico, there has been a significant rise in inequality, especially sharply following the Great Recession.

What I want to highlight here is the current disparities nationwide in school resources. School districts with the most students of color have about 15 percent less per pupil funding from state and local sources than predominantly White, affluent areas, despite having much greater need due to higher proportions of poverty, special needs, and English language learners.

Teacher quality is often the missing link that people don’t consider directly when thinking about school resource inequities. For example, schools with a high level of Black and Latino students have almost two times as many first-year teachers as schools with low minority enrollment. And minority students are more likely to be taught by inexperienced teachers than experienced ones in 33 states across the country. … Part of it is that the invisible lines of school district boundaries are powerful tools of segregation. It’s a way of segregating and hoarding access to opportunity. And when I say access to opportunity, I mean quality of teachers, I mean curricular opportunity. For example, only a third of public schools with high Black and Latino enrollment offer calculus. Courses like that are gateways to majoring in STEM in college and having a STEM career. Or simply the fact that less than 30 percent of students in gifted and talented programs are Black or Latino.

5) “The Confidence of Americans in Institutions” (July 24, 2021)

In early July, the Gallup Poll carried out an annual survey in which people are asked about their confidence in various institutions. Here are some of the results, as reported at the Gallup website by Jeffrey M. Jones, “In U.S., Black Confidence in Police Recovers From 2020 Low” (July 14, 2021) and by Megan Brenan, “Americans’ Confidence in Major U.S. Institutions Dips” (July 14, 2021).

[Gallup figure: confidence in institutions by racial group]

6) “The Problem of Automated Screening of Job Applicants” (September 24, 2021)

Employers need to whittle down the online job applicants to a manageable number, so they turn to automated tools for screening the job applications. In “Hiring as Exploration” (NBER Working Paper 27736, August 2020), Danielle Li, Lindsey R. Raymond, and Peter Bergman consider a “contextual bandit” approach. The intuition here, at least as I learned it, refers to the idea of a “one-armed bandit” as a synonym for a slot machine. Say that you are confronted with the problem of which slot machine to play in a large casino, given that some slot machines will pay off better than others. On one side, you want to exploit a slot machine with high payoffs. On the other side, even if you find a slot machine which seems to have pretty good payoffs, it can be a useful strategy to explore a little and see if perhaps some unexpected slot machine might pay as well or better. A contextual bandit model is built on finding the appropriate balance in this exploit/explore dynamic.

From this perspective, the problem with a lot of automated methods for screening job applications is that they do too little exploring. In this spirit, the authors create several algorithms for screening job applicants, and they define an applicant’s “hiring potential” as the likelihood that the person will be hired, given that they are interviewed. The algorithms all use background information “on an applicant’s demographics (race, gender, and ethnicity), education (institution and degree), and work history (prior firms).” The key difference is that some of the algorithms just produce a point score for who should be interviewed, while the contextual bandit algorithm produces both a point score and a standard deviation around that point score. Then, and here is the key point, the contextual bandit algorithm ranks the applicants according to the upper bound of the confidence interval associated with the standard deviation. Thus, an applicant with a lower score but higher uncertainty could easily be ranked ahead of an applicant with a higher score but lower uncertainty. Again, the idea is to get more exploration into the job search and to look for matches that might be exceptionally good, even at the risk of interviewing some real duds. They apply their algorithms to actual job applicants for professional services jobs at a Fortune 500 firm.
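The ranking rule described above can be sketched in a few lines. This is a stylized illustration of upper-confidence-bound ranking, not the actual algorithm from the Li-Raymond-Bergman paper; the applicant names, scores, standard deviations, and the z-value are all invented for the example:

```python
# Rank applicants by the upper bound of a confidence interval
# (score + z * std), rather than by the point score alone.
def ucb_rank(applicants, z=1.96):
    """Return applicants sorted by upper confidence bound, highest first."""
    return sorted(applicants,
                  key=lambda a: a["score"] + z * a["std"],
                  reverse=True)

applicants = [
    {"name": "A", "score": 0.70, "std": 0.02},  # strong score, familiar background
    {"name": "B", "score": 0.60, "std": 0.10},  # weaker score, high uncertainty
    {"name": "C", "score": 0.55, "std": 0.01},
]

# Ranking by point score alone would interview A first. The UCB ranking
# puts B first: 0.60 + 1.96*0.10 = 0.796 beats 0.70 + 1.96*0.02 = 0.7392.
print([a["name"] for a in ucb_rank(applicants)])
```

Setting `z=0` collapses the rule back to plain point-score ranking, which is exactly the “too little exploring” case the authors criticize.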

They find that several of the algorithms would have the effect of reducing the share of selected applicants who are Black or Hispanic, while the contextual bandit approach looking for employees who are potentially outstanding “would more than double the share of selected applicants who are Black or Hispanic, from 10% to 23%.” They also find that while the previous approach at this firm was leading to a situation where 10% of those interviewed were actually offered and accepted a job, the contextual bandit approach led to an applicant pool where 25% of those who were interviewed were offered and accepted a job.

The US Economy as 2022 Begins

The Federal Reserve Bank of New York puts out a monthly publication called “U.S. Economy in a Snapshot,” a compilation of figures and short notes about the most recently available major macroeconomic statistics. As we take a deep breath and head into 2022, it seemed a useful time to pass along some of these figures as a way of showing the path of the US economy since the two-month pandemic recession of March and April 2020.

Here’s the path of GDP growth. It has clearly bounced back from the worst of the recession, but it still remains about 2% below the trend-line from before the recession occurred.

Part of the reason why GDP has not rebounded more fully lies in what is being called the “Great Resignation”–that is, people who left the workforce during the pandemic and have not returned. Just to be clear, to be counted as officially “unemployed” you need to be both out of a job and actively looking for a job. If you are out of a job but not looking, then you are “out of the labor force.” Thus, you can see that while the unemployment rate based on those out of a job and actively looking for work is back down to pre-pandemic levels, the labor force participation rate–which counts both those who have jobs and the unemployed who are looking–has not fully rebounded. A smaller share of the population working will typically translate into a smaller GDP. When or if these potential workers return to the workforce will have a big effect on the future evolution of the economy and public policy.
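The distinction between the two rates is easy to see with a toy calculation. The population counts here are made up for illustration, not actual Bureau of Labor Statistics data:

```python
# Unemployment rate: unemployed as a share of the labor force.
def unemployment_rate(employed, unemployed):
    return unemployed / (employed + unemployed)

# Participation rate: labor force (employed + unemployed) as a share of population.
def participation_rate(employed, unemployed, population):
    return (employed + unemployed) / population

# Before: 100-person adult population, 63 employed, 2 unemployed and looking.
pre = (unemployment_rate(63, 2), participation_rate(63, 2, 100))

# After: 2 formerly employed people leave the labor force entirely.
# They no longer count as unemployed, so the unemployment rate barely
# moves even though fewer people are working.
post = (unemployment_rate(61, 2), participation_rate(61, 2, 100))

print(pre, post)
```

The unemployment rate looks nearly identical in the two scenarios, while the participation rate drops from 65% to 63%–which is the pattern the Snapshot figures show.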

Meanwhile, inflation is a worry. The measure of inflation used by the Federal Reserve is called the Personal Consumption Expenditure deflator, which is preferred to the more familiar Consumer Price Index for a variety of technical reasons like broader coverage of consumer spending, although the two numbers move very much in synch. In particular, the Fed focuses on “core” inflation, which means stripping out any effects of energy and food prices. The thinking is that energy and food prices can bounce around a lot for reasons specific to those markets, so if you want to know about inflation spreading over the breadth of the economy, it’s more useful to look at everything else.
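The “stripping out” of food and energy is just a reweighted average over the remaining categories. Here is a stylized sketch; the category weights and inflation rates are invented for illustration and are not actual PCE data:

```python
# Hypothetical spending weights and 12-month inflation rates by category.
weights = {"food": 0.13, "energy": 0.04, "other": 0.83}
inflation = {"food": 0.06, "energy": 0.25, "other": 0.03}

def weighted_inflation(categories):
    """Weighted-average inflation over a subset of categories,
    renormalizing the weights so they sum to one."""
    total_w = sum(weights[c] for c in categories)
    return sum(weights[c] * inflation[c] for c in categories) / total_w

headline = weighted_inflation(weights)    # all categories
core = weighted_inflation(["other"])      # strip out food and energy

print(round(headline, 4), round(core, 4))
```

With these made-up numbers, a spike in volatile energy prices drags headline inflation well above core, which is exactly why the Fed watches the core measure for evidence of broad-based inflation.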

What’s interesting here is that inflation in prices of goods is leading the way, as opposed to inflation in prices of services.

During the aftermath of the pandemic recession, a lot of services industries were hindered by the need for greater in-person contact. Thus, while consumer spending on both goods and services has bounced back, the bounceback has been bigger for goods. If inflation can be roughly defined as too much money chasing too few goods, the demand in the economy has been chasing goods with more enthusiasm than it has been chasing services.

The patterns of real investment in the economy are not unexpected, but they are vivid. Business investment in equipment has spiked back to pre-pandemic levels. One suspects that a certain share of this equipment is what was needed for much greater online interaction.

However, business investment in structures has dropped a lot. With vacancies in business real estate apparent everywhere, the reasons to build more have diminished quite a bit.

On the residential side, there has been a boom in new building. One consequence of the aftermath of the pandemic is that a lot of what was formerly classified as “residential” real estate was quickly repurposed as, in effect, “commercial” real estate serving as an everyday workplace. My suspicion is that this change is causing a lot of people to rethink their residential space: if it’s also going to be a workspace for extended periods, then maybe just planning to drop your laptop on the kitchen table is not a sufficient solution.

In the area of international trade, imports have bounced back, while exports have not. I haven’t seen a fully satisfactory breakdown of why this is so, but one reason is related to the resurgence in purchases of goods just mentioned–including goods with an imported component. Also, “imports” is a category that includes foreign tourism: that is, a US tourist buying German-made goods and services while visiting Germany is viewed as “importing” those goods, while a Japanese tourist buying US goods and services while visiting the US is counted as buying US exports. A surge of overseas US travel helped increase US imports, but there has not been a corresponding surge of tourists coming to the US.

Advice for Academic Writing About Data

As the Managing Editor of an economics journal, I’m always intrigued by advice about what goes into writing a good academic paper. Jon Zelner, Kelly Broen, and Ella August take their whack at this pinata in “A guide to backward paper writing for the data sciences” (Patterns, January 2022). As is usual with these kinds of papers, some of the advice is worthy but dull and very basic. But there are also some insights that resonated with me, and I’ll emphasize those here.

From the introduction:

Academic and applied research in data-intensive fields requires the development of a diverse skillset, of which clear writing and communication are among the most important. However, in our experience, the art of scientific communication is often ignored in the formal training of data scientists in diverse fields spanning the life, social, physical, mathematical, and medical sciences. Instead, clear communication is assumed to be learned via osmosis, through the practice of reading the work of others, writing up our own work, and receiving feedback from mentors and colleagues.

It won’t come as a shock to any reader of academic literature that this “learning by osmosis” is at best a partial success in producing clear writing and communication.

A well-crafted data science paper is a pedagogical tool that not only conveys information from author to reader but facilitates the understanding of complex concepts. This works in both directions: The paper-writing process is an opportunity for the writers to learn about and clarify their understanding of the topic in addition to communicating it to someone else. If we can accept the idea of this kind of writing as teaching, we can take a lesson from research and practice in the field of educational development, particularly the backward approach to curriculum design, introduced by Wiggins and McTighe in their book Understanding by Design: “Our lessons, units, and courses should be logically inferred from the [learning outcomes] sought, not derived from the methods, books, and activities with which we are most comfortable. Curriculum should lay out the most effective ways of achieving specific results … the best designs derive backward from the learnings sought.”

This rings true to me in several ways. When you write, you should listen closely to what you are saying, for the sake of understanding yourself. The great author Flannery O’Connor once wrote: “I don’t know so well what I think until I see what I say.” Writing out results should help you to understand yourself. Another part of the task is not just to tell others what you did, but to teach them. (Surely, all academics know the difference between telling and teaching, right?) Finally, the idea that the paper should emerge from the learning outcome that is sought, not from “the methods, books, and activities with which we are most comfortable” seems worth taking to heart. This approach is what the authors refer to as a “backward-design approach” to academic writing.

Under a backward-design approach, the overarching goals of a course are defined first, and then used to motivate and shape everything from the assignments students will complete, the nature and volume of reading material, and the way class meetings will be used to advance toward these goals. In this way of thinking, a course has a set of standard components—assignments, reading, class time—but the way in which they are devised and arranged is organized around supporting the learning goals of the class. The same approach can be applied to the construction of a research paper: even though most papers have the same sections (introduction, methods, results, discussion) early-career researchers may underestimate the amount of flexibility and room for creativity they have in using these components to achieve their scientific and professional development goals. The backward approach we lay out here is about starting at the end by answering the questions of “What do I want to accomplish with this paper?” and scaffolding each piece to help serve those goals. This is contrasted with the more ad hoc forward approach most of us have learned to live with, in which we begin with the introduction and struggle through to the conclusion with the primary goal of simply finishing the manuscript.

The article goes into the backward-design approach in more detail. Finally, I appreciated these thoughts about figures and tables:

If it can be conveyed visually, do it! Prefer figures over tables and in-text descriptions where you can. …. Reasonably informed readers should be able to get what is going on from looking at your figure and reading the legend, even if they have not read the rest of the paper. This is not a hard-and-fast rule, but if you work toward it you will ensure that the figures convey as much information as possible. … If you must make a results table, keep it small and simple. Big, complex tables are where reader attention goes to die. If information is best conveyed by a table, be sure to include only the most essential information. When a table gets too big, it becomes easy to forget what its purpose is. By keeping it short and cutting out extraneous information, you are better able to keep the focus on your message.

I fully intend to steal the phrase that “big, complex tables are where reader attention goes to die” for future editorial comments.

US Pharmaceuticals: A Bifurcated Market

The US pharmaceutical industry faces two important goals–which conflict with each other. One goal is to provide needed drugs at the lowest possible price. The other is to provide incentives for research and development of new drugs, which requires some form of compensation for the risks of undertaking such efforts. The US patent system allows inventors to have a temporary monopoly on their discovery, so that they can charge higher prices for a time. After the patent expires, it then becomes legal for others to produce generic equivalents. The result is a two-part market for US pharmaceuticals: expensive drugs still under patent and inexpensive generic drugs.

William S. Comanor lays out some of the patterns in his essay, “Why Are (Some) U.S. Drug Prices So High?” which is subtitled, “The Hatch–Waxman Act promotes both pharmaceutical innovation and price competition, confounding simple comparisons of U.S. and foreign drug prices” (Regulation, Winter 2021/2022).

Here’s a table showing how the US pharmaceutical industry has evolved. Notice that about 90% of dispensed prescriptions are generics, accounting for about 20% of total invoice spending, while 10% of prescriptions are branded drugs, accounting for 80% of invoice spending.

Regulation - v44n4 - Article 2 - Table 1
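The split in the table implies a striking gap in average price per prescription. Here is a back-of-the-envelope check, using only the rounded shares quoted above (so the exact ratio is illustrative, not a figure from Comanor):

```python
# Rough check of the price gap implied by the market shares quoted above:
# ~90% of dispensed prescriptions are generic, but they account for only
# ~20% of total invoice spending. These are approximate shares, not exact data.

generic_rx_share = 0.90      # share of dispensed prescriptions
generic_spend_share = 0.20   # share of total invoice spending
branded_rx_share = 1 - generic_rx_share
branded_spend_share = 1 - generic_spend_share

# Relative average price per prescription = spending share / volume share
generic_rel_price = generic_spend_share / generic_rx_share   # about 0.22
branded_rel_price = branded_spend_share / branded_rx_share   # 8.0

price_ratio = branded_rel_price / generic_rel_price
print(f"Average branded prescription costs ~{price_ratio:.0f}x the average generic")
# -> Average branded prescription costs ~36x the average generic
```

In other words, if 10% of prescriptions absorb 80% of spending, the average branded prescription costs roughly 36 times as much as the average generic one.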

A common complaint about the US pharma industry is that brand-name drugs sell for more in the United States than in other countries, which often impose price controls and other rules on drugs from US firms. What is heard less often is that prices for generic drugs are typically lower in the United States. Here’s a comparison from Comanor. When adjusted for manufacturer discounts (“net pricing correction”), branded drugs cost about twice as much in the US as in Japan, Germany, and the UK. However, generic drugs in the US cost about one-third less than in Germany and the UK, and less than half as much as in Japan.

Regulation - v44n4 - Article 2 - Table 3

The bifurcated US approach to pharmaceuticals has led to global dominance of US drug companies. Comanor reports:

As disclosed in the RAND report, the United States accounts for 58% of total pharmaceutical sales revenue among nations in the Organisation for Economic Co-Operation and Development, whereas the second and third highest countries, Japan and Germany, are 9% and 5% respectively. Moreover, because U.S. branded prices are higher than elsewhere, the United States accounts for approximately 78% of worldwide industry profits. All other countries, in aggregate, account for less than one-third of that amount. Put simply, U.S. profits incentivize global innovation.

Of course, one can imagine a variety of potentially useful ways to tweak the patent rules or the rules for drug regulation. It also seems to me that the goal of profit-seeking drug companies is to develop expensive drugs for extreme conditions that will generate high profits, along with in-demand drugs for conditions like hair loss. There’s an important role here for government support of R&D to encourage innovation in ways that the market might not reward very well: for example, a focus on important but neglected health conditions; a focus on “orphan drugs” for health problems that affect only a few people and might not be very profitable; and a focus on versions of drugs with lower costs or fewer side effects. It would also be nice if the rest of the world would step up and pay a bigger part of the bill for developing new drugs. But in some broad sense, the bifurcated US drug market–an outcome of the Hatch-Waxman Act passed back in 1984–has actually worked out pretty well.

Donating to the Conversable Economist

Last summer, ten years after starting this Conversable Economist blog, I finally put up a link for donations. A number of readers have already responded generously, and I appreciate your support more than I can say. Moving forward, my plan is to remind readers of the donation button about twice a year, and the time has come for such a reminder.

This blog serves many purposes for me. It’s an outlet for stuff in my head, so I don’t have to burden family and friends with an overload of economics. It’s a memory aid, so that I can track down things I read 6 or 12 or 18 months ago with relative ease. It’s a commitment device, forcing me to actually read various reports and articles that I might otherwise skim past. But the honest truth is that without a group of faithful readers, none of those motivations would be enough motivation for me to keep the blog going for almost 11 years now.

For readers, my hope is that the blog serves as an example of what economic sociologist Mark Granovetter once called “the strength of weak ties.” His argument was that all of our social networks have “strong ties” and “weak ties,” where strong ties are connections who are also quite likely to be connected with others in our personal network, and weak ties are connections to those who are mostly not connected with others in our personal network. Granovetter makes the point that when you learn something from one of your strong ties, the same lesson could have (and probably would have) been passed along by another one of your strong ties. But the information and lessons that you learn from weak ties might not have come to you in any other way.

If you are looking for a blog with predictable, partisan, and preferably snarky opinions about the headlines of the day, then the Conversable Economist is not going to be your cup of tea. Instead, much of what I do on this blog is to provide weak ties to articles, subjects, and authors that you are less likely to have run across. I’m sure that some of my personal opinions come across in what I choose to pass along, but I’m neither trying to hide my own opinions nor to push them very hard. My own belief is that the supply of opinionated and partisan opinion-writing on the web has become so large that the value of marginal contributors to that dialog has sunk to near-zero. Instead, I hope that whether you agree with me or not, the facts and connections that I pass along are of some value. I am less invested in persuading readers to agree (although agreement is always nice!) than I am in what John Courtney Murray called “achieving disagreement,” by which he meant disagreement reached with a full and sympathetic understanding of the alternative position, rather than disagreement that occurs from confusion, distrust, and a cussed disposition.

I will keep the Conversable Economist blog freely available to all readers, no matter what. But if you feel moved to make a contribution in support of my efforts and if you have the financial resources to do so, I encourage you to click on the “Donation” button near the upper-right of this page.

The Fed as Borrower of Last Resort?

Economics textbooks teach that one role of a central bank during a financial or economic crisis is to act as a short-term “lender of last resort”: that is, when the financial system is in danger of freezing up in a way that can lead to an expanding vicious circle of defaults–as those who can’t get roll-over loans are unable to repay others, who also can’t roll over their loans, and so on–the central bank makes short-term credit available. When the panic passes, the central bank lender-of-last-resort loans get repaid: indeed, because the central bank was the only one willing and ready to make large-scale loans during the crisis, it can even end up making some money from the interest it charges on these loans. Often some firms will end up going broke in the aftermath of the crisis, but the purpose of a lender-of-last-resort policy is not to do a complete or widespread bailout of specific firms. Instead, the goal is to prevent the market as a whole from melting down.

In the last decade or so, the Federal Reserve has transformed how it conducts monetary policy, and recently, part of the new policy seems to involve acting as a borrower of last resort. Kyler Kirk and Russell Wong of the Richmond Fed explain the situation in “The Borrower of Last Resort: What Explains the Rise of ON RRP Facility Usage?” (Economic Brief, December 2021, No. 21-43).

The starting point here is that when the Federal Reserve conducts monetary policy, it seeks to control a specific interest rate called the “federal funds” rate, which can be thought of as a rate at which depository institutions like banks are willing to make very short-term and low-risk overnight loans to each other, often for the purpose of settling up accounts at the end of a business day.

The primary method that the Fed uses for affecting this interest rate is to alter the interest rate that it pays to banks on the reserves that the banks hold at the Fed. But a secondary backup method is to use the ON RRP, which stands for the Fed’s Overnight Reverse Repo facility. For an overview of the ON RRP and how it works, here is a good starting point. But for present purposes, the key is that the ON RRP allows big financial players, like money market mutual funds as well as banks or pension funds, to lend money to the Fed on a very short-term and thus low-risk overnight basis. What’s peculiar is that the amount of Fed borrowing through the ON RRP exploded in size during 2021. Kirk and Wong explain:

The idea is that the ON RRP facility gives participants in the short-term funding market the risk-free overnight option of lending to the Fed at the guaranteed ON RRP rate. Thus, other lending rates — like the fed funds rate — will be above the ON RRP rate. … In the original design, ON RRP was a backstop, as market participants should lend to banks or others (via federal funds, repo, wholesale deposits, commercial papers, etc.) before lending to the Fed. This was the case during the early stage of the pandemic: The daily usage of ON RRP averaged $8.7 billion from March 2020 to March 2021. However, ON RRP usage steadily increased after March 2021 and reached an unprecedented $1.6 trillion in September 2021. This hints at an excess supply of nonbank savings that is not intermediated by banks or absorbed by Treasuries, which has ultimately flowed into the ON RRP facility. The Fed becoming the borrower-of-last-resort has prompted concerns about how the U.S. banking system is functioning during the pandemic. 

In case that blob of text was a bit much to read, let me hit the high points one more time. The ON RRP was intended as a backup for short-term overnight lending. But from March 2021 to September 2021, this backup expanded in size from $8.7 billion to $1.6 trillion. Any shift from a few billion to more than a trillion is a huge deal. Apparently, there is $1.6 trillion or so in funds that large-scale market participants want to lend short-term, overnight, and the Fed is the only party in the market willing to be the borrower–the borrower of last resort, if you will.
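To put the scale of that shift in perspective, here is a quick calculation with the rounded figures from Kirk and Wong (so the multiple is approximate):

```python
# ON RRP usage figures as quoted from Kirk and Wong (approximate).
avg_daily_usage_2020 = 8.7e9   # average daily usage, March 2020 to March 2021
peak_usage_2021 = 1.6e12       # unprecedented September 2021 level

growth_factor = peak_usage_2021 / avg_daily_usage_2020
print(f"ON RRP usage grew by a factor of roughly {growth_factor:.0f}")
# -> ON RRP usage grew by a factor of roughly 184
```

A roughly 184-fold expansion in six months is why "borrower of last resort" seems like the right label.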

Here are a couple of factors going on behind the scenes. Start by considering money market mutual funds. These funds are required to hold only highly liquid short-term assets, so that they can readily finance any withdrawals. When the pandemic hit the economy in March 2020, more investors wanted the safety and flexibility of holding assets in a money market mutual fund, so as you can see from the figure below, the holdings of such funds went up by well over $1 trillion. These additional assets were largely held in US Treasuries, as shown by the upward jump in the green area of the figure. But starting in April 2021 and since then, the holdings of money market mutual funds have shifted. Instead of holding US Treasuries, money market funds are now holding well over $1 trillion in the Federal Reserve’s Overnight Reverse Repo (ON RRP) facility.

What’s behind this dramatic rise and fall in US Treasuries? The story behind the scenes is that when the pandemic recession hit, the US Treasury felt that it needed to have more liquid assets on hand, in case of a banking or financial panic. But then starting in spring 2021, the Treasury began to reduce its holdings of these assets. As Kirk and Wong explain:

In particular, the Treasury has been draining reserve balances in the Treasury General Account (TGA), which is the reserve account maintained by the Treasury to make and receive payments in the banking system. The Treasury has been reducing this account with the Fed by issuing fewer T-bills, effectively releasing these reserves into the banking system. Pre-COVID, the TGA maintained a moderate balance that averaged $290 billion in 2019 and varied with financial conditions and government spending. However, due to uncertainty around the COVID-19 pandemic and the magnitude of the fiscal response, the Treasury increased the TGA balance from approximately $380 billion in mid-March 2020 to an unprecedented $1.6 trillion by June 2020. This rapid increase was primarily facilitated by an expansion in T-bills outstanding from $2.56 trillion in March 2020 to over $5 trillion in June 2020. The Treasury maintained the elevated TGA balance and T-bill supply through February 2021 before normalizing via a reduction in T-bill supply. TGA normalization decreased the TGA from over $1.6 trillion in February 2021 to less than $100 billion by October 2021, primarily through T-bill supply falling from $5 trillion to $3.7 trillion. This effectively reduced the supply of risk-free short-term assets by $1.3 trillion …
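The dollar figures in that passage fit together as simple subtraction; here is a sketch using the rounded numbers quoted above:

```python
# Tracing the TGA normalization arithmetic from the Kirk and Wong passage
# (rounded figures as quoted, not exact data).
tbills_june_2020 = 5.0e12    # T-bills outstanding at the peak
tbills_late_2021 = 3.7e12    # after the supply reduction
tbill_reduction = tbills_june_2020 - tbills_late_2021   # $1.3 trillion

tga_feb_2021 = 1.6e12        # TGA balance before normalization
tga_oct_2021 = 0.1e12        # under $100 billion by October 2021
tga_drawdown = tga_feb_2021 - tga_oct_2021              # $1.5 trillion

print(f"Risk-free short-term assets reduced by ${tbill_reduction / 1e12:.1f} trillion")
# -> Risk-free short-term assets reduced by $1.3 trillion
```

That $1.3 trillion reduction in risk-free short-term assets is the same order of magnitude as the $1.6 trillion that ultimately showed up in the ON RRP facility.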

So roughly speaking, the US Treasury dramatically increased its issuance of short-term Treasury bills at the start of the pandemic. As market investors moved into money market funds, those funds held more of this short-term debt. But then the Treasury cut back its issuance of short-term T-bills, while market investors still wanted a very safe place to stash their liquid reserves overnight–so they turned to the Federal Reserve ON RRP facility. So far, this arrangement seems to be working OK. But as far as I know, no one anticipated that the Fed would become both a lender of last resort in financial crises and also a borrower of last resort as the economy recovered.

For more on this chain of events, I’m just now seeing this January 11 take from Gara Afonso, Lorie Logan, Antoine Martin, William Riordan, and Patricia Zobel of the New York Fed.

What Americans Like about their Health Care

The OECD has just published Health at a Glance 2021, a compendium of health care statistics across the (mostly) high-income countries that make up its membership. Some of the graphs confirm standard reactions to the US health care system: that is, it costs much more than in other countries, but the health outcomes for Americans are often no better, and sometimes worse, than in countries that spend less on health care. That said, I was also struck in skimming through the report by several graphs which suggest that, by international standards, Americans have some reasons to like their health care system.

First, here are a few of the standard comparisons. These first two figures show national health care spending as a share of GDP and on a per capita basis. On both measures, the US spends far more than other countries on health care.

But despite this high level of health care spending, basic measures of US health outcomes are not especially good. There are lots of examples, but here are a few. The first graph shows that US life expectancy is not especially good by international standards: indeed, the growth in US life expectancy since 1970 isn’t all that good, either. The second graph shows US rankings on infant mortality: just beating out China, but not quite as good as Russia. The third graph looks at mortality from what the OECD classifies as “preventable” conditions and “treatable” conditions. For example, lung cancer is classified as “preventable” through reductions in smoking, while breast and colorectal cancers are classified as “treatable.” By either measure, US mortality rates are disturbingly high.

Given these high costs and sub-average outcomes, what do Americans have to be pleased about in their health care system? A few examples caught my eye. One is how people rate their own health. It turns out that by international standards, a low proportion of Americans rate their health as “bad” or “very bad,” while a high percentage rate it as “good” or “very good.” (Although the second figure also shows that the gap between how Americans in the top income group and bottom income group rate their own health is substantial.)

Americans tend to feel satisfied with the quality of health care available in their area–at least more so than people in Sweden, the United Kingdom, or Japan.

It also turns out that when it comes to out-of-pocket health care costs, the US does reasonably well in these international comparisons: for example, the share of household consumption going to out-of-pocket health care spending in the US is the same as in Canada, only a bit higher than in the United Kingdom, and lower than in Sweden, Denmark, or Norway.

This final group of comparisons, showing some ways in which Americans like their health and their health care system, is not intended to suggest that the US health care system broadly understood (that is, understood to include public health efforts that happen outside the health care system itself) doesn’t need some meaningful reforms to reduce costs and to improve access and health outcomes. But it does perhaps help to explain why meaningful reform has been hard to achieve.

Putting a Value on Sweat Equity

For a company with shareholders, it’s straightforward to calculate the market value of the firm: just add up the cost of buying all the stock. For a company that is privately owned, calculating the market value is a lot harder. For a bare minimum value, one could add up the value of any physical or intellectual property assets the company owns. But many small private firms are worth a lot more than this minimum value. This additional value lies in the relationships that the company has built over time with suppliers, customers, and the community. This extra value is sometimes called “sweat equity,” because it’s the value derived from the efforts of the owner.

Jeff Horwich of the Minneapolis Federal Reserve provides a nice overview of some recent efforts to measure sweat equity in “From crêpes to Cargill: Hot on the trail of sweat equity: Fed economists pursue ever-better estimates of the invisible value beneath much of the U.S. economy” (January 3, 2022). Horwich is summarizing and putting in context a recent research paper by Anmol Bhandari and Ellen R. McGrattan “Sweat Equity in U.S. Private Business” which was published in the May 2021 issue of the Quarterly Journal of Economics (136: 2, pp. 727-781), but is freely available as a Minneapolis Fed “staff report.” The abstract of that research paper states:

We develop a theory of sweat equity—the value of business owners’ time and expenses to build customer bases, client lists, and other intangible assets. We … estimate a value for sweat equity in the private business sector equal to 1.2 times U.S. GDP, which is about the same magnitude as the value of fixed assets in use in these businesses. For a typical owner, 26 percent of the sweat equity is transferable through inheritance or sale. 

Horwich provides this background:

Private entities account for more than 60 percent of U.S. business income. Small businesses employ more than 60 million people—nearly four in 10 American workers. … The category of private companies includes some big names. With 155,000 employees around the world, family-owned Cargill is a notable example in the Minneapolis Fed’s district. However, 98 percent of private businesses in the United States have fewer than 20 employees; 96 percent have fewer than 10. …

The equity value of a public company is established by the research and daily decisions of millions of stock market investors: It is the value of all company shares outstanding. For private companies, however—a category that includes sole proprietorships, partnerships, S-corporations, and private C-corporations—we have limited public data and no market to work out what they add up to. … These invisible assets briefly appear when a private company is sold. Public records of the transaction provide a snapshot of the value of its various assets, as declared by the seller. Bhandari and McGrattan access data from the IRS and third-party-compiled records to determine that, for the median transaction, intangible components of equity comprise 64 percent of the price … Bhandari and McGrattan derive what they call a “proof-of-concept” estimate for the sweat equity of all U.S. private businesses: approximately 1.2 times U.S. GDP. This would amount to almost $28 trillion in late 2021.
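The headline number can be sanity-checked against GDP. A minimal sketch, assuming annualized US GDP of roughly $23.3 trillion in late 2021 (my approximation, not a figure from the paper):

```python
# Converting Bhandari and McGrattan's multiple into dollars.
us_gdp_late_2021 = 23.3e12    # assumed annualized US GDP (an approximation)
sweat_equity_multiple = 1.2   # their estimate: sweat equity = 1.2 x GDP

sweat_equity_total = sweat_equity_multiple * us_gdp_late_2021
print(f"Implied sweat equity: ${sweat_equity_total / 1e12:.0f} trillion")
# -> Implied sweat equity: $28 trillion
```

which lines up with Horwich’s “almost $28 trillion in late 2021.”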

Bhandari and McGrattan go beyond just looking at sales of private firms and also dig into income-tax data. The compensation that a business owner receives will be a return on both the physical assets and the sweat equity of the firm.

The importance of sweat equity raises a number of issues. First, sweat equity pretty much by definition involves the time of the business owner in one way or another. Thus, it may be that the key parameters in encouraging small businesses involve this investment of business owner time. For example, it may be that the key to how taxes or subsidies for small firms affect the firm is through how they affect the incentives of owners to invest hours. It may also be that any financial disincentives for small firms are no more important than the cost in time of following government rules and regulations.

Another issue is that sweat equity may be quite personal to the current business owner. Thus, even if a current owner is making a good living from a business, a potential buyer of the firm must worry that they can only buy the physical assets of the firm, and only a part of the detailed connections, knowledge, loyalties and reputation of the firm. An ongoing project for Bhandari and McGrattan is to study what factors can make sweat equity easier or harder to transfer to a buyer of the firm.