Development Policies with the Best Benefit-Cost Ratios

In a world with lots of problems, and even more proposed policies to address each of them, it makes sense to study the possibilities–and then to prioritize the policies with the highest estimated ratios of benefits to costs. The Copenhagen Consensus think tank carried out this exercise and came up with 12 policies. A special issue of the Journal of Benefit-Cost Analysis (Spring 2023, 14: S1) has a symposium of 12 open-access papers describing the calculations for each policy. The overview essay by Bjorn Lomborg carries the unwieldy but descriptive title “Save 4.2 Million Lives and Generate $1.1 Trillion in Economic Benefits for Only $41 Billion: Introduction to the Special Issue on the Most Efficient Policies for the Sustainable Development Goals.” He writes:

The approaches cover tuberculosis, education, maternal and newborn health, agricultural R&D, malaria, e-procurement, nutrition, land tenure security, chronic diseases, trade, child immunization, and skilled migration. Spanning 2023–2030, these policy approaches are estimated to cost an annual average of $41 billion (of which $6 billion is non-financial). They will realistically deliver $2.1 trillion in annual benefits, consisting of $1.1 trillion in economic benefits and 4.2 million lives saved. The pooled benefit–cost ratio of all 12 investments is 52.
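As a quick arithmetic check on the quoted totals (my own back-of-the-envelope calculation, not from the paper), dividing the annual benefits by the annual costs roughly reproduces the pooled benefit-cost ratio:

```python
# Back-of-the-envelope check of the pooled benefit-cost ratio in the quote.
annual_costs = 41e9        # $41 billion in average annual costs
annual_benefits = 2.1e12   # $2.1 trillion in annual benefits

pooled_bcr = annual_benefits / annual_costs
print(round(pooled_bcr))   # roughly 51, which the special issue reports as 52
```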

You can check out the essays for yourself, but here’s just a little more information:

  • A global plan to increase spending on responding to tuberculosis could lead to “27.3 million averted deaths over the 27-year period between 2023 and 2050, or 1 million averted deaths per year on average.”
  • Two policies could use education resources more effectively: “The first approach is called ‘teaching at the right level.’ This intervention takes students out of age-based classes for 1 hour a day to learn at their specific learning level, either physically or with a tablet. Material is tailored so children are not lost or bored. Just 1 hour a day for a year can make the full school year deliver learning that would have taken 2–3 years otherwise. The second strategy is ‘structured pedagogy.’ As used successfully in Kenya, teachers are provided with lesson plans and supported with workshops and text messages. The extra cost for one student for 1 year is less than $8, and it has been shown to deliver learning that is equivalent to almost 1 extra year of schooling.”
  • Higher spending on global agricultural R&D, via national-level programs, international research coordination, and involvement of private firms can increase output and farm incomes, while also reducing food prices and hunger. “The average increased spending annually is about $5.2 billion, with annual benefits of $184 billion. Over the entire 35-year period, the average BCR [benefit-cost ratio] is 33.”
  • Improving the use of long-lasting insecticide-treated nets can prevent malaria. “The study estimates the costs and benefits of scaling up the number of LLINs by 10 percentage points in the 29 highest-burden countries in Africa from 2023 to 2030. By the end of the decade, this effort will have halved deaths from malaria. Each dollar spent will deliver $48 of social benefits.”
  • Scaling up access to maternal and neonatal health interventions, with a focus on 55 lower-income countries, can save more than 160,000 mothers and 1.2 million children each year.
  • When governments use e-procurement, they can reduce the cost of government purchases and reduce corruption. The study looks at “11 e-procurement initiatives in low-income countries like Bangladesh and Rwanda, middle-income countries such as Ukraine and Tunisia, and high-income countries like Italy and South Korea.” It shows that the cost is likely very low: over the first 12 years, costs average $16.7 million, irrespective of a country’s size–a trivial sum compared to most government budgets. “The average reduction in procurement prices is 6.75%, leading to savings by 2030 worth more than $100 million per year for an average low-income country. In lower middle-income countries, the average savings are more than $1 billion per year.”
  • Improving children’s diets in their first 1,000 days pays off. “The first two policies focus on micronutrients for pregnant women, with a BCR of 24. The third and fourth policies focus on two ways of promoting complementary feeding in the 40 low- and lower middle-income countries with the highest rates of stunting. The third intervention delivers only information to the top 40%, who can afford food but need more guidance (BCR of 16), while the fourth intervention focuses on the lower 60%, who need both information and more food, which means higher costs, leading to a lower BCR of 7.5. Finally, the fifth intervention examines the costs and benefits of ‘small quantity lipid-based nutrient supplements,’ which delivers a BCR of 14.”
  • Improving property rights in lower-income countries involves many steps: “surveying and registering land, digitizing land registries, and operation costs, along with the need to strengthen institutions and resolving land disputes.” But estimates for sub-Saharan Africa suggest the “BCR is 18 for rural areas, and the BCR is 30 for urban areas.”
  • Tackling noncommunicable diseases can have big payoffs, including cardiovascular preventive care (like meds to lower blood pressure) as well as policies related to tobacco, alcohol, and salt use: “In total, the cost of introducing all these approaches across low- and lower middle-income countries requires an annual additional $4.4 billion and this will save about 1.5 million lives, delivering a BCR of 23.”
  • A 5% increase in global trade, after taking into account both benefits and costs to those in industries exposed to imports, has a BCR for high-income countries of “only” 7. “However, for the poorer half of the world, the benefits are vastly higher than the costs, with a BCR of 95.”
  • Increased childhood immunization programs in 80 low- and middle-income countries could cost $1.7 billion, but with benefits outweighing costs by more than 100 to 1. Specific vaccines are “the pentavalent vaccine, human papillomavirus (HPV) vaccine, Japanese encephalitis (JE) vaccine, measles (MCV) vaccine, measles-rubella (MR) vaccine, meningococcal conjugate A (Men A) vaccine, pneumococcal conjugate (PCV) vaccine, rotavirus vaccine, and yellow fever (YF) vaccine.”
  • A 10% increase in the “skilled migration” that has already taken place, where this includes “physicians, engineers or STEM workers, and other persons with advanced education,” seems politically possible. “In total, 10% more highly skilled migration can deliver a global BCR of 20, and within Africa of 4–7.”

A proposal to spend $41 billion per year is of course a lot of money, but some perspective is useful. The proposed US budget that President Biden sent to Congress earlier this year suggested $7.3 trillion in total spending. Divided over 365 days, this represents average federal spending of about $20 billion per day, every day. Thus, the proposed amount is two days of federal spending out of the year. Of course, there’s no reason why the US government should pay the entire bill, except for the political optics of having the United States save millions of lives and generate more than $1 trillion in economic benefits. We could negotiate with the governments of other high-income countries to pay half the cost. Or we could work with private charities: US charitable giving was about $500 billion in 2024.
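The budget comparison above is simple enough to verify directly:

```python
# Putting the $41 billion annual proposal next to the proposed
# $7.3 trillion US federal budget.
federal_budget = 7.3e12   # proposed total federal spending for the year
proposal = 41e9           # annual cost of the 12 policies

per_day = federal_budget / 365        # average federal spending per day
days_equivalent = proposal / per_day  # proposal measured in days of federal spending

print(round(per_day / 1e9))  # about 20 (billion dollars per day)
```

The result for `days_equivalent` comes out at just over two days of federal spending.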

Yes, of course, there can be multiple slips between the cup and the lip in supporting these kinds of programs. They need to be run transparently and effectively. But the potential payoffs are remarkable.

All Three Sides of the Global Energy Challenge

It sometimes feels to me as if issues about energy are all being condensed down into climate change issues. But energy policy debates should have three sides. Michael Greenstone delivered the AEA Distinguished Lecture on “The Economics of the Global Energy Challenge” in San Antonio in January. It’s now published in the AEA Papers and Proceedings (2024, 114: 1–30). As he explains it: “The global energy challenge is defined by three often conflicting goals that all societies are pursuing: inexpensive and reliable energy, clean air, and limiting damages from climate change.”

Greenstone emphasizes some basic arithmetic about global energy use. There is a close correlation between per capita energy use and a country’s per capita GDP: to put it another way, there are no counterexamples of a country that has become rich without a dramatic rise in energy use.

As a result, there are billions of people around the world living in low-income and middle-income countries whose aspirations for themselves and their descendants involve a dramatic rise in energy production and consumption. An average American uses about 13,000 kilowatt-hours of electricity per year. In comparison, Greenstone writes:

How pervasive is “low” energy consumption? 3.85 billion people live in countries with per capita electricity consumption below 1,500 kWh per capita annually. And 4.36 and 6.83 billion people live in countries with consumption below 2,500 kWh and 5,500 kWh per capita annually, respectively. The point is that there are billions of people who want the higher energy consumption that helps to unlock higher living standards.

In comparison to the billions of people, the US population is 330 million. As I’ve pointed out before, the US now accounts for about 14% of global carbon emissions, while China accounts for 31% and India for 7% (and rising quickly). The nations of Africa now account for less than 4% of global carbon emissions, as do the nations of Central and South America. Greenstone cites one estimate, based on current trends, that energy demand in high-income countries will be basically flat over the next few decades, but energy demand for the rest of the world will triple.

I’m certainly not opposed to US and other high-income countries seeking to reduce their carbon emissions. But looking ahead a few decades, the outcome for carbon emissions around the world is going to be determined by the development path of today’s low- and middle-income countries. The result is what Greenstone calls “cruel arithmetic:”

[T]he global energy challenge exposes climate change’s cruel arithmetic. For the OECD and non-OECD groups of countries, Figure 7, panel A reports their cumulative emissions of CO2 in metric tons since the Industrial Revolution and their projected emissions of CO2e between 2021 and 2100. [The “e” after CO2 stands for “equivalent,” meaning that emissions of all other greenhouse gases, like methane, are being included as well, but measured in terms of carbon-equivalent effect on climate change.] Through 2020, OECD countries accounted for 955,151 million metric tons of CO2, which is roughly 56.2 percent of emissions to date; today’s wealthy countries are responsible for a majority of historical emissions and a disproportionate share when one accounts for their share of global population (17.3 percent in 2020). The projections for the remainder of the century tell a very different story: OECD countries are projected to account for another 746,192 million metric tons of CO2e but the non-OECD countries’ projected cumulative emissions are much larger at 2,599,515 million metric tons. …

However, the planet and its atmosphere only care about total emissions and have no interest in history or equality or any other metric. … Today’s OECD countries are only projected to emit 8 billion tons of CO2e at the end of the century, while non-OECD countries are projected to emit 36 billion tons. So even if the OECD countries become carbon neutral by then … meeting the 2.0° C target requires the non-OECD countries to cut their end-of-century CO2 emissions by roughly 85 percent relative to their current baseline projections.
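The arithmetic in these passages can be checked directly. Here is a sketch that reads the cumulative figures as millions of metric tons (so the OECD’s 955,151 figure is roughly 955 billion tons, consistent with the stated 56.2 percent share of historical emissions):

```python
# Checking Greenstone's "cruel arithmetic" from the quoted figures.
oecd_historical = 955_151                      # million metric tons CO2, through 2020
global_historical = oecd_historical / 0.562    # OECD is 56.2% of the historical total

# End-of-century annual flows, in billions of tons of CO2e per year:
non_oecd_2100_baseline = 36
non_oecd_2100_after_cut = non_oecd_2100_baseline * (1 - 0.85)  # the required ~85% cut

print(round(global_historical / 1e6, 1))  # ~1.7 trillion tons of historical emissions
print(round(non_oecd_2100_after_cut, 1))  # ~5.4 billion tons per year remaining
```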

A policy which tells countries around the world not to have a dramatic rise in per capita energy use–that is, not to develop–seems like a non-starter. And for these countries, fossil fuels are projected to be the dominant source of energy (even if their share will decline somewhat) through the middle of the 21st century. Pretty much whatever happens with carbon emissions in the US and other high-income countries, the outcome will be determined by what happens in low- and middle-income countries. I’m not saying that’s fair or right, but as Greenstone notes, the level of carbon emissions doesn’t care about history or fairness.

I’ll add a few more points here:

1) Greenstone’s second point is about the costs of conventional air pollution. One part of the case for aggressive reduction of fossil fuel use is the immediate health effects of conventional pollutants. In the United States, the estimates are that air pollution reduces life expectancy by about 0.3 years on average–more in places like southern California. But the health costs are much higher in other places. Here’s Greenstone:

It may be surprising, but air pollution is the greatest current external threat to human health globally, with the average person losing more than two years of life expectancy from air pollution. This loss is comparable to that from tobacco smoking and much greater than that from alcoholism, terrorism, war, and so on (Greenstone and Hasenkopf 2023). The air-pollution-induced loss of life expectancy varies widely around the globe, with relatively low levels in today’s wealthy countries and losses of four years or more in India, Bangladesh, Pakistan, and other parts of South Asia (Greenstone and Hasenkopf 2023). Energy production that limits local air pollution is typically more expensive largely because coal is bountiful and inexpensive compared to combining coal combustion with air pollution control devices or using alternative energy sources. In the context of the global energy challenge, it is noteworthy that the costs and benefits of policies that address the air pollution externality largely occur within the same country.

Thus, when low- and middle-income countries weigh steps to reduce conventional air pollution from fossil fuels, they have an additional reason to search out alternative sources of energy.

2) Greenstone is focused on the global picture, not the US situation. However, I’d note that the US and other high-income countries have their own version of his first theme, the inexpensive and reliable energy challenge. For the US, it’s not about a dramatic increase in overall energy production. Instead, the issue is that the US currently gets about 79% of its energy from fossil fuels (petroleum, natural gas, coal). If that share is to be dramatically reduced, then alternative energy sources are going to need to be dramatically expanded. There are hard questions here, like a potential expansion of nuclear electric power (now 8% of all US energy). Or if the primary part of the answer is going to be an expansion of solar (now about 2% of total US energy supply) and wind (4%), then these sources will need to be expanded by multiples of five or ten. That expansion would need to be combined with a comparable rise in building power transmission lines around the country, along with finding ways to store energy for night and for when the wind isn’t blowing–which (with current technology) requires either a truly enormous expansion of battery capacity or some kind of back-up energy generation. All of this is against a backdrop where energy needs for data centers and applications like artificial intelligence are rising sharply.

3) For those interested in learning more about international dimensions of climate change policy, I can recommend the freely available symposium in the Summer 2023 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

“Check Your Banners and Your Membership Cards at the College Gate”

The Columbia University campus, along with many other college campuses, has been convulsed in the last few months by slogan-chanting, banner-waving protests. Thus, I was struck when I ran across an address delivered by Frank Fackenthal, then the president of Columbia, on September 26, 1946, on the occasion of the start of a new academic year. It’s reprinted in The Greater Power and Other Addresses, a collection of speeches from Fackenthal, published in 1949.

(I originally ran across part of the quotation that follows in a letter to the editor from Irving Kushner, published in the Wall Street Journal, March 5, 2024. Kushner was apparently a first-year student at Columbia that year. But one of my promises to myself and to readers, here at Conversable Economist, is that I don’t pass along quotations without verifying them–and finding the time to track down the speech in the 1949 book took me a while.)

Here is a chunk of Fackenthal’s speech:

You who have reached the age of advanced study will, of course, have opinions, maybe even prejudices; but acceptance in an academic community carries with it the obligation to submit those opinions and those prejudices to examination under the bright light of human thought and experience. If, perchance, your views have been crystallized into slogans held aloft on banners, or are subject to control by allegiance to minor or major pressure groups, check your banners and your membership cards at the college gate. A slogan-decorated banner is alien to the academic life, and is in addition an unwieldy, an embarrassing, a distracting thing in a classroom or wherever free discussion is in progress. Time and energy needed for the study of ideas will be wasted in protecting a preconceived notion: a notion, be it admitted, that study may confirm. …

Any young man or woman who makes application for admission to an American college or university, by that very act agrees, if admitted, to try to develop his faculties, to think independently, to form his own judgments, to gain a sense of values. Without such agreement, his entrance into college is a travesty …

If when you leave the University on Commencement Day, after having submitted yourself to the processes of true academic life, you wish to have back your old banner, claim it, and you can take your place in the body politic with the deep satisfaction of tested and confirmed judgment. Equally deep can be your satisfaction should you decide not to claim it, for you will know that you have the ability and the willingness to face and to evaluate ideas.

I’m personally sympathetic to Fackenthal’s view, but in some ways, I was an old fuddy-duddy even when I was young. When I was in college, without any clear idea of what would come next, I viewed my time as perhaps the one time in my life with a chance to read and study and discuss, in a concentrated way, across a wide range of topics. (Not just economics!) I really wanted to put in the time, to attend the classes and do the readings and write the papers–and to spend additional time chasing down facts and ideas of interest. Of course, I also had time for friends and fun and extracurriculars–but I was there for the curriculars. My chance for academic life was precious to me.

Of course, I knew plenty of other students for whom the extracurriculars of all sorts played a larger role, including some who were politically quite active. They spent substantial time in off-campus political activities from door-knocking to protests, or even took off for a semester or a year to support a cause. But when a group decided to bring its slogans and signs to campus, as sometimes happened, I walked around the gathering. In other settings, from classroom to dining hall to dormitory, I was happy to converse with friends who were protesting about how they perceived their cause. But for me, as for Fackenthal, pressure groups with slogans and banners are fine and appropriate in many public settings, but alien to academic life.

Uber/Lyft vs. the Minneapolis City Council: Denouement

Last week, I posted about the attempt by the Minneapolis City Council to set higher pay for Uber and Lyft drivers, and the threat by the rideshare companies to withdraw from Minneapolis or from Minnesota as a whole if the law was enacted. The dispute seems settled, at least for now, and I thought some readers might like to know the denouement of the story.

The governor of Minnesota is a Democrat and the Minnesota state legislature is Democrat-controlled, and thus was ideologically sympathetic to an attempt to raise the wages of rideshare drivers. However, unlike the Minneapolis City Council, the state-level politicians were open to evidence and even compromise. Rather than stand by and watch the Minneapolis City Council play brinksmanship games with the rideshare companies, the state legislature blocked the city from setting its own rules, and passed rules of its own. For perspective, here’s a chart comparing the recent proposals, from a story by Max Nesterak in the Minnesota Reformer (May 21, 2024, “Here’s what’s in the bill regulating Uber and Lyft driver pay and labor standards”).

As I discussed in the previous post, rideshare drivers are paid based on a per-mile and a per-minute fee schedule when they are driving customers. The proposals in the chart appear in chronological order. Thus, last year Uber reached agreement with a group called the Minnesota Rideshare Drivers Association, which does not represent all drivers, for a rate of $1.17/mile and $0.34 per minute. Based on existing trip patterns, this would have raised driver earnings by 11%. This agreement was rejected by state legislators, who passed legislation that would have raised earnings 67%–only to see it vetoed by the governor.

This March, the Minneapolis City Council entered the picture with a proposed fee schedule that would increase driver earnings by 45%. Literally the next day, a report from the Minnesota state Department of Labor and Industry (apparently conducted by economists James Parrott of The New School and Michael Reich of the University of California, Berkeley) calculated that if the goal was to assure drivers reached the minimum wage, substantially lower per-mile and per-minute charges were appropriate. The mayor of Minneapolis chimed in with his own proposal. The bill that passed the state legislature is in the bottom row of the table: $1.28/mile and $0.31/minute.

A few thoughts here:

1) The ultimate compromise is extremely close to the per-minute and per-mile charges that Uber had agreed to last year, as well as close to the results from the state-level study for assuring that drivers would earn the minimum wage. The much higher pay levels proposed in state legislation last year, as well as by the Minneapolis City Council, failed to pass.

2) One way to get a flavor for these numbers is to take a fairly common and representative trip and see what it would cost under these rules. A news story from the local Star Tribune newspaper calculated: “The deal means that for a 10-mile, 15-minute ride, a driver would make $17.45. The rates proposed by DFL leaders earlier this month would have netted drivers $20.05 for the same trip, and the original Minneapolis ordinance would have given drivers $21.75.”
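The Star Tribune figure for the enacted schedule checks out: the per-mile and per-minute rates in the bill that passed reproduce the $17.45 number.

```python
# Driver pay for a 10-mile, 15-minute trip under the enacted
# schedule of $1.28 per mile and $0.31 per minute.
per_mile = 1.28
per_minute = 0.31
miles, minutes = 10, 15

pay = miles * per_mile + minutes * per_minute
print(f"${pay:.2f}")  # $17.45
```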

3) The estimates of how much the new pay schedule will raise driver pay are probably unreliable. As I noted in my previous post, they are based on the earlier data using actual trips per driver. However, higher pay for drivers will tend to attract more drivers, which means drivers will need to wait longer for a fare. Higher fares will also discourage some customers. To some extent, this rise in supply of drivers and drop in demand for rides will combine to offset some, perhaps most, of how the pay schedule would otherwise affect drivers.

4) Now that the issue of having government set pay for rideshare drivers is on the table, it’s not going away. As Nesterak writes: “[Minneapolis] City Council members blasted [Governor] Walz for that part of the agreement, saying he caved to multi-billion dollar corporations. They also took credit for the deal.” Similarly, supporters of the bill at the state legislature claimed the bill as only a partial success, and promised to keep negotiating for higher per-mile and per-minute charges in the future.

Declining Labor Share: Measurement and Causes

The “labor share” refers to the share of income in an economy paid to labor. Back in the dark ages of the late 1970s and early 1980s, when I first started studying economics, it was often assumed that the labor share was more-or-less constant over decades. But in the last two decades, the labor share has dropped, both for the US economy and across most high-income countries.

For some perspective over time, the graph shows a time series of the labor share for the nonfarm business sector of the US economy going back to the late 1940s. The vertical axis is set equal to 100 in 2017, which is not super-convenient for my purposes, but for eyeballing the chart, it’s roughly correct to say that 112.5 on the vertical axis is a labor share of about 64% of output produced by this part of the economy.

Thus, you can see a fall in the labor share in the 1960s, and then a bounceback in the late ’60s; a fall in the early 1970s, and a bounceback in the late ’70s. When there was a fall in the early 1980s, looking at this data, I found myself thinking “wait a few years to see what happens,” and there’s something of a bounceback in the late 1980s. There’s again a fall in the early 1990s, but a hearty bounceback in the late 1990s and early 2000s. With this history, when the labor share fell in the early 2000s, I again found myself saying “wait a few years to see what happens.” But as you can see on the figure, the prevailing level in the last 15 years or so (with some fluctuation around the pandemic) has been lower–more like 57% of total output than the earlier level of 64%.
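Assuming, as eyeballed above, that an index value of 112.5 corresponds to a labor share of roughly 64 percent, the index can be converted to approximate shares with a rough linear conversion (my own shortcut, not an official BLS conversion):

```python
# Convert the BLS labor-share index (2017 = 100) to approximate
# percentage shares, anchored on the eyeballed point that 112.5 ~ 64%.
SHARE_PER_INDEX_POINT = 64 / 112.5

def index_to_share(index_value):
    """Approximate labor share (percent) for a given index value."""
    return index_value * SHARE_PER_INDEX_POINT

print(round(index_to_share(112.5)))  # 64, the earlier prevailing level
print(round(index_to_share(100)))    # 57, roughly the recent level
```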

There are (at least) two big questions here: measurement and causes. Loukas Karabarbounis takes on both topics in the Spring 2024 issue of the Journal of Economic Perspectives in “Perspectives on the Labor Share.” (Full disclosure: I’ve been Managing Editor of JEP for 38 years. All articles back to the first issue are available free of charge, compliments of the American Economic Association, which publishes the journal.)

The hyperattentive reader will have noticed that the figure above is the “nonfarm business sector,” which leaves out both agriculture (not a huge part of the US economy in recent decades) and also government and nonprofits, as well as the output of services received by those living in owner-occupied housing. This has been the traditional method of calculating labor share, but what if one looks instead at labor share of the entire GDP?

As intro econ students are taught, the GDP of a country can be measured in several ways. The best-known way is as a measure of output. But in the economic statistics, output only counts when it is sold, which means that output is also a form of income for someone, somewhere–workers, owners of firms, others. Thus, you can also measure GDP by adding up categories of income (along with allowing for income that just offsets depreciation of existing capital). Here are the main categories of income for 2019 from the official national income and product accounts:

Looking at the rows, it’s clear that “compensation to employees” is labor income, and that corporate profits, rental income, and interest are not labor income. The conceptual problem arises in two categories: taxes (less subsidies) on production/imports, and “proprietors’ income,” which refers to businesses owned by the person (or people) who run them, like certain car dealerships, franchises, or health care practices. For the taxes/subsidies category, standard practice seems to be to divide them proportionally between capital and labor; in this way, they don’t influence the labor share. But how to think about “proprietors’ income” is hard.

Consider someone who owns a car dealership or a fast-food franchise. Say the firm makes money, and so the owner gets that money at the end of the year. Should that money be treated as labor income? Or is it really a business profit that should be treated as a return to capital? Or should it be divided in some way? The question is hard, and rather than try to resolve it, Karabarbounis calculates the labor share several different ways. For example, you can treat proprietors’ income as divided equally between labor and non-labor income, or you can treat the portion of proprietors’ income that matches employee compensation as labor income and the rest as capital income. Karabarbounis also suggests looking at the labor share just for output and income from corporations, which is about 50–60% of GDP, but avoids the question of how to divide up other kinds of income.
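To make the alternatives concrete, here is a sketch with hypothetical dollar figures (not the actual 2019 NIPA numbers) showing how the treatment of proprietors’ income moves the measured labor share:

```python
# Labor share under two treatments of proprietors' income.
# All dollar figures below are hypothetical, for illustration only.
compensation = 11.4   # employee compensation (trillions)
proprietors = 1.7     # proprietors' income (trillions)
other_capital = 5.0   # corporate profits, rent, interest, etc. (trillions)

total_income = compensation + proprietors + other_capital

# Treatment 1: split proprietors' income 50/50 between labor and capital.
share_half_split = (compensation + 0.5 * proprietors) / total_income

# Treatment 2: drop proprietors' income from numerator and denominator,
# which implicitly gives it the same labor share as the rest of the economy.
share_excluded = compensation / (total_income - proprietors)

print(round(share_half_split, 3), round(share_excluded, 3))
```

With these made-up numbers, the two treatments differ by close to two percentage points, which is the flavor of the measurement problem: the choice matters for the level, even if most treatments show a similar decline over time.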

When he compares these various measures (and there are more measures in the appendix of the paper for those who can’t get enough of this stuff), they all show a decline in the labor share in the last couple of decades. The method closest to the official Bureau of Labor Statistics data shown above shows the biggest declines. The decline in labor share also holds across almost all US industries.

In international data, the decline in labor share appears across most large economies as well. He writes:

[Consider] the labor share of the 16 largest economies of the world based on 2015 GDP. Out of these, we can see 13 economies whose labor share has declined. The pattern of these declines is not related in any obvious way to geography, level of the labor share, or level of development. We observe labor-share declines in advanced Anglo-Saxon economies (Australia, Canada, and United States), advanced European economies (France, Germany, Italy, and Spain), advanced Asian economies (Japan and Korea), and emerging markets (China, India, Mexico, and Thailand). The only countries with increases are Brazil, the United Kingdom, and Russia.

This breadth of the declining labor share across industries and countries in the last couple of decades means that, when looking for explanations, you need to cast a broad net–that is, you need an explanation that works across countries and across sectors. For example, trying to trace the decline in labor share to the decline in US unionization rates is not much help in explaining the decline in labor share in Italy or India over several decades.

As Karabarbounis points out, “[I]t is fair to say that there is no strong consensus yet about the deeper causes of the labor share decline.” He goes through a list of plausible alternatives, and suggests two as being more likely than others. One is what economists call “capital-augmenting technology” and humans call “automation.” A common model is to think of work as made up of a number of separate “tasks.” “The key assumption is that some types of services can be produced with either capital or labor, whereas other services are produced only with labor. Automation decreases the number of tasks which are produced only with labor, enabling capital to substitute for labor in a larger share of tasks.” Indeed, some researchers have used the declining labor share as a measure of the effects of automation.
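A toy version of this task framework (my own illustrative sketch, not the specific model in the paper): if output is Cobb-Douglas over a unit continuum of tasks, each task earns an equal share of income, so when capital takes over a fraction of the tasks, the labor share falls one-for-one with automation.

```python
# Toy task-based model of automation and the labor share.
# With Cobb-Douglas aggregation over a unit continuum of tasks,
# each task receives an equal income share, so the labor share
# is just the fraction of tasks still performed by labor.

def labor_share(automated_fraction):
    """Labor share of income when `automated_fraction` of tasks
    is performed by capital instead of labor."""
    return 1.0 - automated_fraction

# If automation spread from 36% to 43% of tasks, the labor share
# would fall from about 64% to about 57% of income -- roughly the
# size of the decline visible in the BLS series.
before, after = labor_share(0.36), labor_share(0.43)
```

Real task models add substitution between capital and labor within tasks and differences in task productivity, but the one-for-one logic above is the core mechanism.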

The other plausible alternative mentioned by Karabarbounis is based on changes in product markets. For example: “The evidence shows that an increasing share of economic activity has been concentrated at larger firms with lower labor shares.” In addition, these large firms may be able to charge higher mark-ups over the cost of production. Along similar lines, I find myself wondering if we increasingly live in an economy where digital goods and services make up a larger share of what we consume, both directly via entertainment, and also indirectly as producers use these digital services behind the scenes. For digital goods and services, the marginal cost of an additional user can be very close to zero, but the sellers do need to charge enough to recover the fixed costs of providing these goods and services. The resulting outcomes in terms of prices and mark-ups are an evolving story.

Pushback on Pessimism About Randomized Controlled Trials

Back in January, I posted about an article that was getting some attention in my world. Megan T. Stevenson is an active researcher in the criminal-justice-and-economics literature. She argues that when you look at the published studies that use randomized control trial methods to evaluate ways of reducing crime, most of the studies don’t show a meaningful effect, and of those that do show a meaningful effect, the effect often isn’t replicated in follow-up studies. She mulls over this finding in “Cause, Effect, and the Structure of the Social World” (forthcoming in the Boston University Law Review when they get around to finalizing the later issues of 2023, pp. 2001-2027, but already available at the Review’s website).

This essay feels disheartening. Thus, the editors of Vital City online magazine asked a dozen or so social scientists to react. Here are a few reactions from the essays that caught my eye:

When studying policies for long-standing problems, like reducing crime or improving education, we should expect that the results will often be negative, because that is how reality works, argues Sherry Glied. She writes:

Most new ideas fail. When tested, they show null results, and when replicated, apparent findings disappear. This is a truth that is in no way limited to social policy. Social science RCTs are modeled on medical research — but fewer than 2% of all drugs that are investigated by academics in preclinical trials are ultimately approved for sale. A recent study found that just 1 in 5 drugs that were successful after Stage 1 trials made it through the FDA approval process.

Even after drugs are approved for sale at the completion of the complex FDA process (involving multiple RCTs), new evidence often emerges casting those initial results in doubt. There’s a 1 in 3 chance that an approved drug is assigned a black-box warning or similar caution post-approval. And in most cases, the effectiveness of a drug in real-world settings, where it is prescribed by harried physicians and taken by distracted patients, is much lower than its effectiveness in trial settings, where the investigative team is singularly focused on ensuring that the trial adheres to the sponsor’s conditions — or where an academic investigator is focused on publishing a first-class paper. Most of the time, new ideas and products don’t work in the physical world either — and a darned good thing that is, or we’d be changing up everything all the time …

In contrast, most social science problems are very, very old, and the ideas we have to address them generally employ technologies that have existed for a long time. Our forebears were not all fools — if these strategies were successful, they’d almost certainly have been implemented already (and many have been, so we take them for granted). Operating near the feasible margin means recognizing that, even when they work, interventions are likely to have very modest effects. … It would be worrisome if there were big, effective criminal justice interventions out there that we had missed for centuries. Perhaps we should start our analysis by recognizing that we stand on the shoulders of centuries of social reformers and are operating fairly close to the feasible margin.

It’s implausible to expect transformational change from a randomized trial, but incremental gains can be real and meaningful, argues Aaron Chalfin.

[W]hy should we expect randomized experiments to produce evidence of transformational social change? That seems an impossible standard given that our world is shaped by human nature and a variety of unforgiving social and political constraints. If change is hard and most interventions fail to change the world in transformational ways, then it stands to reason that randomized control trial (RCT) evidence should reflect this seemingly fundamental truth. The fact that most RCT evidence is associated with modest impacts at best matches our understanding of the structure of the social world and serves as a sign that research evidence, rather than being subject to researcher biases, is credible. … Is a 5% or 10% improvement in a given problem a transformation — or is the only true transformation a much bigger result than that? More to the point, why is transformation, which is in the eye of the beholder, the standard to which we must adhere? … Lots of stuff we try indeed doesn’t make much of a difference. But at the same time, RCTs have also led to genuine learning — both about what fails and what succeeds.

Positive incremental reforms do happen over time, and in fact can be better than leaping into an unknown big-picture change, argue Philip J. Cook and Jens Ludwig.

 In these hyperpolarized times, there’s a growing view that the quality of life can’t be improved by modest policy interventions that are limited with respect to scope and scale. In this view, interventions need to be bold and broad, or the status quo will inevitably reassert itself.  Of course, the evidence base for “bold and broad” interventions is often nonexistent, so what is really being advocated is a giant leap into the unknown — what one might call, for lack of a better term, the “you only live once” (YOLO) approach to policy. We disagree.

Ludwig and Cook point to examples of success, including the gradual spread of compulsory K-12 school attendance, or a program in Chicago that reduced violence among young men with a combination of behavioral counseling and mentors.

Maybe the randomized control trial method shouldn’t be viewed as the “gold standard” methodology for causal evidence, a case made by Anna Harvey.

For those not familiar with the idea of a “randomized control trial,” the basic idea is that a group of people are randomly divided. Some get access to the program or the intervention or are treated in a certain way, while others do not. Because the group was randomly divided, a researcher can then just compare the outcomes between the treated and untreated group. This approach is sometimes called a “gold standard” methodology, because it’s straightforward and persuasive. But of course, no method is infallible. One can always ask questions like: “Was it really random?” “Was some charismatic person involved in the treatment in a way that won’t carry over to future projects?” “Was the sample size big enough to draw a reliable result?” “Did the researcher study a bunch of treatments, on a number of groups, but then only publish the few results that looked statistically significant?”

Harvey points out that there are a number of “quasi-experimental” methods where the randomness is not designed by a research study, but instead emerges from a situation. For example, some public programs are rolled out in different places at different times, and if the roll-out is random, one can compare across these places. Sometimes a program is set up so all of those above a certain score can enter a program, while others below the score cannot. Comparing those who are just barely above the benchmark with those just barely below it–who are likely to be quite similar in other ways–can offer a useful comparison. One can look at a certain trendline before and after a given event, and see if it has shifted.
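The score-cutoff comparison can be made concrete with a toy simulation. Everything here is made up for illustration: an assumed true program effect of 0.3, a score cutoff of 60, and a comparison window of 5 points on either side.

```python
import random

random.seed(0)
CUTOFF, BANDWIDTH = 60.0, 5.0

# Simulate 2,000 people with scores and outcomes; those at or above the
# cutoff get the program, which has an assumed true effect of 0.3.
people = []
for _ in range(2000):
    score = random.uniform(40, 80)
    outcome = 0.3 * (score >= CUTOFF) + random.gauss(0, 0.1)
    people.append((score, outcome))

# Compare average outcomes just above vs. just below the cutoff.
above = [y for s, y in people if CUTOFF <= s < CUTOFF + BANDWIDTH]
below = [y for s, y in people if CUTOFF - BANDWIDTH <= s < CUTOFF]
effect = sum(above) / len(above) - sum(below) / len(below)
print(f"estimated effect near the cutoff: {effect:.2f}")  # close to 0.3
```

Because scores just barely above and just barely below the cutoff belong to nearly interchangeable people, the simple difference in means recovers the assumed effect without any randomization by the researcher.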

Studies may have gains for participants in terms of exposing them to expertise and experiences they would not otherwise have had, argues John Maki.

I can’t think of a reform I’ve worked on where the process wasn’t as, or arguably even more, valuable than the outcomes it produced. For instance, when I led the state of Illinois’ public safety grantmaking and research agency from 2015-2019, my colleagues and I created a multiyear funding opportunity for medium-sized cities to implement evidence-based programs to reduce gun violence. The award also required grantees to meet regularly with my staff and subject matter experts to talk about their experiences. While the funding ended several years ago, I still hear from people who were part of the program. They talk not so much about the outcomes their work produced but about the relationships they built with experts they otherwise would never have met, what they learned about working with state grantmakers and researchers, and what they learned about better engaging their community.

Uber/Lyft vs. the Minneapolis City Council

The Minneapolis City Council voted back on March 7 to require that ride-sharing firms like Uber and Lyft increase the pay received by their drivers. Uber and Lyft both responded by saying that they would stop travelling to or from locations in the city of Minneapolis; Uber said that it would leave the state of Minnesota altogether, while Lyft said that it would continue to serve non-Minneapolis destinations in the broader metro area.

My extended family and I live in suburbs that border Minneapolis, and one of my adult children lives in Minneapolis. The ride-share services are mostly a convenience for us, but one of my adult children has a disability and it’s a transportation lifeline for him. Thus, I do have a personal stake in this issue.

(For those not familiar with this area, the population of the broader Minneapolis-St. Paul-Bloomington metro area is about 3.6 million. The population of the city of Minneapolis is about 425,000. However, a number of office buildings as well as destinations like theaters, sport venues, and shopping are within the Minneapolis city limits.)

A complexity in this dispute is that Uber and Lyft drivers are not paid by the hour: they are only paid when actually transporting fares. Their pay is determined by a combination of a fee per mile travelled and a fee per minute of driving. Thus, part of the controversy is over what mixture of per-mile and per-minute fees would assure that drivers receive the minimum hourly wage.

The Minneapolis City Council voted on March 7 for Uber and Lyft drivers to be paid 51 cents per minute and $1.40 per mile. Before the vote, the mayor of Minneapolis pointed out that the state Department of Labor and Industry (run by a sympathetic Democratic governor) was scheduled to publish a study literally the next morning with estimates of what mixture of fees would be needed to assure drivers a minimum wage, and asked that the Minneapolis City Council wait for those numbers. But waiting an entire day for data is not a skill possessed by the council, so it went ahead and voted.

The state report on “Transportation Network Company Driver Earnings Analysis and Pay Standard Options” (March 8, 2024) came out the next day. The “study analyzed extensive data provided by Uber and Lyft about more than 18 million Minnesota transportation network company (TNC) trips and driver earnings for all of 2022, and the results of a survey completed by 1,827 Minnesota drivers.” The report determined that the appropriate mix of fees so that the drivers would make minimum wage would be $0.89 per mile and $0.49 per minute.  
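To see what the gap between the two fee schedules means in practice, here is the driver pay for one hypothetical trip under each standard. The 5-mile, 15-minute trip is my own illustrative example, not a figure from either the council or the state report.

```python
def driver_pay(miles, minutes, per_mile, per_minute):
    """Driver pay for one passenger trip under a per-mile + per-minute standard."""
    return miles * per_mile + minutes * per_minute

TRIP_MILES, TRIP_MINUTES = 5, 15  # hypothetical trip

council = driver_pay(TRIP_MILES, TRIP_MINUTES, per_mile=1.40, per_minute=0.51)
state = driver_pay(TRIP_MILES, TRIP_MINUTES, per_mile=0.89, per_minute=0.49)
print(f"City Council standard: ${council:.2f}")  # $14.65
print(f"State report standard: ${state:.2f}")    # $11.80
```

Because the trip's mileage dominates the per-minute component, most of the difference between the two standards comes from the per-mile fee.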

The Minnesota state study noted that “a third of all drivers provided more than two-thirds (69 percent) of all trips,” and focused on the situation of those drivers, who drive more than 20 hours per week. It broke down the time of drivers into three categories: “Period 1 or “P1” is the time drivers are logged into the app and waiting to accept a ride; period 2 or “P2” is the time drivers are enroute to pick up a passenger; period 3 or “P3” is the time when a driver is transporting a passenger from the pickup location to the drop off location. For purposes of this report, the sum of P1, P2 and P3 equals the driver’s total working hours.”

Obviously, the hourly compensation for driving will differ depending on which time category you use. It will also differ according to what is included in estimates of vehicle costs and other expenses. The report states:

The analysis of company data indicates that gross hourly earnings per passenger time (P3) for drivers in the seven-county Twin Cities metro area averaged $52.94 in 2022. But drivers had a passenger in the car only 58 percent of the time they were logged into the app and available for a dispatch. As a result, average gross hourly earnings per working hour (P1 + P2 + P3) are 42 percent smaller: $30.27. After factoring in expenses for total miles driven during working time, average net hourly earnings were even smaller: $14.48 (or 27 percent of gross hourly earnings based on passenger time). … Those amounts are averages; some drivers earn less, some earn more. … Twenty-five percent of drivers had net, after-expense hourly earnings of $10.54 or less, and 25 percent of drivers had net after-expense hourly earnings of $17.51 or more.

The TNC [Transportation Network Company] pay standard options include two components: a per minute component to compensate for the driver’s time, and a per mile component to compensate for vehicle and other necessary expenses, and as explained below, to cover the cost of possible common workplace benefits. … The Minnesota per minute rate is designed to compensate drivers at the equivalent of the minimum wage, plus the employer share of federal Social Security and Medicare payroll tax … The Minnesota base per mile rate provides for the 63.8 cents per mile cost of acquiring, operating, and maintaining a vehicle based on Minnesota-specific costs from early 2024. The respective per minute and per mile pay standard components are applied to the time and distance of a TNC passenger trip; that is, the pay rates are pegged to the passenger (P3) time and miles. In order to pay drivers for the entirety of their on-app time and for all the miles they drive during on-app time, the per minute and per mile rates are scaled up. Scaling up the per minute pay rate involves dividing by the P3 share of on-app time; scaling up the per mile expense rate involves dividing by the P3 share of total miles driven during all three of the time segments for each trip. … The scaled-up 2024 base compensation rates for the Twin Cities metro area are 48.7 cents per minute and 89.0 cents per mile. … Applying the 2024 base rate pay standard per minute and per mile rates to the hours worked and miles driven during 2022 indicates that average pay per trip for Twin Cities drivers would rise by about 10 percent under the base pay standard.”
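Here is my reading of that scaling-up logic in a few lines of arithmetic. The minimum wage, payroll-tax share, and P3 mileage share below are illustrative assumptions on my part (chosen to land near the report's published rates), not the report's exact inputs.

```python
MIN_WAGE = 15.57       # assumed Minneapolis minimum wage, $/hour
PAYROLL_TAX = 0.0765   # employer share of Social Security and Medicare
VEHICLE_COST = 0.638   # report's per-mile vehicle cost, $/mile
P3_TIME_SHARE = 0.58   # share of on-app time with a passenger (from the report)
P3_MILES_SHARE = 0.72  # assumed share of total miles driven with a passenger

# Scale up so that pay earned only during passenger (P3) time and miles
# covers the entirety of on-app time and miles.
per_minute = MIN_WAGE * (1 + PAYROLL_TAX) / 60 / P3_TIME_SHARE
per_mile = VEHICLE_COST / P3_MILES_SHARE
print(f"scaled per-minute rate: ${per_minute:.3f}")  # near the report's 48.7 cents
print(f"scaled per-mile rate:   ${per_mile:.3f}")    # near the report's 89.0 cents
```

The key move is the division by the P3 shares: the smaller the fraction of on-app time spent actually carrying passengers, the more the per-minute and per-mile rates must be inflated.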

I won’t spend time here digging into the underlying assumptions behind this calculation, like how the total expenses of operating a vehicle are calculated, or whether the P1 time (driver logged on) should be treated the same as P2 time (driving toward picking up a fare) and P3 time (actually transporting someone). But notice that the difference in per-minute fees here is small: 51 cents per minute for the Minneapolis City Council, versus 49 cents per minute for the state report. The big gap is in the per-mile payments: $1.40 per mile vs. 89 cents per mile. I’ll also add that when the bottom line from the state report is “pay would go up 10%,” my working assumption is that one could also tweak the calculations to say that pay is just fine where it is.

The Minnesota state legislature, with a Democratic majority, has now entered the picture. The most recent “compromise” proposal seems to be $1.27 per mile and 49 cents per minute. This proposed per-mile fee, of course, is well above what the state report recommended. Uber and Lyft are still announcing that they will leave the state if this is enacted.

Here, I’ll just offer a couple of thoughts: one about current pay levels for being a ride-share driver, and the other about likely outcomes if the current proposals go into effect–and then Uber and Lyft renege on their oft-repeated promises to leave and instead decide to stay.

The local labor market here is strong. The unemployment rate in the broader Minneapolis metro area has been about 2.5% since 2017–leaving aside the jump and decline in 2020 and 2021 related to the pandemic. The Minnesota state report notes:

In the post-pandemic period, and especially in 2022 and early 2023, the number of such job openings substantially exceeded the number of job seekers. As a result, pay has increased in low-wage jobs, reversing decades of stagnant wages and growing wage inequality. Data from the Minnesota Department of Employment and Economic Development (DEED) indicates pay for the lowest-wage jobs in the seven-county Twin Cities metro area rose nearly twice as fast during the past two years as for the overall private sector workforce, 13.7 percent compared to 7.1 percent.

During this period when jobs were available in the metro area and pay for low-wage jobs was rising, the number of drivers for Uber/Lyft was rising fast. There were apparently about 12,000 drivers for Uber and Lyft early in 2024, and the number increased by 25% from the start of 2023 to the start of 2024. Indeed, a market had developed between Hertz and ride-share drivers, in which Uber and Lyft drivers could rent a car for about $335/week. When the Minneapolis City Council started voting to set rules for driver reimbursement, Hertz shut down the program. Apparently, being a ride-share driver looked like a good option relative to other local job market alternatives.

If new rules for compensating ride-share drivers do go into effect, they will affect both the quantity demanded of rides and the quantity supplied of drivers. The state report offers some general and qualified thoughts about these effects.

The Minnesota state report notes: “Previous studies based on Uber data have found a one percent fare increase lowered demand for rides by 0.33 percent in Los Angeles, 0.52 percent in San Francisco, 0.61 percent in New York City and 0.66 percent in Chicago. The lower estimates are in cities with less dense mass transit alternatives. If Twin Cities rider price sensitivity is roughly in the middle of this range, passenger fares would have to increase significantly to substantially reduce the aggregate number of TNC trips.” Based on these kinds of studies, one might roughly estimate that a 10% rise in fares would reduce ridership by 5%. This drop-off isn’t extreme–but of course, it means that the 95% who continue to ride are paying the higher fares.
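Translating the quoted elasticities into ridership effects is one line of arithmetic per city; the 10 percent fare increase is just an illustrative number.

```python
# Fare elasticities of demand for rides, as quoted from the state report.
elasticities = {"Los Angeles": -0.33, "San Francisco": -0.52,
                "New York City": -0.61, "Chicago": -0.66}
FARE_INCREASE = 0.10  # an illustrative 10% fare increase

for city, e in elasticities.items():
    print(f"{city}: ridership change of roughly {e * FARE_INCREASE:+.1%}")
# A midpoint elasticity of about -0.5 implies roughly a 5% drop in ridership.
```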

The potential adjustment in the number of drivers is perhaps more remarkable. The state report notes: “Nonetheless, it seems likely that the number of drivers will increase as a result of the pay standard. To the extent driver supply increases, forward dispatch and P3 shares will decline. The pay standard could be adjusted to offset these likely responses.” To put this into English, a higher wage could attract more drivers (remember, the number of drivers in this market was rising rapidly), and then the average driver will spend a greater share of time waiting for fares and less time actually transporting passengers.

The per-mile and per-minute fees only apply when actually driving people around, so less time with passengers actually in the vehicle will tend to offset higher per-minute and per-mile fees. This effect is potentially substantial.

Jonathan V. Hall, John J. Horton, and Daniel T. Knoepfle have done a study called “Ride-Sharing Markets Re-Equilibrate” (NBER Working Paper 30883, February 2023). They have data on Uber drivers and trips across 36 US cities from mid-2014 to early 2017. They look at situations where Uber raised the base fare, and find that in the short-run, drivers make more money. But in the medium-run, after a couple of months, the higher fares attract more drivers and then two other effects kick in. One is that with more drivers, it becomes less necessary for Uber to use “surge pricing” to raise fares when demand is high, which reduces revenue for drivers who would otherwise have received those higher surge fares. The other effect is that the larger number of drivers means less time actually spent driving passengers: a fare increase of 10%, after more drivers have entered the market, leads to a drop in utilization of 7%. They write: “After about 8 weeks, there is no clear difference in the driver’s gross average hourly earnings rate compared to before the fare increase.”
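A back-of-the-envelope check on that finding (my arithmetic, not the paper's): if gross hourly earnings are roughly the fare rate times the share of time spent carrying passengers, the two effects nearly cancel.

```python
FARE_CHANGE = 1.10         # a 10% fare increase
UTILIZATION_CHANGE = 0.93  # the 7% drop in utilization after driver entry

earnings_change = FARE_CHANGE * UTILIZATION_CHANGE
print(f"net change in gross hourly earnings: {earnings_change - 1:+.1%}")
# Roughly +2.3%; reduced surge-pricing revenue plausibly absorbs the rest,
# consistent with the finding of no clear difference after about 8 weeks.
```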

(Full disclosure: The authors of this article work for Uber, in the research department. However, the NBER Working Paper series is a well-regarded outlet for high-quality publications. Also, the data is what the data is.)

Of course, I don’t know if the higher fares proposed by the Minneapolis City Council and the Minnesota state legislature would be 100% offset by these counterbalancing effects of less time with passengers in the car. But there would surely be some offsetting effect.

There’s a method of argument called “arguing in the alternative.” It most commonly arises in legal cases. You can imagine a criminal defendant who argues that they weren’t present at the scene of the crime; and if they were present, they didn’t do anything; and if they did do something, it wasn’t intentional; and if it was intentional, it wasn’t harmful; and if it was harmful, there were other contributing factors that caused most of the harm; and so on and so on. In a courtroom, the prosecution needs to offer proof on all of these points, so arguing in the alternative can be a useful strategy. But in real life, the long string of “if” statements sounds evasive and comical.

Thus, I’ve taken a sour amusement in listening to the evolution of arguments among supporters of the higher pay standards for Uber/Lyft drivers, which has gone roughly like this: Don’t worry, Uber and Lyft won’t leave; they are only bluffing. Also, if they do leave, then lots of other ride-share companies will move quickly to enter the market. Also, taxis and other forms of transit can take up the slack. Also, if other ride-share companies don’t enter the market, local entrepreneurs can build their own ride-share apps and enter the market. Of course, if the speaker really believed the first point–that Uber and Lyft won’t leave–the rest of the statements are irrelevant.

In 2023, there were almost 2 million trips per month in the broader Minneapolis metro area via Uber and Lyft. Back in 2015, there were about 1,300 cabs licensed in Minneapolis; by a recent count, there are now 14. Yes, if Uber and Lyft leave Minnesota, there will be a cascade of other adjustments and something will take their place. But the short-term and intermediate disruption is likely to be severe. The costs will fall on those who have come to depend on these services for getting to health care appointments, to work, and even for getting home after overindulging in alcohol. If that happens, I will have an indigestible suspicion that if the state and the Minneapolis City Council had just followed the per-minute and per-mile numbers proposed by the state’s own Department of Labor and Industry, then the stated goal that Uber and Lyft drivers should earn the minimum wage would have been accomplished, and the costs and disruption could have been avoided.

What Economic Research Do Policymakers Want?

The obvious answer is that policymakers want research that supports their personal and political preferences. Conversely, policymakers don’t want research that might pressure them to change their views.

But with that central truth duly noted, situations often arise where policymakers have an overall goal, but the details of how to achieve that goal, or how to estimate the costs and benefits of different approaches, need to be filled in. Consider this comment from Jed Kolko, who just completed a two-year stint as Under Secretary of Economic Affairs at the US Department of Commerce. Kolko writes:

I’ve spent the majority of my career as an economist in the private sector and at think tanks, producing research that I hoped would be useful for policymakers. But I recently completed two years in the Commerce Department, consuming research that could inform the work of the Biden-Harris Administration and the Commerce Department.

And having now seen this from the other side, more as a consumer than a producer of research, I can tell you that most academic research isn’t helpful for programmatic policymaking — and isn’t designed to be. I can, of course, only speak to the policy areas I worked on at Commerce, but I believe many policymakers would benefit enormously from research that addressed today’s most pressing policy problems.

But the structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.

The most useful research often came instead from regional Federal Reserve banks, non-partisan think-tanks, the corporate sector, and from academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:

  1. New measures of the economy
  2. Broad literature reviews
  3. Analyses that directly quantify or simulate policy decisions.

If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.

The first item on Kolko’s list seems to me most directly relevant to work at Commerce. But the other two points reflect themes that come up in a symposium in the Spring 2024 Journal of Economic Perspectives (where I work as Managing Editor), on the subject of “How Research Informs Policy Analysis.” The symposium includes four papers.

Like all papers in JEP, these are freely available online, so I won’t attempt a detailed summary here. But as a quick overview:

The staff of the CBO provide an overview of how it works, but the main focus is on specific questions in seven topic areas where the CBO would like to see additional research: credit and insurance, energy and the environment, health, labor, macroeconomics, national security, and taxes and transfers.

Cass Sunstein focuses on OMB Circular A-4, which governs how the federal government does cost-benefit analysis, and which was thoroughly revised in 2023. As he describes it, the revised document includes “new directions on behavioral economics and nudging; on discount rates and effects on future generations; on distributional effects and how to account for them; and on benefits and costs that are hard or impossible to quantify. The revised document leaves numerous open questions, involving (for example) the valuation of human life, the valuation of morbidity effects, and the value of the lives of children.”

Wendy Edelberg and Greg Feldberg led the Financial Crisis Inquiry Commission authorized by Congress in 2009, in the aftermath of the worst events of the Great Recession, which relied in many ways on economic research. For example, they write:

Based on these experiences, we believe that the best way for a researcher to help government staff get up to speed is to be willing to explain not just research that was personally conducted, but more generally explain where the literature is broadly. Government staff need a framework to understand disagreements in the existing literature. A government report is not likely to incorporate the full worldview of a single expert, no matter how compelling. Staff is best-served when researchers can be honest brokers not just for their own research, but also in characterizing where there is and is not consensus and elucidating points of disagreement.

Moreover, for researchers to help staff pursue fruitful avenues for analysis, researchers should be willing to think outside their models. Of course, the real world is far more complicated than theoretical models and even empirical models. Recognizing that, staff is best served when researchers can brainstorm about the relevance of their analysis to the problem at hand. Sometimes, government staff need a number—even while appreciating the enormous uncertainty around that number.

Finally, Emily Oehlsen runs Open Philanthropy, an organization with a task that sounds a little like an undergraduate essay competition: If you had some money to give away, how would you decide–in some more-or-less objective way–where your donation would have the greatest positive effect? The answer relies in part on how you value gains in income and health, and also on the extent to which others may already be focused on a certain topic, or not, which will affect the gains from your particular contribution. As Oehlsen describes, Open Philanthropy tries to spell out the reasons for its decision-making, often based on available economic research.

One message from these JEP essays, along with the comments from Jed Kolko, is that the kinds of economic research that are rewarded with publication in top journals and with tenure at colleges and universities are often disconnected from the advice that those doing the detail-work of policy-making institutions would like to have available. There are various attempts to present research findings in more accessible ways, including my own efforts in editing the Journal of Economic Perspectives and in writing this blog. But ultimately, there’s no substitute for academic economists taking the time and energy, and developing the maturity of judgement, to put their work in the broader context of big questions and other existing research.

A Primer on the Federal Home Loan Bank System

There are three “government-sponsored enterprises,” commonly called GSEs, that play a big role in US housing finance: the Federal Home Loan Banks, Fannie Mae, and Freddie Mac. Perhaps the key similarity across all three is that when they borrow money, the financial markets perceive that the federal government is standing behind the loan–and so they can borrow at a lower interest rate. However, the ways in which the GSEs are organized and interact with housing markets are rather different. Most notably, Fannie Mae and Freddie Mac were converted to private companies, with shareholders, which then went broke when housing prices declined in the lead-up to the Great Recession, and they have been run by the federal government under conservatorship since then. By contrast, the Federal Home Loan Banks came through the financial crisis of 2008-09 without requiring any financial support.

For those occasions when it is useful to understand the Federal Home Loan Banks, and how they differ from the other housing-related institutions, the Congressional Budget Office offers some guidance in “The Role of Federal Home Loan Banks in the Financial System” (March 2024). From the “At a Glance” overview of the report:

In 1932, lawmakers created a system of Federal Home Loan Banks (FHLBs) as a government-sponsored enterprise (GSE) to support mortgage lending by the banks’ member institutions. The 11 regional FHLBs raise funds by issuing debt and then lend those funds in the form of advances (collateralized loans) to their members—commercial banks, credit unions, insurance companies, and community development financial institutions.

In addition to supporting mortgage lending, FHLBs provide a key source of liquidity, during periods of financial stress, to members that are depository institutions. During such periods, advances can go to institutions with little mortgage lending. Some of those institutions have subsequently failed, but the FHLBs did not bear any of the losses.

FHLBs receive subsidies from two sources because of their GSE status:

  • The perception that the federal government backs their debt, often referred to as an implied guarantee, which enhances the perceived credit quality of that debt and thereby reduces FHLBs’ borrowing costs; and
  • Regulatory and income tax exemptions that reduce their operating costs.

Federal subsidies to FHLBs are not explicitly appropriated by the Congress in legislation, nor do they appear in the federal budget as outlays. The Congressional Budget Office estimates that in fiscal year 2024, the net government subsidy to the FHLB system will amount to $6.9 billion (the central estimate, with a plausible range of about $5.3 billion to $8.5 billion). That subsidy is net of the FHLBs’ required payments, totaling 10 percent of their net income, to member institutions for affordable housing programs. CBO estimates that in fiscal year 2024, such payments will amount to $350 million.

What is the ownership structure of the FHLB system?

The FHLB system is organized as a cooperative; the individual banks are owned by their members, and FHLBs do not issue publicly traded stock (in contrast to Fannie
Mae and Freddie Mac). One implication is that the system is run for the benefit of its members. The 11 FHLBs are jointly and severally liable for the system’s debt; if any one of them fails, the remaining banks become responsible for its debt.

In dollar terms, what’s the shape of the FHLB system?

As of December 31, 2022, the FHLBs reported assets of $1,247 billion, liabilities of $1,179 billion, and capital (the difference between assets and liabilities) of
$68 billion. Assets included $819 billion in advances, $204 billion of investments, and a $56 billion mortgage portfolio. Liabilities included $1,161 billion of debt.
For calendar year 2022, FHLBs reported net income of $3.2 billion and paid members $1.4 billion in cash and stock dividends. FHLBs’ affordable housing payments that year amounted to $0.4 billion.

What makes the FHLB system so safe that it sailed through even the 2008-09 Great Recession?

FHLBs require borrowing members to pledge specific collateral against advances, thus giving the FHLBs priority in receivership over other creditors, including the FDIC [Federal Deposit Insurance Corporation]. Such lending therefore limits the
assets that the FDIC has access to when resolving a failed commercial bank. Moreover, if a commercial bank that is a member institution fails, FHLBs’ advances are paid before the FDIC is paid because the FHLB has a priority claim on collateral. The FDIC is thus exposed to more losses, whereas FHLBs are fully protected. … However, FHLBs face interest rate risk, which is the risk that changes in rates will affect the value of bonds and other securities. FHLBs attempt to limit that risk by
matching the maturities of their assets and liabilities and through other types of hedging. Interest rate risk stemming from mortgage portfolios has contributed to losses by some banks in the past.

How is the FHLB system different from Fannie Mae and Freddie Mac?

[A] large secondary (or resale) mortgage market has developed in which Fannie Mae and Freddie Mac, two other housing GSEs that are now in federal conservatorship, play dominant roles … Fannie Mae and Freddie Mac purchase mortgages from lenders (including members of the regional FHLBs) and
package the loans into mortgage-backed securities that they guarantee and then sell to investors … Today, the primary business of FHLBs still is making
advances to their members.

Why is the FHLB system referred to as the “lender of next-to-last resort”?

During financial crises and other periods of market stress, FHLBs also provide liquidity to member institutions, including those in financial distress. Providing liquidity is one way to protect the financial system from liquidity-driven bank failures. In normal times, however, FHLBs aim to increase the availability of, and lower the rates of, residential mortgages by serving as a source of subsidized funds for financial institutions originating those mortgages. … FHLBs are a “lender of next-to-last resort.” (Banks turn to them before accessing the Federal Reserve’s discount window because borrowing from the window signals that a bank is under stress.)

If we were reinventing the housing finance system today, it seems unlikely to me that we would create the Federal Home Loan Bank system. It’s a long time since 1932. Today, the financing for more than half of all home mortgages doesn’t originate from banks, but instead from “nonbank” financial institutions. But that said, the FHLB system doesn’t cost the government anything directly, it’s part of the network of housing-related finance, and it provides a layer of extra protection as the lender of next-to-last resort for banks under stress. Given that the institution already exists, it feels as if unwinding the $1 trillion-plus in assets would be a substantial task, with limited benefits.

A Downside of the 15-Minute City

The “15-minute city” is getting some attention from urban planners. The idea is that everyone should be able to access the key destinations in their day-to-day life–work, food, schools, recreation–within a 15-minute walk, bike ride, or mass transit ride of their residence. Cars would then be unnecessary for many daily tasks. Most Americans do not live with the experience of a 15-minute city: for example, the average commute to work, typically by car, is about 25 minutes each way. Here, I’ll sidestep the potential environmental or exercise-related benefits, and instead turn to an interview with Edward Glaeser by the McKinsey Global Institute (“What’s the future for cities in the postpandemic world?” April 17, 2024). When asked about the 15-minute city, Ed responds:

I do, in fact, have views on the 15-minute city. And I certainly applaud the idea that we’re going to have land-use regulations that are such that it’s easy to put residences, and workplaces, and cafés, and stores all in the same neighborhood. There are wonderful things about the 15-minute city, a vision of neighborhoods being full of lots of different amenities. It’s great. The ability for us to have access to lots of things without driving a car, that’s fantastic. But the view that we should basically see ourselves as being citizens of a sort of small neighborhood, rather than citizens of an entire metropolis, that feels deeply dangerous to me, especially in America, with its history of profound racial and income segregation.

Together with Carlo Ratti and a series of other coauthors, we put together a paper looking at, essentially, mobility using cellphones and the 15-minute city. And what we find in the US is actually the more that rich people, elites, live within their 15-minute area, they actually integrate more. So in an elite setting, it’s not a terrible thing. If you’re coming from a poorer area, if you’re an African American, the 15-minute-city experience is one that involves just much more experience segregation for them. And so if you want a city that’s integrated, you want to eschew the 15-minute city. You want to embrace a metropolis-wide vision of the city, not one that focuses on small little neighborhoods.

Glaeser always has interesting comments on the history of urban areas and where they are headed, and I recommend the interview as a whole. Here’s one other thought from him about how segregation within cities, by income and by race, varies between adults and children.

Residential segregation feels like it’s really important in lots of ways. And I think it is very important for children. Segregation has a very powerful effect in explaining differential outcomes for whites and African American kids. But as recent work using cellphone data, by Susan Athey and Matthew Gentzkow and their coauthors have shown, experience segregation for adults can be very different than residential segregation.

In most American cities, you get up in the morning, you leave your segregated neighborhood. You go to an integrated firm. You interact with lots of different people. And so the neighborhood doesn’t matter. But it does matter for kids. Because the kids actually don’t go to work in an integrated company. They go to a segregated school. They play on a segregated street corner. Understanding this feels important to me. I have new work with Cody Cook and Lindsey Currier that tries to differentially look at them, the cellphone mobility patterns of poor kids and rich kids, and just documents how much more of a life that is disconnected from the marvels of urban areas that the kids of poverty experience, even in wealthy cities.

Of course, Glaeser’s argument is not a dispositive or unanswerable argument against the idea of a 15-minute city. But there can be a thin line between the idea that it would be nice if more people could work and carry out many aspects of day-to-day living in walkable neighborhoods near their residences, and the argument that people really should mostly stay in their own 15-minute zones, rather than mixing more widely across our urban areas.