Interview with Joel Mokyr: Past and Future Growth

Allison Schrager has a 50-minute interview with economic historian Joel Mokyr on her “Risk Talking” podcast (May 17, 2022). It’s full of lively thoughts, and includes a transcript. Here are a few that caught my eye.

Kranzberg’s Law

“Technological progress is neither good nor bad, nor is it neutral.” This is known as Kranzberg’s law. It was Melvin Kranzberg who said that, and people keep citing that, although nobody quite knows what he meant.

Economic progress and the small minority

Economic growth and economic progress is not driven by the masses. It is not driven by the population at large. It is driven by a small minority of people who economists refer to in their funny language as upper-tail people, meaning if you think of the world following some kind of bell-shaped or normal distribution, it’s the elite, it’s the people who are educated—not necessarily intellectuals. They could be engineers, they could be mechanics, they could be applied mathematicians. …

[I]f you look at the top 2 percent or 3 percent of the population anywhere, those are the people that are driving economic growth. And that’s still the case. I mean, in the United States, much of the technological progress they’ve been experiencing has been driven by a fairly small number of people. Some of them are Caltech geeks, and some of them are just really good people who are coming up with novel ideas, but basically that’s what it is about. …

And it’s not just about the steam engine or the mule or anything like that, it’s about ideas that try to manipulate nature in a way that benefits humans. And so, I’ll give you one example—it’s not machinery, but it is very critical. It’s vaccination against smallpox, which is very much on people’s minds these days, right? But this is an 18th-century idea. This English country doctor, Edward Jenner, basically came up with this idea. It’s not a machine in any way, but it is a pathbreaking, I would say a radical idea, of how to use what we know about nature to improve human life. And that’s what economic growth in the end is all about. Now, it’s not all human life, it’s material things. It’s how not to get sick, how to get more to eat, how to have better clothing, better housing, to heat your place, to be warmer in the winter and cooler in the summer. It’s about all these things that define our material comfort and our material wellbeing.

Underestimated Productivity Gains in an Information Age

[T]he real problem is that most of the important contributions to economic welfare are often seriously, seriously, seriously underestimated in our procedures. And I believe that they are getting more and more underestimated. If the degree of underestimation is more or less constant, then you don’t care because over time if it isn’t changing over time, you can still see what the trend looks like. But I think that’s not right. I think we are more and more underestimated because the knowledge economy and the digital economy are famously subject to underestimation. …

I mean, just look at the enormous gain in human welfare that we have achieved because we were able to come up with vaccines against corona. Now, it’s not a net addition to GDP because before that we didn’t have corona, but think about the subtraction we would’ve had if it wasn’t for that. And so, I remain a technological optimist, but I’m also very much aware that measures that measure technological progress in a system that was designed for an economy that produced wheat and steel aren’t appropriate for an economy that produces high-tech things that are produced by a knowledge economy. …

Library Technology

[T]hings have changed dramatically in how I do my work. … I used to go to the library four days a week and I would spend 20 to 25 hours a week at least in the shelves, pulling out books, pulling out articles, pulling out journals. I barely go to the library anymore. I mean, why should I? Everything in bloody hell is on my screen here.

Every article I want to look at—even things that aren’t even published—I Google them. Even books published 200 years ago appear on my screen as PDFs. Many of them searchable. I’d be crazy to go to the library. And so the way I do my research, the way I write my own stuff has changed dramatically. My best research assistant is sitting right here on my desk, and it’s my laptop. I mean, that is an amazing thing. Somebody would’ve told me this when I was a graduate student, I would’ve laughed him out of the room. Of course you need to go to the library. When I was a graduate student, I lived in the library. I was there 12 hours a day. I mean, these things have changed dramatically. The way you go to the dentist, the way you go to the doctors, you go to the hospital, you do your shopping—everything has changed.

This last point rings especially true for me, in my work as an editor. I used to have to spend at least half a day each week in the library, tracking down articles so that I would have a better understanding of the papers I was editing. Now, pretty much all of those articles, and older books as well, are easily available through my access to an online college library.

But what Mokyr doesn’t emphasize here is that this change in work patterns is not necessarily an overall savings of time. Because it has become so much easier to look things up, I’m much more likely to do so. My guess is that more than 100% of the time savings from not going to the library and pulling volumes off the shelves now goes into time spent checking and doing background reading on a wider array of articles and data. My productivity is higher, but so is the workload that I impose on myself.

Talking Econometrics

So let’s say for the sake of argument that you’re already interested in econometrics, which is to say how quantitative research in economics is actually done. Then all I really need to say to you is that Isaiah Andrews has a set of four interviews with Joshua Angrist and Guido Imbens, each ranging in length from 10 to 20 minutes, over at Marginal Revolution University.

If you don’t know the names, here’s a brief introduction. Angrist and Imbens, along with David Card, shared the most recent Nobel prize in economics. The prize citation specified that Angrist and Imbens were honored “for their methodological contributions to the analysis of causal relationships.”

For a sense of how Angrist thinks about these causality issues, here are a couple of useful starting points in the Journal of Economic Perspectives (where I work as Managing Editor):

For a couple of examples of work by Guido Imbens in JEP, see:

The guy who is interviewing Angrist and Imbens, Isaiah Andrews, won the John Bates Clark medal in 2021, which is awarded each year “to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge,” for “contributions to econometric theory and empirical practice.” An overview of his work in the Winter 2022 issue of JEP is here.

These really are conversations between Andrews, Angrist, and Imbens, not lectures. No PowerPoint presentations or equations are used! No proofs are involved. It’s just a chance to hear these folks talk about their field and, more broadly, how they see economics.

Why Didn’t the US Adopt the Metric System Long Ago?

Why hasn’t the United States adopted the metric system for widespread use? I’ve generally thought there were two reasons. One is that with the enormous US internal market, there was less incentive to follow international measurement standards. The other was that the US has long had a brash and rebellious streak, a “you’re not the boss of me” vibe, which means that there will inevitably be pushback against some external measurement system invented by a French guy and run by an international committee based in a Paris suburb.

However, Stephen Mihm makes a persuasive case that my internal monologue about the metric system is wrong, or at least seriously incomplete, in “Inching toward Modernity: Industrial Standards and the Fate of the Metric System in the United States” (Business History Review, Spring 2022, pp. 47-76, needs a library subscription to access). Mihm focuses on the early battles over US adoption of the metric system, waged in the 19th and early 20th centuries. He makes the case that the metric system was in fact blocked by university-trained engineers and management, with the support of big manufacturing firms.

The metric system was part of US policy discussions in the early 1800s, after it was adopted in France. Mihm writes:

By the 1810s, most commentators considered the metric system a failed experiment. One writer in 1813 noted that the French government, despite wielding considerably more power over their own populace than the United States could, nonetheless failed to secure adoption of the metric units. “The new measures . . . are on the counter . . . but the transactions are regulated by the old.” In 1819, a House of Representatives committee studying the issue concurred in this assessment, pointing to France’s failure to secure widespread adoption of the metric system.

Throughout the 19th century, there was an ongoing discussion about appropriate systems of weights and measures, and the metric system was part of those discussions. But the battle lines for this dispute began to be clarified in the 1860s. The growth of industrialization across the United States meant that there was also a movement among US industries and engineers to standardize measurements in areas like screw threads, nuts and bolts, sheet metal, wire, and pipe, so that a manufacturing firm could use inputs from a variety of different suppliers around the country, confident that the parts would fit together. A similar movement arose in the railroad industry, standardizing axles, couplings, valves, and other elements so that rolling stock would be interchangeable. This movement was led by a mixture of mechanical engineers and management experts, and it was based on the inch as the standard unit of measure. At least at this time, it’s fair to say that most people cared much less about measures of weight or volume.

But a number of scientists and social reformers preferred the logical organization of the metric system. Mihm reports that in 1863, “the newly created National Academy of Sciences recommended that the United States adopt the metric system. That same year, the United States participated in international congresses on postage and statistics that endorsed the metric system for both scientific and commercial purposes.” Federal legislation legalizing the use of the metric system passed in 1866.

The struggle over how the US measuring system would be standardized then evolved from the late 1800s up through the early 20th century. Mihm lays out the details. For example, in 1873 the prominent educator and president of Columbia University, Frederick Barnard, founded the American Metrological Society to push for the metric system. In 1874, the American Railway Master Mechanics’ Association instead pushed for the already-developed system of standardization based on inches. As one advocate of the inch put it: “While French savants were laboring to build up this decimal system of interchangeable measures … the better class of American mechanics were solving the problem of making machinery with interchangeable parts.”

The dispute got a little weird at times. Mihm tells of the International Institute for Preserving and Protecting Weights and Measures, founded in 1879, which promoted “Great Pyramid metrology,” defined as “a belief that the Egyptians had inscribed the inch as a sacred unit of measurement in the design of their famed structures. … Over the 1870s and 1880s, pyramid metrology channeled much of the opposition to the metric system in the United States.” Lest this seem a little wacky to us, remember that this is a time when scientists and engineers were also exploring mesmerism and divining rods. To put it another way, being a logically rigorous scientist or engineer in one area does not rule out more imaginative approaches to other topics, then or now.

The central practical issue became what economists call “path dependency.” Imagine two different paths for standardization. Perhaps in the abstract one is preferable. But if you have already committed to the alternative path, and all your machine tools and existing equipment are based on it, and all your workers and suppliers and customers are using it, then the costs of switching over are formidable. Indeed, the longer you wait to make the switch, the more committed you are to the path you are on. For example, if you have laid down pipelines for water and oil measured in an inch-based system, as well as set up train tracks and rolling stock based on that system, then you are going to have physical equipment for an inch-based system around for decades.

The metric issue kept bubbling along. “In 1896, the House of Representatives considered a bill that mandated the immediate, exclusive use of the metric system in the federal government, with the rest of the country to follow suit a few years later.” It almost passed, with relatively little attention, but there was concern that the risk of disrupting industrial production in an election year made it a political loser. When the US Bureau of Standards was created in 1901, the administrators preferred the metric system, but engineers and big companies pushed back hard.

Remember that by the early 20th century, this argument had been going on for decades. US industry had already been firmly committed to an inch-based system of measurement back in the 1860s and 1870s; by the early 1900s, the idea of redoing all of its capital stock in the metric system seemed crazy. Indeed, US manufacturing was so dominant in the world at this time that US companies routinely exported inch-based equipment to companies in countries that were nominally on the metric system. Some US manufacturers even argued that the unique inch-based measurement system helped to protect them from foreign competitors.

Bills for the metric system kept coming up in Congress in the early 1900s, and being shot down. Mihm writes:

In 1916, these efforts culminated in the creation of a new anti-metric organization known as the American Institute of Weights and Measures. … Much of its success can be attributed to a sophisticated public relations campaign. It placed advertisements and editorials in industry journals; successfully lobbied hundreds of trade associations, chambers of commerce, and technical societies to go on the record condemning mandatory use of the metric system; and obsessively monitored legislation on the local, state, and national levels. When the group identified a bill that endorsed mandatory metric conversion—or merely contained clauses that opened the door to greater reliance on the metric system—it mobilized hundreds of industrialists, engineers, and managers to defeat the legislation with letters, testimony, and editorials. By the 1920s, its membership rolls included many of the most important firms in the nation as well as presidents of the National Association of Manufacturers, the Association of American Steel Manufacturers, the American Railroad Association, and other national organizations. These organizations had a stake in standardization, actively joining government-sponsored efforts to bring further uniformity to the nation’s economy over the course of the 1920s. As inch-based standards governing everything from automobile tires to pads of paper became the norm, the prospects for going metric became ever more remote. Only in scattered pockets of the business community—the electrical field, for example, and pharmaceuticals—did the metric system become dominant.

We have now reached an odd point in the US experience where two measurement systems co-exist: the inch-based traditional system, along with pints and gallons, ounces and pounds, is how most Americans talk, most of the time, in ordinary life, but the metric system is how all science and most business operates (with the exception of the building trades). Many Americans step back and forth between the two systems of measurement every day in their personal and work lives, barely noticing.

Some readers will be interested to know that this issue of Business History Review has other papers about standardization. Here’s the list of papers. The introductory essay by Yates and Murphy is open access:

Robert E. Lucas on Monetary Neutrality: A 50th Anniversary

Back in 1972, Robert E. Lucas (Nobel ’95) published a paper called “Expectations and the neutrality of money,” in the Journal of Economic Theory (4:2, 103–124). The paper was already standard on macroeconomics reading lists when I started graduate school in 1982, and I suspect it’s still there. For the 50th anniversary, the Journal of Economic Methodology (29:1, 2022) has published a six-paper symposium on Lucas’s work and the 1972 paper in particular.

Reading the heavily mathematical 1972 paper isn’t easy, and summarizing it isn’t easy, either. But at some substantial risk of oversimplifying, it addresses a big question: Why does policy by a central bank like the Federal Reserve affect the real economy? There is a widely-held belief (backed by a solid if not indisputable array of evidence) that in the long run, money is a “veil” over real economic activity: that is, money facilitates economic transactions, but over time it is preferences and technologies, working through forces of supply and demand, that determine real economic outcomes. To put it another way, changes in money will alter the overall price level over long-term time horizons, but money is “neutral” with respect to real economic outcomes.

However, when the Federal Reserve or another central bank conducts monetary policy, it clearly does have an effect on the real economy. When a central bank lowers interest rates and makes credit available in a recession, the length of the recession seems diminished. Today, the concern is that if the central bank raises interest rates, it may cause an economic slowdown or recession. Apparently money is not just a veil over real activity, at least not in the short run. But why not?

One possible answer here is that people are bad at anticipating the future. Thus, when the Fed stimulates the economy in the short-run, people don’t recognize that this stimulus might lead to inflation. When the Fed was spurring the economy during the pandemic recession in 2020 and into 2021, relatively few people were anticipating higher inflation. But for economists, the theory that monetary policy depends on people and markets being perpetually bad at understanding what’s going on feels like, at best, a partial answer.

Thus, in the 1972 paper, Lucas tried a different approach. He wanted to construct an example–that is, a model–of an economy where all the agents are fully rational. For economists, “rational” doesn’t mean you are always correct. It just means that you take advantage of all available information in making decisions–and as a consequence, you won’t make the same mistake over and over again. Thus, the central bank can’t “fool” these rational agents by juicing up the economy with low interest rates.

However, that phrase “all available information” opens up a possibility. What if economic agents do not and cannot have full information about what’s happening in the economy? In particular, say that when a number of prices rise, it’s hard for an economic agent to know if this is a “real” change because of forces of supply and demand or if it’s a “monetary” change of generally higher price levels. In summer and fall 2021, for example, it was hard to tell whether the higher prices were a “real” result of supply chains fracturing, or a “monetary” result of an overstimulated economy and a generalized inflation.
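The logic of that inference problem can be written down in a compact way. This is my own stylized sketch of the textbook signal-extraction setup, not notation taken from the 1972 paper: an agent observes a price movement that is the sum of a real disturbance and a monetary disturbance, and must guess how much of it is real.

```latex
% Observed (log) price movement = real disturbance + monetary disturbance:
p = z + m, \qquad z \sim N(0, \sigma_z^2), \qquad m \sim N(0, \sigma_m^2)

% With the two shocks independent and individually unobservable, the
% best (least-squares) guess about the real component is only a
% fraction of the observed price movement:
E[z \mid p] = \frac{\sigma_z^2}{\sigma_z^2 + \sigma_m^2}\, p
```

Because that weight is strictly between zero and one, a purely monetary disturbance gets partly misread as a real one, and producers respond to it with real output changes. That is exactly the short-run non-neutrality the model generates.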

At the end of the 1972 paper, Lucas writes:

These rational agents are thus placed in a setting in which the information conveyed to traders by market prices is inadequate to permit them to distinguish real from monetary disturbances. In this setting, monetary fluctuations lead to real output movements in the same direction. In order for this resolution to carry any conviction, it has been necessary to adopt a framework simple enough to permit a precise specification of the information available to each trader at each point in time, and to facilitate verification of the rationality of each trader’s behavior. To obtain this simplicity, most of the interesting features of the observed business cycle have been abstracted from, with one notable exception: the Phillips curve emerges not as an unexplained empirical fact, but as a central feature of the solution to a general equilibrium system.

This Lucas paper is heavy on mathematics and will be a tough read for the uninitiated. It has come to be called an “island economy” model (although the word “island” doesn’t actually appear in the 1972 paper) because, in a mathematical sense, the economic actors have a hard time distinguishing between what is happening on their own “island” and what is happening in the broader economy. Information is spread out. It’s hard to perceive accurately what’s happening.
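For readers who prefer a numerical illustration, here is a minimal simulation of the signal-extraction logic. It is entirely my own sketch, not code from any of these papers: producers observe only the sum of a real shock and a monetary shock, respond with the optimal linear guess about the real component, and as a result purely monetary noise moves real output.

```python
import random

random.seed(0)

SIGMA_Z = 1.0   # std. dev. of "real" (relative-price) shocks
SIGMA_M = 2.0   # std. dev. of "monetary" (aggregate) shocks

# Optimal signal-extraction weight: E[z | p] = k * p when p = z + m
# and the two shocks are independent, mean-zero normals.
k = SIGMA_Z**2 / (SIGMA_Z**2 + SIGMA_M**2)

def supply_response(p):
    """Output response of a producer who raises output only for the
    portion of the observed price move attributed to a real shock."""
    return k * p

# Simulate purely monetary shocks (the real shock z is zero). Producers
# still respond, because they cannot tell the shocks apart.
monetary_only = [random.gauss(0, SIGMA_M) for _ in range(10_000)]
outputs = [supply_response(m) for m in monetary_only]

avg_abs_response = sum(abs(y) for y in outputs) / len(outputs)
print(f"signal-extraction weight k = {k:.2f}")
print(f"average |output response| to purely monetary shocks: {avg_abs_response:.3f}")
```

Note that as SIGMA_M grows relative to SIGMA_Z, the weight k shrinks: when monetary noise dominates, rational producers respond less to any given price movement, which is the flavor of the model’s famous policy implication.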

The emphasis on what information people have, and how they form their expectations about inflation, filtered through decades of research since 1972, has had a substantial effect on how real-world monetary policy is conducted. It meant that monetary policy put a greater emphasis on expectations of inflation and how those expectations are formed. At present, for example, a key question for the Federal Reserve is the extent to which expectations of a higher inflation rate are becoming embedded throughout the economy in price-setting, wage-setting, and interest rates, because entrenched inflationary expectations would pose a different policy problem. The emphasis in monetary policy on rules that will be followed over time (like a target inflation rate of 2%) and on “forward guidance” about how monetary policy will evolve in the future both focus on addressing people’s expectations about future inflation. Keeping central banks reasonably independent from the political process can also be viewed as a way of reassuring people that even if politicians with a short-run focus control federal spending and borrowing, the central bank will be allowed to follow its own course, which again influences the expectations that people have about future inflation.

I should add that the Lucas (1972) paper became just one entry in a vast literature exploring the reasons why it might be hard for people to distinguish between changes in prices that arise from supply and demand and changes that are part of an overall inflation. For example, some prices might be preset for a certain time by past contracts, and when you know that some prices cannot adjust for a certain time but others can, figuring out the real and monetary distinctions again becomes tricky.

In the Journal of Economic Methodology symposium, my guess is that a number of economist-readers may be most interested in the personal essays by Thomas J. Sargent (Nobel ’11) on “Learning from Lucas” and Harald Uhlig on “The lasting influence of Robert E. Lucas on Chicago economics,” both of which are full of descriptions of how Lucas influenced the intellectual journeys of the authors.

I was particularly interested in the first essay in the symposium, “Lucas’s way to his monetary theory of large-scale fluctuations,” by Peter Galbács. The focus here is not on the legacy of Lucas’s work but on the earlier research leading up to it. Some of this ground is also covered in Lucas’s Nobel lecture on “Monetary Neutrality.” Galbács writes in the conclusion:

The way Lucas arrived at his monetary island-model framework was thus a step-by-step process starting in the earliest stage of his career. The first step was the choice-theoretic analysis of firm behaviour. At this stage, Lucas’s focus was on the firm’s investment decision through which he distinguished short-run and long-run reactions of the firm and the industry. The climax of this period is his Adjustment costs and the theory of supply (Lucas, 1966/1967a) that contained the basic supply-and-demand framework that Lucas and Rapping (1968/1969a; 1968/1969b; 1970/1972a) shortly extended to labour market modelling – so Lucas’s work with Rapping is rooted in his earlier record in firm microeconomics. As they assumed, the household decides on short-run labour supply on the basis of a given set of price and wage expectations, while it adjusts to long-run changes with a firm-like investment decision that implies the revision of expectations.

After this second step taken in labour market modelling, the third stage realizing his Expectations and the neutrality of money (Lucas, 1970/1972a) directly followed – although the complexity of influences renders the connection with the previous phase subtle (Lucas, 2001, p. 21). His monetary island model was a ‘spin-off’ from his work with Rapping (Lucas’s letter to Edmund S. Phelps, November 7, 1969. Box 1A, Folder ‘1969’), but the paper may be more appropriately regarded as a spin-off from the related impressions Lucas received. First of all, he needed the very island-model framework. It is Phelps (1970, pp. 6–9) who called his attention to the option of reformulating the decision problem by scattering the agents over isolated markets, while it is Cass who led Lucas to a correct mathematical exposition. However, it is Prescott who in their collaboration prepared Lucas for this exposition; and it is also Prescott who, teamed up with Lucas, provided the paradigmatic example of applying the Muthian rational expectations hypothesis in a stochastic setting with which Lucas (1966/1981b) had formerly dealt only in the less interesting non-stochastic case. As the present paper argued, Lucas’s monetary island model is thus the unification of the impressions Lucas gained during his graduate studies at the University of Chicago, and later from Rapping, Phelps, Cass and Prescott during his years at Carnegie Mellon University.

Here’s the full Table of Contents for the symposium, which requires a subscription to access:

Spring 2022 Journal of Economic Perspectives Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Spring 2022 issue, which in the Taylor household is known as issue #140. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.


Symposium on Macro Policy in the Pandemic

“A Social Insurance Perspective on Pandemic Fiscal Policy: Implications for Unemployment Insurance and Hazard Pay,” by Christina D. Romer and David H. Romer

This paper considers fiscal policy during the pandemic through the lens of optimal social insurance. We develop a simple framework to analyze how government taxes and transfers could mimic the insurance that people would like to have had against pandemic income losses. Permutations of the framework provide insight into how unemployment insurance should be structured, when and how much hazard pay is called for, and whether fiscal policy should aim just to redistribute income or also to stimulate aggregate demand during a pandemic. When we use the insights from the model to evaluate unemployment insurance measures taken during the pandemic, we find that some, but far from all, of the implications of the social insurance framework were followed. In the case of hazard pay, we find that the proposal for a national program (the never-implemented HEROES Act) was both broader and more generous than a social insurance perspective would call for. We suggest that the social insurance perspective on fiscal policy is likely to become increasingly relevant as pandemics and climate-related natural disasters become more common causes of unemployment and recessions.

Full-Text Access | Supplementary Materials

“Should We Insure Workers or Jobs during Recessions?” by Giulia Giupponi, Camille Landais and Alice Lapeyre

What is the most efficient way to respond to recessions in the labor market? To this question, policymakers on the two sides of the pond gave diametrically opposed answers during the COVID-19 crisis. In the United States, the focus was on insuring workers by increasing the generosity of unemployment insurance. In Europe, instead, policies were concentrated on saving jobs, with the expansion of short-time work programs to subsidize labor hoarding. Who got it right? In this article, we show that far from being substitutes, unemployment insurance and short-time work exhibit strong complementarities. They provide insurance to different types of workers and against different types of shocks. Short-time work can be effective at reducing socially costly layoffs against large temporary shocks, but it is less effective against more persistent shocks that require reallocation across firms and sectors. We conclude that short-time work is an important addition to the labor market policy-toolkit during recessions, to be used alongside unemployment insurance.

Full-Text Access | Supplementary Materials

“The $800 Billion Paycheck Protection Program: Where Did the Money Go and Why Did It Go There?” by David Autor, David Cho, Leland D. Crane, Mita Goldar, Byron Lutz, Joshua Montes, William B. Peterman, David Ratner, Daniel Villar and Ahu Yildirmaz

The Paycheck Protection Program (PPP) provided small businesses with roughly $800 billion dollars in uncollateralized, low-interest loans during the pandemic, almost all of which will be forgiven. With 94 percent of small businesses ultimately receiving one or more loans, the PPP nearly saturated its market in just two months. We estimate that the program cumulatively preserved between 2 and 3 million job-years of employment over 14 months at a cost of $169K to $258K per job-year retained. These numbers imply that only 23 to 34 percent of PPP dollars went directly to workers who would otherwise have lost jobs; the balance flowed to business owners and shareholders, including creditors and suppliers of PPP-receiving firms. Program incidence was ultimately highly regressive, with about three-quarters of PPP funds accruing to the top quintile of households. PPP’s breakneck scale-up, its high cost per job saved, and its regressive incidence have a common origin: PPP was essentially untargeted because the United States lacked the administrative infrastructure to do otherwise. Harnessing modern administrative systems, other high-income countries were able to better target pandemic business aid to firms in financial distress. Building similar capacity in the U.S. would enable improved targeting when the next pandemic or other large-scale economic emergency inevitably arises.

Full-Text Access | Supplementary Materials

Symposium on Economics of Slavery

“American Enslavement and the Recovery of Black Economic History,” by Trevon D. Logan

This paper reconsiders the evidence needed to answer pressing questions of economic history and racial inequality, the Third Phase of research on American Enslavement and its Aftermath. First, I briefly summarize how economists have sought to understand slavery as an institution. Second, using my family’s narrative as a lens, I show how answers to questions from economic history and economic theory can be answered by expanding our evidentiary base and methodological approaches. In the process, I highlight some areas of what these “traditional” economic perspectives miss. Finally, I briefly provide some examples from other fields—such as recent work by historians—that have sought to provide texture on some of the key dimensions of slavery and racial inequality that have been under-studied by economists.

Full-Text Access | Supplementary Materials

“The Cumulative Costs of Racism and the Bill for Black Reparations,” by William Darity Jr., A. Kirsten Mullen and Marvin Slaughter

Two major procedures for establishing the monetary value of a plan for reparations for Black American descendants of US slavery are considered in this paper: 1) Enumeration of atrocities and assignment of a dollar value to each as a prelude to adding up the total, and 2) Identification of a summary measure that captures the dollar amount of the cumulative, intergenerational effects of anti-Black atrocities. Under the first approach, the itemization strategy, we assess wage costs to the enslaved of bondage; financial gains to the perpetrators of slavery; damages to Black victims of post-Civil War white massacres and lynchings; losses from discrimination in the provision of the home buying supports from the Federal Housing Administration and the G.I. Bill; and income penalties due to racial discrimination in employment. Under the second approach, the global indicator strategy, we calculate the present value of providing 40 acres of land to freed slaves in 1865 and the current wealth gap between Black and White Americans. We conclude that the latter standard, the racial wealth gap, provides the best gauge for the size of the bill for Black reparations.

Full-Text Access | Supplementary Materials

“Slavery and the Rise of the Nineteenth-Century American Economy,” by Gavin Wright

The essay considers the claim that slavery played a leading role in the acceleration of US economic growth in the nineteenth century. Although popular among pro-slavery apologists, the proposition fails under rigorous historical scrutiny. The slave South discouraged immigration, underinvested in transportation infrastructure, and failed to educate the majority of its population. It is not even clear that the region produced more cotton than it would have under a counterfactual alternative settlement by free family farmers, on the free-state pattern. The grain of truth in recently popular narratives is that many northerners and business interests were complicit in the crime of slavery: routinely engaging in transactions with slaveholders, even promoting activities that facilitated slavery and the domestic slave trade. Complicity complicates simple historical moralism, but it is quite different from the notion that the prosperity of the nation as a whole derived from slavery in any fundamental way.

Full-Text Access | Supplementary Materials

Symposium on Childhood Interventions

“Children and the US Social Safety Net: Balancing Disincentives for Adults and Benefits for Children,” by Anna Aizer, Hilary Hoynes and Adriana Lleras-Muney

Economic research on the safety net has evolved over time, moving away from a focus on the negative incentive effects of means-tested assistance on employment, earnings, marriage, and fertility to include the potential positive benefits of such programs to children. Initially, this research on benefits to children focused on short-run impacts, but as we accumulated knowledge about skill production and better data became available, the research evolved further to include important long-run economic outcomes such as employment, earnings, and mortality. Once the positive long-run benefits to children are considered, many safety net programs are cost-effective. However, the current government practice of limiting the time horizon for cost-benefit calculations of policy initiatives often fails to take this into account. Finally, we discuss why child poverty in the United States is still higher than most OECD countries and how research on children and the safety net can better inform policy-making.

Full-Text Access | Supplementary Materials

“Universal Early-Life Health Policies in the Nordic Countries,” by Miriam Wüst

Given mounting evidence on the negative impact of early-life shocks for the wellbeing of people over the life course, a growing economics literature studies whether early-life policies have symmetric positive effects. This paper zooms in on research on this topic from the Nordic countries, where all families have access to a comprehensive set of early-life health programs, including prenatal, maternity, and well-infant care. I describe this Nordic model of universal early-life health policies and discuss the existing evidence on its causal effects from two categories of studies. First, studying the introduction of universal policies, research has documented important short- and long-run benefits for the health, education, and labor market trajectories of treated cohorts. Second, exploiting modern-day changes to policy design, research for now documents short- and medium-run impacts of universal care on primarily maternal and child health as well as parental investment behaviors. I conclude with directions for future research.

Full-Text Access | Supplementary Materials

“Inequality in Early Care Experienced by US Children,” by Sarah Flood, Joel McMurry, Aaron Sojourner and Matthew Wiswall

Using multiple datasets on parental and non-parental care provided to children up to age six, we quantify differences in American children’s care experiences by socioeconomic status (SES), proxied primarily with maternal education. Increasingly, higher SES children spend less time with their parents and more time in the care of others. Non-parental care for high-SES children is more likely to be in childcare centers, where average quality is higher, and less likely to be provided by relatives, where average quality is lower. Even within types of childcare, higher-SES children tend to receive care of higher measured quality and higher cost. Inequality is evident at home as well: measures of parental enrichment at home, from both self-reports and outside observers, are on average higher for higher-SES children. Parental and non-parental quality are positively correlated, leading to substantial inequality in the total quality of care received from all sources in early childhood.

Full-Text Access | Supplementary Materials

“Economics of Foster Care,” by Anthony Bald, Joseph J. Doyle Jr., Max Gross and Brian A. Jacob

Foster care provides substitute living arrangements to protect maltreated children. The practice is remarkably common: it is estimated that 5 percent of children in the United States are placed in foster care at some point during childhood. This paper describes the main tradeoffs in child welfare policy and provides background on policy and practice most in need of rigorous evidence. Trends include efforts to prevent foster care on the demand side and to improve foster home recruitment on the supply side. With increasing data availability and a growing interest in evidence-based practices, there are opportunities for economic research to inform policies that protect vulnerable children.

Full-Text Access | Supplementary Materials


Retrospectives: “Joan Robinson on Karl Marx: `His Sense of Reality Is Far Stronger,'” by Carolina Alves

This paper revisits why Joan Robinson turned to Karl Marx in 1942 and which insights from Marxian economics she sought to incorporate into her later works, while commenting on how her encounter with Marx was received by some of her contemporaries. By the end of the 1930s, Robinson wanted to bring academic and Marxian economics together in a search for a more realist theory of the rate of profit and income distribution, along with clarifications on Keynes’s concept of full employment and the nature of technical progress and a long-period theory within the Keynesian framework. The result, An Essay on Marxian Economics (1942), was her most important work in terms of laying the foundations of her enduring challenge to the orthodox economics. Here she relied on Marxian insights to escape Marshallian orthodoxy. It is the story of how the originator of imperfect competition pushed further into a theory of exploitation.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

The Pandemic Response: Policy Lessons

The actual economic recession connected with the COVID pandemic turned out to be extremely short, lasting only during March and April 2020. Of course, the dislocations and restrictions associated with the pandemic in areas like health, jobs, sectors of the economy, online education, and travel have continued in various forms since then. But focusing on the economic issues, what have we learned? Recession Remedies: Lessons Learned from the U.S. Economic Policy Response to COVID-19, edited by Wendy Edelberg, Louise Sheiner, and David Wessel and freely available online, provides nine essays on different aspects of the economic policy response.

Here’s an overview of some of the issues from “Lessons Learned from the Breadth of Economic Policies during the Pandemic,” by Wendy Edelberg, Jason Furman, and Timothy F. Geithner.

The U.S. economy experienced a V-shaped recovery of a type not seen in recent recessions. Real Gross Domestic Product (GDP) exceeded its pre-pandemic level by the second quarter of 2021 and was close to pre-pandemic estimates of potential by the fourth quarter of 2021. The unemployment rate ended 2021 below 4.0 percent, just slightly above where it was two years earlier, prior to the pandemic. …

Overall, the United States’ fiscal response appears to have been much larger than the response undertaken by any other country; this was especially true in 2021, when fiscal policy was as supportive as it was in 2020. The U.S. GDP recovery has been among the strongest of any of the advanced economies, but the U.S. employment recovery has been among the weakest; this suggests that both the size of the response and, perhaps, its character and preexisting institutions all matter. …

The economy experienced major side effects from the pandemic and associated policy response, most notably the highest inflation rate in 40 years, far outpacing the increase in wages and leading to the largest real wage declines in decades. In addition, the U.S. government incurred substantial debt during the pandemic. With the expiration of most forms of fiscal support, real household income is likely to be lower in 2022 than in 2021 and could well be below its pre-pandemic trend. As a result, poverty is on track to rise in 2022. Moreover, inflationary pressures and the efforts to moderate those pressures might bring an end to the expansion.

Ultimately, the economic policy response to the COVID-19 recession should be judged not just by its consequences in the spring of 2020, nor by what happened over the next two years, but also by the longer-term effects, and whether the response will prove to have contributed to a stronger and more sustainable economy going forward. …

Here is a nonexhaustive list of the lessons I took away from the essays in the book. I’ll list the table of contents for the book below.

1) When the pandemic recession first hit, the effects were severe and there was no good sense of how long it might last. Thus, the priority of economic policy was to go big and fast: in particular, policies pushed out large sums of money through rebates, stimulus payments, unemployment insurance, and other channels. Some of these handed out money in essentially untargeted ways. As one example, the Paycheck Protection Program funneled several hundred billion dollars to businesses with fewer than 500 employees, with the idea that it would protect jobs; but given that it was essentially free money from the government, a lot of it ended up going to the owners of the firms. Economic policy in early 2020 faced a choice between targeting and speed, and mostly chose speed.

2) The early goal of economic policy in March and April 2020 was not really to help the economy recover: it was to help large parts of the economy shut down, to minimize the chance of the pandemic spreading, while still trying to support incomes.

3) The economic recovery from the pandemic happened faster than many people expected. Thus, when Joe Biden took the presidential oath of office in January 2021, there was a widespread sense that additional fiscal stimulus was needed. But the recession had ended in April 2020, and the vaccines had arrived. In fact, the US economy in early 2021 was in a quite different place from a year earlier. In a crisis, there is sometimes a sense that “you can never do too much.” But continuing and extending federal support payments in 2021, as if it were still 2020, was a mistake and a contributor to the launch of inflation.

4) Compared with the EU experience, the US job market fell much more steeply. One reason was that US payments to unemployed workers were very high, sometimes more than 100% of previous wages, while payments in European countries typically replaced about 70-90% of lost income. Second, European countries emphasized “short-time work” policies, which are like part-time unemployment. The idea was that instead of having a company lay off some workers completely, the company could reduce the hours of all workers, with the government then making up much of the difference in pay. Such policies seek to preserve employer-employee ties, with the idea that such ties make it easier to return to work–and much easier for the employer to bring employees back to their jobs. There are longstanding arguments about the merits of subsidizing workers via unemployment insurance versus subsidizing jobs with short-time work programs. There is probably a role for both approaches–but during a short, sharp pandemic shock, short-time work has some real benefits.
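As a back-of-the-envelope sketch (my own hypothetical numbers, not figures from the book), the two approaches can deliver similar total worker income while differing sharply in how many employer-employee ties survive:

```python
# Toy comparison of two ways to absorb a 20 percent labor-demand shock.
# All parameters are invented for illustration, not data from the book.

def layoff_scenario(workers, wage, cut, ui_replacement):
    """Lay off a fraction `cut` of workers; the laid-off receive UI."""
    laid_off = workers * cut
    employed_income = (workers - laid_off) * wage
    ui_income = laid_off * wage * ui_replacement
    return employed_income + ui_income

def short_time_scenario(workers, wage, cut, subsidy_share):
    """Cut everyone's hours by `cut`; government replaces part of lost pay."""
    hours_pay = workers * wage * (1 - cut)
    subsidy = workers * wage * cut * subsidy_share
    return hours_pay + subsidy

# 100 workers earning $1,000 per week; the firm needs a 20 percent cut.
layoff = layoff_scenario(100, 1000, 0.20, ui_replacement=0.80)
short_time = short_time_scenario(100, 1000, 0.20, subsidy_share=0.80)
print(f"Total worker income, layoffs + UI:    ${layoff:,.0f}")
print(f"Total worker income, short-time work: ${short_time:,.0f}")
```

With an 80 percent replacement rate in both cases, workers collectively receive the same $96,000 either way; the difference is that short-time work keeps all 100 workers attached to the firm, while layoffs sever 20 of those ties.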

5) Near the start of the pandemic recession, there was concern that state and local governments might face severe strains, but the eventual result was more mixed. Louise Sheiner writes:

So, what happened to state and local government revenues, employment, and spending during the first two years of the pandemic? Revenues did not decline nearly as much as had been first feared and federal aid was more than sufficient to offset any revenue losses in every state. Nevertheless, state and local government employment declined sharply, and the decline has been quite persistent: employment by state and local governments in February 2022 was three percent below the January 2020 level. Looked at another way, in February 2022, the state and local sector accounted for 23 percent of the shortfall in U.S. employment from its pre-pandemic trend. … Overall, it seems clear that the employment losses vary a lot by state in ways that cannot fully be explained. … [G]enerous federal aid to states was clearly not sufficient to reverse or prevent all the employment losses. One important question is, why not? What did state and local governments do with the federal aid, and why didn’t they use it to increase employment?

6) The vulnerabilities of the US financial system had played a large role in propagating some recent recessions, including the Great Recession of 2008-2010 and the 1991 recession, which had some links to the collapse of the savings and loan industry. But in the pandemic recession, the US banking system performed just fine. A large part of the reason is that the rules put into place after the Great Recession, governing the capital and safety standards that banks needed to meet, were effective. The Federal Reserve also played a role in extending short-run credit and making sure that financial markets didn’t freeze in place, especially in March 2020, but overall, the story in the financial sector is the success of the earlier reforms.

The book often returns to the theme that the next recession is likely to come from its own idiosyncratic cause–that is, not from a pandemic–and it is worth thinking about what policies might be put in place now that would kick in automatically when that recession hits. Here’s the table of contents for the book as a whole:


China’s Move to a Central Bank Digital Currency

China is taking the lead in moving to a central bank digital currency. It’s not altogether clear how much the US and other high-income countries should be worried about this. Sometimes it’s better to be the one who watches someone else go first, and then learns from their experience. For sorting out the benefits and risks, a useful starting point is Digital Currencies: The US, China, and the World at a Crossroads, edited by Darrell Duffie and Elizabeth Economy, based on the discussions of a task force convened at the Hoover Institution.

I’ve described the central bank digital currency controversies before at this blog, but it’s probably useful to review. What we’re talking about here is how payments from one party to another are made behind the scenes–debit cards, credit cards, direct deposit, even old-fashioned paper checks. Duffie and Economy describe how the “bank-railed” systems of the past have worked:

For centuries, the world has relied on banks to handle the vast majority of payments via a straightforward and generally safe method. In the simplest common cases, a bank-railed payment system works like this: Alice pays Bob $100 by instructing her bank to deduct $100 from her bank account and to deposit $100 into Bob’s account at his bank. The instruction can take the form of the tap of a credit or debit card, a wire transfer, or a paper check, among other methods. In some countries, including the United States, the payment medium—bank deposits—is extremely safe, and banks take reasonable care to protect the privacy of their customers and monitor the legality of payments.

As shown in figure 1.1, many countries have been upgrading bank-railed payments by introducing “fast-payment systems,” which can make instant payments possible around the clock, largely eliminating costly delays and payment risks. The United States has a fast-payment system provided by a consortium of large banks. Not satisfied that the bank-provided solution will be sufficient, the US central bank, the Federal Reserve, will introduce its own fast-payment system, FedNow, by 2024.

With this and certain other improvements in traditional payment systems, why are most countries now considering radically disrupting their bank-railed payment systems by introducing CBDCs, or by accommodating other kinds of digital currencies? The answer is that most central banks have begun to question whether merely upgrading their bank-railed payment systems will be enough to meet the challenges of the future digital economy. They have also begun to consider whether to encourage, and how to regulate, private sector fintech innovations such as stablecoins. Moreover, some in the official sector are concerned about whether banks face sufficient competition for providing innovative and cost-efficient payment services.

How would a central bank digital currency work differently? Duffie and Economy explain:

Often in response to private fintech innovations or the declining use of paper money, some central banks are developing CBDCs. A CBDC is a deposit in the central bank that can be used to make payments. For example, Alice can pay Bob $100 by shifting $100 out of her central bank account and into Bob’s central bank account, whether on an internet website, a mobile phone app, or a payment smart card, among other methods. Depending on their designs, CBDCs can also be used for offline payments, meaning without access to the internet or a phone network. In many cases, Alice and Bob would obtain their CBDC and the necessary application software (“apps”) from private sector firms such as banks, even though the CBDC itself is a claim against the central bank. A general purpose CBDC, often called a “retail” CBDC, would be available to anyone and accepted by anyone, much like paper currency but allowing for greater efficiencies and a wider range of uses. Special-purpose CBDCs can also improve the efficiency of wholesale financial transaction settlements and cross-border payments. …

Most CBDCs currently being developed adopt a hybrid model, according to which the central bank issues the CBDC to banks and other payment service providers, which in turn distribute the CBDC to users throughout the economy and provide them with account-related services.
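As a toy illustration (my own sketch, not from the report; the names and balances are invented), the mechanics of the two rails are the same debit-and-credit; what changes is whose ledger the balances live on:

```python
# Minimal ledger sketch contrasting a bank-railed payment with a retail
# CBDC payment. Accounts and amounts are hypothetical.

bank_ledger = {"Alice": 500, "Bob": 200}   # deposits at commercial banks
cbdc_ledger = {"Alice": 500, "Bob": 200}   # balances held at the central bank

def transfer(ledger, payer, payee, amount):
    """Debit the payer and credit the payee on whichever ledger settles the payment."""
    if ledger[payer] < amount:
        raise ValueError("insufficient funds")
    ledger[payer] -= amount
    ledger[payee] += amount

# Bank-railed: commercial banks' deposit ledgers settle the payment.
transfer(bank_ledger, "Alice", "Bob", 100)

# CBDC: the identical debit/credit, but the balances are claims on the
# central bank, even if Alice's app comes from a private bank or fintech.
transfer(cbdc_ledger, "Alice", "Bob", 100)

print(bank_ledger)   # {'Alice': 400, 'Bob': 300}
print(cbdc_ledger)   # {'Alice': 400, 'Bob': 300}
```

The bookkeeping is identical; the economic difference is in the liability. A bank deposit is a claim on a commercial bank, while a CBDC balance is a claim on the central bank itself, even when a private firm provides the app.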

In some ways, this doesn’t sound like much of a change. It sounds as if payments would still go through banks, but now, behind the scenes, the accounts would be settled with the CBDC. How does this approach provide any gains?

The short-term gain for US consumers is that payments could be faster and cheaper. Duffie and Economy write:

North Americans pay over 2 percent of their GDP for payment services, according to data from McKinsey, more than most of the rest of the world pays, particularly because of extremely high fees for credit cards. US payments are also processed and cleared slowly, often taking more than a day before they can be used by the recipient. And Americans’ primary payment instrument, bank deposits, is compensated with very low interest rates relative to wholesale money-market rates.

The longer-term benefit has to do with financial innovation and competition. Imagine that we shift away from a “bank-railed” system, in which financial transactions ultimately settle between banks, so that other financial technology companies could also hold a CBDC account at the central bank. A number of US companies are among the innovators in payment systems. Duffie and Economy mention “Arbitrum, Avanti Bank, Betterfin, Celo, Chime, Circle (USD Coin), Coinbase, Diem, Electric Capital, Imperial PFS, Jiko, JP Morgan, Mobile Coin, Optimism, Paxos, Plaid, Polygon, R3, Ripple, SoFi, Stellar, Topl, Varo Bank, Venmo, Yodlee, and Zelle.” But whatever services these firms offer, in the US economy they are ultimately, behind the scenes, operating through banks. If they instead could hold CBDC accounts at the central bank, new types of financial competition could be unlocked.

Duffie and Economy emphasize that when it comes to financial technology and payments technology, large Chinese firms have taken the lead: “In China, for example, 94 percent of mobile payments are now processed by Alipay and WeChat Pay, with 90 percent of residents of China’s largest cities using these services as their primary method of payment … Building on Alipay, the Ant Group provides relatively low-cost and widely accessible small-business credit, wealth management, and insurance. Alipay now reaches small-tier cities and many rural areas in China.” In addition, China has been experimenting in major cities with a new central bank digital currency, the e-CNY, and plans to launch it more broadly later this year.

China’s public posture is that the e-CNY is really just for domestic use. But one potential concern for the United States is that the system would, at least in concept, allow international payments as well in a way that would circumvent the SWIFT (Society for Worldwide Interbank Financial Telecommunications) system that is now the standard coordinator for international financial payments–and also a primary tool for imposing financial sanctions on other countries.

I’ve written about the risks of a CBDC in the past, and won’t repeat it all here. Might such a system pose risks to conventional banks? If conventional banks face standard financial regulation, with its costs and requirements, what regulations are appropriate for non-banks that would have a pipeline into their own account at the Fed? For example, banks face “know-your-customer” rules aimed at limiting money-laundering or financing other illegal activities. Would all the non-banks need to abide by similar rules? If not, do these nonbank financial firms create a risk of financial instability? A CBDC based on US dollars would be one of the preeminent targets for computer hackers all over the world. How would a CBDC be related to cryptocurrencies and blockchain-related innovations? What degree of privacy would a CBDC involve? In China, there doesn’t seem to be much of a problem with the idea that the central bank would oversee all the accounts in this system: in the US and other high-income countries, such a step might be more controversial.

Finally, remember that these non-bank financial firms aren’t necessarily just about payments. They might also offer loans, insurance, or investment options, or make other kinds of contractual financial commitments.

I’m not convinced that the Federal Reserve is ready to launch a central bank digital currency, and US financial markets and innovation are rather different from those in China. But China’s experiment with launching the e-CNY is worth watching.

Some Economics of Dominant Currencies

Oleg Itskhoki was just awarded the John Bates Clark medal, given each year by the American Economic Association “to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge.” It’s sometimes called the “baby Nobel,” because its recipients often turn out, later in their careers, to be among the top contenders for a Nobel prize in economics. For those interested in knowing more about Itskhoki’s work in international finance and trade, he offers a readable overview of one slice of his work on “Dominant Currencies” in the most recent NBER Reporter (March 2022). He writes:

While the US dollar accounts for a disproportionate share of international trade, there is a small subset of currencies that are actively used in this trade alongside the dollar, most notably the euro, but to a lesser extent the pound, the Japanese yen, the Swiss franc, and the Chinese yuan. In some bilateral trade flows these currencies play as important a role as the dollar [see Figure 1], with considerable variation in currency use across individual firms even within narrowly defined industries. The dollar and the euro have emerged as the two leading currencies in accounting for international trade flows, with the role of the euro elevated by the fact that a large portion of international trade happens among European countries or involves one of the European countries. A distinctive feature of dominant currencies is that the same currency is equally prevalent in both imports and exports, a feature common to both the dollar and the euro, which is also at odds with standard international macro models that assume a greater role for many currencies to be present in global trade. Nonetheless, a clear distinction between the dollar and the euro is that the dollar in many cases is also a vehicle currency, not used domestically by either the importing or the exporting country. One can thus think of the dollar as the dominant global currency and of the euro as the dominant regional currency.

Here’s the Figure 1 mentioned in the above quotation, which focuses on Belgium. Belgium is a smallish economy in Europe (its GDP is about the same size as that of the American state of Michigan). It uses the euro as its currency and is highly integrated into the economy of Europe and the global economy. The export/GDP and import/GDP ratios for Belgium are about 80%; for comparison, the equivalent ratios for the US economy have been in the range of 12-15% in recent years.

The figure shows the use of the US dollar and the euro for Belgium’s international trade with different countries. The left-hand panel looks at Belgium’s exports. As you can see, Belgium’s exports to the US are mostly invoiced in US dollars. Belgium’s exports to India are about 40% US dollar and 40% euro. Belgium’s exports to Japan are about 30% euro and 15% US dollar–with the rest of those exports probably being invoiced in Japanese yen.

With this kind of data for many countries, a researcher can study the determinants of what currency gets used for what reasons, and thus get insights into why the US dollar and a few others are dominant currencies–and what the effects would be of shifts in dominant currencies. Itskhoki lays out the theories of currency choice, which are based on the canonical three functions fulfilled by money.

Medium-of-exchange theories emphasize that a currency is adopted if it guarantees the lowest transaction costs or maximizes room for mutually beneficial exchange. These theories stress country size as a fundamental force, as well as the likelihood of multiple coordination equilibria and other macroeconomic factors that make it too costly to use currencies of developing countries … Store-of-value theories link currency choice in exports with the currency of financing of the firm as part of a combined risk-management decision. Finally, unit-of-account theories postulate that a price is set in a given currency and is not adjusted in the short run …

One basic insight here is that firms producing in one country and selling in another must figure out how to deal with the risk of shifting exchange rates–and the currency for a given transaction will be chosen with this issue in mind. For example, imagine a Belgian firm that imports lots of inputs from the US, and then exports back to the US. For that firm, using the US dollar to invoice both its imports and its exports means that it is protected from fluctuations in the exchange rate of the US dollar. Similarly, it turns out that firms with cross-border ownership are more likely to invoice in US dollars. Most real-world examples are more complex than this, of course, but the decision about currency ends up involving the extent to which higher costs incurred in one currency can be passed through to a buyer in another currency.
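A toy calculation (my own example, with invented numbers) shows why matching the invoicing currency on both sides insulates the firm's margin:

```python
# Hypothetical firm: revenue of 100 and input costs of 60 per unit of
# output, with profits ultimately counted in euros. Numbers are invented.

def margin(export_rev, import_cost, rev_in_usd, cost_in_usd, eur_per_usd):
    """Profit margin, measured in euros, under a given invoicing mix."""
    rev = export_rev * (eur_per_usd if rev_in_usd else 1.0)
    cost = import_cost * (eur_per_usd if cost_in_usd else 1.0)
    return (rev - cost) / rev

for rate in (0.9, 1.0, 1.1):  # euro value of the dollar swings +/- 10%
    matched = margin(100, 60, True, True, rate)      # imports AND exports in USD
    mismatched = margin(100, 60, False, True, rate)  # exports in EUR, imports in USD
    print(f"EUR/USD={rate}: matched margin {matched:.0%}, mismatched {mismatched:.0%}")
```

When imports and exports are both invoiced in dollars, the 40 percent margin is untouched by a 10 percent swing in the exchange rate in either direction; with mismatched invoicing, the same swing moves the margin between 34 and 46 percent.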

One of the questions I am asked repeatedly is when or whether the Chinese renminbi yuan will become the world’s dominant currency. Such shifts of dominant currency have historically happened only slowly and occasionally. But here’s Itskhoki on how shifting conditions could foster such a change:

One possibility is that the US dollar strengthens its position as the dominant global currency. This could happen with greater globalization of production and more intensive reliance on global value chains; our results show that cross-border foreign direct investment — a proxy for global value chains — is associated with more US dollar currency invoicing. This would render exchange rates less relevant as determinants of relative prices and expenditure switching in the global supply chain. In contrast, fragmentation and localization of production chains, which might happen in response to a global pandemic shock, can reverse this trend and speed up the transition to a multicurrency equilibrium, with more intensive regional trade and greater barriers to cross-regional trade. This, in turn, may increase the expenditure-switching role of bilateral exchange rate movements.

Alternatively, a shift in the exchange rate anchoring policies of the major trade partners, such as China, could trigger a long-run shift in the equilibrium environment. If China were to freely float its exchange rate, encouraging Chinese exporters to price more intensively in renminbi, then the equilibrium environment would change for exporting firms around the world. In particular, this would alter both the dynamics of prices in the input markets as well as the competitive environment in the output markets across many industries. As our results show, the currency in which a firm’s imports are invoiced and the currency in which its competitors price are key determinants of an exporting firm’s currency choice, and hence this shift could dramatically change the optimal invoicing patterns for exporting firms.

Universal Schooling in Developing Countries

There are no examples of countries with generally low levels of educational achievement that are also high-income nations; conversely, there are many examples of countries where a substantial rise in educational achievement was followed by economic growth. Eric A. Hanushek and Ludger Woessmann describe their recent research on basic skills around the world in “The Basic Skills Gap” (Finance & Development, June 2022).

Six stylized facts summarize the development challenges presented by global deficits in basic skills:

1. At least two-thirds of the world’s youth do not obtain basic skills.

2. The share of young people who do not reach basic skills exceeds half in 101 countries and rises above 90 percent in 37 of these.

3. Even in high-income countries, a quarter of young people lack basic skills.

4. Skill deficits reach 94 percent in sub-Saharan Africa and 90 percent in south Asia, but they also hit 70 percent in the Middle East and North Africa and 66 percent in Latin America.

5. While skill gaps are most apparent for the third of global youth not attending secondary school, fully 62 percent of the world’s secondary school students fail to reach basic skills.

6. Half of the world’s young people live in the 35 countries that do not participate in international testing, resulting in a lack of regular foundational performance information.

They write: “The key to improvement is an unwavering focus on the policy goal: improving student achievement. There is no obvious panacea, and effective policies may differ according to context. But the evidence points to the importance of focusing on incentives related to educational outcomes, which is best achieved through the institutional structures of the school system. Notably, education policies that develop effective accountability systems, promote choice, emphasize teacher quality, and provide direct rewards for good performance offer promise, supported by evidence.”

But what specific strategies might be most useful in addressing these issues in developing countries? Justin Sandefur has edited an e-book for the Center for Global Development, including six essays with comments, entitled Schooling for All: Feasible Strategies to Achieve Universal Education.

Around the world, many countries have achieved a substantial increase in primary school enrollment, but only a very modest increase in secondary school enrollment. In Chapter 3, Lee Crawfurd and Aisha Ali make “The Case for Free Secondary Education.” A lot of their proposal comes down to the basics: more teachers and more schools. In many developing countries, students must pass an entrance examination before attending secondary school, and if they pass the exam, they then need to pay fees. Here’s the result:

A common concern, and one of the justifications for charging fees in secondary schools, is that many children from the lowest-income families still face considerable barriers to success in primary school. Free secondary school could thus tend to be a regressive program, benefiting mainly children from higher-income families. A challenge lurking in the background, then, is how to support children from the lowest-income families in their primary education, so they are not already far behind as early as third or fourth grade.

Lee Crawfurd and Alexis Le Nestour look at the evidence on “How Much Should
Governments Spend on Teachers?” They conclude that although hiring more teachers will be necessary if schooling is to expand, there isn’t much evidence that higher pay for the existing teachers will make a substantial difference in the performance of the lowest-income children in the early grades. In that spirit, the notion of feeding children in school comes up in several of the essays, and is the focus of “Chapter 2: Feed All the Kids,” by Biniam Bedasso. He writes:

School feeding programs have emerged as one of the most common social policy interventions in a wide range of developing countries over the past few decades. Before the disruptions caused by the COVID-19 pandemic, nearly half the world’s schoolchildren,
about 388 million, received a meal at school every day (WFP 2020). As such, school feeding is regarded as the most ubiquitous instrument of social protection in the world employed by developing and developed countries alike. But school feeding is also a human capital
development tool. The theory of change for school feeding programs is rooted in the synergistic relationship between childhood nutrition, health, and education
underscored in the integrated human capital approach (Schultz 1997). The stock of human capital acquired as an adult—a key determinant of productivity— is supposed to be a function of the individual and interactive effects of schooling, nutrition, health,
and mobility. … A survey of government officials in 41 Sub-Saharan Africa countries conducted by the Food and Agriculture Organization of the United Nations (FAO) in 2016 shows that a great majority of the countries implement school feeding programs to help achieve education objectives. …

A review of 11 experimental and quasi-experimental studies from low- and middle-income countries reveals that school feeding contributes to better learning outcomes at the same time as it keeps vulnerable children in school and improves gender equity in education. Although school feeding might appear cost-ineffective compared with specialized education or social protection interventions, the economies of scope it generates are likely to make it a worthwhile investment particularly in food-insecure areas.

In short, Bedasso argues that feeding children at school in developing countries probably pays off just in terms of educational outcomes. But even if the payoff just in terms of education isn’t exceptionally high, the payoff to improved child nutrition in general takes many forms, including improved health and gender equity.

The case for feeding children at school seems strong to me. But it’s only one part of addressing the problem. Many developing countries have dramatically increased primary school enrollments, but they are not yet assuring that most children can keep up and actually achieve basic literacy and numeracy at the primary school level.

A related problem is that the money for this broader agenda is lacking. Jack Rossiter contributes Chapter 6, which is titled “Finance: Ambition Meets Reality.” He looks at the costs of universal primary and secondary schooling, along with school meals, and concludes that the cost would be about $1.9 trillion for low- and middle-income countries in 2030. However, the projected education spending for these countries is about $750 billion less. Outside foreign aid might conceivably fill $50 billion of the gap, but probably not more. Rossiter makes the grim case:

Even if international financing comes in line to meet targets, governments are not going to have anything like the sums that costing exercises require. We can choose to ignore this shortfall, stick with plans, and watch costs creep up. Or we can see it as a serious budget constraint, redirect our attention toward finding ways to push costs down, and try hard to get close to universal access in the next decade.

It’s of course tempting to elide these tradeoffs. Maybe if we just root out waste, fraud, and abuse, we will have all the funds we need? Doubtful. As Rossiter points out, universal access by 2030 may require scaling back on the nice-to-haves (smaller class sizes, higher pay for teachers, new classroom materials, and so on) and being quite hard-headed about the must-haves.

Was Primitive Communism Ever Real?

There’s an image many of us carry around in our minds, an image of a primitive time when small groups of people lived together and shared equally. Manvir Singh describes the evidence in “Primitive communism: Marx’s idea that societies were naturally egalitarian and communal before farming is widely influential and quite wrong” (Aeon, April 19, 2022). He writes:

The idea goes like this. Once upon a time, private property was unknown. Food went to those in need. Everyone was cared for. Then agriculture arose and, with it, ownership over land, labour and wild resources. The organic community splintered under the weight of competition. The story predates Marx and Engels. The patron saint of capitalism, Adam Smith, proposed something similar, as did the 19th-century American anthropologist Lewis Henry Morgan. Even ancient Buddhist texts described a pre-state society free of property. … Today, many writers and academics still treat primitive communism as a historical fact. …

Primitive communism is appealing. It endorses an Edenic image of humanity, one in which modernity has corrupted our natural goodness. But this is precisely why we should question it. If a century and a half of research on humanity has taught us anything, it is to be sceptical of the seductive. From race science to the noble savage, the history of anthropology is cluttered with the corpses of convenient stories, of narratives that misrepresent human diversity to advance ideological aims. Is primitive communism any different?

So that you are not kept in suspense, gentle reader, the evidence in favor of primitive communism is at best highly mixed. In some primitive tribes, perhaps especially the Aché hunter-gatherers who lived in what is now Paraguay, food was shared very equally. But the Aché appear to be at one end of the spectrum. In many other hunter-gatherer tribes, those who succeeded at hunting or gathering could distribute the product of their labor as they personally saw fit. In addition, even among the Aché, as well as every other hunter-gatherer tribe, there were a number of possessions held as private property. Singh writes:

[H]unter-gatherers are diverse. Most have been less communistic than the Aché. When we survey forager societies, for instance, we find that hunters in many communities enjoyed special rights. They kept trophies. They consumed organs and marrow before sharing. They received the tastiest parts and exclusive rights to a killed animal’s offspring. The most important privilege hunters enjoyed was selecting who gets meat. … Compared with the Aché, many mobile, band-living foragers lay closer to the private end of the property continuum. Agta hunters in the Philippines set aside meat to trade with farmers. Meat brought in by a solitary Efe hunter in Central Africa was ‘entirely his to allocate’. And among the Sirionó, an Amazonian people who speak a language closely related to the Aché, people could do little about food-hoarding ‘except to go out and look for their own’. …

All hunter-gatherers had private property, even the Aché. Individual Aché owned bows, arrows, axes and cooking implements. Women owned the fruit they collected. Even meat became private property as it was handed out. … Some proponents of primitive communism concede that foragers owned small trinkets but insist they didn’t own wild resources. But this too is mistaken. Shoshone families owned eagle nests. Bearlake Athabaskans owned beaver dens and fishing sites. Especially common is the ownership of trees. When an Andaman Islander man stumbled upon a tree suitable for making canoes, he told his group mates about it. From then, it was his and his alone. Similar rules existed among the Deg Hit’an of Alaska, the Northern Paiute of the Great Basin, and the Enlhet of the arid Paraguayan plains. In fact, by one economist’s estimate, more than 70 per cent of hunter-gatherer societies recognised private ownership over land or trees.

Singh describes how primitive societies often had severe punishments for those who transgressed the applicable property rights. In addition, the social bonds that led to extreme sharing among the Aché had some horrific consequences. When you engaged in extreme sharing, it was based on the idea that in the not-too-distant future you would also be the recipient of extreme sharing by others. But what about those who seemed unlikely to be contributors to future sharing? In Aché society, widows, the sick or disabled, and orphans were likely to be killed: “The Aché had among the highest infanticide and child homicide rates ever reported. Of children born in the forest, 14 per cent of boys and 23 per cent of girls were killed before the age of 10, nearly all of them orphans. An infant who lost their mother during the first year of life was always killed.”

We can debate why the vision of a pre-industrial sharing society has such a sentimental pull. But as a matter of fact, it seems to be a wildly oversimplified story. Singh writes: “For anyone hoping to critique existing institutions, primitive communism conveniently casts modern society as a perversion of a more prosocial human nature. Yet this storytelling is counterproductive. By drawing a contrast between an angelic past and our greedy present, primitive communism blinds us to the true determinants of trust, freedom and equity.”

I will leave my definitive discussion of “the true determinants of trust, freedom and equity” for another day. But in that discussion, the idea that human beings have a pure, primitive, natural inclination toward trust, sharing, and mutual respect will not play a major role.

Hat tip: I ran across a mention of this article in a post by Alex Tabarrok at the remarkably useful Marginal Revolution website.