Winter 2022 Journal of Economic Perspectives Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Winter 2022 issue, which in the Taylor household is known as issue #139. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.

___________________

Symposium on Economies of Africa

“Labor Productivity Growth and Industrialization in Africa,” by Margaret McMillan and Albert Zeufack

Manufacturing has made an important contribution to raising living standards in many parts of the world. Concerns about premature deindustrialization have made some observers skeptical about the potential for manufacturing to play this role in Africa. But employment in African manufacturing has grown rapidly over the past 20 years. These employment gains have been accompanied by: (i) large increases in the number of small manufacturing firms; (ii) limited employment gains in large firms; and (iii) robust labor productivity growth in Africa’s large firms. Limited employment growth in Africa’s large manufacturing firms is partly a result of the capital intensity of the manufacturing subsectors in which African countries are most engaged—the processing of resources—and partly a result of rising capital intensity in manufacturing. The potential for manufacturing to raise living standards in Africa depends on indirect job creation by large firms through backward and forward linkages and increasing labor productivity in small firms.

Full-Text Access | Supplementary Materials

“Agricultural Technology in Africa,” by Tavneet Suri and Christopher Udry

We discuss recent trends in agricultural productivity in Africa and highlight how technological progress in agriculture has stagnated on the continent. We briefly review the literature that tries to explain this stagnation through the lens of particular constraints to technology adoption. Ultimately, none of these constraints alone can explain these trends. New research highlights pervasive heterogeneity in the gross and net returns to agricultural technologies across Africa. We argue that this heterogeneity makes the adoption process more challenging, limits the scope of many innovations, and contributes to the stagnation in technology use. We conclude with directions for policy and what we feel are still important, unanswered research questions.

Full-Text Access | Supplementary Materials

“Time Use and Gender in Africa in Times of Structural Transformation,” by Taryn Dinkelman and L. Rachel Ngai

Many African countries are still in the early stages of structural transformation. Typically, as economies move through the structural transformation, activities once conducted within the household are outsourced to the market. This has particular implications for women’s time use. In this paper, we document that current patterns of female time use in home production in several African countries closely resemble historical time use patterns in the United States. We highlight two stylized facts about women’s time use in Africa. First, in North Africa, women spend very few hours in market work and female labor force participation overall is extremely low. Second, although extensive margin participation of women is high in sub-Saharan Africa, women tend to work in the market for only a few hours each week, with the rest of their work hours spent in home production. These two facts suggest two different types of constraints that could slow down the reallocation of female time from home to market as economies grow: social norms related to women’s market work, and a lack of infrastructure (e.g., household infrastructure and childcare facilities) to facilitate marketizing home production. We discuss recent empirical evidence related to each set of constraints and highlight new avenues for research.

Full-Text Access | Supplementary Materials

“Young Adults and Labor Markets in Africa,” by Oriana Bandiera, Ahmed Elsayed, Andrea Smurra, and Céline Zipfel

Every year, millions of young adults join the labor market in Africa. This paper harmonizes surveys and censuses from 68 low- and middle-income countries to compare their job prospects to those of their counterparts in other low-income regions. We show that employment rates are similar at similar levels of development but that young adults in Africa are less likely to have a salaried job, especially when the size of their cohort is large. Building on existing evidence on the impacts of interventions targeting both the demand and supply sides of the labor market, we discuss policy priorities for boosting the growth of salaried job creation in the region.

Full-Text Access | Supplementary Materials

“Political Distortions, State Capture, and Economic Development in Africa,” by Nathan Canen and Leonard Wantchekon

This article studies the role of political distortions in driving economic growth and development in Africa. We first discuss how existing theories based on long-run structural factors (e.g., pre-colonial and colonial institutions, or ethnic diversity) may not capture new data patterns in the region, including changes to political regimes, growth patterns, and their variation across regions with similar historical experiences. We then argue that a framework focused on political distortions (i.e., how political incentives impact resource allocation and economic outcomes) may have multiple benefits: it encapsulates many distortions observed in practice, including patronage, variations in contract enforcement and the role of political connections in firm outcomes; it unifies results in Africa and elsewhere; and it leaves a wide scope for policy analysis. We conclude by overviewing reforms that may curb such distortions, including changes to campaign financing rules, bureaucratic reform, free trade agreements, and technology.

Full-Text Access | Supplementary Materials

Articles

“The Price of Nails since 1695: A Window into Economic Change,” by Daniel E. Sichel

This paper focuses on the price of nails since 1695 and the proximate source of changes in those prices. Why nails? They are a basic manufactured product whose form and quality have changed relatively little over the last three centuries, yet the process for producing them has changed dramatically. Accordingly, nails provide a useful prism through which to examine a wide range of economic and technological developments that touch on multiple areas of both micro- and macroeconomics. Several conclusions emerge. First, from the late 1700s to the mid-twentieth century, real nail prices fell by a factor of about 10 relative to overall consumer prices. These declines had important effects on downstream industries, most notably construction. Second, while declining materials prices contributed to reductions in nail prices, the largest proximate source of the decline during this period was multifactor productivity growth in nail manufacturing, highlighting the role of the specialization of labor and reorganization of production processes. Third, the share of nails in GDP dropped back from 0.4 percent of GDP in 1810—comparable to today’s share of household purchases of personal computers—to a de minimis share more recently; accordingly, nails played a bigger role in American life in that earlier period. Finally, real nail prices have increased since the mid-twentieth century, reflecting in part an upturn in materials prices and a shift toward specialty nails in the wake of import competition, though the introduction of nail guns partly offset these increases for the price of installed nails.

Full-Text Access | Supplementary Materials

“The Puzzle of Falling US Birth Rates since the Great Recession,” by Melissa S. Kearney, Phillip B. Levine, and Luke Pardue

This paper documents a set of facts about the dramatic decline in birth rates in the United States between 2007 and 2020 and explores possible explanations. The overall reduction in the birth rate reflects declines across many groups of women, including teens, Hispanic women, and college-educated white women. The Great Recession contributed to the decline in the early part of this period, but we are unable to identify any other economic, policy, or social factor that has changed since 2007 that is responsible for much of the decline beyond that. Mechanically, the falling birth rate can be attributed to changes in birth patterns across recent cohorts of women moving through childbearing age. We conjecture that the “shifting priorities” of more recent cohorts, reflecting changes in preferences for having children, aspirations for life, and parenting norms, may be responsible. We conclude with a brief discussion about the societal consequences of a declining birth rate and what the United States might do about it.

Full-Text Access | Supplementary Materials

“Isaiah Andrews, 2021 John Bates Clark Medalist,” by Anna Mikusheva and Jesse M. Shapiro

Isaiah Andrews is an exceptionally warm and caring person, a remarkable teacher, a collaborator and mentor, an exemplary contributor to his department and profession, and a brilliant econometrician. In this article, we review Isaiah’s contributions to econometric theory in the context of Isaiah’s receipt of the 2021 John Bates Clark Medal.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

The Evolution of Labor Force Participation by Race

With “the Great Resignation”–that is, the rise since the pandemic in the number of people who are out of the labor force, neither employed nor looking for work–it’s perhaps useful to point out some substantial shifts in racial differences in labor force participation.

This figure shows overall labor force participation for white (blue line), black (red line), Hispanic (green line), and Asian (purple line) workers. Separate data on Asian workers has only been collected since about 2002. But back in the early 1970s, labor force participation rates were quite similar for white, black, and Hispanic populations. Labor force participation then rose for all three groups in the 1970s, 1980s, and 1990s, in large part because of the mass entry of women to the (paid) labor force. However, the rise in labor force participation for whites and Hispanics was faster than the rise for blacks. Roughly around 2000, the labor force participation rate for all three groups started declining. However, the decline was slowest for the Hispanic population. In the lead-up to the pandemic, labor force participation was highest for the Hispanic population, but had roughly equalized for the white, black, and Asian populations.

In looking at labor force statistics, it’s standard to separate out men and women. This graph shows labor force participation rates for white, black, and Hispanic men. (Data for Asian men and women are not readily available.) The general pattern is that the labor force participation rate is falling for all groups of men over time. However, the decline for Hispanic men is smaller, and the decline in labor force participation for white men since about 2010 is especially pronounced.

Here’s the parallel figure for labor force participation of white, black, and Hispanic women. For all three groups, the rise in female labor force participation from the 1970s up to around 2000 is apparent. However, in this case the labor force participation rate of black women (red line) is consistently above that of white women (blue line). The rate of labor force participation of Hispanic women (green line) was generally below that of white women for most of the period, but rises above it around 2010.

I won’t attempt here to offer explanations (or gross generalizations) behind these patterns. But it is notable to me that labor force participation for Hispanics is now higher than that for whites for both men and women. It is notable that labor force participation rates for blacks have converged with those for whites, in large part because the drop in white labor force participation for both men and women in the last decade or so has been so substantial. It will be interesting to see if the Asian labor force participation rate continues to track the rate for whites, as it did before the pandemic, or if it moves to the higher levels of Hispanic labor force participation, as it seems to be doing since the pandemic.

Olympic Air Quality in China

The Olympic Summer Games were held in Beijing in 2008. Now the Winter Games are being held there in 2022. For the sake of the athletes who will be inhaling and exhaling more frequently and deeply than usual in the next few weeks, how has the air quality changed? Michael Greenstone, Guojun He, and Ken Lee discuss the evidence in “The 2008 Olympics to the 2022 Olympics: China’s Fight to Win its War Against Pollution” (February 2022, Energy Policy Institute at the University of Chicago). They write:

In the years before the 2008 Beijing Summer Olympics, pollution in China had been sharply climbing. The government responded with quick reforms that temporarily reduced pollution during the games. The reforms, however, only managed to slow the climb in the long run. By 2013, pollution in China had reached record levels. The following year, the same year Beijing applied to host the 2022 Olympic Games, Chinese Premier Li Keqiang declared a “war against pollution” and vowed that China would tackle pollution with the same determination it used to tackle poverty. Seven years later, pollution has declined dramatically by about 40 percent. In Beijing, there is half as much pollution compared to both 2008 and 2013 levels. In most areas of China, pollution has fallen to levels not seen in more than two decades. To put China’s success into context, these reductions account for more than three quarters of the global decline in pollution since 2013. … Due to these improvements, the average Chinese citizen can expect to live 2 years longer, provided the reductions are sustained. Residents of Beijing can expect to live 3.7 and 4.6 years longer, since 2008 and 2013 respectively.

Nevertheless, work remains. While China has met its national air quality standard, pollution levels as of 2020 were still six times greater than the World Health Organization (WHO) guideline. To further reduce pollution, China is taking rapid actions ahead of the 2022 Winter Olympics. If those actions were to allow China to permanently reduce pollution to meet the WHO guideline, the average Chinese citizen could expect to gain an additional 2.6 years of life expectancy, on top of the gains since the war against pollution was initiated. Residents of Beijing could gain an additional 3.2 years.

Here are a couple of figures to illustrate. Pollution is measured here by particulate emissions in the air. This method has the advantage that the data is collected by satellite–not from China’s government. This figure shows how particulate emissions rose in the early 2000s, but have dropped since about 2014.

This next figure shows the change in air particulates in a range of Chinese cities. The black bars are 2013; the gray bars are 2020. The lighter brown bar is China’s pollution standard, which is now being met in many cities; the darker brown bar is the World Health Organization standard, which is not being met in any Chinese city. As the report notes: “While pollution levels are now on par with its national standards, they are still six times greater than the WHO guideline of 5 μg/m³ as of 2020. Using an international lens, Beijing is still three times more polluted than Los Angeles, the most polluted city in the United States.”

As one might expect, the policy tools used in China to reduce air pollution have lacked subtlety: barring cars from the roads, shutting down industrial plants in certain locations, and so on. The authors note the costs of this command-and-control approach to reducing pollution and the prospect of achieving environmental goals at lower cost with a market-oriented tradeable permit system:

[A]lmost all of the policies come from a “command and control” playbook that generally does not consider how to minimize the costs of achieving their goals. Thus, the Chinese government closed, relocated, and reduced the production capacity of a large number of polluting firms, enforced tighter emission standards across many industries, assigned binding abatement targets to local governments, and sent thousands of discipline teams to inspect local environmental performances. These measures, while being effective in reducing the total emissions in the country, ignored the significant differences in the abatement costs across firms, industries, and regions, and led to large economic and administrative costs in achieving the policy goal. They also led to social media complaints from stakeholders that environmental regulations are too stringent, protests from workers being laid off by the polluting firms, and resistance from local governments for enforcing tighter environmental standards. …

China’s introduction of a national carbon market in July 2021, which upon completion will be the largest such market in the world, positions the country well for the adoption of a particulate pollution and/or sulfur dioxide market.

The Allais Paradox Revisited

Here is the original version of the “Allais paradox.” Take a moment to think about your own choices, and maybe also what you think other people would be likely to choose. Discussion will follow! This formulation is from “On the Experimental Robustness of the Allais Paradox” by Pavlo Blavatskyy, Andreas Ortmann, and Valentyn Panchenko, in the February 2021 issue of the American Economic Journal: Microeconomics. (Full disclosure: AEJ: Micro is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor, but the articles in AEJ: Micro are not freely available online.) The authors write:

The first Allais (1953) example consisted of two related decision problems, which we call Allais questions.

In the first question, a decision-maker chooses between two options A and B:
• Option A: ₣100 million for certain
• Option B: ₣500 million with probability 0.1, ₣100 million with probability 0.89, nothing with probability 0.01

In the second question, a decision-maker chooses between another two options C and D:
• Option C: ₣100 million with probability 0.11, nothing with probability 0.89
• Option D: ₣500 million with probability 0.1, nothing with probability 0.9

Notice that you need to make two choices here: between A and B, and between C and D. Choice A is 100 million for sure. (The symbol ₣ stands for the French currency of the time, the franc.) Choice B is a 10% chance of 500 million, an 89% chance of 100 million, and a 1% chance of nothing. In short, this choice tells something about your risk preferences. Would you take 100 million for sure? Or would you prefer a small chance of a much larger prize, even at the risk of a still smaller chance of ending up with nothing at all?

Now think about the choice between C and D, which on the surface looks very different. Would you prefer an 11% chance of 100 million and an 89% chance of nothing? Or would you prefer a slightly smaller chance of winning but a larger prize if you do win, with a 10% chance of 500 million and a 90% chance of nothing?

A few years after Maurice Allais won the Nobel Prize in economics in 1988 (“for his pioneering contributions to the theory of markets and efficient utilization of resources”–the Allais paradox was just one small slice of his body of work), Bertrand Munier wrote an article about his accomplishments in the Spring 1991 issue of the Journal of Economic Perspectives (“Nobel Laureate: The Many Other Allais Paradoxes,” pp. 179-199). With regard to the Allais paradox, Munier wrote:

Allais and Darmois organized a conference in Paris in 1952 on mathematical economics and risk. The above questions were put to several participants by Allais, and in particular to authors of the Neumann-Savage expected utility theory—which Allais calls the “neo-Bernoullian” theory—to researchers supporting the theory, like B. de Finetti, as well as to other persons. Most of these individuals, including Savage himself, preferred a1 to a2 [in the notation above, choices A and B] in the first pair and a4 to a3 [that is, choice D and C] in the second one. Later on, Allais submitted the questionnaire containing, among others, the above questions to a number of colleagues and students. About 65 percent of them made similar choices.

The key insight here is that, from a certain perspective, the choice between A and B is actually the same choice as the one between C and D. If you choose A over B, it is logically inconsistent to choose D over C. To understand why, there are theoretical and algebraic explanations of the Allais paradox in the article by Blavatskyy, Ortmann, and Panchenko, and in the article by Munier. Here, I’ll try a more verbal explanation.

If you look back at the choice between A and B, 89% of the time the outcome is the same: that is, it’s 100 in either option. The difference between A and B lies in the other 11% of what can happen. In A, you get 100 all 11% of the time. In B, you get 500 10% of the time, and zero 1% of the time.

Now look at the choice between C and D. Again, 89% of the time the outcome is the same, but instead of being 100, as it was in the choice between A and B, now the outcome is zero for both C and D, 89% of the time. Again, the difference between C and D thus resides in what happens in the other 11% of outcomes. In choice C, you get 100 all the remaining 11% of the time. In choice D, you get 500 10% of the time and 0 the other 1% of the time.

Notice that once you strip out the fact that the same outcome happens 89% of the time in both sets of choices, the remaining choice over what happens in the other 11% of the time becomes identical! If you prefer A over B, you prefer the more certain outcome over the outcome with more risk in that remaining 11% slice of outcomes. But if you prefer D over C, you prefer the outcome with more risk. In other words, choosing A over B, and then also choosing D over C, is an inconsistent pair of choices according to expected utility theory. This is the Allais paradox.
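The verbal argument can also be checked mechanically. The short sketch below is my own illustration, not from any of the articles: it encodes the four lotteries as probability-outcome pairs and computes expected utility under a few hypothetical utility functions u. Whatever u one picks, the comparison between A and B reduces to the same inequality as the comparison between C and D, so an expected-utility maximizer can never choose A and D together.

```python
import math

# The four Allais lotteries as (probability, outcome) pairs,
# with outcomes in millions of francs.
A = [(1.00, 100)]
B = [(0.10, 500), (0.89, 100), (0.01, 0)]
C = [(0.11, 100), (0.89, 0)]
D = [(0.10, 500), (0.90, 0)]

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# A few hypothetical utility functions, from risk-neutral to sharply risk-averse.
candidates = {
    "linear": lambda x: x,
    "square root": lambda x: math.sqrt(x),
    "1 - exp(-x)": lambda x: 1 - math.exp(-x),
}

for name, u in candidates.items():
    prefers_A = expected_utility(A, u) > expected_utility(B, u)
    prefers_C = expected_utility(C, u) > expected_utility(D, u)
    # Algebraically, EU(A) - EU(B) = 0.11*u(100) - 0.10*u(500) - 0.01*u(0),
    # and EU(C) - EU(D) is exactly the same expression, so these two
    # preferences must always agree under expected utility theory.
    print(f"{name:12s} A over B: {prefers_A}  C over D: {prefers_C}")
```

Running this, the risk-neutral and mildly risk-averse functions pick B and D, while the sharply risk-averse one picks A and C; no choice of u produces the A-and-D pattern that most of Allais’s respondents chose.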

You might object that this example–that is, making two choices and then comparing consistency across those choices–is contrived and hard to understand. Fair enough. It definitely is contrived to help Allais make his point. But notice that many prominent supporters of expected utility theory, including Leonard Savage, fell right into the Allais paradox.

You might further object that a choice that offers a life-changing amount with complete certainty, like choice A, has an unfair advantage over any choice with a degree of uncertainty. Or you might say that the fact that the 89% outcome is 100 in the first choice and 0 in the second choice is not something that can be ignored: the reference point that you start from will affect choices, too. Now we’re getting into the heart of the discussion. Is the Allais paradox a fragile outcome that depends on a certain phrasing of the choices? Does it depend on whether a choice is certain or not? Does it depend on large amounts or small probabilities being involved? Would people make these same choices in real-world settings?

In broad terms, what is the applied psychology of making choices like these? The Allais paradox helped to open up these kinds of questions for exploration, and is thus thought of as one of the building blocks of what has now developed into behavioral economics. As Munier put it:

Allais had put the finger on what is now termed the “certainty effect” in experimental decision science. But he suspected, without being able to prove it immediately, that the lesson was more general: attitudes towards risk change not only from one individual to another, but also for a given individual between different patterns of risk.

The recent paper by Blavatskyy, Ortmann, and Panchenko looks at research on the Allais paradox over time. They find:

The Allais Paradox, or the common consequence effect, is a well-known behavioral regularity in individual decision-making under risk. Data from 81 experiments reported in 29 studies reveal that the Allais Paradox is a fragile empirical finding. The Allais Paradox is likely to be observed in experiments with high hypothetical payoffs, the medium outcome being close to the highest outcome and when lotteries are presented as a probability distribution … The Allais Paradox is likely to be reversed in experiments when the probability mass is equally split between the lowest and highest outcomes in risky lotteries.

The specific example of the Allais Paradox, and the many similar examples that have been explored in research since then, teach a general lesson: people’s attitudes toward risk can be inconsistent, depending on how the choices are framed, and in particular on the valuations people place on certainty and on reference points. Allais died in 2010 at the age of 99. My sense is that his work and legacy are not well-known among many younger economists, but he was a giant among the economists of his time. Munier summed up his philosophy in the 1991 JEP essay:

[Allais] later made clear to his students the few principles on which his economic methodology rests: 1) make reference to original thinkers only; 2) never accept any theory if it has not been successfully checked on empirical data; 3) look for invariants in space and time as much as possible; 4) make use of mathematics only as a way of expressing a theory rigorously (and particularly of explicitly stating the hypotheses on which it rests), but never admit that the mathematical content of a paper is a significant index of quality; and 5) aim at developing synthetic views.

Insects and Hydroponics for Africa (and Elsewhere?)

As the world population rises toward a projected high of 9.7 billion in the mid-2060s, there are questions of how to feed that population. The problem is greater than a simple population count suggests. As economies around the world continue to grow, people with higher incomes are likely to want diets with more meat, which requires more agricultural output for animal feed. In addition, agriculture can cause a variety of environmental stresses through its use of water and land, pressure for deforestation, runoff from fertilizers and pesticides, and more. These pressures of rising population, higher incomes, and environmental stressors all seem likely to intersect in countries across Africa. As one way to ease these constraints, a group of World Bank authors have written Insect and Hydroponic Farming in Africa: The New Circular Food Economy, by Dorte Verner, Nanna Roos, Afton Halloran, Glenn Surabian, Edinaldo Tebaldi, Maximllian Ashwill, Saleema Vellani, and Yasuo Konishi. Here’s a flavor of the discussion from the Executive Summary (footnotes omitted):

While current agri-food production models rely on abundant supplies of water, energy, and arable land and generate significant greenhouse gas emissions in addition to forest and biodiversity loss, past practices point toward more sustainable paths. Frontier agriculture in a circular food economy is meant to be such a model. Frontier agriculture refers to approaches to agricultural production that sustainably expand the frontiers of current food production practices. Frontier agriculture includes insect farming and hydroponic farming, which are the focus of this report. Insect farming is the process of producing insects for human food and animal feed, and hydroponic farming is the process of growing crops in nutrient-rich water solutions instead of soil. These technologies do not require great access to land, water, or wealth—all limiting factors in Africa and FCV countries [that is, countries affected by fragility, conflict, and violence]. The technologies use organic waste, including agricultural or certain industrial waste, to quickly produce nutritious and protein-rich foods for humans, fish, and livestock and biofertilizers for soils. This improves food and nutrition security while reducing waste, strengthening national accounts, replenishing the environment, saving hard currency reserves by decreasing food and feed imports, and promoting green, resilient, and inclusive development.

Neither consumption of insects nor hydroponic farming is new—humans have engaged in both activities for hundreds of years. … Within a year, African insect farming can generate crude protein worth up to US$2.6 billion and biofertilizers worth up to US$19.4 billion. That is enough protein meal to meet up to 14 percent of the crude protein needed to rear all the pigs, goats, fish, and chickens in Africa, according to the report’s modeling of the annual production of black soldier fly larvae (BSFL) in Africa. The report estimates that through black soldier fly farming, the continent could replace 60 million tons of traditional feed production with BSFL annually, leading to 200 million tons of recycled crop waste, 60 million tons of organic fertilizer production, and 15 million jobs, while saving 86 million tons of carbon dioxide equivalent emissions, which is the equivalent of removing 18 million vehicles from the roads.

A detailed discussion of insect farming and hydroponic agriculture follows, with a particular emphasis on how these already have a foothold in a number of African countries, but there is still a lot of room for economies of scale and innovation to bring down prices.

An obvious question is: If this is such a good idea for Africa, why not elsewhere, too? My own answer is that these options are getting a foothold elsewhere. Here in Minnesota, one of my few restaurant trips in the last year, during a dip in the pandemic, was to a place that served crickets as one of the appetizers (and yes, they were just fine–although they were also heavily spiced, so I can’t say that I know what cricket really tastes like). Hydroponic agriculture is also growing in popularity here, driven by a combination of cheaper LED lighting, improvements in thermal glass, and long and bitter winters. Fresh and locally grown herbs are showing up at the grocery in the sub-zero depths of Minnesota winter.

In the report on insect farming in Africa, I was intrigued by a section on the strategy of Korea’s government to encourage insect farming there. The report notes:

Korea has an advanced regulatory and institutional framework for governing the national insect sector. The country’s modern insect sector has developed rapidly since 2011. Insect farms are now operating throughout the entire country. According to a Korean government survey, in December 2019, there were 2,535 farms, and by May 2021, there were nearly 3,000 registered farmers producing insects for food and feed in Korea. The emerging sector has created thousands of jobs and incomes for insect farmers and processors (RDA 2020b).

Insect legislation is well established in Korea. The Ministry of Food and Drug Safety approved several insect species as safe for human consumption (2016 Korean Food Standards Codex). These include silkworms and grasshoppers as traditional food ingredients and mealworm larvae, white-spotted flower chafer larvae, dynastid beetle larvae, and two-spotted crickets as new food ingredients. In January 2020, super mealworm larvae (nonfat powder) was temporarily registered as a new food ingredient, and locust and honeybee drone pupae are expected to be registered. Another eight insect species were approved for animal feed through the Feed Management Act. These include dried crickets, dried grasshoppers, mosquito larvae, housefly larvae, mealworms, super mealworm larvae, and BSF larvae and pupae. Insect fat and other approved by-products from insects can also be sold and distributed …

Between 2011 and 2018, the insects-for-feed market in Korea grew from US$21 million to US$140 million, and the insects-for-food market grew from US$0 to US$354 million (RDA 2020b). By 2030, Korea’s domestic market for insect feed is expected to grow to US$581 million and for insect food to US$815 million.
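Those figures imply remarkably fast compounding. As a quick back-of-the-envelope check (a sketch only; the dollar amounts come from the report, and the `cagr` helper is my own):

```python
# Implied compound annual growth rate (CAGR) from the report's market figures.
def cagr(start, end, years):
    """Average annual growth rate that turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Korea's insects-for-feed market: US$21 million (2011) to US$140 million (2018).
feed_actual = cagr(21, 140, 2018 - 2011)      # roughly 31% per year

# Projected insects-for-feed market: US$140 million (2018) to US$581 million (2030).
feed_projected = cagr(140, 581, 2030 - 2018)  # roughly 13% per year

print(f"{feed_actual:.1%}, {feed_projected:.1%}")
```

Note that the insects-for-food market, which grew from US$0, has no defined percentage growth rate, which is why the sketch sticks to the feed market.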

From a US perspective insect farming in particular may seem an unlikely future for agriculture. But it wasn’t that long ago that many Americans had not entered a restaurant serving kimchi, bulgogi, or Korean barbeque, either.

Waterfall vs. Agile: A Shift in Economic Policy Perspectives

Each year, India’s Ministry of Finance publishes an overview of the economy. Any serious student of India’s economy will want to spend time with this report, but it also helps dabblers like me keep up to speed. The Economic Survey 2021-22 is now available. Here, I’ll limit myself to a broad theme that runs through the report about how to think about economic policy.

An event like a pandemic is a particular challenge for countries, like India, where the government has long relied on drawing up a series of five-year plans. Perhaps fortunately, India completed its final five-year plan in 2017 and has been moving to another view of policy-making, which is described in the preface of this Economic Survey:

The default mode of policy-making in India and most of the world has traditionally been to rely on a pre-determined “Waterfall” approach – an upfront analysis of the issue, detailed planning and finally meticulous implementation. This is the framework that underpins five-year plans and rigid urban master-plans. The problem is that the real world is a complex and unpredictable place buffeted by all kinds of random shocks and unintended consequences. The response of traditional economics was to create ever more detailed plans/regulations, and elaborate forecasting models despite more than adequate evidence that this did not improve outcomes. In his Nobel Prize acceptance speech, economist Friedrich Hayek dubbed this “The Pretence of Knowledge”.

This Economic Survey sets out to explain the alternative “Agile” approach that informed India’s economic response to the Covid-19 shock. This framework is based on feed-back loops, real-time monitoring of actual outcomes, flexible responses, safety-net buffers and so on. Planning matters in this framework but mostly for scenario analysis, identifying vulnerable sections, and understanding policy options rather than as a deterministic prediction of the flow of events. …

Some form of feedback loop based policy-making was arguably always possible, but the Agile framework is particularly relevant today because of the explosion of real-time data that allows for constant monitoring. Such information includes GST [goods and services tax] collections, digital payments, satellite photographs, electricity production, cargo movements, internal/external trade, infrastructure roll-out, delivery of various schemes, mobility indicators, to name just a few. Some of them are available from public platforms but many innovative forms of data are now being generated by the private sector.

I’m not sure that the “Waterfall” vs. the “Agile” approach is the most appealing or intuitive nomenclature for this distinction! Alternative suggestions are welcome. But I do think that the move to a greater emphasis on real-time data is an important shift in economics. For a readable example of what is becoming possible in a US context, I recommend “Tracking the Pandemic in Real Time: Administrative Micro Data in Business Cycles Enters the Spotlight,” by Joseph Vavra, in the Summer 2021 Journal of Economic Perspectives.

Some Economics of Stablecoins

A “stablecoin” is a kind of cryptocurrency. But unlike (say) Bitcoin, the value of a stablecoin is pegged to a specific currency–often the US dollar–or, in principle, to some combination of other assets. In other words, no one should be buying a stablecoin because they expect its value to rise! By its nature, it can only be used as a store of value or as a means of payment.

Stablecoin holdings quintupled from October 2020 to October 2021, reaching $127 billion. Thus, the President’s Working Group on Financial Markets (PWG), joined by the Federal Deposit Insurance Corporation (FDIC) and the Office of the Comptroller of the Currency (OCC), released a “Report on Stablecoins” in November 2021. Here’s a figure showing the rise in stablecoins, with Tether and USD Coin leading the way.

Why buy stablecoins at all? Imagine that you are someone who wants to buy and sell another blockchain-based cryptocurrency like Bitcoin, or wants to become involved in what is known as “decentralized finance” or DeFi–which the report defines as “a variety of financial products, services, activities, and arrangements supported by smart contract-enabled distributed ledger technology.” In short, you want to live in a blockchain world and avoid dealing in regular currencies like the US dollar, but you also want to avoid the fluctuations in value of other cryptocurrencies like Bitcoin. Thus, you turn to a blockchain-based stablecoin instead. As the report notes: “At the time of publication of this report, stablecoins are predominantly used in the United States to facilitate trading, lending, and borrowing of other digital assets. For example, stablecoins allow market participants to engage in speculative digital asset trading and to move easily between digital asset platforms and applications, reducing the need for fiat currencies and traditional financial institutions.”

How much should federal financial regulators be worried about stablecoins? As the report points out, there are a number of potential risks. What if the stablecoin is not actually backed with other assets in a way that lets it hold its value? What if payment via stablecoin doesn’t work, for reasons ranging from systemic breakdown to honest mistake to fraud?

Because a group of government regulators is writing this report, it is perhaps unsurprising that they recommend a sweeping new set of federal laws to govern stablecoins, with a recommendation that “Congress act promptly to enact legislation to ensure that payment stablecoins and payment stablecoin arrangements are subject to a federal prudential framework on a consistent and comprehensive basis.” This would include steps like: treating all stablecoin companies as insured depository institutions, with standard oversight rules similar to those for banks; regulations for the “digital wallet” companies where people hold their stablecoins, covering security and financial stability; standards to ensure smooth interoperability between stablecoin platforms; and rules about ownership or affiliation of stablecoin companies with other commercial firms.

But before we try to sweep stablecoins and all other cryptocurrencies under the warm blankets of federal financial regulation, I find myself tempted by more of a “buyer beware” approach. At this stage, the whole idea of blockchain-based payments is in a stage of experimentation. Maybe we should generally let this experimentation play out awhile longer. After all, a government regulatory apparatus offers reassurances that will reduce the risks of stablecoins and other blockchain-related assets–and in this way will tend to encourage use of such assets. It’s not clear to me that it should be an official government policy goal to encourage large chunks of the financial system to migrate to blockchain-related systems.

Right now, in the context of the US financial system as a whole, $127 billion in stablecoins is not a lot. The report notes:

Unlike most stablecoins, the traditional retail non-cash payments systems—that is, check, automated clearing house (ACH), and credit, debit, or prepaid card transactions—all rely on financial institutions for one or more parts of this process, and each financial institution maintains its own ledger of transactions that is compared to ledgers held at other institutions and intermediaries. Together, these systems process over 600 million transactions per day. In 2018, the number of non-cash payments by consumers and businesses reached 174.2 billion, and the value of these payments totaled $97.04 trillion. Risk of fraud or instances of error are governed by state and federal laws …

In addition, if stablecoins or other cryptocurrencies are being used for money-laundering or other illegal transactions, government regulators and law enforcement have shown an ability to break through that anonymity when they have a strong reason to do so. The US Treasury oversees something called the Financial Crimes Enforcement Network, or FinCEN. The stablecoin report notes in its acronym-heavy way:

In the United States, most stablecoins are considered “convertible virtual currency” (CVC) and treated as “value that substitutes for currency” under FinCEN’s regulations. All CVC financial service providers engaged in money transmission, which can include stablecoin administrators and other participants in stablecoin arrangements, must register as money services businesses (MSBs) with FinCEN. As such, they must comply with FinCEN’s regulations, issued pursuant to authority under the BSA [Bank Secrecy Act], which require that MSBs maintain AML [anti money-laundering] programs, report cash transactions of $10,000 or more, file suspicious activity reports (SARs) on certain suspected illegal activity, and comply with various other obligations. Current BSA regulations require the transfer of certain specific information well beyond what can be inferred from the blockchain resulting in non-compliance. While the Office of Foreign Assets Control (OFAC) has provided guidance on how the virtual currency industry can build a risk-based sanctions compliance program that includes internal controls like transaction screening and know your customer procedures, there may be some instances where U.S. sanctions compliance requirements (i.e., rejecting transactions) could be difficult to comply with under blockchain protocols.

For now, this kind of financial regulation for stablecoins seems like plenty to me. If you are worried about the other risks of stablecoins and other cryptocurrencies, then you should be figuring out other ways to reduce those risks, not relying on federal regulation to rescue you.

The Slowdown in Agricultural Productivity Growth

Agricultural productivity growth is of central importance. In many of the lowest-income parts of the world, a majority of the local population is involved in subsistence farming. A standard pattern of economic growth is that people move away from agriculture to manufacturing and services, and move away from rural areas to urban areas. In this process, agriculture itself shifts from a focus on food to outputs that are sources of fiber, energy, and other industrial inputs, and items like cut flowers. Moreover, current predictions are that global population will peak a little short of 10 billion people in the 2060s, and then drop off to less than 9 billion by the end of the 21st century.

It’s also worth remembering that higher agricultural productivity can mean more output with less strain on resources of soil and water–and potentially with reduced use of pesticides as well.

Given the breakthroughs in understanding the genetics of plants and how to look after them, along with the gradual spread of these insights around the world, one might expect growth in agricultural productivity to be steady, or even robust. But when Keith Fuglie, Jeremy Jelliffe, and Stephen Morgan of the Economic Research Service at the US Department of Agriculture looked at the data, they found, “Slowing Productivity Reduces Growth in Global Agricultural Output” (Amber Waves, December 28, 2021).

To set the stage, here’s a measure of growth in global agricultural output over the decades. The breakdown shows that agricultural output can rise for a number of reasons: more land, more irrigation, more inputs like fertilizer. The gains in output that can’t be attributed to these other factors are a measure of productivity gains. Thus, you can see that while gains in agricultural output back in the 1960s and 1970s were largely due to higher inputs, there has been a shift over time toward greater importance of productivity growth. You can also see that the green bar of productivity growth is smaller in the most recent decade.

A stacked bar chart showing the contributions of these factors to agricultural output from 1961 to 2019: Total factor productivity, increased input use, irrigation expansion, and land expansion.
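The decomposition in the chart is standard growth accounting: whatever share of output growth cannot be attributed to measured increases in land, irrigation, and other inputs is assigned to total factor productivity as a residual. Here is a minimal sketch of the logic, using made-up illustrative numbers rather than the USDA’s actual data:

```python
# Growth accounting: total factor productivity (TFP) growth is the residual
# left after subtracting measured input contributions from output growth.
def tfp_residual(output_growth, contributions):
    """Output growth minus the summed input contributions (all in percentage points)."""
    return output_growth - sum(contributions.values())

# Hypothetical decade: 2.5 pp annual output growth, decomposed by source.
contributions = {
    "land expansion": 0.2,
    "irrigation expansion": 0.3,
    "increased input use": 1.0,
}
tfp = tfp_residual(2.5, contributions)
print(f"TFP growth: {tfp:.1f} pp")  # the 'green bar' in the chart
```

Because TFP is measured as a residual, anything that raises output without showing up in the measured inputs–better seeds, better management, better weather–lands in that green bar.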

What patterns emerge when looking into this data more closely? Most of the decline in agricultural productivity growth, it turns out, is in the lower-income developing economies, while agricultural productivity growth in higher-income developed countries has stayed strong. The authors write: “In terms of productivity growth, Latin America and the Caribbean experienced the largest slowdown, followed by Asia. In Sub-Saharan Africa, agricultural productivity growth was already low in the 2000s and turned slightly negative in the 2010s.”

Side-by-side stacked bar charts showing contributors to agricultural output in developed countries and developing countries from the 1960s to the 2010s.

Here’s the overall pattern that emerges. The black line shows population, projected forward to 2040. The blue line shows agricultural production rising faster than population. The red line shows that cropland has risen, but not by a lot. The green line shows a general (and occasionally interrupted) pattern of falling agricultural prices in the late 20th century, with a price rise starting around 2000.

Line chart comparing trends in agricultural prices, agricultural production, cropland, and world population from 1900 through 2040.

There are theories about why agricultural productivity growth has been faring poorly in the developing countries that seem as if they should have the most room for gains. Perhaps the agricultural sector in those countries is less flexible and responsive to shifts in patterns of weather or crop disease. Some of the agricultural productivity gains in higher-income countries are based on customizing production using information and feedback from satellite, internet, and cellular infrastructure, which is less available in developing economies. Whatever the reason, the need to find ways of improving agricultural productivity in lower-income countries is vital and pressing.

Sectoral Training: Does it Work?

The idea of “sectoral training” is that everyone needs skills for a well-paid career, but not everyone needs or wants to acquire those skills by getting a four-year undergraduate degree. Might training focused on becoming employable in a specific high-demand sector of the economy, preferably with an employer standing by and ready to hire, work better for some young adults? The US Department of Education runs the What Works Clearinghouse, which collects studies on various programs and writes up an overview of the results. In November, the WWC published evaluations of two sectoral training programs, Project Quest and Year Up. Neither set of findings is very encouraging–but there is some controversy over whether the WWC is focusing on the proper outcomes.

For background, Project Quest started in San Antonio, Texas, in 1992, and has since spread to some other locations in Texas and Arizona. The program accepts those who are at least 18, and who have a high school degree (or equivalent). The US Department of Education looks at three studies of Project Quest, and describes the intervention in this way:

All three interventions target their efforts on recruiting individuals who are unemployed, underemployed, meet federal poverty guidelines, or are on public assistance. … Project QUEST is a community-based organization that partners with colleges, professional training institutes, and employers. Participants enroll full-time in an occupational training program. They attend weekly group meetings led by a counselor that focus on life skills, time management, study skills, test-taking techniques, critical thinking, conflict resolution, and workforce readiness skills. Participants who need to improve their basic reading and math skills can complete basic skills coursework prior to enrolling in the occupational program. … Participants typically complete their occupational program within one to three years, depending on the length of the program.

What do the results of the three studies show, according to the US Department of Education?

The evidence indicates that implementing Project QUEST:
• is likely to increase industry-recognized credential, certificate, or license completion
• may increase credit accumulation
• may result in little to no change in short-term employment, short-term earnings, medium-term employment, medium-term earnings, and long-term earnings
• may decrease postsecondary degree attainment

Obviously, this is not especially encouraging. What about the other program, Year Up? The US Department of Education describes the program this way:

Year Up is an occupational and technical education intervention that targets high school graduates to provide them with six months of training in the information technology and financial service sectors followed by a six-month internship and supports to ensure that participants have strong connections to employment. … The evidence indicates that implementing Year Up:
• is likely to increase short-term earnings
• may result in little to no change in short-term employment
• may result in little to no change in medium-term earnings
• may result in little to no change in industry-recognized credential, certificate, or license completion
• may result in little to no change in medium-term employment

Of course, reports like these don’t rule out the possibility that some other kind of sectoral training program might work. But they do suggest that some of the more prominent examples of sectoral training aren’t performing as well as hoped. However, Harry J. Holzer believes that the official reports are too gloomy. He has written “Do sectoral training programs work? What the evidence on Project Quest and Year Up really shows” (Brookings Institution, January 12, 2022). As background, Holzer is an advocate of sectoral training programs (for example, see his “After COVID-19: Building a More Coherent and Effective Workforce Development System in the United States,” Hamilton Project, February 2021). In this essay, he writes:

I argue that the best available evidence still suggests that Project Quest and Year Up, along with other sector-based programs, remain among our most successful education and training efforts for disadvantaged US workers. While major challenges remain in scaling such programs and limiting their cost, the evidence to date of their effectiveness remains strong, and they should continue to be a major pillar of workforce policy going forward.

Holzer points to other reviews of sectoral training programs that reach much more positive conclusions. For example, Lawrence F. Katz, Jonathan Roth, Richard Hendra, and Kelsey Schaberg have written “Why Do Sectoral Employment Programs Work? Lessons from WorkAdvance” (National Bureau of Economic Research Working Paper, December 2020). They discuss four sectoral training programs with randomized control trial (RCT) evaluations. They write:

We first reexamine the evidence on the impacts of sector-focused programs on earnings from four RCT-based major evaluations – the SEIS, WorkAdvance, Project Quest, and Year-Up – of eight different programs/providers (with one provider Per Scholas appearing in two different evaluations). Programs are geared toward opportunity youth and young adults (Year Up) or broader groups of low-income (or disadvantaged) adults. Participants are disproportionately drawn from minority groups (Blacks and Hispanics), low-income households, and individuals without a college degree. The sector-focused programs evaluated in these four RCTs generate substantial earnings gains from 14 to 39 percent the year or so following training completion. And all three evaluations with available longer-term follow-ups (WorkAdvance for six years after random assignment, Project Quest for nine years, and Year Up for three years) show substantial persistence of the early earnings gains with little evidence of the fade out of treatment impacts found in many evaluations of past employment programs. Sector-focused programs appear to generate persistent earnings gains by moving participants into jobs with higher hourly wages rather than mainly by increasing employment rates.

Why the difference in findings? Holzer suggests several reasons:

  1. The US Department of Education review process for publishing its evaluations is sluggish, and so it leaves out at least three recent positive studies of Project Quest and Year Up. These studies also aren’t included in the Katz et al. (2020) review. They are: Roder, Anne and Mark Elliott. 2021. Eleven Year Gains: Project QUEST’s Investment Continues to Pay Dividends. New York: Economic Mobility Corporation; Rolston, Howard et al. 2021. Valley Initiative for Development and Advancement: Three-Year Impact Report. OPRE Report No. 2021-96, US Department of Health and Human Services; and Fein, David et al. 2021. Still Bridging the Opportunity Divide for Low-Income Youth: Year Up’s Longer-Term Impacts. OPRE Report.
  2. The recent studies also have longer follow-up periods, and leaving out these studies means that the WWC summaries don’t include positive long-run effects.
  3. With Project Quest, the WWC summary includes some spin-off programs that are similar, but not the same.
  4. The WWC apparently has a strict rule in its evaluations: either effects are statistically significant at the 5% level, or they are treated as worthless. Thus, an effect that is significant at, say, the 6% or 7% level is not viewed as a finding that might be worth more investigation with a larger sample size, but as a purely negative result. There are controversies in economics and statistics over how and when to use a 5% level of significance, but both sides of the controversy agree that this kind of black-and-white use of a rigid standard is not sensible.
  5. The WWC also has strict rules about what it counts as evidence. For example, say that the WWC wants to evaluate effects after 3, 5, and 7 years, but a study evaluates the evidence after 4, 6, and 8 years. Holzer says that the WWC would then ignore that study, because it does not “align with WWC’s preferred measures.”
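Holzer’s fourth point is easy to illustrate: under a rigid 5% cutoff, two nearly identical estimates can land on opposite sides of the line and be reported as opposite findings. A sketch with hypothetical numbers, using a normal approximation for the two-sided p-value:

```python
import math

def two_sided_p(estimate, std_error):
    """Two-sided p-value for estimate / std_error under a normal approximation."""
    z = abs(estimate / std_error)
    return math.erfc(z / math.sqrt(2))

# Two hypothetical earnings effects, nearly identical in size and precision.
p1 = two_sided_p(1000, 508)   # p just under 0.05: counts as a positive finding
p2 = two_sided_p(1000, 515)   # p just over 0.05: treated as no effect at all
print(round(p1, 3), round(p2, 3))
```

A threshold has to go somewhere, but the sketch shows why treating two nearly identical pieces of evidence as categorically different findings is the practice both sides of the significance debate reject.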

At a minimum, concerns like these suggest that sectoral training should not be dismissed based on the US Department of Education website evaluations. These programs have a range of different procedures and are focused on different groups, and there remains much to learn about best practices. But there is a strong case for continuing to expand and study such programs.

Public Opinion on Capitalism and Socialism

The Gallup Poll asks Americans about their attitudes toward capitalism and socialism on a semi-regular basis. Some recent results are reported by Jeffrey M. Jones in “Socialism, Capitalism Ratings in U.S. Unchanged” (December 26, 2021, from a poll carried out in October 2021).

This figure shows the share of Americans who have a “positive image” of capitalism or of socialism. The answers over the last decade are, perhaps surprisingly, quite stable.

To put those percentages in context, here’s a ranking of capitalism and socialism relative to some other terms.

Everyone loves small business. If you are a supporter of “capitalism,” it may be a good public relations move to refer instead to “free enterprise,” and thus perhaps to sidestep possibly being associated with “big business.” However, big business generally garners more positive support than either socialism or the federal government. The rankings of these other terms have remained the same in the last decade, too, although levels of positivity associated with both big business and government have declined since about 2012.

Finally, you might think that people view capitalism and socialism as two sides of a coin: that is, you favor one or the other. But this isn’t necessarily true, as Frank Newport points out in “Deconstructing Americans’ Views of Socialism, Capitalism” (December 17, 2021). As the figure shows, about one-fifth of Americans have a favorable image of both capitalism and socialism, and another one-fifth have an unfavorable image of both.
