Did Antitrust Really Used to Be So Tough?

There’s a current argument that the antitrust enforcers at the US Department of Justice and the Federal Trade Commission used to be really tough on big business, at least from the 1940s into maybe the 1970s. But then there was a counterrevolution, often referred to as “Chicago school,” which provided a justification for the legal system to retreat from tough antitrust enforcement, and since then corporate power has run unchecked. This pocket history is far too glib. Brian R. Cheffins provides some relevant background about those supposed good old days of tough antitrust in “History and Turning the Antitrust Page” (Business History Review, Winter 2021, 95:4, pp. 805-821). Here are a few points that occur to me in reading it.

1) As a matter of history, it’s not clear that antitrust enforcement was exceptionally active from the 1940s into the 1970s. Think of some of the giant US companies back around 1970: General Motors, Standard Oil of New Jersey (which was later renamed Exxon), General Electric, IT&T, US Steel, DuPont, and others. Those firms had been around for a long time, and the antitrust authorities had not broken them up. When I was first learning some economics in high school in the late 1970s, one of the best-selling books was Global Reach: The Power of Multinational Corporations.

In the Business History Review essay, Cheffins points out that back in the 1960s in particular, there were plenty of complaints that antitrust wasn’t nearly strong enough. One prominent example was Richard Hofstadter’s well-known 1964 essay “What Happened to the Antitrust Movement?” Hofstadter argued that because of “growing public acceptance of the large corporation,” antitrust was “a faded passion” that had become “specialized, and bureaucratized.” The Kennedy and Johnson presidential administrations sometimes talked a big game on antitrust, but didn’t actually do much. Cheffins continues:

Humorist Art Buchwald speculated in a 1966 Washington Post column that by 1978 all corporations west of the Mississippi River would have merged into a single corporation, that the same would have happened east of the Mississippi, and that the two companies would soon be looking to merge so there would be only one corporation in the United States. Those responsible for administering and applying America’s antitrust laws were far from sanguine themselves. Victor Hansen, head of the Antitrust Division from 1956 to 1959, said while in office, “Economic concentration is increasing.” The Wall Street Journal reported in 1961 that “trust-busters are convinced many industries set prices by follow-the-leader techniques.”

2) Much of the active antitrust enforcement of the 1950s and 1960s was focused on “horizontal mergers,” which refers to when two companies in the same industry seek to merge. This is different from “vertical mergers,” where a company buys one of its suppliers; it’s different from conglomerate mergers, where two firms in different industries combine; and it’s also different from when a dominant firm uses anticompetitive behavior to ward off actual or potential competitors. The antitrust authorities did win a lot of horizontal merger cases, but on grounds that look a little silly today.

One of the prominent cases of the time was the 1966 Supreme Court case of United States v. Von’s Grocery Co. (384 U.S. 270). The Supreme Court described the fact pattern in this way:

The market involved here is the retail grocery market in the Los Angeles area. In 1958, Von’s retail sales ranked third in the area, and Shopping Bag’s ranked sixth. In 1960, their sales together were 7.5% of the total two and one-half billion dollars of retail groceries sold in the Los Angeles market each year. For many years before the merger, both companies had enjoyed great success as rapidly growing companies. From 1948 to 1958, the number of Von’s stores in the Los Angeles area practically doubled from 14 to 27, while at the same time, the number of Shopping Bag’s stores jumped from 15 to 34. During that same decade, Von’s sales increased four-fold and its share of the market almost doubled, while Shopping Bag’s sales multiplied seven times and its share of the market tripled. The merger of these two highly successful, expanding and aggressive competitors created the second largest grocery chain in Los Angeles … In addition, the findings of the District Court show that the number of owners operating single stores in the Los Angeles retail grocery market decreased from 5,365 in 1950 to 3,818 in 1961. By 1963, three years after the merger, the number of single store owners had dropped still further to 3,590. During roughly the same period, from 1953 to 1962, the number of chains with two or more grocery stores increased from 96 to 150.

The US District Court had held that when the two firms combined were 7.5% of the market, the merger did not pose an anticompetitive risk. The US Supreme Court overturned this verdict, essentially saying that antitrust law should focus on preserving single-owner grocery stores and that if there was a trend to more concentration, it should be stopped. The possible benefits to consumers of letting popular (and efficient) grocery store chains expand in a modest way barely get mentioned.

Antitrust enforcement against these kinds of small mergers was highly uneven, and often seemed to depend on underlying political pressures. Moreover, while the antitrust authorities were focusing on small grocery store mergers, the giant firms mentioned earlier mostly went along their merry way. Cheffins writes: “As George David Smith and Davis Dyer maintain in a 1996 essay on the history of the American corporation, ‘During the 1950s and ’60s, most leading U.S. industrials held their dominant positions in domestic markets without substantial price competition.’ Historian Gabriel Winant agrees, saying, ‘The postwar years of the 1950s and ’60s were the age of “monopoly capitalism,” as the Marxists then called it, or, less polemically, an era of “administered prices.”’”

3) The shift in antitrust doctrine in the 1970s had other major causes. For example, the late 1970s under the Carter administration were a time of industry deregulation, often led by congressional Democrats like Ted Kennedy. The 1970s were also a time when the US economy came under dramatically more pressure from international competition. Cheffins writes:

In 1991, the Economist focused on foreign competition to explain why America’s trustbusters had become “timid”: “America’s economy is more open today, exposing many big firms to foreign competition. This does not make it impossible for a domestic market to be dominated and then abused, but it is far less likely to happen. If General Motors, Ford and Chrysler were foolish enough to conspire to fix prices, they would quickly lose market share to Toyota, Volkswagen and Hyundai, at home as well as abroad.” The rise of foreign competition dovetailed with the intellectual trends in operation to reshape thinking about antitrust. … The percentage of goods that Americans used that were imported increased from 8 percent in 1969 to 21.2 percent in 1979. By the end of the 1970s, over 70 percent of goods produced in the United States were actively competing with foreign-made goods. As the 1980s got underway, foreign competition had sideswiped various major industries, including apparel, automobiles, footwear, shipbuilding, steel, and televisions. Moreover, concerns were growing that American business was stumbling in response to the challenge foreign firms were posing.

As foreign competition rose, the idea that US firms lacked competition in a way that called for aggressive antitrust enforcement diminished.

4) It’s not obvious that concentration is in fact substantially higher in most industries today than it used to be. It is quite possible to argue that greater antitrust activity might be warranted for some companies like Amazon, Alphabet (formerly Google), Microsoft, Apple, or Facebook (now Meta), or for specific situations like certain mergers between local hospitals, but not to believe that the US economy is dramatically more concentrated than in the past. As one example, Berkeley economics professor Carl Shapiro (who was a member of the Council of Economic Advisers and also a Deputy Assistant Attorney General in the Antitrust Division of the US Department of Justice during the Obama administration) has taken a skeptical view of the idea that US industry concentration is up overall, while still advocating for targeted antitrust interventions for certain companies and situations.

5) My own view is that some of the most interesting antitrust actions of the earlier era were not about breaking up large companies, which didn’t much happen, and not about the many efforts to prevent small-scale horizontal mergers. Instead, the antitrust efforts perhaps most applicable to today involved taking a close look at how companies may use intellectual property protection to constrain competition.

As one example, it was fairly common practice in the 1940s and 1950s for the antitrust authorities to require firms to offer “compulsory licenses” to their intellectual property: that is, a firm could not use its intellectual property to shut off potential competitors. Perhaps the best-known case happened in 1956, when AT&T signed a consent decree that required it to license all existing Bell System patents royalty-free–a step that many industry insiders credit with allowing the birth of the US semiconductor industry. In another prominent case in 1973, antitrust authorities found that the Xerox corporation was using an ever-evolving and ever-expanding array of patents to block entry into the photocopier market. In the context of the giant modern tech companies, a related proposal is that antitrust authorities should beware when large companies buy smaller ones that could have grown into viable competitors.

Just to be clear, there’s no question in my mind that prevailing antitrust doctrine did shift in the 1970s in a direction that was less aggressive. But the ideas that antitrust enforcement of the 1960s was especially active, or that it mostly focused on breaking up big companies, just don’t seem correct. In addition, the shift to less active antitrust was not just an ideological/political lightning bolt from a blue sky, but happened in response to factors like the previous antitrust focus on smallish horizontal mergers and the rise in international competition in the 1970s. The lessons of that earlier time of antitrust regulation are more nuanced, and at least in my view, the most useful lessons for our time are focused on issues related to how competition and mergers interact with intellectual property and potential future competitors.

The Inaccuracy of Inflation Expectations

The question of whether a burst of inflation turns into permanent inflation should depend, at least in part, on expectations about inflation. If workers and firms expect higher inflation, then the workers are more likely to press for higher wages to compensate–and firms are more likely to be amenable to such increases. An inflationary cycle can emerge where expectations of higher inflation lead to more price and wage increases, and those price and wage increases lead to higher inflation.

However, the evidence of the last couple of decades is that people’s expectations of inflation aren’t very accurate. Instead, people’s expectations often react to past rises or falls in inflation after they have happened. John O’Trakoun of the Richmond Fed gives a short overview of the evidence in “Your Attention Please …” (Macro Minute, February 5, 2022). He writes:

Historically, households have not been very good at predicting inflation. Figure 1 below plots the expected one-year inflation rate from the survey against realized Consumer Price Index (CPI) inflation a year later. Since 2007, the correlation between the two measures has been -0.35, which casts doubt on whether this survey tells us much about future inflation at all. But as we explore in this post, consumers’ expectations may now have some additional weight.

Here’s an accompanying figure. The blue line shows people’s expectations of inflation a year into the future. The red dashed line shows the inflation that actually happened, shifted back by 12 months. Thus, the right-hand side of the figure shows that about a year ago, people were forecasting inflation of about 3.5% a year in the future; the red line shows that the actual rate of inflation turned out to be more like 7.5%.
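The alignment O’Trakoun describes–matching each month’s one-year-ahead expectation against the inflation actually realized over the following 12 months–can be sketched in a few lines of pandas. The CPI path and the flat 3% expectation series below are toy placeholders for illustration, not the actual survey or CPI data.

```python
import pandas as pd

# Toy monthly CPI index levels (placeholder numbers, not real data)
months = pd.date_range("2019-01-01", periods=36, freq="MS")
cpi = pd.Series([100 * 1.002 ** i for i in range(36)], index=months)

# Year-over-year inflation measures price growth over the *previous* 12 months;
# shifting it back 12 months lines it up with the date a forecast was made,
# so each row compares a forecast with the inflation realized a year later.
yoy = cpi.pct_change(12) * 100
realized_ahead = yoy.shift(-12)

# In practice this would be the survey series; here, a flat 3% placeholder
expected = pd.Series(3.0, index=months)

aligned = pd.DataFrame({"expected": expected,
                        "realized": realized_ahead}).dropna()
print(len(aligned))  # number of months where a forecast can be scored
```

On the real series, `aligned["expected"].corr(aligned["realized"])` is the kind of statistic behind the -0.35 correlation reported in the post.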

There are a couple of patterns worth noting in the graph. First, you can see several situations where inflation expectations change in response to earlier movements in inflation–and can end up being completely wrong. For example, the left-hand side of the graph shows how inflation expectations were rising in 2008 at about the same time that actual inflation turned out to be plummeting during the Great Recession. When people tell you what inflation they are “expecting,” it often looks as if they are actually telling you what inflation they recently experienced. Second, you can see that for most of the decade from 2010 to 2020, inflation expectations just didn’t change much, even when actual inflation fell and then rose. A plausible interpretation here is that many people haven’t paid much attention to inflation for the last decade.

Thus, one possibility is that inflation expectations don’t match inflation very well when people aren’t paying much attention, and when inflation is fluctuating in a fairly narrow range. But if inflation stays higher for a significant time, in a way that is salient to many people, then the chances that current inflation expectations feed into future inflation become higher.

The Pandemic as Global Financial Shock: WDR 2022

The World Development Report is an annual flagship report of the World Bank, and the 2022 version is focused on the theme “Finance for an Equitable Recovery.” The report emphasizes that the COVID-19 pandemic was a truly global economic shock, in the sense that a greater share of countries around the world were affected than in previous economic cataclysms. The authors write (footnotes omitted):

Economic activity contracted in 2020 in about 90 percent of countries, exceeding the number of countries seeing such declines during two world wars, the Great Depression of the 1930s, the emerging economy debt crises of the 1980s, and the 2007–09 global financial crisis (figure O.1). In 2020, the first year of the COVID-19 pandemic, the global economy shrank by approximately 3 percent, and global poverty increased for the first time in a generation.

The challenge for policymakers around the world was to provide sufficient assistance to help households and firms over the worst of the pandemic, but at the same time not to pile up so much debt as to create dramatic future problems.

As the COVID-19 crisis unfolded in 2020, it became clear that many households and firms were ill-prepared to withstand an income shock of the length and scale of the pandemic. In 2020, more than 50 percent of households globally were not able to sustain basic consumption for more than three months in the event of income losses, while the cash reserves of the average business would cover fewer than 51 days of expenses. Within countries, the crisis disproportionately affected disadvantaged groups. In 2020, in 70 percent of countries the incidence of temporary unemployment was higher for workers who had completed only primary education. Income losses were similarly larger among youth, women, the self-employed, and casual workers with lower levels of education. Women, in particular, were affected by income and employment losses because they were more likely to be employed in sectors most affected by lockdown and social distancing measures, and they bore the brunt of the rising family care needs associated, for example, with the closures of childcare centers and schools. According to high-frequency phone survey data collected by the World Bank, in the initial phase of the pandemic, up to July 2020, 42 percent of women lost their jobs, compared with 31 percent of men, further underscoring the unequal impacts of the crisis by gender.

The common pattern around the world was that governments of high-income countries were able to borrow more easily, and thus to spend more as a share of GDP during the pandemic.

The World Bank of course focuses on issues of lower- and middle-income countries. Although the governments of these countries on average mostly did not take on large amounts of extra debt, many of them pushed the limits of what they were able to borrow. In addition, their financial systems face the substantial challenge of firms and households that are unable to repay their loans or pay their bills.

Thus, these countries face a challenge of how to manage a situation where so many firms and households are in loan distress, in a legal context where provisions for bankruptcy law are underdeveloped, and indeed where it may be better for the economy to offer some additional loans rather than see widespread bankruptcies that eliminate a substantial share of existing firms. Of course, in giving additional loans, the challenge is to avoid subsidizing money-losing “zombie” firms that really should go out of business and stop sucking up credit that could be put to better use elsewhere in the economy. The report notes:

Past crises have revealed that without a swift, comprehensive policy response, loan quality issues are likely to persist and worsen over time, as epitomized by the typical increase in loans to “zombie firms”— that is, loans to weak businesses that have little or no prospect of returning to health and fully paying off their debts. Continued extension and rolling over of loans to such firms (also known as evergreening) stifles economic growth by absorbing capital that would be better directed to loans for businesses with high productivity and growth potential.

It’s a fragile situation. Not making extra loans has risks. Making extra loans has risks. Figuring out when to make extra loans and when not to do so is inherently difficult. It’s one reason why the report states: “The COVID-19 pandemic is possibly the largest shock to the global economy in over a century.”

India’s Economy: The Satellite Photo Tour

Last week I mentioned the Economic Survey 2021-22 published by India’s Ministry of Finance. As usual, most of this annual report is an overview of fiscal, monetary, and trade developments, along with discussions of sectors like agriculture, industry/infrastructure and services, as well as employment, social infrastructure, and sustainable development. The last chapter of this year’s report focuses on “Tracking Development through Satellite Images & Cartography.”

One prominent example is called “night lights,” which is just a satellite picture of light emissions at night. The left-hand photo shows India in 2012; the right-hand photo is India nine years later in 2021. The spread of electric lighting in India is clear.

The use of night lights data as a way of estimating economic development has been a research topic for a few years now. For economists, one advantage of night lights data is that it isn’t produced by the national government–unlike, say, statistics on gross domestic product. For a discussion of using night lights data to estimate GDP, see Noam Angrist, Pinelopi Koujianou Goldberg, and Dean Jolliffe, “Why Is Growth in Developing Countries So Hard to Measure?” in the Summer 2021 issue of the Journal of Economic Perspectives. (Full disclosure: I’m the Managing Editor of the JEP, which is freely available online to all courtesy of the American Economic Association.)
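A common way this literature proceeds is to regress log GDP on log night-light intensity across regions, and then use the fitted elasticity as a check on (or proxy for) official GDP statistics. Here is a minimal sketch on synthetic data–all numbers are made up for illustration, and this is not the estimation in the Angrist, Goldberg, and Jolliffe paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cross-section of 50 regions: log luminosity and log GDP,
# generated with a "true" elasticity of 0.3 plus measurement noise
log_lights = rng.uniform(0.0, 5.0, size=50)
log_gdp = 1.0 + 0.3 * log_lights + rng.normal(0.0, 0.05, size=50)

# OLS of log GDP on a constant and log luminosity
X = np.column_stack([np.ones_like(log_lights), log_lights])
(intercept, elasticity), *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
print(f"estimated elasticity: {elasticity:.2f}")
```

The estimated slope recovers something close to the true 0.3; with real data, a region whose lights grow faster than its reported GDP is a candidate for understated official statistics.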

These authors also point out that satellite imagery is not limited to luminosity: for example, it’s also possible to look at plant cover and even to identify different kinds of plants. For example, here’s a photo of agricultural activity near a certain reservoir after the water infrastructure was improved. Again, if the choice is between trusting a government report on the benefits of the improved infrastructure or trusting a satellite image that can be readily double-checked, the satellite image has some obvious benefits.

It’s also possible to use satellite images to look at industrial development or urban patterns. Here’s a photo of a “wasteland” area before and after it is converted to industrial uses.

In urban areas of some developing countries, attempting to count buildings and land-use through a ground-level army of census-takers (say, for purposes of calculating property taxes) may be a difficult and costly task. Satellite photos offer an overview.

One can also use satellite photos for environmental purposes: for example, to get a clear view of the size of the Amazon rain forest or the extent of cultivated land. Here’s an example from India, looking at the annual cycle of water storage at a certain reservoir.

Finally, there can be cases where good old maps, unassisted by satellite images, can tell a story. Here’s a comparison of the extent of India’s national highway system, whose road-miles have doubled over the past 10 years.

It’s easy enough to find lengthy and legitimate lists of concerns about India’s economic growth. But these kinds of images make clear that India’s growth is indeed very real. Those who want to read more about India’s economy from a broader perspective than this year’s Economic Survey might start with the three-paper “Symposium on the Economics of India” in the Winter 2020 issue of the Journal of Economic Perspectives:

— “Dynamism with Incommensurate Development: The Distinctive Indian Model,” by Rohit Lamba and Arvind Subramanian

— “Why Does the Indian State Both Fail and Succeed?” by Devesh Kapur

— “The Great Indian Demonetization,” by Amartya Lahiri

Global Connectedness and the Pandemic

How did the pandemic affect the economic side of global connectedness? International travel went way down in 2020, as one would expect. But other aspects of connectedness–flows of trade, capital, and especially information–held up pretty well in the first year of the pandemic. Steven A. Altman and Caroline R. Bastian tell the story with figures and short descriptions in DHL Global Connectedness Index 2021 Update: Globalization Shock and Recovery in the Covid-19 Crisis (November 2021).

For example, here’s a pattern of merchandise and services trade around the world. Merchandise trade peaked in 2008, and the drop in 2020 was modest. The authors report that trade in goods in 2021 has now risen above pre-pandemic levels. Services trade has generally been rising; it dropped more substantially in 2020, but much of that decline is due to the fall of nearly three-quarters in international travel.

Another interesting pattern is that the distance that exports travelled has been rising over time, and kept rising in 2020. One possible interpretation here is that the pandemic had a bigger effect on limiting short-distance international trade (say, trucks crossing borders), rather than long-distance international trade (say, container ships).

In the areas of international capital flows, the authors write: “The pandemic dealt a major blow to international capital flows, but portfolio equity flows stabilized in mid-2020 and foreign direct investment began a strong rebound in 2021.”

Flows of international travelers (per capita) dropped off, as shown in the first panel. However, the second panel shows that the share of students coming across borders rose in 2020, and the third panel shows that international migrants rose in 2020, too.

International information flows kept rising. International internet traffic had been growing about 20-25% per year before the pandemic. That growth rate spiked to almost 50% in 2020, but has since fallen back into the usual range in 2021. The left-hand figure shows that the share of all phone calls that are international keeps rising; the right-hand panel shows that the share of scientific publications co-authored by teams from different countries keeps rising, too.

One section of the report looks at trade ties between the US and China. After the trade war that kicked off in 2018 and the pandemic, one might expect trade between the world’s two biggest economies to have fallen, but the pattern is more one of continuity than of change. For example, the left-hand panels show China-US trade as a share of China’s GDP (top) and a share of China’s trade with all countries (bottom). China’s overall exports soared in the early 2000s, after China joined the World Trade Organization in 2001. The bottom left panel shows that the share of China’s exports to the US as opposed to other countries didn’t rise much at that time–China’s exports soared everywhere. As a share of China’s GDP, exports to the US spiked in the early 2000s, but then dropped back to previous levels and have been sagging since about 2010. You can see the downward bump of the 2018 trade war in these figures, but US-China trade generally recovered back to earlier levels in 2020.

One doesn’t want to over-interpret data from any single year, of course, much less a pandemic year. But it feels as if the nature of globalization, broadly understood, is evolving from goods to services, and from finances to information flows. If someone asks about your views of globalization, it’s reasonable to begin by sorting out the different categories.

Winter 2022 Journal of Economic Perspectives Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Winter 2022 issue, which in the Taylor household is known as issue #139. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.

___________________

Symposium on Economies of Africa

“Labor Productivity Growth and Industrialization in Africa,” by Margaret McMillan and Albert Zeufack

Manufacturing has made an important contribution to raising living standards in many parts of the world. Concerns about premature deindustrialization have made some observers skeptical about the potential for manufacturing to play this role in Africa. But employment in African manufacturing has grown rapidly over the past 20 years. These employment gains have been accompanied by: (i) large increases in the number of small manufacturing firms; (ii) limited employment gains in large firms; and (iii) robust labor productivity growth in Africa’s large firms. Limited employment growth in Africa’s large manufacturing firms is partly a result of the capital intensity of the manufacturing subsectors in which African countries are most engaged—the processing of resources—and partly a result of rising capital intensity in manufacturing. The potential for manufacturing to raise living standards in Africa depends on indirect job creation by large firms through backward and forward linkages and increasing labor productivity in small firms.

Full-Text Access | Supplementary Materials

“Agricultural Technology in Africa,” by Tavneet Suri and Christopher Udry

We discuss recent trends in agricultural productivity in Africa and highlight how technological progress in agriculture has stagnated on the continent. We briefly review the literature that tries to explain this stagnation through the lens of particular constraints to technology adoption. Ultimately, none of these constraints alone can explain these trends. New research highlights pervasive heterogeneity in the gross and net returns to agricultural technologies across Africa. We argue that this heterogeneity makes the adoption process more challenging, limits the scope of many innovations, and contributes to the stagnation in technology use. We conclude with directions for policy and what we feel are still important, unanswered research questions.

Full-Text Access | Supplementary Materials

“Time Use and Gender in Africa in Times of Structural Transformation,” by Taryn Dinkelman and L. Rachel Ngai

Many African countries are still in the early stages of structural transformation. Typically, as economies move through the structural transformation, activities once conducted within the household are outsourced to the market. This has particular implications for women’s time use. In this paper, we document that current patterns of female time use in home production in several African countries closely resemble historical time use patterns in the United States. We highlight two stylized facts about women’s time use in Africa. First, in North Africa, women spend very few hours in market work and female labor force participation overall is extremely low. Second, although extensive margin participation of women is high in sub-Saharan Africa, women tend to work in the market for only a few hours each week, with the rest of their work hours spent in home production. These two facts suggest two different types of constraints that could slow down the reallocation of female time from home to market as economies grow: social norms related to women’s market work, and a lack of infrastructure (e.g., household infrastructure and childcare facilities) to facilitate marketizing home production. We discuss recent empirical evidence related to each set of constraints and highlight new avenues for research.

Full-Text Access | Supplementary Materials

“Young Adults and Labor Markets in Africa,” by Oriana Bandiera, Ahmed Elsayed, Andrea Smurra, and Céline Zipfel

Every year, millions of young adults join the labor market in Africa. This paper harmonizes surveys and censuses from 68 low- and middle-income countries to compare their job prospects to those of their counterparts in other low-income regions. We show that employment rates are similar at similar levels of development but that young adults in Africa are less likely to have a salaried job, especially when the size of their cohort is large. Building on existing evidence on the impacts of interventions targeting both the demand and supply sides of the labor market, we discuss policy priorities for boosting the growth of salaried job creation in the region.

Full-Text Access | Supplementary Materials

“Political Distortions, State Capture, and Economic Development in Africa,” by Nathan Canen and Leonard Wantchekon

This article studies the role of political distortions in driving economic growth and development in Africa. We first discuss how existing theories based on long-run structural factors (e.g., pre-colonial and colonial institutions, or ethnic diversity) may not capture new data patterns in the region, including changes to political regimes, growth patterns, and their variation across regions with similar historical experiences. We then argue that a framework focused on political distortions (i.e., how political incentives impact resource allocation and economic outcomes) may have multiple benefits: it encapsulates many distortions observed in practice, including patronage, variations in contract enforcement and the role of political connections in firm outcomes; it unifies results in Africa and elsewhere; and it leaves a wide scope for policy analysis. We conclude by overviewing reforms that may curb such distortions, including changes to campaign financing rules, bureaucratic reform, free trade agreements, and technology.

Full-Text Access | Supplementary Materials

Articles

“The Price of Nails since 1695: A Window into Economic Change,” by Daniel E. Sichel

This paper focuses on the price of nails since 1695 and the proximate source of changes in those prices. Why nails? They are a basic manufactured product whose form and quality have changed relatively little over the last three centuries, yet the process for producing them has changed dramatically. Accordingly, nails provide a useful prism through which to examine a wide range of economic and technological developments that touch on multiple areas of both micro- and macroeconomics. Several conclusions emerge. First, from the late 1700s to the mid-twentieth century, real nail prices fell by a factor of about 10 relative to overall consumer prices. These declines had important effects on downstream industries, most notably construction. Second, while declining materials prices contribute to reductions in nail prices, the largest proximate source of the decline during this period was multifactor productivity growth in nail manufacturing, highlighting the role of the specialization of labor and reorganization of production processes. Third, the share of nails in GDP dropped back from 0.4 percent of GDP in 1810—comparable to today’s share of household purchases of personal computers—to a de minimis share more recently; accordingly, nails played a bigger role in American life in that earlier period. Finally, real nail prices have increased since the mid-twentieth century, reflecting in part an upturn in materials prices and a shift toward specialty nails in the wake of import competition, though the introduction of nail guns partly offset these increases for the price of installed nails.

Full-Text Access | Supplementary Materials

“The Puzzle of Falling US Birth Rates since the Great Recession,” by Melissa S. Kearney, Phillip B. Levine and Luke Pardue

This paper documents a set of facts about the dramatic decline in birth rates in the United States between 2007 and 2020 and explores possible explanations. The overall reduction in the birth rate reflects declines across many groups of women, including teens, Hispanic women, and college-educated white women. The Great Recession contributed to the decline in the early part of this period, but we are unable to identify any other economic, policy, or social factor that has changed since 2007 that is responsible for much of the decline beyond that. Mechanically, the falling birth rate can be attributed to changes in birth patterns across recent cohorts of women moving through childbearing age. We conjecture that the “shifting priorities” of more recent cohorts, reflecting changes in preferences for having children, aspirations for life, and parenting norms, may be responsible. We conclude with a brief discussion about the societal consequences for a declining birth rate and what the United States might do about it.

Full-Text Access | Supplementary Materials

“Isaiah Andrews, 2021 John Bates Clark Medalist,” by Anna Mikusheva and Jesse M. Shapiro

Isaiah Andrews is an exceptionally warm and caring person, a remarkable teacher, a collaborator and mentor, an exemplary contributor to his department and profession, and a brilliant econometrician. In this article, we review Isaiah’s contributions to econometric theory in the context of Isaiah’s receipt of the 2021 John Bates Clark Medal.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

The Evolution of Labor Force Participation by Race

With “the Great Resignation”–that is, the rise since the pandemic in the number of people who are out of the labor force, neither employed nor looking for work–it’s perhaps useful to point out some substantial shifts in racial differences in labor force participation.

This figure shows overall labor force participation for white (blue line), black (red line), Hispanic (green line), and Asian (purple line) workers. Separate data on Asian workers has only been collected since about 2002. But back in the early 1970s, labor force participation rates were quite similar for white, black, and Hispanic populations. Labor force participation then rose for all three groups in the 1970s, 1980s, and 1990s, in large part because of the mass entry of women into the (paid) labor force. However, the rise in labor force participation for whites and Hispanics was faster than the rise for blacks. Roughly around 2000, the labor force participation rate for all three groups started declining. However, the decline was slowest for the Hispanic population. In the lead-up to the pandemic, labor force participation was highest for the Hispanic population, but had roughly equalized for the white, black, and Asian populations.

In looking at labor force statistics, it’s standard to separate out men and women. This graph shows labor force participation rates for white, black, and Hispanic men. (Data for Asian men and women is not readily available.) The general pattern is that the labor force participation rate is falling for all groups of men over time. However, the decline for Hispanic men is smaller, and the decline in labor force participation for white men since about 2010 is especially pronounced.

Here’s the parallel figure for labor force participation of white, black, and Hispanic women. For all three groups, the rise in female labor force participation from the 1970s up to around 2000 is apparent. However, in this case the labor force participation rate of black women (red line) is consistently above that of white women (blue line). The rate of labor force participation of Hispanic women (green line) was generally below that of white women for most of the period, but rises above white women around 2010.

I won’t attempt here to offer explanations (or gross generalizations) behind these patterns. But it is notable to me that labor force participation for Hispanics is now higher than that for whites for both men and women. It is notable that labor force participation rates for blacks have converged with those for whites, in large part because the drop in white labor force participation for both men and women in the last decade or so has been so substantial. It will be interesting to see if the Asian labor force participation rate continues to track the rate for whites, as it did before the pandemic, or if it moves to the higher levels of Hispanic labor force participation, as it seems to be doing since the pandemic.

Olympic Air Quality in China

The Olympic Summer Games were held in Beijing in 2008. Now the Winter Games are being held there in 2022. For the sake of the athletes who will be inhaling and exhaling more frequently and deeply than usual in the next few weeks, how has the air quality changed? Michael Greenstone, Guojun He, and Ken Lee discuss the evidence in “The 2008 Olympics to the 2022 Olympics: China’s Fight to Win its War Against Pollution” (February 2022, Energy Policy Institute at the University of Chicago). They write:

In the years before the 2008 Beijing Summer Olympics, pollution in China had been sharply climbing. The government responded with quick reforms that temporarily reduced pollution during the games. The reforms, however, only managed to slow the climb in the long run. By 2013, pollution in China had reached record levels. The following year, the same year Beijing applied to host the 2022 Olympic Games, Chinese Premier Li Keqiang declared a “war against pollution” and vowed that China would tackle pollution with the same determination it used to tackle poverty. Seven years later, pollution has declined dramatically by about 40 percent. In Beijing, there is half as much pollution compared to both 2008 and 2013 levels. In most areas of China, pollution has fallen to levels not seen in more than two decades. To put China’s success into context, these reductions account for more than three quarters of the global decline in pollution since 2013. … Due to these improvements, the average Chinese citizen can expect to live 2 years longer, provided the reductions are sustained. Residents of Beijing can expect to live 3.7 and 4.6 years longer, since 2008 and 2013 respectively.

Nevertheless, work remains. While China has met its national air quality standard, pollution levels as of 2020 were still six times greater than the World Health Organization (WHO) guideline. To further reduce pollution, China is taking rapid actions ahead of the 2022 Winter Olympics. If those actions were to allow China to permanently reduce pollution to meet the WHO guideline, the average Chinese citizen could expect to gain an additional 2.6 years of life expectancy, on top of the gains since the war against pollution was initiated. Residents of Beijing could gain an additional 3.2 years.
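The life-expectancy figures in the report rest on the Air Quality Life Index’s rule of thumb that each 10 μg/m³ of sustained PM2.5 exposure costs roughly 0.98 years of life expectancy. A rough back-of-envelope version can reproduce the 2.6-year number; note that the 2020 national average of about 31 μg/m³ is my own assumption, inferred from the report’s statement that pollution was still six times the WHO guideline of 5 μg/m³:

```python
# AQLI rule of thumb: roughly 0.98 years of life expectancy lost per
# 10 ug/m3 of sustained PM2.5 exposure.
YEARS_PER_UG = 0.98 / 10

china_pm25_2020 = 31.0   # assumed national average, ug/m3 (~6x the WHO guideline)
who_guideline = 5.0      # WHO guideline, ug/m3

# Life-expectancy gain if China permanently met the WHO guideline.
gain = (china_pm25_2020 - who_guideline) * YEARS_PER_UG
print(f"{gain:.1f} years")  # in the neighborhood of the report's 2.6-year figure
```

This is only an illustrative sketch of the conversion, not the report’s actual calculation, which works from location-by-location satellite data rather than a single national average.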

Here are a couple of figures to illustrate. Pollution is measured here by particulate emissions in the air. This method has the advantage that the data is collected by satellite–not from China’s government. This figure shows how particulate emissions rose in the early 2000s, but have dropped since about 2014.

This next figure shows the change in air particulates in a range of Chinese cities. The black bars are 2013; the gray bars are 2020. The lighter brown bar is China’s pollution standard, which is now being met in many cities; the darker brown bar is the World Health Organization Standard, which is not being met in any Chinese city. As the report notes: “While pollution levels are now on par with its national standards, they are still six times greater than the WHO guideline of 5 μg/m³ as of 2020. Using an international lens, Beijing is still three times more polluted than Los Angeles, the most polluted city in the United States.”

As one might expect, the policy tools used in China to reduce air pollution have lacked subtlety: barring cars from the roads, shutting down industrial plants in certain locations, and so on. The authors note the costs of this command-and-control approach to reducing pollution and the prospect of achieving environmental goals at lower cost with a market-oriented tradeable permit system:

[A]lmost all of the policies come from a “command and control” playbook that generally does not consider how to minimize the costs of achieving their goals. Thus, the Chinese government closed, relocated, and reduced the production capacity of a large number of polluting firms, enforced tighter emission standards across many industries, assigned binding abatement targets to local governments, and sent thousands of discipline teams to inspect local environmental performances. These measures, while being effective in reducing the total emissions in the country, ignored the significant differences in the abatement costs across firms, industries, and regions, and led to large economic and administrative costs in achieving the policy goal. They also led to social media complaints from stakeholders that environmental regulations are too stringent, protests from workers being laid off by the polluting firms, and resistance from local governments for enforcing tighter environmental standards. …

China’s introduction of a national carbon market in July 2021, which upon completion will be the largest such market in the world, positions the country well for the adoption of a particulate pollution and/or sulfur dioxide market.

The Allais Paradox Revisited

Here is an original version of the “Allais paradox.” Take a moment to think about your own choices, and maybe also what you think other people would be likely to choose. Discussion will follow! This formulation is from “On the Experimental Robustness of the Allais Paradox” by Pavlo Blavatskyy, Andreas Ortmann, and Valentyn Panchenko, in the February 2021 issue of the American Economic Journal: Microeconomics. (Full disclosure: AEJ: Micro is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor, but the articles in AEJ: Micro are not freely available online.) The authors write: 

The first Allais (1953) example consisted of two related decision problems, which we call Allais questions.

In the first question, a decision-maker chooses between two options A and B:
• Option A: ₣100 million for certain
• Option B: ₣500 million with probability 0.1, ₣100 million with probability 0.89, nothing with probability 0.01

In the second question, a decision-maker chooses between another two options C and D:
• Option C: ₣100 million with probability 0.11, nothing with probability 0.89
• Option D: ₣500 million with probability 0.1, nothing with probability 0.9

Notice that you need to make two choices here: between A and B, and between C and D. Choice A is 100 million for sure. (The symbol ₣ stands for the French currency of the time, the franc.) Choice B is a 10% chance of 500 million, an 89% chance of 100 million, and a 1% chance of nothing. In short, this choice tells you something about your risk preferences. Would you take 100 million for sure? Or would you prefer a chance at a much larger prize, even at the cost of a small (1%) chance of ending up with nothing at all?

Now think about the choice between C and D, which on the surface looks very different. Would you prefer an 11% chance of 100 million and an 89% chance of nothing? Or would you prefer a slightly smaller chance of winning but a larger prize if you do win, with a 10% chance of 500 million and a 90% chance of nothing?

A few years after Maurice Allais won the Nobel Prize in economics in 1988 (“for his pioneering contributions to the theory of markets and efficient utilization of resources”–the Allais paradox was just one small slice of his body of work), Bertrand Munier wrote an article about his accomplishments in the Spring 1991 issue of the Journal of Economic Perspectives (“Nobel Laureate: The Many Other Allais Paradoxes,” pp. 179-199). With regard to the Allais paradox, Munier wrote:

Allais and Darmois organized a conference in Paris in 1952 on mathematical economics and risk. The above questions were put to several participants by Allais, and in particular to authors of the Neumann-Savage expected utility theory—which Allais calls the “neo-Bernoullian” theory—to researchers supporting the theory, like B. de Finetti, as well as to other persons. Most of these individuals, including Savage himself, preferred a1 to a2 [in the notation above, choices A and B] in the first pair and a4 to a3 [that is, choice D and C] in the second one. Later on, Allais submitted the questionnaire containing, among others, the above questions to a number of colleagues and students. About 65 percent of them made similar choices.

The key insight here is that, from a certain perspective, A and B is actually the same choice as C and D. If you choose A over B, it is logically inconsistent to choose D over C. To understand why, there are theoretical and algebraic explanations of the Allais paradox in the article by Blavatskyy, Ortmann, and Panchenko and in the article by Munier. Here, I’ll try a more verbal explanation.

If you look back at the choice between A and B, 89% of the time the outcome is the same: that is, it’s 100 in either option. The difference between A and B lies in the other 11% of what can happen. In A, you get 100 all 11% of the time. In B, you get 500 10% of the time, and zero 1% of the time.

Now look at the choice between C and D. Again, 89% of the time the outcome is the same, but instead of being 100, as it was in the choice between A and B, now the outcome is zero for both C and D, 89% of the time. Again, the difference between C and D thus resides in what happens in the other 11% of outcomes. In choice C, you get 100 all the remaining 11% of the time. In choice D, you get 500 10% of the time and 0 the other 1% of the time.

Notice that once you strip out the fact that the same outcome happens 89% of the time in both sets of choices, the remaining choice over what happens in the remaining 11% of time becomes identical! If you prefer A over B, you thus prefer the more certain outcome over the outcome with more risk in that remaining 11% slice of outcomes. But if you prefer D over C, you prefer the outcome with more risk. In other words, choosing A over B, and then also choosing D over C, is an inconsistent choice according to the expected utility theory. This is the Allais paradox.
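The cancellation argument above can be checked directly. Here is a minimal sketch; the logarithmic utility function is my own arbitrary choice for illustration, since any increasing utility gives the same cancellation under expected utility theory:

```python
import math

def expected_utility(lottery, u):
    """lottery: a list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in lottery)

# An arbitrary increasing utility function, chosen only for illustration.
u = lambda x: math.log(1 + x)

# Allais's four lotteries, payoffs in millions of francs.
A = [(1.00, 100)]
B = [(0.10, 500), (0.89, 100), (0.01, 0)]
C = [(0.11, 100), (0.89, 0)]
D = [(0.10, 500), (0.90, 0)]

# After cancelling the common 89% outcome (100 in the first pair, 0 in the
# second), both comparisons reduce to the same inequality:
#   0.11*u(100)  vs.  0.10*u(500) + 0.01*u(0)
diff_AB = expected_utility(A, u) - expected_utility(B, u)
diff_CD = expected_utility(C, u) - expected_utility(D, u)
assert abs(diff_AB - diff_CD) < 1e-9  # identical for any utility function
```

So any expected-utility maximizer who prefers A to B must also prefer C to D; choosing A in the first question and D in the second, as so many people do, is exactly the inconsistency the paradox exposes.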

You might object that this example–that is, making two choices and then comparing consistency across those choices–is contrived and hard to understand. Fair enough. It definitely is contrived to help Allais make his point. But notice that many prominent supporters of expected utility theory, including Leonard Savage, fell right into the Allais paradox.

You might further object that a choice that offers a life-changing amount with complete certainty, like choice A, has an unfair advantage over any choice with a degree of uncertainty. Or you might say that the fact that the 89% outcome is 100 in the first choice and 0 in the second choice is not something that can be ignored: the reference point that you start from will affect choices, too. Now we’re getting into the heart of the discussion. Is the Allais paradox a fragile outcome that depends on a certain phrasing of the choices? Does it depend on whether a choice is certain or not? Does it depend on large amounts or small probabilities being involved? Would people make these same choices in real-world settings?

In broad terms, what is the applied psychology of making choices like these? The Allais paradox helped to open up these kinds of questions for exploration, and is thus thought of as one of the building blocks of what has now developed into behavioral economics. As Munier put it:

Allais had put the finger on what is now termed the “certainty effect” in experimental decision science. But he suspected, without being able to prove it immediately, that the lesson was more general: attitudes towards risk change not only from an individual to another, but also from a given individual between different patterns of risk.

The recent paper by Blavatskyy, Ortmann, and Panchenko looks at research on the Allais paradox over time. They find:

The Allais Paradox, or the common consequence effect, is a well-known behavioral regularity in individual decision-making under risk. Data from 81 experiments reported in 29 studies reveal that the Allais Paradox is a fragile empirical finding. The Allais Paradox is likely to be observed in experiments with high hypothetical payoffs, the medium outcome being close to the highest outcome and when lotteries are presented as a probability distribution … The Allais Paradox is likely to be reversed in experiments when the probability mass is equally split between the lowest and highest outcomes in risky lotteries.

The specific example of the Allais Paradox, and the many similar examples that have been explored in research since then, teach a general lesson: people’s attitudes toward risk can be inconsistent, depending on how the choices are framed, and in particular on the valuations people place on certainty and on reference points. Allais died in 2010 at the age of 99. My sense is that his work and legacy are not well-known among many younger economists, but he was a giant among the economists of his time. Munier summed up his philosophy in the 1991 JEP essay:

[Allais] later made clear to his students the few principles on which his economic methodology rests: 1) make reference to original thinkers only; 2) never accept any theory if it has not been successfully checked on empirical data; 3) look for invariants in space and time as much as possible; 4) make use of mathematics only as a way of expressing a theory rigorously (and particularly of explicitly stating the hypotheses on which it rests), but never admit that the mathematical content of a paper is a significant index of quality; and 5) aim at developing synthetic views.

Insects and Hydroponics for Africa (and Elsewhere?)

As the world population rises toward a projected high of 9.7 billion in the mid-2060s, there are questions of how to feed that population. The problem is greater than a simple population count suggests. As economies around the world continue to grow, people with higher incomes are likely to want diets with more meat, which requires more agricultural output for animal feed. In addition, agriculture can cause a variety of environmental stresses through its use of water and land, the pressure for deforestation, runoff from fertilizers and pesticides, and more. These pressures of rising population, higher incomes, and environmental stressors all seem likely to intersect in countries across Africa. As one way to ease these constraints, a group of World Bank authors have written Insect and Hydroponic Farming in Africa: The New Circular Food Economy, by Dorte Verner, Nanna Roos, Afton Halloran, Glenn Surabian, Edinaldo Tebaldi, Maximillian Ashwill, Saleema Vellani, and Yasuo Konishi. Here’s a flavor of the discussion from the Executive Summary (footnotes omitted):

While current agri-food production models rely on abundant supplies of water, energy, and arable land and generate significant greenhouse gas emissions in addition to forest and biodiversity loss, past practices point toward more sustainable paths. Frontier agriculture in a circular food economy is meant to be such a model. Frontier agriculture refers to approaches to agricultural production that sustainably expand the frontiers of current food production practices. Frontier agriculture includes insect farming and hydroponic farming, which are the focus of this report. Insect farming is the process of producing insects for human food and animal feed, and hydroponic farming is the process of growing crops in nutrient-rich water solutions instead of soil. These technologies do not require great access to land, water, or wealth—all limiting factors in Africa and FCV countries [that is, countries affected by fragility, conflict, and violence]. The technologies use organic waste, including agricultural or certain industrial waste, to quickly produce nutritious and protein-rich foods for humans, fish, and livestock and biofertilizers for soils. This improves food and nutrition security while reducing waste, strengthening national accounts, replenishing the environment, saving hard currency reserves by decreasing food and feed imports, and promoting green, resilient, and inclusive development.

Neither consumption of insects nor hydroponic farming is new—humans have engaged in both activities for hundreds of years. … Within a year, African insect farming can generate crude protein worth up to US$2.6 billion and biofertilizers worth up to US$19.4 billion. That is enough protein meal to meet up to 14 percent of the crude protein needed to rear all the pigs, goats, fish, and chickens in Africa, according to the report’s modeling of the annual production of black soldier fly larvae (BSFL) in Africa. The report estimates that through black soldier fly farming, the continent could replace 60 million tons of traditional feed production with BSFL annually, leading to 200 million tons of recycled crop waste, 60 million tons of organic fertilizer production, and 15 million jobs, while saving 86 million tons of carbon dioxide equivalent emissions, which is the equivalent of removing 18 million vehicles from the roads.

A detailed discussion of insect farming and hydroponic agriculture follows, with a particular emphasis on how these already have a foothold in a number of African countries, but there is still a lot of room for economies of scale and innovation to bring down prices.

An obvious question is: If this is such a good idea for Africa, why not elsewhere, too? My own answer is that these options are getting a foothold elsewhere. Here in Minnesota, one of my few trips to a restaurant in the last year, during a dip in the pandemic, was to a restaurant that served crickets as one of the appetizers (and yes, they were just fine–although they were also heavily spiced so I can’t say that I know what cricket really tastes like). Hydroponic agriculture is also growing in popularity here, driven by a combination of cheaper LED lighting, improvements in thermal glass, and long and bitter winters. Fresh and locally grown herbs are showing up at the grocery in the sub-zero depths of Minnesota winter.

In the report on insect farming in Africa, I was intrigued by a section on the strategy of Korea’s government to encourage insect farming there. The report notes:

Korea has an advanced regulatory and institutional framework for governing the national insect sector. The country’s modern insect sector has developed rapidly since 2011. Insect farms are now operating throughout the entire country. According to a Korean government survey, in December 2019, there were 2,535 farms, and by May 2021, there were nearly 3,000 registered farmers producing insects for food and feed in Korea. The emerging sector has created thousands of jobs and incomes for insect farmers and processors (RDA 2020b).

Insect legislation is well established in Korea. The Ministry of Food and Drug Safety approved several insect species as safe for human consumption (2016 Korean Food Standards Codex). These include silkworms and grasshoppers as traditional food ingredients and mealworm larvae, white-spotted flower chafer larvae, dynastid beetle larvae, and two-spotted crickets as new food ingredients. In January 2020, super mealworm larvae (nonfat powder) was temporarily registered as a new food ingredient, and locust and honeybee drone pupae are expected to be registered. Another eight insect species were approved for animal feed through the Feed Management Act. These include dried crickets, dried grasshoppers, mosquito larvae, housefly larvae, mealworms, super mealworm larvae, and BSF larvae and pupae. Insect fat and other approved by-products from insects can also be sold and distributed …

Between 2011 and 2018, the insects-for-feed market in Korea grew from US$21 million to US$140 million, and the insects-for-food market grew from US$0 to US$354 million (RDA 2020b). By 2030, Korea’s domestic market for insect feed is expected to grow to US$581 million and for insect food to US$815 million.

From a US perspective, insect farming in particular may seem an unlikely future for agriculture. But it wasn’t that long ago that many Americans had never set foot in a restaurant serving kimchi, bulgogi, or Korean barbeque, either.