A Brief History of Widgets

My tradition on this blog is to take a break (mostly!) from current events in the later part of August. Instead, I pre-schedule daily posts based on things I read during the year about three of my preoccupations: economics, academia, and writing.

___________________

For economists, “widgets” are the standard example of a hypothetical product, used when you don’t want to get specific. Another common hypothetical product is “leets,” which is “steel” spelled backward. But where did the terminology of widgets first appear, and how did it work its way over to economics?

According to the Oxford English Dictionary, the etymology of “widgets” is unclear. It’s sometimes thought to be a spin-off of “gadgets,” but there don’t seem to be examples to support this claim. Instead, the origin of “widgets” is usually credited to the playwrights George S. Kaufman and Marc Connelly in their 1924 play, Beggar on Horseback.

The play revolves around Neil McRae, a poor and unknown composer of classical music who works odd jobs to get by. Enter a wealthy industrialist named Mr. Cady, who has a beautiful daughter named Gladys. Will Neil give up his classical music dreams, marry the boss’s daughter, and work at the factory? In the play, the factory makes “widgets.” Here’s some dialogue from the play between Neil, Mr. Cady, and a secretary named Miss You:

CADY: Why, Neil!

NEIL: Here I am—at work!

CADY: Yes, sir! Business! Big business!

NEIL: Yes. Big business. What business are we in?

CADY: Widgets. We’re in the widget business.

NEIL: The widget business?

CADY: Yes, sir! I suppose I’m the biggest manufacturer in the world of overhead and underground A-erial widgets. Miss You!

MISS YOU: Yes, sir.

CADY: Let’s hear what our business was during the first six months of the fiscal year. [To Neil.] The annual report.

MISS YOU [Reading.]: “The turnover in the widget industry last year was greater than ever. If placed alongside the Woolworth Building it would stretch to the moon. The operating expenses alone would furnish every man, woman and child in the United States, China and similar places with enough to last for eighteen and one-half years, if laid end to end.”

CADY: How’s that?

NEIL: It’s wonderful!

CADY: And wait for September 17th!

NEIL: Why?

CADY: That’s to be National Widget Week! The whole country!

NEIL: That’s fine, but what I came up about …

CADY: Never mind that now—we’ve got more important things. Conferences, mostly.

The terminology of widgets seems to have caught on fairly quickly. I was especially struck by a short 1939 movie from the General Motors Department of Public Relations. It’s called “Round and Round,” and as you will see, it’s an attempt to describe a circular flow in the economy. It’s about a factory that uses skilled labor and machines to make widgets. As the video explains: “A widget might be a radio, a refrigerator, a musical instrument, or a motor car. A widget, you know, is just a symbol for any manufactured product that people use.” The factory sells widgets to farmers, coal miners, steel manufacturers, and others. In turn, they use the widgets to produce the inputs needed by the widget manufacturer to make more widgets.

In 1969, the Guinness company decided to take the widget out of the hypothetical, and to make and patent an actual product that has come to be called a “widget.” The company filed a patent application in Ireland for an “Improved Method of and Means of Dispensing Carbonated Liquids from Containers.” As explained here, the widget is a small plastic ball with a hole in it that sits inside a can of beer. When the can is sealed under pressure, nitrogenated beer is forced into the hollow of the ball. When the can is popped open, the pressure drops, and this extra dose of nitrogenated beer jets back out, combining with the rest of the beer in the can to produce a foamy head as it is poured.

Modern software programmers have also tried to commandeer the terminology of widgets for their own purposes. For example, the Techopedia webpage defines widgets in this way:

Widget is a broad term that can refer to either any GUI (graphical user interface) element or a tiny application that can display information and/or interact with the user. A widget can be as rudimentary as a button, scroll bar, label, dialog box or check box; or it can be something slightly more sophisticated like a search box, tiny map, clock, visitor counter or unit converter. … The term widget is understood to include both the graphical portion, with which the user interacts, and the code responsible for the widget’s functionality.

This seems a long way from National Widget Week as conceived by Kaufman and Connelly back in 1924! But economists have by and large shrugged off the attempts by beer companies and software firms to appropriate their single most prominent hypothetical example. Instead, economics lecturers stick with the meaning of “widget” as defined by the General Motors Public Relations Department.

A Fertility Patterns Flip-flop

For some decades now, the world has been following the patterns of a demographic transition with life expectancies rising and birth rates falling, as we head for a world where the elderly are a much larger share of the global population. However, Matthias Doepke, Anne Hannusch, Fabian Kindermann, and Michèle Tertilt argue that it’s time for “The New Economics of Fertility” (IZA Discussion Paper #15224, April 2022). For a short readable overview of the main themes, you can check their shorter discussion at the VoxEU website (June 11, 2022).

From the abstract of the academic paper, the authors write:

In this survey, we argue that the economic analysis of fertility has entered a new era. First-generation models of fertility choice were designed to account for two empirical regularities that, in the past, held both across countries and across families in a given country: a negative relationship between income and fertility, and another negative relationship between women’s labor force participation and fertility. The economics of fertility has entered a new era because these stylized facts no longer universally hold. In high-income countries, the income-fertility relationship has flattened and in some cases reversed, and the cross-country relationship between women’s labor force participation and fertility is now positive.

A couple of pictures may help here. It used to be that countries with higher incomes had lower fertility rates, but among high-income countries, this pattern no longer holds. Here’s a figure taken from the VoxEU overview. The top panel shows that within the group of high-income countries in 1980, countries with higher per capita GDP had lower fertility, but by 2000, countries in this group with higher per capita income had higher fertility.

What about the relationship between women’s fertility and the labor force participation rate of women? Here’s the parallel figure. It shows that in 1980, within the group of high-income countries, those with higher fertility tended to have lower labor market participation for women; by 2000, the countries with higher fertility tended to have higher labor force participation for women.

The previous theories of fertility were based on some intuitively plausible claims. As incomes went up in a given country, the opportunity costs of having a child went up, so women would be more likely to enter the labor force and fertility would decline. But now it appears that as incomes rise in a given country, women are likely to have more children and also to spend more time in the labor force. Instead of higher incomes making children less compatible with being in the workforce, the two are apparently becoming more compatible. The authors write:

We highlight a number of factors that have blunted the forces emphasized by the first generation of economic models of fertility. For example, in high-income countries, child labor has disappeared and education for most children continues past childhood into the adult years. These changes imply that the tradeoff inherent in quantity-quality models between sending children to school versus having more resources to raise a larger family has lost salience. Similarly, models based on women’s opportunity cost of time posit that raising more children requires mothers to spend less time working in the market. While this tradeoff still exists today, it has weakened as alternative forms of childcare have become more prominent. When childcare is provided by someone other than the mother—whether a hired nanny, a government-run kindergarten, or the child’s father—the cost of children is no longer linked as directly to the mother’s opportunity cost of time.

To explain why the empirical relationship between women’s labor force participation and fertility has not just flattened, but entirely reverted, research has taken directions that go beyond the first-generation models. A general theme in this new literature is that the compatibility of family and career has become a key determinant of fertility in high-income economies. Where the two are easy to combine, many women have both a career and multiple children, resulting in high fertility and high female labor force participation. When career and family goals are in conflict, fewer women work and fewer babies are born. We point out four factors that help mothers combine a career with a larger family: the availability of public child care and other supportive family policies; greater contributions from fathers in providing childcare; social norms in favor of working mothers; and flexible labor markets.

It is far too early to discern whether these kinds of shifts will alter the global pattern of lower birthrates. But it does suggest that those who would prefer rising birthrates should focus on policies and norms that make it easier for women to work; conversely, those who prefer lower birthrates might favor policies and norms that increase the tradeoffs for women of entering the workplace.

Thoughts on Globotics and Slobalization

“Globotics” is the name that Richard Baldwin gave to the combination of globalization and robotics in service jobs. In his essay “Globotics and macroeconomics: Globalisation and automation of the service sector” (presented at the ECB Forum on Central Banking 2022, June 27-29, where videos of presentations and comments are included), he argues that you can’t understand the likely future of globalization without it.

Baldwin argues that the global economy is in the throes of a third “unbundling” of globalization, a phrase he uses to describe the driving force behind a shift in what is traded across global borders. In his telling, the first “unbundling” “happened when steam power and Pax Britannica radically lowered the cost of moving goods,” and the unfolding process of reduced physical transportation costs over the decades drove the rise of globalization from the 19th century up to the 1960s or 1970s (with interruptions for world wars, the Great Depression, and other events).

The second “unbundling” kicked in around 1990. It wasn’t about transportation costs, but rather about information and communication technology (ICT) and how it affected firms in big high-income countries like the G7 group (the United States, Canada, the United Kingdom, France, Germany, Italy, and Japan). In particular, it wasn’t about how economies of different countries could produce goods at different prices, which is the standard intro-level theory of trade, but rather about how higher-technology manufacturing firms in high-income countries could coordinate their production chains with lower-wage labor in other countries. Baldwin writes:

Globalisation changed dramatically around 1990 when it entered its offshoring-expansion phase, or what I have called the “second unbundling” to contrast it with the first unbundling (Baldwin 2006). This was triggered by the ICT revolution which relaxed the second separation cost – communication and coordination costs. ICT made it feasible for G7 firms to fragment highly complex industrial processes into production stages, and then spatially unbundle some of them to low-wage nations. Think of this as the offshoring-expansion phase of globalisation where G7 manufacturing firms seized low-hanging opportunities for combining their advanced manufacturing knowhow with foreign low-wage labour in factories set up abroad. As the offshored process had to continue to operate as if it were still bundled, we can think of this as factories crossing borders, not just goods. Trade boomed again.

The third “unbundling,” now underway, is about how the newest versions of interconnected information and communications technology, which one might just call the digital economy, are connecting services industries. Indeed, although the rise in international trade in goods has more or less leveled off since about 2008, international trade in services has continued to rise and is becoming an ever-larger share of international trade.

A big part of this services trade is classified as “other commercial services” (OCS). What exactly are these “other commercial services”?

The OCS category consists of a few big items and many small items. Some are easily recognisable. Among the bigger categories are Financial Services (9%), and payments for intellectual property rights. The category Telecommunications, Computer, and Information Services accounts for 11% of the total; much of this is made up of computer services related to software, but a large share is tossed into the category ‘Other computer services other than cloud computing’ (this is typical of the lack of precision in trade statistics). The largest sub-category (23%) is ‘Other Business Services’. This includes a broad array of services. Some – like Architectural, Financial, Engineering, R&D, Advertising and Marketing, and Professional and Management Consulting services – are easily associated with sectors and jobs. Others, like Operating Lease Services, and ‘Other Business Services, not elsewhere included’ are difficult to map into jobs and sectors in the domestic economy.

In my own mind, it’s perhaps useful to think of the third “unbundling” in terms of working from home. If your job is one that can be entirely done by someone working from home, by a telecommuter, then it can also be done by someone outside the country. As one of many examples, the K-12 and higher education systems just spent a year delivering their services on-line. Baldwin writes:

Note that the arbitrage here is direct wage competition among service sector workers, and wage differences are probably the largest unexploited arbitrage left in today’s world. Taking Colombia as an example of a middle-income emerging market, a recent study matched the US’s occupation classifications with those of Colombia to compare wage rates (Baldwin, Cardenaz, and Fernandez 2021). Focusing only on the occupations that Dingel and Neiman (2020) have classified as teleworkable in the US, the study found that wages in the US were on average 1500% higher than in Colombia. Plainly low wages are not the only source of competitiveness in services but with wage gaps being that large, it is likely that the digitech-driven globalisation of the service sector will have an impact on prices in advanced economies.

Some of the arbitrage is done via online freelancing platforms like Upwork, Freelancer, and Zhubajie (these are like eBay but for services). Wage comparisons based on worker-level data scraped from such online freelancing platforms confirm the presence of enormous wage gaps, although the size varies greatly according to the data selection criteria. Data from a number of the largest freelance platforms reported in ILO (2021) indicate that average hourly earnings paid in a typical week for those engaged in online work is US$4.9, with the majority of workers (66%) earning less than the average. While $4.90 an hour seems like a low wage in Europe, it corresponds to full-time equivalent salary of about $10,000 per year – a salary which is considered comfortably middle-class in most countries.
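(To check Baldwin’s conversion: assuming a 40-hour week and 52 working weeks, $4.90 per hour × 2,080 hours ≈ $10,200 per year, which rounds to his full-time equivalent figure of about $10,000.)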

Baldwin argues that the low-hanging fruit in this area is “intermediate services.” For example, it might be hard for a variety of regulatory reasons for a US firm to hire accountants directly from a company based in India or Brazil or Indonesia. But it’s pretty easy for a US-based accounting firm to hire those accountants from other places, and to coordinate their work when delivering accounting services to US-based firms. Moreover, for a lot of emerging market economies, providing services can be pretty straightforward.

[E]xport capacity in emerging markets is not as great a limiting factor in services as it is in goods since every nation has a workforce that is already producing intermediate-service tasks. All emerging market economies have bookkeepers, forensic accountants, CV screeners, administrative assistants, online client help staff, graphic designers, copyeditors, personal assistants, travel agents, software engineers, lawyers who can check contracts, financial analysts who can write reports, etc. There is no need to develop whole new sectors, build factories, or develop farms or mines.

The term “slobalization” is used to describe a slowdown in the pace of globalization. When it comes to trade in goods, slobalization applies. But Baldwin’s analysis implies that the world economy may already be seeing the roots of a substantial rise in globalization that will happen via the services sector.

Distressed Places: How to Encourage Jobs

The idea of “place-based” economic policies is to focus on those geographic places–sometimes urban areas, sometimes neighborhoods within an urban area–where jobs are especially scarce and incomes especially low. Timothy J. Bartik offers some thoughts on how best to do this in “How State Governments Can Target Job Opportunities to Distressed Places” (Upjohn Institute Technical Report No. 22‐044, June 2022). There’s also a short readable overview in the most recent Employment Research Newsletter from the Upjohn Institute for Employment Research. I’ll quote from the newsletter here:

Distressed places, which have low employment-to-population ratios (employment rates), are a big problem in America. Consider local labor markets: multicounty areas that contain most commuting flows, such as metro areas or rural commuting zones. About two-fifths of all Americans live in local labor markets whose employment rate for prime-age workers (ages 25–54) is more than 5 percentage points below full employment. For neighborhoods, about one-fifth of all Americans live in census tracts whose prime-age employment rate is more than 5 percentage points below their local labor market’s average. These low employment rates are linked to major social problems: substance abuse, crime, and family stress.

Helping distressed local labor markets requires different policies than helping distressed neighborhoods. In a distressed local labor market, job creation will raise employment rates, with plausibly half of the jobs going to local nonemployed residents. Local job creation is most cost-effectively accomplished by providing businesses with “customized services” such as infrastructure, customized job training, and business advice programs—including manufacturing extension services. Such customized services have less than one-third the cost-per-job-created of business tax incentives.

In contrast, in a distressed neighborhood, more neighborhood jobs will not much help the neighborhood’s residents, as most neighborhood jobs are not held by residents. Residents of distressed neighborhoods can best be helped by services to increase job access, including better transportation, job training, and child care.

Bartik offers a fleshed-out proposal in the longer paper. I’d emphasize four points here:

First, it’s important to remember that the gains from getting people back to work are partly the present and future gains to the income of workers. But the broader social gains also include stronger families, a better network of informal job connections, a decline in state-level spending on Medicaid and welfare payments, reduced drug use and crime, and other benefits.

Second, while Bartik’s proposals are admittedly expensive, they are also affordable: “Total annual costs for all states would come to $30 billion annually—$21 billion for local labor markets and $9 billion for neighborhoods. This $30 billion cost is affordable, as it is less than 3 percent of overall state taxes. Many states could cover the required costs by replacing their business tax incentives.”

Third, notice that Bartik is suggesting the practicality of state-level initiatives here. States have been called the “laboratories of democracy,” where policy ideas can be tried out and evaluated. These proposals don’t require yet another argument over federal spending and taxes or the ability to get a 60-vote supermajority in the US Senate. They just require some states (maybe yours?) to give it a try.

Finally, the proposals do require states to focus on distressed areas, not on tax breaks for companies. Bartik describes his proposed policy as a set of block grants that would be spent across a state based on the employment rate. He points to funding for K-12 schools as a parallel: in many states, funds are allocated by the state on a per-student basis and then spent by school districts under broad guidelines. In this case, funds would be allocated based on the employment rate and all areas would receive some payments–but those with especially low employment rates would receive more. He writes:

But state geographic targeting is politically difficult. At the state level, ostensibly targeted programs often allocate most aid to nondistressed places, and initially targeted programs are then extended statewide. The political problem is partly that most state targeting formulas are arbitrary “price subsidies”: for example, this would include job-creation credits that are higher dollar amounts per job in distressed places. Because the variation in such subsidies has no obvious relationship with need, it is easy to rationalize extending generous subsidies to favored projects in nondistressed places.

In contrast, the state block grants proposed here use targeting formulas directly tied to the number of persons in each area needing jobs. For each distressed neighborhood or local labor market, the formula calculates how many jobs the place is short of full employment by, and then it funds filling some percentage of that employment rate “gap.”

Such needs-based targeting formulas have been successful for other policy areas in making geographic targeting politically feasible. For example, tying state aid for K–12 schools to the number of students eligible for free or reduced-price lunch has been done by many states, resulting in significant targeting of funds to needier school districts.

The block grants also combine targeting with universalism. Most local labor markets would be eligible for some level of block grant, as would most local governments for neighborhood grants. The targeting is accomplished by making higher per-capita grants to places where more people need jobs. Because “everyone” gets something, the block grants have a stronger political constituency.
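To make the targeting formula concrete, here is a minimal sketch in Python of how a needs-based block grant might be computed. This is my own illustration, not Bartik’s: the full-employment benchmark, the share of the gap to fill, and the cost-per-job figure are all hypothetical placeholders.

```python
# A sketch of a needs-based block grant: fund filling some share of each
# place's employment-rate "gap." All parameter values are hypothetical.

def block_grant(prime_age_pop: int,
                employment_rate: float,
                full_employment_rate: float = 0.80,  # assumed benchmark
                fill_share: float = 0.25,            # share of the gap to fund
                cost_per_job: float = 20_000.0) -> float:
    """Annual grant tied to how many residents a place is short of full employment."""
    gap_rate = max(full_employment_rate - employment_rate, 0.0)
    missing_jobs = gap_rate * prime_age_pop
    return fill_share * missing_jobs * cost_per_job

# Two local labor markets of 100,000 prime-age residents each:
print(block_grant(100_000, 0.74))  # 6 points below benchmark -> $30.0 million
print(block_grant(100_000, 0.79))  # 1 point below benchmark  -> $5.0 million
```

The structure captures both features Bartik emphasizes: nearly every place gets something, but grants scale directly with the number of people who need jobs, leaving little room to rationalize subsidies for favored projects in nondistressed places.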


Summer 2022 Journal of Economic Perspectives Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Summer 2022 issue, which in the Taylor household is known as issue #141. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.

_________________________________

Symposium on Intangible Capital

“Intangible Capital and Modern Economies,” by Carol Corrado, Jonathan Haskel, Cecilia Jona-Lasinio and Massimiliano Iommi

The production of goods and services is central to understanding economies. The textbook description of a firm, typically in agriculture or manufacturing, focuses on its physical “tangible” capital (machines), labor (workers), and the state of “know-how.” Yet real-world firms, such as Apple, Microsoft, and Google, have almost no physical capital. Instead, their main capital assets are “intangible”: software, data, design, reputation, supply-chain expertise, and R&D. We discuss investment in these knowledge-based types of capital: How to measure it; how it affects macroeconomic data on investment, rates of return, and GDP; and how it relates to growth theory and practical growth accounting. We present estimates of productivity in the US and European economies in recent decades including intangibles and discuss why, despite relatively rapid growth in intangible capital and what seems to be a modern technological revolution, productivity growth has slowed since the global financial crisis.

Full-Text Access | Supplementary Materials

“The Economics of Intangible Capital,” by Nicolas Crouzet, Janice C. Eberly, Andrea L. Eisfeldt and Dimitris Papanikolaou

Intangible assets are a large and growing part of firms’ capital stocks. Intangibles are accumulated via investment–foregoing consumption today for output in the future—but they lack a physical presence. Rather than stopping with this “lack,” we instead focus on the positive properties of intangibles. Specifically, intangibles must be stored, so characteristics of the storage medium have important implications for their value and use. These properties include non-rivalry, allowing the intangible to be used simultaneously in different production streams, and limited excludability, which prevents the firm from capturing all the benefits or rents from the intangible. We develop these ideas in a simple way to illustrate how outcomes such as scalability and distribution of ownership follow. We discuss how intangibles can help to understand important trends in macroeconomics and finance, including productivity, factor shares, inequality, investment and valuation, rents and market power, and firm financing.

Full-Text Access | Supplementary Materials

“Marketing Investment and Intangible Brand Capital,” by Bart J. Bronnenberg, Jean-Pierre Dubé and Chad Syverson

US companies invested over $500 billion in 2021 in intangible brand capital, over 2% of GDP. During the past decade, US companies have also been growing their internal marketing capabilities, an often overlooked source of human capital. We discuss the private and social benefits of these intangible brand capital stocks. While the private returns to companies are fairly clear, the academic literature has been divided over the social benefits and costs of advertising and promotion, the two key investment vehicles. We also discuss the implications of brand capital for measured productivity.

Full-Text Access | Supplementary Materials

Symposium on Human Capital

“Four Facts about Human Capital,” by David J. Deming

This paper synthesizes what economists have learned about human capital since Becker (1962) into four stylized facts. First, human capital explains at least one-third of the variation in labor earnings within countries and at least half of the variation across countries. Second, human capital investments have high economic returns throughout childhood and young adulthood. Third, we know how to build foundational skills such as literacy and numeracy, and resources are often the main constraint. Fourth, higher-order skills such as problem-solving and teamwork are increasingly valuable, and the technology for producing these skills is not well understood. We know that investment in education works and that skills matter for earnings, but we do not always know why.

Full-Text Access | Supplementary Materials

“Measuring Human Capital,” by Katharine G. Abraham and Justine Mallatt

We review the existing literature on the measurement of human capital. Broadly speaking, economists have proposed three approaches to constructing human capital measures—the indicator approach, the cost approach, and the income approach. Studies employing the indicator approach have used single measures such as average years of schooling or indexes of multiple measures. The cost approach values human capital investments based on spending. The income approach values human capital investments by looking forward to the increment to expected future earnings they produce. The latter two approaches have the significant advantage of consistency with national income accounting practices and measures of other types of capital. Measures based on the income approach typically yield far larger estimates of the value of human capital than measures based on the cost approach. We outline possible reasons for this discrepancy and show how changes in assumptions can reconcile estimates based on the two approaches.

Full-Text Access | Supplementary Materials

Symposium on Inflation Expectations

“Expected and Realized Inflation in Historical Perspective,” by Carola Binder and Rupal Kamdar

This paper provides historical context for the relationship between expected and realized inflation. We begin with a discussion of early theoretical thought about how inflation expectations are formed. Then, we discuss survey- and asset-based measures of inflation expectations and assess their empirical relationship with realized inflation. Expected and realized inflation are strongly correlated over long samples, but over short samples the correlations can weaken. Lastly, to better understand the subtleties of the interaction between expected and realized inflation over short-lived but important events, we provide a narrative account of the relationship during the Great Depression of the 1930s, the Great Inflation of the 1970s, the Great Recession of 2008–2009, and the recent COVID-19 pandemic. These episodes offer compelling evidence of the importance of expectations and policy regime changes in inflation dynamics.

Full-Text Access | Supplementary Materials

“The Subjective Inflation Expectations of Households and Firms: Measurement, Determinants, and Implications,” by Michael Weber, Francesco D’Acunto, Yuriy Gorodnichenko and Olivier Coibion

Households’ and firms’ subjective inflation expectations play a central role in macroeconomic and intertemporal microeconomic models. We discuss how subjective inflation expectations are measured, the patterns they display, their determinants, and how they shape households’ and firms’ economic choices in the data and help us make sense of the observed heterogeneous reactions to business-cycle shocks and policy interventions. We conclude by highlighting the relevant open questions and why tackling them is important for academic research and policymaking.

Full-Text Access | Supplementary Materials

Symposium on Methods in Applied Micro

“Blending Theory and Data: A Space Odyssey,” by Dave Donaldson

This article describes methods used in the field of spatial economics that combine insights from economic theory and evidence from data in order to answer counterfactual questions. I outline a general framework that emphasizes three elements: a specific question to be answered, a set of empirical relationships that can be identified from exogeneity assumptions, and a theoretical model that is used to extrapolate from such empirical relationships to the answer that is required. I then illustrate the application of these elements via a series of twelve examples drawn from the fields of international, regional, and urban economics. These applications are chosen to illustrate the various techniques that researchers use to minimize the theoretical assumptions that are needed to traverse the distance between identified empirical patterns and the questions that need to be answered.

Full-Text Access | Supplementary Materials

“Principles for Combining Descriptive and Model-Based Analysis in Applied Microeconomics Research,” by Neale Mahoney

In this article, I offer guidance on how to combine descriptive and model-based empirical analysis within a paper. Drawing on examples from three recently published applied microeconomics papers, I argue that it is important to create a tight link between the descriptive analysis and the bottom-line deliverable of the model-based analysis, and I try to distill some lessons or principles for doing so. I also offer some thoughts on when a paper should start with descriptive analysis and then proceed to model-based analysis and when alternative structures may be desirable.

Full-Text Access | Supplementary Materials

Article

“Overreaction and Diagnostic Expectations in Macroeconomics,” by Pedro Bordalo, Nicola Gennaioli and Andrei Shleifer

We present the case for the centrality of overreaction in expectations for addressing important challenges in finance and macroeconomics. First, non-rational expectations by market participants can be measured and modeled in ways that address some of the key challenges posed by the rational expectations revolution, most importantly the idea that economic agents are forward-looking. Second, belief overreaction can account for many long-standing empirical puzzles in macro and finance, which emphasize the extreme volatility and boom-bust dynamics of key time series, such as stock prices, credit, and investment. Third, overreaction relies on psychology and is disciplined by survey data on expectations. This suggests that relaxing the assumption of rational expectations is a promising strategy, helps theory and evidence go together, and promises a unified view of a great deal of data.

Full-Text Access | Supplementary Materials

Features

“Retrospectives: On the Evolution of the Rules versus Discretion Debate in Monetary Policy,” by Harris Dellas and George S. Tavlas

Episodes of macroeconomic upheaval associated with monetary policy failure have provided the stage for important debates on rules versus discretion. We discuss the main features, results, commonalities, and differences in the debates that emerged after three such episodes. The modern debate was born during the Great Inflation of the 1970s and focused on both rules versus discretion and the properties of alternative rules. The middle debate originated with Henry Simons and the Chicago School during the Great Depression in the 1930s and focuses on policy uncertainty. The earliest systematic debate involved the Currency and Banking Schools in Britain in the 1820s, but, in spite of the views of many of its participants and doctrinal historians, it seems to have primarily been about the degree of activism under a single rule—that of the gold standard.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

When Civilizations Collide: Economists and Politicians


Alan Blinder offers some meditations on the interaction between economists and politicians in “Beyond the ‘Lamppost Theory’ of Economic Policy” (Chicago Booth Review, August 1, 2022). The lamppost theory is “that politicians use economics the same way a drunk uses a lamppost: for support rather than illumination.” Here are some of Blinder’s observations on differences in the way politicians and economists think and talk, but the essay has a number of additional thoughts worth your time.

As often happens when civilizations collide, politicians and economists each find the other more than a little odd. There are, in fact, important differences between the two groups, not just in their goals and incentives, but in areas as fundamental as the ways they think and talk. For example:

Logic. You might think logic is logic. However, I characterize the logic economists use as Aristotelian logic—that is, the classical system of logic based on syllogisms, corollaries, and deductive reasoning. Politicians often don’t use that logic. They use instead what I call political logic—what will work best with the voters or with other politicians with whom they are negotiating.

Language. Logic-based as it is, the language that we economists use in speaking and writing is often dry—sometimes barely intelligible to laypeople. In contrast, the language that politicians like to speak and write in is often vivid—and in clear English. But because it is so full of spin, economists tend to tune out when we hear it.

Calculation. … Economists use arithmetic and, when necessary, calculus. Economic equations can be complex, but they are straightforward in that they follow the rules most of us learned in math class. But political arithmetic is different—it’s weighted by influence. … Imagine a policy that generates $1 million each for 10 people, and costs 10 million people $2 each. Making the simple economic calculations, we can quickly see the policy leads to $10 million of gains and $20 million of losses. So we conclude it’s probably a bad idea, unless there’s some good reason to do it despite the loss of wealth. But if you apply political calculus to the same numbers, … [t]he 10 people that it helps so much will pay rapt attention and may even show their gratitude with political donations. Meanwhile, the 10 million people who lose two bucks apiece are probably not even going to notice it. The policy, therefore, has political merit.

Intelligence. Academic economists prize traditional intelligence as captured by things such as high IQ, good ideas, and the ability to express those ideas. Success in academic economics does not typically rely on people skills. Successful politicians, on the other hand, depend much more on their social and emotional intelligence. …

Objectives. Economists who engage in policy are generally trying to maximize social welfare. Politicians, of course, are trying to maximize their prospects for election or reelection, which may not be correlated with social welfare.

Policy evaluation. The aspect of a policy that matters for economists is the substance of the policy: Is it really good for society? What matters in the political world are, naturally, the politics and the message involved in the policy. Does it sound good? Needless to say, what is good and what sounds good are not always aligned.

Concerns. Economists’ main concern is efficiency: we talk about it, think about it, dream about it. But efficiency doesn’t much interest politicians, who are far more concerned about fairness, or perceived fairness, which is a broad concept encompassing income distribution but also much more.

The Invention of Peaches and Pimentos

The peach, in the form that Americans know it, was invented in 1875, in the state of Georgia, by Samuel Rumph. Until then, something called a “peach” existed, but it was barely a commercial product, because it couldn’t be shipped. After years of tinkering and cross-breeding with fruit that no one much wanted, Rumph developed the Elberta peach, named after his wife. Cynthia R. Greenlee tells the story in “Reinventing the Peach, the Pimento, and Regional Identity” (Issues in Science and Technology, Summer 2022). She writes:

Just how Rumph begat this new peach is uncertain. It was succulent and bright yellow with red markings. Its pit came out easily, and its fruit matured early in the season. That timing and its firmness were boons, and the trees yielded their large, handsome fruit prolifically. As historian Thomas Okie wrote in his rigorous and compelling study of how the peach became a Georgia icon, Rumph had produced the “industrial peach,” a reliable producer that was reasonably good to eat, relatively resistant to pests and diseases, amenable to growing in different climes and soil, and easily transportable. 

As a pioneer of what would eventually become agribusiness, Rumph considered the whole peach, from grafting to delivery, and intervened at various stages in the supply chain. First, he bred the peach that took the world by storm. Then, as a member of the Georgia State Horticultural Society’s committee on packing and shipping peaches, Rumph devoted himself to studying how to send peaches around the country. Although the first shipment of peaches to New York had happened around the time of Rumph’s birth in the 1850s, shipping still bedeviled the peach grower. Picked too green, they lost flavor when refrigerated. Too ripe, and they rotted almost immediately after emerging from cold shipment.

It wasn’t long before Rumph reported making a successful shipment of peaches to New York, offering proof of concept that Georgia peaches could ride the railways well and sell high, even though it was an arduous journey for the fruit: usually three days total of trains and transfer to steamers. In an effort to make shipping a precise science rather than a gamble, Rumph created a slatted crate that could be stacked and wheeled, founding the Elberta Crate Company. His unpatented invention spawned industrywide imitation, and he went on to invent a refrigerated railway car—also unpatented—that was widely used by fruit growers thereafter.  Rumph’s industry-changing shipping inventions established a durable and productive connection between fruit growers, the state, and industry. Railroads were booming across the South, buoyed by ample northern investment. And peach growers’ earnings—and nurserymen’s active involvement in politics—determined where railroads would go and stop. 

As Greenlee points out, the arrival of the peach altered the economic position and public view of Georgia. It wasn’t long before the same infrastructure of agricultural development and processing started by the peach led to crops of melons, berries and other fruits and vegetables. Georgia boasted that it was a place where both apples and oranges could grow–a pointed jab at northern orchards. The crops and best-practice growing technologies were spread by agricultural extension services.

Greenlee tells the story of another Georgia agricultural family, Samuel Riegel and his sons, who were to the pimento what Rumph was to the peach. In 1905, the family became obsessed with growing peppers. They got their US Congressman to obtain seeds from Europe, which they bred and cross-bred, and by 1911 they were distributing pimento seeds. Again, a chain of complementary innovations was important, and here one of Riegel’s sons took the lead:

Still, a particularly vexing wrinkle marred Perfection’s flawlessness, one common to all pimentos: thick, hard-to-process skin. It had to be softened with lye or burned off in a fire, then peeled by hand.  The other Riegel son, Mark, who worked briefly for the experiment station, thought there had to be a better way. He invented a roasting machine that ferried peppers on a continuous chain through a line of fire, turning the skin burnoff into a quicker mass process. Like Rumph before him, he attended to the supply chain end to end. Not only did Mark Riegel invent a process, he established a string of canneries that induced growers to cultivate peppers on contract. From the seed to the jar, the Riegels had cultivated the “perfect” pimento, enlisted farmers to grow their marvel, developed a better processing method, and created upbeat marketing campaigns with their Sunshine brand.

The overall lesson here is that a successful innovation is a multifaceted process. It often involves individuals willing to take the risks of experimentation and research, but those individuals often have a better chance if they are supported by an invisible infrastructure of science and shared information. The ultimate success depends not just on the product itself, but also on complementary innovations: processing, packaging, transportation. Success also depends on nurturing demand for the new product, often via marketing and publicity efforts. Finally, when these general ingredients are in place, one successful innovation can light the way to others.

Immigrants Assimilating, Then and Now

Noah Smith interviews Leah Boustan about her research on various aspects of immigration at his Noahpinion website (July 17, 2022). In answer to a question about “the biggest popular misconceptions about immigration in America today,” Boustan responds:

Americans vastly overestimate how many immigrants are in the country today. According to a survey conducted by Stefanie Stantcheva and her co-authors, Americans guess that 36% of the country is born abroad, whereas the real number is 14%. So, this misconception gives rise to fears that we are in an “immigration crisis” or that we have a “flood” of immigrants coming to our shores. In reality, the immigrant share of the population today (14%) only just reached the same level as it was during the Ellis Island period for over 50 years! After this, I would say that the second biggest misconception is that immigrants nowadays are faring more poorly in the economy and are less likely to become American than immigrants 100 years ago.

On the issue of how immigrants do at catching up economically:

We find that Mexican immigrants and their children achieve a substantial amount of integration, both economically and culturally. First, on the economic side, we compare the children of Mexican-born parents who were raised at the 25th percentile of the income distribution — that’s like two parents working full time, both earning the minimum wage — to the children of US-born parents or parents from other countries of origin. The children of Mexican parents do pretty well! Even though they were raised at the 25th percentile in childhood, they reach the 50th percentile in adulthood on average. Compare that to the children of US-born parents raised at the same point, who only reach the 46th percentile. Of course, children of other immigrant backgrounds do even better, but the children from Mexican households are experiencing a lot of upward mobility. …

[T]he pattern … whereby the kids of poor and working-class immigrants do better than their American counterparts, is true both today and in the past. The children of poor Irish or Italian immigrant parents outperformed the children of poor US-born parents in the early 20th century; the same is true of the children of immigrants today. 

We are able to delve into the reasons for this immigrant advantage in the past in great detail, and we find that the single most important factor is geography. Immigrants tended to settle in dynamic cities that provided opportunities both for themselves and for their kids. So, in the past, this meant avoiding Southern states, which were primarily agricultural and cotton-growing at the time, and – outside of the South – moving to cities more than to rural areas. If you think about it, it makes sense: immigrants have already left home, often in pursuit of economic opportunity, so once they move to the US they are more willing to go where the opportunities are. 

Geography still matters a lot today, but not as much as in the past. Instead, we suspect that educational differences between groups matter today. Think about a Chinese or Indian immigrant who doesn’t earn very much, say working in a restaurant or a hotel or in childcare. In some cases, the immigrant him or herself arrived in the US with an education – even a college degree – but has a hard time finding work in their chosen profession. Despite the fact that these immigrant families do not have many financial resources, they can pass along educational advantages to their children.

On the issue of how immigrants assimilate culturally, Boustan comments:

We are economists, so the first work we did on immigration was focused on economic outcomes like earnings and occupations. But, voters often care more deeply about cultural issues – both in the past and today. So, we realized that we wanted to try to measure ‘fitting in’ or cultural assimilation using as many metrics as we could find. We looked at learning English, of course, but also who immigrants marry, whether immigrants live in an enclave neighborhood or a more integrated area, and – one of our favorite measures – the names that immigrant parents choose for their children. These are all measures that can be gathered for immigrants today and 100 years ago; there are other metrics for today that don’t exist for the past — like ‘do immigrants describe themselves as patriotic’ (answer is: they do).

What we learned is that immigrants take steps to ‘fit in’ just as much today as they did in the past. So, for example, we can look at the names that immigrant parents choose for their kids. Both in the past and today, immigrants choose less American-sounding names for their kids when they first arrive in the US, but they start to converge toward the names that US-born parents pick for their kids as they spend more time in the country. Immigrants never completely close this ‘naming gap’ but they move pretty far in that direction, both then and now. 

The No-Burn Forest Policy: Origins and Consequences

The western United States has experienced some extraordinarily large forest fires in recent years. Part of the reason is drought conditions that have left the landscape tinder-dry. But another part of the reason is a century-long legacy of shutting down forest fires–even controlled burns. The PERC Reports magazine considers the history and consequences in a symposium on “How to Confront the Wildfire Crisis” in the Summer 2022 issue.

For example, Brian Yablonski discusses the origins of the policy in “The Big Burn of 1910 and the Choking of America’s Forests.” He writes:

Record low precipitation in April and May [1910] coupled with severe lightning storms in June and sparks from passing trains had ignited many small fires in Montana and Idaho. More than 9,000 firefighters, including servicemembers from the U.S. Army, waged battle against the individual fires. The whole region seemed to be teetering on the edge of disaster. Then, on August 20, a dry cold front brought winds of 70 miles per hour to the region. The individual fires became one. Hundreds of thousands of acres were incinerated within hours. The fires created their own gusts of more than 80 miles per hour, producing power equivalent to that of an atomic bomb dropped every two minutes.

Heroic efforts by firefighters to save small mountain towns and evacuate their people became the stuff of legend. “The whole world seemed to us men back in those mountains to be aflame,” said firefighter Ed Pulaski, one of the mythical figures to emerge from the Big Burn. “Many thought it really was the end of the world.” Smoke from the Mountain West colored the skies of New England. In just two days, the Big Burn torched an unfathomable 3 million acres in western Montana and northern Idaho, mostly on federally owned forest land, and left 85 dead in its wake, 78 of them firefighters. The gigafire-times-three scarred not only the landscape, but also the psyche of the Forest Service, policymakers, and ordinary Americans.

After the Big Burn, forest policy was settled. There was no longer any doubt or discussion. Fire protection became the primary goal of the Forest Service. And with it came a nationwide policy of complete and absolute fire suppression. In the years to follow, the Forest Service would even formalize its “no fire” stance through the “10 a.m. rule,” requiring the nearly impossible task of putting out every single wildfire by 10 a.m. the day after it was discovered. The rule would stay in effect for most of the century.

Yablonski pointed me to an essay called “Fire and the Forest—Theory of Light Burning” by FE Olmsted, published in the January 1911 issue of the Sierra Club Bulletin (and available via the magic of HathiTrust). Olmsted is one of the seminal figures in American forestry, credited as one of the founders of the US National Forest System, and he also taught forestry at Harvard. In this essay, Olmsted is writing just after the Big Burn of 1910, and he is arguing against “light burning” in favor of the fullest possible suppression of forest fires. Olmsted is fine with burning an area after logging has occurred, to clean it up for new growth. But he argues that more extensive “light burning” will wipe out young trees, which is a waste of timber that could be cut in the future. He wrote:

Public discussion of the matter has brought to light, among other things, the fact that certain people still believe in the old theory of “burning over the woods” periodically in order to get rid of the litter on the ground, so that big fires which may come along later on will find no fuel to feed upon. This theory is usually accompanied by reference to the “old Indian fires” which the redman formerly set out quite methodically for purposes connected with the hunting of game. We are told that the present virgin stands of timber have lived on and flourished in spite of these Indian fires. Hence, it is said, we should follow the savage’s example of “burning up the woods” to a small extent in order that they may not be burnt up to a greater extent bye and bye. Forest fires, it is claimed, are bound to run over the mountains in spite of anything we can do. Besides, the statement is made that litter will gradually accumulate to such an extent that when a fire does start it will be impossible to control it and we shall lose all our timber. Why not choose our time in the fall or spring when the smaller refuse on the ground is dry enough to burn, the woods being damp enough to prevent any serious damage to the older trees, and burn the whole thing lightly? This theory of “light burning” is especially prevalent in California and has cropped out to a very noticeable extent since the recent destructive fires in Idaho and Montana.

The plan to use fire as a preventive of fire is absolutely good. Everything depends, however, upon how it is used. The Forest Service has used fire extensively ever since it assumed charge of the public timber lands in California. We are selling 200,000,000 feet of timber and on all the lands which we logged over we see to it that the slashings and litter upon the ground are piled up and burned. This must be accomplished, of course, in such a way that no damage results to the younger tree growth, such as seedlings, saplings, thickets and poles of the more valuable species. If we should burn without preparing the ground beforehand, most of the young trees would be killed. …

With the exception of two or three lumber companies the Forest Service is the only owner of timber in the State of California which has used and is using fire in a practical way for cleaning-up purposes. What “light burning” has been done on private lands in California, accompanied by preparation of the ground beforehand, shows that wherever the fire has actually burned, practically all young trees up to fifteen years of age have been killed absolutely, as well as a large part of those between the ages of fifteen and forty years. The operation, to be sure, has resulted in cleaning up the ground to a considerable extent and will afford fairly good protection to mature trees in case they are threatened by fire in the future. If a fire comes along it will naturally not have as much rubbish to feed upon and may not be so hot as to injure the larger tree growth. In other words, a safeguard has been provided for timber which may be turned into dollars in the immediate future. With this advantage has come the irreparable damage to young trees. It has amounted, in fact, to the almost total destruction of all wood growth up to the age of twenty years. This is not forestry; not conservation; it is simple destruction. That is the whole story in a nutshell.

The private owner of timber, whose chief concern is the protection of trees which can be turned into money immediately and who cares little or nothing about what happens to the younger stuff which is not yet marketable, may look upon the “light burning” plan as being both serviceable and highly practicable, provided the expense is reasonable. On the other hand, the Government, first of all, must keep its lands producing timber crops indefinitely, and it is wholly impossible to do this without protecting, encouraging, and bringing to maturity every bit of natural young growth. …

The accumulation of ground litter is not at all serious and the fears of future disastrous fires, as a result of this accumulation, are not well founded. Fires in the ground litter are easily controlled and put out. On the other hand, fires in brush or chaparral are very dangerous, destructive, and difficult to handle. Brush areas under and around standing timber are the worst things we have to contend with. Brush is not killed by fire; it sprouts and grows up again just as densely as before. The best way to kill brush is to shade it out by tree growth, but to do this we must let young trees grow. Fires and young trees cannot exist together. We must, therefore, attempt to keep fire out absolutely. Some day we will do this and just as effectively as the older countries have done it for the past 100 years. In the mean time we are keeping fires down in California by extinguishing them as soon as possible after they begin.

It is true that fires will always start; that we can never provide against. On the other hand, the supposition that they will always run is not well taken. If we can stop small fires at the start, fires will never run. With more men, more telephones, and more trails we shall be able to do this and at a cost of only a cent or two more an acre.

After about a century of following Olmsted’s prescription that “[w]e must, therefore, attempt to keep fire out absolutely,” and that this is possible, we are now confronted with waves of historically large wildfires. Notice that Olmsted’s reasoning for stopping all fires, which may sound like a form of conservationism, is actually based implicitly on the idea that all the forests will be regularly logged!

Blocking fires changes the character of the forest. One of the other essays in the issue refers to a 2022 study by authors at the US Forest Service, “Operational resilience in western US frequent-fire forests,” published in Forest Ecology and Management. They look at the density of western US forests and find that the policy of fire suppression has led to much greater density (which I suppose Olmsted would view as a success story of trees waiting to be logged), but also generally smaller trees as forest growth becomes more competitive. They write:

With the increasing frequency and severity of altered disturbance regimes in dry, western U.S. forests, treatments promoting resilience have become a management objective but have been difficult to define or operationalize. Many reconstruction studies of these forests when they had active fire regimes have documented very low tree densities before the onset of fire suppression. Building on ecological theory and recent studies, we suggest that this historic forest structure promoted resilience by minimizing competition which in turn supported vigorous tree growth. To assess these historic conditions for management practices, we calculated a widely-used measure of competition, relative stand density index (SDI), for two extensive historical datasets and compared those to contemporary forest conditions. Between 1911 and 2011, tree densities on average increased by six to seven fold while average tree size was reduced by 50%. Relative SDI for historical forests was 23–28% of maximum, in the ranges considered ‘free of’ (<25%) to ‘low’ competition (25–34%). In contrast, most (82–95%) contemporary stands were in the range of ‘full competition’ (35–59%) or ‘imminent mortality’ (≥60%). Historical relative SDI values suggest that treatments for restoring forest resilience may need to be much more intensive than the current focus on fuels reduction.
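
For readers who want a concrete sense of the arithmetic, here is a minimal sketch in Python. It assumes Reineke’s standard formula for the stand density index; the maximum SDI of 450 (a value often cited for ponderosa pine) and the two example stands are illustrative assumptions, not figures taken from the paper.

```python
# A rough sketch of the relative SDI calculation described in the abstract.
# Reineke's stand density index: SDI = TPA * (QMD / 10) ** 1.605, where TPA
# is trees per acre and QMD is quadratic mean diameter in inches. The
# maximum SDI is species-specific; 450 here is illustrative, not from the paper.

def relative_sdi(trees_per_acre: float, qmd_inches: float, max_sdi: float = 450.0) -> float:
    """Stand density as a percentage of the maximum SDI."""
    sdi = trees_per_acre * (qmd_inches / 10.0) ** 1.605
    return 100.0 * sdi / max_sdi

def competition_class(rel_sdi: float) -> str:
    """Map relative SDI to the competition categories quoted above."""
    if rel_sdi < 25:
        return "free of competition"
    if rel_sdi < 35:
        return "low competition"
    if rel_sdi < 60:
        return "full competition"
    return "imminent mortality"

# Hypothetical stands: a sparse historical stand of 30 large trees per acre
# versus a contemporary stand with ~6.7x the trees at half the diameter.
for label, tpa, qmd in [("historical", 30, 20.0), ("contemporary", 200, 10.0)]:
    r = relative_sdi(tpa, qmd)
    print(f"{label}: relative SDI = {r:.0f}% ({competition_class(r)})")
# -> historical: ~20% (free of competition); contemporary: ~44% (full competition)
```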

Their findings are consistent with observations on the ground. For example, Yablonski’s essay in PERC Reports mentions the views of the head of a Californian tribe, the North Fork Mono:

According to Ron Goode, tribal chairman of the North Fork Mono, prior to white settlement, Native Americans carried out “light burning” on 2 percent of the state annually. As a result, most forest types in California had about 64 trees per acre. Today, it is more common to see 300 trees per acre. This has led to a fiery harvest of destruction—bigger, longer, hotter wildfires.
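
A quick back-of-the-envelope check on those numbers: burning 2 percent of the state each year implies that an average acre burned roughly once every 50 years (1/0.02 = 50), and going from 64 to about 300 trees per acre is nearly a fivefold increase in density.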

Several implications follow. One is that modern forestry practices since the Big Burn of 1910 have substantially changed the character of western forests. I suppose one can argue over whether the change is on balance good or bad, but the fact of the change itself is established.

The combination of stopping burning for a century or so, the resulting heavy growth of trees, the decades-long accumulation of dead timber, and recent years of drought conditions has set the stage for major fires. Yablonski writes: “Fires that burn more than 100,000 acres are becoming commonplace in America. Nowhere is that more evident than in California. Throughout the 20th century, there were 45 megafires recorded in the state. In the first 20 years of this century, there have already been 35—seven in 2021 alone.”
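
Put those figures in per-year terms: 45 megafires over the 100 years of the 20th century is a rate of about 0.45 per year, while 35 megafires in 20 years is 1.75 per year, roughly a fourfold increase in frequency.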

The century-old policy of putting out every wildfire within a day of its discovery has clearly failed. The alternatives to reduce the potential fuel load for forest fires are much more widespread logging or controlled burning. Another essay in PERC Reports, by Tate Johnson, is called “Returning Fire to the Land.” Johnson describes the current situation in South Carolina:

One morning last March [2022], the South Carolina Forestry Commission website displayed the number of active fires in the state: 163. An interactive map showed each fire, represented by markers that ranged from red to orange to yellow to teal. In contrast to similar maps that are followed closely throughout the summer, particularly in the West, the markers didn’t represent wildfires. Indeed, South Carolina’s wildfire tracker showed zero active that day. Rather, these were “good” fires: prescribed burns that had been planned in advance, set deliberately, and aimed to achieve specific land management objectives, typically to control vegetation and reduce hazardous fuels.

“It’s gonna burn one day or another,” says Darryl Jones, forest protection chief of the South Carolina Forestry Commission, “so we should choose when we burn it and make sure we do it on the right days when it’s most beneficial.” He adds that the idea is to “burn an area purposely before it can burn accidentally.”

The different colors of map markers signified the purpose of each fire. Some burns aimed to improve wildlife habitat by stimulating seed production, clearing out a landscape’s lower layer of growth, or creating forest openings. Others were set to clear crop fields in preparation for planting or to burn debris piles that had been gathered and stacked. Still more were tagged “hazard reduction”: fires set to remove dangerous accumulations of pine needles, briars, shrubs, and other fuels that naturally build up in southern forests. Spring is the prime time to burn given its favorable conditions for wind, temperature, humidity, and fuel, although the burn window can extend earlier or later into the year.

Is a Recession Defined as “Two Negative Quarters”?

The US economy is likely to show negative growth of its gross domestic product for the first two quarters of 2022. In late June, the Bureau of Economic Analysis estimated, based on updated evidence, that GDP shrank at an annualized rate of 1.6% in the first quarter of 2022. The first preliminary estimate from the BEA about growth in the second quarter of 2022 comes out on Thursday morning. But according to estimates from the Federal Reserve Bank of Atlanta, which has a “nowcasting” model that tries to estimate economic changes in real time, the announcement will likely show another decline of 1.6% in the second quarter of 2022.
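
As a reminder of what an “annualized rate” means here: the quarter-over-quarter change in GDP is compounded over four quarters. A minimal sketch, with placeholder GDP levels rather than actual BEA figures:

```python
# How an annualized quarterly growth rate works: the one-quarter growth
# factor is compounded over four quarters. The GDP levels are placeholders.

def annualized_rate(gdp_prev: float, gdp_curr: float) -> float:
    """Annualized percent change implied by one quarter's change."""
    return ((gdp_curr / gdp_prev) ** 4 - 1) * 100

# A quarterly decline of about 0.4% compounds to roughly -1.6% annualized.
print(f"{annualized_rate(100.0, 99.6):.2f}%")  # -> -1.59%
```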

My question here is one of nomenclature and analysis: Does two quarters of declining GDP mean that the US economy is in a recession? After all, the unemployment rate in June 2022 was 3.6%, which historically would be viewed as a low level. The number of jobs in the US economy plummeted during the short pandemic recession from 152.5 million in February 2020 to 130.5 million in April 2020, but since then has been rising steadily and was back up to 152 million in June 2022. Similarly, the labor force participation rate of the US economy dropped from 63.4% in February 2020 to 60.2% in April 2020, but has rebounded since then and has been in the range of 62.2-62.4% in recent months. So can you have a “recession” that happens simultaneously with low unemployment rates and a rising number of jobs?

The definition of a “recession” is not a physical constant like the boiling point of water. It is quite common to define a recession as “two quarters of negative GDP growth.” But there is actually no US-government-approved definition of “recession.” In the US context, the most commonly used recession dates are determined by a group of academic economists operating under the auspices of the National Bureau of Economic Research. For example, here’s my post about the NBER announcement that February 2020 was the end of the previous economic upswing, and here’s my post about the NBER announcement that the pandemic recession was only two months long.

The White House Council of Economic Advisers has recently referred to the NBER Business Cycle Dating Committee as “the official recession scorekeeper,” but this is incorrect. Although US business cycle dates have been based on NBER research for a long time, going back to the Great Depression, an organized NBER Business Cycle Dating Committee wasn’t formed until 1978. It is not authorized by law. In fact, the lack of an official definition is probably a good thing, because it’s good to keep economic statistics out of the hands of politicians, and there are obvious political implications to pronouncing on the dates when a recession has started, is ongoing, or has stopped. As one recent example, if a “recession” were strictly defined as two consecutive quarters of declining GDP, then the Trump administration would have been justified in saying that the US economy did not have a pandemic “recession” at all.

However, there are a number of situations where a recession is in fact defined as two negative quarters of GDP growth. For example, here’s a Eurostat publication stating: “A recession is normally defined in terms of zero or negative growth of GDP in at least two successive quarters.” Here are two IMF economists stating: “Most commentators and analysts use, as a practical definition of recession, two consecutive quarters of decline in a country’s real (inflation adjusted) gross domestic product (GDP) …” Here’s a Bank of India report (see p. 28) referring to two quarters of negative GDP growth as a “technical recession.” Here’s a “glossary” from the UK Treasury defining a recession: “The commonly accepted definition of a recession in the UK is two or more consecutive quarters (a period of three months) of contraction in national GDP.” Here’s a short comment from the World Economic Forum stating: “There is no official, globally recognized definition of a recession. In 1974, the US economist Julius Shiskin described a recession as ‘two consecutive quarters of declining growth’, and many countries still adhere to that.”

In short, if the US economy does experience two consecutive quarters of declining GDP and some commenters choose to call that a “recession,” those commenters have plenty of justification, as a matter of nomenclature, for doing so.

But setting aside the issue of whether people can find a justification for using the word, is the term “recession” appropriate for a low-unemployment US economy experiencing a surge of inflation? Julius Shiskin, mentioned above, was the Commissioner of the US Bureau of Labor Statistics back in 1974, when he wrote an article for the New York Times about dating recessions. He wrote:

A rough translation of the bureau’s [NBER’s] qualitative definition of a recession into a quantitative one, that almost anyone can use, might run like this:

In terms of duration—declines in real G.N.P. for 2 consecutive quarters; a decline in industrial production over a six‐month period.

In terms of depth—A 1.5 per cent decline in real G.N.P.; a 1.5 per cent decline in nonagricultural employment; a two‐point rise in unemployment to a level of at least 6 per cent.

In terms of diffusion—A decline in nonagricultural employment in more than 75 per cent of industries, as measured over six‐month spans, for 6 months or longer.
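
To make the duration and depth screens concrete, here is a minimal sketch in Python; the quarterly series is hypothetical, and the industrial-production, employment, and diffusion criteria are omitted for brevity.

```python
# A sketch of Shiskin's duration and depth screens applied to a quarterly
# real GNP/GDP series. The levels below are hypothetical, not official data.

def two_quarter_decline(levels: list[float]) -> bool:
    """Duration screen: output fell in each of the last two quarters."""
    return len(levels) >= 3 and levels[-1] < levels[-2] < levels[-3]

def depth_at_least(levels: list[float], pct: float = 1.5) -> bool:
    """Depth screen: peak-to-latest decline of at least `pct` percent."""
    peak = max(levels)
    return (peak - levels[-1]) / peak * 100 >= pct

gdp = [100.0, 101.0, 100.2, 99.3]  # hypothetical quarterly levels
print(two_quarter_decline(gdp), depth_at_least(gdp))  # True True
```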

The specific criteria used by NBER have evolved over time (see here and here), but the general sense that a “recession” should include more than just GDP statistics has continued. Shiskin also wrote in 1974:

The bureau’s [NBER] definition of a recession is, however, known to only a small number of specialists in business cycle studies. Many people use a much simpler definition—a two‐quarter decline in real G.N.P. While this definition is simplistic, it has worked quite well in the past. … The public at large now appears to use the term recession to describe a period of economic distress, such as we have had in recent months.

As someone who tries to keep my categories clear, I would not yet refer to the US economy as “in a recession,” even if the GDP numbers later this week show two consecutive quarters of decline. In my mind, not all economic distress is properly called a “recession.” The current problems of the US economy, it seems to me, are a mixture of the surge of inflation that is driving down the buying power of real wages for everyone, and the ongoing adjustment of labor markets, supply chains, and firms to the aftereffects of the pandemic. But I also wouldn’t criticize too loudly those who apply the “recession” label based on two quarters of negative real GDP growth. Moreover, efforts by the Federal Reserve to choke off inflation with a series of interest rate increases have contributed to a number of US recessions since World War II, so there is a real risk that in the coming months, the US economy will end up experiencing the combination of lower output and job losses that will qualify as a “recession” by any definition.