Pandemic Recession: By Far the Shortest on Record

In the United States, there is no government committee to set the start and end dates of recessions–for obvious political reasons. However, the National Bureau of Economic Research has a Business Cycle Dating Committee which meets irregularly to determine peaks and troughs. A peak is the start of a recession: the high point before an economy starts down. A trough is the bottom of a recession: the low point before an economy starts up.

Notice in particular that in this framework, the “end” of a recession does not mean that an economy has returned to its pre-recession norms. It just means that the period of contraction is over and a period of expansion has started.

Thus, back on June 8, 2020, the Business Cycle Dating Committee named February 2020 as the peak month before the recession hit. Frankly, this was not a decision requiring a deep level of economic insight. The economic statistics for output and employment plunged in an almost audible way in March 2020.

I missed the announcement when it was made on July 19, but the Business Cycle Dating Committee decided that the pandemic recession was just two months long, ending in April 2020. There is an old rule of thumb that “a recession is two quarters of negative growth,” but that rule has never been official. For comparison, the contraction from peak to trough during the Great Recession was 18 months; the recessions of 2001 and of 1990-91 both had eight-month periods of actual contraction. The shortest previous US recession on record was six months, from January to June 1980, although it was soon followed by a “double-dip” 16-month recession from July 1981 to November 1982. The Great Depression had a contraction period of 43 months from August 1929 to March 1933. With regard to the pandemic recession, the Committee wrote:

In determining that a trough occurred in April 2020, the committee did not conclude that the economy has returned to operating at normal capacity. An expansion is a period of rising economic activity spread across the economy, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales. Economic activity is typically below normal in the early stages of an expansion, and it sometimes remains so well into the expansion. The committee decided that any future downturn of the economy would be a new recession and not a continuation of the recession associated with the February 2020 peak. The basis for this decision was the length and strength of the recovery to date.

As has become its established practice, the NBER committee looked at measures of the economy involving both employment and production. The employment measures were a little complex this time, because some survey data was counting those who were being paid but not at work as “employed.”

On the employment side, the committee normally views the payroll employment measure produced by the Bureau of Labor Statistics (BLS), which is based on a large survey of employers, as the most reliable comprehensive estimate of employment. This series reached a clear trough in April before rebounding strongly the next few months and then settling into a more gradual rise. However, the committee recognized that this survey was affected by special circumstances associated with the COVID-19 pandemic in early 2020. In the survey, individuals who are paid but not at work are counted as employed, even though they are not in fact working or producing. Workers on paid furlough, who became more numerous during the pandemic, thus resulted in an overcount of people working. Accordingly, the committee also considered the employment measure from the BLS household survey, which excludes individuals who are paid but on furlough. This series also shows a clear trough in April. The committee concluded that both employment series were thus consistent with a business cycle trough in April.

On the production side, “[t]he committee believes that the two most reliable comprehensive estimates of aggregate production are the quarterly estimates of real Gross Domestic Product (GDP) and of real Gross Domestic Income (GDI), both produced by the Bureau of Economic Analysis (BEA). Both series attempt to measure the same underlying concept, but GDP does so using data on expenditure while GDI does so using data on income.”

However, the committee wants to name a month for the economic peak and trough, and these GDP and GDI statistics are produced on a quarterly basis. To dig down to the monthly level:

The most comprehensive income-based monthly measure of aggregate production is real personal income less transfers, from the BEA. The deduction of transfers is necessary because transfers are included in personal income but do not arise from production. This measure reached a clear trough in April 2020. The most comprehensive expenditure-based monthly measure of aggregate production is monthly real personal consumption expenditures (PCE), published by the BEA. This series also reached a clear trough in April 2020.

The Pandemic Recession: What Was Different in Labor Markets?

It felt back in September 2008, at least to me, as if the Great Recession erupted all at once. Sure, there had been some earlier warning signs about financial markets and subprime mortgages in late 2007, but in spring 2008, the consensus view (for example, in Congressional Budget Office forecasts) was that these housing market blips were only a modest threat to the overall US economy. But by comparison with the pandemic recession, the Great Recession practically happened in slow motion.

Here’s a figure showing the monthly unemployment rate since 1970. The shaded areas show recessions, and you can see the rise in unemployment during each recession. The rise during the pandemic is much higher and faster; conversely, the decline in unemployment from its peak was also much faster–indeed, unemployment in July 2021 was down to 5.4%, which is conventionally considered to be pretty good.

But the pandemic recession also had a shocking effect on labor force participation rates. To be counted as “unemployed,” you need to be “in the labor force,” which means that you either have a job or are looking for a job. If you aren’t looking for a job, you are “out of the labor force” and thus are not counted as “unemployed.” There are good reasons for this distinction about being in and out of the labor force: in conventional times, it wouldn’t make sense to count, say, retirees or parents who are voluntarily staying home and looking after children as “unemployed,” because they aren’t looking for work. (Over time, the big inverted-U shape of labor force participation largely reflects the entry of women into the (paid) labor force, which topped out in the late 1990s, and then a gradual decline for both men and women since then.) But the pandemic recession is not a conventional time, and the sharp drop in labor force participation–together with what is so far only a very partial recovery–raises the likelihood that at least some of these people do want to get jobs in the future, when the time is right.

For completeness, here’s a figure showing the rate of real (that is, adjusted for inflation) GDP growth, measured as the change from 12 months earlier (with data through the second quarter of 2021). Again, the pattern of a remarkably sharp drop and then a sharp recovery is apparent.

In the Summer 2021 issue of the Journal of Economic Perspectives, Stefania Albanesi and Jiyeon Kim take a deeper dive into some aspects of the pandemic recession and the US labor market in “Effects of the COVID-19 Recession on the US Labor Market: Occupation, Family, and Gender.” (Full disclosure: I’ve been the Managing Editor of the JEP since the first issue in 1987.) They are focused in particular on what happened in 2020. But the evolution of the labor market at that time also suggests some of the future challenges. Here are a few themes that emerge.

In previous recent recessions, job losses for men tended to be greater than those for women. Indeed, married women usually increase their efforts in the (paid) labor market during recessions, which can be thought of as a way in which families adjust to the risk of income loss. But in the pandemic recession, women were more likely to lose jobs than men. For some discussion of this theme in earlier recessions in JEP, see Hilary Hoynes, Douglas L. Miller, and Jessamyn Schaller. 2012. “Who Suffers during Recessions?” Journal of Economic Perspectives, 26 (3): 27-48.

The types of occupations where jobs were lost were different in the pandemic recession. For example, back in the Great Recession there was a large loss of construction jobs, which disproportionately affected men. For some discussion of the job losses for men in the Great Recession in JEP, see Kerwin Kofi Charles, Erik Hurst, and Matthew J. Notowidigdo. 2016. “The Masking of the Decline in Manufacturing Employment by the Housing Bubble.” Journal of Economic Perspectives, 30 (2): 179-200.

Here’s a sample of the discussion from Albanesi and Kim:

During 2020, women—especially those with children—experienced a substantial reduction in employment compared to men, contrary to the pattern that prevailed in previous recessions. Both labor demand and supply factors likely contributed to this behavior. Women are more likely to be employed in service-providing industries and service occupations. These tend to be less cyclical compared to goods-producing industries and production occupations that employ a larger share of men, and Albanesi and Şahin (2018) show that this accounts for most of the difference in the loss of employment during recessions since 1990. … However, during the COVID-19 pandemic, infection risk was most severe in the service sector, leading to a large reduction in demand for services, due to government-imposed mitigation measures and customer response to infection risk. The overrepresentation of women in service jobs likely accounts for a sizable fraction of their decline in employment relative to men.

Another unique factor associated with the pandemic recession was the increased childcare needs associated with the disruptions to school activities, which may have contributed to a reduction in labor supply of parents. Why was it mothers in particular who responded to the lack of predictable in-person schooling activities in households where fathers were also present? Gender norms likely played a role. But from the perspective of an economic model of the family, this response should also be driven by differences in the opportunity cost as measured by wages. In the United States and other advanced economies, there is a substantial “child penalty” that reduces wages for women when, and even before, they become mothers and throughout the course of their lifetime. The penalty is driven by a combination of occupational choices, labor supply on the extensive and intensive margin, that begin well before women have children (Kleven, Landais, and Søgaard 2019; Adda, Dustmann, and Stevens 2017). … In a recent sample of such work, Cortes and Pan (2020) estimate that the long-run child penalty—three years or more after having the first child—for US mothers is 39 percent, and they also find that child-related penalties account for two-thirds of the overall gender wage gap in the last decade. Given the child penalty, most working mothers at the start of the pandemic were likely to be earning less than their partners, and for those couples the optimal response to the increased child supervision needs was for mothers to reduce labor supply.

What do these patterns imply for the prospects of a more complete labor market recovery?

  1. The pandemic-related decline in service jobs has also offered a strong incentive to push harder toward automating such jobs where this is possible. As a result, the labor market recovery from the pandemic will not be as simple as employers just restoring the previous jobs as demand increases.
  2. The question of parents, day-care, and schools seems likely to remain fraught into this next school year, which will affect the ability and willingness of parents to work.
  3. The sudden shift to home-based work cuts in several directions. On one hand, the greater availability of working from home may benefit certain workers, and parents in particular, by offering more flexibility. On the other hand, one can imagine a two-tier labor market emerging, where the jobs that are viewed by employers as of central importance happen with a large component of personal interaction at an office, and the jobs that are viewed as peripheral, using short-term contracts, happen at home.

The US labor market is not just recovering from the pandemic recession, but along dimensions of occupation, family, and gender, it may also be reshaping itself in ways that are very much still evolving.

I should add that the Summer 2021 issue of JEP includes two other articles about the pandemic recession.

Marcella Alsan, Amitabh Chandra, and Kosali Simon discuss “The Great Unequalizer: Initial Health Effects of COVID-19 in the United States” (Journal of Economic Perspectives, 35:3, 25-46). Everyone knows that the pandemic hit the elderly harder. These authors point out that if you look at “excess deaths” by age group, the pandemic tended to hit hardest among those who were already disadvantaged–which is a common pattern in past pandemics, too.

Joseph Vavra writes about “Tracking the Pandemic in Real Time: Administrative Micro Data in Business Cycles Enters the Spotlight” (Journal of Economic Perspectives, 35:3, 47-66). His essay focuses on how economists have been making increasing use of private-sector real-time data. He writes:

Thus, a number of economists turned to private-sector micro data to try to understand the recession while it was still unfolding: for example, data on employment patterns from the payroll processing firm ADP and the scheduling firm Homebase, data on bank accounts and credit card payments from sources like the JPMorgan Chase Institute and firms that provide financial planning services like mint.com and SaverLife, and even data on locations of cell phone users from firms like PlaceIQ and SafeGraph. The use of administrative micro data from these and other sources allowed pandemic-related research to be produced in nearly real-time and expanded the scope for analysis of individual behavior, which would be impossible using traditional aggregate data.

Is Geoengineering Research Objectionable?

Geoengineering is the idea of putting materials–say, certain aerosols–into the atmosphere to counteract the effects of carbon and other greenhouse gases. I’ve written about the technology and arguments a few times: for examples, see here and here.

But when small-scale experiments with this approach are proposed, like a recent effort in Sweden, there are often strong objections of the “slippery slope” variety: that is, there’s probably nothing especially dangerous or wrong with this particular limited experiment, but future larger-scale experiments may pose larger risks. Also, if this research leads many people to think that there is likely to be a cheap and easy techno-fix for climate change a few years down the road, they will be less likely to support near-term efforts to reduce carbon emissions. Daniel Bodansky and Andy Parker address these arguments in “Research on Solar Climate Intervention Is the Best Defense Against Moral Hazard” (Issues in Science and Technology, Summer 2021).

Their essay suggests that the case against research experiments in geoengineering is shaky on several grounds:

  1. What if the results of the experiment suggest that geoengineering is not a plausible or workable idea? Bodansky and Parker point to the example of a previous set of experiments on “ocean iron fertilization”: the idea, proposed back in 1988, was that fertilizing the ocean with iron would create large algae blooms that would draw carbon dioxide from the atmosphere, and then would carry that carbon to the bottom of the ocean as the algae died. One early researcher in the area reportedly joked, “Give me half a tanker of iron and I’ll give you another ice age.” There were a dozen small-scale field experiments (here’s a review from 2012). But the experiments suggested the approach would not be very effective and might have negative side effects. So among experts who favor a broad array of efforts to reduce carbon emissions, this particular idea is not considered relevant. To put it another way, an unresearched idea will always have a certain attraction, especially in an emergency situation. A researched idea can easily seem less attractive.
  2. When people are confronted with a discussion of geoengineering, they often become more willing to consider the other responses. Bodansky and Parker write:

[A] team at Yale University sought to test directly the moral hazard argument by assigning study participants in the United Kingdom and United States to two groups: one group was given information about climate intervention as a response to global warming; the other was given information about regulating pollution. The study’s results were remarkable. The researchers found that the group exposed to information about climate intervention was slightly more concerned about climate change risks. That is, they found evidence of a reverse moral hazard response. This research might be dismissed as an academic curiosity, but the same reverse moral hazard effect has been observed using different study methods in Germany, Sweden, the United States, and the United Kingdom.

It’s easy to imagine an underlying dynamic here. Imagine someone who is a little skeptical about the science behind climate change. When that person is confronted with a discussion of climate intervention, they start thinking, “Gee, if altering the atmosphere is under consideration, then non-carbon energy subsidies, energy efficiency efforts, and a carbon tax don’t sound so bad.”

  3. Although Bodansky and Parker don’t emphasize this theme, there are people who have been arguing for some years now that the world would soon pass a threshold where, because of carbon and other greenhouse gases in the air, the risks of climate change would become irreversibly high. For the sake of argument, let’s say that those claims and predictions aren’t just posturing and exaggeration in an attempt to stir up a more aggressive policy response, but are literally true. In other words, say that the world reaches a point (or has already reached a point?) where the clean energy/fuel efficiency/carbon tax agenda for reducing risks of climate change is already too late, and something else needs to be done. In that situation, knowing more about when, where, and how climate intervention efforts might be done, in a way that promises the most benefit for the smallest risk, might be pretty important.

Finally, I’ll just add that if those concerned about climate change want to use a slogan of “follow the science”–which I think is perhaps their strongest argument–then it’s a bad look to start arguing that certain kinds of science shouldn’t be followed.

Wealth and Income Inequality: Only a Weak Correlation?

Income refers to what is received in a certain period of time, which is why we refer to “annual income” or “weekly paycheck.” Wealth refers to the total that has been accumulated over time, which is why we refer to a “retirement account.” It feels as if there should be an intuitive connection between them: for example, greater inequality of incomes should be accompanied by greater inequality of wealth. This connection is true for the US economy. But across economies of western Europe, the correlation between income and wealth inequality is close to nonexistent. Fabian T. Pfeffer and Nora Waitkus present the data in “The Wealth Inequality of Nations” (American Sociological Review, 2021, forthcoming).

Pfeffer and Waitkus use data from the Luxembourg Income Study, which goes to considerable lengths to measure national income and wealth in a way that is comparable across countries. They look at data for the US, Canada, and a group of other mostly European countries: Austria, Australia, Finland, Germany, Greece, Italy, Luxembourg, Norway, Slovakia, Slovenia, Spain, Sweden, and the United Kingdom. They measure inequality in each country in two ways: with the Gini coefficient, a standard metric that expresses inequality on a scale from 0 to 1, and by the share of income or wealth held by the top 5% of the population. Here is one of their figures:

As you see, the US economy is in the upper right corner of both figures: that is, the US is high on inequality of income and wealth. But what’s perhaps surprising is that if you leave the US out of the comparison, there doesn’t seem to be any correlation between inequality of income and wealth. The dashed line shows the correlation between the two types of inequality with US data included; the solid line shows the lack of a correlation (that is, a flatter line) with the US data excluded.

For example, consider the top panel. Sweden and Norway are not far behind the US in wealth inequality (horizontal axis), but rank low in income inequality (vertical axis). Conversely, Italy and Spain are not far behind the US in income inequality, but much lower in wealth inequality.
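To see how a single outlying observation can manufacture an apparent correlation like the dashed-versus-solid-line contrast, here is a small illustration. The numbers are made up for the sketch (they are not the Pfeffer-Waitkus estimates): a cloud of hypothetical (income Gini, wealth Gini) pairs with essentially no relationship, plus one high-high point playing the role of the US.

```python
# Sketch: one outlier can create an apparent correlation.
# All numbers below are hypothetical, chosen only to mimic the
# pattern in the figure: a flat cloud plus a high-high outlier.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical (income Gini, wealth Gini) pairs with no relationship.
cloud = [(0.25, 0.70), (0.27, 0.62), (0.30, 0.80),
         (0.33, 0.60), (0.34, 0.75), (0.35, 0.65)]
outlier = (0.39, 0.86)  # the "US-like" high-high point

xs, ys = zip(*cloud)
print(round(pearson(xs, ys), 2))       # near zero without the outlier
xs2, ys2 = zip(*(cloud + [outlier]))
print(round(pearson(xs2, ys2), 2))     # noticeably positive with it
```

The same few points produce a flat fitted line on their own and a clearly upward-sloping line once the outlier is added, which is the pattern the authors' figure shows.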

The authors summarize: “[I]nternational differences in income inequality tell us close to nothing about international differences in wealth inequality. In fact, many countries that we customarily describe as comparatively egalitarian using income-based comparisons (e.g., Scandinavian countries) can be classified as anything but in terms of their levels of wealth inequality. Many countries that are similarly unequal in terms of income (e.g., Germany and Greece) differ greatly in terms of their level of wealth inequality (with Germany displaying very high levels).”
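For concreteness, the two summary measures used in this comparison–the Gini coefficient and the top-5% share–can be sketched in a few lines. This toy version ignores the survey weights and equivalence scales that the Luxembourg Income Study actually applies, and the household values are hypothetical.

```python
# Toy sketch of the two inequality measures discussed above.
# Real LIS-based estimates apply survey weights; this does not.

def gini(values):
    """Gini coefficient on a 0-to-1 scale (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Standard rank-based formula on the sorted values.
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_sum) / (n * total) - (n + 1) / n

def top_share(values, fraction=0.05):
    """Share of the total held by the top `fraction` of units."""
    xs = sorted(values, reverse=True)
    k = max(1, round(len(xs) * fraction))
    return sum(xs[:k]) / sum(xs)

# Ten hypothetical household incomes (thousands):
incomes = [20, 30, 40, 50, 60, 80, 100, 150, 250, 600]
print(round(gini(incomes), 3))           # → 0.542
print(round(top_share(incomes, 0.10), 3))  # top household holds → 0.435
```

The same two functions applied to net-worth data instead of income data give the wealth-inequality measures, which is all that changes between the two panels of the figure.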

What lies behind this difference? The authors emphasize that they don’t have a full answer, but as guidance to an answer they break down wealth into categories like housing wealth, financial assets, non-housing real assets (which can include personally-owned business assets), debt, and so on. The breakdown suggests that inequality of housing assets plays a crucial role:

[T]he overall distribution of housing equity, of which the prevalence of homeownership is just one aspect, is the central element accounting for overall wealth inequality. A country’s distribution of housing equity explains its overall level of wealth inequality and concentration to a substantial degree, including both the outlying position of the United States and the overall variation across many different countries. This is not to say the strong concentration of financial assets and business equity at the top of the wealth distribution in most countries is unimportant. In fact, a focus on financial assets and business equity is likely central to understanding elite closure and the accelerating wealth accumulation of the top 1 percent …

Cross-national differences in income inequality do not predict cross-national differences in wealth inequality, because the latter are most centrally driven by housing equity. In turn, the distribution of housing equity, we argue, is crucially determined by financialization and housing market dynamics, that is, in institutional spheres outside the labor market and the classical realms of the welfare state.

Here’s a final caveat about this kind of work. The authors write: “Our analysis, in line with most other wealth research, applies a definition of net worth that does not include public pensions nor most other forms of employer-provided pensions.” Thus, if I work at a job where the employer promises a pension, that isn’t part of my “wealth.” However, if I work at a job where my employer helps contribute to a retirement account which will pay for my living expenses after retirement, then that retirement account is part of my “wealth.” There are sensible reasons for this distinction–for example, my personal account can be left as part of an inheritance to the next generation–but it’s still worth keeping in mind. After all, if employer-promised pensions should be considered part of a person’s wealth, then what about government-promised old-age pensions, like Social Security in the United States?

Rethinking the Great Housing Bust of 2007-8

There’s a standard story about the underlying causes of the Great Recession of 2007-9. I’m sure I’ve told it myself a time or two. It starts with excessively easy lending for home mortgages–including the so-called “sub-prime” loans–aided and abetted by financial engineering that repackaged these loans as “collateralized debt obligations” and sold them to investors, including banks. The surge of mortgage lending helped to stimulate home prices to unrealistically large annual increases. But reality eventually catches up. Home prices drop from their unrealistic highs, borrowers are unable to repay their mortgages, the financial sector seizes up, and recession is unleashed.

This story isn’t exactly wrong, but the path of housing prices since 2008 suggests that it isn’t completely right, either. Because remember those home prices that were “unrealistically” high in 2006 and then fell back to earth in 2007-8? Those housing prices have since rebounded back to where they were before the Great Recession.

The figure shows several indexes of housing prices. I will just say briefly (before the light of interest in the eyes of readers begins to fade) that there’s an interesting problem in how to measure home prices over time. If one just looks at the prices of recent sales, for example, the index will tend to overweight newer, more expensive homes. Ideally, you want to track the price of homes of equivalent quality, which is done by using a “repeat sales” method–that is, looking at the price of a given home that is sold (or whose mortgage is refinanced) more than once, so that you can calculate the rise in the price of a home of unchanged quality over time. The housing price index then combines the data from these repeat sales of individual homes. Both the “All Transactions” indexes below from the Federal Housing Finance Agency and the Case-Shiller index now published by S&P CoreLogic use versions of the repeat-sales approach, with some differences in sample size and aggregation method.
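As a rough illustration of the repeat-sales logic, here is a deliberately simplified sketch. The real FHFA and Case-Shiller indexes regress log price changes on time dummies over sale pairs spanning arbitrary intervals; this toy version only handles pairs of sales in consecutive periods, and the sale prices are invented.

```python
import math

# Simplified sketch of a repeat-sales house price index. Each record
# is (price at period t0, t0, price at period t0+1) for the SAME home,
# so quality is held constant. Each period's index growth is the
# geometric mean of that period's repeat-sale price ratios.

def repeat_sales_index(pairs, n_periods):
    """Return an index starting at 100, one value per period."""
    index = [100.0]
    for t in range(1, n_periods):
        log_ratios = [math.log(p1 / p0)
                      for p0, t0, p1 in pairs if t0 == t - 1]
        growth = math.exp(sum(log_ratios) / len(log_ratios))
        index.append(index[-1] * growth)
    return index

# Hypothetical sales of individual homes (same home sold twice):
sales = [
    (200_000, 0, 210_000),   # +5% between periods 0 and 1
    (300_000, 0, 318_000),   # +6%
    (150_000, 1, 162_000),   # +8% between periods 1 and 2
    (400_000, 1, 424_000),   # +6%
]
print([round(v, 1) for v in repeat_sales_index(sales, 3)])
# → [100.0, 105.5, 112.9]
```

Because only within-home price changes enter the calculation, a wave of new luxury construction cannot push the index up by itself, which is exactly the composition problem the method is designed to avoid.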

Using 20-20 hindsight, it looks as if there has been an overall rise in home prices since the mid-1990s, with a speed bump in the middle of the rise. The blue and red lines show data for the US economy as a whole; the purple line focuses on California, where the housing price rise was faster, the fall was larger, and the ensuing rise has been faster, too. It looks rather as if the higher prices of homes back circa 2006 were in fact not grossly unreasonable when viewed from the perspective of a decade or so later.

How does one make sense of this pattern? Gabriel Chodorow-Reich, Adam M. Guren, and Timothy J. McQuade offer an analysis in “The 2000s Housing Cycle with 2020 Hindsight: A Neo-Kindlebergian View” (NBER Working Paper 29140, August 2021). “Kindlebergian” is a grotesque coinage referring to the work of Charles Kindleberger, whose 1978 historical overview Manias, Panics, and Crashes: A History of Financial Crises is a touchstone for anyone trying to understand cycles of boom and bust. This research paper has the normal dose of mathematical and statistical tools, so I can’t recommend it for general readers. But the conclusion lays out the underlying logic of how a reasonable and understandable rise in house values, based on fundamental factors like local income, amenities, and supply factors, can for a time get ahead of itself. That is, prices rise faster for a time than makes sense based on the underlying fundamentals, but then after a correction, prices move to a new equilibrium. They write:

We revisit the 2000s housing cycle with “2020 hindsight.” At the city level, the areas with the largest price increases during the housing boom from 1997 to 2006 had the largest busts from 2006 to 2012 but also the fastest growth after the trough, and as a result have had the largest price appreciation over the full cycle. We present a standard spatial equilibrium framework of house price growth determined by local income, amenities, and supply determinants and show this framework fits the cross-section of city house price growth between 1997 and 2019. The implied long-run fundamental is correlated not only with long-run price growth but also with a strong boom-bust-rebound pattern.

Our neo-Kindlebergian interpretation emphasizes the role of economic fundamentals in setting off this asset price cycle. In our model, the boom results from over-optimism about an increase in the “dividend” growth rate, the bust ensues when beliefs of home buyers and lenders correct, exacerbated by a price-foreclosure spiral that pushes prices below their full-information level, and eventually a rebound emerges as the economy converges to a price path commensurate with fundamental growth. We also acknowledge other features of the episode, including changes in credit supply and speculation, but conclude that these forces cannot substitute for the role of fundamentals as a driving force. … Our findings suggest that while policy may want to temper over-optimism and aggressively mitigate foreclosures, it is also important not to suffocate fundamentally-driven growth. Of course hindsight is 20-20; distinguishing fundamental growth from over-optimism in real time rather than after observing a full boom-bust-rebound poses a formidable task.

Of course, demonstrating that it is possible to construct a model reflecting the pattern of home prices in the last 25 years or so does not prove that the particular model used is the correct one. It seems to me that the rock-bottom interest rates since the Great Recession have also played a role in keeping housing prices high. But that said, those who averred that US housing prices were ridiculously and unsustainably high circa 2006 were apparently correct about the short-run, but not the medium-run.

Interview with Ayşegül Şahin: Quits and Start-ups

David A. Price has an interview with “Ayşegül Şahin: On wage growth, labor’s share of income, and the gender unemployment gap” (Econ Focus: Federal Reserve Bank of Richmond, Second/Third Quarter 2021, pp. 18-22). I had not known, for example, that Şahin had completed the coursework for a doctorate in electrical and electronics engineering when she first encountered economics, and decided instead to change over and pursue a different PhD. Here are two of the comments that caught my eye, but there’s much more in the interview itself:

Differences in the labor market recovery after the Great Recession and the pandemic recession

What was striking about the Great Recession was its persistence. Everybody kept saying at the time that inflation is around the corner, the labor market is getting tighter, but it took a very long time for the labor market to heal. We are not seeing that this time. This was a very different shock. It was sharp, but it was transitory compared to the Great Recession. So the effect was great, but the recovery has been faster as well. I think that’s the main difference. Another big difference is that the Great Recession was a big shock to the construction sector, and we are seeing the opposite now. We’ve been spending more time at our houses and people want to improve their houses and they want bigger houses. …

But the biggest difference is the persistence. After the Great Recession, it took quits rates five or six years to recover. Today, the quits rate is already back to where it started from before the pandemic hit … The quits rate is the number of quits during the entire month as a share of total employment. The quits rate was in Janet Yellen’s dashboard when she was the chair, actually, so lots of people started paying attention to it. … But when there’s a recession, quits go down because people become more risk averse. They don’t want to risk unemployment. So if you don’t like your boss or you don’t like your career, you just say, “OK, I’d better wait a little bit more.” During the Great Recession, this aversion to quitting lasted for a long time. As a result, people were stuck in jobs that they were not necessarily happy about or they were not very productive at. But in this recession, quits rates bounced back quickly. One reason is because there are a lot of job openings; the second is that people want to go back and find jobs that they are better matched at.
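The quits rate Şahin describes is a simple ratio, and can be written out directly. A minimal sketch, with hypothetical numbers rather than actual JOLTS figures:

```python
# Quits rate as defined above: the number of quits during the month
# as a share of total employment, conventionally quoted as a percent.
def quits_rate(quits, employment):
    return 100 * quits / employment

# Hypothetical month: 4.3 million quits out of 146 million employed.
print(round(quits_rate(4_300_000, 146_000_000), 1))  # → 2.9
```

A rate that recovers to its pre-recession level quickly, as Şahin notes happened after the pandemic, signals that workers are again confident enough to leave jobs voluntarily.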

The Long-Term Decline in US Start-up Rates

Startups are important for various reasons. First of all, they are important areas of job creation and productivity growth. I have worked on this in the last five or six years, and what we have found is that the declining startup rate is a consequence of the declining growth rate of the labor force in the U.S. economy. … With the declining labor force growth rate, we also started seeing a decline in the startup rate. You can think of the startup rate as the birth rate of firms.

What happens when the birth rate goes into decline is that the population gets older after a while. The same thing has happened with U.S. firms. What does it mean when more firms are older? Older firms are more stable, but they are also slower. They create fewer jobs, which accounts for part of the decline in job creation. An economy like this is more stable — the unemployment rate tends to be lower — but it also has lower productivity growth. That accounts for a lot of trends we have been seeing in the U.S. economy. … [W]hen you look at different sectors and different locations, as we did — we looked at around 10,000 labor markets — you see a decline in startups in more than 90 percent of them. The point that we are making is that there seems to be a common factor affecting almost all the markets in the U.S. economy. …

[I]f you look at the startup rate of the manufacturing sector in 1980, you could have already predicted that this sector’s employment was going to decline over time. That’s because its employment share was way higher than its startup employment share. The entry or lack of entry of startups into a sector gives you information about its condition before you see existing firms exiting the sector. The startup activity that is happening now is another sign of reallocation. Where the startups are entering will be informative in terms of where the economy is going in the near future.

Some Snapshots of US Income Inequality

The Congressional Budget Office has published “The Distribution of Household Income, 2018” (August 2021). It takes a couple of years to pull this data together in a reliable way. The report is full of data and figures about inequality of income over time, together with what the patterns of inequality would look like with adjustments for non-market income (like Social Security), taxes paid, and means-tested benefits received. Here are a few of the images that caught my eye.

A variety of the figures confirm the well-known fact that inequality of US incomes has expanded in recent years, with the biggest gains at the very top of the income distribution. For example, the upper figure shows average income in dollars for the top quintile (fifth) compared to the rest of the income distribution. The second panel shows cumulative growth over time as a percentage amount. These calculations look only at income, without taking taxes or transfer payments into account.

The next figure breaks down the top fifth into smaller categories, and shows cumulative income growth for these subgroups. As you can see, cumulative income growth (again, not counting taxes or transfers) has been fastest at the very top, for the top 0.01 percent of the income distribution.

If we take taxes into account, an interesting pattern is that average federal tax rates for the top 1% and the top fifth haven’t changed much since the 1990s. (This calculation combines all federal taxes paid, not just income tax.) However, average federal tax rates have fallen during that time for the rest of the income distribution, especially the bottom quintile. Thus, the share of taxes paid by those with higher income has been rising over time.

What if we now focus on income after taxes and transfer payments? Here, an interesting pattern emerges in the bottom panel of the next figure: the percentage growth in income for the bottom fifth is actually fairly close to that for the top fifth: it’s the middle three-fifths that has slower income growth by this measure. One underlying reason, noted above, is that taxes have fallen by more for the bottom fifth. Another reason is that the rising cost of health care means that Medicaid payments for the bottom fifth have risen.

Finally, here’s a measure of how the inequality of the US income distribution has been shifting using the Gini coefficient, a standard measure used in these comparisons.

The top line shows rising income inequality in market incomes. The second line shows lower inequality, but still rising, if one includes nonmarket sources of income like Social Security and Medicare. The third line adds means-tested benefits like food stamps, Medicaid, welfare payments, and so on. These reduce inequality further, and you will notice that with these adjustments, the rise in inequality is to some extent flattened out. The bottom line is income after taxes and transfer payments. By this measure, it’s striking to notice, the rise in inequality since about 1990 has been almost entirely leveled out. The Gini coefficient for this measure was .437 in the US economy in 2018; for comparison, it was .435 back in 2000 and .425 in 1986. In other words, the rising inequality in market incomes in the last few decades has been mostly offset by changes in federal government tax and transfer payments.
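As a reminder of what the Gini coefficient actually measures, here is a minimal sketch of how one is computed from grouped income shares via the Lorenz curve. The quintile shares below are purely illustrative numbers of my own, not the CBO’s data:

```python
# Gini coefficient from grouped income shares, via the Lorenz curve.
# The shares used below are illustrative only, not CBO figures.

def gini_from_shares(shares):
    """Gini = 1 - 2 * (area under the Lorenz curve), with the area
    computed by the trapezoid rule over equal-sized population groups.
    (Grouping understates the true Gini slightly, since it assumes
    perfect equality within each group.)"""
    total = sum(shares)
    n = len(shares)
    cum = 0.0
    area = 0.0
    for s in shares:
        prev = cum
        cum += s / total
        area += (prev + cum) / 2 * (1 / n)  # one trapezoid slice
    return 1 - 2 * area

# Hypothetical quintile shares of income, poorest to richest:
equal = [0.20, 0.20, 0.20, 0.20, 0.20]
skewed = [0.05, 0.10, 0.15, 0.25, 0.45]

print(gini_from_shares(equal))   # 0.0: perfect equality
print(gini_from_shares(skewed))  # about 0.38
```

The point of the exercise is just that a single number like .437 compresses the whole Lorenz curve, which is why the quintile-by-quintile figures above can tell a different story than the Gini alone.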

It’s worth noting that the Gini coefficient measure, while very commonly used, has all the standard problems that occur when you try to boil down the entire income distribution to a single number. In this case, in particular, remember the figure above showing the rise for the lowest quintile after taxes and transfers were taken into account. Thus, one way to think about the shifting inequality of the income distribution over time is that the top fifth and the bottom fifth have grown by more than the middle. Overall, this pattern means that the distribution of income has not become more unequal. But if you’re in the middle three-fifths, you haven’t been keeping up.

Whither Battery Power?

One possible clean energy agenda is “electrify everything,” with the idea being to focus on carbon-free methods of generating electricity. However, a number of carbon-free methods, like solar and wind (and even hydro-power at some times and places), have the problem that the power generated can ebb and flow. (Nuclear and geothermal power are examples of carbon-free electricity that is not intermittent.) Are there reliable ways of storing sufficient electricity for those times–which can easily be days and might even be weeks–when the sun doesn’t shine and the wind doesn’t blow? The energy storage question may well determine the workability of the “electrify everything” agenda.

One obvious answer is to store electricity in rechargeable lithium-ion batteries, but doing this at sufficient scale will require dramatic improvements in the price and capacity of batteries. Micah S. Ziegler and Jessika E. Trancik discuss technological progress in this area in “Re-examining rates of lithium-ion battery technology improvement and cost decline” (Energy and Environmental Science, 2021, issue 4, pp. 1635-1651). They write:

Energy storage technologies have the potential to enable greenhouse gas emissions reductions via electrification of transportation systems and integration of intermittent renewable energy resources into the electricity grid. Lithium-ion technologies offer one possible option, but their costs remain high relative to cost-competitiveness targets, which could hinder these technologies’ broader adoption … However, their deployment is still relatively limited, and their broader adoption will depend on their potential for cost reduction and performance improvement. Understanding this potential can inform critical climate change mitigation strategies, including public policies and technology development efforts. … Here we systematically collect, harmonize, and combine various data series of price, market size, research and development, and performance of lithium-ion technologies. We then develop representative series for these measures, while separating cylindrical cells from all types of cells. For both, we find that the real price of lithium-ion cells, scaled by their energy capacity, has declined by about 97% since their commercial introduction in 1991. We estimate that between 1992 and 2016, real price per energy capacity declined 13% per year for both all types of cells and cylindrical cells, and upon a doubling of cumulative market size, decreased 20% for all types of cells and 24% for cylindrical cells.
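As a back-of-the-envelope check on those numbers (my own arithmetic from the rates quoted above, not the authors’ data), a 13% annual decline compounded over 1992-2016 does indeed work out to a cumulative drop of roughly 97%:

```python
# Back-of-the-envelope check on the lithium-ion cost-decline figures
# quoted above. These are my own calculations from the stated rates,
# not Ziegler and Trancik's underlying data.

years = 2016 - 1992          # 24 years
annual_decline = 0.13        # 13% per year, all cell types

remaining = (1 - annual_decline) ** years
print(f"Price remaining after {years} years: {remaining:.1%}")  # ~3.5%
print(f"Cumulative decline: {1 - remaining:.1%}")               # ~96.5%

# Experience-curve view: a 20% price decline per doubling of
# cumulative market size means that after k doublings the price
# is 0.8**k of the original.
per_doubling = 0.20
for doublings in (5, 10):
    share = (1 - per_doubling) ** doublings
    print(f"After {doublings} doublings: {share:.1%} of the original price")
```

The two framings (percent per year vs. percent per doubling of market size) are linked: when the market doubles roughly every year and a half, the two rates describe the same curve.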

For me, several lessons emerge from their essay: 1) Technological progress in rechargeable lithium-ion batteries has been larger than I realized; 2) It’s not nearly enough, as yet, to make heavy reliance on solar and wind energy possible; and 3) We all need to start dividing up our thinking about batteries, in the sense that the batteries that power portable electronics may look quite a bit different from, say, batteries installed to store energy for homes, neighborhoods, or factories.

For example, in California there is an enormous lithium-ion battery facility mostly completed, which “will be able to discharge enough electricity to power roughly 300,000 California homes for four hours.” Another facility is planning to use a group of Tesla batteries so that there will be enough electricity storage to power all homes in San Francisco for six hours. Similar facilities are under consideration or actually being built around the world. These kinds of facilities can serve a useful purpose in avoiding electricity outages, and dealing with situations where power demand spikes above supply for a few hours. They can replace the need for backup generating capacity that would otherwise be called on in these situations. But if you want to rely heavily on wind and solar power, and to be very sure that the power won’t go off for an extended time, these sorts of industrial-sized battery facilities are only a start.

However, another issue with batteries is that manufacturing them requires a surge of mining and extensive use of minerals, and then recycling or disposing of the giant industrial-size batteries requires additional efforts. Thus, the debate over storing energy in batteries often drifts into a related debate about other forms of storing electricity.

For example, one approach is pumped-hydropower storage. The basic idea here is to use carbon-free energy to pump water from a lower river or lake so that it is up behind a dam–where it can then run down and generate electricity. There are now 43 of these projects in the US. The International Hydropower Association writes: “Pumped storage hydropower is the world’s largest battery technology, accounting for over 94 per cent of installed energy storage capacity, well ahead of lithium-ion and other battery types.”

What if you aren’t near hydropower? In that case, I’m reading a little more these days about “gravity-based batteries.” The idea here is that if you can store energy by using carbon-free electricity to pump water up above the dam, so that it will run a turbine when gravity pulls the water back down, why not use the carbon-free electricity to lift up a heavy weight, and then let the weight drive a turbine as it comes back down? The gravity-battery technology is quite immature. But for the sake of argument, imagine the possibility of using an abandoned mineshaft that runs straight down for a few kilometers, and retrofitting the mine as a gravity battery.

Cathleen O’Grady offers an overview of gravity batteries in “Gravity-based batteries try to beat their chemical cousins with winches, weights, and mine shafts” (Science, April 22, 2021). I have no clear idea whether gravity batteries will, at the end of the day, store a meaningful amount of power. But some of the early studies suggest that they could be cost-competitive with industrial-battery technology. Moreover, hoisting and lowering big weights is a much cleaner environmental proposition than manufacturing and disposing of batteries.

For the sake of completeness, I should add a mention of hydrogen fuel here. If the hydrogen is generated by a non-carbon technology, then it offers yet another way of storing energy for later use. I’ve written about possibilities and issues with hydrogen power before, and won’t repeat it here. But it’s perhaps worth saying that while there’s been a lot of talk about hydrogen fuel cells as a source of energy for vehicles, my sense is that the current technology may be better-suited for applications that provide heating, cooling, and electricity for homes and buildings.

Global Food Production: Too Little and Too Much

On one hand, it seems terribly important that global food supply increase dramatically in the next few decades, to feed the hungry around the world as global population rises. On another hand, obesity is a huge and ongoing health problem around the world, even in many countries that do not have high income levels. And on yet a third hand, food production around the world is often a major contributor to environmental degradation, being related to issues including water pollution, deforestation, pesticides that linger in the ecosystem, and release of greenhouse gases.

A desirable path for the future of global food production will take all of these into account. The Credit Suisse Research Institute takes a shot at reconciling them in “The Global Food System: Identifying sustainable solutions” (June 2021). Let’s start with a description of the competing priorities:

About 9% of global population, consisting of about 700 million people, is undernourished.

About 40% of the world population is overweight.

When it comes to environmental issues, the report notes:

Food production and consumption already contribute well over 20% to global greenhouse gas emissions and account for more than 90% of the world’s freshwater consumption. After reviewing the environmental footprint of all major food groups, we conclude that the current situation is likely to worsen significantly unless action is taken. The likely growth in the world’s population to around ten billion people by 2050 coupled with a further shift in diets, especially across the growing emerging middle class, could increase emissions by a further 46%, while demand for agricultural land could increase by 49%. … [T]he growth in agricultural land seen to date has come at the cost of greater deforestation. Data from Globalforestwatch suggest that annual tree cover loss has increased from around 14 million hectares in 2001 to around 25 million hectares in 2019 … . The FAO indicates that some 420 million hectares of forest has been lost since 1990, which is the same as roughly eight times the size of France or 50% of the USA. Deforestation not only releases stored carbon dioxide, but also reduces the ability to capture future carbon releases. Furthermore, it contributes to a loss in biodiversity and puts pressure on soil quality, which in turn is seen as contributing to the risk of drought and floods.

What’s the pathway through this maze of concerns? Start with the environmental issue. Here’s a table that tries to compare environmental costs of a variety of foods. I’m sure one can quarrel with the details, but the broad nature of the overall rankings is clear. Vegetables and fruits in general have lower environmental effects. Meat and dairy, and beef in particular, have the highest environmental effects.

As it turns out, many of the human health issues of obesity are related to consumption of meat and dairy, and more broadly to limited consumption of vegetables and fruits. And when it comes to expanding calorie outputs for a growing world population, it’s probably efficient to do so by expanding non-meat alternatives.

Along with a shift away from meat in general and beef in particular, there are some other useful steps to be taken. One is to take food waste seriously as a policy concern:

[M]ore than 30% of food produced is either lost or wasted. By way of example, around USD 408 billion of food produced in 2019 went unsold or uneaten. The FAO estimates the economic, environmental and social costs associated with food waste at USD 2.6 trillion. Eliminating food waste in the United States and Europe alone would add 10% to the world’s available food supply. Solutions need to focus across the entire supply chain as about 50% of food is lost in the production and handling phase, while 45% is wasted in the distribution and consumption phase.

The agenda for reducing food waste often focuses on issues like improved storage, faster transportation, and recognizing alternative uses that will give a stable shelf-life to food that would otherwise have spoiled. To cite one of a number of examples from the report: “Baldor, a major food processor that makes products like `baby’ carrots (i.e. regular carrots carved into tiny pieces), turns fruit and vegetable scraps into multiple products: some fruit scraps go to juice companies, vegetable scraps go to chefs for use in stocks, a mix of vegetables are dried and crushed into a flour that can be used in place of wheat, and other scraps are used in meal kits that include veggie noodles.”

Another set of options involve bringing the technology revolution to farming. For example, the report suggests that “[p]recision farming through the use of artificial intelligence, drones, autonomous machinery and smart irrigation systems could yield productivity increases of 70% by 2050.”

Another option is “vertical farming,” “which is an indoor approach consisting of controlling all environmental factors such as light, humidity and temperature, with the aim of producing more food by harvesting crops vertically. This concept enables the cultivation of various crop types ranging from leafy greens and tomatoes to herbs and flowers, as well as microgreens, and fulfills environmental, social and economic goals. … According to the Ellen MacArthur Foundation, it is possible that, by 2050, 80% of the food consumed in urban areas could be produced using vertical farming technologies.” In fact, the Netherlands is the world’s #2 exporter of agricultural products by value thanks to its embrace of these kinds of technologies.

In thinking about a shift away from traditional meat, several options have been getting a lot of attention. “[L]ivestock provides just 18% of calories consumed by humans, but takes up close to 80% of global farmland.”

One option is “plant-based meat” products like Beyond Meat and Impossible Foods. “The reason for supporting the growth of plant-based meat is that it uses 72%–99% less water and 47%–99% less land than traditional animal-based meat. In addition, water pollution is substantially lower, whereas GHG emissions are also between 30% and 90% lower. One other aspect worth highlighting is that plant-based meat does not require the use of antibiotics, which is very common with animal-based meat production.”

Another option is “cultivated meat,” which is meat grown directly from cells. “For example, cultivated meat has a feed conversion ratio (kg in per kg out) that is more than seven times higher than that of beef cattle and almost six times higher than that of pork.” Yet another option is the use of fermentation: “Alternative proteins can also be produced through fermentation processes using microorganisms. Traditionally, fermentation has been used to make beer, wine and cheese, and the same process can be used to improve the flavor of plant ingredients. … Biomass fermentation has the clear advantage of speed. The doubling time of the microorganisms used is hours compared to months or longer for animals.”

When thinking about these alternative products, I always add two thoughts in my own mind. First, it seems likely that the potential productivity gains for these products are high, which suggests that it will be possible to drive down their price dramatically. If a plant-based “hamburger” at the fast-food drive-through was half the price of ground beef, or less than half the price, my guess is that I wouldn’t be alone in being willing to make the shift. Second, as long as we think about these products as meat substitutes, I suspect they will feel unsatisfying. But it’s relatively straightforward to tinker with the taste of plant-based products, and eventually some of these products won’t be viewed as substitutes for something else; their popularity will stand on its own.

Reports like this one often jump pretty quickly from a list of problems to discussions of government regulations and requirements, and this report is no exception. Many jurisdictions are passing taxes on sugared drinks; is a tax on meat next? I’m agnostic on much of this policy agenda, which is to say that I’ll judge the individual proposals as they come along. I suspect that the pull and push of demand and supply will bring about many of these changes, as the world moves toward feeding a few billion more people at a time of rising environmental concerns. Thus, I tend to see this report as a forecast of where we are already heading.

An Earn-and-Learn Career Path

What paths can high school students take in accumulating hard and soft skills so that they can make the transition to a career and job? The main answer in US society is “go to college.” But for a large share of high school graduates, being told that they now need to attend several more years of classes is not what they want to hear.

Historically, a common alternative career path was that companies hired young workers who had little to offer other than energy and flexibility, and then trained and promoted those workers. Unions often played a role in advocating and supporting this training, too. But in the last few decades, it seems that a lot of companies have exited the job training business. Their general sense is that young adults aren’t likely to stay with the company, so in effect, you are training them for their next employer. Instead, better just to require that new hires already have experience.

The earn-and-learn career path tries to steer between these extremes. Yes, it involves additional learning, because that’s what 21st century jobs are like, but it seeks to have that learning take place more in the workplace than in the classroom. Also, instead of paying to learn, you get paid while learning. On the other side, firms that participate in this kind of training don’t need to take on the entire responsibility and cost of doing so.

Annelies Goger sketches this framework in “Desegregating work and learning through ‘earn-and-learn’ models” (Brookings Institution, December 9, 2020). She points out the gigantic difference in public support for higher education vs. public support for an earn-and-learn approach.

The earn-and-learn programs under the public workforce system—authorized under the Workforce Innovation and Opportunity Act (WIOA)—are underused and hard to scale. Publicly funded job training options are tiny overall compared to investments in traditional public higher education or classroom-based job training. Funding for public higher education was $385 billion in 2017-18, compared to about $14 billion for employment services and training across 43 programs. The net result is that higher education is the main provider of publicly funded training for most Americans, and most of the $14 billion for employment services and training goes to services (most of which isn’t training) for special populations such as veterans and people with disabilities.

This difference in public support is even more stark when you recognize that those with a college education are likely to end up with higher average incomes during their lives, so that we are doing more to subsidize the training of the relatively high earners of the future than we are to subsidize the training of the middle- and lower-level earners.

What do the earn-and-learn programs look like in practice? Here’s a graphic:

[Figure 1]

I won’t try to go through these choices one at a time: for present purposes, the salient fact is that they are all small in size. As long-time readers know, I’m a fan of a dramatic expansion of apprenticeships (for example, here, here, here, and here). But as Goger writes:

For example, the U.S. had roughly 238,000 new registered apprentices in 2018. However, if the U.S. had the same share of new apprentices per capita as Germany, we would have 2 million new apprentices per year; if we had the same share as the United Kingdom or Switzerland, that number would be 3 million.
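The per-capita comparison is easy to reconstruct. Here is a quick sketch using a round-number US population figure of my own (an assumption, not from Goger’s article):

```python
# Rough reconstruction of the apprenticeship comparison quoted above.
# The US population figure is a round-number assumption on my part,
# not a figure from Goger's piece.

us_pop = 330_000_000
us_new_apprentices = 238_000     # new registered apprentices, 2018

# The German per-capita share is said to imply 2 million new US
# apprentices per year. That rate per 1,000 people would be:
implied_rate = 2_000_000 / us_pop
print(f"Implied new apprentices per 1,000 people: {implied_rate * 1000:.1f}")

current_rate = us_new_apprentices / us_pop
print(f"Actual US rate per 1,000 people: {current_rate * 1000:.2f}")

# How many times the current program would need to grow:
print(f"Scale-up factor: {2_000_000 / us_new_apprentices:.1f}x")
```

In other words, matching Germany’s per-capita rate would mean growing US apprenticeships by nearly an order of magnitude, which gives a sense of how far the current system is from the scale I’d like to see.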